Science.gov

Sample records for adaptive sampling approach

  1. Adaptive Sampling approach to environmental site characterization: Phase 1 demonstration

    SciTech Connect

    Floran, R.J.; Bujewski, G.E.; Johnson, R.L.

    1995-07-01

    A technology demonstration that optimizes sampling strategies and real-time data collection was carried out at the Kirtland Air Force Base (KAFB) RB-11 Radioactive Burial Site, Albuquerque, New Mexico in August 1994. The project, which was funded by the Strategic Environmental Research and Development Program (SERDP), involved the application of a geostatistics-based Adaptive Sampling methodology and software with on-site field screening of soils for radiation, organic compounds and metals. The software, known as Plume™, was developed at Argonne National Laboratory as part of the DOE/OTD-funded Mixed Waste Landfill Integrated Demonstration (MWLID). The objective of the investigation was to compare an innovative Adaptive Sampling approach that stressed real-time decision-making with a conventional site characterization, driven by the Resource Conservation and Recovery Act (RCRA), carried out by the Air Force. The latter investigation used a standard drilling and sampling plan as mandated by the Environmental Protection Agency (EPA). To make the comparison realistic, the same contractors and sampling equipment (Geoprobe® soil samplers) were used. In both investigations, soil samples were collected at several depths at numerous locations adjacent to burial trenches that contain low-level radioactive waste and animal carcasses; some trenches may also contain mixed waste. Neither study revealed the presence of contaminants appreciably above risk-based action levels, indicating that minimal to no migration has occurred away from the trenches. The combination of Adaptive Sampling with field screening achieved a level of confidence similar to that of the RCRA investigation regarding the potential migration of contaminants at the site.

  2. A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification

    SciTech Connect

    Li, Weixuan; Zhang, Dongxiao; Lin, Guang

    2015-02-25

    A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple yet flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues.
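
    The abstract above outlines an iterative loop: fit a surrogate to exact model runs, importance-sample with a Gaussian mixture proposal, and refine the surrogate where the posterior concentrates. The sketch below illustrates that loop on a 1-D toy problem; the forward model, parameter ranges, and all tuning constants are hypothetical stand-ins, not the authors' reservoir setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def forward_model(m):                       # hypothetical toy "reservoir" model
    return np.sin(3.0 * m) + m ** 2

d_obs, sigma = 0.3, 0.1                     # one observed datum and its noise level

def log_like(d_pred):                       # Gaussian measurement likelihood
    return -0.5 * ((d_pred - d_obs) / sigma) ** 2

M = rng.uniform(-2, 2, size=(8, 1))         # initial design: a few exact model runs
D = forward_model(M[:, 0])

for it in range(5):
    gp = GaussianProcessRegressor().fit(M, D)             # surrogate of the model
    cand = rng.uniform(-2, 2, size=(2000, 1))             # candidate parameter values
    w = np.exp(log_like(gp.predict(cand)))                # surrogate-based weights
    w /= w.sum()
    keep = rng.choice(len(cand), size=500, p=w)           # weighted resample
    gmm = GaussianMixture(n_components=3).fit(cand[keep]) # flexible, possibly multimodal proposal
    new, _ = gmm.sample(4)                                # refine the surrogate where
    M = np.vstack([M, new])                               # posterior mass concentrates
    D = np.append(D, forward_model(new[:, 0]))

print("proposal modes:", np.round(gmm.means_.ravel(), 2))
```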

  3. Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach

    PubMed Central

    Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei

    2016-01-01

    Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need of releasing series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
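
    As a rough illustration of the fixed-threshold idea (DSFT), here is a minimal sketch. The Laplace-mechanism histogram is standard; note that, unlike the paper's methods, this sketch compares raw snapshots when measuring drift and does not budget privacy for that comparison, so it is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_histogram(counts, epsilon):
    # Laplace mechanism: epsilon-differentially-private histogram release
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

def dsft_release(snapshots, epsilon, threshold):
    """Release a fresh private histogram only when the snapshot has drifted
    beyond `threshold` from the last released reference (DSFT-style)."""
    out, last_priv, last_ref = [], None, None
    for counts in snapshots:
        drift = None if last_ref is None else np.abs(counts - last_ref).mean()
        if drift is None or drift > threshold:
            last_priv = dp_histogram(counts, epsilon)   # spend privacy budget
            last_ref = counts.copy()
        out.append(last_priv)       # quiet snapshots reuse the previous release
    return out

snaps = [np.array([50.0, 30.0, 20.0]) + t for t in (0, 1, 2, 40)]  # slow drift, then a jump
releases = dsft_release(snaps, epsilon=0.5, threshold=5.0)
```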

  4. Adaptive Sampling Proxy Application

    2012-10-22

    ASPA is an implementation of an adaptive sampling algorithm [1-3], which is used to reduce the computational expense of computer simulations that couple disparate physical scales. The purpose of ASPA is to encapsulate the algorithms required for adaptive sampling independently from any specific application, so that alternative algorithms and programming models for exascale computers can be investigated more easily.

  5. Adaptive sampling for noisy problems

    SciTech Connect

    Cantu-Paz, E

    2004-03-26

    The usual approach to dealing with the noise present in many real-world optimization problems is to take an arbitrary number of samples of the objective function and use the sample average as an estimate of the true objective value. The number of samples is typically chosen arbitrarily and remains constant for the entire optimization process. This paper studies an adaptive sampling technique that varies the number of samples based on the uncertainty of deciding between two individuals. Experiments demonstrate the effect of adaptive sampling on the final solution quality reached by a genetic algorithm and on the computational cost required to find the solution. The results suggest that the adaptive technique can effectively eliminate the need to set the sample size a priori, although in many cases it incurs high computational costs.
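
    A minimal sketch of the core idea, sampling two individuals only until the comparison between them is statistically confident, might look as follows; the noisy objective, the Welch t-test decision rule, and all constants are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def noisy_fitness(x):
    return -x ** 2 + rng.normal(scale=0.5)   # hypothetical noisy objective

def adaptive_compare(a, b, alpha=0.05, n0=3, n_max=50):
    """Sample both individuals until a Welch t-test separates their mean
    fitness at level alpha, or until the per-individual budget n_max."""
    fa = [noisy_fitness(a) for _ in range(n0)]
    fb = [noisy_fitness(b) for _ in range(n0)]
    while len(fa) < n_max:
        _, p = stats.ttest_ind(fa, fb, equal_var=False)
        if p < alpha:                        # confident enough to decide
            break
        fa.append(noisy_fitness(a))          # still uncertain: pay for one
        fb.append(noisy_fitness(b))          # more sample of each
    return (a if np.mean(fa) > np.mean(fb) else b), len(fa)

winner, n_used = adaptive_compare(0.1, 1.0)
print(f"winner={winner}, samples per individual={n_used}")
```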

  6. A Predictive Approach to Nonparametric Inference for Adaptive Sequential Sampling of Psychophysical Experiments

    PubMed Central

    Benner, Philipp; Elze, Tobias

    2012-01-01

    We present a predictive account on adaptive sequential sampling of stimulus-response relations in psychophysical experiments. Our discussion applies to experimental situations with ordinal stimuli when there is only weak structural knowledge available such that parametric modeling is no option. By introducing a certain form of partial exchangeability, we successively develop a hierarchical Bayesian model based on a mixture of Pólya urn processes. Suitable utility measures permit us to optimize the overall experimental sampling process. We provide several measures that are either based on simple count statistics or more elaborate information theoretic quantities. The actual computation of information theoretic utilities often turns out to be infeasible. This is not the case with our sampling method, which relies on an efficient algorithm to compute exact solutions of our posterior predictions and utility measures. Finally, we demonstrate the advantages of our framework on a hypothetical sampling problem. PMID:22822269

  7. Adaptive Sampling approach to environmental site characterization at Joliet Army Ammunition Plant: Phase 2 demonstration

    SciTech Connect

    Bujewski, G.E.; Johnson, R.L.

    1996-04-01

    Adaptive sampling programs provide real opportunities to save considerable time and money when characterizing hazardous waste sites. This Strategic Environmental Research and Development Program (SERDP) project demonstrated two decision-support technologies, SitePlanner™ and Plume™, that can facilitate the design and deployment of an adaptive sampling program. A demonstration took place at Joliet Army Ammunition Plant (JAAP), and was unique in that it was tightly coupled with ongoing Army characterization work at the facility, with close scrutiny by both state and federal regulators. The demonstration was conducted in partnership with the Army Environmental Center's (AEC) Installation Restoration Program and AEC's Technology Development Program. AEC supported researchers from Tufts University who demonstrated innovative field analytical techniques for the analysis of TNT and DNT. SitePlanner™ is an object-oriented database specifically designed for site characterization that provides an effective way to compile, integrate, manage and display site characterization data as it is being generated. Plume™ uses a combination of Bayesian analysis and geostatistics to provide technical staff with the ability to quantitatively merge soft and hard information for an estimate of the extent of contamination. Plume™ provides an estimate of contamination extent, measures the uncertainty associated with the estimate, determines the value of additional sampling, and locates additional samples so that their value is maximized.

  8. Adaptive Sampling Designs.

    ERIC Educational Resources Information Center

    Flournoy, Nancy

    Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…

  9. Adaptive Peer Sampling with Newscast

    NASA Astrophysics Data System (ADS)

    Tölgyesi, Norbert; Jelasity, Márk

    The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback in a local control loop that self-tunes the protocol parameters. The proposed solution is evaluated by simulation experiments.

  10. Adaptive Sampling in Hierarchical Simulation

    SciTech Connect

    Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R

    2007-07-09

    We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions that provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
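
    The essential control flow, interpolate from nearby stored fine-scale results when possible, otherwise run the fine-scale model and grow the database, can be sketched as below. Inverse-distance weighting stands in for the moving kriging interpolant and scipy's cKDTree for the dynamic metric tree; the fine-scale model and thresholds are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def fine_scale_model(x):                     # stand-in for an expensive
    return np.sin(x[0]) * np.cos(x[1])       # finer-scale response function

class AdaptiveSampler:
    """Interpolate from stored fine-scale evaluations when enough close
    neighbors exist; otherwise run the fine-scale model and store it."""
    def __init__(self, radius=0.15, k=4):
        self.X, self.y, self.radius, self.k = [], [], radius, k

    def query(self, x):
        if len(self.X) >= self.k:
            tree = cKDTree(np.asarray(self.X))   # rebuilt per query for clarity;
            d, i = tree.query(x, k=self.k)       # real code would update it in place
            if d.max() < self.radius:            # neighbors close enough: interpolate
                w = 1.0 / (d + 1e-12)            # inverse-distance weights
                return float(np.average(np.asarray(self.y)[i], weights=w))
        y = fine_scale_model(x)                  # expensive fallback
        self.X.append(np.asarray(x))
        self.y.append(y)
        return y

sampler = AdaptiveSampler()
for x in np.random.default_rng(3).uniform(0, np.pi, size=(200, 2)):
    sampler.query(x)
print("fine-scale runs performed:", len(sampler.X), "of 200 queries")
```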

  11. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS land cover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.

  12. Adaptive approaches to biosecurity governance.

    PubMed

    Cook, David C; Liu, Shuang; Murphy, Brendan; Lonsdale, W Mark

    2010-09-01

    This article discusses institutional changes that may facilitate an adaptive approach to biosecurity risk management where governance is viewed as a multidisciplinary, interactive experiment acknowledging uncertainty. Using the principles of adaptive governance, evolved from institutional theory, we explore how the concepts of lateral information flows, incentive alignment, and policy experimentation might shape Australia's invasive species defense mechanisms. We suggest design principles for biosecurity policies emphasizing overlapping complementary response capabilities and the sharing of invasive species risks via a polycentric system of governance. PMID:20561262

  13. The Limits to Adaptation: A Systems Approach

    EPA Science Inventory

    The Limits to Adaptation: A Systems Approach. The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...

  14. Adaptive sampling program support for expedited site characterization

    SciTech Connect

    Johnson, R.

    1993-10-01

    Expedited site characterizations offer substantial savings in time and money when assessing hazardous waste sites. Key to some of these savings is the ability to adapt a sampling program to the “real-time” data generated by an expedited site characterization. This paper presents a two-prong approach to supporting adaptive sampling programs: a specialized object-oriented database/geographical information system for data fusion, management and display; and combined Bayesian/geostatistical methods for contamination extent estimation and sample location selection.

  15. Adaptive Sampling for High Throughput Data Using Similarity Measures

    SciTech Connect

    Bulaevskaya, V.; Sales, A. P.

    2015-05-06

    The need for adaptive sampling arises in the context of high throughput data because the rates of data arrival are many orders of magnitude larger than the rates at which the data can be analyzed. A very fast decision must therefore be made regarding the value of each incoming observation and its inclusion in the analysis. In this report we discuss one approach to adaptive sampling, based on the new data point’s similarity to the other data points being considered for inclusion. We present preliminary results for one real and one synthetic data set.
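
    A minimal sketch of such a similarity-gated inclusion rule follows; the Gaussian similarity measure and threshold are assumptions, since the report does not specify them here, and the linear scan over kept points would be replaced by an index at real throughputs.

```python
import numpy as np

def similarity(a, b, scale=1.0):
    # assumed Gaussian similarity; the report does not fix the measure here
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * scale ** 2))

def stream_filter(stream, keep_threshold=0.9):
    """Keep an incoming observation only if it is not too similar to the
    observations already kept -- one fast decision per arriving point."""
    kept = []
    for x in stream:
        if not kept or max(similarity(x, k) for k in kept) < keep_threshold:
            kept.append(x)
    return kept

rng = np.random.default_rng(4)
data = rng.normal(size=(2000, 3))
print(len(stream_filter(data)), "of", len(data), "points kept")
```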

  16. Feature Adaptive Sampling for Scanning Electron Microscopy.

    PubMed

    Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp

    2016-01-01

    A new method for the image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a-posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning. PMID:27150131
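
    The two-pass scheme described above is easy to prototype on synthetic data: detect high-detail regions in a fast noisy scan, re-acquire only those pixels at high dose, and low-pass filter the rest. The sketch below simulates the detector with dose-dependent Gaussian noise and uses a gradient-magnitude detail measure; both are stand-ins for the paper's actual pipeline.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
truth = np.zeros((128, 128))
truth[40:90, 40:90] = 1.0                     # synthetic specimen with edges

def scan(dwell):
    # hypothetical detector model: noise shrinks with dwell time (dose)
    return truth + rng.normal(scale=1.0 / np.sqrt(dwell), size=truth.shape)

quick = scan(dwell=1)                                     # pass 1: fast and noisy
detail = ndimage.gaussian_gradient_magnitude(quick, sigma=2)
mask = detail > np.quantile(detail, 0.8)                  # top 20% "high detail" pixels
slow = scan(dwell=10)                                     # pass 2: high dose (in the real
                                                          # method only `mask` is scanned)
final = np.where(mask, slow, ndimage.gaussian_filter(quick, sigma=1.5))
# high-detail pixels keep the high-dose data; the rest keep the denoised quick scan
```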

  17. Feature Adaptive Sampling for Scanning Electron Microscopy

    PubMed Central

    Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp

    2016-01-01

    A new method for the image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a-posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning. PMID:27150131

  18. Feature Adaptive Sampling for Scanning Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp

    2016-05-01

    A new method for the image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a-posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning.

  19. Adapting Courses to Distance Delivery: Three Approaches.

    ERIC Educational Resources Information Center

    Landis, Melodee

    1999-01-01

    Describes three approaches to adapting courses to distance delivery: the most common "dive-in" technique (little preparation other than adapting print on transparencies, practicing with technology controls, and test-running); the "chunking" approach (considering how the major "chunks" of teaching can be transported to new technologies); and the…

  20. Acquiring case adaptation knowledge: A hybrid approach

    SciTech Connect

    Leake, D.B.; Kinley, A.; Wilson, D.

    1996-12-31

    The ability of case-based reasoning (CBR) systems to apply cases to novel situations depends on their case adaptation knowledge. However, endowing CBR systems with adequate adaptation knowledge has proven to be a very difficult task. This paper describes a hybrid method for performing case adaptation, using a combination of rule-based and case-based reasoning. It shows how this approach provides a framework for acquiring flexible adaptation knowledge from experiences with autonomous adaptation and suggests its potential as a basis for acquisition of adaptation knowledge from interactive user guidance. It also presents initial experimental results examining the benefits of the approach and comparing the relative contributions of case learning and adaptation learning to reasoning performance.

  1. Sampling and surface reconstruction with adaptive-size meshes

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Chen; Goldgof, Dmitry B.

    1992-03-01

    This paper presents a new approach to sampling and surface reconstruction which uses physically based models. We introduce adaptive-size meshes which automatically update the size of the meshes as the distance between the nodes changes. We have applied the adaptive-size algorithm to three applications: (1) sampling of intensity data; (2) surface reconstruction of range data; (3) surface reconstruction of 3-D computed tomography (CT) left ventricle (LV) data. The LV data was acquired by a 3-D CT scanner; it was provided by Dr. Eric Hoffman at the University of Pennsylvania Medical School and consists of 16 volumetric (128 X 128 X 118) images taken through the heart cycle.

  2. Flight Test Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  3. A Predictive Analysis Approach to Adaptive Testing.

    ERIC Educational Resources Information Center

    Kirisci, Levent; Hsu, Tse-Chi

    The predictive analysis approach to adaptive testing originated in the idea of statistical predictive analysis suggested by J. Aitchison and I.R. Dunsmore (1975). The adaptive testing model proposed is based on a parameter-free predictive distribution. Aitchison and Dunsmore define statistical prediction analysis as the use of data obtained from an…

  4. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2016-01-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors well, even when the available patient-specific data are sparse and cover only a short time span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189

  5. Phobos Sample Return: Next Approach

    NASA Astrophysics Data System (ADS)

    Zelenyi, Lev; Martynov, Maxim; Zakharov, Alexander; Korablev, Oleg; Ivanov, Alexey; Karabadzak, George

    The Martian moons still remain a mystery after numerous studies by Mars orbiting spacecraft. Their study covers three major topics related to (1) the Solar system in general (formation and evolution, origin of planetary satellites, origin and evolution of life); (2) small bodies (captured asteroid, or remnants of Mars formation, or reaccreted Mars ejecta); (3) Mars (formation and evolution of Mars; Mars ejecta at the satellites). As reviewed by Galimov [2010], most of the above questions require sample return from the Martian moon, while some (e.g. the characterization of the organic matter) could also be answered by in situ experiments. There is the possibility to obtain a sample of Mars material by sampling Phobos: following Chappaz et al. [2012], a 200-g sample could contain 10⁻⁷ g of Mars surface material launched during the past 1 million years, or 5×10⁻⁵ g of Mars material launched during the past 10 million years, or 5×10¹⁰ individual particles from Mars, quantities suitable for accurate laboratory analyses. The studies of Phobos have been of high priority in the Russian program on planetary research for many years. The Phobos-88 mission consisted of two spacecraft (Phobos-1, Phobos-2) and aimed to approach Phobos to within 50 m for remote studies and to release small landers (long-living DAS stations). This mission implemented its program only partially, returning information about the Martian environment and atmosphere. The next project, Phobos Sample Return (Phobos-Grunt), initially planned for the early 2000s, was delayed several times owing to budget difficulties; the spacecraft failed to leave near-Earth orbit in 2011. The recovery of the science goals of this mission and the delivery of samples of Phobos to Earth remain of highest priority for the Russian scientific community. The next Phobos sample return mission, named Boomerang, was postponed following the ExoMars cooperation, but is considered next in the line of planetary exploration, suitable for launch around 2022.

  6. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785

  7. Adaptive Sampling of Time Series During Remote Exploration

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches.
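
    A self-throttling rule of the kind described, wait as long as the model's predictive uncertainty allows before spending the next measurement, can be sketched with an off-the-shelf Gaussian process. The stationary RBF kernel and the simple uncertainty-budget rule below are simplifications; the paper's contribution is precisely the nonstationary covariance that lets the cadence rise during anomalies.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def signal(t):                       # hypothetical sensor: quiet sine plus a burst
    return np.sin(0.3 * t) + (5.0 < t < 7.0) * np.sin(8.0 * t)

t_obs, y_obs, sd_budget = [0.0], [signal(0.0)], 0.35
for _ in range(20):
    gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-2))
    gp.fit(np.array(t_obs)[:, None], y_obs)
    horizon = np.linspace(t_obs[-1] + 0.05, t_obs[-1] + 3.0, 60)  # candidate times
    _, sd = gp.predict(horizon[:, None], return_std=True)
    over = np.nonzero(sd > sd_budget)[0]     # throttle: wait as long as the
    t_next = horizon[over[0]] if over.size else horizon[-1]  # uncertainty budget allows
    t_obs.append(float(t_next))
    y_obs.append(signal(t_next))

print(np.round(np.diff(t_obs), 2))           # sampling intervals chosen by the agent
```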

  8. Connectionist approach to adaptive reasoning

    NASA Astrophysics Data System (ADS)

    Reddy, Mohan S.; Pandya, Abhijit S.; Reddy, D. V.

    1995-06-01

    This paper illustrates the neural net approach to constructing a fuzzy logic decision system. This technique employs an artificial neural network (ANN) to recognize the relationships that exist between the various inputs and outputs. An ANN is constructed based on the variable present in the application. The network is trained and tested. After successful testing, the ANN is exposed to new data and the results are grouped into fuzzy membership sets. This data grouping forms the basis of a new ANN. The network is now trained and tested with the fuzzy membership data. New data is presented to the trained network and the results from the fuzzy implications. This approach is used to compute skid resistance values from G-analyst accelerometer readings on open grid bridge decks.

  9. Distributed database kriging for adaptive sampling (D²KAS)

    DOE PAGES

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-03-18

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
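
    The lookup-before-simulate core of this scheme can be sketched in a few lines: quantize the macro-scale input to form a locality-aware key, try the table, and only fall back to MD on a miss. A plain dict stands in for the Redis database, the toy flux function for the MD simulation, and the kriging prediction step (value plus uncertainty from neighboring entries) is omitted.

```python
import numpy as np

def md_simulation(strain):                  # stand-in for the micro-scale MD run
    return 2.0 * strain + 0.1 * strain ** 3

class D2KASCache:
    """Quantize the macro-scale input to a locality-aware key, try the
    table, and only run MD on a miss. A dict stands in for Redis."""
    def __init__(self, cell=1e-3):
        self.cell, self.table = cell, {}

    def key(self, x):
        return tuple(np.round(np.asarray(x) / self.cell).astype(int))

    def flux(self, x):
        k = self.key(x)
        if k not in self.table:             # cache miss: pay for the MD run
            self.table[k] = md_simulation(np.asarray(x, dtype=float))
        return self.table[k]                # cache hit: fast table lookup

cache = D2KASCache()
for s in np.linspace(0.0, 0.01, 10000):
    cache.flux([s])
print("MD runs:", len(cache.table), "for 10000 queries")
```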

  10. Distributed database kriging for adaptive sampling (D²KAS)

    SciTech Connect

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-03-18

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.

  11. Distributed Database Kriging for Adaptive Sampling (D²KAS)

    NASA Astrophysics Data System (ADS)

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph

    2015-07-01

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5-25, while retaining high accuracy for various choices of the algorithm parameters.

  12. Flight Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The testbed served as a full-scale vehicle to test and validate adaptive flight control research addressing technical challenges involved with reducing risk to enable safe flight in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  13. Estimation of cosmological parameters using adaptive importance sampling

    SciTech Connect

    Wraith, Darren; Kilbinger, Martin; Benabed, Karim; Prunet, Simon; Cappe, Olivier; Fort, Gersende; Cardoso, Jean-Francois; Robert, Christian P.

    2009-07-15

    We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time reduces from days for MCMC to hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analyzed and discussed.
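
    The PMC iteration itself is compact: draw a population from the current proposal, compute importance weights against the target, and re-fit the proposal from the weighted sample. The sketch below uses a single Gaussian proposal and a toy bimodal target for brevity (the paper's proposals are richer), but it shows the mechanics, including the effective sample size used to monitor quality; each population draw is embarrassingly parallel, which is the source of the wall-clock gains.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def log_target(x):                          # toy bimodal "posterior"
    return np.logaddexp(stats.norm.logpdf(x, -2.0, 0.5),
                        stats.norm.logpdf(x, 2.0, 0.5))

mu, sd, N = 0.0, 5.0, 4000                  # broad initial proposal
for it in range(6):
    x = rng.normal(mu, sd, N)                        # one population (parallelizable)
    logw = log_target(x) - stats.norm.logpdf(x, mu, sd)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                     # normalized importance weights
    mu = np.sum(w * x)                               # adapt the proposal moments
    sd = np.sqrt(np.sum(w * (x - mu) ** 2))
    ess = 1.0 / np.sum(w ** 2)                       # effective sample size diagnostic
    print(f"iter {it}: mu={mu:+.2f} sd={sd:.2f} ESS={ess:.0f}")
```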

  14. A modular approach to adaptive structures.

    PubMed

    Pagitz, Markus; Pagitz, Manuel; Hühne, Christian

    2014-01-01

    A remarkable property of nastic, shape-changing plants is their complete fusion between actuators and structure. This is achieved by combining a large number of cells whose geometry, internal pressures and material properties are optimized for a given set of target shapes and stiffness requirements. An advantage of such a fusion is that cell walls are prestressed by cell pressures, which increases (decreases) the overall structural stiffness (weight). Inspired by the nastic movement of plants, Pagitz et al (2012 Bioinspir. Biomim. 7) published a novel concept for pressure-actuated cellular structures. This article extends previous work by introducing a modular approach to adaptive structures. An algorithm that breaks down any continuous target shapes into a small number of standardized modules is presented. Furthermore it is shown how cytoskeletons within each cell enhance the properties of adaptive modules. An adaptive passenger seat and an aircraft's leading (trailing) edge are used to demonstrate the potential of a modular approach. PMID:25289521

  15. Cross-Cultural Adaptation: Current Approaches.

    ERIC Educational Resources Information Center

    Kim, Young Yun, Ed.; Gudykunst, William B., Ed.

    1988-01-01

    Reflecting multidisciplinary and multisocietal approaches, this collection presents 14 theoretical or research-based essays dealing with cross-cultural adaptation of individuals who are born and raised in one culture and find themselves in need of modifying their customary life patterns in a foreign culture. Papers in the collection are:…

  16. Adaptive Importance Sampling for Control and Inference

    NASA Astrophysics Data System (ADS)

    Kappen, H. J.; Ruiz, H. C.

    2016-03-01

    Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows feedback controllers to be learned using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.

  17. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher order) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable implementation work was carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.

  18. Averaging analysis for discrete time and sampled data adaptive systems

    NASA Technical Reports Server (NTRS)

    Fu, Li-Chen; Bai, Er-Wei; Sastry, Shankar S.

    1986-01-01

    Earlier continuous time averaging theorems are extended to the nonlinear discrete time case. Theorems for the study of the convergence analysis of discrete time adaptive identification and control systems are used. Instability theorems are also derived and used for the study of robust stability and instability of adaptive control schemes applied to sampled data systems. As a by-product, the effects of sampling on unmodeled dynamics in continuous time systems are also studied.

  19. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.

    PubMed

    Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui

    2016-01-01

    To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the upload of sensed data is capped at an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they are not able to ensure that the upload frequency stays within the upper frequency. Some traditional sampling-based approaches can control upload frequency directly; however, they usually suffer a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion shows the underlying reason for the high performance of the proposed approach. PMID:27589758
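
    The first algorithm's idea, give rare (high-information) sensed values a higher sampling probability so that the uploaded data are roughly uniform across value intervals, can be sketched as follows; the binning, the per-bin target, and the heart-rate-like stream are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def asic_sample(values, bins=10, target_per_bin=50):
    """Rare sensed values carry more information, so they are kept with
    higher probability; kept data end up near-uniform across intervals."""
    hist, edges = np.histogram(values, bins=bins)
    idx = np.clip(np.digitize(values, edges) - 1, 0, bins - 1)
    p_keep = np.minimum(1.0, target_per_bin / np.maximum(hist[idx], 1))
    return values[rng.random(len(values)) < p_keep]

raw = rng.normal(70.0, 8.0, size=20000)     # e.g. a heart-rate-like stream
kept = asic_sample(raw)
print(len(kept), "of", len(raw), "observations uploaded")
```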

  20. Adaptive importance sampling of random walks on continuous state spaces

    SciTech Connect

    Baggerly, K.; Cox, D.; Picard, R.

    1998-11-01

    The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.

  1. Adaptive video compressed sampling in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi

    2016-07-01

    In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary digital micromirror device (DMD) and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.

  2. Approaching neuropsychological tasks through adaptive neurorobots

    NASA Astrophysics Data System (ADS)

    Gigliotta, Onofrio; Bartolomeo, Paolo; Miglino, Orazio

    2015-04-01

    Neuropsychological phenomena have mainly been modeled by attempting to reproduce their neural substrate, whereas sensory-motor contingencies have attracted less attention. In this work, we introduce a simulator based on the evolutionary robotics platform Evorobot* in order to set up neuropsychological tasks in silico. Moreover, in this study we trained artificial embodied neurorobotic agents equipped with a pan/tilt camera, provided with different neural and motor capabilities, to solve a well-known neuropsychological test: the cancellation task, in which an individual is asked to cancel target stimuli surrounded by distractors. Results showed that embodied agents provided with additional motor capabilities (a zooming/attentional actuator) outperformed simple pan/tilt agents, even those equipped with more complex neural controllers, and that the zooming ability is exploited to correctly categorise the presented stimuli. We conclude that, since neural computational power alone cannot explain the (artificial) cognition that emerged throughout the adaptive process, this kind of modelling approach can be fruitful in neuropsychological modelling, where the importance of having a body is often neglected.

  3. The Limits to Adaptation: A Systems Approach

    EPA Science Inventory

    The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed th...

  4. An Adaptive Critic Approach to Reference Model Adaptation

    NASA Technical Reports Server (NTRS)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  5. Local Adaptation in European Firs Assessed through Extensive Sampling across Altitudinal Gradients in Southern Europe

    PubMed Central

    Postolache, Dragos; Lascoux, Martin; Drouzas, Andreas D.; Källman, Thomas; Leonarduzzi, Cristina; Liepelt, Sascha; Piotti, Andrea; Popescu, Flaviu; Roschanski, Anna M.; Zhelev, Peter; Fady, Bruno; Vendramin, Giovanni Giuseppe

    2016-01-01

    Background: Local adaptation is a key driver of phenotypic and genetic divergence at loci responsible for adaptive trait variation in forest tree populations. Its experimental assessment requires rigorous sampling strategies such as those involving population pairs replicated across broad spatial scales. Methods: A hierarchical Bayesian model of selection (HBM) that explicitly considers both the replication of the environmental contrast and the hierarchical genetic structure among replicated study sites is introduced. Its power was assessed through simulations and compared to classical ‘within-site’ approaches (FDIST, BAYESCAN) and a simplified, within-site, version of the model introduced here (SBM). Results: HBM demonstrates that hierarchical approaches are very powerful for detecting replicated patterns of adaptive divergence, with low false-discovery (FDR) and false-non-discovery (FNR) rates compared to the analysis of different sites separately through within-site approaches. The hypothesis of local adaptation to altitude was further addressed by analyzing replicated Abies alba population pairs (low and high elevations) across the species’ southern distribution range, where the effects of climatic selection are expected to be the strongest. For comparison, a single population pair from the closely related species A. cephalonica was also analyzed. The hierarchical model did not detect any pattern of adaptive divergence to altitude replicated in the different study sites. Instead, idiosyncratic patterns of local adaptation among sites were detected by within-site approaches. Conclusion: Hierarchical approaches may miss idiosyncratic patterns of adaptation among sites, and we strongly recommend the use of both hierarchical (multi-site) and classical (within-site) approaches when addressing the question of adaptation across broad spatial scales. PMID:27392065

  6. Adaptive sampling strategy support for the unlined chromic acid pit, chemical waste landfill, Sandia National Laboratories, Albuquerque, New Mexico

    SciTech Connect

    Johnson, R.L.

    1993-11-01

    Adaptive sampling programs offer substantial savings in time and money when assessing hazardous waste sites. Key to some of these savings is the ability to adapt a sampling program to the real-time data generated by an adaptive sampling program. This paper presents a two-prong approach to supporting adaptive sampling programs: a specialized object-oriented database/geographical information system (SitePlanner™) for data fusion, management, and display and combined Bayesian/geostatistical methods (PLUME) for contamination-extent estimation and sample location selection. This approach is applied in a retrospective study of a subsurface chromium plume at Sandia National Laboratories' chemical waste landfill. Retrospective analyses suggest the potential for characterization cost savings on the order of 60% through a reduction in the number of sampling programs, total number of soil boreholes, and number of samples analyzed from each borehole.

  7. Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations

    SciTech Connect

    Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

    Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from the topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both…
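
    The limit-surface idea reduces to: fit a cheap surrogate of the pass/fail outcome, then spend the next expensive run where the surrogate is most ambiguous, i.e., near the estimated boundary. The sketch below uses a k-nearest-neighbor surrogate and a toy circular limit surface; both are stand-ins for the report's prediction models and nuclear simulations.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(8)

def simulation(x):                              # hypothetical pass/fail simulator:
    return float(x[0] ** 2 + x[1] ** 2 < 1.0)   # "success" inside the unit circle

X = rng.uniform(-2, 2, size=(20, 2))            # a few initial expensive runs
y = np.array([simulation(x) for x in X])
for _ in range(30):
    surrogate = KNeighborsRegressor(n_neighbors=5).fit(X, y)
    cand = rng.uniform(-2, 2, size=(3000, 2))
    p = surrogate.predict(cand)                 # local fraction of "success" neighbors
    pick = cand[np.argmin(np.abs(p - 0.5))]     # most ambiguous candidate sits near
    X = np.vstack([X, pick])                    # the success/failure limit surface
    y = np.append(y, simulation(pick))

print("sampled radii:", np.round(np.linalg.norm(X[20:], axis=1), 2))  # cluster near 1
```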

  8. Anomalous human behavior detection: an adaptive approach

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Coen; Halma, Arvid; Schutte, Klamer

    2013-05-01

    Detection of anomalies (outliers or abnormal instances) is an important element in a range of applications such as fault, fraud, and suspicious behavior detection and knowledge discovery. In this article we propose a new method for anomaly detection and test its ability to detect anomalous behavior in videos from DARPA's Mind's Eye program, containing a variety of human activities. In this semi-unsupervised task a set of normal instances is provided for training, after which unknown abnormal behavior has to be detected in a test set. The features extracted from the video data have high dimensionality, are sparse and are inhomogeneously distributed in the feature space, making this a challenging task. Given these characteristics a distance-based method is preferred, but choosing a threshold to classify instances as (ab)normal is non-trivial. Our novel approach, the Adaptive Outlier Distance (AOD), is able to detect outliers in these conditions based on local distance ratios. The underlying assumption is that the local maximum distance between labeled examples is a good indicator of the variation in that neighborhood, and therefore a local threshold will result in more robust outlier detection. We compare our method to existing state-of-the-art methods such as the Local Outlier Factor (LOF) and the Local Distance-based Outlier Factor (LDOF). The results of the experiments show that our novel approach improves the quality of the anomaly detection.
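
    The abstract does not give the AOD formula, but its stated assumption, that the local maximum distance among a test point's labeled neighbors sets a local scale, suggests a score along the lines of the following sketch; the exact ratio and the choice of k are guesses for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def aod_scores(train, test, k=5):
    """Score = nearest-neighbor distance divided by the local maximum
    distance among the k nearest training examples (a local threshold)."""
    tree = cKDTree(train)
    d, idx = tree.query(test, k=k)
    scores = []
    for dist, nbrs in zip(d, idx):
        local = train[nbrs]
        local_max = max(np.linalg.norm(a - b) for a in local for b in local)
        scores.append(dist[0] / (local_max + 1e-12))   # >> 1: far outside the
    return np.array(scores)                            # neighborhood's own scale

rng = np.random.default_rng(9)
train = rng.normal(size=(300, 2))
test = np.vstack([rng.normal(size=(5, 2)), [[6.0, 6.0]]])
print(np.round(aod_scores(train, test), 2))  # the far-away point scores highest
```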

  9. Adaptive sample map for Monte Carlo ray tracing

    NASA Astrophysics Data System (ADS)

    Teng, Jun; Luo, Lixin; Chen, Zhibo

    2010-07-01

    The Monte Carlo ray tracing algorithm is widely used by production-quality renderers to generate synthesized images in films and TV programs. Noise artifacts exist in synthetic images generated by Monte Carlo ray tracing methods. In this paper, a novel noise artifact detection and noise level representation method is proposed. We first apply a discrete wavelet transform (DWT) to a synthetic image; the high-frequency sub-bands of the DWT result encode the noise information. The sub-band coefficients are then combined to generate a noise level description of the synthetic image, which is called a noise map in this paper. This noise map is then subdivided into blocks for robust noise level metric calculation. Increasing the samples per pixel in a Monte Carlo ray tracer can reduce the noise of a synthetic image to a visually unnoticeable level. A noise-to-sample-number mapping algorithm is thus performed on each block of the noise map: higher noise values are mapped to larger sample numbers and lower noise values to smaller sample numbers; the result of the mapping is called a sample map. Each pixel in a sample map can be used by a Monte Carlo ray tracer to reduce the noise level in the corresponding block of pixels in a synthetic image. However, this block-based scheme produces blocky artifacts like those that appear in video and image compression algorithms. We use a Gaussian filter to smooth the sample map; the result is the adaptive sample map (ASP). The ASP serves two purposes in the rendering process: its statistics can be used as a noise level metric for the synthetic image, and it can be used by a Monte Carlo ray tracer to refine the synthetic image adaptively in order to reduce the noise to an unnoticeable level with less rendering time than the brute-force method.
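
    A compact sketch of this pipeline, assuming the PyWavelets and SciPy packages; the block size, the linear noise-to-sample mapping, and the smoothing width are illustrative choices:

      import numpy as np
      import pywt
      from scipy.ndimage import gaussian_filter

      def adaptive_sample_map(img, block=16, s_min=4, s_max=64):
          # High-frequency DWT sub-bands encode the noise (at half resolution).
          _, (cH, cV, cD) = pywt.dwt2(img, "haar")
          noise = np.sqrt(cH**2 + cV**2 + cD**2)       # per-pixel noise map
          smap = np.empty_like(noise)
          h, w = noise.shape
          for i in range(0, h, block):                 # block-wise noise metric
              for j in range(0, w, block):
                  blk = noise[i:i + block, j:j + block]
                  smap[i:i + block, j:j + block] = blk.mean()
          smap = (smap - smap.min()) / (np.ptp(smap) + 1e-12)
          smap = s_min + smap * (s_max - s_min)        # noise -> samples per pixel
          return gaussian_filter(smap, sigma=block / 2)  # smooth away blockiness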

  10. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyzed some of the phonological rules of Russian loanword adaptation in Persian, on the view of Optimal Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of phonological process on Russian loanwords adaptation in Persian. By gathering about 50 current Russian loanwords, we selected some of them to analyze. We…

  11. Improving Wang-Landau sampling with adaptive windows

    NASA Astrophysics Data System (ADS)

    Cunha-Netto, A. G.; Caparica, A. A.; Tsai, Shan-Ho; Dickman, Ronald; Landau, D. P.

    2008-11-01

    Wang-Landau sampling (WLS) of large systems requires dividing the energy range into “windows” and joining the results of simulations in each window. The resulting density of states (and associated thermodynamic functions) is shown to suffer from boundary effects in simulations of lattice polymers and the five-state Potts model. Here, we implement WLS using adaptive windows. Instead of defining fixed energy windows (or windows in the energy-magnetization plane for the Potts model), the boundary positions depend on the set of energy values on which the histogram is flat at a given stage of the simulation. Shifting the windows each time the modification factor f is reduced, we eliminate border effects that arise in simulations using fixed windows. Adaptive windows extend significantly the range of system sizes that may be studied reliably using WLS.
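
    For reference, a minimal single-window Wang-Landau loop in Python on a toy model whose density of states is known exactly (N independent coins, E = number of heads, so g(E) is binomial); the paper's contribution is to partition an energy range like this into windows whose boundaries shift each time f is reduced:

      import numpy as np

      rng = np.random.default_rng(1)
      N = 20
      ln_g = np.zeros(N + 1)                  # running estimate of ln g(E)
      hist = np.zeros(N + 1)
      state = rng.integers(0, 2, N)
      E = int(state.sum())
      ln_f = 1.0                              # ln of modification factor f
      while ln_f > 1e-6:
          for _ in range(10000):
              i = rng.integers(N)
              E_new = E + (1 - 2 * int(state[i]))      # effect of flipping coin i
              if np.log(rng.random()) < ln_g[E] - ln_g[E_new]:
                  state[i] ^= 1                        # accept the flip
                  E = E_new
              ln_g[E] += ln_f
              hist[E] += 1
          if hist.min() > 0.8 * hist.mean():           # flat-histogram test
              hist[:] = 0
              ln_f /= 2                                # f -> sqrt(f)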

  12. Binary hologram generation based on shape adaptive sampling

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.; Pan, Y.; Poon, T.-C.

    2014-05-01

    Past research has revealed that by down-sampling the projected intensity profile of a source object scene with a regular sampling lattice, a binary Fresnel hologram can be generated swiftly while preserving favorable quality in its reconstructed image. However, this method also results in a prominent textural pattern that conflicts with the geometrical profile of the object scene, leading to an unnatural visual perception. In this paper, we overcome this problem with a down-sampling process that is adaptive to the geometry of the object. Experimental results demonstrate that by applying our proposed method to generate a binary hologram, the reconstructed image is rendered with a texture that conforms to the shape of the three-dimensional object(s).

  13. Effect of imperfect detectability on adaptive and conventional sampling: simulated sampling of freshwater mussels in the upper Mississippi River.

    PubMed

    Smith, David R; Gray, Brian R; Newton, Teresa J; Nichols, Doug

    2010-11-01

    Adaptive sampling designs are recommended where, as is typical with freshwater mussels, the outcome of interest is rare and clustered. However, the performance of adaptive designs has not been investigated when outcomes are not only rare and clustered but also imperfectly detected. We address this combination of challenges using data simulated to mimic properties of freshwater mussels from a reach of the upper Mississippi River. Simulations were conducted under a range of sample sizes and detection probabilities. Under perfect detection, efficiency of the adaptive sampling design increased relative to the conventional design as sample size increased and as density decreased. Also, the probability of sampling occupied habitat was four times higher for adaptive than conventional sampling of the lowest density population examined. However, imperfect detection resulted in substantial biases in sample means and variances under both adaptive and conventional designs. The efficiency of adaptive sampling declined with decreasing detectability. Also, the probability of encountering an occupied unit during adaptive sampling, relative to conventional sampling, declined with decreasing detectability. Thus, the potential gains in the application of adaptive sampling to rare and clustered populations relative to conventional sampling are reduced when detection is imperfect. The results highlight the need to increase or estimate detection to improve performance of conventional and adaptive sampling designs. PMID:19946742

  14. Effect of imperfect detectability on adaptive and conventional sampling: Simulated sampling of freshwater mussels in the upper Mississippi River

    USGS Publications Warehouse

    Smith, D.R.; Gray, B.R.; Newton, T.J.; Nichols, D.

    2010-01-01

    Adaptive sampling designs are recommended where, as is typical with freshwater mussels, the outcome of interest is rare and clustered. However, the performance of adaptive designs has not been investigated when outcomes are not only rare and clustered but also imperfectly detected. We address this combination of challenges using data simulated to mimic properties of freshwater mussels from a reach of the upper Mississippi River. Simulations were conducted under a range of sample sizes and detection probabilities. Under perfect detection, efficiency of the adaptive sampling design increased relative to the conventional design as sample size increased and as density decreased. Also, the probability of sampling occupied habitat was four times higher for adaptive than conventional sampling of the lowest density population examined. However, imperfect detection resulted in substantial biases in sample means and variances under both adaptive and conventional designs. The efficiency of adaptive sampling declined with decreasing detectability. Also, the probability of encountering an occupied unit during adaptive sampling, relative to conventional sampling, declined with decreasing detectability. Thus, the potential gains in the application of adaptive sampling to rare and clustered populations relative to conventional sampling are reduced when detection is imperfect. The results highlight the need to increase or estimate detection to improve performance of conventional and adaptive sampling designs.

  15. Elucidating Microbial Adaptation Dynamics via Autonomous Exposure and Sampling

    NASA Astrophysics Data System (ADS)

    Grace, J. M.; Verseux, C.; Gentry, D.; Moffet, A.; Thayabaran, R.; Wong, N.; Rothschild, L.

    2013-12-01

    The adaptation of micro-organisms to their environments is a complex process of interaction between environmental pressures and competition. Reducing this multifactorial process to environmental exposure in the laboratory is a common tool for elucidating individual mechanisms of evolution, such as mutation rates [Wielgoss et al., 2013]. Although such studies inform fundamental questions about the way adaptation and even speciation occur, they are often limited by labor-intensive manual techniques [Wassmann et al., 2010]. Current methods for controlled study of microbial adaptation limit the length of time, the depth of collected data, and the breadth of applied environmental conditions. Small idiosyncrasies in manual techniques can have large effects on outcomes; for example, there are significant variations in induced radiation resistances following similar repeated exposure protocols [Alcántara-Díaz et al., 2004; Goldman and Travisano, 2011]. We describe here a project under development to allow rapid cycling of multiple types of microbial environmental exposure. The system allows continuous autonomous monitoring and data collection of both single species and sampled communities, independently and concurrently providing multiple types of controlled environmental pressure (temperature, radiation, chemical presence or absence, and so on) to a microbial community in dynamic response to the ecosystem's current status. When combined with DNA sequencing and extraction, such a controlled environment can cast light on microbial functional development, population dynamics, inter- and intra-species competition, and microbe-environment interaction. The project's goal is to allow rapid, repeatable iteration of studies of both natural and artificial microbial adaptation. As an example, the same system can be used either to increase the pH of a wet soil aliquot over time while periodically sampling it for genetic activity analysis, or to repeatedly expose a culture of

  16. Temporally adaptive sampling: a case study in rare species survey design with marbled salamanders (Ambystoma opacum).

    PubMed

    Charney, Noah D; Kubel, Jacob E; Eiseman, Charles S

    2015-01-01

    Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224

  17. Temporally Adaptive Sampling: A Case Study in Rare Species Survey Design with Marbled Salamanders (Ambystoma opacum)

    PubMed Central

    Charney, Noah D.; Kubel, Jacob E.; Eiseman, Charles S.

    2015-01-01

    Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224

  18. Using continuous in-situ measurements to adaptively trigger urban storm water samples

    NASA Astrophysics Data System (ADS)

    Wong, B. P.; Kerkez, B.

    2015-12-01

    Until cost-effective in-situ sensors are available for biological parameters, nutrients and metals, automated samplers will continue to be the primary source of reliable water quality measurements. Given limited sample bottles, however, autosamplers often obscure insights on nutrient sources and biogeochemical processes which would otherwise be captured using a continuous sampling approach. To that end, we evaluate the efficacy of a novel method to measure first-flush nutrient dynamics in flashy, urban watersheds. Our approach reduces the number of samples required to capture water quality dynamics by leveraging an internet-connected sensor node, which is equipped with a suite of continuous in-situ sensors and an automated sampler. To capture both the initial baseflow as well as storm concentrations, a cloud-hosted adaptive algorithm analyzes the high-resolution sensor data along with local weather forecasts to optimize a sampling schedule. The method was tested in a highly developed urban catchment in Ann Arbor, Michigan and collected samples of nitrate, phosphorus, and suspended solids throughout several storm events. Results indicate that the watershed does not exhibit first-flush dynamics, a behavior that would have been obscured when using a non-adaptive sampling approach.
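
    A toy version of such a trigger rule, with all field names and thresholds invented for illustration (the actual system fuses the live sensor feed with weather forecasts on a cloud host to schedule the remaining bottles):

      # Hold bottles during baseflow; trigger when a storm pulse is likely.
      def should_trigger(forecast_rain_prob, stage_rate, turbidity,
                         bottles_left, p_rain=0.7, dh_dt=0.5, turb_ntu=50.0):
          storm_likely = forecast_rain_prob > p_rain   # from weather forecast
          rising_limb = stage_rate > dh_dt             # water level rise, cm/min
          first_flush = turbidity > turb_ntu           # in-situ turbidity spike
          return bottles_left > 0 and (storm_likely or rising_limb or first_flush)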

  19. The iterative adaptive approach in medical ultrasound imaging.

    PubMed

    Jensen, Are Charles; Austeng, Andreas

    2014-10-01

    Many medical ultrasound imaging systems are based on sweeping the image plane with a set of narrow beams. Usually, the returning echo from each of these beams is used to form one or a few azimuthal image samples. We jointly model, for each radial distance, the full azimuthal scanline. The model consists of the amplitudes of a set of densely placed potential reflectors (or scatterers), cf. sparse signal representation. To fit the model, we apply the iterative adaptive approach (IAA) on data formed by sequenced time delays and phase shifts. The performance of the IAA in combination with our time-delayed and phase-shifted data is studied on both simulated data of scenes consisting of point targets and hollow cyst-like structures, and recorded ultrasound phantom data from a specially adapted commercially available scanner. The results show that the proposed IAA is more capable of resolving point targets and gives better defined and more geometrically correct cyst-like structures in speckle images compared with the conventional delay-and-sum (DAS) approach. Compared with a Capon beamformer, the IAA showed an improved rendering of cyst-like structures and a similar point-target resolvability. Unlike the Capon beamformer, the IAA has no user parameters and seems unaffected by signal cancellation. The disadvantage of the IAA is a high computational load. PMID:25265177
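
    In outline, a single-snapshot IAA iteration looks like the following sketch, assuming the data vector y and the steering matrix A have already been formed by the time-delay and phase-shift preprocessing described above (sizes and iteration count are illustrative):

      import numpy as np

      def iaa(y, A, n_iter=15):
          """y: (N,) array of channel data for one radial distance;
          A: (N, K) matrix whose columns are the responses of K densely
          placed candidate reflectors. Returns amplitude estimates."""
          p = np.abs(A.conj().T @ y) ** 2 / np.sum(np.abs(A) ** 2, axis=0) ** 2
          for _ in range(n_iter):
              R = (A * p) @ A.conj().T                 # R = A diag(p) A^H
              Ri_y = np.linalg.solve(R, y)
              Ri_A = np.linalg.solve(R, A)
              s = (A.conj() * Ri_y[:, None]).sum(0) / (A.conj() * Ri_A).sum(0)
              p = np.abs(s) ** 2                       # update reflector powers
          return s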

  20. POF-Darts: Geometric adaptive sampling for probability of failure

    DOE PAGESBeta

    Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; Romero, Vicente J.; Rushdi, Ahmad A.

    2016-06-18

    We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to the failure or non-failure region, and surround it with a protection sphere to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, the regions not covered by spheres shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction step, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.

  1. Passive and active adaptive management: Approaches and an example

    USGS Publications Warehouse

    Williams, B.K.

    2011-01-01

    Adaptive management is a framework for resource conservation that promotes iterative learning-based decision making. Yet there remains considerable confusion about what adaptive management entails, and how to actually make resource decisions adaptively. A key but somewhat ambiguous distinction in adaptive management is between active and passive forms of adaptive decision making. The objective of this paper is to illustrate some approaches to active and passive adaptive management with a simple example involving the drawdown of water impoundments on a wildlife refuge. The approaches are illustrated for the drawdown example, and contrasted in terms of objectives, costs, and potential learning rates. Some key challenges to the actual practice of adaptive management are discussed, and tradeoffs between implementation costs and long-term benefits are highlighted. © 2010 Elsevier Ltd.

  2. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals, named adaptive robustness, is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combination strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  3. Structured estimation - Sample size reduction for adaptive pattern classification

    NASA Technical Reports Server (NTRS)

    Morgera, S.; Cooper, D. B.

    1977-01-01

    The Gaussian two-category classification problem with known category mean value vectors and identical but unknown category covariance matrices is considered. The weight vector depends on the unknown common covariance matrix, so the procedure is to estimate the covariance matrix in order to obtain an estimate of the optimum weight vector. The measure of performance for the adapted classifier is the output signal-to-interference noise ratio (SIR). A simple approximation for the expected SIR is gained by using the general sample covariance matrix estimator; this performance is both signal and true covariance matrix independent. An approximation is also found for the expected SIR obtained by using a Toeplitz form covariance matrix estimator; this performance is found to be dependent on both the signal and the true covariance matrix.

  4. Concept Based Approach for Adaptive Personalized Course Learning System

    ERIC Educational Resources Information Center

    Salahli, Mehmet Ali; Özdemir, Muzaffer; Yasar, Cumali

    2013-01-01

    One of the most important factors in improving the personalization aspects of learning systems is to endow them with adaptive properties. The aim of the adaptive personalized learning system is to offer the most appropriate learning path and learning materials to learners by taking into account their profiles. In this paper, a new approach to…

  5. Responsiveness-to-Intervention: A "Systems" Approach to Instructional Adaptation

    ERIC Educational Resources Information Center

    Fuchs, Douglas; Fuchs, Lynn S.

    2016-01-01

    Classroom research on adaptive teaching indicates few teachers modify instruction for at-risk students in a manner that benefits them. Responsiveness-To-Intervention, with its tiers of increasingly intensive instruction, represents an alternative approach to adaptive instruction that may prove more workable in today's schools.

  6. Superresolution restoration of an image sequence: adaptive filtering approach.

    PubMed

    Elad, M; Feuer, A

    1999-01-01

    This paper presents a new method based on adaptive filtering theory for superresolution restoration of continuous image sequences. The proposed methodology suggests least squares (LS) estimators which adapt in time, based on adaptive filters: least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space- and time-variant blurring and arbitrary motion, both of them assumed known. The proposed new approach is shown to have relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented. PMID:18262881
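
    For reference, the LMS building block in its plainest form; the space- and time-variant blur and motion operators of the full superresolution scheme are omitted, and the tap count and step size are illustrative:

      import numpy as np

      def lms(x, d, taps=8, mu=0.01):
          """Track the desired signal d[n] from input x[n] with a
          stochastic-gradient (LMS) weight update."""
          w = np.zeros(taps)
          y = np.zeros(len(x))
          for n in range(taps, len(x)):
              u = x[n - taps:n][::-1]          # most recent samples first
              y[n] = w @ u                     # filter output
              e = d[n] - y[n]                  # instantaneous error
              w += mu * e * u                  # LMS weight update
          return w, y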

  7. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
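
    A stripped-down sketch of the loop, assuming scikit-learn's GaussianMixture for the proposal; the polynomial chaos surrogate is omitted, and log_post stands for the (expensive or surrogate-evaluated) log posterior, vectorized over rows:

      import numpy as np
      from scipy.stats import multivariate_normal
      from sklearn.mixture import GaussianMixture

      def adaptive_gm_is(log_post, dim, n_iter=5, n=1000, k=3, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.normal(size=(n, dim))                # initial N(0, I) proposal
          log_q = multivariate_normal(np.zeros(dim)).logpdf(x)
          for _ in range(n_iter):
              lw = log_post(x) - log_q
              w = np.exp(lw - lw.max())                # stabilized weights
              w /= w.sum()
              idx = rng.choice(n, n, p=w)              # resample by weight
              gm = GaussianMixture(k).fit(x[idx])      # refit mixture proposal
              x, _ = gm.sample(n)
              log_q = gm.score_samples(x)
          lw = log_post(x) - log_q
          w = np.exp(lw - lw.max())
          return x, w / w.sum()                        # weighted posterior sample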

  8. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  9. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGESBeta

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  10. Approach for reconstructing anisoplanatic adaptive optics images.

    PubMed

    Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J

    2007-08-20

    Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle theta(0). For field angles larger than theta(0), the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as a function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space-invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement in the mean squared error between the reconstructed image and the object, compared with the deconvolution performed using the on-axis PSF. PMID:17712366
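
    A sketch of the block-processing reconstruction with a space-varying PSF, using a plain Richardson-Lucy update; psf_for_block is a hypothetical stand-in for the paper's predicted local PSF, and a real implementation would overlap blocks to suppress seam artifacts:

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(img, psf, n_iter=30):
          est = np.full_like(img, img.mean())          # flat initial estimate
          psf_m = psf[::-1, ::-1]                      # mirrored PSF
          for _ in range(n_iter):
              conv = fftconvolve(est, psf, mode="same") + 1e-12
              est = est * fftconvolve(img / conv, psf_m, mode="same")
          return est

      def blockwise_rl(img, psf_for_block, block=64):
          out = np.empty_like(img)
          for i in range(0, img.shape[0], block):
              for j in range(0, img.shape[1], block):
                  psf = psf_for_block(i, j)            # local predicted PSF
                  out[i:i + block, j:j + block] = richardson_lucy(
                      img[i:i + block, j:j + block], psf)
          return out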

  11. Sample Sealing Approaches for Mars Sample Return Caching

    NASA Technical Reports Server (NTRS)

    Younse, Paulo; deAlwis, Thimal; Backes, Paul; Trebi-Ollennu, Ashitey

    2012-01-01

    The objective of this project was to investigate sealing methods for encapsulating samples in 1 cm diameter thin-walled sample tubes, applicable to future proposed Mars Sample Return missions. Techniques implemented include a spring-energized Teflon sleeve plug, a crimped tube seal, a heat-activated shape memory alloy plug, a shape memory alloy activated cap, a solder-based plug, and a solder-based cap.

  12. Innovation and adaptation in a Turkish sample: a preliminary study.

    PubMed

    Oner, B

    2000-11-01

    The aim of this study was to examine the representations of adaptation and innovation among adults in Turkey. Semi-structured interviews were carried out with a sample of 20 Turkish adults (10 men, 10 women) from various occupations. The participants' ages ranged from 21 to 58 years. Results of content analysis showed that the representation of innovation varied with the type of context. Innovation was not preferred within the family and interpersonal relationship contexts, whereas it was relatively more readily welcomed within the contexts of work, science, and technology. This finding may indicate that the concept of innovation that is assimilated in traditional Turkish culture has limits. Contents of the interviews were also analyzed with respect to M. J. Kirton's (1976) subscales of originality, efficiency, and rule-group conformity. The participants favored efficient innovators, whereas they thought that the risk of failure was high in cases of inefficient innovation. The reasons for and indications of the representations of innovativeness among Turkish people are discussed in relation to their social structure and cultural expectations. PMID:11092420

  13. Adapting to the Digital Age: A Narrative Approach

    ERIC Educational Resources Information Center

    Cousins, Sarah; Bissar, Dounia

    2012-01-01

    The article adopts a narrative inquiry approach to foreground informal learning and presents a collection of stories from tutors about how they adapted comfortably to the digital age. We were concerned that despite substantial evidence that bringing about changes in pedagogic practices can be difficult, there is a gap in convincing approaches to…

  14. The AdaptiV Approach to Verification of Adaptive Systems

    SciTech Connect

    Rouff, Christopher; Buskens, Richard; Pullum, Laura L; Cui, Xiaohui; Hinchey, Mike

    2012-01-01

    Adaptive systems are critical for future space and other unmanned and intelligent systems. Verification of these systems is also critical for their use in systems with potential harm to human life or with large financial investments. Due to their nondeterministic nature and extremely large state space, current methods for verification of software systems are not adequate to provide a high level of assurance. The combination of stabilization science, high performance computing simulations, compositional verification and traditional verification techniques, plus operational monitors, provides a complete approach to verification and deployment of adaptive systems that has not been used before. This paper gives an overview of this approach.

  15. Adaptive millimeter-wave synthetic aperture imaging for compressive sampling of sparse scenes.

    PubMed

    Mrozack, Alex; Heimbeck, Martin; Marks, Daniel L; Richard, Jonathan; Everitt, Henry O; Brady, David J

    2014-06-01

    We apply adaptive sensing techniques to the problem of locating sparse metallic scatterers using high-resolution, frequency-modulated continuous-wave W-band RADAR. Using a single detector, a frequency-stepped source, and a lateral translation stage, inverse synthetic aperture RADAR reconstruction techniques are used to search for one or two wire scatterers within a specified range, while an adaptive algorithm determines successive sampling locations. The two-dimensional location of each scatterer is thereby identified with sub-wavelength accuracy in as few as 1/4 the number of lateral steps required for a simple raster scan. The implications of applying this approach to more complex scattering geometries are explored in light of the various assumptions made. PMID:24921545

  16. A Monte Carlo Approach to the Design, Assembly, and Evaluation of Multistage Adaptive Tests

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2008-01-01

    This article presents an application of Monte Carlo methods for developing and assembling multistage adaptive tests (MSTs). A major advantage of the Monte Carlo assembly over other approaches (e.g., integer programming or enumerative heuristics) is that it provides a uniform sampling from all MSTs (or MST paths) available from a given item pool.…

  17. A new approach to adaptive control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An approach in which the manipulator inverse is used as a feedforward controller is employed in the adaptive control of manipulators in order to achieve trajectory tracking by the joint angles. The desired trajectory is applied as an input to the feedforward controller, and the controller output is used as the driving torque for the manipulator. An adaptive algorithm obtained from MRAC theory is used to update the controller gains to cope with variations in the manipulator inverse due to changes of the operating point. An adaptive feedback controller and an auxiliary signal enhance closed-loop stability and achieve faster adaptation. Simulation results demonstrate the effectiveness of the proposed control scheme for different reference trajectories, and despite large variations in the payload.

  18. Approach to nonparametric cooperative multiband segmentation with adaptive threshold.

    PubMed

    Sebari, Imane; He, Dong-Chen

    2009-07-10

    We present a new nonparametric cooperative approach to multiband image segmentation. It is based on cooperation between region-growing segmentation and edge segmentation. This approach requires no input data other than the images to be processed. It uses a spectral homogeneity criterion whose threshold is determined automatically. The threshold is adaptive and varies depending on the objects to be segmented. Applying this new approach to very high resolution satellite imagery has yielded satisfactory results. The approach demonstrated its performance on images of varied complexity and was able to detect objects of great spatial and spectral heterogeneity. PMID:19593349

  19. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach

    PubMed Central

    Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.

    2014-01-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  20. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.

    PubMed

    Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A

    2013-02-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  1. Searching for adaptive traits in genetic resources - phenology based approach

    NASA Astrophysics Data System (ADS)

    Bari, Abdallah

    2015-04-01

    Phenology is an important plant trait, not only for assessing and forecasting food production but also for searching genebanks for adaptive traits. Among the phenological parameters we have been considering in the search for such adaptive and rare traits are the onset (sowing period) and the seasonality (growing period). Currently an application is being developed as part of the focused identification of germplasm strategy (FIGS) approach to use climatic data to identify crop growing seasons and characterize them in terms of onset and duration. These approximations of growing period characteristics can then be used to estimate flowering and maturity dates for dryland crops, such as wheat, barley, faba bean, lentils and chickpea, and to assess, among others, phenology-related traits such as days to heading [dhe] and grain filling period [gfp]. The approach followed here is based on first calculating long-term average daily temperatures by fitting a curve to the monthly data over days from the beginning of the year. Prior to the identification of these phenological stages, the onset is first extracted from integer raster GIS layers developed from a model of the growing period that considers both moisture and temperature limitations. The paper presents some examples of real applications of the approach to search for rare and adaptive traits.
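
    The first step, fitting a smooth curve to long-term monthly temperatures to approximate long-term average daily temperature, might look like this sketch (the monthly values are invented and the first-harmonic sinusoid is one simple curve choice):

      import numpy as np

      monthly = np.array([12.5, 13.8, 16.0, 18.9, 22.0, 26.1,
                          28.4, 28.0, 25.3, 21.2, 16.4, 13.2])   # long-term means
      t_mid = (np.arange(12) + 0.5) * 365.0 / 12                 # mid-month days
      B = np.column_stack([np.ones(12),
                           np.cos(2 * np.pi * t_mid / 365.0),
                           np.sin(2 * np.pi * t_mid / 365.0)])
      coef, *_ = np.linalg.lstsq(B, monthly, rcond=None)         # least squares fit
      days = np.arange(1, 366)
      daily = (coef[0] + coef[1] * np.cos(2 * np.pi * days / 365.0)
                       + coef[2] * np.sin(2 * np.pi * days / 365.0))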

  2. Sampling of Complex Networks: A Datamining Approach

    NASA Astrophysics Data System (ADS)

    Loecher, Markus; Dohrmann, Jakob; Bauer, Gernot

    2007-03-01

    Efficient and accurate sampling of big complex networks is still an unsolved problem. As the degree distribution is one of the most commonly used attributes to characterize a network, there have been many attempts in recent papers to derive the original degree distribution from the data obtained during a traceroute-like sampling process. This talk describes a strategy for predicting the original degree of a node using the data obtained from a network by traceroute-like sampling, making use of datamining techniques. Only local quantities (the sampled degree k, the redundancy of node detection r, the time of the first discovery of a node t, and the distance to the sampling source d) are used as input for the datamining models. Global properties like the betweenness centrality are ignored. These local quantities are examined theoretically and in simulations to increase their value for the predictions. The accuracy of the models is discussed as a function of the number of sources used in the sampling process and the underlying topology of the network. The purpose of this work is to introduce the techniques of the relatively young field of datamining to the discussion on network sampling.

  3. The relative power of genome scans to detect local adaptation depends on sampling design and statistical method.

    PubMed

    Lotterhos, Katie E; Whitlock, Michael C

    2015-03-01

    Although genome scans have become a popular approach towards understanding the genetic basis of local adaptation, the field still does not have a firm grasp on how sampling design and demographic history affect the performance of genome scans on complex landscapes. To explore these issues, we compared 20 different sampling designs in equilibrium (i.e. island model and isolation by distance) and nonequilibrium (i.e. range expansion from one or two refugia) demographic histories in spatially heterogeneous environments. We simulated spatially complex landscapes, which allowed us to exploit local maxima and minima in the environment in 'pair' and 'transect' sampling strategies. We compared FST outlier and genetic-environment association (GEA) methods for each of two approaches that control for population structure: with a covariance matrix or with latent factors. We show that while the relative power of two methods in the same category (FST or GEA) depended largely on the number of individuals sampled, overall GEA tests had higher power in the island model and FST had higher power under isolation by distance. In the refugia models, however, these methods varied in their power to detect local adaptation at weakly selected loci. At weakly selected loci, paired sampling designs had equal or higher power than transect or random designs to detect local adaptation. Our results can inform sampling designs for studies of local adaptation and have important implications for the interpretation of genome scans based on landscape data. PMID:25648189

  4. An adaptive two-stage sequential design for sampling rare and clustered populations

    USGS Publications Warehouse

    Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.

    2008-01-01

    How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.

  5. An information theoretic approach of designing sparse kernel adaptive filters.

    PubMed

    Liu, Weifeng; Park, Il; Principe, José C

    2009-12-01

    This paper discusses an information theoretic approach of designing sparse kernel adaptive filters. To determine useful data to be learned and remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains which is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme, which can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short term chaotic time-series prediction, and long term time-series forecasting examples are presented. PMID:19923047
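
    Under Gaussian predictive assumptions the surprise of a datum is its negative log predictive likelihood, which suggests the following sketch: only data whose surprise falls in an informative band are admitted to the dictionary (the thresholds, kernel width, and brute-force matrix inverse are simplifications; the paper's scheme updates the model recursively):

      import numpy as np

      def gauss_k(a, b, s=1.0):
          return np.exp(-np.sum((a - b) ** 2) / (2 * s ** 2))

      class SurpriseFilter:
          def __init__(self, noise=0.1, T1=4.0, T2=0.1):
              self.X, self.y = [], []
              self.noise, self.T1, self.T2 = noise, T1, T2

          def _refit(self):
              K = np.array([[gauss_k(a, b) for b in self.X] for a in self.X])
              self.Kinv = np.linalg.inv(K + self.noise * np.eye(len(self.X)))
              self.alpha = self.Kinv @ np.array(self.y)

          def update(self, x, y):
              if not self.X:                            # first datum: just store
                  self.X, self.y = [x], [y]
                  self._refit()
                  return
              k = np.array([gauss_k(x, xi) for xi in self.X])
              mu = k @ self.alpha                       # predictive mean
              var = 1.0 + self.noise - k @ self.Kinv @ k  # predictive variance
              surprise = 0.5 * np.log(2 * np.pi * var) + (y - mu) ** 2 / (2 * var)
              if self.T2 < surprise < self.T1:          # informative: learn it
                  self.X.append(x)
                  self.y.append(y)
                  self._refit()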

  6. Variable neural adaptive robust control: a switched system approach.

    PubMed

    Lian, Jianming; Hu, Jianghai; Żak, Stanislaw H

    2015-05-01

    Variable neural adaptive robust control strategies are proposed for the output tracking control of a class of multi-input multi-output uncertain systems. The controllers incorporate a novel variable-structure radial basis function (RBF) network as the self-organizing approximator for unknown system dynamics. It can determine the network structure online dynamically by adding or removing RBFs according to the tracking performance. The structure variation is systematically considered in the stability analysis of the closed-loop system using a switched system approach with the piecewise quadratic Lyapunov function. The performance of the proposed variable neural adaptive robust controllers is illustrated with simulations. PMID:25881366

  7. Novel Approaches to Adaptive Angular Approximations in Computational Transport

    SciTech Connect

    Marvin L. Adams; Igor Carron; Paul Nelson

    2006-06-04

    The particle-transport equation is notoriously difficult to discretize accurately, largely because the solution can be discontinuous in every variable. At any given spatial position and energy E, for example, the transport solution can be discontinuous at an arbitrary number of arbitrary locations in the direction domain. Even if the solution is continuous it is often devoid of smoothness. This makes the direction variable extremely difficult to discretize accurately. We have attacked this problem with adaptive discretizations in the angle variables, using two distinctly different approaches. The first approach used wavelet function expansions directly and exploited their ability to capture sharp local variations. The second used discrete ordinates with a spatially varying quadrature set that adapts to the local solution. The first approach is very different from that in today’s transport codes, while the second could conceivably be implemented in such codes. Both approaches succeed in reducing angular discretization error to any desired level. The work described and results presented in this report add significantly to the understanding of angular discretization in transport problems and demonstrate that it is possible to solve this important long-standing problem in deterministic transport. Our results show that our adaptive discrete-ordinates (ADO) approach successfully: 1) Reduces angular discretization error to user-selected “tolerance” levels in a variety of difficult test problems; 2) Achieves a given error with significantly fewer unknowns than non-adaptive discrete ordinates methods; 3) Can be implemented within standard discrete-ordinates solution techniques, and thus could generate a significant impact on the field in a relatively short time. Our results show that our adaptive wavelet approach: 1) Successfully reduces the angular discretization error to arbitrarily small levels in a variety of difficult test problems, even when using the

  8. Application of adaptive cluster sampling to low-density populations of freshwater mussels

    USGS Publications Warehouse

    Smith, D.R.; Villella, R.F.; Lemarie, D.P.

    2003-01-01

    Freshwater mussels appear to be promising candidates for adaptive cluster sampling because they are benthic macroinvertebrates that cluster spatially and are frequently found at low densities. We applied adaptive cluster sampling to estimate density of freshwater mussels at 24 sites along the Cacapon River, WV, where a preliminary timed search indicated that mussels were present at low density. Adaptive cluster sampling increased yield of individual mussels and detection of uncommon species; however, it did not improve precision of density estimates. Because finding uncommon species, collecting individuals of those species, and estimating their densities are important conservation activities, additional research is warranted on application of adaptive cluster sampling to freshwater mussels. However, at this time we do not recommend routine application of adaptive cluster sampling to freshwater mussel populations. The ultimate, and currently unanswered, question is how to tell when adaptive cluster sampling should be used, i.e., when is a population sufficiently rare and clustered for adaptive cluster sampling to be efficient and practical? A cost-effective procedure needs to be developed to identify biological populations for which adaptive cluster sampling is appropriate.
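
    For readers unfamiliar with the design, a sketch of the adaptive cluster sampling expansion rule on a gridded population; the condition, the 4-cell neighborhood, and the grid representation are illustrative, and the Horvitz-Thompson-type estimation step is omitted:

      import numpy as np
      from collections import deque

      def adaptive_cluster_sample(grid, n_init, cond=lambda y: y > 0, seed=0):
          """Start from a simple random sample of cells; whenever a surveyed
          cell meets the condition (e.g., any mussels found), survey its four
          neighbors too, expanding until the network's edge cells fail it."""
          rng = np.random.default_rng(seed)
          h, w = grid.shape
          start = rng.choice(h * w, n_init, replace=False)
          queue = deque((int(i) // w, int(i) % w) for i in start)
          sampled = set()
          while queue:
              i, j = queue.popleft()
              if (i, j) in sampled:
                  continue
              sampled.add((i, j))
              if cond(grid[i, j]):                     # expand the network
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      if 0 <= i + di < h and 0 <= j + dj < w:
                          queue.append((i + di, j + dj))
          return sampled                               # cells actually surveyed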

  9. Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids

    SciTech Connect

    Stoyanov, Miroslav K

    2013-09-01

    We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grid points and functions that are induced from a one-dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy-selective refinement techniques. We consider the more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.

  10. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    SciTech Connect

    Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
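
    A minimal differential-evolution MCMC sketch showing the population-based proposal that DREAM builds on; DREAM's randomized subspace sampling, crossover adaptation, and outlier-chain handling are omitted:

      import numpy as np

      def de_mc(log_post, x0, n_gen=5000, eps=1e-6, seed=0):
          """x0: (n_chains, dim) initial population of chain states."""
          rng = np.random.default_rng(seed)
          x = x0.copy()
          n, d = x.shape
          gamma = 2.38 / np.sqrt(2 * d)                # standard DE-MC step size
          lp = np.array([log_post(xi) for xi in x])
          for _ in range(n_gen):
              for i in range(n):
                  others = [j for j in range(n) if j != i]
                  r1, r2 = rng.choice(others, 2, replace=False)
                  # Propose along the difference of two other chains' states.
                  prop = x[i] + gamma * (x[r1] - x[r2]) + eps * rng.normal(size=d)
                  lp_prop = log_post(prop)
                  if np.log(rng.random()) < lp_prop - lp[i]:   # Metropolis step
                      x[i], lp[i] = prop, lp_prop
          return x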

  11. Camera calibration approach based on adaptive active target

    NASA Astrophysics Data System (ADS)

    Zhang, Yalin; Zhou, Fuqiang; Deng, Peng

    2011-12-01

    Aiming at calibrating a camera on site, where lighting conditions are hard to control and the quality of target images declines as the angle between the camera and the target changes, an adaptive active target is designed and a camera calibration approach based on the target is proposed. The adaptive active target, in which LEDs are embedded, is flat and provides active feature points. The brightness of each feature point can therefore be modified by adjusting the current, judging from thresholds on image feature criteria. In order to extract image features accurately, the concept of subpixel-precise thresholding is also proposed. It converts the discrete representation of the digital image to a continuous function by bilinear interpolation, and the sub-pixel contours are acquired from the intersection of the continuous function with an appropriately selected threshold. According to an analysis of the relationship between the image features and the brightness of the target, the area ratio of convex hulls and the grey value variance are adopted as the criteria. Results of experiments revealed that the adaptive active target accommodates well to changing illumination in the environment, and that the camera calibration approach based on the adaptive active target achieves a high level of accuracy and fits well for image targeting in various industrial sites.

  12. An Approach to V&V of Embedded Adaptive Systems

    NASA Technical Reports Server (NTRS)

    Liu, Yan; Yerramalla, Sampath; Fuller, Edgar; Cukic, Bojan; Gururajan, Srikaruth

    2004-01-01

    Rigorous Verification and Validation (V&V) techniques are essential for high assurance systems. Lately, the performance of some of these systems has been enhanced by embedded adaptive components in order to cope with environmental changes. Although the ability to adapt is appealing, it actually poses a problem in terms of V&V. Since uncertainties induced by environmental changes have a significant impact on system behavior, the applicability of conventional V&V techniques is limited. In safety-critical applications such as flight control systems, the mechanisms of change must be observed, diagnosed, accommodated and well understood prior to deployment. In this paper, we propose a non-conventional V&V approach suitable for online adaptive systems. We apply our approach to an intelligent flight control system that employs a particular type of Neural Networks (NN) as the adaptive learning paradigm. The presented methodology consists of a novelty detection technique and online stability monitoring tools. The novelty detection technique is based on Support Vector Data Description, which detects novel (abnormal) data patterns. The online stability monitoring tools, based on Lyapunov's Stability Theory, detect unstable learning behavior in neural networks. Case studies based on a high fidelity simulator of NASA's Intelligent Flight Control System demonstrate a successful application of the presented V&V methodology.

  13. Free-space fluorescence tomography with adaptive sampling based on anatomical information from microCT

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofeng; Badea, Cristian T.; Hood, Greg; Wetzel, Arthur W.; Stiles, Joel R.; Johnson, G. Allan

    2010-02-01

    Image reconstruction is one of the main challenges for fluorescence tomography. For in vivo experiments on small animals, in particular, the inhomogeneous optical properties and irregular surface of the animal make free-space image reconstruction challenging because of the difficulties in accurately modeling the forward problem and the finite dynamic range of the photodetector. These two factors are fundamentally limited by the currently available forward models and photonic technologies. Nonetheless, both limitations can be significantly eased using a signal processing approach. We have recently constructed a free-space panoramic fluorescence diffuse optical tomography system to take advantage of co-registered microCT data acquired from the same animal. In this article, we present a data processing strategy that adaptively selects the optical sampling points in the raw 2-D fluorescent CCD images. Specifically, the general sampling area and sampling density are initially specified to create a set of potential sampling points sufficient to cover the region of interest. Based on 3-D anatomical information from the microCT and the fluorescent CCD images, data points are excluded from the set when they are located in an area where either the forward model is known to be problematic (e.g., large wrinkles on the skin) or the signal is unreliable (e.g., saturated or low signal-to-noise ratio). Parallel Monte Carlo software was implemented to compute the sensitivity function for image reconstruction. Animal experiments were conducted on a mouse cadaver with an artificial fluorescent inclusion. Compared to our previous results using a finite element method, the newly developed parallel Monte Carlo software and the adaptive sampling strategy produced favorable reconstruction results.
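
    The exclusion step reduces to boolean masking once the criteria are expressed per candidate point; in this sketch the saturation level, the noise floor, and the geometry mask derived from the co-registered microCT are all hypothetical:

      import numpy as np

      def select_points(ccd, candidates, bad_geometry,
                        sat_level=4000.0, noise_floor=20.0, min_snr=5.0):
          """candidates: (M, 2) integer pixel coordinates on the CCD image;
          bad_geometry: boolean image flagging regions (e.g., large skin
          wrinkles) where the forward model is known to be problematic."""
          vals = ccd[candidates[:, 0], candidates[:, 1]]
          keep = ((vals < sat_level)                       # not saturated
                  & (vals > min_snr * noise_floor)         # adequate SNR
                  & ~bad_geometry[candidates[:, 0], candidates[:, 1]])
          return candidates[keep]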

  14. The adaptive, cut-cell Cartesian approach (warts and all)

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.

    1995-01-01

    Solution-adaptive methods based on cutting bodies out of Cartesian grids are gaining popularity now that the ways of circumventing the accuracy problems associated with small cut cells have been developed. Researchers are applying Cartesian-based schemes to a broad class of problems now, and, although there is still development work to be done, it is becoming clearer which problems are best suited to the approach (and which are not). The purpose of this paper is to give a candid assessment, based on applying Cartesian schemes to a variety of problems, of the strengths and weaknesses of the approach as it is currently implemented.

  15. Using archaeogenomic and computational approaches to unravel the history of local adaptation in crops

    PubMed Central

    Allaby, Robin G.; Gutaker, Rafal; Clarke, Andrew C.; Pearson, Neil; Ware, Roselyn; Palmer, Sarah A.; Kitchen, James L.; Smith, Oliver

    2015-01-01

    Our understanding of the evolution of domestication has changed radically in the past 10 years, from a relatively simplistic rapid origin scenario to a protracted complex process in which plants adapted to the human environment. The adaptation of plants continued as the human environment changed with the expansion of agriculture from its centres of origin. Using archaeogenomics and computational models, we can observe genome evolution directly and understand how plants adapted to the human environment and the regional conditions to which agriculture expanded. We have applied various archaeogenomics approaches as exemplars to study local adaptation of barley to drought resistance at Qasr Ibrim, Egypt. We show the utility of DNA capture, ancient RNA, methylation patterns and DNA from charred remains of archaeobotanical samples from low latitudes, where preservation conditions restrict ancient DNA research to within a Holocene timescale. The genomic level of analysis that is now possible, and the complexity of the evolutionary process of local adaptation, mean that plant studies are set to move to the genome level and to account for the interaction of genes under selection in systems-level approaches. In this way we can understand how rapidly plants adapted during the expansion of agriculture across many latitudes. PMID:25487329

  17. Resolution-adapted recombination of structural features significantly improves sampling in restraint-guided structure calculation

    PubMed Central

    Lange, Oliver F; Baker, David

    2012-01-01

    Recent work has shown that NMR structures can be determined by integrating sparse NMR data with structure prediction methods such as Rosetta. The experimental data serve to guide the search for the lowest energy state towards the deep minimum at the native state which is frequently missed in Rosetta de novo structure calculations. However, as the protein size increases, sampling again becomes limiting; for example, the standard Rosetta protocol involving Monte Carlo fragment insertion starting from an extended chain fails to converge for proteins over 150 amino acids even with guidance from chemical shifts (CS-Rosetta) and other NMR data. The primary limitation of this protocol—that every folding trajectory is completely independent of every other—was recently overcome with the development of a new approach involving resolution-adapted structural recombination (RASREC). Here we describe the RASREC approach in detail and compare it to standard CS-Rosetta. We show that the improved sampling of RASREC is essential in obtaining accurate structures over a benchmark set of 11 proteins in the 15-25 kDa size range using chemical shifts, backbone RDCs and HN-HN NOE data; in a number of cases the improved sampling methodology makes a larger contribution than incorporation of additional experimental data. Experimental data are invaluable for guiding sampling to the vicinity of the global energy minimum, but for larger proteins, the standard Rosetta fold-from-extended-chain protocol does not converge on the native minimum even with experimental data and the more powerful RASREC approach is necessary to converge to accurate solutions. PMID:22423358

  18. Efficient estimation of abundance for patchily distributed populations via two-phase, adaptive sampling.

    USGS Publications Warehouse

    Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.

    2008-01-01

    Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or another approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of abundance for patchily distributed populations.
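
    As a toy illustration of the two-phase logic (not the authors' full Bayesian joint likelihood), the sketch below simulates detection counts at all sites, "intensively" estimates abundance only where a detection threshold is met, and uses a crude detection-abundance ratio to predict the remaining sites.

```python
# Toy two-phase sketch: cheap detection counts everywhere, intensive abundance
# estimates only at sites passing a detection threshold, then a simple fitted
# detection-abundance relation predicts abundance at the remaining sites.
import numpy as np

rng = np.random.default_rng(1)
n_sites, visits = 60, 5
true_N = rng.negative_binomial(2, 0.1, size=n_sites)          # patchy abundances
p_site = 1 - (1 - 0.3) ** true_N                              # detection prob. rises with N
detections = rng.binomial(visits, p_site)                     # phase-1 data at all sites

intensive = detections >= 2                                   # phase-2 (CMR) sites
est_N = true_N[intensive] + rng.normal(0, 1, intensive.sum()) # stand-in CMR estimates

ratio = est_N.sum() / max(detections[intensive].sum(), 1)     # detection-abundance relation
total = est_N.sum() + ratio * detections[~intensive].sum()    # predict unsampled sites
print(f"estimated total: {total:.0f}  true total: {true_N.sum()}")
```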

  19. SAR imaging via iterative adaptive approach and sparse Bayesian learning

    NASA Astrophysics Data System (ADS)

    Xue, Ming; Santiago, Enrique; Sedehi, Matteo; Tan, Xing; Li, Jian

    2009-05-01

    We consider sidelobe reduction and resolution enhancement in synthetic aperture radar (SAR) imaging via an iterative adaptive approach (IAA) and a sparse Bayesian learning (SBL) method. The nonparametric weighted least squares based IAA algorithm is a robust and user parameter-free adaptive approach originally proposed for array processing. We show that it can be used to form enhanced SAR images as well. SBL has been used as a sparse signal recovery algorithm for compressed sensing. It has been shown in the literature that SBL is easy to use and can recover sparse signals more accurately than the ℓ1-based optimization approaches, which require delicate choice of the user parameter. We consider using a modified expectation maximization (EM) based SBL algorithm, referred to as SBL-1, which is based on a three-stage hierarchical Bayesian model. SBL-1 is not only more accurate than benchmark SBL algorithms, but also converges faster. SBL-1 is used to further enhance the resolution of the SAR images formed by IAA. Both IAA and SBL-1 are shown to be effective, requiring only a limited number of iterations, and have no need for polar-to-Cartesian interpolation of the SAR collected data. This paper characterizes the achievable performance of these two approaches by processing the complex backscatter data from both a sparse case study and a backhoe vehicle in free space with different aperture sizes.
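
    The single-snapshot IAA iteration is compact enough to sketch directly; SAR imaging applies the same fixed-point update over a 2-D frequency grid. The sizes, test frequencies, and iteration count below are arbitrary choices, not the paper's settings.

```python
# Sketch of the single-snapshot iterative adaptive approach (IAA) for 1-D
# spectral estimation: initialize with the periodogram, then iterate
# R = A diag(p) A^H and s_k = a_k^H R^{-1} y / (a_k^H R^{-1} a_k).
import numpy as np

rng = np.random.default_rng(0)
N, K = 32, 128                                   # samples, frequency grid points
n = np.arange(N)
freqs = np.arange(K) / K
A = np.exp(2j * np.pi * np.outer(n, freqs))      # N x K steering matrix

f1, f2 = 26 / K, 30 / K                          # two closely spaced test tones
y = (np.exp(2j * np.pi * f1 * n) +
     0.5 * np.exp(2j * np.pi * f2 * n) +
     0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

p = np.abs(A.conj().T @ y) ** 2 / N ** 2         # periodogram initialization
for _ in range(15):                              # IAA fixed-point iterations
    R = (A * p) @ A.conj().T                     # covariance model A diag(p) A^H
    Ri_y = np.linalg.solve(R, y)
    Ri_A = np.linalg.solve(R, A)
    s = (A.conj().T @ Ri_y) / np.einsum("ij,ij->j", A.conj(), Ri_A)
    p = np.abs(s) ** 2

print(np.sort(freqs[np.argsort(p)[-2:]]))        # peaks should land near f1 and f2
```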

  20. Adaptive optics for deeper imaging of biological samples.

    PubMed

    Girkin, John M; Poland, Simon; Wright, Amanda J

    2009-02-01

    Optical microscopy has been a cornerstone of life science investigations since its first practical application around 400 years ago, with the goal being subcellular resolution, three-dimensional images, at depth, in living samples. Nonlinear microscopy brought this dream a step closer, but as one images more deeply, the material through which one images can greatly distort the view. By using optical devices, originally developed for astronomy, whose optical properties can be changed in real time, active compensation for sample-induced aberrations is possible. Submicron resolution images are now routinely recorded from depths over 1 mm into tissue. Such active optical elements can also be used to keep conventional microscopes, both confocal and widefield, in optimal alignment. PMID:19272766

  1. Variable Neural Adaptive Robust Control: A Switched System Approach

    SciTech Connect

    Lian, Jianming; Hu, Jianghai; Zak, Stanislaw H.

    2015-05-01

    Variable neural adaptive robust control strategies are proposed for the output tracking control of a class of multi-input multi-output uncertain systems. The controllers incorporate a variable-structure radial basis function (RBF) network as the self-organizing approximator for unknown system dynamics. The variable-structure RBF network solves the problem of structure determination associated with fixed-structure RBF networks. It can determine the network structure on-line dynamically by adding or removing radial basis functions according to the tracking performance. The structure variation is taken into account in the stability analysis of the closed-loop system using a switched system approach with the aid of the piecewise quadratic Lyapunov function. The performance of the proposed variable neural adaptive robust controllers is illustrated with simulations.

  2. Adaptive virulence evolution: the good old fitness-based approach.

    PubMed

    Alizon, Samuel; Michalakis, Yannis

    2015-05-01

    Infectious diseases could be expected to evolve towards complete avirulence to their hosts if given enough time. However, this is not the case. Often, virulence is maintained because it is linked to adaptive advantages to the parasite, a situation that is often associated with the hypothesis known as the transmission-virulence trade-off hypothesis. Here, we argue that this hypothesis has three limitations, which are related to how virulence is defined, the possibility of multiple trade-offs, and the difficulty of testing the hypothesis empirically. By adopting a fitness-based approach, where the relation between virulence and the fitness of the parasite throughout its life cycle is directly assessed, it is possible to address these limitations and to determine directly whether virulence is adaptive. PMID:25837917

  3. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sampling unit sizes shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  4. Sample Size Reassessment and Hypothesis Testing in Adaptive Survival Trials.

    PubMed

    Magirr, Dominic; Jaki, Thomas; Koenig, Franz; Posch, Martin

    2016-01-01

    Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies. PMID:26863139
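
    The error-rate control these designs rely on typically comes from a weighted inverse-normal combination of stage-wise statistics with weights fixed in advance. The sketch below shows that machinery in its simplest form; the z-values are placeholders, and this is not the authors' specific conservative test.

```python
# Sketch of the weighted inverse-normal combination test that underlies many
# adaptive survival designs: stage-wise z-statistics from independent
# increments are combined with pre-specified weights, so the type I error is
# preserved even after a data-driven sample size increase.
from scipy.stats import norm

w1, w2 = 0.6 ** 0.5, 0.4 ** 0.5        # pre-specified stage weights, w1^2 + w2^2 = 1
z1, z2 = 1.45, 1.90                    # stage-wise log-rank z-statistics (placeholders)

z_comb = w1 * z1 + w2 * z2             # combined statistic ~ N(0,1) under H0
alpha = 0.025
print(z_comb > norm.ppf(1 - alpha))    # True means reject H0
```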

  6. Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Thompson, David R.; Hsiang, Kian

    2013-01-01

    This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently the objective function, from each potential sensing. These candidate sensings were then cross-checked against operation costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.
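
    A generic sketch of the plan-construction idea: rank candidate sensings by approximate information gain per unit cost and select greedily under a budget. The candidate names, gains, and costs are hypothetical.

```python
# Generic greedy selection of sensings by (approximate) uncertainty reduction
# per unit operational cost, subject to a budget. Numbers are hypothetical.
candidates = [                         # (name, expected uncertainty reduction, cost)
    ("orbit_pass_A", 4.0, 2.0),
    ("orbit_pass_B", 3.0, 1.0),
    ("spot_target_C", 1.5, 0.5),
    ("spot_target_D", 0.8, 0.4),
]
budget, plan = 2.5, []
for name, gain, cost in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
    if cost <= budget:                 # keep a sensing only if it fits the budget
        plan.append(name)
        budget -= cost
print(plan)                            # e.g. ['orbit_pass_B', 'spot_target_C', ...]
```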

  7. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.
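
    The perturbational scheme can be illustrated with a generic discrete-time extremum-seeking loop: add a small sinusoid to the camber command, demodulate the measured performance index against the same sinusoid, and climb the resulting gradient estimate. The quadratic performance model below is a hypothetical stand-in for the integrated airframe/engine response, not the flight algorithm itself.

```python
# Generic periodic-perturbation (extremum-seeking) loop: the index measured
# under a sinusoidal dither is demodulated against the dither to estimate the
# local gradient, which the nominal command then climbs.
import math

def performance(u):                        # toy performance index, peak at u = 2.0
    return -(u - 2.0) ** 2

a, omega, k, dt = 0.1, 5.0, 0.5, 0.01      # dither amplitude/frequency, gain, step
u0 = 1.0                                   # initial camber command
for step in range(3000):
    t = step * dt
    dither = math.sin(omega * t)
    J = performance(u0 + a * dither)       # measure index under perturbation
    u0 += dt * k * (2.0 / a) * J * dither  # demodulate and climb the gradient
print(round(u0, 2))                        # settles near the optimum, 2.0
```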

  8. A ``Limited First Sample'' Approach to Mars Sample Return — Lessons from the Apollo Program

    NASA Astrophysics Data System (ADS)

    Eppler, D. B.; Draper, D.; Gruener, J.

    2012-06-01

    Complex, multi-opportunity Mars sample return approaches have failed to be selected as a new start twice since 1985. We advocate adopting a simpler strategy of "grab-and-go" for the initial sample return, similar to the approach taken on Apollo 11.

  9. Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology

    NASA Technical Reports Server (NTRS)

    Mandic, Milan; Acikmese, Behcet; Blackmore, Lars

    2011-01-01

    The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then to ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused to address comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal.

  10. Block-adaptive quantum mechanics: an adaptive divide-and-conquer approach to interactive quantum chemistry.

    PubMed

    Bosson, Maël; Grudinin, Sergei; Redon, Stephane

    2013-03-01

    We present a novel Block-Adaptive Quantum Mechanics (BAQM) approach to interactive quantum chemistry. Although quantum chemistry models are known to be computationally demanding, we achieve interactive rates by focusing computational resources on the most active parts of the system. BAQM is based on a divide-and-conquer technique and constrains some nucleus positions and some electronic degrees of freedom on the fly to simplify the simulation. As a result, each time step may be performed significantly faster, which in turn may accelerate attraction to the neighboring local minima. By applying our approach to the nonself-consistent Atom Superposition and Electron Delocalization Molecular Orbital theory, we demonstrate interactive rates and efficient virtual prototyping for systems containing more than a thousand atoms on a standard desktop computer. PMID:23108532

  11. The Formative Method for Adapting Psychotherapy (FMAP): A community-based developmental approach to culturally adapting therapy

    PubMed Central

    Hwang, Wei-Chin

    2010-01-01

    How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed. The majority of these frameworks take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves five phases that target developing, testing, and reformulating therapy modifications. These phases include: (a) generating knowledge and collaborating with stakeholders, (b) integrating generated information with theory and empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising the culturally adapted intervention, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but it can also be readily applied to modify therapy for other ethnic groups. PMID:20625458

  12. A mixed signal ECG processing platform with an adaptive sampling ADC for portable monitoring applications.

    PubMed

    Kim, Hyejung; Van Hoof, Chris; Yazicioglu, Refet Firat

    2011-01-01

    This paper describes a mixed-signal ECG processing platform with a 12-bit ADC architecture that can adapt its sampling rate according to the input signal's rate of change. This enables the sampling of ECG signals with a significantly reduced data rate without loss of information. The presented adaptive sampling scheme reduces the ADC power consumption, enables the processing of ECG signals with lower power consumption, and reduces the power consumption of the radio while streaming the ECG signals. The test results show that running a CWT-based R peak detection algorithm on the adaptively sampled ECG signals consumes only 45.6 μW and leads to 36% less overall system power consumption. PMID:22254775
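
    A software model of the adaptive sampling idea, assuming a synthetic waveform and threshold: samples are kept only where the local rate of change is large, plus a sparse fallback rate, which is how such an ADC trims the data rate on flat ECG segments while preserving the fast QRS edges.

```python
# Software sketch of activity-dependent sampling: emit a sample only when the
# signal's local slope exceeds a threshold, with a sparse background rate as
# a fallback. The waveform and threshold are synthetic placeholders.
import numpy as np

fs = 500.0                                        # nominal sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
ecg = np.exp(-((t % 1.0 - 0.3) / 0.02) ** 2)      # crude periodic "QRS" spikes

slope = np.abs(np.gradient(ecg, 1.0 / fs))
keep = slope > 5.0                                # adaptive criterion
keep[::25] = True                                 # sparse fallback sampling

print(f"kept {keep.sum()} of {keep.size} samples "
      f"({100 * keep.mean():.0f}% of the fixed-rate data)")
```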

  13. Pi sampling: a methodical and flexible approach to initial macromolecular crystallization screening

    SciTech Connect

    Gorrec, Fabrice; Palmer, Colin M.; Lebon, Guillaume; Warne, Tony

    2011-05-01

    Pi sampling, derived from the incomplete factorial approach, is an effort to maximize the diversity of macromolecular crystallization conditions and to facilitate the preparation of 96-condition initial screens. The Pi sampling method is derived from the incomplete factorial approach to macromolecular crystallization screen design. The resulting ‘Pi screens’ have a modular distribution of a given set of up to 36 stock solutions. Maximally diverse conditions can be produced by taking into account the properties of the chemicals used in the formulation and the concentrations of the corresponding solutions. The Pi sampling method has been implemented in a web-based application that generates screen formulations and recipes. It is particularly adapted to screens consisting of 96 different conditions. The flexibility and efficiency of Pi sampling is demonstrated by the crystallization of soluble proteins and of an integral membrane-protein sample.

  14. A fast approach for accurate content-adaptive mesh generation.

    PubMed

    Yang, Yongyi; Wernick, Miles N; Brankov, Jovan G

    2003-01-01

    Mesh modeling is an important problem with many applications in image processing. A key issue in mesh modeling is how to generate a mesh structure that well represents an image by adapting to its content. We propose a new approach to mesh generation, which is based on a theoretical result derived on the error bound of a mesh representation. In the proposed method, the classical Floyd-Steinberg error-diffusion algorithm is employed to place mesh nodes in the image domain so that their spatial density varies according to the local image content. Delaunay triangulation is next applied to connect the mesh nodes. The result of this approach is that fine mesh elements are placed automatically in regions of the image containing high-frequency features while coarse mesh elements are used to represent smooth areas. The proposed algorithm is noniterative, fast, and easy to implement. Numerical results demonstrate that, at very low computational cost, the proposed approach can produce mesh representations that are more accurate than those produced by several existing methods. Moreover, it is demonstrated that the proposed algorithm performs well with images of various kinds, even in the presence of noise. PMID:18237961
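
    The pipeline is concrete enough to sketch end to end: derive a node-density map from local image content (gradient magnitude here), place nodes by Floyd-Steinberg error diffusion of that density, and connect them with a Delaunay triangulation. The test image and density scaling are arbitrary choices.

```python
# Compact sketch of the described pipeline: content-derived density map,
# Floyd-Steinberg error diffusion to place mesh nodes, Delaunay to connect.
import numpy as np
from scipy.spatial import Delaunay

img = np.fromfunction(lambda y, x: np.sin(x / 8.0) * np.cos(y / 11.0), (64, 64))
gy, gx = np.gradient(img)
density = np.hypot(gx, gy)
density = 0.02 + 0.2 * density / density.max()    # node probability per pixel

d = density.copy()
nodes = np.zeros_like(d, dtype=bool)
h, w = d.shape
for y in range(h):                                # Floyd-Steinberg error diffusion
    for x in range(w):
        out = d[y, x] >= 0.5
        nodes[y, x] = out
        err = d[y, x] - float(out)
        if x + 1 < w:               d[y, x + 1] += err * 7 / 16
        if y + 1 < h and x > 0:     d[y + 1, x - 1] += err * 3 / 16
        if y + 1 < h:               d[y + 1, x] += err * 5 / 16
        if y + 1 < h and x + 1 < w: d[y + 1, x + 1] += err * 1 / 16

pts = np.column_stack(np.nonzero(nodes))[:, ::-1] # (x, y) mesh node positions
tri = Delaunay(pts)
print(pts.shape[0], "nodes,", tri.simplices.shape[0], "triangles")
```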

  15. Estimating the abundance of clustered animal population by using adaptive cluster sampling and negative binomial distribution

    NASA Astrophysics Data System (ADS)

    Bo, Yizhou; Shifa, Naima

    2013-09-01

    An estimator for finding the abundance of a rare, clustered and mobile population has been introduced. This model is based on adaptive cluster sampling (ACS) to identify the location of the population and negative binomial distribution to estimate the total in each site. To identify the location of the population we consider both sampling with replacement (WR) and sampling without replacement (WOR). Some mathematical properties of the model are also developed.

  16. Adaptive Neuro-fuzzy approach in friction identification

    NASA Astrophysics Data System (ADS)

    Zaiyad Muda @ Ismail, Muhammad

    2016-05-01

    Friction is known to affect the performance of motion control systems, especially in terms of accuracy. Therefore, a number of techniques or methods have been explored and implemented to alleviate the effects of friction. In this project, an Artificial Intelligence (AI) approach is used to model the friction, and the model is then used to compensate for it. The Adaptive Neuro-Fuzzy Inference System (ANFIS) is chosen among several other AI methods because of its reliability and its capability to handle complex computation. ANFIS is a hybrid AI paradigm that combines the best features of neural networks and fuzzy logic. This AI method is effective for nonlinear system identification and compensation and is thus used in this project.

  17. Adapting to Uncertainty: Comparing Methodological Approaches to Climate Adaptation and Mitigation Policy

    NASA Astrophysics Data System (ADS)

    Huda, J.; Kauneckis, D. L.

    2013-12-01

    Climate change adaptation represents a number of unique policy-making challenges. Foremost among these is dealing with the range of future climate impacts to a wide scope of inter-related natural systems, their interaction with social and economic systems, and uncertainty resulting from the variety of downscaled climate model scenarios and climate science projections. These cascades of uncertainty have led to a number of new approaches as well as a reexamination of traditional methods for evaluating risk and uncertainty in policy-making. Policy makers are required to make decisions and formulate policy irrespective of the level of uncertainty involved and while a debate continues regarding the level of scientific certainty required in order to make a decision, incremental change in the climate policy continues at multiple governance levels. This project conducts a comparative analysis of the range of methodological approaches that are evolving to address uncertainty in climate change policy. It defines 'methodologies' to include a variety of quantitative and qualitative approaches involving both top-down and bottom-up policy processes that attempt to enable policymakers to synthesize climate information into the policy process. The analysis examines methodological approaches to decision-making in climate policy based on criteria such as sources of policy choice information, sectors to which the methodology has been applied, sources from which climate projections were derived, quantitative and qualitative methods used to deal with uncertainty, and the benefits and limitations of each. A typology is developed to better categorize the variety of approaches and methods, examine the scope of policy activities they are best suited for, and highlight areas for future research and development.

  18. DiffeRential Evolution Adaptive Metropolis with Sampling From Past States

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.; Laloy, E.; Ter Braak, C.

    2010-12-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. In a previous paper we presented the DiffeRential Evolution Adaptive Metropolis (DREAM) MCMC scheme, which automatically tunes the scale and orientation of the proposal distribution during evolution to the posterior target distribution. In the same paper, detailed balance and ergodicity of DREAM were proved, and various examples involving nonlinearity, high dimensionality, and multimodality showed that DREAM is generally superior to other adaptive MCMC sampling approaches. Standard DREAM requires at least N = d chains to be run in parallel, where d is the dimensionality of the posterior. Unfortunately, running many parallel chains is a potential source of inefficiency, as each individual chain must travel to the high-density region of the posterior. The lower the number of parallel chains required, the greater the practical applicability of DREAM for computationally demanding problems. This paper extends DREAM with a snooker updater and shows by simulation and real examples that DREAM can work for d up to 50-100 with far fewer parallel chains (e.g. N = 3) by generating jumps using differences of pairs of past states.
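
    The core differential-evolution jump that DREAM builds on can be sketched in a few lines: each chain proposes x* = x_i + γ(x_r1 − x_r2) + e using two other randomly chosen chains. The sketch below runs this bare DE-MC update on a toy 2-D Gaussian target; DREAM's crossover, outlier handling, and the snooker update discussed above are omitted.

```python
# Minimal differential-evolution MCMC sketch with the DREAM-style jump
# x* = x_i + gamma * (x_r1 - x_r2) + e, on a 2-D standard-normal target.
import numpy as np

rng = np.random.default_rng(0)
d, n_chains = 2, 3
gamma = 2.38 / np.sqrt(2 * d)                      # standard DE-MC jump scale

def log_post(x):                                   # toy target: standard normal
    return -0.5 * np.sum(x ** 2)

X = rng.normal(size=(n_chains, d))                 # current chain states
samples = []
for it in range(5000):
    for i in range(n_chains):
        r1, r2 = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
        prop = X[i] + gamma * (X[r1] - X[r2]) + rng.normal(0, 1e-6, d)
        if np.log(rng.random()) < log_post(prop) - log_post(X[i]):
            X[i] = prop                            # Metropolis accept
        samples.append(X[i].copy())
print(np.mean(samples, axis=0), np.std(samples, axis=0))   # ~[0 0] and ~[1 1]
```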

  19. Sample Size Estimation for Non-Inferiority Trials: Frequentist Approach versus Decision Theory Approach

    PubMed Central

    Bouman, A. C.; ten Cate-Hoek, A. J.; Ramaekers, B. L. T.; Joore, M. A.

    2015-01-01

    Background: Non-inferiority trials are performed when the main therapeutic effect of the new therapy is expected to be not unacceptably worse than that of the standard therapy, and the new therapy is expected to have advantages over the standard therapy in costs or other (health) consequences. These advantages, however, are not included in the classic frequentist approach of sample size calculation for non-inferiority trials. In contrast, the decision theory approach of sample size calculation does include these factors. The objective of this study is to compare the conceptual and practical aspects of the frequentist approach and decision theory approach of sample size calculation for non-inferiority trials, thereby demonstrating that the decision theory approach is more appropriate for sample size calculation of non-inferiority trials. Methods: The frequentist approach and decision theory approach of sample size calculation for non-inferiority trials are compared and applied to a case of a non-inferiority trial on individually tailored duration of elastic compression stocking therapy compared to two years of elastic compression stocking therapy for the prevention of post-thrombotic syndrome after deep vein thrombosis. Results: The two approaches differ substantially in conceptual background, analytical approach, and input requirements. The frequentist approach yielded a sample size of 788 patients, using a power of 80% and a one-sided significance level of 5%. The decision theory approach indicated that the optimal sample size was 500 patients, with a net value of €92 million. Conclusions: This study demonstrates and explains the differences between the classic frequentist approach and the decision theory approach of sample size calculation for non-inferiority trials. We argue that the decision theory approach of sample size estimation is most suitable for sample size calculation of non-inferiority trials. PMID:26076354
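
    For a continuous endpoint, the classic frequentist calculation referred to above reduces to n per arm = 2σ²(z_{1-α} + z_{1-β})² / (margin − true difference)². The inputs below are hypothetical illustrations, not the stocking-therapy trial's actual parameters.

```python
# Classic frequentist non-inferiority sample size for a continuous endpoint:
# n per arm = 2 * sigma^2 * (z_{1-alpha} + z_{1-beta})^2 / (margin - diff)^2.
from math import ceil
from scipy.stats import norm

alpha, power = 0.05, 0.80              # one-sided alpha, target power
sigma, margin, diff = 1.0, 0.25, 0.0   # SD, non-inferiority margin, true difference

z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
n_per_arm = ceil(2 * sigma ** 2 * (z_a + z_b) ** 2 / (margin - diff) ** 2)
print(n_per_arm)                       # 198 per arm under these assumptions
```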

  20. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2016-01-01

    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images. Compared with wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale, multi-direction, and translation invariance. As is known, a fuzzy set is characterized by its membership function (MF), and the commonly used Gaussian fuzzy membership degree can be introduced to establish adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem with a gradient descent based iterative algorithm. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via the NSCT method. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by employing the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation methods, such as standard deviation, Shannon entropy, root-mean-square error, mutual information and the edge-based similarity index.

  1. Sample preparation and biomass determination of SRF model mixture using cryogenic milling and the adapted balance method.

    PubMed

    Schnöller, Johannes; Aschenbrenner, Philipp; Hahn, Manuel; Fellner, Johann; Rechberger, Helmut

    2014-11-01

    The biogenic fraction of a simple solid recovered fuel (SRF) mixture (80 wt% printer paper/20 wt% high density polyethylene) is analyzed with the in-house developed adapted balance method (aBM). This fairly new approach is a combination of combustion elemental analysis (CHNS) and a data reconciliation algorithm based on successive linearisation for evaluation of the analysis results. This method shows great potential as an alternative way to determine the biomass content in SRF. However, the employed analytical technique (CHNS elemental analysis) restricts the probed sample mass to low amounts in the range of a few hundred milligrams. This requires sample comminution to small grain sizes (<200 μm) to generate representative SRF specimens. This is not easily accomplished for certain material mixtures (e.g. SRF with rubber content) by conventional means of sample size reduction. This paper presents a proof of principle investigation of the sample preparation and analysis of an SRF model mixture with the use of cryogenic impact milling (final sample comminution) and the adapted balance method (determination of biomass content). The sample preparation methodology derived in this way (cutting mills plus cryogenic impact milling) shows better performance in accuracy and precision for the determination of the biomass content than one based solely on cutting mills. The results for the determination of the biogenic fraction are within 1-5% of the data obtained by the reference methods, the selective dissolution method (SDM) and the (14)C-method ((14)C-M). PMID:25060675

  2. Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach

    ERIC Educational Resources Information Center

    Wang, Yuling

    2010-01-01

    Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.

  4. A sampling approach for protein backbone fragment conformations.

    PubMed

    Yu, J Y; Zhang, W

    2013-01-01

    In protein structure prediction, backbone fragment bias information can narrow down the conformational space of the whole polypeptide chain significantly. Unlike existing methods that use fragments as building blocks, this paper presents a probabilistic sampling approach for protein backbone torsion angles by modelling the angular correlation of (phi, psi) with a directional statistics distribution. Given a protein sequence and secondary structure information, the method samples backbone fragment conformations using a backtrack sampling algorithm for a hidden Markov model with multiple inputs and a single output. The proposed approach is applied to a fragment library, and some well-known structural motifs are sampled very well on the optimal path. Computational results show that the method can help to obtain native-like backbone fragment conformations. PMID:23777175

  5. Career Adapt-Abilities Scale in a French-Speaking Swiss Sample: Psychometric Properties and Relationships to Personality and Work Engagement

    ERIC Educational Resources Information Center

    Rossier, Jerome; Zecca, Gregory; Stauffer, Sarah D.; Maggiori, Christian; Dauwalder, Jean-Pierre

    2012-01-01

    The aim of this study was to analyze the psychometric properties of the Career Adapt-Abilities Scale (CAAS) in a French-speaking Swiss sample and its relationship with personality dimensions and work engagement. The heterogeneous sample of 391 participants (mean age = 39.59, SD = 12.30) completed the CAAS-International and a short version…

  6. Adaptation of the Athlete Burnout Questionnaire in a Spanish sample of athletes.

    PubMed

    Arce, Constantino; De Francisco, Cristina; Andrade, Elena; Seoane, Gloria; Raedeke, Thomas

    2012-11-01

    In this paper, we offer a general version of the Spanish adaptation of the Athlete Burnout Questionnaire (ABQ) designed to measure the syndrome of burnout in athletes of different sports. In previous works, the Spanish version of the ABQ was administered to different samples of soccer players. Its psychometric properties were appropriate and similar to the findings for the original ABQ. The purpose of this study was to examine the generalization of the Spanish adaptation to other sports. We started from this adaptation, but we included three alternative statements (one for each dimension of the questionnaire), and we replaced the word "soccer" with the word "sport". An 18-item version was administered to a sample of 487 athletes aged 13 to 29 years. Confirmatory factor analyses replicated the factor structure, but modification of two items was necessary in order to obtain a good overall fit of the model. The internal consistency and test-retest reliability of the questionnaire were satisfactory. PMID:23156955

  7. A Functional Approach To Uncover the Low-Temperature Adaptation Strategies of the Archaeon Methanosarcina barkeri

    PubMed Central

    McCay, Paul; Fuszard, Matthew; Botting, Catherine H.; Abram, Florence; O'Flaherty, Vincent

    2013-01-01

    Low-temperature anaerobic digestion (LTAD) technology is underpinned by a diverse microbial community. The methanogenic archaea represent a key functional group in these consortia, undertaking CO2 reduction as well as acetate and methylated C1 metabolism with subsequent biogas (40 to 60% CH4 and 30 to 50% CO2) formation. However, the cold adaptation strategies, which allow methanogens to function efficiently in LTAD, remain unclear. Here, a pure-culture proteomic approach was employed to study the functional characteristics of Methanosarcina barkeri (optimum growth temperature, 37°C), which has been detected in LTAD bioreactors. Two experimental approaches were undertaken. The first approach aimed to characterize a low-temperature shock response (LTSR) of M. barkeri DSMZ 800T grown at 37°C with a temperature drop to 15°C, while the second experimental approach aimed to examine the low-temperature adaptation strategies (LTAS) of the same strain when it was grown at 15°C. The latter experiment employed cell viability and growth measurements (optical density at 600 nm [OD600]), which directly compared M. barkeri cells grown at 15°C with those grown at 37°C. During the LTSR experiment, a total of 127 proteins were detected in 37°C and 15°C samples, with 20 proteins differentially expressed with respect to temperature, while in the LTAS experiment 39% of proteins identified were differentially expressed between phases of growth. Functional categories included methanogenesis, cellular information processing, and chaperones. By applying a polyphasic approach (proteomics and growth studies), insights into the low-temperature adaptation capacity of this mesophilically characterized methanogen were obtained which suggest that the metabolically diverse Methanosarcinaceae could be functionally relevant for LTAD systems. PMID:23645201

  8. Region and edge-adaptive sampling and boundary completion for segmentation

    SciTech Connect

    Dillard, Scott E; Prasad, Lakshman; Grazzini, Jacopo A

    2010-01-01

    Edge detection produces a set of points that are likely to lie on discontinuities between objects within an image. We consider faces of the Gabriel graph of these points, a sub-graph of the Delaunay triangulation. Features are extracted by merging these faces using size, shape and color cues. We measure regional properties of faces using a novel shape-dependent sampling method that overcomes the undesirable sampling bias of the Delaunay triangles. Instead, sampling is biased so as to smooth regional statistics within the detected object boundaries, and this smoothing adapts to local geometric features of the shape such as curvature, thickness and straightness.
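
    The Gabriel-graph construction is easy to sketch from a Delaunay triangulation: keep an edge (p, q) only if no other point falls inside the circle having pq as its diameter. Random points stand in for detected edge pixels, and the containment check is the naive version.

```python
# Gabriel graph from a Delaunay triangulation: an edge survives only if the
# circle with that edge as diameter contains no other point (naive O(E*N) check).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
pts = rng.random((80, 2))                           # stand-in edge-point set
tri = Delaunay(pts)

edges = set()
for simplex in tri.simplices:                       # collect Delaunay edges
    for a, b in ((0, 1), (1, 2), (0, 2)):
        edges.add(tuple(sorted((simplex[a], simplex[b]))))

gabriel = []
for p, q in edges:
    mid = (pts[p] + pts[q]) / 2.0
    r2 = np.sum((pts[p] - pts[q]) ** 2) / 4.0       # squared radius of pq-circle
    d2 = np.sum((pts - mid) ** 2, axis=1)
    d2[[p, q]] = np.inf                             # ignore the endpoints themselves
    if d2.min() >= r2:
        gabriel.append((p, q))
print(len(edges), "Delaunay edges ->", len(gabriel), "Gabriel edges")
```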

  9. An objective re-evaluation of adaptive sample size re-estimation: commentary on 'Twenty-five years of confirmatory adaptive designs'.

    PubMed

    Mehta, Cyrus; Liu, Lingyun

    2016-02-10

    Over the past 25 years, adaptive designs have gradually gained acceptance and are being used with increasing frequency in confirmatory clinical trials. Recent surveys of submissions to the regulatory agencies reveal that the most popular type of adaptation is unblinded sample size re-estimation. Concerns have nevertheless been raised that this type of adaptation is inefficient. We intend to show in our discussion that such concerns are greatly exaggerated in any practical setting and that the advantages of adaptive sample size re-estimation usually outweigh any minor loss of efficiency. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26757953

  10. A novel approach for SEMG signal classification with adaptive local binary patterns.

    PubMed

    Ertuğrul, Ömer Faruk; Kaya, Yılmaz; Tekin, Ramazan

    2016-07-01

    Feature extraction plays a major role in the pattern recognition process, and this paper presents a novel feature extraction approach, the adaptive local binary pattern (aLBP). aLBP is built on the local binary pattern (LBP), which is an image processing method, and the one-dimensional local binary pattern (1D-LBP). In LBP, each pixel is compared with its neighbors. Similarly, in 1D-LBP, each data point in the raw signal is judged against its neighbors. 1D-LBP extracts features based on local changes in the signal and therefore has high potential for medical applications: each action or abnormality recorded in SEMG signals has its own pattern, and via 1D-LBP these (hidden) patterns may be detected. However, the positions of the neighbors in 1D-LBP are constant, determined by the position of the data point in the raw signal, and both LBP and 1D-LBP are very sensitive to noise, so their capacity for detecting hidden patterns is limited. To overcome these drawbacks, aLBP was proposed. In aLBP, the positions of the neighbors and their values can be assigned adaptively via the down-sampling and smoothing coefficients, which greatly increases the potential to detect (hidden) patterns that may express an illness or an action. To validate the proposed feature extraction approach, two different datasets were employed. Accuracies achieved by the proposed approach were higher than results obtained with popular feature extraction approaches and than results reported in the literature. These accuracy results show that the proposed method can be employed to investigate SEMG signals. In summary, this work attempts to develop an adaptive feature extraction scheme that can be utilized for extracting features from local changes in different categories of time-varying signals. PMID:26718556
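
    A basic 1D-LBP extractor is a few lines: compare each sample with its p neighbors, pack the comparison bits into a code, and histogram the codes. The step and smooth parameters below gesture at aLBP's adaptive neighbor placement but are illustrative knobs, not the paper's exact rule.

```python
# Basic 1D-LBP feature extraction: each sample is compared with its p nearest
# neighbors, the bits form a code, and the code histogram is the feature vector.
import numpy as np

def lbp_1d(signal, p=4, step=1, smooth=1):
    x = np.convolve(signal, np.ones(smooth) / smooth, mode="same")  # optional smoothing
    half = p // 2
    codes = []
    for i in range(half * step, len(x) - half * step):
        neigh = [x[i + k * step] for k in range(-half, half + 1) if k != 0]
        bits = [int(v >= x[i]) for v in neigh]                      # compare to center
        codes.append(sum(b << j for j, b in enumerate(bits)))
    hist, _ = np.histogram(codes, bins=2 ** p, range=(0, 2 ** p))
    return hist / max(len(codes), 1)                                # normalized feature

semg = np.random.default_rng(3).normal(size=1000)                   # stand-in SEMG trace
print(lbp_1d(semg, p=4, step=2, smooth=3))
```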

  11. Mapping the genomic architecture of adaptive traits with interspecific introgressive origin: a coalescent-based approach.

    PubMed

    Hejase, Hussein A; Liu, Kevin J

    2016-01-01

    Recent studies of eukaryotes including human and Neandertal, mice, and butterflies have highlighted the major role that interspecific introgression has played in adaptive trait evolution. A common question arises in each case: what is the genomic architecture of the introgressed traits? One common approach that can be used to address this question is association mapping, which looks for genotypic markers that have significant statistical association with a trait. It is well understood that sample relatedness can be a confounding factor in association mapping studies if not properly accounted for. Introgression and other evolutionary processes (e.g., incomplete lineage sorting) typically introduce variation among local genealogies, which can also differ from global sample structure measured across all genomic loci. In contrast, state-of-the-art association mapping methods assume fixed sample relatedness across the genome, which can lead to spurious inference. We therefore propose a new association mapping method called Coal-Map, which uses coalescent-based models to capture local genealogical variation alongside global sample structure. Using simulated and empirical data reflecting a range of evolutionary scenarios, we compare the performance of Coal-Map against EIGENSTRAT, a leading association mapping method in terms of its popularity, power, and type I error control. Our empirical data makes use of hundreds of mouse genomes for which adaptive interspecific introgression has recently been described. We found that Coal-Map's performance is comparable to or better than EIGENSTRAT's in terms of statistical power and false positive rate. Coal-Map's performance advantage was greatest on model conditions that most closely resembled empirically observed scenarios of adaptive introgression. These conditions had: (1) causal SNPs contained in one or a few introgressed genomic loci and (2) varying rates of gene flow, from high rates to very low rates where incomplete lineage sorting predominates.

  12. Novel Approaches for Fungal Transcriptomics from Host Samples

    PubMed Central

    Amorim-Vaz, Sara; Sanglard, Dominique

    2016-01-01

    Candida albicans adaptation to the host requires a profound reprogramming of the fungal transcriptome as compared to in vitro laboratory conditions. A detailed knowledge of the C. albicans transcriptome during the infection process is necessary in order to understand which of the fungal genes are important for host adaptation. Such genes could be thought of as potential targets for antifungal therapy. The acquisition of the C. albicans transcriptome is, however, technically challenging due to the low proportion of fungal RNA in host tissues. Two emerging technologies were used recently to circumvent this problem. One consists of the detection of low abundance fungal RNA using capture and reporter gene probes which is followed by emission and quantification of resulting fluorescent signals (nanoString). The other is based first on the capture of fungal RNA by short biotinylated oligonucleotide baits covering the C. albicans ORFome permitting fungal RNA purification. Next, the enriched fungal RNA is amplified and subjected to RNA sequencing (RNA-seq). Here we detail these two transcriptome approaches and discuss their advantages and limitations and future perspectives in microbial transcriptomics from host material. PMID:26834721

  13. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.

  14. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequencies associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local 'greedy' scheme along with a Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting fault-related impulses.
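
    The narrow-band demodulation chain the method builds on can be sketched with a fixed Gaussian (Morlet-style) bandpass and an envelope spectrum; the paper's contribution is choosing the center frequency and bandwidth adaptively, which is not reproduced here. All signal parameters below are synthetic.

```python
# Narrow-band demodulation sketch: frequency-domain Gaussian (Morlet-style)
# bandpass around a resonance, Hilbert envelope, then the envelope spectrum,
# whose peak should fall near the bearing fault repetition frequency.
import numpy as np
from scipy.signal import hilbert

fs = 12000
t = np.arange(0, 1.0, 1 / fs)
fault_freq, resonance = 97.0, 3000.0              # hypothetical fault/carrier freqs
impulses = (np.sin(2 * np.pi * resonance * t) *
            (np.sin(2 * np.pi * fault_freq * t) > 0.99))   # crude repetitive impacts
signal = impulses + 0.5 * np.random.default_rng(4).normal(size=t.size)

f = np.fft.rfftfreq(t.size, 1 / fs)
fc, bw = 3000.0, 400.0                            # fixed here; adaptive in the paper
H = np.exp(-0.5 * ((f - fc) / bw) ** 2)           # Gaussian bandpass response
filtered = np.fft.irfft(np.fft.rfft(signal) * H, n=t.size)

envelope = np.abs(hilbert(filtered))
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
idx = 5 + np.argmax(env_spec[5:200])              # search 5-200 Hz, skip near-DC bins
print(f[idx])                                     # expected near fault_freq (97 Hz)
```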

  15. Adaptive autonomous sampling toward the study of microbial carbon and energy fluxes in a dynamic estuary

    NASA Astrophysics Data System (ADS)

    Herfort, L.; Seaton, C. M.; Wilkin, M.; Baptista, A. M.; Roman, B.; Preston, C. M.; Scholin, C. A.; Melançon, C.; Simon, H. M.

    2013-12-01

    An autonomous microbial sampling device was integrated with a long-term (endurance) environmental sensor system to investigate variation in microbial composition and activities related to complex estuarine dynamics. This integration was a part of ongoing efforts in the Center for Coastal Margin Observation and Prediction (CMOP) to study estuarine carbon and nitrogen cycling using an observation and prediction system (SATURN, http://www.stccmop.org/saturn) as foundational infrastructure. The two endurance stations fitted with physical and biogeochemical sensors that were used in this study are located in the SATURN observation network. The microbial sampler is the Environmental Sample Processor (ESP), a commercially available electromechanical/fluidic system designed for automated collection, preservation and in situ analyses of marine water samples. The primary goal of the integration was to demonstrate that the ESP, developed for sampling of pelagic oceanic environments, could be successfully deployed for autonomous sample acquisition in the highly dynamic and turbid Columbia River estuary. The ability of the ESP to collect material at both pre-determined times and automatically in response to local conditions was tested. Pre-designated samples were acquired at specific times to capture variability in the tidal cycle. Autonomous, adaptive sampling was triggered when conditions associated with specific water masses were detected in real-time by the SATURN station's sensors and then communicated to the ESP via the station computer to initiate sample collection. Triggering criteria were based on our understanding of estuary dynamics, as provided by the analysis of extensive archives of high-resolution, long-term SATURN observations and simulations. In this manner, we used the ESP to selectively sample various microbial consortia in the estuary to facilitate the study of ephemeral microbial-driven processes. For example, during the summer of 2013 the adaptive sampling

  16. Non-adaptive and adaptive hybrid approaches for enhancing water quality management

    NASA Astrophysics Data System (ADS)

    Kalwij, Ineke M.; Peralta, Richard C.

    2008-09-01

    Summary: Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, which combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal for optimization commencement can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates. The parameters are useful starting values because using trial-and-error to identify an ideal combination of control

  17. A Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Parrinello, Michele

    2015-03-01

    The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling, which is based on the addition of an external bias that helps overcome the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However, constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six-dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.

  18. Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
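
    For context, the variational functional introduced in this line of work has the following form in the published method (notation assumed here: s denotes the collective variables, F(s) the free energy, p(s) a chosen target distribution, and β the inverse temperature):

    ```latex
    \Omega[V] = \frac{1}{\beta}\,\log
      \frac{\int \mathrm{d}s\; e^{-\beta\,[F(s)+V(s)]}}
           {\int \mathrm{d}s\; e^{-\beta F(s)}}
      + \int \mathrm{d}s\; p(s)\,V(s)
    ```

    The functional is convex, and at its minimum the bias satisfies V(s) = -F(s) - (1/β) log p(s) up to an additive constant, so the free energy surface can be read off directly from the minimizing bias.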

  19. Adaptation and Validation of the Sexual Assertiveness Scale (SAS) in a Sample of Male Drug Users.

    PubMed

    Vallejo-Medina, Pablo; Sierra, Juan Carlos

    2015-01-01

    The aim of the present study was to adapt and validate the Sexual Assertiveness Scale (SAS) in a sample of male drug users. A sample of 326 male drug users and 322 non-clinical males was selected by cluster sampling and convenience sampling, respectively. Results showed that the scale had good psychometric properties and adequate internal consistency reliability (Initiation = .66, Refusal = .74 and STD-P = .79). An evaluation of the invariance showed strong factor equivalence between both samples. A high and moderate effect of Differential Item Functioning was only found in items 1 and 14 (ΔR² Nagelkerke = .076 and .037, respectively). We strongly recommend not using item 1 if the goal is to compare the scores of both groups, otherwise the comparison will be biased. Correlations obtained between the CSFQ-14 and the safe sex ratio and the SAS subscales were significant (CI = 95%) and indicated good concurrent validity. Scores of male drug users were similar to those of non-clinical males. Therefore, the adaptation of the SAS to drug users provides enough guarantees for reliable and valid use in both clinical practice and research, although care should be taken with item 1. PMID:25896498

  20. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach

    PubMed Central

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present-day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043

  1. Taking a Broad Approach to Public Health Program Adaptation: Adapting a Family-Based Diabetes Education Program

    ERIC Educational Resources Information Center

    Reinschmidt, Kerstin M.; Teufel-Shone, Nicolette I.; Bradford, Gail; Drummond, Rebecca L.; Torres, Emma; Redondo, Floribella; Elenes, Jo Jean; Sanders, Alicia; Gastelum, Sylvia; Moore-Monroy, Martha; Barajas, Salvador; Fernandez, Lourdes; Alvidrez, Rosy; de Zapien, Jill Guernsey; Staten, Lisa K.

    2010-01-01

    Diabetes health disparities among Hispanic populations have been countered with federally funded health promotion and disease prevention programs. Dissemination has focused on program adaptation to local cultural contexts for greater acceptability and sustainability. Taking a broader approach and drawing on our experience in Mexican American…

  2. New approaches to nanoparticle sample fabrication for atom probe tomography.

    PubMed

    Felfer, P; Li, T; Eder, K; Galinski, H; Magyar, A P; Bell, D C; Smith, G D W; Kruse, N; Ringer, S P; Cairney, J M

    2015-12-01

    Due to their unique properties, nano-sized materials such as nanoparticles and nanowires are receiving considerable attention. However, little data is available about their chemical makeup at the atomic scale, especially in three dimensions (3D). Atom probe tomography is able to answer many important questions about these materials if the challenge of producing a suitable sample can be overcome. In order to achieve this, the nanomaterial needs to be positioned within the end of a tip and fixed there so the sample possesses sufficient structural integrity for analysis. Here we provide a detailed description of various techniques that have been used to position nanoparticles on substrates for atom probe analysis. In some of the approaches, this is combined with deposition techniques to incorporate the particles into a solid matrix, and focused ion beam processing is then used to fabricate atom probe samples from this composite. Using these approaches, data have been obtained from 10-20 nm core-shell nanoparticles that were extracted directly from suspension (i.e. with no chemical modification) with a resolution of better than ± 1 nm. PMID:25980894

  3. Mars sample return, updated to a groundbreaking approach

    NASA Technical Reports Server (NTRS)

    Mattingly, R.; Matovsek, S.; Jordan, F.

    2002-01-01

    A Mars Sample Return (MSR) mission is a goal of the Mars Program. Recently, NASA and JPL have been studying the possibility of a Mars Sample Return some time in the next decade of Mars exploration. In 2001, JPL commissioned four industry teams to make a fresh examination of MSR architectures. Six papers on these studies were presented at last year's conference. As new fiscal realities of a cost-capped Mars Exploration Program unfolded, it was evident that these MSR concepts, which included mobility and subsurface sample acquisition, did not fit reasonably within a balanced program. Therefore, at the request of NASA and the science community, JPL asked the four industry teams plus JPL's Team X to explore ways to reduce the cost of a MSR. A NASA-created MSR Science Steering Group (SSG) established a reduced set of requirements for these new studies that built upon the previous year's work. As a result, a new 'Groundbreaking' approach to MSR was established that is well understood based on the studies and independent cost assessments by Aerospace Corporation and SAIC. The Groundbreaking approach appears to be what a contemporary, balanced Mars Exploration Program can afford, has turned out to be justifiable by the MSR Science Steering Group, and has been endorsed by the Mars science community at large. This paper gives a brief overview of the original 2001 study results and discusses the process leading to the new studies, the studies themselves, and the results.

  4. Enhancing Adaptive Filtering Approaches for Land Data Assimilation Systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent work has presented the initial application of adaptive filtering techniques to land surface data assimilation systems. Such techniques are motivated by our current lack of knowledge concerning the structure of large-scale error in either land surface modeling output or remotely-sensed estima...

  5. The Canadian approach to the settlement and adaptation of immigrants.

    PubMed

    1986-01-01

    Canada has been the host to over 400,000 refugees since World War II. The settlement and adaptation process is supported by the federal government and by the majority of provincial governments. Under the national and regional Employment and Immigration Commission (CEIC) settlement organizations, the major programs administered to effect the adaptation of newcomers are: 1) the Adjustment Assistance Program, 2) the Immigrant Settlement and Adaptation Program, 3) the Language/Skill Training Program, and 4) the Employment Services Program. Ontario, the recipient of more than 1/2 the newcomers that arrive in Canada each year, pursues active programs in the reception of newcomers through its Welcome House Program, which offers a wide range of reception services to the newcomers. The employment and unemployment experiences of refugees are very much influenced by the prevailing labor market conditions, the refugees' proficiency in the country's official languages, the amount of sympathy evoked by the media reports on the plight of refugees, the availability of people of the same ethnic origin already well settled in the country, and the adaptability of the refugees themselves. The vast majority of refugee groups that came to Canada during the last 1/4 century seem to have adjusted well economically, despite having had difficulty in entering the occupations they intended to join. It is calculated that an average of $6607 per arrival is needed to cover the CEIC program costs of 1983-1984. PMID:12178937

  6. The Detroit Approach to Adapted Physical Education and Recreation.

    ERIC Educational Resources Information Center

    Elkins, Bruce; Czapski, Stephen

    The report describes Detroit's Adaptive Physical Education Consortium Project in Michigan. Among the main objectives of the project are to coordinate all physical education and recreation services to the handicapped in the Detroit area; to facilitate the mainstreaming of capable handicapped individuals into existing "regular" physical education…

  7. Adaptive E-Learning Environments: Research Dimensions and Technological Approaches

    ERIC Educational Resources Information Center

    Di Bitonto, Pierpaolo; Roselli, Teresa; Rossano, Veronica; Sinatra, Maria

    2013-01-01

    One of the most closely investigated topics in e-learning research has always been the effectiveness of adaptive learning environments. The technological evolutions that have dramatically changed the educational world in the last six decades have allowed ever more advanced and smarter solutions to be proposed. The focus of this paper is to depict…

  8. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  9. Design of Adaptive Hypermedia Learning Systems: A Cognitive Style Approach

    ERIC Educational Resources Information Center

    Mampadi, Freddy; Chen, Sherry Y.; Ghinea, Gheorghita; Chen, Ming-Puu

    2011-01-01

    In the past decade, a number of adaptive hypermedia learning systems have been developed. However, most of these systems tailor presentation content and navigational support solely according to students' prior knowledge. On the other hand, previous research suggested that cognitive styles significantly affect student learning because they refer to…

  10. Dissociating Conflict Adaptation from Feature Integration: A Multiple Regression Approach

    ERIC Educational Resources Information Center

    Notebaert, Wim; Verguts, Tom

    2007-01-01

    Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on…

  11. Assessing confidence in management adaptation approaches for climate-sensitive ecosystems

    NASA Astrophysics Data System (ADS)

    West, J. M.; Julius, S. H.; Weaver, C. P.

    2012-03-01

    A number of options are available for adapting ecosystem management to improve resilience in the face of climatic changes. However, uncertainty exists as to the effectiveness of these options. A report prepared for the US Climate Change Science Program reviewed adaptation options for a range of federally managed systems in the United States. The report included a qualitative uncertainty analysis of conceptual approaches to adaptation derived from the review. The approaches included reducing anthropogenic stressors, protecting key ecosystem features, maintaining representation, replicating, restoring, identifying refugia and relocating organisms. The results showed that the expert teams had the greatest scientific confidence in adaptation options that reduce anthropogenic stresses. Confidence in other approaches was lower because of gaps in understanding of ecosystem function, climate change impacts on ecosystems, and management effectiveness. This letter discusses insights gained from the confidence exercise and proposes strategies for improving future assessments of confidence for management adaptations to climate change.

  12. High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM

    PubMed Central

    Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne

    2015-01-01

    Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids. PMID:26576666

  13. High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM

    NASA Astrophysics Data System (ADS)

    Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne

    2015-11-01

    Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids.

  14. An Adaptive Sampling System for Sensor Nodes in Body Area Networks.

    PubMed

    Rieger, R; Taylor, J

    2014-04-23

    The importance of body sensor networks to monitor patients over a prolonged period of time has increased with an advance in home healthcare applications. Sensor nodes need to operate with very low-power consumption and under the constraint of limited memory capacity. Therefore, it is wasteful to digitize the sensor signal at a constant sample rate, given that the frequency contents of the signals vary with time. Adaptive sampling is established as a practical method to reduce the sample data volume. In this paper a low-power analog system is proposed, which adjusts the converter clock rate to perform a peak-picking algorithm on the second derivative of the input signal. The presented implementation does not require an analog-to-digital converter or a digital processor in the sample selection process. The criteria for selecting a suitable detection threshold are discussed, so that the maximum sampling error can be limited. A circuit level implementation is presented. Measured results exhibit a significant reduction in the average sample frequency and data rate of over 50% and 38% respectively. PMID:24760918
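
    The record describes an analog implementation that adjusts the converter clock before digitisation; purely as a software analogue of the same peak-picking idea (thresholding the second derivative, with illustrative signal and parameter values), one might write:

    ```python
    import numpy as np

    def adaptive_sample(signal, fs, thresh):
        """Keep a sample wherever the magnitude of the discrete second
        derivative exceeds `thresh`, plus a slow background rate so quiet
        segments remain represented."""
        d2 = np.gradient(np.gradient(signal))       # discrete 2nd derivative
        keep = np.abs(d2) > thresh
        keep[::50] = True                           # minimum background rate
        idx = np.flatnonzero(keep)
        return idx / fs, signal[idx]

    # Example: an ECG-like train of sharp peaks over a slow baseline
    fs = 500.0
    t = np.arange(0, 10, 1 / fs)
    ecg = 0.1 * np.sin(2 * np.pi * 0.3 * t)
    ecg[(np.arange(t.size) % 400) == 0] += 1.0      # sharp "R peaks"
    ts, vs = adaptive_sample(ecg, fs, thresh=0.05)
    print(f"kept {vs.size} of {ecg.size} samples "
          f"({100 * vs.size / ecg.size:.1f}%)")
    ```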

  15. An approach to fabrication of large adaptive optics mirrors

    NASA Astrophysics Data System (ADS)

    Schwartz, Eric; Rey, Justin; Blaszak, David; Cavaco, Jeffrey

    2014-07-01

    For more than two decades, Northrop Grumman Xinetics has been the principal supplier of small deformable mirrors that enable adaptive optical (AO) systems for the ground-based astronomical telescope community. With today's drive toward extremely large aperture systems, and the desire of telescope designers to include adaptive optics in the main optical path of the telescope, Xinetics has recognized the need for large active mirrors with the requisite bandwidth and actuator stroke. Presented in this paper is the proposed use of Northrop Grumman Xinetics' large, ultra-lightweight Silicon Carbide substrates with surface parallel actuation of sufficient spatial density and bandwidth to meet the requirements of tomorrow's AO systems, while reducing complexity and cost.

  16. A Hierarchical Adaptive Approach to Optimal Experimental Design

    PubMed Central

    Kim, Woojae; Pitt, Mark A.; Lu, Zhong-Lin; Steyvers, Mark; Myung, Jay I.

    2014-01-01

    Experimentation is at the core of research in the behavioral and neural sciences, yet observations can be expensive and time-consuming to acquire (e.g., MRI scans, responses from infant participants). A major interest of researchers is designing experiments that lead to maximal accumulation of information about the phenomenon under study with the fewest possible number of observations. In addressing this challenge, statisticians have developed adaptive design optimization methods. This letter introduces a hierarchical Bayes extension of adaptive design optimization that provides a judicious way to exploit two complementary schemes of inference (with past and future data) to achieve even greater accuracy and efficiency in information gain. We demonstrate the method in a simulation experiment in the field of visual perception. PMID:25149697
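
    Adaptive design optimization in its simplest, non-hierarchical form chooses each next design point to maximise the expected information gain about the model parameters. The toy sketch below illustrates that core loop under strong simplifying assumptions (a one-parameter psychometric model on a discretised grid; nothing here is taken from the letter itself):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    thetas = np.linspace(0.1, 2.0, 200)        # discretised threshold grid
    prior = np.ones_like(thetas) / thetas.size
    designs = np.linspace(0.05, 2.5, 60)       # candidate stimulus intensities

    def p_correct(d, theta):
        # Toy 2AFC psychometric function (hypothetical form and slope)
        return 0.5 + 0.5 / (1.0 + np.exp(-(d - theta) * 5.0))

    def expected_info_gain(d, prior):
        """Mutual information between the binary response and theta."""
        H = lambda p: -np.sum(p * np.log(p + 1e-12))
        p1 = p_correct(d, thetas)              # P(correct | theta)
        m1 = prior @ p1                        # marginal P(correct)
        post1 = prior * p1 / m1
        post0 = prior * (1 - p1) / (1 - m1)
        return H(prior) - (m1 * H(post1) + (1 - m1) * H(post0))

    true_theta = 0.9
    for trial in range(30):
        d = designs[np.argmax([expected_info_gain(d, prior) for d in designs])]
        y = rng.random() < p_correct(d, true_theta)   # simulated response
        like = p_correct(d, thetas) if y else 1 - p_correct(d, thetas)
        prior = prior * like
        prior /= prior.sum()                   # Bayesian update
    print("posterior mean threshold:", round(prior @ thetas, 2))
    ```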

  17. Design of an Adaptive Secondary Mirror: A Global Approach

    NASA Astrophysics Data System (ADS)

    Brusa, Guido; del Vecchio, Ciro

    1998-07-01

    We present the mechanical and actuator design of an adaptive secondary mirror that matches the optical requirements of the active and adaptive corrections. Conceived for the 6.5-m conversion of the multiple-mirror telescope, this study is, with small variations of the input parameters, suitable for applications to telescopes of the same class. We found that a three-layer structure, i.e., a thin deformable shell, a thick reference plate, and a third plate that acts as actuator support and heat sink, is able to provide the required mechanical stability and actuator density. We also found that a simple electromagnetic actuator can be used. This actuator, when optimized, will dissipate a typical power of a few tenths of a watt.

  18. Organ sample generator for expected treatment dose construction and adaptive inverse planning optimization

    SciTech Connect

    Nie Xiaobo; Liang Jian; Yan Di

    2012-12-15

    Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the function form of the coefficients. Eleven head-and-neck (H&N) cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during H&N cancer radiotherapy can be represented using the first 3-4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
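
    A minimal sketch of the PCA half of such a generator follows (synthetic landmark data, and a stationary Gaussian draw standing in for the paper's time-varying least-squares coefficient model; all shapes and sizes are hypothetical):

    ```python
    import numpy as np

    # Hypothetical input: organ surface described by m flattened landmark
    # coordinates, observed on d treatment days (rows = days).
    rng = np.random.default_rng(0)
    d, m = 30, 300
    daily_shapes = rng.normal(size=(d, m)).cumsum(axis=0) * 0.05 \
                   + rng.normal(size=m)

    mean_shape = daily_shapes.mean(axis=0)
    X = daily_shapes - mean_shape

    # PCA via SVD: rows of Vt are eigenvectors of the variation covariance
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    k = 4                          # abstract reports 3-4 modes suffice
    modes = Vt[:k]                 # dominant deformation modes
    coeffs = X @ modes.T           # daily scores on each mode

    # Treat the scores as Gaussian (a stationary simplification of the
    # paper's time-varying regression) and draw random organ realisations.
    mu, sd = coeffs.mean(axis=0), coeffs.std(axis=0, ddof=1)
    samples = mean_shape + rng.normal(mu, sd, size=(1000, k)) @ modes
    ```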

  19. The adaptive significance of adult neurogenesis: an integrative approach

    PubMed Central

    Konefal, Sarah; Elliot, Mick; Crespi, Bernard

    2013-01-01

    Adult neurogenesis in mammals is predominantly restricted to two brain regions, the dentate gyrus (DG) of the hippocampus and the olfactory bulb (OB), suggesting that these two brain regions uniquely share functions that mediate its adaptive significance. Benefits of adult neurogenesis across these two regions appear to converge on increased neuronal and structural plasticity that subserves coding of novel, complex, and fine-grained information, usually with contextual components that include spatial positioning. By contrast, costs of adult neurogenesis appear to center on potential for dysregulation resulting in higher risk of brain cancer or psychological dysfunctions, but such costs have yet to be quantified directly. The three main hypotheses for the proximate functions and adaptive significance of adult neurogenesis, pattern separation, memory consolidation, and olfactory spatial, are not mutually exclusive and can be reconciled into a simple general model amenable to targeted experimental and comparative tests. Comparative analysis of brain region sizes across two major social-ecological groups of primates, gregarious (mainly diurnal haplorhines, visually-oriented, and in large social groups) and solitary (mainly nocturnal, territorial, and highly reliant on olfaction, as in most rodents), suggests that solitary species, but not gregarious species, show positive associations of population densities and home range sizes with sizes of both the hippocampus and OB, implicating their functions in social-territorial systems mediated by olfactory cues. Integrated analyses of the adaptive significance of adult neurogenesis will benefit from experimental studies motivated and structured by ecologically and socially relevant selective contexts. PMID:23882188

  20. An integrated sampling and analysis approach for improved biodiversity monitoring

    USGS Publications Warehouse

    DeWan, Amielle A.; Zipkin, Elise F.

    2010-01-01

    Successful biodiversity conservation requires high quality monitoring data and analyses to ensure scientifically defensible policy, legislation, and management. Although monitoring is a critical component in assessing population status and trends, many governmental and non-governmental organizations struggle to develop and implement effective sampling protocols and statistical analyses because of the magnitude and diversity of species of conservation concern. In this article we describe a practical and sophisticated data collection and analysis framework for developing a comprehensive wildlife monitoring program that includes multi-species inventory techniques and community-level hierarchical modeling. Compared to monitoring many species individually, the multi-species approach allows for improved estimates of individual species occurrences, including rare species, and an increased understanding of the aggregated response of a community to landscape and habitat heterogeneity. We demonstrate the benefits and practicality of this approach to address challenges associated with monitoring in the context of US state agencies that are legislatively required to monitor and protect species in greatest conservation need. We believe this approach will be useful to regional, national, and international organizations interested in assessing the status of both common and rare species.

  1. A unified approach to characterize and conserve adaptive and neutral genetic diversity in subdivided populations.

    PubMed

    Wellmann, Robin; Bennewitz, Jörn; Meuwissen, Theo H E

    2014-01-01

    As extinction of local domestic breeds and of isolated subpopulations of wild species continues, and the resources available for conservation programs are limited, prioritizing subpopulations for conservation is of high importance to halt the erosion of genetic diversity observed in endangered species. Current approaches usually only take neutral genetic diversity into account. However, adaptation of subpopulations to different environments also contributes to the diversity found in the species. This paper introduces two notions of adaptive variation. The adaptive diversity in a trait is the excess of variance found in genotypic values relative to the variance that would have been expected in the absence of selection. The adaptivity coverage of a set of subpopulations quantifies how well the subpopulations could adapt to a large range of environments within a limited time span. Additionally, genome-based notions of neutral diversities were obtained that correspond to well known pedigree-based definitions. The values of subpopulations for conservation of adaptivity coverage were compared with their conservation values for adaptive diversity and neutral diversities using simulated data. Conservation values for adaptive diversity and neutral diversities were only slightly correlated, but the values for conservation of adaptivity coverage showed a reasonable correlation with both kinds of diversity if the time span was chosen appropriately. Hence, maintaining adaptivity coverage is a promising approach to prioritize subpopulations for conservation decisions. PMID:25578300

  2. Key wavelengths screening using competitive adaptive reweighted sampling method for multivariate calibration.

    PubMed

    Li, Hongdong; Liang, Yizeng; Xu, Qingsong; Cao, Dongsheng

    2009-08-19

    By employing the simple but effective principle 'survival of the fittest' on which Darwin's Evolution Theory is based, a novel strategy for selecting an optimal combination of key wavelengths of multi-component spectral data, named competitive adaptive reweighted sampling (CARS), is developed. Key wavelengths are defined as the wavelengths with large absolute coefficients in a multivariate linear regression model, such as partial least squares (PLS). In the present work, the absolute values of regression coefficients of PLS model are used as an index for evaluating the importance of each wavelength. Then, based on the importance level of each wavelength, CARS sequentially selects N subsets of wavelengths from N Monte Carlo (MC) sampling runs in an iterative and competitive manner. In each sampling run, a fixed ratio (e.g. 80%) of samples is first randomly selected to establish a calibration model. Next, based on the regression coefficients, a two-step procedure including exponentially decreasing function (EDF) based enforced wavelength selection and adaptive reweighted sampling (ARS) based competitive wavelength selection is adopted to select the key wavelengths. Finally, cross validation (CV) is applied to choose the subset with the lowest root mean square error of CV (RMSECV). The performance of the proposed procedure is evaluated using one simulated dataset together with one near infrared dataset of two properties. The results reveal an outstanding characteristic of CARS that it can usually locate an optimal combination of some key wavelengths which are interpretable to the chemical property of interest. Additionally, our study shows that better prediction is obtained by CARS when compared to full spectrum PLS modeling, Monte Carlo uninformative variable elimination (MC-UVE) and moving window partial least squares regression (MWPLSR). PMID:19616692
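
    A compact sketch of the CARS loop described above is given below (Python with scikit-learn). It follows the abstract's description — Monte Carlo calibration subsets, PLS coefficient magnitudes as importance weights, an exponentially decreasing function (EDF) for enforced reduction, adaptive reweighted sampling among the survivors, and cross-validated RMSE for scoring — but simplifies details of the published algorithm:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def cars(X, y, n_runs=50, n_components=5, ratio=0.8, seed=0):
        """Minimal CARS sketch; returns the best wavelength subset found."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        # EDF chosen so the retained fraction decays from ~1 to 2/p
        a = (p / 2.0) ** (1.0 / (n_runs - 1))
        k = np.log(p / 2.0) / (n_runs - 1)
        keep = np.arange(p)
        best_rmse, best_set = np.inf, keep
        for i in range(1, n_runs + 1):
            cal = rng.choice(n, size=int(ratio * n), replace=False)
            pls = PLSRegression(n_components=min(n_components, len(keep)))
            pls.fit(X[np.ix_(cal, keep)], y[cal])
            w = np.abs(pls.coef_).ravel()          # wavelength importances
            # Enforced selection: keep the r_i * p strongest wavelengths
            n_keep = min(max(2, int(round(a * np.exp(-k * i) * p))), len(keep))
            order = np.argsort(w)[::-1][:n_keep]
            # Adaptive reweighted sampling among the survivors
            prob = w[order] / w[order].sum()
            pick = rng.choice(order, size=n_keep, replace=True, p=prob)
            keep = keep[np.unique(pick)]
            rmse = -cross_val_score(
                PLSRegression(n_components=min(n_components, len(keep))),
                X[:, keep], y, cv=5,
                scoring="neg_root_mean_squared_error").mean()
            if rmse < best_rmse:
                best_rmse, best_set = rmse, keep
        return best_set, best_rmse
    ```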

  3. Advances in adaptive control theory: Gradient- and derivative-free approaches

    NASA Astrophysics Data System (ADS)

    Yucelen, Tansel

    In this dissertation, we present new approaches to improve standard designs in adaptive control theory, and novel adaptive control architectures. We first present a novel Kalman filter based approach for approximately enforcing a linear constraint in standard adaptive control design. One application is that this leads to alternative forms for well known modification terms such as e-modification. In addition, it leads to smaller tracking errors without incurring significant oscillations in the system response and without requiring high modification gain. We derive alternative forms of e- and adaptive loop recovery (ALR-) modifications. Next, we show how to use Kalman filter optimization to derive a novel adaptation law. This results in an optimization-based time-varying adaptation gain that reduces the need for adaptation gain tuning. A second major contribution of this dissertation is the development of a novel derivative-free, delayed weight update law for adaptive control. The assumption of constant unknown ideal weights is relaxed to the existence of time-varying weights, such that fast and possibly discontinuous variations in weights are allowed. This approach is particularly advantageous for applications to systems that can undergo a sudden change in dynamics, such as might be due to reconfiguration, deployment of a payload, docking, or structural damage, and for rejection of external disturbance processes. As a third and final contribution, we develop a novel approach for extending all the methods developed in this dissertation to the case of output feedback. The approach is developed only for the case of derivative-free adaptive control, and the extension of the other approaches developed previously for the state feedback case to output feedback is left as a future research topic. The proposed approaches of this dissertation are illustrated in both simulation and flight test.

  4. Image classification with densely sampled image windows and generalized adaptive multiple kernel learning.

    PubMed

    Yan, Shengye; Xu, Xinxing; Xu, Dong; Lin, Stephen; Li, Xuelong

    2015-03-01

    We present a framework for image classification that extends beyond the window sampling of fixed spatial pyramids and is supported by a new learning algorithm. Based on the observation that fixed spatial pyramids sample a rather limited subset of the possible image windows, we propose a method that accounts for a comprehensive set of windows densely sampled over location, size, and aspect ratio. A concise high-level image feature is derived to effectively deal with this large set of windows, and this higher level of abstraction offers both efficient handling of the dense samples and reduced sensitivity to misalignment. In addition to dense window sampling, we introduce generalized adaptive ℓp-norm multiple kernel learning (GA-MKL) to learn a robust classifier based on multiple base kernels constructed from the new image features and multiple sets of prelearned classifiers from other classes. With GA-MKL, multiple levels of image features are effectively fused, and information is shared among different classifiers. Extensive evaluation on benchmark datasets for object recognition (Caltech256 and Caltech101) and scene recognition (15Scenes) demonstrates that the proposed method outperforms the state-of-the-art under a broad range of settings. PMID:24968365

  5. Adaptive sampling in two-phase designs: a biomarker study for progression in arthritis

    PubMed Central

    McIsaac, Michael A; Cook, Richard J

    2015-01-01

    Response-dependent two-phase designs are used increasingly often in epidemiological studies to ensure sampling strategies offer good statistical efficiency while working within resource constraints. Optimal response-dependent two-phase designs are difficult to implement, however, as they require specification of unknown parameters. We propose adaptive two-phase designs that exploit information from an internal pilot study to approximate the optimal sampling scheme for an analysis based on mean score estimating equations. The frequency properties of estimators arising from this design are assessed through simulation, and they are shown to be similar to those from optimal designs. The design procedure is then illustrated through application to a motivating biomarker study in an ongoing rheumatology research program. Copyright © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:25951124
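
    The mean-score machinery in the paper is involved, but the core idea — using an internal pilot to steer phase-two sampling toward the strata where it buys the most precision — can be caricatured with simple Neyman allocation. This is a hedged stand-in, not the authors' design; all names and numbers below are hypothetical:

    ```python
    import numpy as np

    def pilot_allocation(pilot_resp, pilot_strata, stratum_sizes, n_phase2):
        """Allocate the phase-two sample by Neyman allocation,
        n_h proportional to N_h * s_h, with s_h estimated from the pilot."""
        strata = np.unique(pilot_strata)
        s = np.array([pilot_resp[pilot_strata == h].std(ddof=1)
                      for h in strata])
        N = np.array([stratum_sizes[h] for h in strata], dtype=float)
        alloc = N * s / (N * s).sum() * n_phase2
        return dict(zip(strata, np.maximum(1, np.round(alloc)).astype(int)))

    # Example: biomarker measured on a 60-subject pilot in 3 exposure strata
    rng = np.random.default_rng(1)
    strata = rng.choice([0, 1, 2], size=60)
    resp = strata * 0.8 + rng.normal(0, 1 + 0.5 * strata)  # variance grows
    print(pilot_allocation(resp, strata, {0: 500, 1: 300, 2: 200},
                           n_phase2=150))
    ```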

  6. Adaptive leadership: a novel approach for family decision making.

    PubMed

    Adams, Judith; Bailey, Donald E; Anderson, Ruth A; Galanos, Anthony N

    2013-03-01

    Family members of intensive care unit (ICU) patients want to be involved in decision making, but they may not be best served by being placed in the position of having to solve problems for which they lack knowledge and skills. This case report presents an exemplar family meeting in the ICU led by a palliative care specialist, with discussion about the strategies used to improve the capacity of the family to make a decision consistent with the patient's goals. These strategies are presented through the lens of Adaptive Leadership. PMID:22663140

  7. PFC design via FRIT Approach for Adaptive Output Feedback Control of Discrete-time Systems

    NASA Astrophysics Data System (ADS)

    Mizumoto, Ikuro; Takagi, Taro; Fukui, Sota; Shah, Sirish L.

    This paper deals with the design of adaptive output feedback control for discrete-time systems with a parallel feedforward compensator (PFC), which is designed to make the augmented controlled system almost strictly positive real (ASPR). A PFC design scheme based on the FRIT (fictitious reference iterative tuning) approach, using only an input/output experimental data set, is proposed for discrete-time systems in order to design an adaptive output feedback control system. Furthermore, the effectiveness of the proposed PFC design method is confirmed through numerical simulations by designing an adaptive control system with an adaptive neural network (NN) for an uncertain discrete-time system.

  8. Adaptive sampling dual terahertz comb spectroscopy using dual free-running femtosecond lasers

    PubMed Central

    Yasui, Takeshi; Ichikawa, Ryuji; Hsieh, Yi-Da; Hayashi, Kenta; Cahyadi, Harsono; Hindle, Francis; Sakaguchi, Yoshiyuki; Iwata, Tetsuo; Mizutani, Yasuhiro; Yamamoto, Hirotsugu; Minoshima, Kaoru; Inaba, Hajime

    2015-01-01

    Terahertz (THz) dual comb spectroscopy (DCS) is a promising method for high-accuracy, high-resolution, broadband THz spectroscopy because the mode-resolved THz comb spectrum includes both broadband THz radiation and narrow-line CW-THz radiation characteristics. In addition, all frequency modes of a THz comb can be phase-locked to a microwave frequency standard, providing excellent traceability. However, the need for stabilization of dual femtosecond lasers has often hindered its wide use. To overcome this limitation, here we have demonstrated adaptive-sampling THz-DCS, allowing the use of free-running femtosecond lasers. To correct the fluctuation of the time and frequency scales caused by the laser timing jitter, an adaptive sampling clock is generated by dual THz-comb-referenced spectrum analysers and is used as a timing clock signal in a data acquisition board. The results not only indicated the successful implementation of THz-DCS with free-running lasers but also showed that this configuration outperforms standard THz-DCS with stabilized lasers due to the slight jitter remaining in the stabilized lasers. PMID:26035687
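
    In the paper the correction happens in hardware via the adaptive clock; the equivalent post-processing operation, shown here only as a conceptual software analogue with synthetic numbers, is to re-map samples recorded on a jittered time base onto the nominal uniform grid:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096
    ideal_t = np.arange(n) * 1.0                  # nominal time axis (a.u.)
    jitter = np.cumsum(rng.normal(0, 0.02, n))    # slow timing drift
    actual_t = ideal_t + jitter                   # when samples really occurred
    signal = np.exp(-((actual_t - 2000) / 80) ** 2) * np.cos(0.5 * actual_t)

    # Re-map the data onto the nominal grid using the measured clock record
    corrected = np.interp(ideal_t, actual_t, signal)
    ```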

  9. Making CORBA objects persistent: The object database adapter approach

    SciTech Connect

    Reverbel, F.C.R.

    1997-05-01

    In spite of its remarkable successes in promoting standards for distributed object systems, the Object Management Group (OMG) has not yet settled the issue of object persistence in the Object Request Broker (ORB) environment. The Common Object Request Broker Architecture (CORBA) specification briefly mentions an Object-Oriented Database Adapter that makes objects stored in an object-oriented database accessible through the ORB. This idea is pursued in Appendix B of the ODMG standard, which identifies a number of issues involved in using an Object Database Management System (ODBMS) in a CORBA environment, and proposes an Object Database Adapter (ODA) to realize the integration of the ORB with the ODBMS. This paper discusses the design and implementation of an ODA that integrates an ORB and an ODBMS with C++ bindings. For the author's purposes, an ODBMS is a system with programming interfaces. It may be a pure object-oriented DBMS (an OODBMS), or a combination of a relational DBMS and an object-relational mapper.

  10. Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems.

    PubMed

    Vrabie, Draguna; Lewis, Frank

    2009-04-01

    In this paper we present in a continuous-time framework an online approach to direct adaptive optimal control with infinite horizon cost for nonlinear systems. The algorithm converges online to the optimal control solution without knowledge of the internal system dynamics. Closed-loop dynamic stability is guaranteed throughout. The algorithm is based on a reinforcement learning scheme, namely Policy Iterations, and makes use of neural networks, in an Actor/Critic structure, to parametrically represent the control policy and the performance of the control system. The two neural networks are trained to express the optimal controller and optimal cost function which describes the infinite horizon control performance. Convergence of the algorithm is proven under the realistic assumption that the two neural networks do not provide perfect representations for the nonlinear control and cost functions. The result is a hybrid control structure which involves a continuous-time controller and a supervisory adaptation structure which operates based on data sampled from the plant and from the continuous-time performance dynamics. Such a control structure is unlike any standard form of controller previously seen in the literature. Simulation results, obtained considering two second-order nonlinear systems, are provided. PMID:19362449
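
    The paper's algorithm learns online without a model; the underlying policy-iteration recursion it approximates is easiest to see in the model-based linear-quadratic special case (Kleinman's algorithm), where policy evaluation is a Lyapunov solve and policy improvement is a greedy gain update. A sketch with an arbitrary stable example system (not from the paper):

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[0.0, 1.0], [-1.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])

    K = np.array([[1.0, 1.0]])          # any initial stabilising gain
    for _ in range(20):
        Acl = A - B @ K
        # Policy evaluation: solve Acl' P + P Acl = -(Q + K' R K)
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B' P
        K_new = np.linalg.solve(R, B.T @ P)
        if np.allclose(K_new, K, atol=1e-10):
            break
        K = K_new
    print("converged gain:", K)        # matches the LQR solution of the ARE
    ```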

  11. Identification of novel serum peptide biomarkers for high-altitude adaptation: a comparative approach

    PubMed Central

    Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen

    2016-01-01

    We aimed to identify serum biomarkers for screening individuals who could adapt to high-altitude hypoxia at sea level. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, and routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiling between HHI and HHA groups collected at sea level. Routine blood tests revealed that the concentrations of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling showed ten significantly differentially expressed peaks between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further identified by sequencing as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347–356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205–214), and isoform 1 of fibrinogen α chain precursor (FGA 588–624). Expression of their full proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provided a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes. PMID:26452043

  12. Identification of novel serum peptide biomarkers for high-altitude adaptation: a comparative approach.

    PubMed

    Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen

    2016-01-01

    We aimed to identify serum biomarkers for screening individuals who could adapt to high-altitude hypoxia at sea level. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, and routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiling between HHI and HHA groups collected at sea level. Routine blood tests revealed that the concentrations of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling showed ten significantly differentially expressed peaks between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further identified by sequencing as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347-356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205-214), and isoform 1 of fibrinogen α chain precursor (FGA 588-624). Expression of their full proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provided a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes. PMID:27150491

  13. Identification of novel serum peptide biomarkers for high-altitude adaptation: a comparative approach

    NASA Astrophysics Data System (ADS)

    Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen

    2016-05-01

    We aimed to identify serum biomarkers for screening individuals who could adapt to high-altitude hypoxia at sea level. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, and routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiling between HHI and HHA groups collected at sea level. Routine blood tests revealed that the concentrations of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling showed ten significantly differentially expressed peaks between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further identified by sequencing as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347–356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205–214), and isoform 1 of fibrinogen α chain precursor (FGA 588–624). Expression of their full proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provided a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes.

  14. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
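
    For concreteness, the type of model analysed — a noisy leaky integrate-and-fire neuron with a spike-triggered adaptation current — can be simulated with a few lines of Euler integration (parameter values below are illustrative, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 1e-4, 5.0
    mu, D = 2.0, 0.1            # mean drive and noise intensity
    tau_a, delta = 0.1, 1.0     # adaptation time constant and jump size
    v_th, v_r = 1.0, 0.0        # threshold and reset

    n = int(T / dt)
    v = np.zeros(n)             # membrane potential
    a = np.zeros(n)             # adaptation current
    spikes = []
    for i in range(1, n):
        noise = np.sqrt(2 * D * dt) * rng.standard_normal()
        v[i] = v[i-1] + dt * (mu - v[i-1] - a[i-1]) + noise
        a[i] = a[i-1] - dt * a[i-1] / tau_a
        if v[i] >= v_th:        # spike: reset and increment adaptation
            v[i] = v_r
            a[i] += delta
            spikes.append(i * dt)

    print(f"rate = {len(spikes) / T:.1f} spikes per unit time; "
          f"mean adaptation = {a.mean():.2f}")
    ```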

  15. Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James C.; Acikmese, Behcet; Mandic, Milan

    2012-01-01

    A software tool is used to demonstrate the feasibility of Touch and Go (TAG) sampling for Asteroid Sample Return missions. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. Previous work at JPL developed the G-TAG simulation tool, which provides a software environment for fast, multi-body simulations of the TAG event. G-TAG is described in Multibody Simulation Software Testbed for Small-Body Exploration and Sampling, (NPO-47196) NASA Tech Briefs, Vol. 35, No. 11 (November 2011), p.54. This current innovation adapts this tool to a mission that intends to return a sample from the surface of an asteroid. In order to demonstrate the feasibility of the TAG concept, the new software tool was used to generate extensive simulations that demonstrate the designed spacecraft meets key requirements. These requirements state that contact force and duration must be sufficient to ensure that enough material from the surface is collected in the brushwheel sampler (BWS), and that the spacecraft must survive the contact and must be able to recover and ascend to a safe position, and maintain velocity and orientation after the contact.

  16. An Evidence-Based Public Health Approach to Climate Change Adaptation

    PubMed Central

    Eidson, Millicent; Tlumak, Jennifer E.; Raab, Kristin K.; Luber, George

    2014-01-01

    Background: Public health is committed to evidence-based practice, yet there has been minimal discussion of how to apply an evidence-based practice framework to climate change adaptation. Objectives: Our goal was to review the literature on evidence-based public health (EBPH), to determine whether it can be applied to climate change adaptation, and to consider how emphasizing evidence-based practice may influence research and practice decisions related to public health adaptation to climate change. Methods: We conducted a substantive review of EBPH, identified a consensus EBPH framework, and modified it to support an EBPH approach to climate change adaptation. We applied the framework to an example and considered implications for stakeholders. Discussion: A modified EBPH framework can accommodate the wide range of exposures, outcomes, and modes of inquiry associated with climate change adaptation and the variety of settings in which adaptation activities will be pursued. Several factors currently limit application of the framework, including a lack of higher-level evidence of intervention efficacy and a lack of guidelines for reporting climate change health impact projections. To enhance the evidence base, there must be increased attention to designing, evaluating, and reporting adaptation interventions; standardized health impact projection reporting; and increased attention to knowledge translation. This approach has implications for funders, researchers, journal editors, practitioners, and policy makers. Conclusions: The current approach to EBPH can, with modifications, support climate change adaptation activities, but there is little evidence regarding interventions and knowledge translation, and guidelines for projecting health impacts are lacking. Realizing the goal of an evidence-based approach will require systematic, coordinated efforts among various stakeholders. Citation: Hess JJ, Eidson M, Tlumak JE, Raab KK, Luber G. 2014. An evidence-based public

  17. A context-adaptable approach to clinical guidelines.

    PubMed

    Terenziani, Paolo; Montani, Stefania; Bottrighi, Alessio; Torchio, Mauro; Molino, Gianpaolo; Correndo, Gianluca

    2004-01-01

    One of the most relevant obstacles to the use and dissemination of clinical guidelines is the gap between the generality of guidelines (as defined, e.g., by physicians' committees) and the peculiarities of the specific context of application. In particular, general guidelines do not take into account the fact that the tools needed for laboratory and instrumental investigations might be unavailable at a given hospital. Moreover, computer-based guideline managers must also be integrated with the Hospital Information System (HIS), and different DBMSs are usually adopted by different hospitals. The GLARE (Guideline Acquisition, Representation and Execution) system addresses these issues by providing a facility for automatic resource-based adaptation of guidelines to the specific context of application, and by providing a modular architecture in which only limited and well-localised changes are needed to integrate the system with the HIS at hand. PMID:15360797

  18. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  19. Small Sample Properties of an Adaptive Filter with Application to Low Volume Statistical Process Control

    SciTech Connect

    CROWDER, STEPHEN V.

    1999-09-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper we address the issue of low volume statistical process control. We investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. We develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, we study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. We show that far fewer data values are needed than is typically recommended for process control applications. We also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.
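
    To make the recursive updating concrete, the sketch below estimates the mean and lag-1 coefficient of a short autocorrelated series one observation at a time and tracks the one-step-ahead prediction error. It assumes an AR(1) process with a simple forgetting factor; the update rules and names are illustrative assumptions, not the authors' published filter.

```python
import numpy as np

def adaptive_ar1_filter(x, lam=0.95):
    """One-step-ahead prediction for a short AR(1) series.

    Process mean and autocorrelation are unknown at start-up and are
    re-estimated recursively (with forgetting factor `lam`) as each
    new observation arrives.
    """
    mu = x[0]             # running estimate of the process mean
    num, den = 0.0, 1e-9  # accumulators for the lag-1 regression slope
    phi = 0.0             # running estimate of the AR(1) coefficient
    preds = [mu]
    for t in range(1, len(x)):
        # predict before seeing x[t]
        preds.append(mu + phi * (x[t - 1] - mu))
        # update parameter estimates with the new observation
        mu = lam * mu + (1 - lam) * x[t]
        num = lam * num + (x[t] - mu) * (x[t - 1] - mu)
        den = lam * den + (x[t - 1] - mu) ** 2
        phi = np.clip(num / den, -0.99, 0.99)
    return np.array(preds)

rng = np.random.default_rng(1)
series = 10 + rng.standard_normal(30).cumsum() * 0.1  # short autocorrelated run
mse = np.mean((series - adaptive_ar1_filter(series)) ** 2)
print(f"one-step-ahead MSE over 30 points: {mse:.4f}")
```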

  20. Small sample properties of an adaptive filter with application to low volume statistical process control

    SciTech Connect

    Crowder, S.V.; Eshleman, L.

    1998-08-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper the authors address the issue of low volume statistical process control. They investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. The authors develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, they study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. They show that far fewer data values are needed than is typically recommended for process control applications. And they demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.

  1. Computational prediction of riboswitch tertiary structures including pseudoknots by RAGTOP: a hierarchical graph sampling approach.

    PubMed

    Kim, Namhee; Zahran, Mai; Schlick, Tamar

    2015-01-01

    The modular organization of RNA structure has been exploited in various computational and theoretical approaches to identify RNA tertiary (3D) motifs and assemble RNA structures. Riboswitches exemplify this modularity in terms of both structural and functional adaptability of RNA components. Here, we extend our computational approach based on tree graph sampling to the prediction of riboswitch topologies by defining additional edges to mimic pseudoknots. Starting from a secondary (2D) structure, we construct an initial graph deduced from predicted junction topologies by our data-mining algorithm RNAJAG trained on known RNAs; we sample these graphs in 3D space guided by knowledge-based statistical potentials derived from bending and torsion measures of internal loops as well as radii of gyration for known RNAs. We present graph sampling results for 10 representative riboswitches, 6 of them with pseudoknots, and compare our predictions to solved structures based on global and local RMSD measures. Our results indicate that the helical arrangements in riboswitches can be approximated using our combination of modified 3D tree graph representations for pseudoknots, junction prediction, graph moves, and scoring functions. Future challenges in the field of riboswitch prediction and design are also discussed. PMID:25726463

  2. Adaptively Managing Wildlife for Climate Change: A Fuzzy Logic Approach

    NASA Astrophysics Data System (ADS)

    Prato, Tony

    2011-07-01

    Wildlife managers have little or no control over climate change. However, they may be able to alleviate potential adverse impacts of future climate change by adaptively managing wildlife for climate change. In particular, wildlife managers can evaluate the efficacy of compensatory management actions (CMAs) in alleviating potential adverse impacts of future climate change on wildlife species using probability-based or fuzzy decision rules. Application of probability-based decision rules requires managers to specify certain probabilities, which is not possible when they are uncertain about the relationships between observed and true ecological conditions for a species. Under such uncertainty, the efficacy of CMAs can be evaluated and the best CMA selected using fuzzy decision rules. The latter are described and demonstrated using three constructed cases that assume: (1) a single ecological indicator (e.g., population size for a species) in a single time period; (2) multiple ecological indicators for a species in a single time period; and (3) multiple ecological conditions for a species in multiple time periods.
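
    A minimal sketch of case (1), a single ecological indicator in a single time period: fuzzy membership functions grade a projected population size as low, adequate, or high, and a small rule base is defuzzified into an efficacy score used to pick the best CMA. The membership breakpoints, rule outputs, and CMA names are invented for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cma_efficacy(pop_size):
    """Fuzzy efficacy of a CMA given a projected population size.

    Illustrative rule base: IF population is 'low' THEN efficacy is poor
    (0.2); IF 'adequate' THEN fair (0.6); IF 'high' THEN good (0.9).
    Defuzzified by the weighted average of the rule outputs.
    """
    low = tri(pop_size, 0, 200, 500)
    adequate = tri(pop_size, 300, 700, 1100)
    high = tri(pop_size, 900, 1400, 2000)
    w = low + adequate + high
    return (0.2 * low + 0.6 * adequate + 0.9 * high) / w if w else 0.0

# choose the best CMA by comparing fuzzy efficacy scores
projected = {"no action": 350, "habitat restoration": 950, "translocation": 640}
best = max(projected, key=lambda a: cma_efficacy(projected[a]))
print(best, {a: round(cma_efficacy(p), 2) for a, p in projected.items()})
```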

  3. Adaptation to floods in future climate: a practical approach

    NASA Astrophysics Data System (ADS)

    Doroszkiewicz, Joanna; Romanowicz, Renata; Radon, Radoslaw; Hisdal, Hege

    2016-04-01

    In this study some aspects of the application of a 1D hydraulic model are discussed, with a focus on its suitability for flood adaptation under future climate conditions. The Biała Tarnowska catchment is used as a case study. A 1D hydraulic model is developed for the evaluation of inundation extent and risk maps in future climatic conditions. We analyse the following flood indices: (i) extent of inundation area; (ii) depth of water on flooded land; (iii) the flood wave duration; (iv) the volume of a flood wave over the threshold value. In this study we derive a model cross-section geometry following the results of primary research based on a 500-year flood inundation extent. We compare two methods of locating cross-sections in terms of their suitability for deriving the most precise inundation outlines. The aim is to specify embankment heights along the river channel that would protect the river valley in the most vulnerable locations under future climatic conditions. We present an experimental design for scenario analysis studies and uncertainty reduction options for future climate projections obtained from the EUROCORDEX project. Acknowledgements: This work was supported by the project CHIHE (Climate Change Impact on Hydrological Extremes), carried out in the Institute of Geophysics Polish Academy of Sciences, funded by Norway Grants (contract No. Pol-Nor/196243/80/2013). The hydro-meteorological observations were provided by the Institute of Meteorology and Water Management (IMGW), Poland.

  4. Recruiting hard-to-reach United States population sub-groups via adaptations of snowball sampling strategy

    PubMed Central

    Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith

    2011-01-01

    Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population-based sample is not essential. PMID:20727089

  5. Adaptation of a Weighted Regression Approach to Evaluate Water Quality Trends in an Estuary

    EPA Science Inventory

    To improve the description of long-term changes in water quality, we adapted a weighted regression approach to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach, originally developed to resolve pollutant transport trends in rivers...

  6. Adaptation of a weighted regression approach to evaluate water quality trends in an estuary

    EPA Science Inventory

    To improve the description of long-term changes in water quality, a weighted regression approach developed to describe trends in pollutant transport in rivers was adapted to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach allows...

  7. Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2006-01-01

    This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
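
    A hedged sketch of posterior-weighted item selection under the partial credit model: for each candidate item, the Fisher information (which for the partial credit model equals the variance of the item score at a given ability) is averaged over the current ability posterior, and the item maximizing that average is selected. The grid, prior, and item bank below are illustrative; the exact MEI/MPI weighting in the study may differ.

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Partial credit model category probabilities for one item.

    `deltas` are the step difficulties; category scores run 0..len(deltas).
    """
    steps = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
    e = np.exp(steps - steps.max())
    return e / e.sum()

def item_information(theta, deltas):
    """Fisher information of a PCM item at ability theta (= Var of score)."""
    p = pcm_probs(theta, deltas)
    x = np.arange(len(p))
    return np.sum(p * x**2) - np.sum(p * x) ** 2

def select_item_mpi(posterior, grid, item_bank, administered):
    """Pick the unadministered item with maximum posterior-weighted information."""
    best, best_val = None, -np.inf
    for j, deltas in enumerate(item_bank):
        if j in administered:
            continue
        info = np.array([item_information(t, deltas) for t in grid])
        val = np.sum(posterior * info)  # expectation over the ability posterior
        if val > best_val:
            best, best_val = j, val
    return best

grid = np.linspace(-4, 4, 81)
posterior = np.exp(-0.5 * grid**2)
posterior /= posterior.sum()  # standard normal prior before any responses
bank = [[-1.0, 0.0], [0.5, 1.5], [-0.5, 0.5, 1.0]]  # step difficulties per item
print(select_item_mpi(posterior, grid, bank, administered=set()))
```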

  8. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    ERIC Educational Resources Information Center

    Dorça, Fabiano Azevedo; Lima, Luciano Vieira; Fernandes, Márcia Aparecida; Lopes, Carlos Roberto

    2012-01-01

    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and…

  9. Adaptive Role Playing Games: An Immersive Approach for Problem Based Learning

    ERIC Educational Resources Information Center

    Sancho, Pilar; Moreno-Ger, Pablo; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar

    2009-01-01

    In this paper we present a general framework, called NUCLEO, for the application of socio-constructive educational approaches in higher education. The underlying pedagogical approach relies on an adaptation model in order to improve group dynamics, as this has been identified as one of the key features in the success of collaborative learning…

  10. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    DOE PAGESBeta

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.

  11. Accelerating the Convergence of Replica Exchange Simulations Using Gibbs Sampling and Adaptive Temperature Sets

    SciTech Connect

    Vogel, Thomas; Perez, Danny

    2015-08-28

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.
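
    For contrast with the configuration-database scheme reviewed above, the sketch below shows the conventional pairwise replica-exchange (parallel tempering) swap that the new method generalizes; it does not reproduce the authors' Gibbs-sampling mechanism or the adaptive temperature update.

```python
import math, random

def attempt_swap(replicas, i, j):
    """Pairwise replica-exchange (parallel tempering) swap between
    replicas i and j: accept with probability min(1, exp(dBeta * dU))."""
    d_beta = replicas[i]["beta"] - replicas[j]["beta"]
    d_u = replicas[i]["U"] - replicas[j]["U"]
    if random.random() < math.exp(min(0.0, d_beta * d_u)):
        # temperatures are exchanged; each walker keeps its configuration
        replicas[i]["beta"], replicas[j]["beta"] = (
            replicas[j]["beta"], replicas[i]["beta"])
        return True
    return False

random.seed(4)
replicas = [{"beta": 1.0 / t, "U": 10.0 * t} for t in (1.0, 1.2, 1.5, 2.0)]
print(attempt_swap(replicas, 0, 1))
```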

  12. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors

    PubMed Central

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
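
    The following toy controller illustrates the general idea of energy-aware adaptive sampling: the sensing interval is stretched or relaxed according to the battery state of charge and recently harvested power. The blending weights and thresholds are invented for illustration and are not the published EASA rules.

```python
def next_sampling_interval(base_interval_s, soc, harvest_mw, sample_cost_mw):
    """Scale the sensing interval to the node's energy state.

    soc            -- battery state of charge in [0, 1]
    harvest_mw     -- recent average harvested power (mW)
    sample_cost_mw -- average power consumed by the sensing duty cycle (mW)
    Returns an interval in seconds: sample fast when energy is plentiful,
    back off as the budget tightens.
    """
    energy_ratio = min(1.0, harvest_mw / sample_cost_mw)
    # blend stored and incoming energy; weights are illustrative
    budget = 0.6 * soc + 0.4 * energy_ratio
    if budget > 0.8:
        return base_interval_s        # full sampling rate
    if budget > 0.4:
        return base_interval_s * 2    # halve the rate
    return base_interval_s * 8        # survival mode

print(next_sampling_interval(60, soc=0.35, harvest_mw=5.0, sample_cost_mw=20.0))
```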

  13. Real-time nutrient monitoring in rivers: adaptive sampling strategies, technological challenges and future directions

    NASA Astrophysics Data System (ADS)

    Blaen, Phillip; Khamis, Kieran; Lloyd, Charlotte; Bradley, Chris

    2016-04-01

    Excessive nutrient concentrations in river waters threaten aquatic ecosystem functioning and can pose substantial risks to human health. Robust monitoring strategies are therefore required to generate reliable estimates of river nutrient loads and to improve understanding of the catchment processes that drive spatiotemporal patterns in nutrient fluxes. Furthermore, these data are vital for prediction of future trends under changing environmental conditions and thus the development of appropriate mitigation measures. In recent years, technological developments have led to an increase in the use of continuous in-situ nutrient analysers, which enable measurements at far higher temporal resolutions than can be achieved with discrete sampling and subsequent laboratory analysis. However, such instruments can be costly to run and difficult to maintain (e.g. due to high power consumption and memory requirements), leading to trade-offs between temporal and spatial monitoring resolutions. Here, we highlight how adaptive monitoring strategies, comprising a mixture of temporal sample frequencies controlled by one or more 'trigger variables' (e.g. river stage, turbidity, or nutrient concentration), can advance our understanding of catchment nutrient dynamics while simultaneously overcoming many of the practical and economic challenges encountered in typical in-situ river nutrient monitoring applications. We present examples of short-term variability in river nutrient dynamics, driven by complex catchment behaviour, which support our case for the development of monitoring systems that can adapt in real-time to rapid environmental changes. In addition, we discuss the advantages and disadvantages of current nutrient monitoring techniques, and suggest new research directions based on emerging technologies and highlight how these might improve: 1) monitoring strategies, and 2) understanding of linkages between catchment processes and river nutrient fluxes.
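
    A minimal sketch of the trigger-variable idea: the controller switches between a baseflow sampling interval and a much shorter event-mode interval whenever river stage or turbidity crosses a threshold. The thresholds and intervals are hypothetical.

```python
def sampling_interval_min(stage_m, turbidity_ntu,
                          stage_trigger=1.2, turbidity_trigger=50.0):
    """Adaptive sampling: tighten the interval when trigger variables
    indicate a storm event, relax it during baseflow to save power,
    reagents, and memory.  Thresholds are illustrative."""
    if stage_m > stage_trigger or turbidity_ntu > turbidity_trigger:
        return 15     # event mode: capture the rising limb
    return 120        # baseflow mode

print(sampling_interval_min(0.8, 12.0), sampling_interval_min(1.5, 80.0))
```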

  14. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.

    PubMed

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559

  15. Machine Learning Approaches to Rare Events Sampling and Estimation

    NASA Astrophysics Data System (ADS)

    Elsheikh, A. H.

    2014-12-01

    Given the severe impacts of rare events, we try to quantitatively answer the following two questions: How can we estimate the probability of a rare event? And what are the factors affecting these probabilities? We utilize machine learning classification methods to define the failure boundary (in the stochastic space) corresponding to a specific threshold of a rare event. The training samples for the classification algorithm are obtained using multilevel splitting and Monte Carlo (MC) simulations. Once the training of the classifier is performed, a full MC simulation can be performed efficiently by using the classifier as a reduced-order model replacing the full physics simulator. We apply the proposed method on a standard benchmark for CO2 leakage through an abandoned well. In this idealized test case, CO2 is injected into a deep aquifer and then spreads within the aquifer and, upon reaching an abandoned well, rises to a shallower aquifer. In the current study, we evaluate the probability of leakage of a pre-defined amount of the injected CO2 given a heavy-tailed distribution of the leaky well permeability. We show that machine learning based approaches significantly outperform direct MC and multilevel splitting methods in terms of efficiency and precision. The proposed algorithm's efficiency and reliability enabled us to perform a sensitivity analysis over the different modeling assumptions, including the effect of different prior distributions on the probability of CO2 leakage.
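
    A schematic of the classifier-as-surrogate estimate, using a toy limit-state function in place of the CO2 simulator: a classifier is trained on simulator-labelled points and then substituted for the simulator in a large Monte Carlo run. In practice the training points would come from multilevel splitting concentrated near the failure boundary; the model, threshold, and sample sizes here are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulator(x):
    """Stand-in for the full physics model: 'failure' when a nonlinear
    response exceeds a threshold (uncommon under the input distribution)."""
    return (x[:, 0] ** 2 + 1.5 * x[:, 1] > 5.0).astype(int)

# training set: in practice these points would come from multilevel
# splitting runs concentrated near the failure boundary
x_train = rng.normal(size=(2000, 2))
y_train = simulator(x_train)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(x_train, y_train)

# cheap full Monte Carlo on the classifier instead of the simulator
x_mc = rng.normal(size=(200_000, 2))
p_fail = clf.predict(x_mc).mean()
print(f"estimated failure probability: {p_fail:.2e}")
```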

  16. An efficient Bayesian inference approach to inverse problems based on an adaptive sparse grid collocation method

    NASA Astrophysics Data System (ADS)

    Ma, Xiang; Zabaras, Nicholas

    2009-03-01

    A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space, which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. A hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media.
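
    The essential loop, in sketch form: a cheap interpolant of the forward model is built once over the prior support and then queried inside random-walk Metropolis in place of the expensive solver. A 1-D regular-grid interpolant stands in for the adaptive hierarchical sparse grid, and the toy forward model and noise level are invented.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# toy forward model: one scalar observable depending on one parameter
forward = lambda k: np.sin(3 * k) + k**2
k_grid = np.linspace(-2, 2, 41)                 # stand-in for the sparse grid
surrogate = RegularGridInterpolator((k_grid,), forward(k_grid))

y_obs, sigma = 1.2, 0.1
def log_post(k):
    if not (-2 <= k <= 2):                      # uniform prior support
        return -np.inf
    r = (y_obs - surrogate([[k]])[0]) / sigma   # cheap likelihood evaluation
    return -0.5 * r * r

# random-walk Metropolis exploring the posterior of k via the surrogate
rng = np.random.default_rng(3)
k, lp = 0.0, log_post(0.0)
chain = []
for _ in range(20000):
    k_new = k + 0.2 * rng.standard_normal()
    lp_new = log_post(k_new)
    if np.log(rng.random()) < lp_new - lp:
        k, lp = k_new, lp_new
    chain.append(k)
print("posterior mean of k:", np.mean(chain[5000:]))
```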

  17. An adaptive management approach to controlling suburban deer

    USGS Publications Warehouse

    Nielson, C.K.; Porter, W.F.; Underwood, H.B.

    1997-01-01

    Distance sight-resight sampling has particular relevance to aerial surveys, in which height above ground and aircraft speed make the critical assumption of certain detection on the track-line unrealistic. Recent developments in distance sight-resight theory have left practical issues related to data collection as the major impediment to widespread use of distance sight-resight sampling in aerial surveys. We describe and evaluate a system to automatically log, store, and process data from distance sight-resight aerial surveys. The system has a primary digital system and a secondary audio system. The digital system comprises a sighting 'gun' and small keypad for each observer, a global positioning system (GPS) receiver, and an altimeter interface, all linked to a central laptop computer. The gun is used to record time and angle of declination from the horizon of sighted groups of animals as they pass the aircraft. The keypad is used to record information on species and group size. The altimeter interface records altitude from the aircraft's radar altimeter, and the GPS receiver provides location data at user-definable intervals. We wrote software to import data into a database and convert it into a form appropriate for distance sight-resight analyses. Perpendicular distance of sighted groups of animals from the flight path is calculated from altitude and angle of declination. Time, angle of declination, species, and group size of sightings by independent observers on the same side of the aircraft are used as criteria to classify single and duplicate sightings, allowing testing of the critical distance sampling assumption (g(0)=1) and estimation of g(0) if that assumption fails. An audio system comprising headphones for each observer and a 4-track tape recorder allows recording of data that are difficult to accommodate in the digital system and provides a backup to the digital system. We evaluated the system by conducting experimental surveys and reviewing results
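
    The perpendicular-distance step lends itself to a one-line calculation; assuming the standard level-flight trigonometry (the system described above may include refinements), the horizontal offset follows from altitude and the angle of declination:

```python
import math

def perpendicular_distance(altitude_m, declination_deg):
    """Horizontal distance of a sighted group from the flight path,
    from aircraft altitude and the sighting gun's angle of declination
    below the horizon."""
    return altitude_m / math.tan(math.radians(declination_deg))

# a group sighted 30 degrees below the horizon from 150 m altitude
print(f"{perpendicular_distance(150, 30):.0f} m")  # ~260 m
```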

  18. Farms adaptation to changes in flood risk: a management approach

    NASA Astrophysics Data System (ADS)

    Pivot, Jean-Marc; Martin, Philippe

    2002-10-01

    Creating flood expansion areas e.g. for the protection of urban areas from flooding involves a localised increase in risk which may require farmers to be compensated for crop damage or other losses. With this in mind, the paper sets out the approach used to study the problem and gives results obtained from a survey of farms liable to flooding in central France. The approach is based on a study of decisions made by farmers in situations of uncertainty, using the concept of 'model of action'. The results show that damage caused to farming areas by flooding should be considered both at field level and at farm level. The damage caused to the field depends on the flood itself, the fixed characteristics of the field, and the plant species cultivated. However, the losses to the farm taken as a whole can differ considerably from those for the flooded field, due to 'knock-on' effects on farm operations which depend on the internal organization, the availability of production resources, and the farmer's objectives, both for the farm as a whole and for its individual enterprises. Three main strategies regarding possible flood events were identified. Reasons for choosing one of these include the way the farmer perceives the risk and the size of the area liable to flooding. Finally, the formalisation of farm system management in the face of uncertainty, especially due to flooding, enables compensation to be calculated for farmers whose land is affected by the creation of flood expansion areas.

  19. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
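
    The repetitive-sampling comparison can be mimicked on synthetic data: the sketch below builds a spatially clustered wheat map, then contrasts sampling schemes with equal total sampled area but different segment sizes, reproducing the qualitative finding that many small segments yield higher precision. All numbers are invented, not the LANDSAT data.

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic county: clustered wheat fields (1) on a 600x600 pixel frame
coarse = (rng.random((30, 30)) < 0.3).astype(np.uint8)
county = np.kron(coarse, np.ones((20, 20), dtype=np.uint8))
true_fraction = county.mean()

def wheat_estimate(segment_px, n_segments):
    """Mean wheat fraction over randomly placed square segments."""
    rows = rng.integers(0, county.shape[0] - segment_px, n_segments)
    cols = rng.integers(0, county.shape[1] - segment_px, n_segments)
    return np.mean([county[r:r + segment_px, c:c + segment_px].mean()
                    for r, c in zip(rows, cols)])

# equal total sampled area, different segment sizes
for seg, n in [(100, 9), (50, 36), (25, 144)]:
    reps = [wheat_estimate(seg, n) for _ in range(200)]
    print(f"{n:>3} segments of {seg}px: bias {np.mean(reps) - true_fraction:+.4f}, "
          f"sd {np.std(reps):.4f}")
```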

  20. A global sampling approach to designing and reengineering RNA secondary structures

    PubMed Central

    Levin, Alex; Lis, Mieszko; Ponty, Yann; O’Donnell, Charles W.; Devadas, Srinivas; Berger, Bonnie; Waldispühl, Jérôme

    2012-01-01

    The development of algorithms for designing artificial RNA sequences that fold into specific secondary structures has many potential biomedical and synthetic biology applications. To date, this problem remains computationally difficult, and current strategies to address it resort to heuristics and stochastic search techniques. The most popular methods consist of two steps: First a random seed sequence is generated; next, this seed is progressively modified (i.e. mutated) to adopt the desired folding properties. Although computationally inexpensive, this approach raises several questions such as (i) the influence of the seed; and (ii) the efficiency of single-path directed searches that may be affected by energy barriers in the mutational landscape. In this article, we present RNA-ensign, a novel paradigm for RNA design. Instead of taking a progressive adaptive walk driven by local search criteria, we use an efficient global sampling algorithm to examine large regions of the mutational landscape under structural and thermodynamical constraints until a solution is found. When considering the influence of the seeds and the target secondary structures, our results show that, compared to single-path directed searches, our approach is more robust, succeeds more often and generates more thermodynamically stable sequences. An ensemble approach to RNA design is thus well worth pursuing as a complement to existing approaches. RNA-ensign is available at http://csb.cs.mcgill.ca/RNAensign. PMID:22941632

  1. Adaptive Thouless-Anderson-Palmer approach to inverse Ising problems with quenched random fields.

    PubMed

    Huang, Haiping; Kabashima, Yoshiyuki

    2013-06-01

    The adaptive Thouless-Anderson-Palmer equation is derived for inverse Ising problems in the presence of quenched random fields. We test the proposed scheme on Sherrington-Kirkpatrick, Hopfield, and random orthogonal models and find that the adaptive Thouless-Anderson-Palmer approach allows accurate inference of quenched random fields whose distribution can be either Gaussian or bimodal. In particular, another competitive method for inferring external fields, namely, the naive mean field method with diagonal weights, is compared and discussed. PMID:23848649
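
    As a point of reference, the naive mean-field baseline mentioned above (shown here without the diagonal-weight correction) admits a closed-form field estimate by inverting the self-consistency relation; the sketch checks it on synthetic data. This is the comparison method, not the adaptive Thouless-Anderson-Palmer equations themselves.

```python
import numpy as np

def infer_fields_nmf(m, J):
    """Naive mean-field inversion for the external fields of an Ising model:
    from m_i = tanh(h_i + sum_j J_ij m_j) it follows that
    h_i = atanh(m_i) - sum_j J_ij m_j."""
    return np.arctanh(m) - J @ m

# synthetic check: generate magnetizations from known fields and couplings
rng = np.random.default_rng(0)
n = 20
J = rng.normal(0.0, 0.1 / np.sqrt(n), (n, n))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)
h_true = rng.normal(0.0, 0.5, n)

m = np.tanh(h_true)            # solve the self-consistency by iteration
for _ in range(500):
    m = np.tanh(h_true + J @ m)

print(np.max(np.abs(infer_fields_nmf(m, J) - h_true)))  # ~0 at the fixed point
```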

  2. Design of Field Experiments for Adaptive Sampling of the Ocean with Autonomous Vehicles

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Ooi, B. H.; Cho, W.; Dao, M. H.; Tkalich, P.; Patrikalakis, N. M.

    2010-05-01

    Due to the highly non-linear and dynamical nature of oceanic phenomena, the predictive capability of various ocean models depends on the availability of operational data. A practical method to improve the accuracy of the ocean forecast is to use a data assimilation methodology to combine in-situ measured and remotely acquired data with numerical forecast models of the physical environment. Autonomous surface and underwater vehicles with various sensors are economic and efficient tools for exploring and sampling the ocean for data assimilation; however there is an energy limitation to such vehicles, and thus effective resource allocation for adaptive sampling is required to optimize the efficiency of exploration. In this paper, we use physical oceanography forecasts of the coastal zone of Singapore for the design of a set of field experiments to acquire useful data for model calibration and data assimilation. The design process of our experiments relied on the oceanography forecast including the current speed, its gradient, and vorticity in a given region of interest for which permits for field experiments could be obtained and for time intervals that correspond to strong tidal currents. Based on these maps, resources available to our experimental team, including Autonomous Surface Craft (ASC) are allocated so as to capture the oceanic features that result from jets and vortices behind bluff bodies (e.g., islands) in the tidal current. Results are summarized from this resource allocation process and field experiments conducted in January 2009.

  3. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models---with varying complexity---of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  4. Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil

    PubMed Central

    France, Brian; Bell, William; Chang, Emily; Scholten, Trudy

    2015-01-01

    Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation. PMID:26714315

  5. Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil.

    PubMed

    France, Brian; Bell, William; Chang, Emily; Scholten, Trudy

    2015-01-01

    Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation. PMID:26714315
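
    The payoff of pooling is easy to quantify with the textbook two-stage (Dorfman) scheme: analyze each pool once and retest members of positive pools individually. The sketch below assumes a perfect assay, whereas the study addresses dilution by filtration-based concentration; the numbers are illustrative.

```python
def expected_analyses(n_samples, pool_size, prevalence):
    """Expected number of lab analyses under two-stage (Dorfman) pooling:
    analyze each pool once; if a pool is positive, re-analyze its members
    individually.  Assumes a perfect assay (no dilution false negatives)."""
    pools = n_samples / pool_size
    p_pool_pos = 1 - (1 - prevalence) ** pool_size
    return pools * (1 + p_pool_pos * pool_size)

# 1000 field samples at 2% prevalence: pooling cuts the analysis count
print("individual testing: 1000 analyses")
for k in (5, 10, 20):
    print(f"pool size {k:>2}: {expected_analyses(1000, k, 0.02):.0f} analyses")
```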

  6. Land-based approach to evaluate sustainable land management and adaptive capacity of ecosystems/lands

    NASA Astrophysics Data System (ADS)

    Kust, German; Andreeva, Olga

    2015-04-01

    A number of new concepts and paradigms have appeared during recent decades, such as sustainable land management (SLM), climate change (CC) adaptation, environmental services, ecosystem health, and others. These initiatives still lack a common scientific platform, although some agreement on terminology has been reached, schemes of links and feedback loops have been created, and some models have been developed. Nevertheless, in spite of these scientific achievements, land-related issues are still not the focus of CC adaptation and mitigation, which have not grown much beyond the greenhouse gas (GHG) concept; this makes land degradation the "forgotten side of climate change". A possible way to integrate the concepts of climate and desertification/land degradation is to consider the GHG approach as providing a global solution, and the land approach as providing a local solution that covers other locally manifesting issues of global importance (biodiversity conservation, food security, disasters and risks, etc.) and can serve as a central concept among those. The SLM concept is a land-based approach which includes both the ecosystem-based approach (EbA) and the community-based approach (CbA). SLM can serve as an integral CC adaptation strategy, being based on the statement that the more healthy and resilient a system is, the less vulnerable and more adaptive it will be to any external changes and forces, including climate. The biggest scientific issue is how to evaluate SLM and the results of SLM investments. We suggest an approach based on understanding the balance or equilibrium of the land and nature components as the major sign of a sustainable system. From this point of view it is easier to understand the state of ecosystem stress, the size of the "health" range, the range of adaptive capacity, the drivers of degradation and the nature of SLM, as well as extended land use and the concept of environmental land management as an improved SLM approach

  7. Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach

    PubMed Central

    2014-01-01

    Background Cross-cultural adaptation is a necessary process for effectively using existing instruments in other cultural and language settings. The process of cross-culturally adapting existing instruments, including translation, is considered a critical step in establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice for achieving cultural and semantic equivalence of the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods The Iowa Infant Feeding Attitudes Scale, Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted utilizing a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn’s recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin’s back-translation approach was utilized, followed by committee review to address any discrepancies that emerged from translation. Results Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions Undertaking rigorous steps to effectively ensure cross-cultural adaptation increases our confidence that the conclusions we make based on our self-report instrument(s) will be stronger. In this way, our aim to achieve strong cross-cultural adaptation of our consolidated instruments was achieved while also providing a clear framework for other researchers choosing to utilize existing instruments for work in other cultural, geographic and population

  8. The Colorado Climate Preparedness Project: A Systematic Approach to Assessing Efforts Supporting State-Level Adaptation

    NASA Astrophysics Data System (ADS)

    Klein, R.; Gordon, E.

    2010-12-01

    Scholars and policy analysts often contend that an effective climate adaptation strategy must entail "mainstreaming," or incorporating responses to possible climate impacts into existing planning and management decision frameworks. Such an approach, however, makes it difficult to assess the degree to which decisionmaking entities are engaging in adaptive activities that may or may not be explicitly framed around a changing climate. For example, a drought management plan may not explicitly address climate change, but the activities and strategies outlined in it may reduce vulnerabilities posed by a variable and changing climate. Consequently, to generate a strategic climate adaptation plan requires identifying the entire suite of activities that are implicitly linked to climate and may affect adaptive capacity within the system. Here we outline a novel, two-pronged approach, leveraging social science methods, to understanding adaptation throughout state government in Colorado. First, we conducted a series of interviews with key actors in state and federal government agencies, non-governmental organizations, universities, and other entities engaged in state issues. The purpose of these interviews was to elicit information about current activities that may affect the state’s adaptive capacity and to identify future climate-related needs across the state. Second, we have developed an interactive database cataloging organizations, products, projects, and people actively engaged in adaptive planning and policymaking that are relevant to the state of Colorado. The database includes a wiki interface, helping create a dynamic component that will enable frequent updating as climate-relevant information emerges. The results of this project are intended to paint a clear picture of sectors and agencies with higher and lower levels of adaptation awareness and to provide a roadmap for the next gubernatorial administration to pursue a more sophisticated climate adaptation agenda

  9. The Application of Adaptive Sampling and Analysis Program (ASAP) Techniques to NORM Sites

    SciTech Connect

    Johnson, Robert; Smith, Karen P.; Quinn, John

    1999-10-29

    The results from the Michigan demonstration establish that this type of approach can be very effective for NORM sites. The advantages include (1) greatly reduced per-sample analytical costs; (2) a reduced reliance on soil sampling and ex situ gamma spectroscopy analyses; (3) the ability to combine characterization with remediation activities in one fieldwork cycle; (4) improved documentation; and (5) ultimately better remediation, as measured by greater precision in delineating soils that are not in compliance with requirements from soils that are in compliance. In addition, the demonstration showed that the use of real-time technologies, such as the RadInSoil, can facilitate the implementation of a Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM)-based final status survey program.

  10. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
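
    A toy version of the granule-generation step: given a parent resource and a CDF file, emit a small XML description. The element names below only gesture at the SPASE schema and the identifiers and URL are hypothetical; the real ADAPT output follows the full SPASE Granule definition.

```python
import xml.etree.ElementTree as ET

def granule_xml(parent_id, file_name, url):
    """Build a minimal Granule-style description for one CDF file.
    Element names only gesture at SPASE; they are not the full schema."""
    spase = ET.Element("Spase")
    granule = ET.SubElement(spase, "Granule")
    ET.SubElement(granule, "ResourceID").text = f"{parent_id}/{file_name}"
    ET.SubElement(granule, "ParentID").text = parent_id
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url
    return ET.tostring(spase, encoding="unicode")

# hypothetical parent resource and access URL
print(granule_xml("spase://Example/NumericalData/MissionX/PT1M",
                  "missionx_20150101_v01.cdf",
                  "https://example.nasa.gov/data/missionx_20150101_v01.cdf"))
```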

  11. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
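
    The core bookkeeping of a relevance network can be sketched as feedback records keyed by query and profile, generalized to new queries by term overlap. The class, similarity measure, and scoring below are simplifications invented for illustration, not the paper's actual model.

```python
from collections import defaultdict

class RelevanceNetwork:
    """Toy model of interactively learned contextual relevance.

    Stores user feedback as (query terms, profile) -> reference votes and
    generalizes to unseen queries by Jaccard similarity of query terms."""

    def __init__(self):
        self.memory = []  # (set of query terms, profile, reference, vote)

    def feedback(self, query, profile, ref, vote):
        self.memory.append((set(query.split()), profile, ref, vote))

    def rank(self, query, profile):
        terms = set(query.split())
        scores = defaultdict(float)
        for q_terms, prof, ref, vote in self.memory:
            if prof != profile:
                continue
            overlap = len(terms & q_terms) / len(terms | q_terms)  # Jaccard
            scores[ref] += overlap * vote
        return sorted(scores, key=scores.get, reverse=True)

net = RelevanceNetwork()
net.feedback("engine fuel pump", "mechanic", "doc-42", +1)
net.feedback("fuel pump removal", "mechanic", "doc-17", +1)
print(net.rank("fuel pump", "mechanic"))
```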

  12. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    SciTech Connect

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
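
    For the simplest of these situations (qualitative data with all-negative results), the standard acceptance-sampling algebra gives both quantities; the functions below implement that textbook relationship, with the FNR folded into the per-sample detection probability. These are not necessarily the report's exact formulas.

```python
import math

def n_for_confidence(conf, hit_prob, fnr=0.0):
    """Samples needed so that, with probability `conf`, at least one
    sample detects contamination present at per-sample detection
    probability `hit_prob`, allowing a false negative rate `fnr`."""
    p_detect = hit_prob * (1.0 - fnr)
    return math.ceil(math.log(1.0 - conf) / math.log(1.0 - p_detect))

def confidence_from_n(n, hit_prob, fnr=0.0):
    """Confidence that contamination would have been detected,
    given n all-negative samples."""
    p_detect = hit_prob * (1.0 - fnr)
    return 1.0 - (1.0 - p_detect) ** n

print(n_for_confidence(0.95, 0.10))            # 29 samples when FNR = 0
print(round(confidence_from_n(29, 0.10, 0.05), 3))
```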

  13. Novel Sample-handling Approach for XRD Analysis with Minimal Sample Preparation

    NASA Technical Reports Server (NTRS)

    Sarrazin, P.; Chipera, S.; Bish, D.; Blake, D.; Feldman, S.; Vaniman, D.; Bryson, C.

    2004-01-01

    Sample preparation and sample handling are among the most critical operations associated with X-ray diffraction (XRD) analysis. These operations require attention in a laboratory environment, but they become a major constraint in the deployment of XRD instruments for robotic planetary exploration. We are developing a novel sample handling system that dramatically relaxes the constraints on sample preparation by allowing characterization of coarse-grained material that would normally be impossible to analyze with conventional powder-XRD techniques.

  14. An enhanced adaptive management approach for remediation of legacy mercury in the South River.

    PubMed

    Foran, Christy M; Baker, Kelsie M; Grosso, Nancy R; Linkov, Igor

    2015-01-01

    Uncertainties about future conditions and the effects of chosen actions, as well as increasing resource scarcity, have been driving forces in the utilization of adaptive management strategies. However, many applications of adaptive management have been criticized for a number of shortcomings, including a limited ability to learn from actions and a lack of consideration of stakeholder objectives. To address these criticisms, we supplement existing adaptive management approaches with a decision-analytical approach that first informs the initial selection of management alternatives and then allows for periodic re-evaluation or phased implementation of management alternatives based on monitoring information and incorporation of stakeholder values. We describe the application of this enhanced adaptive management (EAM) framework to compare remedial alternatives for mercury in the South River, based on an understanding of the loading and behavior of mercury in the South River near Waynesboro, VA. The outcomes show that the ranking of remedial alternatives is influenced by uncertainty in the mercury loading model, by the relative importance placed on different criteria, and by cost estimates. The process itself demonstrates that a decision model can link project performance criteria, decision-maker preferences, environmental models, and short- and long-term monitoring information with management choices to help shape a remediation approach that provides useful information for adaptive, incremental implementation. PMID:25665032
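
    The decision-analytical layer can be sketched as a weighted-sum multi-criteria ranking with Monte Carlo propagation of score uncertainty, yielding the probability that each remedial alternative ranks first. The alternatives, criteria, weights, and scores below are entirely hypothetical, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(11)
criteria_weights = {"hg_load_reduction": 0.5, "cost": 0.3, "habitat_impact": 0.2}
# (mean score, uncertainty std) per alternative on each criterion, 0-1 scale
alternatives = {
    "bank stabilization": {"hg_load_reduction": (0.6, 0.15),
                           "cost": (0.7, 0.05), "habitat_impact": (0.8, 0.05)},
    "sediment capping":   {"hg_load_reduction": (0.8, 0.20),
                           "cost": (0.4, 0.10), "habitat_impact": (0.5, 0.10)},
    "monitored recovery": {"hg_load_reduction": (0.3, 0.05),
                           "cost": (0.9, 0.02), "habitat_impact": (0.9, 0.02)},
}

def rank_probabilities(n_draws=10000):
    """Probability each alternative ranks first under score uncertainty."""
    names = list(alternatives)
    wins = dict.fromkeys(names, 0)
    for _ in range(n_draws):
        totals = {a: sum(w * rng.normal(*alternatives[a][c])
                         for c, w in criteria_weights.items())
                  for a in names}
        wins[max(totals, key=totals.get)] += 1
    return {a: wins[a] / n_draws for a in names}

print(rank_probabilities())
```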

  15. 120nm resolution in thick samples with structured illumination and adaptive optics

    NASA Astrophysics Data System (ADS)

    Thomas, Benjamin; Sloan, Megan; Wolstenholme, Adrian J.; Kner, Peter

    2014-03-01

    Linear Structured Illumination Microscopy (SIM) provides a two-fold increase over the diffraction-limited resolution. SIM produces excellent images with 120 nm resolution in tissue culture cells in two and three dimensions. For SIM to work correctly, the point spread function (PSF) and optical transfer function (OTF) must be known, and, ideally, should be unaberrated. When imaging through thick samples, aberrations will be introduced into the optical system which will reduce the peak intensity and increase the width of the PSF. This will lead to reduced resolution and artifacts in SIM images. Adaptive optics (AO) can be used to correct the optical wavefront, restoring the PSF to its unaberrated state, and AO has been used in several types of fluorescence microscopy. We demonstrate that AO can be used with SIM to achieve 120 nm resolution through 25 μm of tissue by imaging through the full thickness of an adult C. elegans roundworm. The aberrations can be corrected over a 25 μm × 45 μm field of view with one wavefront correction setting, demonstrating that AO can be used effectively with widefield superresolution techniques.

  16. Preschoolers' narrative representations and childhood adaptation in an ethnoracially diverse sample.

    PubMed

    Grey, Izabela K; Yates, Tuppett M

    2014-01-01

    This investigation evaluated relations between preschoolers' representational content and coherence in the MacArthur Story Stem Battery (MSSB) at age four as related to child adjustment at age six. A community sample of 250 preschoolers (50% female; M(age) = 49.05 months, SD = 2.9; 46% Hispanic, 18% Black, 11.2% White, 0.4% Asian, and 24.4% multiracial) completed assessments of relational representations using the MSSB at age four and of child adjustment at age six, including a measure of child-reported depressive symptomatology and observer ratings of child aggression during a Bobo doll task and inhibitory control during a delay of gratification task. Regression analyses demonstrated prospective relations between negative mother representation and less inhibitory control, negative child representation and higher aggression, and narrative coherence and more inhibitory control. Interactive analyses revealed relations between negative mother representation and difficulties in inhibitory control among White children and weaker relations among Black children. Prospective relations between narrative coherence and increased inhibitory control were less pronounced for Hispanic children. Findings indicate that preschoolers' narratives can reveal the thematic content and structural coherence of their internalized beliefs and expectations of self and (m)other. Associations between representations and children's adaptation have clear implications for representational processes and interventions in development. PMID:25299891

  17. Kinetic Boltzmann approach adapted for modeling highly ionized matter created by x-ray irradiation of a solid

    NASA Astrophysics Data System (ADS)

    Ziaja, Beata; Saxena, Vikrant; Son, Sang-Kil; Medvedev, Nikita; Barbrel, Benjamin; Woloncewicz, Bianca; Stransky, Michal

    2016-05-01

    We report on the kinetic Boltzmann approach adapted for simulations of highly ionized matter created from a solid by its x-ray irradiation. X rays can excite inner-shell electrons, which leads to the creation of deeply lying core holes. Their relaxation, especially in heavier elements, can take complicated paths, leading to a large number of active configurations. Their number can be so large that solving the set of respective evolution equations becomes computationally inefficient and another modeling approach should be used instead. To circumvent this complexity, the commonly used continuum models employ a superconfiguration scheme. Here, we propose an alternative approach which still uses "true" atomic configurations but limits their number by restricting the sample relaxation to the predominant relaxation paths. We test its reliability, performing respective calculations for a bulk material consisting of light atoms and comparing the results with a full calculation including all relaxation paths. Prospective application for heavy elements is discussed.

  18. Kinetic Boltzmann approach adapted for modeling highly ionized matter created by x-ray irradiation of a solid.

    PubMed

    Ziaja, Beata; Saxena, Vikrant; Son, Sang-Kil; Medvedev, Nikita; Barbrel, Benjamin; Woloncewicz, Bianca; Stransky, Michal

    2016-05-01

    We report on the kinetic Boltzmann approach adapted for simulations of highly ionized matter created from a solid by its x-ray irradiation. X rays can excite inner-shell electrons, which leads to the creation of deeply lying core holes. Their relaxation, especially in heavier elements, can take complicated paths, leading to a large number of active configurations. Their number can be so large that solving the set of respective evolution equations becomes computationally inefficient and another modeling approach should be used instead. To circumvent this complexity, the commonly used continuum models employ a superconfiguration scheme. Here, we propose an alternative approach which still uses "true" atomic configurations but limits their number by restricting the sample relaxation to the predominant relaxation paths. We test its reliability, performing respective calculations for a bulk material consisting of light atoms and comparing the results with a full calculation including all relaxation paths. Prospective application for heavy elements is discussed. PMID:27300998

  19. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
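
    The practical consequence of the result above is that the anomaly threshold on the cosine similarity score can be set from the noise level alone. The paper gives the score distribution analytically; the sketch below instead approximates the same threshold by Monte Carlo simulation of noisy copies of the reference shape, an assumption made here to keep the example self-contained.

```python
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def adaptive_threshold(reference, sigma, alpha=0.01, n_sim=20000, rng=None):
    """Similarity score below which a shape is flagged anomalous, chosen
    so that a normal shape at noise level sigma is falsely flagged with
    probability alpha (the paper derives this analytically)."""
    rng = rng or np.random.default_rng(0)
    noisy = reference + sigma * rng.standard_normal((n_sim, reference.size))
    scores = noisy @ reference / (
        np.linalg.norm(noisy, axis=1) * np.linalg.norm(reference))
    return np.quantile(scores, alpha)

# Toy ECG-like example: a time-reversed cycle under the same noise level
# scores well below the noise-adapted threshold and is flagged.
t = np.linspace(0, 1, 200)
reference = np.sin(2 * np.pi * t) * np.exp(-4 * t)
thr = adaptive_threshold(reference, sigma=0.5)
anomaly = reference[::-1] \
    + 0.5 * np.random.default_rng(1).standard_normal(t.size)
print(cosine_similarity(anomaly, reference) < thr)  # True -> anomaly
```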

  20. Neural Network Aided Adaptive Extended Kalman Filtering Approach for DGPS Positioning

    NASA Astrophysics Data System (ADS)

    Jwo, Dah-Jing; Huang, Hung-Chih

    2004-09-01

    The extended Kalman filter, when employed in the GPS receiver as the navigation state estimator, provides optimal solutions if the noise statistics for the measurement and system are completely known. In practice, the noise varies with time, which results in performance degradation. The covariance matching method is a conventional adaptive approach for estimation of noise covariance matrices. The technique attempts to make the actual filter residuals consistent with their theoretical covariance. However, this innovation-based adaptive estimation shows very noisy results if the window size is small. To resolve the problem, a multilayered neural network is trained to identify the measurement noise covariance matrix, in which the back-propagation algorithm is employed to iteratively adjust the link weights using the steepest descent technique. Numerical simulations show that based on the proposed approach the adaptation performance is substantially enhanced and the positioning accuracy is substantially improved.
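
    The covariance-matching step criticized above (and replaced in the paper by a trained neural network) is easy to state: over a sliding window, the sample covariance of the filter innovations is matched to its theoretical value H P⁻ Hᵀ + R and solved for R. A minimal sketch of that conventional baseline follows, with an assumed one-dimensional usage example; it is not the paper's neural-network estimator.

```python
from collections import deque

import numpy as np

class InnovationCovarianceEstimator:
    """Windowed covariance matching: estimate the measurement noise
    covariance R from recent Kalman filter innovations. Short windows
    give noisy estimates; that weakness is what the neural network in
    the paper is trained to overcome."""
    def __init__(self, window=50):
        self.buf = deque(maxlen=window)

    def update(self, innovation, H, P_prior):
        self.buf.append(np.outer(innovation, innovation))
        C_hat = np.mean(self.buf, axis=0)   # sample innovation covariance
        R_hat = C_hat - H @ P_prior @ H.T   # match C = H P- H' + R for R
        # Project onto positive semi-definite matrices for short windows.
        w, V = np.linalg.eigh(R_hat)
        return V @ np.diag(np.maximum(w, 1e-9)) @ V.T

# Toy 1-D check: innovations with variance 4.0 and H P- H' = 0.5 should
# give R_hat near 3.5.
est = InnovationCovarianceEstimator(window=30)
H, P_prior = np.array([[1.0]]), np.array([[0.5]])
for d in np.random.default_rng(0).normal(0.0, 2.0, size=200):
    R_hat = est.update(np.array([d]), H, P_prior)
print(R_hat)
```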

  1. A Time-Critical Adaptive Approach for Visualizing Natural Scenes on Different Devices

    PubMed Central

    Dong, Tianyang; Liu, Siyuan; Xia, Jiajia; Fan, Jing; Zhang, Ling

    2015-01-01

    To automatically adapt to various hardware and software environments on different devices, this paper presents a time-critical adaptive approach for visualizing natural scenes. In this method, a simplified expression of a tree model is used for different devices. The best rendering scheme is intelligently selected to generate a particular scene by estimating the rendering time of trees based on their visual importance. Therefore, this approach can ensure the reality of natural scenes while maintaining a constant frame rate for their interactive display. To verify its effectiveness and flexibility, this method is applied in different devices, such as a desktop computer, laptop, iPad and smart phone. Applications show that the method proposed in this paper can not only adapt to devices with different computing abilities and system resources very well but can also achieve rather good visual realism and a constant frame rate for natural scenes. PMID:25723177
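
    The scheme-selection step described above amounts to a budgeted level-of-detail choice: every tree offers several rendering schemes with estimated costs, and the frame-time budget is spent on the most visually important trees first. The sketch below is a greedy illustration of that idea with invented data fields; the paper's actual rendering-time estimator is not reproduced.

```python
def select_schemes(trees, budget_ms):
    """Pick a rendering scheme per tree, upgrading the most visually
    important trees first until the frame-time budget is exhausted.
    trees: list of {"importance": float, "schemes": [(cost_ms, quality)]}
    with schemes sorted by increasing cost."""
    choice = [0] * len(trees)               # start at the cheapest scheme
    spent = sum(t["schemes"][0][0] for t in trees)
    order = sorted(range(len(trees)), key=lambda i: -trees[i]["importance"])
    upgraded = True
    while upgraded:
        upgraded = False
        for i in order:                     # most important trees first
            schemes = trees[i]["schemes"]
            if choice[i] + 1 < len(schemes):
                extra = schemes[choice[i] + 1][0] - schemes[choice[i]][0]
                if spent + extra <= budget_ms:
                    choice[i] += 1
                    spent += extra
                    upgraded = True
    return choice, spent

trees = [{"importance": 0.9, "schemes": [(1, 0.2), (3, 0.6), (8, 1.0)]},
         {"importance": 0.3, "schemes": [(1, 0.2), (3, 0.6)]}]
print(select_schemes(trees, budget_ms=8))   # -> ([1, 1], 6)
```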

  2. Adaptation to heat health risk among vulnerable urban residents: a multi-city approach

    NASA Astrophysics Data System (ADS)

    Wilhelmi, O.; Hayden, M.; Brenkert-Smith, H.

    2010-12-01

    Recent studies on climate impacts demonstrate that climate change will have differential consequences in the U.S. at the regional and local scales. A changing climate is predicted to increase the frequency, intensity and impacts of extreme heat events, prompting the need to develop preparedness and adaptation strategies that reduce societal vulnerability. Central to understanding societal vulnerability is a population’s adaptive capacity, which in turn influences adaptation: the actual adjustments made to cope with the impacts from current and future hazardous heat events. To date, few studies have considered the complexity of vulnerability and its relationship to the capacity to cope with or adapt to extreme heat. In this presentation we will discuss a pilot project conducted in 2009 in Phoenix, AZ, which explored urban societal vulnerability and adaptive capacity to extreme heat in several neighborhoods. Household-level surveys revealed differential adaptive capacity among the neighborhoods and social groups. Building on this pilot project, and in order to develop a methodological framework that could be used across locales, we also present an expansion of this project into Houston, TX and Toronto, Canada, with the goal of furthering our understanding of adaptive capacity to extreme heat in very different urban settings. This presentation will communicate the results of the extreme heat vulnerability survey in Phoenix as well as the multidisciplinary, multi-model framework that will be used to explore urban vulnerability and adaptation strategies to heat in Houston and Toronto. We will outline challenges and opportunities in furthering our understanding of adaptive capacity and the need to approach these problems from a macro to a micro level.

  3. Three Authentic Curriculum-Integration Approaches to Bird Adaptations That Incorporate Technology and Thinking Skills

    ERIC Educational Resources Information Center

    Rule, Audrey C.; Barrera, Manuel T., III

    2008-01-01

    Integration of subject areas with technology and thinking skills is a way to help teachers cope with today's overloaded curriculum and to help students see the connectedness of different curriculum areas. This study compares three authentic approaches to teaching a science unit on bird adaptations for habitat that integrate thinking skills and…

  4. Assessment of Social Competence, Adaptive Behaviors, and Approaches to Learning with Young Children. Working Paper Series.

    ERIC Educational Resources Information Center

    Meisels, Samuel J.; Atkins-Burnett, Sally; Nicholson, Julie

    Prepared in support of the Early Childhood Longitudinal Study (ECLS), which will examine children's early school experiences beginning with kindergarten, this working paper focuses on research regarding the measurement of young children's social competence, adaptive behavior, and approaches to learning. The paper reviews the key variables and…

  5. AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT

    EPA Science Inventory

    A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two benchmark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...

  6. Adaptive leadership and person-centered care: a new approach to solving problems.

    PubMed

    Corazzini, Kirsten N; Anderson, Ruth A

    2014-01-01

    Successfully transitioning to person-centered care in nursing homes requires a new approach to solving care issues. The adaptive leadership framework suggests that expert providers must support frontline caregivers in their efforts to develop high-quality, person-centered solutions. PMID:25237881

  7. Complexity Thinking in PE: Game-Centred Approaches, Games as Complex Adaptive Systems, and Ecological Values

    ERIC Educational Resources Information Center

    Storey, Brian; Butler, Joy

    2013-01-01

    Background: This article draws on the literature relating to game-centred approaches (GCAs), such as Teaching Games for Understanding, and dynamical systems views of motor learning to demonstrate a convergence of ideas around games as complex adaptive learning systems. This convergence is organized under the title "complexity thinking"…

  8. EXSPRT: An Expert Systems Approach to Computer-Based Adaptive Testing.

    ERIC Educational Resources Information Center

    Frick, Theodore W.; And Others

    Expert systems can be used to aid decision making. A computerized adaptive test (CAT) is one kind of expert system, although it is not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. EXSPRT-R uses random selection of test items,…

  9. An Enhanced Approach to Combine Item Response Theory with Cognitive Diagnosis in Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Chun; Zheng, Chanjin; Chang, Hua-Hua

    2014-01-01

    Computerized adaptive testing offers the possibility of gaining information on both the overall ability and cognitive profile in a single assessment administration. Some algorithms aiming for these dual purposes have been proposed, including the shadow test approach, the dual information method (DIM), and the constraint weighted method. The…

  10. ASICs Approach for the Implementation of a Symmetric Triangular Fuzzy Coprocessor and Its Application to Adaptive Filtering

    NASA Technical Reports Server (NTRS)

    Starks, Scott; Abdel-Hafeez, Saleh; Usevitch, Bryan

    1997-01-01

    This paper discusses the implementation of a fuzzy logic system using an ASICs design approach. The approach is based upon combining the inherent advantages of symmetric triangular membership functions and fuzzy singleton sets to obtain a novel structure for fuzzy logic system application development. The resulting structure utilizes a fuzzy static RAM to store the rule-base and the end-points of the triangular membership functions. This provides advantages over other approaches in which all sampled values of membership functions for all universes must be stored. The fuzzy coprocessor structure implements the fuzzification and defuzzification processes through a two-stage parallel pipeline architecture which is capable of executing complex fuzzy computations in less than 0.55 μs with an accuracy of more than 95%, thus making it suitable for a wide range of applications. Using the approach presented in this paper, a fuzzy logic rule-base can be directly downloaded via a host processor to an on-chip rule-base memory with a size of 64 words. The fuzzy coprocessor's design supports up to 49 rules for seven fuzzy membership functions associated with each of the chip's two input variables. This feature allows designers to create fuzzy logic systems without the need for additional on-board memory. Finally, the paper reports on simulation studies that were conducted for several adaptive filter applications using the least-mean-squares adaptive algorithm for adjusting the knowledge rule-base.
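
    The memory saving claimed above comes from the fact that a symmetric triangular membership function is fully determined by its end-points (equivalently, its center and half-width), so membership degrees can be computed on the fly instead of being stored as sampled curves over the whole universe of discourse. A small illustrative sketch:

```python
def triangular_membership(x, center, half_width):
    """Degree of membership of x in a symmetric triangular fuzzy set
    defined only by its center and half-width."""
    return max(0.0, 1.0 - abs(x - center) / half_width)

# Seven membership functions per input variable, as on the chip described
# above; the centers and widths here are illustrative.
centers = [-3, -2, -1, 0, 1, 2, 3]
print([triangular_membership(0.4, c, 1.0) for c in centers])
# -> only the sets centered at 0 and 1 fire (0.6 and 0.4)
```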

  11. Selecting a Sample for Your Experiment: A Non-Random Stratified Sampling Approach

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2012-01-01

    The purpose of this paper is to develop a more general method for sample recruitment in experiments that is purposive (not random) and that results in a sample that is compositionally similar to the generalization population. This work builds on Tipton et al. (2011) by offering solutions to a larger class of problems than the non-overlapping…

  12. The role of adaptive management as an operational approach for resource management agencies

    USGS Publications Warehouse

    Johnson, B.L.

    1999-01-01

    In making resource management decisions, agencies use a variety of approaches that involve different levels of political concern, historical precedence, data analyses, and evaluation. Traditional decision-making approaches have often failed to achieve objectives for complex problems in large systems, such as the Everglades or the Colorado River. I contend that adaptive management is the best approach available to agencies for addressing this type of complex problem, although its success has been limited thus far. Traditional decision-making approaches have been fairly successful at addressing relatively straightforward problems in small, replicated systems, such as management of trout in small streams or pulp production in forests. However, this success may be jeopardized as more users place increasing demands on these systems. Adaptive management has received little attention from agencies for addressing problems in small-scale systems, but I suggest that it may be a useful approach for creating a holistic view of common problems and developing guidelines that can then be used in simpler, more traditional approaches to management. Although adaptive management may be more expensive to initiate than traditional approaches, it may be less expensive in the long run if it leads to more effective management. The overall goal of adaptive management is not to maintain an optimal condition of the resource, but to develop an optimal management capacity. This is accomplished by maintaining ecological resilience that allows the system to react to inevitable stresses, and generating flexibility in institutions and stakeholders that allows managers to react when conditions change. The result is that, rather than managing for a single, optimal state, we manage within a range of acceptable outcomes while avoiding catastrophes and irreversible negative effects. Copyright © 1999 by The Resilience Alliance.

  13. The fate of early experience following developmental change: longitudinal approaches to individual adaptation in childhood.

    PubMed

    Sroufe, L A; Egeland, B; Kreutzer, T

    1990-10-01

    2 strategies were used to investigate the continued impact of early experience and adaptation given subsequent experience and/or developmental change in a poverty sample (N = 190). Groups were defined whose adaptation was similar during the preschool years but consistently different earlier; then these 2 groups were compared in elementary school. In addition, a series of regression analyses was performed in which variance accounted for by near-in or contemporary predictors of adaptation in middle childhood was removed before adding earlier adaptation in subsequent steps. Children showing positive adaptation in the infant/toddler period showed greater rebound in the elementary school years, despite poor functioning in the preschool period. Regression analyses revealed some incremental power of early predictors with intermediate predictors removed. The results were interpreted as supporting Bowlby's thesis that adaptation is always a product of both developmental history and current circumstances. While this research cannot resolve such a complicated issue, it does point to the need for complex formulations to guide research on individual development. PMID:2245730

  14. Experimental Approaches to Microarray Analysis of Tumor Samples

    ERIC Educational Resources Information Center

    Furge, Laura Lowe; Winter, Michael B.; Meyers, Jacob I.; Furge, Kyle A.

    2008-01-01

    Comprehensive measurement of gene expression using high-density nucleic acid arrays (i.e. microarrays) has become an important tool for investigating the molecular differences in clinical and research samples. Consequently, inclusion of discussion in biochemistry, molecular biology, or other appropriate courses of microarray technologies has…

  15. NEW APPROACHES TO THE PRESERVATION OF CONTAMINANTS IN WATER SAMPLES

    EPA Science Inventory

    The potential of antibiotics, chemical biocides and lytic enzymes in preserving nutrients, biological oxygen demand and oil and grease in water and sewage effluents was studied. Preliminary studies concerning the effect of drugs on cell growth and oxygen utilization in samples st...

  16. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.

    PubMed

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762
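
    The abstract does not spell out Enzymer's inner loop, but the name "adaptive defect weighted sampling" suggests the general pattern sketched below: positions that contribute most to the ensemble defect are the most likely to be mutated, and mutations are kept when they lower the defect. Everything here is an assumed illustration; `ensemble_defect` and `position_defects` stand in for calls to a folding engine such as NUPACK, not for Enzymer's actual API.

```python
import random

def defect_weighted_step(seq, target, ensemble_defect, position_defects):
    """One sampling step: mutate a position drawn with probability
    proportional to its defect contribution; keep improvements only."""
    weights = position_defects(seq, target)       # higher = worse position
    pos = random.choices(range(len(seq)), weights=weights)[0]
    base = random.choice([b for b in "ACGU" if b != seq[pos]])
    candidate = seq[:pos] + base + seq[pos + 1:]
    if ensemble_defect(candidate, target) < ensemble_defect(seq, target):
        return candidate
    return seq

# Toy stand-ins so the sketch runs: Hamming distance to a fixed sequence
# plays the role of the NUPACK ensemble defect.
target_seq = "GGGAAACCC"
toy_defect = lambda s, t: sum(a != b for a, b in zip(s, target_seq))
toy_pos = lambda s, t: [1 + 5 * (a != b) for a, b in zip(s, target_seq)]
seq = "AAAAAAAAA"
for _ in range(300):
    seq = defect_weighted_step(seq, None, toy_defect, toy_pos)
print(seq)  # drifts toward the toy target
```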

  17. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures

    PubMed Central

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762

  18. The Parent Version of the Preschool Social Skills Rating System: Psychometric Analysis and Adaptation with a German Preschool Sample

    ERIC Educational Resources Information Center

    Hess, Markus; Scheithauer, Herbert; Kleiber, Dieter; Wille, Nora; Erhart, Michael; Ravens-Sieberer, Ulrike

    2014-01-01

    The Social Skills Rating System (SSRS) developed by Gresham and Elliott (1990) is a multirater, norm-referenced instrument measuring social skills and adaptive behavior in preschool children. The aims of the present study were (a) to test the factorial structure of the Parent Form of the SSRS for the first time with a German preschool sample (391…

  19. Some Features of the Sampling Distribution of the Ability Estimate in Computerized Adaptive Testing According to Two Stopping Rules.

    ERIC Educational Resources Information Center

    Blais, Jean-Guy; Raiche, Gilles

    This paper examines some characteristics of the statistics associated with the sampling distribution of the proficiency level estimate when the Rasch model is used. These characteristics allow the judgment of the meaning to be given to the proficiency level estimate obtained in adaptive testing, and as a consequence, they can illustrate the…

  20. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-01

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/. PMID:25083512
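
    The abstract's description of WBMS translates into a short loop: each variable enters a random sub-model with probability equal to its current weight, the sub-models are scored, and the weights are re-estimated as the inclusion frequency of each variable among the best sub-models, so the variable space shrinks step by step. The sketch below is a schematic reading of that description, with an ordinary least-squares error standing in for the cross-validated PLS error used for NIR calibration.

```python
import numpy as np

def wbms_step(X, y, weights, error_fn, n_models=200, top_frac=0.1,
              rng=None):
    """One weighted binary matrix sampling step: sample sub-models, score
    them, and return inclusion frequencies among the best as new
    weights."""
    rng = rng or np.random.default_rng(0)
    B = rng.random((n_models, X.shape[1])) < weights  # binary matrix
    B[~B.any(axis=1)] = True                          # no empty sub-models
    errors = np.array([error_fn(X[:, row], y) for row in B])
    best = B[np.argsort(errors)[: max(1, int(n_models * top_frac))]]
    return best.mean(axis=0), errors.min()

# Toy usage: y depends only on the first three of ten variables.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 10))
y = X[:, :3] @ np.array([1.0, -2.0, 1.5])
ols_err = lambda Xs, ys: np.mean(
    (ys - Xs @ np.linalg.lstsq(Xs, ys, rcond=None)[0]) ** 2)
w = np.full(10, 0.5)
for _ in range(5):
    w, best_err = wbms_step(X, y, w, ols_err)
print(np.round(w, 2))  # weights concentrate on the informative variables
```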

  1. An adaptive online learning approach for Support Vector Regression: Online-SVR-FID

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Zio, Enrico

    2016-08-01

    Support Vector Regression (SVR) is a popular supervised data-driven approach for building empirical models from available data. Like all data-driven methods, under non-stationary environmental and operational conditions it needs to be provided with adaptive learning capabilities, which might become computationally burdensome with large datasets accumulating dynamically. In this paper, a cost-efficient online adaptive learning approach is proposed for SVR by combining Feature Vector Selection (FVS) and Incremental and Decremental Learning. The proposed approach adaptively modifies the model only when different pattern drifts are detected according to proposed criteria. Two tolerance parameters are introduced in the approach to control the computational complexity, reduce the influence of the intrinsic noise in the data and avoid the overfitting problem of SVR. Comparisons of the prediction results are made with other online learning approaches, e.g. NORMA, SOGA, KRLS and Incremental Learning, on several artificial datasets and a real case study concerning time series prediction based on data recorded on a component of a nuclear power generation system. The performance indicators MSE and MARE computed on the test dataset demonstrate the efficiency of the proposed online learning method.
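
    The gating idea, reduced to its simplest form: leave the model untouched while new points are predicted within tolerance, and trigger incremental (and, to bound complexity, decremental) learning only when the error suggests a drift. The sketch below is an interface-level illustration of that policy; the `predict`/`add_sample`/`remove_oldest` methods and the two tolerances are assumed stand-ins, not the paper's Online-SVR-FID implementation, which distinguishes several drift types via Feature Vector Selection.

```python
def online_update(model, x, y, err_tol, max_size):
    """Drift-gated online learning step for a regression model."""
    err = abs(model.predict(x) - y)
    if err <= err_tol:
        return model              # prediction still good: keep model fixed
    model.add_sample(x, y)        # incremental learning on the new pattern
    while model.n_samples > max_size:
        model.remove_oldest()     # decremental learning bounds model size
    return model
```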

  2. Station-keeping control for a stratospheric airship platform via fuzzy adaptive backstepping approach

    NASA Astrophysics Data System (ADS)

    Yang, Yueneng; Wu, Jie; Zheng, Wei

    2013-04-01

    This paper presents a novel approach for station-keeping control of a stratospheric airship platform in the presence of parametric uncertainty and external disturbance. First, conceptual design of the stratospheric airship platform is introduced, including the target mission, configuration, energy sources, propeller and payload. Second, the dynamics model of the airship platform is presented, and the mathematical model of its horizontal motion is derived. Third, a fuzzy adaptive backstepping control approach is proposed to develop the station-keeping control system for the simplified horizontal motion. The backstepping controller is designed assuming that the airship model is accurately known, and a fuzzy adaptive algorithm is used to approximate the uncertainty of the airship model. The stability of the closed-loop control system is proven via the Lyapunov theorem. Finally, simulation results illustrate the effectiveness and robustness of the proposed control approach.

  3. A new approach for designing self-organizing systems and application to adaptive control

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Zhang, Shi; Lin, Yueqing; Huang, Song

    1993-01-01

    There is tremendous interest in the design of intelligent machines capable of autonomous learning and skillful performance under complex environments. A major task in designing such systems is to make the system plastic and adaptive when presented with new and useful information and stable in response to irrelevant events. A great body of knowledge, based on neuro-physiological concepts, has evolved as a possible solution to this problem. Adaptive resonance theory (ART) is a classical example under this category. The system dynamics of an ART network is described by a set of differential equations with nonlinear functions. An approach for designing self-organizing networks characterized by nonlinear differential equations is proposed.

  4. Adapting Evidence-based Mental Health Treatments in Community Settings: Preliminary Results from a Partnership Approach

    PubMed Central

    Southam-Gerow, Michael A.; Hourigan, Shannon E.; Allin, Robert B.

    2009-01-01

    This paper describes the application of a university-community partnership model to the problem of adapting evidence-based treatment approaches in a community mental health setting. Background on partnership research is presented, with consideration of methodological and practical issues related to this kind of research. Then, a rationale for using partnerships as a basis for conducting mental health treatment research is presented. Finally, an ongoing partnership research project concerned with the adaptation of evidence-based mental health treatments for childhood internalizing problems in community settings is presented, with preliminary results of the ongoing effort discussed. PMID:18697917

  5. An Efficient Adaptive Angle-Doppler Compensation Approach for Non-Sidelooking Airborne Radar STAP.

    PubMed

    Shen, Mingwei; Yu, Jia; Wu, Di; Zhu, Daiyin

    2015-01-01

    In this study, the effects of clutter dispersion in non-sidelooking airborne radar on space-time adaptive processing (STAP) are considered, and an efficient adaptive angle-Doppler compensation (EAADC) approach is proposed to improve the clutter suppression performance. In order to reduce the computational complexity, the reduced-dimension sparse reconstruction (RDSR) technique is introduced into the angle-Doppler spectrum estimation to extract the parameters required for compensating the clutter spectral center misalignment. Simulation results are presented to demonstrate the effectiveness of the proposed algorithm. PMID:26053755

  6. Approaches to retrospective sampling for longitudinal transition regression models

    PubMed Central

    Hunsberger, Sally; Albert, Paul S.; Thoma, Marie

    2016-01-01

    For binary diseases that relapse and remit, it is often of interest to estimate the effect of covariates on the transition process between disease states over time. The transition process can be characterized by modeling the probability of the binary event given the individual’s history. Designing studies that examine the impact of time-varying covariates over time can lead to the collection of extensive amounts of data. Sometimes it may be possible to collect and store tissue, blood or images and retrospectively analyze this covariate information. In this paper we consider efficient sampling designs that do not require biomarker measurements on all subjects. We describe appropriate estimation methods for transition probabilities and functions of these probabilities, and evaluate the efficiency of the estimates from the proposed sampling designs. These new methods are illustrated with data from a longitudinal study of bacterial vaginosis, a common relapsing-remitting vaginal infection of women of childbearing age.

  7. A comparison of adaptive sampling designs and binary spatial models: A simulation study using a census of Bromus inermis

    USGS Publications Warehouse

    Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa

    2013-01-01

    Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field-based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9% prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to unsampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.

  8. Mass Spectrometry Imaging Using the Stretched Sample Approach

    PubMed Central

    Zimmerman, Tyler A.; Rubakhin, Stanislav S.; Sweedler, Jonathan V.

    2011-01-01

    Matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI) can determine tissue localization for a variety of analytes with high sensitivity, chemical specificity, and spatial resolution. MS image quality typically depends on the MALDI matrix application method used, particularly when the matrix solution or powder is applied directly to the tissue surface. Improper matrix application results in spatial redistribution of analytes and reduced MS signal quality. Here we present a stretched sample imaging protocol that removes the dependence of MS image quality on the matrix application process and improves analyte extraction and sample desalting. First, the tissue sample is placed on a monolayer of solid support beads that are embedded in a hydrophobic membrane. Stretching the membrane fragments the tissue into thousands of nearly single-cell sized islands, with the pieces physically isolated from each other by the membrane. This spatial isolation prevents analyte transfer between beads, allowing for longer exposure of the tissue fragments to the MALDI matrix, thereby improving detectability of small analyte quantities without sacrificing spatial resolution. When using this method to reconstruct chemical images, complications result from non-uniform stretching of the supporting membrane. Addressing this concern, several computational tools enable automated data acquisition at individual bead locations and allow reconstruction of ion images corresponding to the original spatial conformation of the tissue section. Using mouse pituitary, we demonstrate the utility of this stretched imaging technique for characterizing peptide distributions in heterogeneous tissues at nearly single-cell resolution. PMID:20680608

  9. Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures

    SciTech Connect

    Calyam, Prasad

    2014-09-15

    The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.

  10. A Direct Adaptive Control Approach in the Presence of Model Mismatch

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.; Tao, Gang; Khong, Thuan

    2009-01-01

    This paper considers the problem of direct model reference adaptive control when the plant-model matching conditions are violated due to abnormal changes in the plant or incorrect knowledge of the plant's mathematical structure. The approach consists of direct adaptation of state feedback gains for state tracking, and simultaneous estimation of the plant-model mismatch. Because of the mismatch, the plant can no longer track the state of the original reference model, but may be able to track a new reference model that still provides satisfactory performance. The reference model is updated if the estimated plant-model mismatch exceeds a bound that is determined via robust stability and/or performance criteria. The resulting controller is a hybrid direct-indirect adaptive controller that offers asymptotic state tracking in the presence of plant-model mismatch as well as parameter deviations.

  11. Prediction of contact forces of underactuated finger by adaptive neuro fuzzy approach

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Abbasi, Almas; Kiani, Kourosh; Al-Shammari, Eiman Tamah

    2015-12-01

    Passive underactuation can be used to obtain an adaptive finger: the underactuation principle allows the finger to adapt its shape to the grasped object, and underactuated fingers do not require a control algorithm. In this study a kinetostatic model of the underactuated finger mechanism was analyzed. The underactuation is achieved by adding compliance in every finger joint. Since the contact forces of the finger depend on the contact position between the finger and the object, it is suitable to build a prediction model for the contact forces as a function of the contact positions of the finger and the grasped object. In this study prediction of the contact forces was established by a soft computing approach. An adaptive neuro-fuzzy inference system (ANFIS) was applied as the soft computing method to perform the prediction of the finger contact forces.

  12. A simple and flexible graphical approach for adaptive group-sequential clinical trials.

    PubMed

    Sugitani, Toshifumi; Bretz, Frank; Maurer, Willi

    2016-01-01

    In this article, we introduce a graphical approach to testing multiple hypotheses in group-sequential clinical trials allowing for midterm design modifications. It is intended for structured study objectives in adaptive clinical trials and extends the graphical group-sequential designs from Maurer and Bretz (Statistics in Biopharmaceutical Research 2013; 5: 311-320) to adaptive trial designs. The resulting test strategies can be visualized graphically and performed iteratively. We illustrate the methodology with two examples from our clinical trial practice. First, we consider a three-armed gold-standard trial with the option to reallocate patients to either the test drug or the active control group, while stopping the recruitment of patients to placebo, after having demonstrated superiority of the test drug over placebo at an interim analysis. Second, we consider a confirmatory two-stage adaptive design with treatment selection at interim. PMID:25372071

  13. Lexical adaptation of link grammar to the biomedical sublanguage: a comparative evaluation of three approaches

    PubMed Central

    Pyysalo, Sampo; Salakoski, Tapio; Aubin, Sophie; Nazarenko, Adeline

    2006-01-01

    Background We study the adaptation of Link Grammar Parser to the biomedical sublanguage with a focus on domain terms not found in a general parser lexicon. Using two biomedical corpora, we implement and evaluate three approaches to addressing unknown words: automatic lexicon expansion, the use of morphological clues, and disambiguation using a part-of-speech tagger. We evaluate each approach separately for its effect on parsing performance and consider combinations of these approaches. Results In addition to a 45% increase in parsing efficiency, we find that the best approach, incorporating information from a domain part-of-speech tagger, offers a statistically significant 10% relative decrease in error. Conclusion When available, a high-quality domain part-of-speech tagger is the best solution to unknown word issues in the domain adaptation of a general parser. In the absence of such a resource, surface clues can provide remarkably good coverage and performance when tuned to the domain. The adapted parser is available under an open-source license. PMID:17134475

  14. An adaptive Kalman filter approach for cardiorespiratory signal extraction and fusion of non-contacting sensors

    PubMed Central

    2014-01-01

    Background Extracting cardiorespiratory signals from non-invasive and non-contacting sensor arrangements, i.e. magnetic induction sensors, is a challenging task. The respiratory and cardiac signals are mixed on top of a large and time-varying offset and are likely to be disturbed by measurement noise. Basic filtering techniques fail to extract relevant information for monitoring purposes. Methods We present a real-time filtering system based on an adaptive Kalman filter approach that separates signal offsets, respiratory and heart signals from three different sensor channels. It continuously estimates respiration and heart rates, which are fed back into the system model to enhance performance. Sensor and system noise covariance matrices are automatically adapted to the aimed application, thus improving the signal separation capabilities. We apply the filtering to two different subjects with different heart rates and sensor properties and compare the results to the non-adaptive version of the same Kalman filter. Also, the performance, depending on the initialization of the filters, is analyzed using three different configurations ranging from best to worst case. Results Extracted data are compared with reference heart rates derived from a standard pulse-photoplethysmographic sensor and respiration rates from a flowmeter. In the worst case for one of the subjects the adaptive filter obtains mean errors (standard deviations) of -0.2 min⁻¹ (0.3 min⁻¹) and -0.7 bpm (1.7 bpm) (compared to -0.2 min⁻¹ (0.4 min⁻¹) and 42.0 bpm (6.1 bpm) for the non-adaptive filter) for respiration and heart rate, respectively. In bad conditions the heart rate is only correctly measurable when the Kalman matrices are adapted to the target sensor signals. Also, the reduced mean error between the extracted offset and the raw sensor signal shows that adapting the Kalman filter continuously improves the ability to separate the desired signals from the raw sensor data. The average

  15. A sampling and classification item selection approach with content balancing.

    PubMed

    Chen, Pei-Hua

    2015-03-01

    Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM. PMID:24610145

  16. Sparsely sampling the sky: a Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Jaffe, A. H.

    2013-08-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.

  17. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
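
    The isolation logic described above can be stated compactly: run one adaptive estimator per hypothesized fault type, and declare the fault whose estimator is the only one with all residual components below their adaptive thresholds. The sketch below shows just this decision layer over an assumed estimator interface; the adaptive estimators and threshold laws themselves are the substance of the paper and are not reproduced.

```python
def isolate_fault(estimators, measurement):
    """estimators: dict fault_name -> object exposing residual() and
    thresholds(). Returns the isolated fault name, or None if zero or
    several hypotheses remain consistent with the measurement."""
    consistent = []
    for name, est in estimators.items():
        r = est.residual(measurement)       # residual vector
        th = est.thresholds(measurement)    # adaptive threshold vector
        if all(abs(ri) <= ti for ri, ti in zip(r, th)):
            consistent.append(name)
    return consistent[0] if len(consistent) == 1 else None

# Toy estimators for illustration only.
class _Dummy:
    def __init__(self, expected):
        self.expected = expected
    def residual(self, z):
        return [z - self.expected]
    def thresholds(self, z):
        return [0.5]

bank = {"sensor fault": _Dummy(1.0), "actuator fault": _Dummy(3.0)}
print(isolate_fault(bank, 0.9))   # -> "sensor fault"
```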

  18. Adaptive combinatorial design to explore large experimental spaces: approach and validation.

    PubMed

    Lejay, L V; Shasha, D E; Palenchar, P M; Kouranov, A Y; Cruikshank, A A; Chou, M F; Coruzzi, G M

    2004-12-01

    Systems biology requires mathematical tools not only to analyse large genomic datasets, but also to explore large experimental spaces in a systematic yet economical way. We demonstrate that two-factor combinatorial design (CD), shown to be useful in software testing, can be used to design a small set of experiments that would allow biologists to explore larger experimental spaces. Further, the results of an initial set of experiments can be used to seed further 'Adaptive' CD experimental designs. As a proof of principle, we demonstrate the usefulness of this Adaptive CD approach by analysing data from the effects of six binary inputs on the regulation of genes in the N-assimilation pathway of Arabidopsis. This CD approach identified the more important regulatory signals previously discovered by traditional experiments using far fewer experiments, and also identified examples of input interactions previously unknown. Tests using simulated data show that Adaptive CD suffers from fewer false positives than traditional experimental designs in determining decisive inputs, and succeeds far more often than traditional or random experimental designs in determining when genes are regulated by input interactions. We conclude that Adaptive CD offers an economical framework for discovering dominant inputs and interactions that affect different aspects of genomic outputs and organismal responses. PMID:17051692
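
    Two-factor CD is the covering-array idea from software testing: choose a small set of experiments so that, for every pair of inputs, all four on/off combinations occur somewhere in the set. A common greedy construction is sketched below for the paper's setting of six binary inputs; the specific heuristic is illustrative, not the authors' generator.

```python
import itertools
import random

def pairwise_design(k, n_candidates=200, seed=0):
    """Greedy two-factor covering design for k binary factors."""
    rng = random.Random(seed)
    uncovered = {(i, j, a, b)
                 for i, j in itertools.combinations(range(k), 2)
                 for a in (0, 1) for b in (0, 1)}
    design = []
    while uncovered:
        best, best_cov = None, -1
        for _ in range(n_candidates):       # keep the random candidate
            cand = tuple(rng.randint(0, 1) for _ in range(k))
            cov = sum((i, j, cand[i], cand[j]) in uncovered
                      for i, j in itertools.combinations(range(k), 2))
            if cov > best_cov:
                best, best_cov = cand, cov  # covering most uncovered pairs
        design.append(best)
        uncovered -= {(i, j, best[i], best[j])
                      for i, j in itertools.combinations(range(k), 2)}
    return design

print(len(pairwise_design(6)))  # typically 6-10 runs vs 2**6 = 64 exhaustive
```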

  19. Improving the sampling efficiency of the Grand Canonical Simulated Quenching approach

    SciTech Connect

    Perez, Danny; Vernon, Louis J.

    2012-04-04

    Most common atomistic simulation techniques, like molecular dynamics or Metropolis Monte Carlo, operate under a constant interatomic Hamiltonian with a fixed number of atoms. Internal (atom positions or velocities) or external (simulation cell size or geometry) variables are then evolved dynamically or stochastically to yield sampling in different ensembles, such as microcanonical (NVE), canonical (NVT), isothermal-isobaric (NPT), etc. Averages are then taken to compute relevant physical properties. At least two limitations of these standard approaches can seriously hamper their application to many important systems: (1) they do not allow for the exchange of particles with a reservoir, and (2) the sampling efficiency is insufficient to obtain converged results because of the very long intrinsic timescales associated with these quantities. To fix ideas, one might want to identify low (free) energy configurations of grain boundaries (GB). In reality, grain boundaries are in contact with the grains, which act as reservoirs of defects (e.g., vacancies and interstitials). Since the GB can exchange particles with its environment, the most stable configuration cannot provably be found by sampling from NVE or NVT ensembles alone: one needs to allow the number of atoms in the sample to fluctuate. The first limitation can be circumvented by working in the grand canonical ensemble (μVT) or its derivatives (such as the semi-grand-canonical ensemble useful for the study of substitutional alloys). Monte Carlo methods were the first to be adapted to this kind of system where the number of atoms is allowed to fluctuate. Many of these methods are based on the Widom insertion method [Widom63], where the chemical potential of a given chemical species can be inferred from the potential energy changes upon random insertion of a new particle within the simulation cell. Other techniques, such as the Gibbs ensemble Monte Carlo [Panagiotopoulos87] where exchanges of particles are
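
    The Widom step mentioned above has a one-line core: the excess chemical potential is μ_ex = -kT ln⟨exp(-ΔU/kT)⟩, where ΔU is the energy change of inserting a ghost particle at a random position. A minimal sketch, with `insertion_energy` standing in for the pair-potential evaluation against the current configuration:

```python
import numpy as np

def widom_mu_excess(positions, box, insertion_energy, kT, n_trials=10000,
                    rng=None):
    """Widom insertion estimate of the excess chemical potential."""
    rng = rng or np.random.default_rng(0)
    boltzmann = []
    for _ in range(n_trials):
        ghost = rng.random(3) * box                   # random trial position
        dU = insertion_energy(ghost, positions, box)  # energy of insertion
        boltzmann.append(np.exp(-dU / kT))
    return -kT * np.log(np.mean(boltzmann))

# Sanity check: for an ideal gas every insertion costs nothing, so the
# excess chemical potential vanishes.
print(widom_mu_excess(None, 1.0, lambda g, p, b: 0.0, kT=1.0))   # 0.0
```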

  20. Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization.

    SciTech Connect

    Romero, Vicente José; Chen, Chun-Hung (George Mason University, Fairfax, VA)

    2006-11-01

    A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effects. One simply asks "Is that alternative better or worse than this one?", not "How much better or worse is that alternative than this one?" The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. However, in this report we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and uniquely has mechanisms for quantifying and controlling error likelihood in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, so it could be useful as a reference standard against which the efficiency and robustness of other methods can be compared, analogous to the role that Monte Carlo simulation plays in uncertainty propagation.

  1. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-01-01

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and it also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to compare the a priori distributions. The Bayesian approach was effective for the selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions. PMID:26985961

  2. Development of error criteria for adaptive multi-element polynomial chaos approaches

    NASA Astrophysics Data System (ADS)

    Chouvion, B.; Sarrouy, E.

    2016-01-01

    This paper presents and compares different methodologies for creating an adaptive partitioning of the stochastic space in polynomial chaos applications that use a multi-element approach. To implement adaptive partitioning, Wan and Karniadakis first developed a criterion based on the relative error in local variance. We propose here two different error criteria: one based on the residual error and the other on the local variance discontinuity created by partitioning. The methods are applied to classical differential equations with long-term integration difficulties, including the Kraichnan-Orszag three-mode problem, and to simple linear and nonlinear mechanical systems whose stochastic dynamic responses are investigated. The efficiency and robustness of the approaches are investigated by comparison with Monte-Carlo simulations. For the different examples considered, the proposed criteria show significantly better convergence characteristics than the original error criterion.
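
    As a rough illustration of variance-driven multi-element adaptivity (a simplification in the spirit of the Wan-Karniadakis-style criterion mentioned above, not the paper's residual or discontinuity criteria), the Python sketch below recursively bisects elements of a one-dimensional random space whose probability-weighted local variance exceeds a tolerance; the response function and every number are hypothetical.

      import numpy as np

      rng = np.random.default_rng(12)

      def response(xi):
          """Hypothetical stochastic response with a sharp feature; xi ~ U(0,1)."""
          return np.tanh((xi - 0.35) / 0.01)

      def adapt_elements(f, tol=1e-3, n_mc=2000, max_elems=64):
          """Split the element whose width-weighted local variance is largest,
          until that indicator falls below `tol` (or max_elems is reached)."""
          elems = [(0.0, 1.0)]
          while len(elems) < max_elems:
              def indicator(e):
                  vals = f(rng.uniform(e[0], e[1], n_mc))
                  return (e[1] - e[0]) * vals.var()
              scores = {e: indicator(e) for e in elems}
              worst = max(elems, key=scores.get)
              if scores[worst] < tol:
                  break
              mid = 0.5 * (worst[0] + worst[1])
              elems.remove(worst)
              elems += [(worst[0], mid), (mid, worst[1])]
          return sorted(elems)

      elems = adapt_elements(response)
      print(len(elems), "elements; smallest width:",
            min(e[1] - e[0] for e in elems))   # refinement clusters near xi=0.35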

  3. An Adaptive Particle Filtering Approach to Tracking Modes in a Varying Shallow Ocean Environment

    SciTech Connect

    Candy, J V

    2011-03-22

    The shallow ocean environment is ever changing, mostly due to temperature variations in its upper layers (< 100 m), directly affecting sound propagation throughout. The need to develop processors that are capable of tracking these changes implies a stochastic as well as an 'adaptive' design. The stochastic requirement follows directly from the multitude of variations created by uncertain parameters and noise. Some work has been accomplished in this area, but the stochastic nature was constrained to Gaussian uncertainties. It has been clear for a long time that this constraint is not particularly realistic, leading to a Bayesian approach that enables the representation of any uncertainty distribution. Sequential Bayesian techniques enable a class of processors capable of performing in an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean. In this paper adaptive processors providing enhanced signals for acoustic hydrophone measurements on a vertical array, as well as enhanced modal function estimates, are developed. Synthetic data are provided to demonstrate that this approach is viable.
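
    Sequential Bayesian processors of this kind are typically realized as particle filters. The following is a minimal, hedged Python sketch of a bootstrap particle filter tracking one slowly drifting quantity (a stand-in for a modal coefficient) under heavy-tailed, non-Gaussian measurement noise; the scalar state model and all noise parameters are illustrative assumptions, not the paper's processor.

      import numpy as np

      rng = np.random.default_rng(2)

      # Illustrative scalar problem: track a slowly drifting quantity (a
      # stand-in for a modal coefficient) from heavy-tailed measurements.
      T = 200
      truth = 1.0 + np.cumsum(rng.normal(scale=0.05, size=T))
      meas = truth + 0.2 * rng.standard_t(df=3, size=T)

      N = 500                                    # particles
      particles = rng.normal(1.0, 0.5, size=N)
      weights = np.full(N, 1.0 / N)
      est = np.empty(T)

      for t in range(T):
          particles += rng.normal(scale=0.05, size=N)      # state model
          resid = (meas[t] - particles) / 0.2
          weights *= (1.0 + resid ** 2 / 3.0) ** (-2.0)    # Student-t likelihood
          weights /= weights.sum()
          if 1.0 / np.sum(weights ** 2) < N / 2:           # degeneracy check
              idx = rng.choice(N, size=N, p=weights)
              particles = particles[idx]
              weights = np.full(N, 1.0 / N)
          est[t] = np.sum(weights * particles)

      print("RMS tracking error:", np.sqrt(np.mean((est - truth) ** 2)))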

  4. Performance Monitoring and Assessment of Neuro-Adaptive Controllers for Aerospace Applications Using a Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Guenther, Kurt; Hodgkinson, John; Jacklin, Stephen; Richard, Michael; Schumann, Johann; Soares, Fola

    2005-01-01

    Modern exploration missions require modern control systems: control systems that can handle catastrophic changes in the system's behavior, compensate for slow deterioration in sustained operations, and support fast system identification. Adaptive controllers based upon neural networks have these capabilities, but they can only be used safely if proper verification and validation (V&V) can be done. In this paper we present our V&V approach and simulation results within NASA's Intelligent Flight Control Systems (IFCS).

  5. A unique approach to the development of adaptive sensor systems for future spacecraft

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Tietz, J. C.; Sivertson, W. E.; Wilson, R. G.

    1979-01-01

    In the Shuttle era, it should be possible to develop adaptive remote sensor systems that serve specific researcher and user needs more directly while at the same time alleviating the data management problem via intelligent sensor capabilities. The present paper provides a summary of such an approach, wherein specific capabilities have been developed for future global monitoring applications. A detailed description of FILE-I (Feature Identification and Location Experiment) is included, along with a summary of future experiments currently under development.

  6. A decision analysis approach to climate adaptation: comparing multiple pathways for multi-decadal decision making

    NASA Astrophysics Data System (ADS)

    Lin, B. B.; Little, L.

    2013-12-01

    Policy planners around the world are required to consider the implications of adapting to climatic change across spatial contexts and decadal timeframes. However, local-level information for planning is often poorly defined, even though climate adaptation decision-making is made at this scale. This is especially true when considering sea level rise and coastal impacts of climate change. We present a simple approach using sea level rise simulations paired with adaptation scenarios to assess a range of adaptation options available to local councils dealing with issues of beach recession under present and future sea level rise and storm surge. Erosion and beach recession pose a large socioeconomic risk to coastal communities because of the loss of key coastal infrastructure. We examine the well-known adaptation technique of beach nourishment and assess various timings and amounts of beach nourishment at decadal time spans in relation to beach recession impacts. The objective was to identify an adaptation strategy that would allow for a low frequency of management interventions, the maintenance of beach width, and the ability to minimize variation in beach width over the 2010 to 2100 simulation period. 1000 replications of each adaptation option were produced against the 90-year simulation in order to model the ability of each adaptation option to achieve the three key objectives. Three sets of adaptation scenarios were identified. Within each scenario, a number of adaptation options were tested. The three scenarios were: 1) fixed periodic beach replenishment of specific amounts at 20- and 50-year intervals, 2) beach replenishment to the initial beach width based on trigger levels of recession (5 m, 10 m, 20 m), and 3) fixed-period beach replenishment of a variable amount at decadal intervals (every 10, 20, 30, 40, 50 years). For each adaptation option, we show the effectiveness of each beach replenishment scenario to maintain beach width and consider the implications of more
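
    A Monte Carlo comparison of this kind is easy to prototype. The hedged Python sketch below simulates only the trigger-level scenario (scenario 2 above) with hypothetical recession statistics and an illustrative initial beach width; it counts interventions and tracks beach-width variation, the kinds of objectives named in the record.

      import numpy as np

      rng = np.random.default_rng(3)

      def trigger_policy(trigger_m, years=90, reps=1000, width0=50.0,
                         rec_mean=0.3, rec_sd=0.4):
          """Monte Carlo sketch of scenario 2: the beach recedes by a random
          amount each year and is replenished to its initial width whenever
          cumulative recession reaches `trigger_m`. Rates are hypothetical."""
          n_interventions = np.zeros(reps)
          widths = np.empty((reps, years))
          for r in range(reps):
              recession = np.maximum(rng.normal(rec_mean, rec_sd, years), 0.0)
              width = width0
              for y in range(years):
                  width -= recession[y]
                  if width0 - width >= trigger_m:
                      width = width0               # replenish
                      n_interventions[r] += 1
                  widths[r, y] = width
          return n_interventions.mean(), widths.std()

      for trig in (5.0, 10.0, 20.0):
          n, sd = trigger_policy(trig)
          print(f"trigger {trig:4.0f} m: {n:5.1f} interventions on average, "
                f"width s.d. {sd:4.1f} m")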

  7. Reducing False Negative Reads in RFID Data Streams Using an Adaptive Sliding-Window Approach

    PubMed Central

    Massawe, Libe Valentine; Kinyua, Johnson D. M.; Vermaak, Herman

    2012-01-01

    Unreliability of the data streams generated by RFID readers is among the primary factors limiting the widespread adoption of RFID technology. RFID data cleaning is, therefore, an essential task in RFID middleware systems in order to reduce reading errors and to allow these data streams to be used to make a correct interpretation and analysis of the physical world they represent. In this paper we propose an adaptive sliding-window-based approach called WSTD which is capable of efficiently coping with both environmental variation and tag dynamics. Our experimental results demonstrate the efficacy of the proposed approach. PMID:22666027
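
    The abstract does not spell out the WSTD rules, so the Python sketch below is only a generic illustration of the underlying idea: smooth a binary read stream with a window that grows when intermittent reads suggest dropped detections (false negatives) and shrinks after sustained silence suggests the tag truly left. The window bounds, adaptation rule, and miss rate are all hypothetical.

      import numpy as np

      rng = np.random.default_rng(4)

      def adaptive_window_clean(reads, w_min=2, w_max=16, monitor=8):
          """Generic adaptive sliding-window cleaner for a binary RFID read
          stream (1 = tag seen in an epoch). The smoothing window widens when
          reads are intermittent (suggesting dropped reads) and shrinks after
          sustained silence (suggesting the tag truly left)."""
          w = w_min
          out = np.zeros_like(reads)
          for t in range(len(reads)):
              rate = reads[max(0, t - monitor + 1): t + 1].mean()
              if 0.0 < rate < 0.8:
                  w = min(w + 1, w_max)        # intermittent: widen window
              elif rate == 0.0:
                  w = max(w - 1, w_min)        # silent: shrink window
              out[t] = 1 if reads[max(0, t - w + 1): t + 1].any() else 0
          return out

      # Tag present for 60 epochs with a 30% miss rate, then absent.
      truth = np.r_[np.ones(60), np.zeros(40)].astype(int)
      reads = truth & (rng.random(100) > 0.3)
      cleaned = adaptive_window_clean(reads)
      print("false negatives before/after:",
            int(np.sum((reads == 0) & (truth == 1))),
            int(np.sum((cleaned == 0) & (truth == 1))))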

  8. Adaptive Critic Neural Network-Based Terminal Area Energy Management and Approach and Landing Guidance

    NASA Technical Reports Server (NTRS)

    Grantham, Katie

    2003-01-01

    Reusable Launch Vehicles (RLVs) have different mission requirements than the Space Shuttle, which is used for benchmark guidance design. Therefore, alternative Terminal Area Energy Management (TAEM) and Approach and Landing (A/L) Guidance schemes can be examined in the interest of cost reduction. A neural network based solution for a finite horizon trajectory optimization problem is presented in this paper. In this approach the optimal trajectory of the vehicle is produced by adaptive critic based neural networks, which were trained off-line to maintain a gradual glideslope.

  9. Modern control concepts in hydrology. [parameter identification in adaptive stochastic control approach

    NASA Technical Reports Server (NTRS)

    Duong, N.; Winn, C. B.; Johnson, G. R.

    1975-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part of the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first used numerical integration of the model equation along with a trial-and-error procedure, and the second used a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
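
    Sequential, adaptive identification of this flavor can be illustrated with recursive least squares. The Python sketch below identifies the two parameters of a toy linear-reservoir runoff model (not the Prasad model) with a forgetting factor so the estimator can follow time-dependent parameters; the model and all numbers are assumptions for demonstration.

      import numpy as np

      rng = np.random.default_rng(5)

      # Toy runoff model q[t] = a*q[t-1] + b*r[t] + noise (illustrative only).
      T, a_true, b_true = 300, 0.85, 0.4
      rain = rng.exponential(1.0, size=T)
      q = np.zeros(T)
      for t in range(1, T):
          q[t] = a_true * q[t - 1] + b_true * rain[t] + rng.normal(scale=0.05)

      # Recursive least squares with a forgetting factor: sequential and
      # adaptive, hence able to follow time-dependent parameters.
      theta = np.zeros(2)              # estimates of (a, b)
      P = np.eye(2) * 100.0            # parameter covariance
      lam = 0.99                       # forgetting factor
      for t in range(1, T):
          phi = np.array([q[t - 1], rain[t]])      # regressor
          k = P @ phi / (lam + phi @ P @ phi)      # gain
          theta += k * (q[t] - phi @ theta)        # innovation update
          P = (P - np.outer(k, phi @ P)) / lam

      print("true (a, b):", (a_true, b_true), " estimated:", theta.round(3))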

  10. A New Approach to Interference Excision in Radio Astronomy: Real-Time Adaptive Cancellation

    NASA Astrophysics Data System (ADS)

    Barnbaum, Cecilia; Bradley, Richard F.

    1998-11-01

    Every year, an increasing amount of radio-frequency (RF) spectrum in the VHF, UHF, and microwave bands is being utilized to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Such services already cause problems for radio astronomy even at very remote observing sites, and the potential for this form of light pollution to grow is alarming. Preventive measures to eliminate interference through FCC legislation and ITU agreements can be effective; however, many times this approach is inadequate and interference excision at the receiver is necessary. Conventional techniques such as RF filters, RF shielding, and postprocessing of data have been only somewhat successful, but none has been sufficient. Adaptive interference cancellation is a real-time approach to interference excision that has not been used before in radio astronomy. We describe here, for the first time, adaptive interference cancellation in the context of radio astronomy instrumentation, and we present initial results for our prototype receiver. In the 1960s, analog adaptive interference cancelers were developed that achieve a high degree of cancellation in problems of radio communications and radar. However, analog systems lack the dynamic range, noise performance, and versatility required by radio astronomy. The concept of digital adaptive interference cancellation was introduced in the mid-1960s as a way to reduce unwanted noise in low-frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartments of automobiles. These audio-frequency applications require bandwidths of only a few tens of kilohertz. Only recently has high-speed digital filter technology made high dynamic range adaptive canceling possible in a bandwidth as large as a few megahertz, finally opening the door to application in radio astronomy. We have
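
    The classic engine behind such cancelers is the LMS adaptive filter: a reference antenna sees a correlated copy of the interference but (ideally) none of the astronomy signal, and the filter learns to subtract it from the primary channel. The Python sketch below is a hedged, real-valued toy version; the signal, interferer, filter length, and step size are all illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(6)

      # Primary channel: weak noise-like "astronomy" signal plus narrowband
      # RFI; reference channel: a filtered copy of the RFI alone.
      n = 20000
      t = np.arange(n)
      signal = 0.01 * rng.normal(size=n)
      rfi = np.sin(2 * np.pi * 0.1 * t + 0.3)
      primary = signal + rfi
      reference = np.convolve(rfi, [0.7, 0.25], mode="same")

      # LMS adaptive canceller: learn weights mapping reference -> RFI as
      # seen in the primary channel, then subtract the estimate.
      L, mu = 8, 0.01
      w = np.zeros(L)
      out = np.zeros(n)
      for i in range(L, n):
          x = reference[i - L + 1:i + 1][::-1]   # latest reference samples
          e = primary[i] - w @ x                 # canceller output (error)
          w += 2 * mu * e * x                    # LMS weight update
          out[i] = e

      print("interference power before:", np.var(primary - signal))
      print("interference power after :", np.var(out[L:] - signal[L:]))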

  11. New approach based on compressive sampling for sample rate enhancement in DASs for low-cost sensing nodes.

    PubMed

    Bonavolontà, Francesco; D'Apuzzo, Massimo; Liccardo, Annalisa; Vadursi, Michele

    2014-01-01

    The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on a compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate. PMID:25313493
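
    The reconstruction step can be illustrated with a minimal compressed-sensing sketch in Python: a signal sparse in a sine/cosine dictionary is observed at a few random instants on a fine time grid and recovered by iterative soft thresholding (ISTA). This is a hedged toy, not the authors' procedure; the grid size, sparsity, and regularization weight are assumptions.

      import numpy as np

      rng = np.random.default_rng(7)

      # Signal sparse in a sine/cosine dictionary, observed at m random
      # instants of a fine n-point grid (the high-resolution time-basis).
      n, m = 256, 48
      t = np.arange(n) / n
      x_true = np.sin(2 * np.pi * 13 * t) + 0.5 * np.cos(2 * np.pi * 41 * t)
      keep = np.sort(rng.choice(n, size=m, replace=False))
      y = x_true[keep]                        # the low-rate random samples

      freqs = np.arange(1, n // 2)
      D = np.hstack([np.sin(2 * np.pi * np.outer(t, freqs)),
                     np.cos(2 * np.pi * np.outer(t, freqs))])
      D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
      A = D[keep]                             # sensing matrix

      # ISTA for min_z 0.5*||A z - y||^2 + lam*||z||_1, then x = D z.
      step = 1.0 / np.linalg.norm(A, 2) ** 2
      lam = 0.05
      z = np.zeros(A.shape[1])
      for _ in range(800):
          z -= step * (A.T @ (A @ z - y))                         # gradient
          z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # shrinkage

      x_rec = D @ z
      print("relative reconstruction error:",
            np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))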

  12. An Integrated Systems Approach to Designing Climate Change Adaptation Policy in Water Resources

    NASA Astrophysics Data System (ADS)

    Ryu, D.; Malano, H. M.; Davidson, B.; George, B.

    2014-12-01

    Climate change projections are characterised by large uncertainties, with rainfall variability being the key challenge in designing adaptation policies. Climate change adaptation in water resources shows all the typical characteristics of 'wicked' problems, typified by cognitive uncertainty as new scientific knowledge becomes available, problem instability, knowledge imperfection, and strategic uncertainty due to institutional changes that inevitably occur over time. Planning that is characterised by uncertainties and instability requires an approach that can accommodate flexibility and adaptive capacity for decision-making, with an ability to take corrective measures in the event that the scenarios and responses envisaged initially evolve into different forms at some future stage. We present an integrated, multidisciplinary and comprehensive framework designed to interface and inform science and decision making in the formulation of water resource management strategies to deal with climate change in the Musi Catchment of Andhra Pradesh, India. At the core of this framework is a dialogue between stakeholders, decision makers and scientists to define a set of plausible responses to an ensemble of climate change scenarios derived from global climate modelling. The modelling framework used to evaluate the resulting combination of climate scenarios and adaptation responses includes surface and groundwater assessment models (SWAT and MODFLOW) and a water allocation model (REALM) to determine the water security of each adaptation strategy. Three climate scenarios extracted from downscaled climate models were selected for evaluation together with four agreed responses: changing cropping patterns, increasing watershed development, changing the volume of groundwater extraction and improving irrigation efficiency. Water security in this context is represented by the combination of the level of water availability and its associated security of supply for three economic activities (agriculture

  13. Psychometric Properties of the Schedule for Nonadaptive and Adaptive Personality in a PTSD Sample

    ERIC Educational Resources Information Center

    Wolf, Erika J.; Harrington, Kelly M.; Miller, Mark W.

    2011-01-01

    This study evaluated the psychometric characteristics of the Schedule for Nonadaptive and Adaptive Personality (SNAP; Clark, 1996) in 280 individuals who screened positive for posttraumatic stress disorder (PTSD). The SNAP validity, trait, temperament, and personality disorder (PD) scales were compared with scales on the Brief Form of the…

  14. Adaption of egg and larvae sampling techniques for lake sturgeon and broadcast spawning fishes in a deep river

    USGS Publications Warehouse

    Roseman, Edward F.; Kennedy, Gregory W.; Craig, Jaquelyn; Boase, James; Soper, Karen

    2011-01-01

    In this report we describe how we adapted two techniques for sampling lake sturgeon (Acipenser fulvescens) and other fish early life history stages to meet our research needs in the Detroit River, a deep, flowing Great Lakes connecting channel. First, we developed a buoy-less method for sampling fish eggs and spawning activity using egg mats deployed on the river bottom. The buoy-less method allowed us to fish gear in areas frequented by boaters and recreational anglers, thus eliminating surface obstructions that interfered with recreational and boating activities. The buoy-less method also reduced gear loss due to drift when masses of floating aquatic vegetation would accumulate on buoys and lines, increasing the drag on the gear and pulling it downstream. Second, we adapted a D-frame drift net system formerly employed in shallow streams to assess larval lake sturgeon dispersal for use in the deeper (>8 m) Detroit River using an anchor and buoy system.

  15. A problem-oriented approach to understanding adaptation: lessons learnt from Alpine Shire, Victoria Australia.

    NASA Astrophysics Data System (ADS)

    Roman, Carolina

    2010-05-01

    Climate change is gaining attention as a significant strategic issue for localities that rely on their business sectors for economic viability. For businesses in the tourism sector, considerable research effort has sought to characterise vulnerability to the likely impacts of future climate change through scenarios or 'end-point' approaches (Kelly & Adger, 2000). Whilst useful, there are few demonstrable case studies that complement such work with a 'start-point' approach that seeks to explore contextual vulnerability (O'Brien et al., 2007). This broader approach is inclusive of climate change as a process operating within a biophysical system and allows recognition of the complex interactions that occur in the coupled human-environmental system. A problem-oriented and interdisciplinary approach was employed at Alpine Shire, in northeast Victoria, Australia, to explore the concept of contextual vulnerability and adaptability to stressors that include, but are not limited to, climatic change. Using a policy sciences approach, the objective was to identify factors that influence existing vulnerabilities and that might consequently act as barriers to effective adaptation for the Shire's business community involved in the tourism sector. Analyses of results suggest that many threats, including the effects of climate change, compete for the resources, strategy and direction of local tourism management bodies. Further analysis of conditioning factors revealed that many complex and interacting factors define the vulnerability and adaptive capacity of the Shire's tourism sector to the challenges of global change, which collectively have more immediate implications for policy and planning than long-term future climate change scenarios. An approximation of the common interest, i.e. enhancing capacity in business acumen amongst tourism operators, would facilitate adaptability and sustainability through the enhancement of social capital in this business community. Kelly, P

  16. Adapting hydrological model structure to catchment characteristics: A large-sample experiment

    NASA Astrophysics Data System (ADS)

    Addor, Nans; Clark, Martyn P.; Nijssen, Bart

    2016-04-01

    Current hydrological modeling frameworks do not offer a clear way to systematically investigate the relationship between model complexity and model fidelity. The characterization of this relationship has so far relied on comparisons of different modules within the same model or comparisons of entirely different models. This lack of granularity in the differences between the model constructs makes it difficult to pinpoint model features that contribute to good simulations and means that the number of models or modeling hypotheses evaluated is usually small. Here we use flexible modeling frameworks to comprehensively and systematically compare modeling alternatives across the continuum of model complexity. A key goal is to explore which model structures are most adequate for catchments in different hydroclimatic conditions. Starting from conceptual models based on the Framework for Understanding Structural Errors (FUSE), we progressively increase model complexity by replacing conceptual formulations with physically explicit ones (process complexity) and by refining model spatial resolution (spatial complexity) using the newly developed Structure for Unifying Multiple Modeling Alternatives (SUMMA). To investigate how to best reflect catchment characteristics using model structure, we rely on a recently released data set of 671 catchments in the contiguous United States. Instead of running hydrological simulations in every catchment, we use clustering techniques to define catchment clusters, run hydrological simulations for representative members of each cluster, develop hypotheses (e.g., when specific process representations have useful explanatory power) and test these hypotheses using other members of the cluster. We thus refine our catchment clustering based on insights into dominant hydrological processes gained from our modeling approach. With this large-sample experiment, we seek to uncover trade-offs between realism and practicality, and formulate general
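
    The "cluster, then simulate representatives" step lends itself to a short sketch. Below, hypothetical catchment attributes are grouped with a plain Python/NumPy k-means implementation and the member nearest each centroid is selected as the cluster representative; the attribute matrix and the choice of k are assumptions, not the study's.

      import numpy as np

      rng = np.random.default_rng(8)

      # Hypothetical standardized catchment attributes (rows: catchments;
      # columns could be aridity, snow fraction, mean slope, ...).
      X = rng.normal(size=(671, 3))

      def kmeans(X, k, iters=50):
          """Plain k-means; returns labels plus the index of the catchment
          nearest each centroid (the 'representative member' per cluster)."""
          centers = X[rng.choice(len(X), size=k, replace=False)]
          for _ in range(iters):
              d = np.linalg.norm(X[:, None] - centers[None], axis=2)
              labels = d.argmin(axis=1)
              centers = np.array([X[labels == j].mean(axis=0)
                                  if (labels == j).any() else centers[j]
                                  for j in range(k)])
          reps = [int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in centers]
          return labels, reps

      labels, reps = kmeans(X, k=8)
      # Run expensive hydrological simulations only for `reps`, then test the
      # resulting hypotheses on the remaining members of each cluster.
      print("cluster sizes:", np.bincount(labels), "representatives:", reps)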

  17. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such a solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  18. Testing for Adaptation to Climate in Arabidopsis thaliana: A Calibrated Common Garden Approach

    PubMed Central

    Rutter, Matthew T.; Fenster, Charles B.

    2007-01-01

    Background and Aims: A recent method used to test for local adaptation is a common garden experiment where analyses are calibrated to the environmental conditions of the garden. In this study the calibrated common garden approach is used to test for patterns of adaptation to climate in accessions of Arabidopsis thaliana. Methods: Seedlings from 21 accessions of A. thaliana were planted outdoors in College Park, MD, USA, and development was monitored during the course of a growing season. ANOVA and multiple regression analysis were used to determine if development traits were significant predictors of plant success. Previously published data relating to accessional differences in genetic and physiological characters were also examined. Historical records of climate were used to evaluate whether properties of the site of origin of an accession affected the fitness of plants in a novel environment. Key Results: By calibrating the analysis to the climatic conditions of the common garden site, performance differences were detected among the accessions consistent with a pattern of adaptation to latitude and climatic conditions. Relatively higher accession fitness was predicted by a latitude and climatic history similar to that of College Park in April and May, during the main growth period of this experiment. The climatic histories of the accessions were better predictors of performance than many of the life-history and growth measures taken during the experiment. Conclusions: It is concluded that the calibrated common garden experiment can detect local adaptation and guide subsequent reciprocal transplant experiments. PMID:17293351

  19. An adaptive gating approach for x-ray dose reduction during cardiac interventional procedures

    SciTech Connect

    Abdel-Malek, A.; Yassa, F.; Bloomer, J.

    1994-03-01

    The increasing number of cardiac interventional procedures has resulted in a tremendous increase in the x-ray dose absorbed by radiologists as well as patients. A new method is presented for x-ray dose reduction which utilizes adaptive tube pulse-rate scheduling in pulsed fluoroscopic systems. In the proposed system, pulse-rate scheduling depends on the heart muscle activity phase, determined through continuous guided segmentation of the patient's electrocardiogram (ECG). Displaying images generated at the proposed adaptive nonuniform rate is visually unacceptable; therefore, a frame-filling approach is devised to ensure a 30 frame/sec display rate. The authors adopted two approaches for the frame-filling portion of the system, depending on the imaging mode used in the procedure. During cine-mode imaging (high x-ray dose), frame-to-frame pixel motion in the collected images is estimated using a pel-recursive algorithm, followed by motion-based pixel interpolation to estimate the frames necessary to increase the rate to 30 frames/sec. The other frame-filling approach is adopted during fluoro-mode imaging (low x-ray dose), characterized by low signal-to-noise ratio images. This approach consists of simply holding the last collected frame for as many frames as necessary to maintain the real-time display rate.

  1. Adaptive niche radii and niche shapes approaches for niching with the CMA-ES.

    PubMed

    Shir, Ofer M; Emmerich, Michael; Bäck, Thomas

    2010-01-01

    While the motivation and usefulness of niching methods are beyond doubt, the relaxation of assumptions and limitations concerning the hypothetical search landscape is much needed if niching is to be valid in a broader range of applications. Following the introduction of radii-based niching methods with derandomized evolution strategies (ES), the purpose of this study is to address the so-called niche radius problem. A new concept of an adaptive individual niche radius is applied to niching with the covariance matrix adaptation evolution strategy (CMA-ES). Two approaches are considered. The first approach couples the radius to the step-size mechanism, while the second approach employs the Mahalanobis distance metric with the covariance matrix mechanism for the distance calculation, to obtain niches with more complex geometrical shapes. The proposed approaches are described in detail, and then tested on high-dimensional artificial landscapes at several levels of difficulty. They are shown to be robust and to achieve satisfying results. PMID:20064027
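
    The second approach's core ingredient is easy to sketch: with a covariance matrix shaping the metric, a niche becomes an ellipsoid rather than a sphere. The Python toy below (2-D with fabricated numbers, rather than the paper's high-dimensional setting) assigns individuals to a niche by Mahalanobis distance.

      import numpy as np

      rng = np.random.default_rng(9)

      def mahalanobis(x, center, cov_inv):
          """Mahalanobis distance, giving niches ellipsoidal shapes aligned
          with a (CMA-ES-style) covariance matrix."""
          d = x - center
          return np.sqrt(d @ cov_inv @ d)

      # Illustrative 2-D niche: one center with an anisotropic covariance.
      center = np.array([0.0, 0.0])
      C = np.array([[2.0, 1.2],
                    [1.2, 1.0]])                 # positive definite
      C_inv = np.linalg.inv(C)
      radius = 1.5                               # niche radius (here fixed)

      population = 1.5 * rng.normal(size=(12, 2))
      # Points along the covariance's long axis join the niche at larger
      # Euclidean distances than points across it: the elliptic niche shape.
      for x in population:
          member = mahalanobis(x, center, C_inv) < radius
          print(np.round(x, 2), "-> same niche" if member else "-> new niche")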

  2. Teacher and Student-Focused Approaches: Influence of Learning Approach and Self-Efficacy in a Psychology Postgraduate Sample

    ERIC Educational Resources Information Center

    Kaye, Linda K.; Brewer, Gayle

    2013-01-01

    The current study examined approaches to teaching in a postgraduate psychology sample. This included considering teaching-focused (information transfer) and student-focused (conceptual changes in understanding) approaches to teaching. Postgraduate teachers of psychology (N = 113) completed a questionnaire measuring their use of a teacher- or…

  3. Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1996-01-01

    A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: A linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures are briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
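
    The recursive Cartesian subdivision and tree storage can be sketched compactly. The Python toy below refines a quadtree wherever a cell straddles a circular "body"; the geometric test stands in for the record's cut-cell and solution-adaptive criteria, and everything here is an illustrative assumption.

      # Pure-Python quadtree sketch of recursive Cartesian subdivision; the
      # circle stands in for a body and the intersection test for the real
      # cut-cell and solution-adaptive refinement criteria.

      class Cell:
          def __init__(self, x, y, size, depth=0):
              self.x, self.y, self.size, self.depth = x, y, size, depth
              self.children = []                 # tree storage

          def crosses_body(self, cx=0.5, cy=0.5, r=0.3):
              """True when the circle boundary passes through this cell."""
              nx = min(max(cx, self.x), self.x + self.size)
              ny = min(max(cy, self.y), self.y + self.size)
              d_near = ((nx - cx) ** 2 + (ny - cy) ** 2) ** 0.5
              d_far = max(((self.x + dx * self.size - cx) ** 2 +
                           (self.y + dy * self.size - cy) ** 2) ** 0.5
                          for dx in (0, 1) for dy in (0, 1))
              return d_near <= r <= d_far

          def refine(self, max_depth):
              if self.depth < max_depth and self.crosses_body():
                  h = self.size / 2
                  self.children = [Cell(self.x + dx * h, self.y + dy * h,
                                        h, self.depth + 1)
                                   for dx in (0, 1) for dy in (0, 1)]
                  for c in self.children:
                      c.refine(max_depth)

          def leaves(self):
              if not self.children:
                  return [self]
              return [leaf for c in self.children for leaf in c.leaves()]

      root = Cell(0.0, 0.0, 1.0)     # single cell encompassing the domain
      root.refine(max_depth=6)
      print("leaf cells:", len(root.leaves()))  # fine cells hug the boundary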

  4. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.
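
    The role of the filter threshold parameter can be seen in a few lines of Python: wavelet detail coefficients above a threshold eps mark the regions worth resolving, and sweeping eps moves the method along the cost/fidelity continuum between DNS-like and adaptive-LES-like behavior. The 1-D field and single-level Haar transform below are hedged stand-ins for the compressible-flow setting.

      import numpy as np

      def haar_details(u):
          """One level of the Haar transform: pairwise averages and details."""
          return (u[0::2] + u[1::2]) / 2.0, (u[0::2] - u[1::2]) / 2.0

      # A 1-D field with one sharp feature in a smooth background.
      x = np.linspace(0.0, 1.0, 1024)
      u = np.tanh((x - 0.5) / 0.005) + 0.1 * np.sin(2 * np.pi * x)

      # Detail coefficients above eps flag regions worth resolving; a small
      # eps keeps nearly everything (DNS-like cost), a large eps keeps only
      # the strongest structures (adaptive-LES-like cost).
      _, det = haar_details(u)
      for eps in (1e-4, 1e-3, 1e-2):
          kept = np.abs(det) > eps
          print(f"eps={eps:g}: {kept.sum()} of {det.size} regions flagged")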

  5. Establishing Interpretive Consistency When Mixing Approaches: Role of Sampling Designs in Evaluations

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.

    2013-01-01

    The goal of this chapter is to recommend quality criteria to guide evaluators' selections of sampling designs when mixing approaches. First, we contextualize our discussion of quality criteria and sampling designs by discussing the concept of interpretive consistency and how it impacts sampling decisions. Embedded in this discussion are…

  6. RECRUITING FOR A LONGITUDINAL STUDY OF CHILDREN'S HEALTH USING A HOUSEHOLD-BASED PROBABILITY SAMPLING APPROACH

    EPA Science Inventory

    The sampling design for the National Children's Study (NCS) calls for a population-based, multi-stage, clustered household sampling approach (visit our website for more information on the NCS: www.nationalchildrensstudy.gov). The full sample is designed to be representative of ...

  7. An adaptable image retrieval system with relevance feedback using kernel machines and selective sampling.

    PubMed

    Azimi-Sadjadi, Mahmood R; Salazar, Jaime; Srinivasan, Saravanakumar

    2009-07-01

    This paper presents an adaptable content-based image retrieval (CBIR) system developed using regularization theory, kernel-based machines, and the Fisher information measure. The system consists of a retrieval subsystem that carries out similarity matching using image-dependent information, multiple mapping subsystems that adaptively modify the similarity measures, and a relevance feedback mechanism that incorporates user information. The adaptation process drives the retrieval error to zero in order to exactly meet either an existing multiclass classification model or the user's high-level concepts, using reference-model or relevance feedback learning, respectively. To facilitate the selection of the most informative query images during relevance feedback learning, a new method based upon the Fisher information is introduced. Model-reference and relevance feedback learning mechanisms are thoroughly tested on a domain-specific image database that encompasses a wide range of underwater objects captured using an electro-optical sensor. Benchmarking results with two other relevance feedback learning methods are also provided. PMID:19447718

  8. [Asymmetry and spatial specificity of auditory aftereffects following adaptation to signals simulating approach and withdrawal of sound sources].

    PubMed

    Malinina, E S

    2014-01-01

    The spatial specificity of auditory approaching and withdrawing aftereffects was investigated in an anechoic chamber. The adapting and testing stimuli were presented from loudspeakers located in front of the subject at distances of 1.1 m (near) and 4.5 m (far) from the listener's head. Approach and withdrawal of stimuli were simulated by increasing or decreasing the amplitude of a wideband noise impulse sequence. The listeners were required to determine the movement direction of the test stimulus following each 5-s adaptation period. The listeners' "withdrawal" responses were used to plot psychometric functions and to quantify the auditory aftereffects. The data, summarized for all 8 participants, indicated that the asymmetry of the approaching and withdrawing aftereffects depended on the spatial localization of adaptor and test. The asymmetry of the aftereffects was largest when adaptor and test were presented from the same loudspeaker (either near or far). Adaptation to approach induced a directionally dependent displacement of the psychometric functions relative to the no-adaptation control condition, whereas adaptation to withdrawal did not. The magnitude of the approaching aftereffect was greater when adaptor and test were located in the near spatial domain than when they came from the far domain. When adaptor and test were presented from different loudspeakers, the magnitude of the approaching aftereffect decreased relative to the same-loudspeaker condition, whereas the withdrawing aftereffect increased. As a result, directionally dependent displacements of the psychometric functions relative to the control condition were observed after adaptation both to approach and to withdrawal. The discrepancy between the psychometric functions obtained after adaptation to approach and to withdrawal in the near and far spatial domains was greater when adaptor and test shared the same localization than when their localizations differed. We assume that the peculiarities of

  9. Towards a System Level Understanding of Non-Model Organisms Sampled from the Environment: A Network Biology Approach

    PubMed Central

    Williams, Tim D.; Turan, Nil; Diab, Amer M.; Wu, Huifeng; Mackenzie, Carolynn; Bartie, Katie L.; Hrydziuszko, Olga; Lyons, Brett P.; Stentiford, Grant D.; Herbert, John M.; Abraham, Joseph K.; Katsiadaki, Ioanna; Leaver, Michael J.; Taggart, John B.; George, Stephen G.; Viant, Mark R.; Chipman, Kevin J.; Falciani, Francesco

    2011-01-01

    The acquisition and analysis of datasets including multi-level omics and physiology from non-model species, sampled from field populations, is a formidable challenge, which so far has prevented the application of systems biology approaches. If successful, these could contribute enormously to improving our understanding of how populations of living organisms adapt to environmental stressors relating to, for example, pollution and climate. Here we describe the first application of a network inference approach integrating transcriptional, metabolic and phenotypic information representative of wild populations of the European flounder fish, sampled at seven estuarine locations in northern Europe with different degrees and profiles of chemical contaminants. We identified network modules, whose activity was predictive of environmental exposure and represented a link between molecular and morphometric indices. These sub-networks represented both known and candidate novel adverse outcome pathways representative of several aspects of human liver pathophysiology such as liver hyperplasia, fibrosis, and hepatocellular carcinoma. At the molecular level these pathways were linked to TNF alpha, TGF beta, PDGF, AGT and VEGF signalling. More generally, this pioneering study has important implications as it can be applied to model molecular mechanisms of compensatory adaptation to a wide range of scenarios in wild populations. PMID:21901081

  10. A control systems engineering approach for adaptive behavioral interventions: illustration with a fibromyalgia intervention.

    PubMed

    Deshpande, Sunil; Rivera, Daniel E; Younger, Jarred W; Nandola, Naresh N

    2014-09-01

    The term adaptive intervention has been used in behavioral medicine to describe operationalized and individually tailored strategies for prevention and treatment of chronic, relapsing disorders. Control systems engineering offers an attractive means for designing and implementing adaptive behavioral interventions that feature intensive measurement and frequent decision-making over time. This is illustrated in this paper for the case of a low-dose naltrexone treatment intervention for fibromyalgia. System identification methods from engineering are used to estimate dynamical models from daily diary reports completed by participants. These dynamical models then form part of a model predictive control algorithm which systematically decides on treatment dosages based on measurements obtained under real-life conditions involving noise, disturbances, and uncertainty. The effectiveness and implications of this approach for behavioral interventions (in general) and pain treatment (in particular) are demonstrated using informative simulations. PMID:25264467
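
    The closed loop described here (identify a dynamic model from diary data, then let a predictive controller choose dosages) can be caricatured in a few lines of Python. The sketch below uses a hypothetical first-order symptom model and brute-force enumeration over a short horizon in place of a real MPC solver; the model, dose levels, horizon, and weights are all assumptions, not the paper's.

      import numpy as np
      from itertools import product

      # Hypothetical identified model (not the paper's): reported symptom
      # level follows first-order dynamics driven by dose,
      #     s[t+1] = a*s[t] - b*d[t] + c.
      a, b, c = 0.8, 0.5, 2.0
      doses = (0.0, 2.25, 4.5)       # admissible daily dosages (illustrative)
      H = 3                          # prediction horizon (days)

      def mpc_dose(s, target=1.0, w=0.05):
          """Receding-horizon step: enumerate dose sequences over H days,
          score predicted deviation from target plus a dosage penalty, and
          apply only the first dose of the best sequence."""
          best, best_cost = 0.0, np.inf
          for seq in product(doses, repeat=H):
              sp, cost = s, 0.0
              for d in seq:
                  sp = a * sp - b * d + c
                  cost += (sp - target) ** 2 + w * d
              if cost < best_cost:
                  best, best_cost = seq[0], cost
          return best

      # Closed loop over 30 "days" with daily-diary measurement noise.
      rng = np.random.default_rng(10)
      s = 8.0
      for day in range(30):
          s = a * s - b * mpc_dose(s) + c + rng.normal(scale=0.1)
      print("symptom level after 30 days:", round(s, 2))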

  11. Systematic analysis of the kalimantacin assembly line NRPS module using an adapted targeted mutagenesis approach.

    PubMed

    Uytterhoeven, Birgit; Appermans, Kenny; Song, Lijiang; Masschelein, Joleen; Lathouwers, Thomas; Michiels, Chris W; Lavigne, Rob

    2016-04-01

    Kalimantacin is an antimicrobial compound with strong antistaphylococcal activity that is produced by a hybrid trans-acyltransferase polyketide synthase/nonribosomal peptide synthetase system in Pseudomonas fluorescens BCCM_ID9359. We here present a systematic analysis of the substrate specificity of the glycine-incorporating adenylation domain from the kalimantacin biosynthetic assembly line by a targeted mutagenesis approach. The specificity-conferring code was adapted for use in Pseudomonas and mutated adenylation domain active site sequences were introduced in the kalimantacin gene cluster, using a newly adapted ligation independent cloning method. Antimicrobial activity screens and LC-MS analyses revealed that the production of the kalimantacin analogues in the mutated strains was abolished. These results support the idea that further insight in the specificity of downstream domains in nonribosomal peptide synthetases and polyketide synthases is required to efficiently engineer these strains in vivo. PMID:26666990

  12. Adaptive low-rank approximation and denoised Monte Carlo approach for high-dimensional Lindblad equations

    NASA Astrophysics Data System (ADS)

    Le Bris, C.; Rouchon, P.; Roussel, J.

    2015-12-01

    We present a twofold contribution to the numerical simulation of Lindblad equations. First, an adaptive numerical approach to approximate Lindblad equations using low-rank dynamics is described: a deterministic low-rank approximation of the density operator is computed, and its rank is adjusted dynamically, using an on-the-fly estimator of the error committed when reducing the dimension. Second, when the intrinsic dimension of the Lindblad equation is too high to allow for such a deterministic approximation, we combine classical ensemble averages of quantum Monte Carlo trajectories and a denoising technique. Specifically, a variance reduction method based on the consideration of a low-rank dynamics as a control variate is developed. Numerical tests for quantum collapse and revivals show the efficiency of each approach, along with the complementarity of the two approaches.
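
    Detached from the Lindblad flow, the rank-adjustment idea reduces to truncating the density operator's spectrum so the discarded eigenvalue mass stays below a tolerance. The Python sketch below is a hedged, static illustration (the paper couples the estimator to the dynamics); the spectrum and tolerance are fabricated for demonstration.

      import numpy as np

      rng = np.random.default_rng(13)

      def adaptive_rank_truncate(rho, tol=1e-3):
          """Keep the fewest eigenpairs of a density matrix such that the
          discarded eigenvalue mass stays below `tol`; the retained rank thus
          adapts to the state's effective dimension."""
          vals, vecs = np.linalg.eigh(rho)            # ascending
          vals, vecs = vals[::-1], vecs[:, ::-1]      # descending
          # tail[i] = eigenvalue mass discarded if only r = i + 1 are kept
          tail = np.r_[np.cumsum(vals[::-1])[::-1][1:], 0.0]
          r = int(np.argmax(tail < tol)) + 1          # smallest admissible rank
          U = vecs[:, :r] * np.sqrt(np.clip(vals[:r], 0.0, None))
          return U @ U.T, r

      # Fabricated density matrix with a fast-decaying spectrum.
      n = 64
      p = 0.3 ** np.arange(n)
      p /= p.sum()
      Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
      rho = (Q * p) @ Q.T

      rho_r, r = adaptive_rank_truncate(rho)
      print("adapted rank:", r,
            " trace of discarded part:", round(float(np.trace(rho - rho_r)), 6))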

  13. One adaptive synchronization approach for fractional-order chaotic system with fractional-order 1 < q < 2.

    PubMed

    Zhou, Ping; Bai, Rongji

    2014-01-01

    Based on a new stability result for equilibrium points of nonlinear fractional-order systems with fractional order 1 < q < 2, an adaptive synchronization approach is established. Adaptive synchronization of the fractional-order Lorenz chaotic system with fractional order 1 < q < 2 is considered. Numerical simulations show the validity and feasibility of the proposed scheme. PMID:25247207

  14. A data based mechanistic approach to nonlinear flood routing and adaptive flood level forecasting

    NASA Astrophysics Data System (ADS)

    Romanowicz, Renata J.; Young, Peter C.; Beven, Keith J.; Pappenberger, Florian

    2008-08-01

    Operational flood forecasting requires accurate forecasts with a suitable lead time, in order to be able to issue appropriate warnings and take emergency actions in good time. Recent improvements in both flood plain characterization and computational capabilities have made the use of distributed flood inundation models more common. However, problems remain with the application of such models. There are still uncertainties associated with the identifiability of parameters; with the computational burden of calculating distributed estimates of predictive uncertainty; and with the adaptive use of such models for operational, real-time flood inundation forecasting. Moreover, the application of distributed models is complex, costly and requires a high degree of skill. This paper presents an alternative to distributed inundation models for real-time flood forecasting that provides fast and accurate medium- to short-term forecasts. The Data Based Mechanistic (DBM) methodology exploits a State Dependent Parameter (SDP) modelling approach to derive a nonlinear dependence between the water levels measured at gauging stations along the river. The transformation of water levels depends on the relative geometry of the channel cross-sections, without the need to apply rating curve transformations to the discharge. The relationship obtained is used to transform water levels as an input to a linear, on-line, real-time and adaptive stochastic DBM model. The approach provides an estimate of the prediction uncertainties, including an allowance for the heteroscedasticity of the multi-step-ahead forecasting errors. The approach is illustrated using an 80 km reach of the River Severn, in the UK.

  15. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples.

    PubMed

    Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F

    2015-11-01

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost-effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest, assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934

  16. Mobile membrane introduction tandem mass spectrometry for on-the-fly measurements and adaptive sampling of VOCs around oil and gas projects in Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Krogh, E.; Gill, C.; Bell, R.; Davey, N.; Martinsen, M.; Thompson, A.; Simpson, I. J.; Blake, D. R.

    2012-12-01

    The release of hydrocarbons into the environment can have significant environmental and economic consequences. The migration of smaller, more portable mass spectrometers into the field can provide spatially and temporally resolved information for rapid detection, adaptive sampling, and decision support. We have deployed a mobile-platform membrane introduction mass spectrometer (MIMS) for the in-field simultaneous measurement of volatile and semi-volatile organic compounds. In this work, we report instrument and data handling advances that produce geographically referenced data in real time, and preliminary data where these improvements have been combined with high-precision ultra-trace VOC analysis to adaptively sample air plumes near oil and gas operations in Alberta, Canada. We have modified a commercially available ion-trap mass spectrometer (Griffin ICX 400) with an in-house temperature-controlled capillary hollow fibre polydimethylsiloxane (PDMS) polymer membrane interface and an in-line permeation tube flow cell for a continuously infused internal standard. The system is powered by 24 VDC for remote operations in a moving vehicle. Software modifications include the ability to run continuous, interlaced tandem mass spectrometry (MS/MS) experiments for multiple contaminants/internal standards. All data are time- and location-stamped with on-board GPS and meteorological data to facilitate spatial and temporal data mapping. Tandem MS/MS scans were employed to simultaneously monitor ten volatile and semi-volatile analytes, including benzene, toluene, ethylbenzene and xylene (BTEX), reduced sulfur compounds, halogenated organics and naphthalene. Quantification was achieved by calibrating against a continuously infused deuterated internal standard (toluene-d8). Time-referenced MS/MS data were correlated with positional data and processed using Labview and Matlab to produce calibrated, geographical Google Earth data-visualizations that enable adaptive sampling protocols

  17. Approach for Structurally Clearing an Adaptive Compliant Trailing Edge Flap for Flight

    NASA Technical Reports Server (NTRS)

    Miller, Eric J.; Lokos, William A.; Cruz, Josue; Crampton, Glen; Stephens, Craig A.; Kota, Sridhar; Ervin, Gregory; Flick, Pete

    2015-01-01

    The Adaptive Compliant Trailing Edge (ACTE) flap was flown on the NASA Gulfstream GIII test bed at the NASA Armstrong Flight Research Center. This smoothly curving flap replaced the existing Fowler flaps creating a seamless control surface. This compliant structure, developed by FlexSys Inc. in partnership with Air Force Research Laboratory, supported NASA objectives for airframe structural noise reduction, aerodynamic efficiency, and wing weight reduction through gust load alleviation. A thorough structures airworthiness approach was developed to move this project safely to flight.

  18. Evidence-Based Approach to Treating Lateral Epicondylitis Using the Occupational Adaptation Model.

    PubMed

    Bachman, Stephanie

    2016-01-01

    The occupational therapy Centennial Vision reinforces the importance of informing consumers about the benefit of occupational therapy and continuing to advocate for the unique client-centered role of occupational therapy. Occupational therapy practitioners working in hand therapy have traditionally found it difficult to combine the biomechanical foundations of hand therapy with the fundamental client-centered tenets of occupational therapy. Embracing our historical roots will become more important as health care evolves and third-party payers continue to scrutinize the need for the profession of occupational therapy. This article outlines a client-centered approach for hand therapists for the treatment of lateral epicondylitis using the Occupational Adaptation Model. PMID:26943119

  19. Stable Direct Adaptive Control of Linear Infinite-dimensional Systems Using a Command Generator Tracker Approach

    NASA Technical Reports Server (NTRS)

    Balas, M. J.; Kaufman, H.; Wen, J.

    1985-01-01

    A command generator tracker approach to model-following control of linear distributed parameter systems (DPS) whose dynamics are described on infinite-dimensional Hilbert spaces is presented. This method generates finite-dimensional controllers capable of exponentially stable tracking of the reference trajectories when certain ideal trajectories are known to exist for the open-loop DPS; we present conditions for the existence of these ideal trajectories. An adaptive version of this type of controller is also presented and shown to achieve (in some cases, asymptotically) stable finite-dimensional control of the infinite-dimensional DPS.

  20. An Adaptive Nonlinear Aircraft Maneuvering Envelope Estimation Approach for Online Applications

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Lombaerts, Thomas Jan; Acosta, Diana; Wheeler, Kevin; Kaneshige, John

    2014-01-01

    A nonlinear aircraft model is presented and used to develop an overall unified robust and adaptive approach to passive trim and maneuverability envelope estimation with uncertainty quantification. The concept of time scale separation makes this method suitable for the online characterization of altered safe maneuvering limitations after impairment. The results can be used to provide pilot feedback and/or be combined with flight planning, trajectory generation, and guidance algorithms to help maintain safe aircraft operations in both nominal and off-nominal scenarios.

  1. Site-adaptation of satellite-based DNI and GHI time series: Overview and SolarGIS approach

    NASA Astrophysics Data System (ADS)

    Cebecauer, Tomas; Suri, Marcel

    2016-05-01

    Site adaptation is an approach for reducing uncertainty in satellite-based long-term estimates of solar radiation by combining them with short-term, high-accuracy measurements at a project site. We inventory the existing approaches and introduce the SolarGIS method, which is optimized for providing bankable data for energy simulation in Concentrating Solar Power. We also indicate the achievable uncertainty of SolarGIS model outputs, based on site-adaptation projects executed in various geographical conditions.
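
    One common family of site-adaptation methods is a simple regression fit on the overlap period, applied to the whole satellite record. The Python sketch below illustrates only that generic idea (the SolarGIS method itself is not described in the abstract); the series lengths, units, and the synthetic gain/bias are fabricated.

      import numpy as np

      rng = np.random.default_rng(14)

      # Fabricated long-term satellite GHI record and one year of overlapping
      # on-site measurements with an unknown systematic gain and bias.
      n_long, n_overlap = 7300, 365           # ~20 years daily, 1 year on site
      sat = rng.gamma(shape=8.0, scale=650.0, size=n_long)       # Wh/m2/day
      gain, bias = 0.94, 120.0
      ground = gain * sat[-n_overlap:] + bias + rng.normal(0, 150, n_overlap)

      # Linear site adaptation: least-squares fit ground ~ a*sat + b on the
      # concurrent period, then correct the full long-term series.
      A = np.c_[sat[-n_overlap:], np.ones(n_overlap)]
      a, b = np.linalg.lstsq(A, ground, rcond=None)[0]
      sat_adapted = a * sat + b
      print(f"fitted gain {a:.3f} (true {gain}), offset {b:.0f} (true {bias})")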

  2. Developing integrated approaches to climate change adaptation in rural communities of the Peruvian Andes

    NASA Astrophysics Data System (ADS)

    Huggel, Christian

    2010-05-01

    Over centuries, Andean communities have developed strategies to cope with climate variability and extremes, such as cold waves or droughts, which can have severe impacts on their welfare. Nevertheless, the rural population, living at altitudes of 3000 to 4000 m asl or even higher, remains highly vulnerable to external stresses, partly because of the extreme living conditions and partly as a consequence of high poverty. Moreover, recent studies indicate that climatic extreme events have increased in frequency in the past years. A Peruvian-Swiss Climate Change Adaptation Programme in Peru (PACC) is currently undertaking strong efforts to understand the links between climatic conditions and local livelihood assets. The goal is to propose viable strategies for adaptation in collaboration with the local population and governments. The programme considers three main areas of action: (i) water resource management; (ii) disaster risk reduction; and (iii) food security. The scientific studies carried out within the programme follow a highly transdisciplinary approach, spanning the whole range from the natural to the social sciences. Moreover, the scientific Peruvian-Swiss collaboration is closely connected to people and institutions operating at the implementation and political level. In this contribution we report on first results of the thematic studies, address critical questions, and outline the potential of integrative research for climate change adaptation in mountain regions in the context of a developing country.

  3. A Risk-based Model Predictive Control Approach to Adaptive Interventions in Behavioral Health.

    PubMed

    Zafra-Cabeza, Ascensión; Rivera, Daniel E; Collins, Linda M; Ridao, Miguel A; Camacho, Eduardo F

    2011-07-01

    This paper examines how control engineering and risk management techniques can be applied in the field of behavioral health through their use in the design and implementation of adaptive behavioral interventions. Adaptive interventions are gaining increasing acceptance as a means to improve prevention and treatment of chronic, relapsing disorders, such as abuse of alcohol, tobacco, and other drugs, mental illness, and obesity. A risk-based Model Predictive Control (MPC) algorithm is developed for a hypothetical intervention inspired by Fast Track, a real-life program whose long-term goal is the prevention of conduct disorders in at-risk children. The MPC-based algorithm decides on the appropriate frequency of counselor home visits, mentoring sessions, and the availability of after-school recreation activities by relying on a model that includes identifiable risks, their costs, and the cost/benefit assessment of mitigating actions. MPC is particularly suited for the problem because of its constraint-handling capabilities, and its ability to scale to interventions involving multiple tailoring variables. By systematically accounting for risks and adapting treatment components over time, an MPC approach as described in this paper can increase intervention effectiveness and adherence while reducing waste, resulting in advantages over conventional fixed treatment. A series of simulations are conducted under varying conditions to demonstrate the effectiveness of the algorithm. PMID:21643450

  4. Where do adaptive shifts occur during invasion? A multidisciplinary approach to unravel cold adaptation in a tropical ant species invading the Mediterranean zone

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Although evolution is now recognized as improving the invasive success of populations, where and when key adaptation event(s) occur often remains unclear. Here we used a multidisciplinary approach to disentangle the eco-evolutionary scenario of invasion of a Mediterranean zone (i.e. Israel) by the t...

  5. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics). PMID:24806652
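
    The collaborative convex-combination idea can be illustrated in the complex domain (a simplified stand-in for the paper's quaternion QLMS/WL-QLMS pair): a strictly linear and a widely linear LMS filter each adapt with their own error, while a sigmoid-parameterized mixing parameter lambda adapts on the combined error; lambda drifting toward the widely linear filter signals an improper input. The signal model and step sizes below are illustrative assumptions.

      # Complex-valued stand-in for the QLMS/WL-QLMS convex combination:
      # lambda -> 0 means the widely linear filter wins, i.e. an improper input.
      import numpy as np

      rng = np.random.default_rng(1)
      N, L = 5000, 4
      x = rng.normal(size=N) + 1j * 0.2 * rng.normal(size=N)   # improper input
      h_t = rng.normal(size=L) + 1j * rng.normal(size=L)       # true widely linear
      g_t = 0.5 * (rng.normal(size=L) + 1j * rng.normal(size=L))

      w = np.zeros(L, complex)                            # strictly linear filter
      h = np.zeros(L, complex); g = np.zeros(L, complex)  # widely linear filter
      alpha, mu, mu_a = 0.0, 0.01, 0.5
      for n in range(L, N):
          xv = x[n - L:n][::-1]
          d = h_t @ xv + g_t @ np.conj(xv)         # widely linear plant output
          y1, y2 = w @ xv, h @ xv + g @ np.conj(xv)
          lam = 1.0 / (1.0 + np.exp(-alpha))       # convex mixing parameter
          e = d - (lam * y1 + (1.0 - lam) * y2)
          w += mu * (d - y1) * np.conj(xv)         # each filter uses its own error
          h += mu * (d - y2) * np.conj(xv)
          g += mu * (d - y2) * xv
          alpha += mu_a * np.real(np.conj(e) * (y1 - y2)) * lam * (1.0 - lam)
          alpha = np.clip(alpha, -4.0, 4.0)        # keep the sigmoid from saturating
      print("final lambda:", 1.0 / (1.0 + np.exp(-alpha)))  # ~0 for improper input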

  6. Adaptive Filter-bank Approach to Restoration and Spectral Analysis of Gapped Data

    NASA Astrophysics Data System (ADS)

    Stoica, Petre; Larsson, Erik G.; Li, Jian

    2000-10-01

    The main topic of this paper is the nonparametric estimation of complex (both amplitude and phase) spectra from gapped data, as well as the restoration of such data. The focus is on the extension of the APES (amplitude and phase estimation) approach to data sequences with gaps. APES, which is one of the most successful existing nonparametric approaches to the spectral analysis of full data sequences, uses a bank of narrowband adaptive (both frequency and data dependent) filters to estimate the spectrum. A recent interpretation of this approach showed that the filterbank used by APES and the resulting spectrum minimize a least-squares (LS) fitting criterion between the filtered sequence and its spectral decomposition. The extended approach, which is called GAPES for somewhat obvious reasons, capitalizes on the aforementioned interpretation: it minimizes the APES-LS fitting criterion with respect to the missing data as well. This should be a sensible thing to do whenever the full data sequence is stationary, and hence the missing data have the same spectral content as the available data. We use both simulated and real data examples to show that GAPES estimated spectra and interpolated data sequences have excellent accuracy. We also show the performance gain achieved by GAPES over two of the most commonly used approaches for gapped-data spectral analysis, viz., the periodogram and the parametric CLEAN method. This work was partly supported by the Swedish Foundation for Strategic Research.
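
    A drastically simplified relative of the GAPES idea, for illustration only: instead of the adaptive APES filterbank, a fixed Fourier dictionary is least-squares fitted to the available samples, and the fit is used to fill the gap. The signal, frequency grid and gap below are synthetic choices.

      # Toy gapped-data recovery by least squares (not the GAPES algorithm itself).
      import numpy as np

      N = 256
      t = np.arange(N)
      y_true = np.sin(2*np.pi*28/N*t) + 0.5*np.sin(2*np.pi*59/N*t + 0.7)
      avail = np.ones(N, bool); avail[100:140] = False   # a 40-sample gap

      freqs = np.arange(1, 65) / N                       # candidate frequency grid
      A = np.hstack([np.cos(2*np.pi*np.outer(t, freqs)),
                     np.sin(2*np.pi*np.outer(t, freqs))])  # (N, 128) dictionary
      coef, *_ = np.linalg.lstsq(A[avail], y_true[avail], rcond=None)
      y_filled = np.where(avail, y_true, A @ coef)       # fill the gap from the fit
      print("max gap error:", np.abs((y_filled - y_true)[~avail]).max())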

  7. Enhancement and bias removal of optical coherence tomography images: An iterative approach with adaptive bilateral filtering.

    PubMed

    Sudeep, P V; Issac Niwas, S; Palanisamy, P; Rajan, Jeny; Xiaojun, Yu; Wang, Xianghong; Luo, Yuemei; Liu, Linbo

    2016-04-01

    Optical coherence tomography (OCT) has continually evolved and expanded as one of the most valuable routine tests in ophthalmology. However, noise (speckle) in the acquired images degrades their quality and makes them difficult to analyze. In this paper, an iterative approach based on bilateral filtering is proposed for speckle reduction in multiframe OCT data. A Gamma noise model is assumed for the observed OCT image. First, an adaptive version of the conventional bilateral filter is applied to enhance the multiframe OCT data, and then the bias due to noise is removed from each of the filtered frames. These unbiased filtered frames are then refined using an iterative approach. Finally, the refined frames are averaged to produce the denoised OCT image. Experimental results on phantom images and real OCT retinal images demonstrate the effectiveness of the proposed filter. PMID:26907572
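
    A simplified sketch of the multiframe pipeline (bilateral-filter each frame, then average). The paper's adaptive parameter selection, Gamma-noise bias removal and iterative refinement are omitted here, and scikit-image's stock bilateral filter stands in for the adaptive variant; frames are synthetic.

      # Filter-then-average sketch for multiframe OCT speckle (synthetic frames).
      import numpy as np
      from skimage.restoration import denoise_bilateral

      rng = np.random.default_rng(2)
      clean = np.tile(np.linspace(0.2, 0.8, 128), (128, 1))   # synthetic B-scan
      frames = [np.clip(clean * rng.gamma(4.0, 0.25, clean.shape), 0, 1)
                for _ in range(8)]                            # multiplicative speckle

      filtered = [denoise_bilateral(f, sigma_color=0.2, sigma_spatial=3)
                  for f in frames]
      denoised = np.mean(filtered, axis=0)                    # average refined frames
      print("residual std:", np.std(frames[0] - clean).round(3),
            "->", np.std(denoised - clean).round(3))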

  8. Adaptive life simulator: A novel approach to modeling the cardiovascular system

    SciTech Connect

    Kangas, L.J.; Keller, P.E.; Hashem, S.

    1995-06-01

    In this paper, an adaptive life simulator (ALS) is introduced. The ALS models a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. These models are developed for use in applications that require simulations of cardiovascular systems, such as medical mannequins, and in medical diagnostic systems. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the actual variables of an individual can subsequently be used for diagnosis. The approach also exploits sensor fusion applied to biomedical sensors, optimizing the utilization of the sensors; the advantages of sensor fusion have been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  9. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  10. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, which caused 1126 deaths and displaced around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction, and the interaction between the government, insurers, and individuals, has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  11. Wavefront sensorless approaches to adaptive optics for in vivo fluorescence imaging of mouse retina

    NASA Astrophysics Data System (ADS)

    Wahl, Daniel J.; Bonora, Stefano; Mata, Oscar S.; Haunerland, Bengt K.; Zawadzki, Robert J.; Sarunic, Marinko V.; Jian, Yifan

    2016-03-01

    Adaptive optics (AO) is necessary to correct aberrations when imaging the mouse eye with high numerical aperture. In order to obtain cellular resolution, we have implemented wavefront-sensorless adaptive optics for in vivo fluorescence imaging of mouse retina. Our approach includes a lens-based system and a MEMS deformable mirror for aberration correction. The AO system was constructed with a reflectance channel for structural images and a fluorescence channel for functional images. The structural imaging was used in real time for navigation on the retina using landmarks such as blood vessels. We have also implemented a tunable liquid lens to select the retinal layer of interest at which to perform the optimization. At the desired location on the mouse retina, the optimization algorithm used the fluorescence image data to drive a modal hill-climbing algorithm with an intensity or sharpness image-quality metric. The optimization requires ~30 seconds to complete a search up to the 20th Zernike mode. In this report, we demonstrate the AO performance for high-resolution imaging of the capillaries in fluorescence angiography. We have also made progress on an approach to AO with pupil segmentation as a possible sensorless technique suitable for small-animal retinal imaging. Pupil-segmentation AO was implemented on the same ophthalmic system and imaging performance was demonstrated on fluorescent beads with induced aberrations.
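
    The modal hill-climbing loop is simple to sketch. Below, the mirror and camera are replaced by a toy simulation in which image quality decays with the residual RMS aberration; in a real system acquire_image would push the Zernike coefficients to the deformable mirror and grab a fluorescence frame. All names, step sizes and the metric are illustrative assumptions.

      # Wavefront-sensorless modal hill climbing over Zernike coefficients.
      import numpy as np

      rng = np.random.default_rng(3)
      true_ab = rng.normal(0.0, 0.1, 20)           # hidden aberration (simulation)

      def acquire_image(coeffs):
          # Stand-in for mirror + camera: contrast drops with residual aberration.
          residual = np.linalg.norm(coeffs + true_ab)
          return np.exp(-residual ** 2) * np.ones((8, 8))

      def metric(img):
          return float(np.sum(img ** 2))           # sharpness-style image metric

      coeffs = np.zeros(20)
      for mode in range(20):                       # optimize one Zernike mode at a time
          best = metric(acquire_image(coeffs))
          for delta in 0.05 * np.arange(-5, 6):
              trial = coeffs.copy(); trial[mode] += delta
              m = metric(acquire_image(trial))
              if m > best:
                  best, coeffs = m, trial
      print("residual RMS:", np.linalg.norm(coeffs + true_ab).round(3))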

  12. An adaptive optics approach for laser beam correction in turbulence utilizing a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2015-09-01

    Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven to be effective for astronomical purposes, a new approach must be developed when correcting for the deep turbulence experienced in ground-to-ground optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong turbulence conditions (Cn^2 < 10^-13 m^-2/3). An intelligent correction algorithm can then be developed to reconstruct the perturbed wavefront and use this information to drive a deformable mirror capable of correcting the major distortions. After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine-tune the wavefront correction. This two-stage algorithm can find use in free-space optical communication systems, in directed energy applications, as well as for image correction purposes.

  13. Space-time adaptive approach to variational data assimilation using wavelets

    NASA Astrophysics Data System (ADS)

    Souopgui, Innocent; Wieland, Scott A.; Yousuff Hussaini, M.; Vasilyev, Oleg V.

    2016-02-01

    This paper focuses on one of the main challenges of 4-dimensional variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by considering the time in the same fashion as the space variables, reformulating the mathematical model in the entire space-time domain, and solving the problem on a near optimal computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed form of the solution eliminates the need to save or recompute data for every time slice as it is typically done in traditional time marching approaches to 4-dimensional variational data assimilation. The reduction of the required computational degrees of freedom is achieved using the compression properties of multi-dimensional second generation wavelets. The simultaneous space-time discretization of both the forward and the adjoint models makes it possible to solve both models either concurrently or sequentially. In addition, the grid adaptation reduces the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The proposed approach is demonstrated for the advection diffusion problem in two space-time dimensions.

  14. Adaptive speed/position control of induction motor based on SPR approach

    NASA Astrophysics Data System (ADS)

    Lee, Hou-Tsan

    2014-11-01

    A sensorless speed/position tracking control scheme for induction motors subject to unknown load torque is proposed via an adaptive strictly positive real (SPR) design approach. A special nonlinear coordinate transform is first provided to reform the dynamical model of the induction motor. The information on rotor fluxes can thus be derived from the dynamical model to decide the proportion of input voltage in the d-q frame under the constraint of the maximum power transfer property of induction motors. Based on the SPR approach, the speed and position control objectives can be achieved. The proposed scheme provides speed/position control of induction motors despite lacking knowledge of some mechanical system parameters, such as the motor inertia, the motor damping coefficient, and the unknown payload. The adaptive control technique is thus incorporated into the field-oriented control scheme to deal with the unknown parameters. A thorough proof is derived to guarantee the stability of the speed and position control systems of induction motors. Numerical simulation and experimental results are also provided to validate the effectiveness of the proposed control scheme.

  15. Behavior Change Interventions to Improve the Health of Racial and Ethnic Minority Populations: A Tool Kit of Adaptation Approaches

    PubMed Central

    Davidson, Emma M; Liu, Jing Jing; Bhopal, Raj; White, Martin; Johnson, Mark RD; Netto, Gina; Wabnitz, Cecile; Sheikh, Aziz

    2013-01-01

    Context Adapting behavior change interventions to meet the needs of racial and ethnic minority populations has the potential to enhance their effectiveness in the target populations. But because there is little guidance on how best to undertake these adaptations, work in this field has proceeded without any firm foundations. In this article, we present our Tool Kit of Adaptation Approaches as a framework for policymakers, practitioners, and researchers interested in delivering behavior change interventions to ethnically diverse, underserved populations in the United Kingdom. Methods We undertook a mixed-method program of research on interventions for smoking cessation, increasing physical activity, and promoting healthy eating that had been adapted to improve salience and acceptability for African-, Chinese-, and South Asian–origin minority populations. This program included a systematic review (reported using PRISMA criteria), qualitative interviews, and a realist synthesis of data. Findings We compiled a richly informative data set of 161 publications and twenty-six interviews detailing the adaptation of behavior change interventions and the contexts in which they were undertaken. On the basis of these data, we developed our Tool Kit of Adaptation Approaches, which contains (1) a forty-six-item Typology of Adaptation Approaches; (2) a Pathway to Adaptation, which shows how to use the Typology to create a generic behavior change intervention; and (3) RESET, a decision tool that provides practical guidance on which adaptations to use in different contexts. Conclusions Our Tool Kit of Adaptation Approaches provides the first evidence-derived suite of materials to support the development, design, implementation, and reporting of health behavior change interventions for minority groups. The Tool Kit now needs prospective, empirical evaluation in a range of intervention and population settings. PMID:24320170

  16. Reliability and Validity of the Spanish Adaptation of EOSS, Comparing Normal and Clinical Samples

    ERIC Educational Resources Information Center

    Valero-Aguayo, Luis; Ferro-Garcia, Rafael; Lopez-Bermudez, Miguel Angel; de Huralde, Ma. Angeles Selva-Lopez

    2012-01-01

    The Experiencing of Self Scale (EOSS) was created for the evaluation of Functional Analytic Psychotherapy (Kohlenberg & Tsai, 1991, 2001, 2008) in relation to the concept of the experience of personal self as socially and verbally constructed. This paper presents a reliability and validity study of the EOSS with a Spanish sample (582 participants,…

  17. Social Daydreaming and Adjustment: An Experience-Sampling Study of Socio-Emotional Adaptation During a Life Transition

    PubMed Central

    Poerio, Giulia L.; Totterdell, Peter; Emerson, Lisa-Marie; Miles, Eleanor

    2016-01-01

    Estimates suggest that up to half of waking life is spent daydreaming; that is, engaged in thought that is independent of, and unrelated to, one’s current task. Emerging research indicates that daydreams are predominantly social, suggesting that daydreams may serve socio-emotional functions. Here we explore the functional role of social daydreaming for socio-emotional adjustment during an important and stressful life transition (the transition to university) using experience-sampling with 103 participants over 28 days. Over time, social daydreams increased in their positive characteristics and positive emotional outcomes; specifically, participants reported that their daydreams made them feel more socially connected and less lonely, and that the content of their daydreams became less fanciful and involved higher quality relationships. These characteristics then predicted less loneliness at the end of the study, which, in turn, was associated with greater social adaptation to university. Feelings of connection resulting from social daydreams were also associated with less emotional inertia in participants who reported being less socially adapted to university. Findings indicate that social daydreaming is functional for promoting socio-emotional adjustment to an important life event. We highlight the need to consider the social content of stimulus-independent cognitions, their characteristics, and patterns of change, to specify how social thoughts enable socio-emotional adaptation. PMID:26834685

  18. QuEChERS sample preparation approach for mass spectrometric analysis of pesticide residues in foods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter describes an easy, rapid, and low-cost sample preparation approach for the determination of pesticide residues in foods using gas and/or liquid chromatographic (GC and/or LC) analytical separation and mass spectrometric (MS) detection. The approach is known as QuEChERS, which stands fo...

  19. Psychometric properties of the Schedule for Nonadaptive and Adaptive Personality in a PTSD sample.

    PubMed

    Wolf, Erika J; Harrington, Kelly M; Miller, Mark W

    2011-12-01

    This study evaluated the psychometric characteristics of the Schedule for Nonadaptive and Adaptive Personality (SNAP; Clark, 1996) in 280 individuals who screened positive for posttraumatic stress disorder (PTSD). The SNAP validity, trait, temperament, and personality disorder (PD) scales were compared with scales on the Brief Form of the Multidimensional Personality Questionnaire (Patrick, Curtin, & Tellegen, 2002). In a subsample of 86 veterans, the SNAP PD, trait, and temperament scales were also evaluated in comparison to the International Personality Disorder Examination (IPDE; Loranger, 1999), a semistructured diagnostic interview. Results revealed that the SNAP scales have good convergent validity, as evidenced by their pattern of associations with related measures of personality and PD. However, evidence for their discriminant validity in relationship to other measures of personality and PD was more mixed, and test scores on the SNAP trait and temperament scales left much unexplained variance in IPDE-assessed PDs. The diagnostic scoring of the SNAP PD scales greatly inflated prevalence estimates of PDs relative to the IPDE and showed poor agreement with the IPDE. In contrast, the dimensional SNAP scores yielded far stronger associations with continuous scores on the IPDE. The SNAP scales also largely evidenced expected patterns of association with a measure of PTSD severity. Overall, findings support the use of this measure in this population and contribute to our conceptualization of the association between temperament, PTSD, and Axis II psychopathology. PMID:21767029

  20. Psychometric Properties of the Schedule for Nonadaptive and Adaptive Personality in a PTSD Sample

    PubMed Central

    Wolf, Erika J.; Harrington, Kelly M.; Miller, Mark W.

    2011-01-01

    This study evaluated the psychometric characteristics of the Schedule for Nonadaptive and Adaptive Personality (SNAP; Clark, 1996) in 280 individuals who screened positive for posttraumatic stress disorder (PTSD). The SNAP validity, trait, temperament, and personality disorder (PD) scales were compared with scales on the Brief Form of the Multidimensional Personality Questionnaire (Patrick, Curtin, & Tellegen, 2002). In a subsample of 86 veterans, the SNAP PD, trait, and temperament scales were also evaluated in comparison to the International Personality Disorder Exam (IPDE; Loranger, 1999), a semi-structured diagnostic interview. Results revealed that the SNAP scales have good convergent validity, as evidenced by their pattern of associations with related measures of personality and PD. However, evidence for their discriminant validity in relationship to other measures of personality and PD was more mixed, and test scores on the SNAP trait and temperament scales left much unexplained variance in IPDE-assessed PDs. The diagnostic scoring of the SNAP PD scales greatly inflated prevalence estimates of PDs relative to the IPDE and showed poor agreement with the IPDE. In contrast, the dimensional SNAP scores yielded far stronger associations with continuous scores on the IPDE. The SNAP scales also largely evidenced expected patterns of association with a measure of PTSD severity. Overall, findings support the use of this measure in this population and contribute to our conceptualization of the association between temperament, PTSD, and Axis II psychopathology. PMID:21767029

  1. Massively parallel sampling of lattice proteins reveals foundations of thermal adaptation.

    PubMed

    Venev, Sergey V; Zeldovich, Konstantin B

    2015-08-01

    Evolution of proteins in bacteria and archaea living in different conditions leads to significant correlations between amino acid usage and environmental temperature. The origins of these correlations are poorly understood, and an important question of protein theory, physics-based prediction of types of amino acids overrepresented in highly thermostable proteins, remains largely unsolved. Here, we extend the random energy model of protein folding by weighting the interaction energies of amino acids by their frequencies in protein sequences and predict the energy gap of proteins designed to fold well at elevated temperatures. To test the model, we present a novel scalable algorithm for simultaneous energy calculation for many sequences in many structures, targeting massively parallel computing architectures such as graphics processing units. The energy calculation is performed by multiplying two matrices, one representing the complete set of sequences, and the other describing the contact maps of all structural templates. An implementation of the algorithm for the CUDA platform is available at http://www.github.com/kzeldovich/galeprot and calculates protein folding energies over 250 times faster than a single central processing unit. Analysis of amino acid usage in 64-mer cubic lattice proteins designed to fold well at different temperatures demonstrates an excellent agreement between theoretical and simulated values of energy gap. The theoretical predictions of temperature trends of amino acid frequencies are significantly correlated with bioinformatics data on 191 bacteria and archaea, and highlight protein folding constraints as a fundamental selection pressure during thermal adaptation in biological evolution. PMID:26254668
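
    The matrix-product trick in the abstract is easy to sketch on a CPU with NumPy (the paper's CUDA implementation is linked above). Each sequence is expanded into a vector of pairwise interaction energies over all residue pairs, each structure into a 0/1 contact-map vector, and all sequence-structure energies then follow from a single matrix multiplication. The 20x20 pair potential and contact maps below are random stand-ins, and a 27-mer toy replaces the paper's 64-mers.

      # Folding energies for many sequences in many structures via one GEMM.
      import numpy as np

      rng = np.random.default_rng(4)
      L_res, n_seq, n_str = 27, 100, 50                 # toy lattice proteins
      U = rng.normal(size=(20, 20)); U = (U + U.T) / 2  # symmetric pair potential

      pairs = [(i, j) for i in range(L_res) for j in range(i + 1, L_res)]
      seqs = rng.integers(0, 20, size=(n_seq, L_res))
      S = np.array([[U[s[i], s[j]] for (i, j) in pairs] for s in seqs])

      C = np.zeros((n_str, len(pairs)))                 # random 0/1 contact maps
      for k in range(n_str):
          C[k, rng.choice(len(pairs), size=28, replace=False)] = 1.0

      E = S @ C.T                                       # (n_seq, n_str) energies
      print(E.shape, E.min().round(2))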

  2. Using adaptive sampling and triangular meshes for the processing and inversion of potential field data

    NASA Astrophysics Data System (ADS)

    Foks, Nathan Leon

    The interpretation of geophysical data plays an important role in the analysis of potential field data in resource exploration industries. Two categories of interpretation techniques are discussed in this thesis: boundary detection and geophysical inversion. Fault or boundary detection is a method to interpret the locations of subsurface boundaries from measured data, while inversion is a computationally intensive method that provides 3D information about subsurface structure. My research focuses on these two aspects of interpretation techniques. First, I develop a method to aid in the interpretation of faults and boundaries from magnetic data. These processes are traditionally carried out using raster grids and image processing techniques. Instead, I use unstructured meshes of triangular facets that can extract inferred boundaries using mesh edges. Next, to address the computational issues of geophysical inversion, I develop an approach to reduce the number of data in a data set. The approach selects data points according to a user-specified proxy for their signal content. It is performed in the data domain and requires no modification to existing inversion codes. This technique adds to the existing suite of compressive inversion algorithms. Finally, I develop an algorithm to invert gravity data for an interfacing surface using an unstructured mesh of triangular facets. A pertinent property of unstructured meshes is their flexibility in representing oblique, or arbitrarily oriented, structures. This flexibility makes unstructured meshes an ideal candidate for geometry-based interface inversions. The approaches I have developed provide a suite of algorithms geared towards large-scale interpretation of potential field data using an unstructured representation of both the data and model parameters.

  3. Adaptive use of bubble wrap for storing liquid samples and performing analytical assays.

    PubMed

    Bwambok, David K; Christodouleas, Dionysios C; Morin, Stephen A; Lange, Heiko; Phillips, Scott T; Whitesides, George M

    2014-08-01

    This paper demonstrates that the gas-filled compartments in the packing material commonly called "bubble wrap" can be repurposed in resource-limited regions as containers to store liquid samples, and to perform bioanalyses. The bubbles of bubble wrap are easily filled by injecting the samples into them using a syringe with a needle or a pipet tip, and then sealing the hole with nail hardener. The bubbles are transparent in the visible range of the spectrum, and can be used as "cuvettes" for absorbance and fluorescence measurements. The interiors of these bubbles are sterile and allow storage of samples without the need for expensive sterilization equipment. The bubbles are also permeable to gases, and can be used to culture and store micro-organisms. By incorporating carbon electrodes, these bubbles can be used as electrochemical cells. This paper demonstrates the capabilities of the bubbles by culturing E. coli, growing C. elegans, measuring glucose and hemoglobin spectrophotometrically, and measuring ferrocyanide electrochemically, all within the bubbles. PMID:24983331

  4. Avoidance and activation as keys to depression: adaptation of the Behavioral Activation for Depression Scale in a Spanish sample.

    PubMed

    Barraca, Jorge; Pérez-Alvarez, Marino; Lozano Bleda, José Héctor

    2011-11-01

    In this paper we present the adaptation of the Behavioral Activation for Depression Scale (BADS), developed by Kanter, Mulick, Busch, Berlin, and Martell (2007), in a Spanish sample. The psychometric properties were tested in a sample of 263 participants (124 clinical and 139 non-clinical). The results show that, just as in the original English version, the Spanish BADS is a valid and internally consistent scale. Construct validity was examined by correlation with the BDI-II, AAQ, ATQ, MCQ-30, STAI and EROS. Factor analysis supported the four dimensions of the original instrument (Activation, Avoidance/Rumination, Work/School Impairment and Social Impairment), although with some differences in the factor loadings of the items. Further considerations about the usefulness of the BADS in the clinical treatment of depressed patients are also suggested. PMID:22059343

  5. Automatic Training Sample Selection for a Multi-Evidence Based Crop Classification Approach

    NASA Astrophysics Data System (ADS)

    Chellasamy, M.; Ferre, P. A. Ty; Humlekrog Greve, M.

    2014-09-01

    An approach that uses available agricultural parcel information to automatically select training samples for crop classification is investigated. Previous research addressed the multi-evidence crop classification approach using an ensemble classifier. This first produced confidence measures using three Multi-Layer Perceptron (MLP) neural networks trained separately with spectral, texture and vegetation indices; classification labels were then assigned based on Endorsement Theory. The present study proposes an approach to feed this ensemble classifier with automatically selected training samples. The available vector data representing crop boundaries with corresponding crop codes are used as a source of training samples. These vector data are created by farmers to support subsidy claims and are, therefore, prone to errors such as mislabeling of crop codes and boundary digitization errors. The proposed approach is named ECRA (Ensemble-based Cluster Refinement Approach). ECRA first automatically removes mislabeled samples and then selects the refined training samples in an iterative training-reclassification scheme. Mislabel removal is based on the expectation that mislabels in each class will be far from the cluster centroid; a sketch of this screening step follows below. However, this must be a soft constraint, especially when working with a hypothesis space that does not contain a good approximation of the target classes. Difficulty in finding a good approximation often arises either from less informative data or from a large hypothesis space. Thus, this approach uses the spectral, texture and indices domains in an ensemble framework to iteratively remove the mislabeled pixels from the crop clusters declared by the farmers. Once the clusters are refined, the selected border samples are used for final learning and the unknown samples are classified using the multi-evidence approach. The study is implemented with WorldView-2 multispectral imagery acquired for a study area containing 10 crop classes. The proposed
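
    A toy version of the centroid-based mislabel screening step (the full ECRA iterates this jointly over the spectral, texture and index domains with reclassification; the z-score threshold here is an assumption):

      # Flag samples far from their declared class centroid, then iterate.
      import numpy as np

      def refine_training_samples(X, y, n_iter=3, z=2.0):
          keep = np.ones(len(y), bool)
          for _ in range(n_iter):
              for c in np.unique(y):
                  idx = np.where((y == c) & keep)[0]
                  d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
                  keep[idx[d > d.mean() + z * d.std()]] = False   # drop outliers
          return keep

      rng = np.random.default_rng(5)
      X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(6, 1, (100, 3))])
      y = np.array([0] * 100 + [1] * 100)
      y[:5] = 1                                    # five deliberately mislabeled
      print("flagged:", np.where(~refine_training_samples(X, y))[0])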

  6. A User-Driven and Data-Driven Approach for Supporting Teachers in Reflection and Adaptation of Adaptive Tutorials

    ERIC Educational Resources Information Center

    Ben-Naim, Dror; Bain, Michael; Marcus, Nadine

    2009-01-01

    It has been recognized that in order to drive Intelligent Tutoring Systems (ITSs) into mainstream use by the teaching community, it is essential to support teachers through the entire ITS process: Design, Development, Deployment, Reflection and Adaptation. Although research has been done on supporting teachers through design to deployment of ITSs,…

  7. An Adaptive Intelligent Integrated Lighting Control Approach for High-Performance Office Buildings

    NASA Astrophysics Data System (ADS)

    Karizi, Nasim

    An acute and crucial societal problem is the energy consumed in existing commercial buildings. There are 1.5 million commercial buildings in the U.S., with only about 3% being built each year; hence, existing buildings need to be properly operated and maintained for several decades. Application of integrated centralized control systems in buildings could lead to more than 50% energy savings. This research work demonstrates an innovative adaptive integrated lighting control approach which could achieve significant energy savings and increase indoor comfort in high-performance office buildings. In the first phase of the study, a predictive algorithm was developed and validated through experiments in an actual test room. The objective was to regulate daylight on a specified work plane by controlling the blind slat angles. Furthermore, a sensor-based integrated adaptive lighting controller was designed in Simulink, including an innovative sensor optimization approach based on a genetic algorithm to minimize the number of sensors and efficiently place them in the office. The controller was designed using simple integral controllers. The objective of the developed control algorithm was to improve the illuminance in the office by controlling the daylight and electrical lighting. To evaluate the performance of the system, the controller was applied to the experimental office model of Lee et al.'s 1998 study. The results of the developed control approach indicate a significant improvement in the lighting conditions, along with monthly electrical energy savings of 1-23% and 50-78% in the office model, compared with two static strategies in which the blinds were left open or closed, respectively, throughout the year.

  8. Adaption of egg and larvae sampling techniques for lake sturgeon and broadcast spawning fishes in a deep river

    USGS Publications Warehouse

    Roseman, E.F.; Boase, J.; Kennedy, G.; Craig, J.; Soper, K.

    2011-01-01

    In this report we describe how we adapted two techniques for sampling lake sturgeon (Acipenser fulvescens) and other fish early life history stages to meet our research needs in the Detroit River, a deep, flowing Great Lakes connecting channel. First, we developed a buoy-less method for sampling fish eggs and spawning activity using egg mats deployed on the river bottom. The buoy-less method allowed us to fish gear in areas frequented by boaters and recreational anglers, thus eliminating surface obstructions that interfered with recreational and boating activities. The buoy-less method also reduced gear loss due to drift when masses of floating aquatic vegetation would accumulate on buoys and lines, increasing the drag on the gear and pulling it downstream. Second, we adapted a D-frame drift net system formerly employed in shallow streams to assess larval lake sturgeon dispersal for use in the deeper (>8m) Detroit River using an anchor and buoy system. © 2011 Blackwell Verlag, Berlin.

  9. Preliminary Efficacy of Adapted Responsive Teaching for Infants at Risk of Autism Spectrum Disorder in a Community Sample

    PubMed Central

    Baranek, Grace T.; Turner-Brown, Lauren; Field, Samuel H.; Crais, Elizabeth R.; Wakeford, Linn; Little, Lauren M.; Reznick, J. Steven

    2015-01-01

    This study examined the (a) feasibility of enrolling 12-month-olds at risk of ASD from a community sample into a randomized controlled trial, (b) subsequent utilization of community services, and (c) potential of a novel parent-mediated intervention to improve outcomes. The First Year Inventory was used to screen and recruit 12-month-old infants at risk of ASD to compare the effects of 6–9 months of Adapted Responsive Teaching (ART) versus referral to early intervention and monitoring (REIM). Eighteen families were followed for ~20 months. Assessments were conducted before randomization, after treatment, and at 6-month follow-up. Utilization of community services was highest for the REIM group. ART significantly outperformed REIM on parent-reported and observed measures of child receptive language with good linear model fit. Multiphase growth models had better fit for more variables, showing the greatest effects in the active treatment phase, where ART outperformed REIM on parental interactive style (less directive), child sensory responsiveness (less hyporesponsive), and adaptive behavior (increased communication and socialization). This study demonstrates the promise of a parent-mediated intervention for improving developmental outcomes for infants at risk of ASD in a community sample and highlights the utility of earlier identification for access to community services earlier than standard practice. PMID:25648749

  10. Massively parallel sampling of lattice proteins reveals foundations of thermal adaptation

    NASA Astrophysics Data System (ADS)

    Venev, Sergey V.; Zeldovich, Konstantin B.

    2015-08-01

    Evolution of proteins in bacteria and archaea living in different conditions leads to significant correlations between amino acid usage and environmental temperature. The origins of these correlations are poorly understood, and an important question of protein theory, physics-based prediction of types of amino acids overrepresented in highly thermostable proteins, remains largely unsolved. Here, we extend the random energy model of protein folding by weighting the interaction energies of amino acids by their frequencies in protein sequences and predict the energy gap of proteins designed to fold well at elevated temperatures. To test the model, we present a novel scalable algorithm for simultaneous energy calculation for many sequences in many structures, targeting massively parallel computing architectures such as graphics processing units. The energy calculation is performed by multiplying two matrices, one representing the complete set of sequences, and the other describing the contact maps of all structural templates. An implementation of the algorithm for the CUDA platform is available at http://www.github.com/kzeldovich/galeprot and calculates protein folding energies over 250 times faster than a single central processing unit. Analysis of amino acid usage in 64-mer cubic lattice proteins designed to fold well at different temperatures demonstrates an excellent agreement between theoretical and simulated values of energy gap. The theoretical predictions of temperature trends of amino acid frequencies are significantly correlated with bioinformatics data on 191 bacteria and archaea, and highlight protein folding constraints as a fundamental selection pressure during thermal adaptation in biological evolution.

  11. Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach

    PubMed Central

    Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A

    2015-01-01

    Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange. PMID:26418190
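
    Whatever enhanced-sampling scheme supplies the window averages, the TI estimator itself reduces to a one-dimensional quadrature of <dU/dlambda> over the coupling parameter, e.g. by the trapezoidal rule; the window averages below are synthetic stand-ins.

      # Trapezoidal thermodynamic integration over lambda windows.
      import numpy as np

      lam = np.linspace(0.0, 1.0, 11)              # 11 lambda windows
      dUdlam = 8.0 * lam - 3.0                     # synthetic <dU/dlambda> (kJ/mol)
      dG = np.sum(0.5 * (dUdlam[1:] + dUdlam[:-1]) * np.diff(lam))
      print(f"Delta G ~ {dG:.2f} kJ/mol")          # analytic value here is 1.00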

  12. Assessment of Different Sampling Methods for Measuring and Representing Macular Cone Density Using Flood-Illuminated Adaptive Optics

    PubMed Central

    Feng, Shu; Gale, Michael J.; Fay, Jonathan D.; Faridi, Ambar; Titus, Hope E.; Garg, Anupam K.; Michaels, Keith V.; Erker, Laura R.; Peters, Dawn; Smith, Travis B.; Pennesi, Mark E.

    2015-01-01

    Purpose To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Methods Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Results Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. Conclusions We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population. PMID:26325414

  13. Efficient pulse compression for LPI waveforms based on a nonparametric iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Li, Zhengzheng; Nepal, Ramesh; Zhang, Yan; Blake, WIlliam

    2015-05-01

    In order to achieve low probability-of-intercept (LPI), radar waveforms are usually long and randomly generated. Due to this randomized nature, the matched filter responses (autocorrelations) of those waveforms can have high sidelobes, which mask weaker targets near a strong target and limit the radar's ability to distinguish close-by targets. To improve resolution and reduce sidelobe contamination, a waveform-independent pulse compression filter is desired. Furthermore, the pulse compression filter needs to be able to adapt to the received signal to achieve optimized performance. As many existing pulse compression techniques require intensive computation, real-time implementation is infeasible. This paper introduces a new adaptive pulse compression technique for LPI waveforms that is based on a nonparametric iterative adaptive approach (IAA). Due to its nonparametric nature, no parameter tuning is required for different waveforms. IAA can achieve super-resolution and sidelobe suppression in both the range and Doppler domains. It can also be extended to directly handle the matched filter (MF) output (called MF-IAA), which further reduces the computational load. The practical impact of LPI waveform operations on IAA and MF-IAA has not been carefully studied in previous work. Herein, typical LPI waveforms such as random phase coding, as well as non-LPI waveforms, are tested with both single-pulse and multi-pulse IAA processing. A realistic airborne radar simulator as well as actual measured radar data are used for the validations. It is validated that, in spite of noticeable differences between test waveforms, the IAA algorithms and their improvements can effectively achieve range-Doppler super-resolution on realistic data.
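
    The core IAA iteration is compact enough to sketch for 1-D spectral estimation (radar range profiles follow the same pattern with a different steering matrix); the grid size, diagonal loading and iteration count below are illustrative choices.

      # Nonparametric iterative adaptive approach (IAA) on a toy two-tone signal.
      import numpy as np

      def iaa(y, K=256, n_iter=12):
          N = len(y)
          A = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(K) / K))
          s = (A.conj().T @ y) / N                 # matched-filter initialization
          p = np.abs(s) ** 2
          for _ in range(n_iter):
              R = (A * p) @ A.conj().T             # R = A diag(p) A^H
              Ri = np.linalg.inv(R + 1e-9 * np.eye(N))  # loading for stability
              RiA = Ri @ A
              num = (RiA.conj() * y[:, None]).sum(0)    # a_k^H R^-1 y
              den = (A.conj() * RiA).sum(0).real        # a_k^H R^-1 a_k
              s = num / den
              p = np.abs(s) ** 2
          return p

      t = np.arange(64)
      y = np.exp(2j*np.pi*0.20*t) + 0.3*np.exp(2j*np.pi*0.23*t)  # two close tones
      p = iaa(y, K=256)
      print("top bins:", np.sort(np.argsort(p)[-2:]) / 256)      # ~0.20 and ~0.23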

  14. Controlling aliased dynamics in motion systems? An identification for sampled-data control approach

    NASA Astrophysics Data System (ADS)

    Oomen, Tom

    2014-07-01

    Sampled-data control systems occasionally exhibit aliased resonance phenomena within the control bandwidth. The aim of this paper is to investigate these aliased dynamics with application to a high-performance industrial nano-positioning machine. This necessitates a full sampled-data control design approach, since aliased dynamics endanger both the at-sample performance and the intersample behaviour. The proposed framework comprises both system identification and sampled-data control. In particular, the sampled-data control objective necessitates models that encompass the intersample behaviour, i.e., ideally continuous-time models. Application of the proposed approach to an industrial wafer stage system provides thorough insight and new control design guidelines for controlling aliased dynamics.
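
    For intuition: a resonance above the Nyquist frequency folds back onto the sampled-data frequency axis at |f - fs*round(f/fs)|, which is how it can land inside the control bandwidth. The numbers below are illustrative.

      # Frequency folding of an under-sampled resonance (illustrative numbers).
      fs = 1000.0                                   # sampling frequency [Hz]
      f_res = 1430.0                                # continuous-time resonance [Hz]
      f_alias = abs(f_res - fs * round(f_res / fs)) # folded frequency: 430 Hz
      print(f"{f_res:.0f} Hz resonance appears at {f_alias:.0f} Hz "
            f"when sampled at {fs:.0f} Hz")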

  15. Building the framework for climate change adaptation in the urban areas using participatory approach: the Czech Republic experience

    NASA Astrophysics Data System (ADS)

    Emmer, Adam; Hubatová, Marie; Lupač, Miroslav; Pondělíček, Michael; Šafařík, Miroslav; Šilhánková, Vladimíra; Vačkář, David

    2016-04-01

    The Czech Republic has experienced numerous extreme hydrometeorological/climatological events, such as floods (significant ones in 1997, 2002, 2010, 2013), droughts (2013, 2015), heat waves (2015) and windstorms (2007), during the past decades. These events are generally attributed to ongoing climate change and have caused loss of life and significant material damage (up to several % of GDP in some years), especially in urban areas. To initiate the adaptation process of urban areas, the main objective was to prepare a framework for creating climate change adaptation strategies for individual cities, reflecting the physical-geographical and socioeconomic conditions of the Czech Republic. Three pilot cities (Hradec Králové, Žďár nad Sázavou, Dobruška) were used to optimize the entire procedure. Two sets of participatory seminars were organised in order to involve all key stakeholders (the city council, department of the environment, department of crisis management, hydrometeorological institute, local experts, ...) in the process of creating the adaptation strategy from its early stage. Lessons learned for the framework related especially to its applicability at the local level, which is largely a matter of the understandability of the concept. Finally, this illustrative and widely applicable framework (the so-called 'road map to adaptation strategy') includes five steps: (i) analysis of existing strategies and plans at the national, regional and local levels; (ii) analysis of climate-change-related hazards and key vulnerabilities; (iii) identification of adaptation needs, evaluation of existing adaptation capacity and formulation of future adaptation priorities; (iv) identification of limits and barriers to adaptation (economic, environmental, ...); and (v) selection of specific types of adaptation measures reflecting the identified adaptation needs and formulated adaptation priorities. Keywords: climate change adaptation (CCA); urban areas; participatory approach

  16. Requirements and approaches to adapting laser writers for fabrication of gray-scale masks

    NASA Astrophysics Data System (ADS)

    Korolkov, Victor P.; Shimansky, Ruslan; Poleshchuk, Alexander G.; Cherkashin, Vadim V.; Kharissov, Andrey A.; Denk, Dmitry

    2001-11-01

    Photolithography using gray-scale masks (GSMs) with multilevel transmittance is now one of the most promising ways of manufacturing high-efficiency diffractive optical elements and microoptics. Such masks can be most effectively fabricated by laser or electron-beam writers on materials whose transmittance changes under the influence of high-energy beams. The basic requirements for adapting existing and newly developed scanning laser writers are formulated. These systems create an image by continuous movement of a writing beam along one coordinate and overlapping of adjacent written tracks along the other coordinate. Several problems must be solved in GSM manufacturing: calibration of the influence of the laser beam on a recording material without transferring the gray-scale structure into photoresist; the dependence of the transmittance at the currently exposed pixel on surrounding structures generated before recording of the current track and on the character of the laser beam power modulation; and the substantial increase in computed data in comparison with binary elements. The offered solutions are based on the results of investigations of materials with variable transmittance (LDW-glass, a-Si film) and take into account the specificity of diffractive blazed microstructures. The reduction of the data volume for fabrication of multilevel DOEs is effectively performed using the offered vector-gradient data format, which is based on a piecewise-linear approximation of the phase profile. The presented approaches to the adaptation of laser writers are realized in software and hardware, and they solve the basic problems of manufacturing GSMs.

  17. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
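
    The closed-form combination step can be written directly from the stated criterion: minimize the recent-window MSE of the combined prediction subject to the weights summing to one, a small equality-constrained least-squares problem solved via its KKT system. The window length and sub-model predictions below are synthetic.

      # Closed-form sum-to-one combination weights from the KKT linear system.
      import numpy as np

      def combination_weights(P, y):
          # P: (window, M) sub-model predictions; y: (window,) measured outputs.
          M = P.shape[1]
          KKT = np.zeros((M + 1, M + 1))
          KKT[:M, :M] = 2 * P.T @ P                # stationarity block
          KKT[:M, M] = 1.0; KKT[M, :M] = 1.0       # sum-to-one constraint
          rhs = np.concatenate([2 * P.T @ y, [1.0]])
          return np.linalg.solve(KKT, rhs)[:M]

      rng = np.random.default_rng(6)
      y = rng.normal(size=30)                      # recent data window
      P = np.column_stack([y + rng.normal(0, s, 30) for s in (0.1, 0.5, 1.0)])
      w = combination_weights(P, y)
      print(w.round(3), w.sum().round(6))          # best sub-model gets most weight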

  18. Adaptation policies to increase terrestrial ecosystem resilience. Potential utility of a multicriteria approach

    SciTech Connect

    de Bremond, Ariane; Engle, Nathan L.

    2014-01-30

    Climate change is rapidly undermining the resilience of terrestrial ecosystems and their capacity to continue providing services to the benefit of humanity and nature. Because of the importance of terrestrial ecosystems to human well-being and supporting services, decision makers throughout the world are busy creating policy responses that secure multiple development and conservation objectives, including that of supporting terrestrial ecosystem resilience in the context of climate change. This article aims to advance analyses of climate policy evaluation and planning in the area of terrestrial ecosystem resilience by discussing adaptation policy options within the ecology-economy-society nexus. The paper evaluates these decisions in the realm of terrestrial ecosystem resilience and assesses the utility of a set of criteria, indicators, and assessment methods proposed by a new conceptual multicriteria framework for pro-development climate policy and planning developed by the United Nations Environment Programme. Potential applications of a multicriteria approach to climate policy vis-à-vis terrestrial ecosystems are then explored through two hypothetical case-study examples. The paper closes with a brief discussion of the utility of the multicriteria approach in the context of other climate policy evaluation approaches, considers lessons learned from efforts to evaluate climate policy in the realm of terrestrial ecosystems, and reiterates the role of ecosystem resilience in creating sound policies and actions that support the integration of climate change and development goals.

  19. The adaptive approach for storage assignment by mining data of warehouse management system for distribution centres

    NASA Astrophysics Data System (ADS)

    Ming-Huang Chiang, David; Lin, Chia-Ping; Chen, Mu-Chen

    2011-05-01

    Among distribution centre operations, order picking has been reported to be the most labour-intensive activity. Sophisticated storage assignment policies adopted to reduce the travel distance of order picking have been explored in the literature. Unfortunately, previous research has been devoted to locating entire product assortments from scratch. Instead, this study proposes an adaptive approach, the Data Mining-based Storage Assignment approach (DMSA), to find the optimal storage assignment for newly delivered products that need to be put away when there is vacant shelf space in a distribution centre. In the DMSA, a new association index (AIX) is developed to evaluate the fitness between the put-away products and the unassigned storage locations by applying association rule mining. With the AIX, the storage location assignment problem (SLAP) can be formulated and solved as a binary integer program. To evaluate the performance of the DMSA, a real-world order database of a distribution centre is obtained and used to compare the results from the DMSA with a random assignment approach. It turns out that the DMSA outperforms random assignment as the number of put-away products and the proportion of put-away products with high turnover rates increase.
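
    A toy of the two DMSA ingredients: an association score between each put-away product and the products already stored around each vacant slot (a plain co-occurrence count here, not the paper's exact AIX), and an assignment step (the Hungarian algorithm standing in for the paper's binary integer program). All names and counts below are made up.

      # Assign put-away products to vacant slots by association with neighbours.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      new_products = ["E1", "E2"]                    # products to put away
      cooccur_with = {"E1": {"A": 3, "B": 2},        # historical co-pick counts
                      "E2": {"C": 4, "D": 1}}
      slot_neighbors = {"slot1": {"A", "B"},         # products stored near a slot
                        "slot2": {"C", "D"}}

      slots = list(slot_neighbors)
      score = np.array([[sum(cooccur_with[p].get(q, 0) for q in slot_neighbors[s])
                         for s in slots] for p in new_products])
      rows, cols = linear_sum_assignment(-score)     # maximize total association
      for r, c in zip(rows, cols):
          print(new_products[r], "->", slots[c])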

  20. An adaptive neural swarm approach for intrusion defense in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Cannady, James

    2011-06-01

    Wireless sensor networks (WSN) and mobile ad hoc networks (MANET) are being increasingly deployed in critical applications due to the flexibility and extensibility of the technology. While these networks possess numerous advantages over traditional wireless systems in dynamic environments, they are still vulnerable to many of the same types of host-based and distributed attacks common to those systems. Unfortunately, the limited power and bandwidth available in WSNs and MANETs, combined with the dynamic connectivity that is a defining characteristic of the technology, make it extremely difficult to utilize traditional intrusion detection techniques. This paper describes an approach to accurately and efficiently detect potentially damaging activity in WSNs and MANETs. It enables the network as a whole to recognize attacks, anomalies, and potential vulnerabilities in a distributed manner that reflects the autonomic processes of biological systems. Each component of the network recognizes activity in its local environment and then contributes to the overall situational awareness of the entire system. The approach utilizes agent-based swarm intelligence to adaptively identify potential data sources on each node and on adjacent nodes throughout the network. The swarm agents then self-organize into modular neural networks that utilize a reinforcement learning algorithm to identify relevant behavior patterns in the data without supervision. Once the modular neural networks have established interconnectivity, both locally and with neighboring nodes, the analysis of events within the network can be conducted collectively in real time. The approach has been shown to be extremely effective in identifying distributed network attacks.

  1. Characterization of GM events by insert knowledge adapted re-sequencing approaches.

    PubMed

    Yang, Litao; Wang, Congmao; Holst-Jensen, Arne; Morisset, Dany; Lin, Yongjun; Zhang, Dabing

    2013-01-01

    Detection methods and data from molecular characterization of genetically modified (GM) events are needed by stakeholders such as public risk assessors and regulators. Generally, the molecular characteristics of GM events are incompletely revealed by current approaches, which are biased towards detecting transformation-vector derived sequences. GM events are classified based on available knowledge of the sequences of vectors and inserts (insert knowledge). Herein we present three insert knowledge-adapted approaches for the characterization of GM events (TT51-1 and T1c-19 rice as examples) based on paired-end re-sequencing, with the advantages of comprehensiveness, accuracy, and automation. The comprehensive molecular characteristics of the two rice events were revealed, including additional unintended insertions, compared with the results from PCR and Southern blotting. Comprehensive transgene characterization of TT51-1 and T1c-19 is shown to be independent of a priori knowledge of the insert and vector sequences employing the developed approaches. This provides an opportunity to identify and characterize unknown GM events as well. PMID:24088728

  2. An adaptive toolbox approach to the route to expertise in sport

    PubMed Central

    de Oliveira, Rita F.; Lobinger, Babett H.; Raab, Markus

    2014-01-01

    Expertise is characterized by fast decision-making which is highly adaptive to new situations. Here we propose that athletes use a toolbox of heuristics which they develop on their route to expertise. The development of heuristics occurs within the context of the athletes' natural abilities, past experiences, developed skills, and situational context, but does not pertain to any of these factors separately. The novelty of this approach lies in the integration of these separate factors determining expertise into a comprehensive heuristic description. It is our contention that talent identification methods and talent development models should therefore be geared toward the assessment and development of specific heuristics. Specifically, in addition to identifying and developing separate natural abilities and skills as per usual, heuristics should be identified and developed. The application of heuristics to talent and expertise models can bring the field one step away from dichotomized models of nature and nurture toward a comprehensive approach to the route to expertise. PMID:25071673

  4. Seeking mathematics success for college students: a randomized field trial of an adapted approach

    NASA Astrophysics Data System (ADS)

    Gula, Taras; Hoessler, Carolyn; Maciejewski, Wes

    2015-11-01

    Many students enter the Canadian college system with insufficient mathematical ability and leave the system with little improvement. Those students who enter with poor mathematics ability typically take a developmental mathematics course as their first and possibly only mathematics course. The educational experiences that comprise a developmental mathematics course vary widely and are, too often, ineffective at improving students' ability. This trend is concerning, since low mathematics ability is known to be related to lower rates of success in subsequent courses. To date, little attention has been paid to the selection of an instructional approach to consistently apply across developmental mathematics courses. Prior research suggests that an appropriate instructional method would involve explicit instruction and practising mathematical procedures linked to a mathematical concept. This study reports on a randomized field trial of a developmental mathematics approach at a college in Ontario, Canada. The new approach is an adaptation of the JUMP Math program, an explicit instruction method designed for primary and secondary school curricula, to the college learning environment. In this study, a subset of courses was assigned to JUMP Math and the remainder was taught in the same style as in the previous years. We found consistent, modest improvement in the JUMP Math sections compared to the non-JUMP sections, after accounting for potential covariates. The findings from this randomized field trial, along with prior research on effective education for developmental mathematics students, suggest that JUMP Math is a promising way to improve college student outcomes.

  5. An Efficient and Self-Adapted Approach to the Sharpening of Color Images

    PubMed Central

    Lee, Tien-Lin

    2013-01-01

    An efficient approach to the sharpening of color images is proposed in this paper. For this, the image to be sharpened is first transformed to the HSV color model, and then only the Value channel is used for sharpening while the other channels are left unchanged. We then apply a proposed edge detector and low-pass filter to the Value channel to pick out pixels around boundaries. After that, pixels detected as lying near edges or boundaries are adjusted so that the boundaries are sharpened, and non-edge pixels are kept unaltered. The increment or decrement magnitude added to the edge pixels is determined adaptively from global statistics of the image and local statistics of the pixel to be sharpened. With the proposed approach, discontinuities can be highlighted while most of the original information contained in the image is retained. Finally, the adjusted Value channel is integrated with the Hue and Saturation channels to obtain the sharpened color image. Extensive experiments on natural images are presented to demonstrate the effectiveness and efficiency of the proposed approach. PMID:24348136

  6. Detection of synchronization between chaotic signals: An adaptive similarity-based approach

    NASA Astrophysics Data System (ADS)

    Chen, Shyan-Shiou; Chen, Li-Fen; Wu, Yu-Te; Wu, Yu-Zu; Lee, Po-Lei; Yeh, Tzu-Chen; Hsieh, Jen-Chuen

    2007-12-01

    We present an adaptive similarity-based approach to detect generalized synchronization (GS) with n:m phase synchronization (PS), where n and m are integers and one of them is 1. This approach is based on the similarity index (SI) and Gaussian mixture model with the minimum description length criterion. The clustering method, which is shown to be superior to the closeness and connectivity of a continuous function, is employed in this study to detect the existence of GS with n:m PS. We conducted a computer simulation and a finger-lifting experiment to illustrate the effectiveness of the proposed method. In the simulation of a Rössler-Lorenz system, our method outperformed the conventional SI, and GS with 2:1 PS within the coupled system was found. In the experiment of self-paced finger-lifting movement, cortico-muscular GS with 1:2 and 1:3 PS was found between the surface electromyogram signals on the first dorsal interossei muscle and the magnetoencephalographic data in the motor area. The GS with n:m PS (n or m = 1) has been simultaneously resolved from both simulation and experiment. The proposed approach thereby provides a promising means for advancing research into both nonlinear dynamics and brain science.

  7. Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Huff, Daniel W.

    computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage.

  8. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
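
    To make the encounter-rate comparison concrete, here is a minimal simulation sketch under assumed conditions (a 50 x 50 quadrat grid with clustered "beds"); the adaptive design simply adds the four neighbours of any occupied quadrat, a simplification of the two-stage designs evaluated in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        grid = np.zeros((50, 50), dtype=int)
        for cx, cy in rng.integers(5, 45, size=(8, 2)):          # eight clustered beds
            for x, y in rng.normal([cx, cy], 1.5, size=(60, 2)).astype(int):
                if 0 <= x < 50 and 0 <= y < 50:
                    grid[x, y] += 1

        def rate_srs(n=100):
            """Fraction of occupied quadrats in a simple random sample."""
            cells = grid.ravel()[rng.choice(2500, size=n, replace=False)]
            return np.count_nonzero(cells) / n

        def rate_adaptive(n=100):
            """As above, but neighbours of occupied quadrats are also sampled."""
            sample = {(i // 50, i % 50) for i in rng.choice(2500, size=n, replace=False)}
            for x, y in list(sample):
                if grid[x, y] > 0:
                    sample.update((x + dx, y + dy) for dx, dy in
                                  ((1, 0), (-1, 0), (0, 1), (0, -1))
                                  if 0 <= x + dx < 50 and 0 <= y + dy < 50)
            return sum(grid[x, y] > 0 for x, y in sample) / len(sample)

        print(np.mean([rate_srs() for _ in range(200)]))        # low encounter rate
        print(np.mean([rate_adaptive() for _ in range(200)]))   # noticeably higher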

  9. A 3D approach for object recognition in illuminated scenes with adaptive correlation filters

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Víctor H.

    2015-09-01

    In this paper we solve the problem of pose recognition of a 3D object in non-uniformly illuminated and noisy scenes. The recognition system employs a bank of space-variant correlation filters constructed with an adaptive approach based on local statistical parameters of the input scene. The position and orientation of the target are estimated with the help of the filter bank. For an observed input frame, the algorithm computes the correlation process between the observed image and the bank of filters using a combination of data and task parallelism by taking advantage of a graphics processing unit (GPU) architecture. The pose of the target is estimated by finding the template that better matches the current view of target within the scene. The performance of the proposed system is evaluated in terms of recognition accuracy, location and orientation errors, and computational performance.
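
    As an illustration of the matching step only (not the authors' filter-synthesis method), the sketch below correlates a frame against a small template bank in the frequency domain, which is the operation typically parallelized on a GPU; plain numpy FFTs stand in for the GPU kernels, and the templates are random stand-ins.

        import numpy as np

        def correlate(scene, template):
            """Circular cross-correlation via the FFT (peak = best match location)."""
            S = np.fft.fft2(scene)
            T = np.fft.fft2(template, s=scene.shape)
            return np.fft.ifft2(S * np.conj(T)).real

        rng = np.random.default_rng(0)
        scene = rng.normal(size=(128, 128))
        bank = {angle: rng.normal(size=(16, 16)) for angle in (0, 45, 90)}  # stand-ins
        scene[40:56, 40:56] += 5 * bank[45]              # embed the 45-degree view

        scores = {angle: correlate(scene, tpl).max() for angle, tpl in bank.items()}
        best = max(scores, key=scores.get)
        loc = np.unravel_index(correlate(scene, bank[best]).argmax(), scene.shape)
        print("estimated orientation:", best, "at", loc)  # -> 45 at (40, 40)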

  10. A disturbance observer-based adaptive control approach for flexure beam nano manipulators.

    PubMed

    Zhang, Yangming; Yan, Peng; Zhang, Zhen

    2016-01-01

    This paper presents a systematic modeling and control methodology for a two-dimensional flexure beam-based servo stage supporting micro/nano manipulations. Compared with conventional mechatronic systems, such systems have major control challenges, including cross-axis coupling, dynamical uncertainties, and input saturations, which may have adverse effects on system performance unless effectively eliminated. A novel disturbance observer-based adaptive backstepping-like control approach is developed for high precision servo manipulation purposes, which effectively accommodates model uncertainties and coupling dynamics. An auxiliary system is also introduced, on top of the proposed control scheme, to compensate for the input saturations. The proposed control architecture is deployed on a custom-designed nano-manipulating system featuring a flexure beam structure and voice coil actuators (VCA). Real time experiments on various manipulating tasks, such as trajectory/contour tracking, demonstrate precision errors of less than 1%. PMID:26546099

  11. Approach for Structurally Clearing an Adaptive Compliant Trailing Edge Flap for Flight

    NASA Technical Reports Server (NTRS)

    Miller, Eric J.; Lokos, William A.; Cruz, Josue; Crampton, Glen; Stephens, Craig A.; Kota, Sridhar; Ervin, Gregory; Flick, Pete

    2015-01-01

    The Adaptive Compliant Trailing Edge (ACTE) flap was flown on the National Aeronautics and Space Administration (NASA) Gulfstream GIII testbed at the NASA Armstrong Flight Research Center. This smoothly curving flap replaced the existing Fowler flaps, creating a seamless control surface. This compliant structure, developed by FlexSys Inc. in partnership with the Air Force Research Laboratory, supported NASA objectives for airframe structural noise reduction, aerodynamic efficiency, and wing weight reduction through gust load alleviation. A thorough structures airworthiness approach was developed to move this project safely to flight. A combination of industry and NASA standard practices requires various structural analyses, ground testing, and health monitoring techniques to show an airworthy structure. This paper provides an overview of the compliant structure design, the structural ground testing leading up to flight, and the flight envelope expansion and monitoring strategy. Flight data will be presented, and lessons learned along the way will be highlighted.

  12. Event-driven approach of layered multicast to network adaptation in RED-based IP networks

    NASA Astrophysics Data System (ADS)

    Nahm, Kitae; Li, Qing; Kuo, C.-C. J.

    2003-11-01

    In this work, we investigate the congestion control problem for layered video multicast in IP networks with active queue management (AQM), using a simple random early detection (RED) queue model. AQM support from networks improves the visual quality of video streaming but makes network adaptation more difficult for existing layered video multicast protocols that use the event-driven timer-based approach. We perform a simplified analysis of the response of the RED algorithm to burst traffic. The analysis shows that the primary problem lies in the weak correlation between the network feedback and the actual network congestion status when the RED queue is driven by burst traffic. Finally, a design guideline for the layered multicast protocol is proposed to overcome this problem.
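
    For reference, a minimal sketch of the RED queue model used in the analysis is given below (the count-based drop-probability adjustment of full RED is omitted); the thresholds and averaging weight are illustrative.

        import random

        W_Q, MIN_TH, MAX_TH, MAX_P = 0.002, 5, 15, 0.1
        avg = 0.0                                   # EWMA of the instantaneous queue

        def red_drop(queue_len):
            """Return True if an arriving packet should be dropped/marked."""
            global avg
            avg = (1 - W_Q) * avg + W_Q * queue_len
            if avg < MIN_TH:
                return False
            if avg >= MAX_TH:
                return True
            return random.random() < MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)

        # A burst drives the instantaneous queue up, but the small weight W_Q makes
        # the average (and hence the drop feedback) lag the actual congestion state,
        # which is the weak correlation discussed above.
        drops = sum(red_drop(30) for _ in range(500))
        print(drops, "drops during a 500-packet burst")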

  13. Spin Adapted versus Broken Symmetry Approaches in the Description of Magnetic Coupling in Heterodinuclear Complexes.

    PubMed

    Costa, Ramon; Valero, Rosendo; Reta Mañeru, Daniel; Moreira, Ibério de P R; Illas, Francesc

    2015-03-10

    The performance of a series of wave function and density functional theory based methods in predicting the magnetic coupling constant of a family of heterodinuclear magnetic complexes has been studied. For the former, the accuracy is similar to other simple cases involving homodinuclear complexes, the main limitation being a sufficient inclusion of dynamical correlation effects. Nevertheless, these series of calculations provide an appropriate benchmark for density functional theory based methods. Here, the usual broken symmetry approach provides a convenient framework to predict the magnetic coupling constants but requires deriving the appropriate mapping. At variance with simple dinuclear complexes, spin projection based techniques cannot recover the corresponding (approximate) spin adapted solution. Present results also show that current implementation of spin flip techniques leads to unphysical results. PMID:26579753

  14. A Rate Function Approach to Computerized Adaptive Testing for Cognitive Diagnosis.

    PubMed

    Liu, Jingchen; Ying, Zhiliang; Zhang, Stephanie

    2015-06-01

    Computerized adaptive testing (CAT) is a sequential experiment design scheme that tailors the selection of experiments to each subject. Such a scheme measures subjects' attributes (unknown parameters) more accurately than the regular prefixed design. In this paper, we consider CAT for diagnostic classification models, for which attribute estimation corresponds to a classification problem. After a review of existing methods, we propose an alternative criterion based on the asymptotic decay rate of the misclassification probabilities. The new criterion is then developed into new CAT algorithms, which are shown to achieve the asymptotically optimal misclassification rate. Simulation studies are conducted to compare the new approach with existing methods, demonstrating its effectiveness, even for moderate length tests. PMID:24327068

  15. Application of ameliorative and adaptive approaches to revegetation of historic high altitude mining waste

    SciTech Connect

    Bellitto, M.W.; Williams, H.T.; Ward, J.N.

    1999-07-01

    High altitude, historic, gold and silver tailings deposits, which included a more recent cyanide heap leach operation, were decommissioned, detoxified, re-contoured and revegetated. Detoxification of the heap included rinsing with hydrogen peroxide, lime and ferric chloride, followed by evaporation and land application of the remaining solution. Grading included the removal of solution ponds, construction of a geosynthetic/clay lined pond, heap removal and site drainage development. Ameliorative and adaptive revegetation methodologies were utilized. Revegetation was complicated by limited access, lack of topsoil, low pH and elevated metals concentrations in the tailings, and a harsh climate. Water quality sampling results for the first year following revegetation indicate that reclamation activities have contributed to a decrease in metals and sediment loading to surface waters downgradient of the site. Procedures, methodologies and results, following the first year of vegetation growth, are provided.

  16. Empirical mode decomposition, an adaptive approach for interpreting shaft vibratory signals of large rotating machinery

    NASA Astrophysics Data System (ADS)

    Yang, Wenxian; Tavner, P. J.

    2009-04-01

    The Fourier transform (FT) has been the most popular method for analyzing large rotating machine shaft vibration problems, but it assumes that these vibration signals are linear and stationary. However, in reality this is not always true. Nonlinear and non-stationary shaft vibration signals are often encountered during the start-up and shut-down processes of the machines. Additionally, mechanical faults, for example rotor-to-stator rubbing, fluid excitation, part-loosening, and shaft cracking, are nonlinear. For these reasons, an accurate analysis of shaft vibration cannot always be achieved using the FT. An alternative tool, the wavelet transform (WT), is now being used to improve the situation, but efficiency is a problem, especially when applying the WT to the accurate analysis of large-scale, lengthy data sets. In view of the powerful capability of empirical mode decomposition (EMD) to process nonlinear/non-stationary signals, its algorithmic efficiency, and its satisfactory performance in minimizing energy leakage, the EMD is used in this paper to analyze the problem: the signals investigated are adaptively decomposed into a finite number of intrinsic mode functions (IMFs). The principal IMFs, identified using an energy-distribution threshold, dominate the signals' oscillation, so 'purified' shaft vibration signals can be reconstructed from these principal IMFs. To remove interference present in principal IMFs, an adaptive band-pass filter is designed, whose central frequency is automatically set to the frequency dominating the IMF being investigated. To facilitate the observation of transient shaft vibration, a transient shaft orbit (TSO) is constructed by introducing a timescale into the orbit drawing process. Nine mathematical criteria are also proposed to evaluate the shaft vibrations exhibited in the IMFs and TSOs. The novelty of this approach is that the EMD provides an adaptive, effective, and efficient way to obtain 'purified' shaft vibration
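
    A brief sketch of the principal-IMF selection by an energy-distribution threshold is given below, using the third-party PyEMD package (pip package EMD-signal) for the decomposition; the 10% energy threshold and the test signal are illustrative assumptions, not the paper's values.

        import numpy as np
        from PyEMD import EMD   # pip install EMD-signal

        t = np.linspace(0, 1, 2000)
        shaft = (np.sin(2 * np.pi * 25 * t)            # 1x running-speed component
                 + 0.3 * np.sin(2 * np.pi * 120 * t)   # higher-frequency fault band
                 + 0.1 * np.random.default_rng(0).normal(size=t.size))

        imfs = EMD()(shaft)                            # adaptive decomposition
        energy = np.array([np.sum(imf ** 2) for imf in imfs])
        principal = imfs[energy / energy.sum() > 0.10] # energy-distribution threshold
        purified = principal.sum(axis=0)               # 'purified' vibration signal
        print(f"{len(imfs)} IMFs, {len(principal)} principal")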

  17. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  18. Planning sample sizes when effect sizes are uncertain: The power-calibrated effect size approach.

    PubMed

    McShane, Blakeley B; Böckenholt, Ulf

    2016-03-01

    Statistical power and thus the sample size required to achieve some desired level of power depend on the size of the effect of interest. However, effect sizes are seldom known exactly in psychological research. Instead, researchers often possess an estimate of an effect size as well as a measure of its uncertainty (e.g., a standard error or confidence interval). Previous proposals for planning sample sizes either ignore this uncertainty thereby resulting in sample sizes that are too small and thus power that is lower than the desired level or overstate the impact of this uncertainty thereby resulting in sample sizes that are too large and thus power that is higher than the desired level. We propose a power-calibrated effect size (PCES) approach to sample size planning that accounts for the uncertainty associated with an effect size estimate in a properly calibrated manner: sample sizes determined on the basis of the PCES are neither too small nor too large and thus provide the desired level of power. We derive the PCES for comparisons of independent and dependent means, comparisons of independent and dependent proportions, and tests of correlation coefficients. We also provide a tutorial on setting sample sizes for a replication study using data from prior studies and discuss an easy-to-use website and code that implement our PCES approach to sample size planning. PMID:26651984
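
    For orientation, the sketch below shows the standard fixed-effect-size power calculation that PCES refines; PCES would replace the naive point estimate d_hat with a power-calibrated value derived from the estimate and its standard error (the calibration itself is given in the paper, not here).

        from statsmodels.stats.power import TTestIndPower

        d_hat = 0.45          # point estimate of the standardized effect size
        n_per_group = TTestIndPower().solve_power(effect_size=d_hat, alpha=0.05,
                                                  power=0.80,
                                                  alternative='two-sided')
        print(round(n_per_group))   # about 79 per group with the naive estimate

    Because this calculation treats d_hat as known, the resulting sample size inherits none of the estimate's uncertainty, which is exactly the deficiency the PCES approach is designed to correct.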

  19. Performances of a bent-crystal spectrometer adapted to resonant x-ray emission measurements on gas-phase samples

    SciTech Connect

    Journel, Loïc; El Khoury, Lara; Marin, Thierry; Guillemin, Renaud; Carniato, Stephane; Avila, Antoine; Delaunay, Renaud; Hague, Coryn F.; Simon, Marc

    2009-09-15

    We describe a bent-crystal spectrometer adapted to measure x-ray emission resulting from core-level excitation of gas-phase molecules in the 0.8-8 keV energy range. The spectrometer is based on the Johann principle, and uses a microfocused photon beam to provide high resolution (resolving power of ~7500). A gas cell was designed to hold a high-pressure (300 mbar) sample of gas while maintaining a high vacuum (10^-9 mbar) in the chamber. The cell was designed to optimize the counting rate (2000 cts/s at the maximum of the Cl Kα emission line) while minimizing self-absorption. An example of the Kα emission lines of CH3Cl molecules is presented to illustrate the capabilities of this new instrument.

  20. Adaptation of a speciation sampling cartridge for measuring ammonia flux from cattle feedlots using relaxed eddy accumulation

    NASA Astrophysics Data System (ADS)

    Baum, K. A.; Ham, J. M.

    Improved measurements of ammonia losses from cattle feedlots are needed to quantify the national NH3 emissions inventory and evaluate management techniques for reducing emissions. Speciation cartridges composed of glass honeycomb denuders and filter packs were adapted to measure gaseous NH3 and aerosol NH4+ fluxes using relaxed eddy accumulation (REA). Laboratory testing showed that a cartridge equipped with four honeycomb denuders had a total capture capacity of 1800 μg of NH3. In the field, a pair of cartridges was deployed adjacent to a sonic anemometer and an open-path gas analyzer on a mobile tower. High-speed valves were attached to the inlets of the cartridges and controlled by a datalogger so that up- and down-moving eddies were independently sampled based on the direction of the vertical wind speed and a user-defined deadband. Air flowed continuously through the cartridges even when not sampling by means of a recirculating air handling system. Eddy covariance measurements of CO2 and H2O, as measured by the sonic and open-path gas analyzer, were used to determine the relaxation factor needed to compute REA-based fluxes. The REA system was field tested at the Beef Research Unit at Kansas State University in the summer and fall of 2007. Daytime NH3 emissions ranged between 68 and 127 μg m-2 s-1; fluxes tended to follow a diurnal pattern correlated with latent heat flux. Daily fluxes of NH3 were between 2.5 and 4.7 g m-2 d-1 and on average represented 38% of fed nitrogen. Aerosol NH4+ fluxes were negligible compared with NH3 emissions. An REA system designed around the high-capacity speciation cartridges can be used to measure NH3 fluxes from cattle feedlots and other strong sources. The system could be adapted to measure fluxes of other gases and aerosols.
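
    A worked sketch of the REA flux calculation the system relies on, F = b * sigma_w * (C_up - C_down), with the relaxation factor b calibrated from a scalar measured by eddy covariance (CO2 here); all numbers are illustrative assumptions, not values from the study.

        import numpy as np

        rng = np.random.default_rng(3)
        w = rng.normal(0.0, 0.4, 36000)               # vertical wind (m s-1), 10 Hz, 1 h
        c = 650 + 40 * w + rng.normal(0, 5, 36000)    # CO2 density correlated with w

        sigma_w = w.std()
        F_ec = np.mean((w - w.mean()) * (c - c.mean()))      # eddy-covariance flux
        b = F_ec / (sigma_w * (c[w > 0].mean() - c[w < 0].mean()))
        print(round(b, 3))        # ~0.63 for ideal Gaussian turbulence; field values ~0.56

        C_up, C_down = 850.0, 420.0   # accumulated NH3 in up/down cartridges (ug m-3)
        F_nh3 = b * sigma_w * (C_up - C_down)                # REA flux equation
        print(round(F_nh3, 1), "ug m-2 s-1")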

  1. Reliable and sample saving gene expression analysis approach for diagnostic tool development.

    PubMed

    Port, Matthias; Seidl, Christof; Ruf, Christian G; Riecke, Armin; Meineke, Viktor; Abend, Michael

    2012-08-01

    This work answers the question of whether it is necessary to hybridize individual instead of pooled RNA samples on microarrays for screening gene targets suitable as diagnostic tools for radiation exposure scenarios, while at the same time meeting comparable microarray quality criteria. For developing new clinical diagnostic tools, a two-stage study design was employed in five projects. At first, pooled and not individual RNA samples were hybridized on microarrays for screening purposes. Potential gene candidates were selected based on their fold-change only. This was followed by a validation/quantification step using individual RNA samples and quantitative RT-PCR. Quality criteria from the screening approach with pooled RNA samples were compared with published data from the MicroArray Quality Control (MAQC) consortium that hybridized each reference RNA sample separately and established quality criteria for microarrays. When comparing both approaches, only insignificant differences for quality criteria such as false positives, sensitivity, specificity, and overall agreement were found. However, material, costs, and time were drastically reduced when hybridizing pooled RNA and gene targets applicable for clinical diagnostic purposes could be successfully selected. In search of new diagnostic tools for radiation exposure scenarios, the two stage study design using either pooled or individual RNA samples on microarrays shows comparable quality criteria, but the RNA pooling approach saves unique material, costs, and efforts and successfully selects gene targets that can be used for the desired diagnostic purposes. PMID:22951474

  2. A margin based approach to determining sample sizes via tolerance bounds.

    SciTech Connect

    Newcomer, Justin T.; Freeland, Katherine Elizabeth

    2013-09-01

    This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
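
    A hedged sketch of the underlying computation: for an assumed performance mean and standard deviation, find the smallest n at which a 95%/90% one-sided normal tolerance bound still sits inside the requirement, i.e. the estimated uncertainty no longer exceeds the margin. The mean, standard deviation, and requirement below are illustrative.

        import numpy as np
        from scipy import stats

        def k_factor(n, p=0.90, conf=0.95):
            """One-sided normal tolerance-bound factor via the noncentral t."""
            nc = stats.norm.ppf(p) * np.sqrt(n)
            return stats.nct.ppf(conf, df=n - 1, nc=nc) / np.sqrt(n)

        mean, sd, requirement = 80.0, 5.0, 100.0     # assumed performance statistics
        for n in range(3, 50):
            upper_bound = mean + k_factor(n) * sd
            if upper_bound < requirement:   # uncertainty no longer exceeds margin
                print("smallest adequate sample size:", n)
                break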

  3. An error function minimization approach for the inverse problem of adaptive mirrors tuning

    NASA Astrophysics Data System (ADS)

    Vannoni, Maurizio; Yang, Fan; Siewert, Frank; Sinn, Harald

    2014-09-01

    Adaptive x-ray optics are increasingly used in synchrotron beamlines, and they will probably be considered for future high-power free-electron laser sources, such as the European XFEL now under construction in Hamburg, and similar projects under discussion. These facilities will deliver a high-power x-ray beam, with a high heat load expected on the optics. For this reason, bendable mirrors are required to actively compensate for the resulting wavefront distortion. In addition, the mirror may also have intrinsic surface defects, such as polishing errors or mounting stresses. In order to correct the mirror surface with the high precision needed to maintain its challenging requirements, the mirror surface is usually characterized with high-accuracy metrology to calculate the actuator pulse functions and to assess its initial shape. After that, singular value decomposition (SVD) is used to find the signals to be applied to the actuators to reach the desired surface deformation or correction. But in some cases this approach may not be robust enough for the needed performance. We present here a comparison between the classical SVD method and an error-function minimization based on root-mean-square calculation. Some examples are provided, using a simulation of the European XFEL mirror design as a case study, and the performance of the algorithms is evaluated in order to reach the ultimate quality in different scenarios. The approach can easily be generalized to other situations as well.
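
    For context, the baseline SVD step being compared against can be written in a few lines; the influence (pulse-function) matrix below is a random stand-in for measured actuator responses, and the truncation threshold is an illustrative choice.

        import numpy as np

        rng = np.random.default_rng(7)
        R = rng.normal(size=(200, 12))        # 200 surface points, 12 actuators
        target = rng.normal(size=200)         # desired surface correction

        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        keep = s > 1e-3 * s[0]                # truncate weak, noise-amplifying modes
        signals = Vt.T[:, keep] @ ((U[:, keep].T @ target) / s[keep])

        residual = R @ signals - target
        print("rms residual:", np.sqrt(np.mean(residual ** 2)))

    The alternative discussed in the paper instead minimizes the RMS error function directly with a numerical optimizer, which can be made robust to actuator limits and ill-conditioning at the cost of more function evaluations.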

  4. Adaptive fuzzy approach to modeling of operational space for autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Musilek, Petr; Gupta, Madan M.

    1998-10-01

    Robots operating in an unstructured environment need high level of modeling of their operational space in order to plan a suitable path from an initial position to a desired goal. From this perspective, operational space modeling seems to be crucial to ensure a sufficient level of autonomy. In order to compile the information from various sources, we propose a fuzzy approach to evaluate each unit region on a grid map by a certain value of transition cost. This value expresses the cost of movement over the unit region: the higher the value, the more expensive the movement through the region in terms of energy, time, danger, etc. The approach for modeling, proposed in this paper, employs fuzzy granulation of information on various terrain features and their combination based on a fuzzy neural network. In order to adapt to the changing environmental conditions, and to improve the validity of constructed cost maps on-line, the system can be endowed with learning abilities. The learning subsystem would change parameters of the fuzzy neural network based decision system by reinforcements derived from comparisons of the actual cost of transition with the cost obtained from the model.

  5. An adaptive remaining energy prediction approach for lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Zhang, Chenbin; Chen, Zonghai

    2016-02-01

    With the growing number of electric vehicle (EV) applications, the function of the battery management system (BMS) becomes more sophisticated. The accuracy of remaining energy estimation is critical for energy optimization and management in EVs. Therefore the state-of-energy (SoE) is defined to indicate the remaining available energy of the batteries. Considering that there are inevitable accumulated errors caused by current and voltage integral method, an adaptive SoE estimator is first established in this paper. In order to establish a reasonable battery equivalent model, based on the experimental data of the LiFePO4 battery, a data-driven model is established to describe the relationship between the open-circuit voltage (OCV) and the SoE. What is more, the forgetting factor recursive least-square (RLS) method is used for parameter identification to get accurate model parameters. Finally, in order to analyze the robustness and the accuracy of the proposed approach, different types of dynamic current profiles are conducted on the lithium-ion batteries and the performances are calculated and compared. The results indicate that the proposed approach has robust and accurate SoE estimation results under dynamic working conditions.
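
    A minimal forgetting-factor RLS update of the kind used for the on-line parameter identification is sketched below; the two-parameter regressor is an illustrative stand-in, not the paper's OCV-SoE battery model.

        import numpy as np

        lam = 0.98                                   # forgetting factor
        theta = np.zeros(2)                          # parameter estimates
        P = np.eye(2) * 1e3                          # covariance

        def rls_step(phi, y):
            """One recursive least-squares update with forgetting."""
            global theta, P
            k = P @ phi / (lam + phi @ P @ phi)      # gain vector
            theta = theta + k * (y - phi @ theta)
            P = (P - np.outer(k, phi @ P)) / lam

        rng = np.random.default_rng(0)
        true = np.array([3.3, -0.05])                # e.g. intercept and slope terms
        for _ in range(500):
            phi = np.array([1.0, rng.uniform(0, 100)])
            rls_step(phi, phi @ true + rng.normal(0, 0.01))
        print(theta)                                 # converges near [3.3, -0.05]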

  6. An adaptive level set approach for incompressible two-phase flows

    SciTech Connect

    Sussman, M.; Almgren, A.S.; Bell, J.B.

    1997-04-01

    In Sussman, Smereka and Osher, a numerical method using the level set approach was formulated for solving incompressible two-phase flow with surface tension. In the level set approach, the interface is represented as the zero level set of a smooth function; this has the effect of replacing the advection of density, which has steep gradients at the interface, with the advection of the level set function, which is smooth. In addition, the interface can merge or break up with no special treatment. The authors maintain the level set function as the signed distance from the interface in order to robustly compute flows with high density ratios and stiff surface tension effects. In this work, they couple the level set scheme to an adaptive projection method for the incompressible Navier-Stokes equations, in order to achieve higher resolution of the interface with a minimum of additional expense. They present two-dimensional axisymmetric and fully three-dimensional results of air bubble and water drop computations.

  7. A self-adaptive mean-shift segmentation approach based on graph theory for high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Luwan; Han, Ling; Ning, Xiaohong

    2015-12-01

    A new automatic segmentation approach based on graph theory, named self-adaptive mean-shift, is proposed in this paper for high-resolution remote sensing images. This approach overcomes a defect of classic mean-shift, which requires a fixed bandwidth to be determined through repeated trials, and can effectively distinguish different features in texture-rich regions. Segmentation experiments were carried out on WorldView satellite imagery. The results show that the presented method is adaptive and that its speed and precision satisfy application requirements, so it is a robust automatic segmentation algorithm.
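
    As a generic illustration of bandwidth-adaptive mean-shift (not the authors' graph-theoretic formulation), scikit-learn's estimate_bandwidth replaces the trial-and-error tuning criticized above; the per-pixel features here are synthetic stand-ins.

        import numpy as np
        from sklearn.cluster import MeanShift, estimate_bandwidth

        rng = np.random.default_rng(0)
        # Stand-in for per-pixel features (e.g. colour + position) of a small image.
        pixels = np.vstack([rng.normal(m, 0.3, size=(300, 3)) for m in (0.0, 2.0, 4.0)])

        bw = estimate_bandwidth(pixels, quantile=0.2)     # data-driven bandwidth
        labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(pixels)
        print("segments found:", labels.max() + 1)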

  8. The Fate of Early Experience Following Developmental Change: Longitudinal Approaches to Individual Adaptation in Childhood.

    ERIC Educational Resources Information Center

    Sroufe, L. Alan; And Others

    1990-01-01

    Examined Bowlby's proposition that early experiences and the adaptations to which they give rise influence later development, even beyond the influence of current circumstances or very recent adaptation. Groups whose adaptations were similar during the preschool years but consistently different earlier were defined and compared. Results supported…

  9. Adapting Rational Unified Process (RUP) approach in designing a secure e-Tendering model

    NASA Astrophysics Data System (ADS)

    Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman

    2016-08-01

    e-Tendering is the electronic processing of tender documents via the internet, allowing tenderers to publish, communicate, access, receive and submit all tender-related information and documentation online. This study aims to design an e-Tendering system using the Rational Unified Process (RUP) approach. RUP provides a disciplined approach to assigning tasks and responsibilities within the software development process. RUP has four phases that can assist researchers in adjusting the requirements of various projects with different scopes, problems and sizes. RUP is characterized as a use-case driven, architecture-centered, iterative and incremental process model. However, the scope of this study focuses only on the Inception and Elaboration phases as steps to develop the model, and performs only three of the nine workflows (business modeling, requirements, and analysis and design). RUP has a strong focus on documents, and the activities in the inception and elaboration phases mainly concern the creation of diagrams and the writing of textual descriptions. The UML notation and the software program StarUML are used to support the design of the e-Tendering system. The e-Tendering design based on the RUP approach can contribute to e-Tendering developers and researchers in the e-Tendering domain. In addition, this study shows that RUP is one of the best system development methodologies that can be used as a research methodology in the Software Engineering domain related to the secure design of any observed application. This methodology has been tested in various studies in certain domains, such as simulation-based decision support, security requirement engineering, business modeling and secure system requirements. In conclusion, these studies showed that RUP is a good research methodology that can be adapted in any Software Engineering (SE) research domain that requires a few artifacts to be generated, such as use case modeling, misuse case modeling, activity

  10. Bioagent Sample Matching using Elemental Composition Data: an Approach to Validation

    SciTech Connect

    Velsko, S P

    2006-04-21

    Sample matching is a fundamental capability that can have high probative value in a forensic context if proper validation studies are performed. In this report we discuss the potential utility of using the elemental composition of two bioagent samples to decide if they were produced in the same batch, or by the same process. Using guidance from the recent NRC study of bullet lead analysis and other sources, we develop a basic likelihood ratio framework for evaluating the evidentiary weight of elemental analysis data for sample matching. We define an objective metric for comparing two samples, and propose a method for constructing an unbiased population of test samples. We illustrate the basic methodology with some existing data on dry Bacillus thuringiensis preparations, and outline a comprehensive plan for experimental validation of this approach.

  11. Using Past Data to Enhance Small Sample DIF Estimation: A Bayesian Approach

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Dorans, Neil J.; Grant, Mary C.; Blew, Edwin O.

    2009-01-01

    Test administrators often face the challenge of detecting differential item functioning (DIF) with samples of size smaller than that recommended by experts. A Bayesian approach can incorporate, in the form of a prior distribution, existing information on the inference problem at hand, which yields more stable estimation, especially for small…

  12. Evaluation of PCR Approaches for Detection of Bartonella bacilliformis in Blood Samples

    PubMed Central

    Gomes, Cláudia; Martinez-Puchol, Sandra; Pons, Maria J.; Bazán, Jorge; Tinco, Carmen; del Valle, Juana; Ruiz, Joaquim

    2016-01-01

    Background The lack of an effective diagnostic tool for Carrion's disease leads to misdiagnosis, wrong treatments and perpetuation of asymptomatic carriers living in endemic areas. Conventional PCR approaches have been reported as a diagnostic technique. However, neither the detection limit of these techniques nor their usefulness in low-bacteremia cases is clear. The aim of this study was to evaluate the detection limit of three PCR approaches. Methodology/Principal Findings We determined the detection limit of three different PCR approaches: Bartonella-specific 16S rRNA, fla and its genes. We also evaluated the viability of dry blood spots as a sample transport system. Our results show that the 16S rRNA PCR has the lowest detection limit, 5 CFU/μL, and is thus the best diagnostic PCR tool studied. Dry blood spots diminish the sensitivity of the assay. Conclusions/Significance Of the tested PCRs, the 16S rRNA PCR approach is the best for direct blood detection of acute cases of Carrion's disease. Although its use with dry blood spots eases the transport of samples from rural areas, a slight decrease in sensitivity was observed. The usefulness of PCR for detecting low-bacteremic or asymptomatic carriers is doubtful, underscoring the need for new, more sensitive techniques. PMID:26959642

  13. A model-based approach for making ecological inference from distance sampling data.

    PubMed

    Johnson, Devin S; Laake, Jeffrey L; Ver Hoef, Jay M

    2010-03-01

    We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method of making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and analysis of the Dubbo weed data set. In addition, a simple ad hoc method for handling overdispersion is also proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation. In addition, the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike's information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function. PMID:19459840

  14. Mixed linear model approach adapted for genome-wide association studies

    PubMed Central

    Zhang, Zhiwu; Ersoz, Elhan; Lai, Chao-Qiang; Todhunter, Rory J; Tiwari, Hemant K; Gore, Michael A; Bradbury, Peter J; Yu, Jianming; Arnett, Donna K; Ordovas, Jose M; Buckler, Edward S

    2010-01-01

    Mixed linear model (MLM) methods have proven useful in controlling for population structure and relatedness within genome-wide association studies. However, MLM-based methods can be computationally challenging for large datasets. We report a compression approach, called ‘compressed MLM’, that decreases the effective sample size of such datasets by clustering individuals into groups. We also present a complementary approach, ‘population parameters previously determined’ (P3D), that eliminates the need to re-compute variance components. We applied these two methods both independently and combined in selected genetic association datasets from human, dog and maize. The joint implementation of these two methods markedly reduced computing time and either maintained or improved statistical power. We used simulations to demonstrate the usefulness in controlling for substructure in genetic association datasets for a range of species and genetic architectures. We have made these methods available within an implementation of the software program TASSEL. PMID:20208535

  15. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
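
    A compact sketch of the two-stage procedure using the pycma package ("cma" on PyPI), with a toy misfit standing in for the PDE-constrained forward problem; the acceptance threshold and proposal scale are illustrative assumptions.

        import numpy as np
        import cma   # pip install cma

        def misfit(m):                      # toy stand-in for the forward problem
            return float(np.sum((m - 1.0) ** 2) + 0.1 * np.sum(np.abs(m)))

        # Stage 1: CMAES locates the low-misfit region of the 8-D model space.
        es = cma.CMAEvolutionStrategy(8 * [0.0], 0.5, {'verbose': -9})
        es.optimize(misfit)
        m_best, f_best = es.result.xbest, es.result.fbest

        # Stage 2: random sampling around the solution; keep near-equivalent models.
        rng = np.random.default_rng(0)
        ensemble = []
        while len(ensemble) < 200:
            cand = m_best + rng.normal(0, 0.1, size=8)
            if misfit(cand) <= 1.2 * f_best:   # 'equivalent' within 20% of best
                ensemble.append(cand)
        print(np.std(ensemble, axis=0))        # per-parameter equivalence spread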

  17. ADAPTIVE ANNEALED IMPORTANCE SAMPLING FOR MULTIMODAL POSTERIOR EXPLORATION AND MODEL SELECTION WITH APPLICATION TO EXTRASOLAR PLANET DETECTION

    SciTech Connect

    Liu, Bin

    2014-07-01

    We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem to be how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
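
    The effective sample size criterion used to score the mixture proposal can be computed in a few lines from the importance-sampling log-weights (a standard formula, shown here for concreteness):

        import numpy as np

        def ess(log_w):
            """Effective sample size (sum w)^2 / sum(w^2), stable in log space."""
            w = np.exp(np.asarray(log_w) - np.max(log_w))
            return w.sum() ** 2 / np.sum(w ** 2)

        # Example: log posterior density minus log proposal density at the IS draws.
        log_w = np.random.default_rng(0).normal(0.0, 1.0, 1000)
        print(ess(log_w))   # well below 1000 when proposal and posterior mismatch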

  18. Pulsed photothermal profiling of water-based samples using a spectrally composite reconstruction approach

    NASA Astrophysics Data System (ADS)

    Majaron, B.; Milanič, M.

    2010-03-01

    Pulsed photothermal profiling involves reconstruction of temperature depth profile induced in a layered sample by single-pulse laser exposure, based on transient change in mid-infrared (IR) emission from its surface. Earlier studies have indicated that in watery tissues, featuring a pronounced spectral variation of mid-IR absorption coefficient, analysis of broadband radiometric signals within the customary monochromatic approximation adversely affects profiling accuracy. We present here an experimental comparison of pulsed photothermal profiling in layered agar gel samples utilizing a spectrally composite kernel matrix vs. the customary approach. By utilizing a custom reconstruction code, the augmented approach reduces broadening of individual temperature peaks to 14% of the absorber depth, in contrast to 21% obtained with the customary approach.

  19. A normative inference approach for optimal sample sizes in decisions from experience.

    PubMed

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    "Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  20. Adaptive Function in Preschoolers in Relation to Developmental Delay and Diagnosis of Autism Spectrum Disorders: Insights from a Clinical Sample

    ERIC Educational Resources Information Center

    Milne, Susan L.; McDonald, Jenny L.; Comino, Elizabeth J.

    2013-01-01

    This study aims to explore the relationship between developmental ability, autism and adaptive skills in preschoolers. Adaptive function was assessed in 152 preschoolers with autism, with and without developmental delay, and without autism, with and without developmental delay. Their overall adaptive function, measured by the general adaptive…

  1. Experimental Approach for Deep Proteome Measurements from Small-Scale Microbial Biomass Samples.

    SciTech Connect

    Thompson, Melissa R; Chourey, Karuna; Froelich, Jennifer M.; Erickson, Brian K; Verberkmoes, Nathan C; Hettich, Robert {Bob} L

    2008-01-01

    Many methods of microbial proteome characterization require large quantities of cellular biomass (> 1-2 g) for sample preparation and protein identification. Our experimental approach differs from traditional techniques by providing the ability to identify the proteomic state of a microbe from a few milligrams of starting cellular material. The small-scale, guanidine-lysis method minimizes sample loss by achieving cellular lysis and protein digestion in a single-tube experiment. For this experimental approach, the freshwater microbe Shewanella oneidensis MR-1 and the purple non-sulfur bacterium Rhodopseudomonas palustris CGA0010 were used as model organisms for technology development and evaluation. A 2-D LC-MS/MS comparison between a standard sonication lysis method and the small-scale guanidine-lysis technique demonstrates that the guanidine-lysis method is more efficient with smaller sample amounts of cell pellet (i.e. down to 1 mg). The described methodology would enable deep proteome measurements from a few milliliters of confluent bacterial cultures. We also report a new protocol for efficient lysis from small amounts of natural biofilm samples for deep proteome measurements, which should greatly enhance the emerging field of microbial community proteomics. This straightforward sample boiling protocol is complementary to the small-scale guanidine-lysis technique, is amenable for small sample quantities, and requires no special reagents that might complicate the MS measurements.

  2. Technical note: An improved approach to determining background aerosol concentrations with PILS sampling on aircraft

    NASA Astrophysics Data System (ADS)

    Fukami, Christine S.; Sullivan, Amy P.; Ryan Fulgham, S.; Murschell, Trey; Borch, Thomas; Smith, James N.; Farmer, Delphine K.

    2016-07-01

    Particle-into-Liquid Samplers (PILS) have become a standard aerosol collection technique, and are widely used in both ground and aircraft measurements in conjunction with off-line ion chromatography (IC) measurements. Accurate and precise background samples are essential to account for gas-phase components not efficiently removed and for any interference in the instrument lines, collection vials or off-line analysis procedures. For aircraft sampling with PILS, backgrounds are typically taken with in-line filters to remove particles prior to sample collection once or twice per flight, with more numerous backgrounds taken on the ground. Here, we use data collected during the Front Range Air Pollution and Photochemistry Éxperiment (FRAPPÉ) to demonstrate not only that multiple background filter samples are essential to attain a representative background, but also that the chemical background signals do not follow the Gaussian statistics typically assumed. Instead, the background signals for all chemical components analyzed from 137 background samples (taken from ∼78 total sampling hours over 18 flights) follow a log-normal distribution, meaning that the typical approaches of averaging background samples and/or assuming a Gaussian distribution cause an overestimation of the background, and thus an underestimation of sample concentrations. Our approach of deriving backgrounds from the peak of the log-normal distribution results in detection limits of 0.25, 0.32, 3.9, 0.17, 0.75 and 0.57 μg m⁻³ for sub-micron aerosol nitrate (NO₃⁻), nitrite (NO₂⁻), ammonium (NH₄⁺), sulfate (SO₄²⁻), potassium (K⁺) and calcium (Ca²⁺), respectively. The difference between backgrounds calculated assuming a Gaussian distribution and those from a log-normal distribution was most extreme for NH₄⁺, resulting in a background that was 1.58× that determined from fitting a log-normal distribution.
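
    The statistical point, that the peak of a fitted log-normal gives a lower (and here more appropriate) background than a Gaussian mean, can be sketched with synthetic blanks; the numbers below are invented stand-ins for the FRAPPÉ background samples, and `scipy.stats.lognorm` is used for the fit.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Synthetic stand-in for 137 background filter samples (e.g. ug/m^3);
    # the real FRAPPE values are not reproduced here.
    blanks = rng.lognormal(mean=-1.0, sigma=0.8, size=137)

    # Gaussian treatment: background = arithmetic mean of the blanks.
    bg_gauss = blanks.mean()

    # Log-normal treatment: fit with location fixed at zero and take the
    # distribution peak (mode) as the background estimate.
    s, loc, scale = stats.lognorm.fit(blanks, floc=0)
    bg_lognorm = scale * np.exp(-s**2)   # mode of a log-normal

    print(f"Gaussian background:   {bg_gauss:.3f}")
    print(f"log-normal background: {bg_lognorm:.3f}  (lower, as argued above)")
    ```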

  3. An Adaptive Management Approach for Summer Water Level Reductions on the Upper Mississippi River System

    USGS Publications Warehouse

    Johnson, B.L.; Barko, J.W.; Clevenstine, R.; Davis, M.; Galat, D.L.; Lubinski, S.J.; Nestler, J.M.

    2010-01-01

    The primary purpose of this report is to provide an adaptive management approach for learning more about summer water level reductions (drawdowns) as a management tool, including where and how drawdowns can be applied most effectively within the Upper Mississippi River System. The report reviews previous drawdowns conducted within the system and provides specific recommendations for learning more about the lesser-known effects of drawdowns and how the outcomes can be influenced by different implementation strategies and local conditions. The knowledge gained can be used by managers to determine how best to implement drawdowns in different parts of the UMRS to help achieve management goals. The information and recommendations contained in the report are derived from results of previous drawdown projects, insights from regional disciplinary experts, and the experience of the authors in experimental design, modeling, and monitoring. Modeling is a critical part of adaptive management and can involve conceptual models, simulation models, and empirical models. In this report we present conceptual models that express current understanding regarding the functioning of the UMRS as related to drawdowns and highlight interactions among key ecological components of the system. The models were developed within the constraints of drawdown timing, magnitude (depth), and spatial differences in effects (longitudinal and lateral) with attention to ecological processes affected by drawdowns. With input from regional experts, we focused on the responses of vegetation, fish, mussels, other invertebrates, and birds. The conceptual models reflect current understanding about relations and interactions among system components, the expected strength of those interactions, potential responses of system components to drawdowns, likelihood of the response occurring, and key uncertainties that limit our ability to make accurate predictions of effects (Table 1, Fig. 4-10). Based on this current

  4. An integrated stochastic approach to the assessment of agricultural water demand and adaptation to water scarcity

    NASA Astrophysics Data System (ADS)

    Foster, T.; Butler, A. P.; McIntyre, N.

    2012-12-01

    Increasing water demands from growing populations coupled with changing water availability, for example due to climate change, are likely to increase water scarcity. Agriculture will be exposed to risk due to the importance of reliable water supplies as an input to crop production. Assessing the efficiency of agricultural adaptation options requires a sound understanding of the relationship between crop growth and water application. However, most water resource planning models quantify agricultural water demand using highly simplified, temporally lumped estimates of crop-water production functions (CWPFs). Such CWPFs fail to capture the biophysical complexities in crop-water relations and mischaracterise farmers' ability to respond to water scarcity. Application of these models in policy analyses will be ineffective and may lead to unsustainable water policies. Crop simulation models provide an alternative means of defining the complex nature of the CWPF. Here we develop a daily water-limited crop model for this purpose. The model is based on the approach used in the FAO's AquaCrop model, balancing biophysical and computational complexities. We further develop the model by incorporating improved simulation routines to calculate the distribution of water through the soil profile. Consequently we obtain a more realistic representation of the soil water balance, with concurrent improvements in the prediction of water-limited yield. We introduce a methodology to utilise this model for the generation of stochastic crop-water production functions (SCWPFs). This is achieved by running the model iteratively with both time series of climatic data and variable quantities of irrigation water, employing a realistic rule-based approach to farm irrigation scheduling. This methodology improves the representation of potential crop yields, capturing both the variable effects of water deficits on crop yield and the stochastic nature of the CWPF due to climatic variability. Application to
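
    The SCWPF idea, one cloud of yield-versus-water points per irrigation level rather than a single deterministic curve, can be caricatured in a few lines. The sketch below compresses the daily AquaCrop-style model into a seasonal FAO-33 yield response with synthetic rainfall; every parameter is an illustrative assumption, not a value from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy seasonal parameters (illustrative, not calibrated to any site).
    ET_MAX, Y_MAX, KY = 500.0, 10.0, 1.25   # mm, t/ha, FAO-33 response factor

    def water_limited_yield(rain_mm, irrigation_mm):
        """Seasonal FAO-33 relation: 1 - Ya/Ym = Ky * (1 - ETa/ETm)."""
        et_actual = min(ET_MAX, rain_mm + irrigation_mm)
        return max(0.0, Y_MAX * (1.0 - KY * (1.0 - et_actual / ET_MAX)))

    # A stochastic CWPF: for each irrigation depth, sample many climate
    # seasons so the yield-water relation carries a spread.
    for irrigation in (0.0, 100.0, 200.0, 300.0):
        rain = rng.gamma(shape=4.0, scale=60.0, size=1000)  # synthetic seasons
        yields = np.array([water_limited_yield(r, irrigation) for r in rain])
        print(f"irrigation {irrigation:5.0f} mm -> "
              f"yield {yields.mean():4.2f} +/- {yields.std():4.2f} t/ha")
    ```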

  5. An Adaptive Approach to Family Intervention: Linking Engagement in Family-Centered Intervention to Reductions in Adolescent Problem Behavior

    ERIC Educational Resources Information Center

    Connell, Arin M.; Dishion, Thomas J.; Yasui, Miwa; Kavanagh, Kathryn

    2007-01-01

    This study used Complier Average Causal Effect analysis (CACE; see G. Imbens & D. Rubin, 1997) to examine the impact of an adaptive approach to family intervention in the public schools on rates of substance use and antisocial behavior among students ages 11-17. Students were randomly assigned to a family-centered intervention (N = 998) in 6th…

  6. A Comparison of an Expert Systems Approach to Computerized Adaptive Testing and an Item Response Theory Model.

    ERIC Educational Resources Information Center

    Frick, Theodore W.

    Expert systems can be used to aid decision making. A computerized adaptive test is one kind of expert system, although not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. Two versions of EXSPRT were developed, one with random…
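
    The stopping-rule half of EXSPRT is Wald's sequential probability ratio test. A generic SPRT on 0/1 item responses looks like the sketch below; the mastery/non-mastery proportions and error rates are illustrative choices, not the EXSPRT specification.

    ```python
    import math

    def sprt(responses, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
        """Wald's sequential probability ratio test on 0/1 item responses.
        H0: success probability p0 (non-mastery); H1: p1 (mastery).
        Returns a decision or 'continue testing'."""
        upper = math.log((1 - beta) / alpha)   # decide mastery above this
        lower = math.log(beta / (1 - alpha))   # decide non-mastery below this
        llr = 0.0
        for i, x in enumerate(responses, start=1):
            llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr >= upper:
                return f"mastery after {i} items"
            if llr <= lower:
                return f"non-mastery after {i} items"
        return "continue testing"

    print(sprt([1, 1, 1, 1, 1, 1, 1]))   # -> mastery after 7 items
    ```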

  7. Stability evaluation and improvement of adaptive optics systems by using the Lyapunov stability approach

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Chen, Tao; Liu, Xin-yue; Lin, Xu-dong; Yang, Xiao-xia; Li, Hong-zhuang

    2016-02-01

    In this research, investigations of the closed-loop control stability of adaptive optics systems are conducted by using the Lyapunov approach. As a direct metric of control stability, the error propagator includes the effects of both the integral gain and the influence matrix and is effective for control-stability evaluation. An experimental 97-element AO system is developed for the control-stability investigation, and the Southwell sensor-actuator configuration rather than the Fried geometry is adopted so as to suppress the potential waffle mode. Because filtering out small singular values of the influence matrix can be used to improve the control stability, the effect of the influence matrix and the effect of the integral gain are considered as a whole through the error propagator. The control stability of the AO system is then evaluated for varying integral gains and numbers of filtered-out singular values. An analysis of these error-propagator evaluations leads to the conclusion that control stability can be improved by filtering out more singular values of the influence matrix when the integral gain is high. In other words, the error propagator is useful for trading off the bandwidth error and the fitting error of AO systems from a control-stability standpoint. Finally, a performance measurement of the experimental AO system is conducted with 13 of the smaller singular values of the influence matrix filtered out, and the results show that filtering out a small fraction of the singular values has a minor influence on the performance of this AO system.
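
    The trade-off behind filtering singular values can be sketched with a toy influence matrix: inverting the smallest singular values amplifies sensor noise (which is what destabilizes the loop at high gain), while filtering them leaves part of the wavefront uncorrected. The spectrum, sizes and norms below are invented, and this sketch does not reproduce the paper's error propagator, which additionally folds in the integral gain.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy influence matrix with a wide singular-value spread; a real
    # 97-element system's matrix would come from interaction calibration.
    U, _ = np.linalg.qr(rng.normal(size=(97, 97)))
    V, _ = np.linalg.qr(rng.normal(size=(97, 97)))
    s = np.logspace(0, -4, 97)            # well-sensed ... near-waffle modes
    M = U @ np.diag(s) @ V.T

    for n_filtered in (0, 5, 13):
        keep = 97 - n_filtered
        s_inv = np.zeros_like(s)
        s_inv[:keep] = 1.0 / s[:keep]
        R = V @ np.diag(s_inv) @ U.T      # reconstructor, modes filtered out
        noise_gain = np.linalg.norm(R, 2)                  # sensor-noise gain
        fit_resid = np.linalg.norm(np.eye(97) - R @ M, 2)  # uncorrected part
        print(f"filtered={n_filtered:2d}  noise gain={noise_gain:10.1f}  "
              f"fitting residual={fit_resid:.2f}")
    ```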

  8. A knowledge-based approach to the adaptive finite element analysis

    SciTech Connect

    Haghighi, K.; Kang, E.

    1995-12-31

    An automatic and knowledge-based finite element mesh generator (INTELMESH), which makes extensive use of interactive computer graphics techniques, has been developed. INTELMESH is designed for planar domains and axisymmetric 3-D structures of elasticity and heat transfer subjected to mechanical and thermal loading. It intelligently identifies the critical regions/points in the problem domain and utilizes the new concepts of substructuring and wave propagation to choose the proper mesh size for them. INTELMESH generates well-shaped triangular elements by applying triangulation and Laplacian smoothing procedures. The adaptive analysis involves the initial finite element analysis and an efficient a posteriori error analysis and estimation. Once a problem is defined, the system automatically builds a finite element model and analyzes the problem through an automatic iterative process until the error reaches a desired level. It has been shown that the proposed approach, which initiates the process with an a priori, near-optimum mesh of the object, converges to the desired accuracy in less time and at lower cost.

  9. A graph-based approach to developing adaptive representations of complex reaction mechanisms

    SciTech Connect

    He, Kaiyuan; Ierapetritou, Marianthi G.; Androulakis, Ioannis P.

    2008-12-15

    An effective adaptive mechanism reduction approach based on flux graph clustering is proposed in this paper. The instantaneous element flux is quantified and considered as a proxy for describing the reactive propensities of the system. Our underlying hypothesis is that even though particular conditions may be characterized by a multitude of combinations of species mass fraction, T, and P, the essential chemistry, and hence the reaction propensity of the mixture that is active under this family of conditions, is the same. Therefore, we opt to use the instantaneous fluxes through the active reactions as an intrinsic property of the system. Flux graphs are first constructed for the chemical reaction system under numerous conditions aiming at capturing the attainable region. Similarity between flux graphs is quantified through the distances between corresponding vectors, using the cosine coefficient and a novel graph-distance metric taking into account the magnitude of each flux and the activity distribution of different fluxes. A hierarchical clustering algorithm is implemented to group similar instantaneous flux graphs into clusters, and consequently a reduced mechanism is generated for each cluster. A search algorithm is defined afterward to assign a new query point to a particular flux graph cluster, and subsequently the reduced mechanism associated with this cluster is used to describe the system at this time point. Finally, the methodology is demonstrated using n-pentane combustion in an adiabatic plug flow reactor model and a pairwise mixing stirred reactor model. (author)
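
    A minimal sketch of the clustering-and-assignment machinery, using cosine distance only (the paper additionally uses a magnitude-aware graph-distance metric) and synthetic flux vectors in place of kinetics-solver output:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(4)

    # Synthetic flux vectors: three regimes, each with a different set of
    # active reactions (30 reactions, 50 sampled conditions per regime).
    groups = []
    for k in range(3):
        g = np.abs(rng.normal(0.05, 0.02, size=(50, 30)))  # background activity
        g[:, 10 * k:10 * k + 10] += rng.lognormal(1.0, 0.3, size=(50, 10))
        groups.append(g)
    fluxes = np.vstack(groups)

    # Group similar flux vectors by hierarchical clustering on cosine
    # distance; each cluster would then receive its own reduced mechanism.
    labels = fcluster(linkage(fluxes, method="average", metric="cosine"),
                      t=3, criterion="maxclust")

    def assign(query):
        """Route a new query point to the cluster with the closest centroid."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        centroids = {c: fluxes[labels == c].mean(axis=0)
                     for c in np.unique(labels)}
        return max(centroids, key=lambda c: cos(query, centroids[c]))

    print("query assigned to cluster", assign(fluxes[0] * 1.1))
    ```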

  10. The role of idiotypic interactions in the adaptive immune system: a belief-propagation approach

    NASA Astrophysics Data System (ADS)

    Bartolucci, Silvia; Mozeika, Alexander; Annibale, Alessia

    2016-08-01

    In this work we use belief-propagation techniques to study the equilibrium behaviour of a minimal model for the immune system comprising interacting T and B clones. We investigate the effect of the so-called idiotypic interactions among complementary B clones on the system’s activation. Our results show that B–B interactions increase the system’s resilience to noise, making clonal activation more stable, while increasing the cross-talk between different clones. We derive analytically the noise level at which a B clone gets activated, in the absence of cross-talk, and find that this increases with the strength of idiotypic interactions and with the number of T cells sending signals to the B clones. We also derive, analytically and numerically, via population dynamics, the critical line where clonal cross-talk arises. Our approach allows us to derive the B clone size distribution, which can be experimentally measured and gives important information about the adaptive immune system response to antigens and vaccination.

  11. An adaptive approach to facilitating research productivity in a primary care clinical department.

    PubMed

    Weber-Main, Anne Marie; Finstad, Deborah A; Center, Bruce A; Bland, Carole J

    2013-07-01

    Efforts to foster the growth of a department's or school's research mission can be informed by known correlates of research productivity, but the specific strategies to be adopted will be highly context-dependent, influenced by local, national, and discipline-specific needs and resources. The authors describe a multifaceted approach, informed by a working model of organizational research productivity, by which the University of Minnesota Department of Family Medicine and Community Health (Twin Cities campus) successfully increased its collective research productivity during a 10-year period (1997-2007) and maintained these increases over time. Facing barriers to recruitment of faculty investigators, the department focused instead on nurturing high-potential investigators among their current faculty via a new, centrally coordinated research program, with provision of training, protected time, technical resources, mentoring, and a scholarly culture to support faculty research productivity. Success of these initiatives is documented by the following: substantial increases in the department's external research funding, rise to a sustained top-five ranking based on National Institutes of Health funding to U.S. family medicine departments, later-stage growth in the faculty's publishing record, increased research capacity among the faculty, and a definitive maturation of the department's research mission. The authors offer their perspectives on three apparent drivers of success with broad applicability, namely effective leadership, systemic culture change, and the self-awareness to adapt to changes in the local, institutional, and national research environment. PMID:23702527

  12. Estimating oxygen consumption from heart rate using adaptive neuro-fuzzy inference system and analytical approaches.

    PubMed

    Kolus, Ahmet; Dubé, Philippe-Antoine; Imbeau, Daniel; Labib, Richard; Dubeau, Denise

    2014-11-01

    In new approaches based on an adaptive neuro-fuzzy inference system (ANFIS) and an analytical method, heart rate (HR) measurements were used to estimate oxygen consumption (VO2). Thirty-five participants performed Meyer and Flenghi's step-test (eight of whom performed regeneration release work), during which heart rate and oxygen consumption were measured. Two individualized models and a General ANFIS model that does not require individual calibration were developed. Results indicated the superior precision achieved with individualized ANFIS modelling (RMSE = 1.0 and 2.8 ml/kg min in laboratory and field, respectively). The analytical model outperformed the traditional linear calibration and Flex-HR methods with field data. The General ANFIS model's estimates of VO2 were not significantly different from actual field VO2 measurements (RMSE = 3.5 ml/kg min). With its ease of use and low implementation cost, the General ANFIS model shows potential to replace any of the traditional individualized methods for VO2 estimation from HR data collected in the field. PMID:24793823

  13. Trickle-down evolution: an approach to getting major evolutionary adaptive changes into textbooks and curricula.

    PubMed

    Padian, Kevin

    2008-08-01

    Although contemporary high school and college textbooks of biology generally cover the principles and data of microevolution (genetic and populational change) and speciation rather well, coverage of what is known of the major changes in evolution (macroevolution), and how the evidence is understood is generally poor to nonexistent. It is critical to improve this because acceptance of evolution by the American public rests on the understanding of how we know what we know about the emergence of major new taxonomic groups, and about their adaptations, behaviors, and ecologies in geologic time. An efficient approach to this problem is to improve the illustrations in college textbooks to show the consilience of different lines of fossil, morphological, and molecular evidence mapped on phylogenies. Such "evograms" will markedly improve traditional illustrations of phylogenies, "menageries," and "companatomies." If "evograms" are installed at the college level, the basic principles and evidence of macroevolution will be more likely taught in K-12, thus providing an essential missing piece in biological education. PMID:21669782

  14. A chemodynamic approach for estimating losses of target organic chemicals from water during sample holding time

    USGS Publications Warehouse

    Capel, P.D.; Larson, S.J.

    1995-01-01

    Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle, and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimating the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.
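
    As one example of such a chemodynamic screening calculation, the equilibrium fraction of an analyte partitioned into bottle headspace follows from a dimensionless Henry's law constant; the constant and volumes below are illustrative, not values from the paper.

    ```python
    def headspace_fraction(kh_dimensionless, v_water_ml, v_headspace_ml):
        """Equilibrium fraction of an analyte in the headspace of a sample
        bottle: f = Kh*Vg / (Vw + Kh*Vg), with Kh = Cgas/Cwater."""
        return (kh_dimensionless * v_headspace_ml /
                (v_water_ml + kh_dimensionless * v_headspace_ml))

    # Illustrative: a volatile analyte (Kh ~ 0.2, typical of some VOCs) in a
    # 1 L bottle filled to leave 40 mL of headspace.
    print(f"{headspace_fraction(0.2, 960.0, 40.0):.1%} of analyte in headspace")
    ```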

  15. A false sense of security? Can tiered approach be trusted to accurately classify immunogenicity samples?

    PubMed

    Jaki, Thomas; Allacher, Peter; Horling, Frank

    2016-09-01

    Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic is crucially important for monitoring the unwanted immune response. Usually a multi-tiered approach, which rapidly screens for positive samples that are subsequently confirmed in a separate assay, is employed for testing patient samples for ADA activity. In this manuscript we evaluate the ability of different methods to classify subjects using screening and competition-based confirmatory assays. We find that the confirmation method is most important for the overall performance of the multi-stage process, with a t-test performing best when differences are moderate to large. Moreover, when differences between positive and negative samples are not sufficiently large, a competition-based confirmation step yields poor classification of positive samples. PMID:27262992
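
    A loose simulation of the tiered logic, with invented signal distributions and cut points: screen against an upper percentile of drug-naive negatives, then confirm with a t-test on drug-competition suppression (the confirmation method the authors found best for moderate-to-large differences).

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Synthetic assay signals in arbitrary units; effect sizes are invented.
    negatives = rng.normal(1.00, 0.10, size=200)   # drug-naive screening pool
    unspiked = rng.normal(1.25, 0.10, size=6)      # replicates, true positive
    spiked = unspiked - 0.20                       # competition w/ excess drug

    # Tier 1 (screen): flag anything above an upper percentile of negatives.
    screen_cut = np.quantile(negatives, 0.95)
    if unspiked.mean() <= screen_cut:
        print("screen-negative: stop")
    else:
        # Tier 2 (confirm): one-sided t-test for signal suppression by drug.
        t, p = stats.ttest_ind(unspiked, spiked, alternative="greater")
        print(f"confirmed positive: {p < 0.05} (one-sided p = {p:.4f})")
    ```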

  16. Approaches to Teaching: Adapting Cases in Operations Management for Use in the Technical Writing Classroom.

    ERIC Educational Resources Information Center

    Morrow, John

    1988-01-01

    Describes how technical writing teachers can adapt existing operations management cases for the writing classroom by recognizing communication gaps and filling them with appropriate writing scenarios. (ARH)

  17. Analytical approaches for determination of bromine in sediment core samples by X-ray fluorescence spectrometry.

    PubMed

    Pashkova, Galina V; Aisueva, Tatyana S; Finkelshtein, Alexander L; Ivanov, Egor V; Shchetnikov, Alexander A

    2016-11-01

    Bromine has been recognized as a valuable indicator for paleoclimatic studies. Wavelength dispersive X-ray fluorescence (WDXRF) and total reflection X-ray fluorescence (TXRF) methods were applied to study bromine distributions in lake sediment cores. The conventional WDXRF technique usually requires a relatively large mass of sediment sample and a set of calibration samples. Analytical approaches were developed to apply WDXRF to small sediment core samples in the absence of adequate calibration samples with a known Br content. The mass of sample to be analyzed was reduced to 200-300 mg, and the internal standard method with correction using fundamental parameters was developed for Br quantification. A TXRF technique based on the direct analysis of a solid suspension, using 20 mg of sediment sample and the internal standard method, was additionally tested. The accuracy of the WDXRF and TXRF techniques was assessed by comparative analysis of reference materials of sediments, soil and biological samples. In general, good agreement was achieved between the reference values and the measured values. The detection limits of Br were 1 mg/kg and 0.4 mg/kg for WDXRF and TXRF, respectively. The results of the Br determination obtained with the different XRF techniques were comparable to each other and were used for paleoclimatic reconstructions. PMID:27591627
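
    The internal-standard quantification at the core of this approach reduces to a ratio calculation; the sketch below uses a single user-supplied sensitivity factor where the paper applies a fundamental-parameters correction, and all numbers are illustrative.

    ```python
    def internal_standard_conc(i_analyte, i_istd, c_istd_added, k=1.0):
        """Internal-standard quantification: C_a = k * C_is * (I_a / I_is).
        k absorbs the relative sensitivity of the two lines; the paper
        corrects it with fundamental parameters, here it is user-supplied."""
        return k * c_istd_added * (i_analyte / i_istd)

    # Illustrative numbers: 10 mg/kg internal standard added, measured counts.
    print(f"Br ~ {internal_standard_conc(1250.0, 5100.0, 10.0, k=1.15):.2f} mg/kg")
    ```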

  18. An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians

    SciTech Connect

    Hughes, Ciaran; Mehta, Dhagash; Wales, David J.

    2014-05-21

    Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems.

  19. Evaluating the efficacy of adaptive management approaches: is there a formula for success?

    PubMed

    McFadden, Jamie E; Hiller, Tim L; Tyre, Andrew J

    2011-05-01

    Within the field of natural-resources management, the application of adaptive management is appropriate for complex problems high in uncertainty. Adaptive management is becoming an increasingly popular management-decision tool within the scientific community and has developed into two primary schools of thought: the Resilience-Experimentalist School (with high emphasis on stakeholder involvement, resilience, and highly complex models) and the Decision-Theoretic School (which results in relatively simple models through emphasizing stakeholder involvement for identifying management objectives). Because of these differences, adaptive management plans implemented under each of these schools may yield varying levels of success. We evaluated peer-reviewed literature focused on incorporation of adaptive management to identify components of successful adaptive management plans. Our evaluation included adaptive management elements such as stakeholder involvement, definitions of management objectives and actions, use and complexity of predictive models, and the sequence in which these elements were applied. We also defined a scale of degrees of success to make comparisons between the two adaptive management schools of thought. Our results include the relationship between the adaptive management process documented in the reviewed literature and our defined continuum of successful outcomes. Our data suggest an increase in the number of published articles with substantive discussion of adaptive management from 2000 to 2009 at a mean rate of annual change of 0.92 (r² = 0.56). Additionally, our examination of data for temporal patterns related to each school resulted in an increase in acknowledgement of the Decision-Theoretic School of thought at a mean annual rate of change of 0.02 (r² = 0.6679) and a stable acknowledgement for the Resilience-Experimentalist School of thought (r² = 0.0042; slope = 0.0013). Identifying the elements of successful adaptive management will be

  20. Comprehensive multiphase NMR spectroscopy: basic experimental approaches to differentiate phases in heterogeneous samples.

    PubMed

    Courtier-Murias, Denis; Farooq, Hashim; Masoom, Hussain; Botana, Adolfo; Soong, Ronald; Longstaffe, James G; Simpson, Myrna J; Maas, Werner E; Fey, Michael; Andrew, Brian; Struppe, Jochem; Hutchins, Howard; Krishnamurthy, Sridevi; Kumar, Rajeev; Monette, Martine; Stronks, Henry J; Hume, Alan; Simpson, André J

    2012-04-01

    Heterogeneous samples, such as soils, sediments, plants, tissues, foods and organisms, often contain liquid-, gel- and solid-like phases, and it is the synergism between these phases that determines their environmental and biological properties. Studying each phase separately can perturb the sample, removing important structural information such as chemical interactions at the gel-solid interface, kinetics across boundaries and conformation in the natural state. In order to overcome these limitations, a Comprehensive Multiphase-Nuclear Magnetic Resonance (CMP-NMR) probe has been developed, and is introduced here, that permits all bonds in all phases to be studied and differentiated in whole, unaltered natural samples. The CMP-NMR probe is built with high-power circuitry and Magic Angle Spinning (MAS), is fitted with a lock channel and pulsed field gradients, and is fully susceptibility matched. Consequently, this novel NMR probe has to cover all HR-MAS aspects without compromising power handling, to permit the full range of solution-, gel- and solid-state experiments available today. Using this technology, both structures and interactions can be studied independently in each phase, as well as transfer/interactions between phases within a heterogeneous sample. This paper outlines some basic experimental approaches using a model heterogeneous multiphase sample containing liquid-, gel- and solid-like components in water, yielding separate ¹H and ¹³C spectra for the different phases. In addition, ¹⁹F performance is also addressed. To illustrate the capability of ¹⁹F NMR, soil samples containing two different contaminants are used, demonstrating a preliminary but real-world application of this technology. This novel NMR approach possesses great potential for the in situ study of natural samples in their native state. PMID:22425441

  2. An efficient approach for Mars Sample Return using emerging commercial capabilities

    NASA Astrophysics Data System (ADS)

    Gonzales, Andrew A.; Stoker, Carol R.

    2016-06-01

    Mars Sample Return is the highest priority science mission for the next decade as recommended by the 2011 Decadal Survey of Planetary Science (Squyres, 2011 [1]). This article presents the results of a feasibility study for a Mars Sample Return mission that efficiently uses emerging commercial capabilities expected to be available in the near future. The motivation of our study was the recognition that emerging commercial capabilities might be used to perform Mars Sample Return with an Earth-direct architecture, and that this may offer a simpler and lower-cost approach. The objective of the study was to determine whether these capabilities can be used to optimize the number of mission systems and launches required to return the samples, with the goal of achieving the desired simplicity. All of the major elements required for the Mars Sample Return mission are described. Mission system elements were analyzed either with direct techniques or by using parametric mass-estimating relationships. The analysis shows the feasibility of a complete and closed Mars Sample Return mission design based on the following scenario: A SpaceX Falcon Heavy launch vehicle places a modified version of a SpaceX Dragon capsule, referred to as "Red Dragon", onto a Trans Mars Injection trajectory. The capsule carries all the hardware needed to return to Earth orbit samples collected by a prior mission, such as the planned NASA Mars 2020 sample collection rover. The payload includes a fully fueled Mars Ascent Vehicle; a fueled Earth Return Vehicle; support equipment; and a mechanism to transfer samples from the sample cache system onboard the rover to the Earth Return Vehicle. The Red Dragon descends to land on the surface of Mars using Supersonic Retropropulsion. After collected samples are transferred to the Earth Return Vehicle, the single-stage Mars Ascent Vehicle launches the Earth Return Vehicle from the surface of Mars to a Mars phasing orbit. After a brief phasing period, the

  3. Adaptive patch-based POCS approach for super resolution reconstruction of 4D-CT lung data

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Cao, Lei; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu

    2015-08-01

    Image enhancement of lung four-dimensional computed tomography (4D-CT) data is highly important because image resolution remains a crucial point in lung cancer radiotherapy. In this paper, we propose a method for lung 4D-CT super resolution (SR) using an adaptive patch-based projection onto convex sets (POCS) approach, in contrast with the global POCS SR algorithm, to recover fine details with fewer artifacts in images. The main contribution of this patch-based approach is that interfering local structure from other phases can be rejected by employing a similar-patch adaptive selection strategy. The effectiveness of our approach is demonstrated through experiments on simulated images and real lung 4D-CT datasets. A comparison with previously published SR reconstruction methods highlights the favorable characteristics of the proposed method.

  4. Comparing Stream DOC Fluxes from Sensor- and Sample-Based Approaches

    NASA Astrophysics Data System (ADS)

    Shanley, J. B.; Saraceno, J.; Aulenbach, B. T.; Mast, A.; Clow, D. W.; Hood, K.; Walker, J. F.; Murphy, S. F.; Torres-Sanchez, A.; Aiken, G.; McDowell, W. H.

    2015-12-01

    DOC transport by streamwater is a significant flux that does not consistently show up in ecosystem carbon budgets. In an effort to quantify stream DOC flux, we analyzed three to four years of high-frequency in situ fluorescing dissolved organic matter (FDOM) concentrations and turbidity measured by optical sensors at the five diverse forested and/or alpine headwater sites of the U.S. Geological Survey (USGS) Water, Energy, and Biogeochemical Budgets (WEBB) program. FDOM serves as a proxy for DOC. We also took discrete samples over a range of hydrologic conditions, using both manual weekly and automated event-based sampling. After compensating FDOM for temperature effects and turbidity interference (which was successful even at the high-turbidity Luquillo, PR site), we evaluated the DOC-FDOM relation based on discrete sample DOC analyses matched to corrected FDOM at the time of sampling. FDOM was a moderately robust predictor of DOC, with r² from 0.60 to more than 0.95 among sites. We then formed continuous DOC time series by two independent approaches: (1) DOC predicted from FDOM; and (2) the composite method, based on modeled DOC from regression on stream discharge, season, air temperature, and time, forcing the model to observations and adjusting modeled concentrations between observations by linearly interpolated model residuals. DOC flux from each approach was then computed directly as concentration times discharge. DOC fluxes based on the sensor approach were consistently greater than those from the sample-based approach. At Loch Vale, CO (2.5 years) and Panola Mountain, GA (1 year), the difference was 5-17%. At Sleepers River, VT (3 years), preliminary differences were greater than 20%. The difference is driven by the largest events, but we are investigating these results further. We will also present comparisons from Luquillo, PR, and Allequash Creek, WI. The higher sensor-based DOC fluxes could result from their accuracy during hysteresis, which is difficult to model
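
    Once a continuous concentration series exists (from either approach), the flux computation itself is a direct integration of concentration times discharge; the series below are synthetic.

    ```python
    import numpy as np

    def doc_flux_kg(conc_mg_per_l, q_m3_per_s, dt_s=900.0):
        """Integrate DOC load from paired concentration (mg/L, i.e. g/m^3)
        and discharge (m^3/s) series sampled every dt_s seconds; returns kg."""
        c = np.asarray(conc_mg_per_l)
        q = np.asarray(q_m3_per_s)
        return float(np.sum(c * q) * dt_s / 1000.0)

    # Illustrative 15-minute series spanning one storm event.
    conc = [2.0, 2.4, 3.8, 5.1, 4.2, 3.0, 2.5]   # mg/L, rising on the hydrograph
    q = [0.5, 0.9, 2.0, 3.5, 2.6, 1.4, 0.8]      # m^3/s
    print(f"event DOC export: {doc_flux_kg(conc, q):.1f} kg")
    ```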

  5. Development and Climate Change: A Mainstreaming Approach for Assessing Economic, Social, and Environmental Impacts of Adaptation Measures

    NASA Astrophysics Data System (ADS)

    Halsnæs, Kirsten; Trærup, Sara

    2009-05-01

    The paper introduces the so-called climate change mainstreaming approach, where vulnerability and adaptation measures are assessed in the context of general development policy objectives. The approach is based on the application of a limited set of indicators. These indicators are selected as representatives of focal development policy objectives, and a stepwise approach for addressing climate change impacts, development linkages, and the economic, social and environmental dimensions related to vulnerability and adaptation is introduced. Within this context, three case studies illustrate how development policy indicators can be used in practice to assess climate change impacts and adaptation measures: a road project in flood-prone areas of Mozambique, rainwater harvesting in the agricultural sector in Tanzania, and malaria protection in Tanzania. The conclusions of the paper confirm that climate risks can be reduced at relatively low cost, but uncertainty remains about some of the wider development impacts of implementing climate change adaptation measures.

  6. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library permitting efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is reduced primarily to specifying the serial single-grid application; the parallel and self-adaptive mesh refinement code is then obtained with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g., in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  7. An Open IMS-Based User Modelling Approach for Developing Adaptive Learning Management Systems

    ERIC Educational Resources Information Center

    Boticario, Jesus G.; Santos, Olga C.

    2007-01-01

    Adaptive LMS have not yet reached the eLearning marketplace due to methodological, technological and management open issues. At aDeNu group, we have been working on two key challenges for the last five years in related research projects. Firstly, develop the general framework and a running architecture to support the adaptive life cycle (i.e.,…

  8. Adaptive Link Generation for Multiperspective Thinking on the Web: An Approach to Motivate Learners to Think

    ERIC Educational Resources Information Center

    Mitsuhara, Hiroyuki; Kanenishi, Kazuhide; Yano, Yoneo

    2006-01-01

    To increase the efficiency of exploratory learning on the Web, we previously developed a free-hyperlink environment that allows adaptive link generation. In this environment, learners can make new hyperlinks independent of static hyperlinks and share them on the Web. To reduce hyperlink overflow, the adaptive link generation filters out sharable…

  9. The GANA Program: A Tailoring Approach to Adapting Parent Child Interaction Therapy for Mexican Americans

    ERIC Educational Resources Information Center

    McCabe, Kristen M.; Yeh, May; Garland, Ann F.; Lau, Anna S.; Chavez, Gloria

    2005-01-01

    The current manuscript describes the process of developing the GANA program, a version of PCIT that has been culturally adapted for Mexican American families. The adaptation process involved combining information from 1) clinical literature on Mexican American families, 2) empirical literature on barriers to treatment access and effectiveness, and…

  10. Student Approaches to Learning in Physics--Validity and Exploration Using Adapted SPQ

    ERIC Educational Resources Information Center

    Sharma, Manjula Devi; Stewart, Chris; Wilson, Rachel; Gokalp, Muhammed Sait

    2013-01-01

    The aim of this study was to investigate an adaptation of the Study Processes Questionnaire for the discipline of physics. A total of 2030 first year physics students at an Australian metropolitan university completed the questionnaire over three different year cohorts. The resultant data has been used to explore whether the adaptation of the…

  11. Comparing catchment sediment fingerprinting procedures using an auto-evaluation approach with virtual sample mixtures.

    PubMed

    Palazón, Leticia; Latorre, Borja; Gaspar, Leticia; Blake, William H; Smith, Hugh G; Navas, Ana

    2015-11-01

    Information on sediment sources in river catchments is required for effective sediment control strategies, to understand sediment, nutrient and pollutant transport, and for developing soil erosion management plans. Sediment fingerprinting procedures are employed to quantify sediment source contributions and have become a widely used tool. As fingerprinting procedures are naturally variable and locally dependent, there are different applications of the procedure. Here, the auto-evaluation of different fingerprinting procedures using virtual sample mixtures is proposed to support the selection of the fingerprinting procedure with the best capacity for source discrimination and apportionment. Surface samples from four land uses from a Central Spanish Pyrenean catchment were used i) as sources to generate the virtual sample mixtures and ii) to characterise the sources for the fingerprinting procedures. The auto-evaluation approach involved comparing fingerprinting procedures based on four optimum composite fingerprints selected by three statistical tests, three source characterisations (mean, median and corrected mean) and two types of objective functions for the mixing model. A total of 24 fingerprinting procedures were assessed by this new approach; they were solved by Monte Carlo simulations and compared using the root mean squared error (RMSE) between known and assessed source ascriptions for the virtual sample mixtures. It was found that the most accurate source ascriptions were achieved using the corrected mean source characterisations for the composite fingerprints selected by the Kruskal-Wallis H-test and principal components analysis. Based on the RMSE results, high goodness of fit (GOF) values were not always indicative of accurate source apportionment results, and care should be taken when using GOF to assess mixing model performance. The proposed approach to test different fingerprinting procedures using virtual sample mixtures provides an
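
    The core of the auto-evaluation loop, mixing known source proportions and scoring an unmixing procedure by RMSE against them, can be sketched as follows; the tracer values are synthetic, and a bounded least-squares solve stands in for the paper's Monte Carlo mixing model.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(6)

    # Tracer means for 4 sources x 8 tracer properties (synthetic stand-ins).
    sources = rng.uniform(1.0, 10.0, size=(4, 8))
    true_w = np.array([0.5, 0.3, 0.15, 0.05])   # known virtual-mixture recipe
    mixture = true_w @ sources + rng.normal(0, 0.05, size=8)

    # Unmix: bounded least squares with a heavily weighted sum-to-one row.
    A = np.vstack([sources.T, np.ones((1, 4)) * 100.0])
    b = np.concatenate([mixture, [100.0]])
    w = lsq_linear(A, b, bounds=(0.0, 1.0)).x

    rmse = np.sqrt(np.mean((w - true_w) ** 2))
    print("estimated contributions:", np.round(w, 3), f"RMSE = {rmse:.4f}")
    ```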

  12. Overcoming the matched-sample bottleneck: an orthogonal approach to integrate omic data

    PubMed Central

    Nguyen, Tin; Diaz, Diana; Tagett, Rebecca; Draghici, Sorin

    2016-01-01

    MicroRNAs (miRNAs) are small non-coding RNA molecules whose primary function is to regulate the expression of gene products via hybridization to mRNA transcripts, resulting in suppression of translation or mRNA degradation. Although miRNAs have been implicated in complex diseases, including cancer, their impact on distinct biological pathways and phenotypes is largely unknown. Current integration approaches require sample-matched miRNA/mRNA datasets, resulting in limited applicability in practice. Since these approaches cannot integrate heterogeneous information available across independent experiments, they neither account for bias inherent in individual studies, nor do they benefit from increased sample size. Here we present a novel framework able to integrate miRNA and mRNA data (vertical data integration) available in independent studies (horizontal meta-analysis) allowing for a comprehensive analysis of the given phenotypes. To demonstrate the utility of our method, we conducted a meta-analysis of pancreatic and colorectal cancer, using 1,471 samples from 15 mRNA and 14 miRNA expression datasets. Our two-dimensional data integration approach greatly increases the power of statistical analysis and correctly identifies pathways known to be implicated in the phenotypes. The proposed framework is sufficiently general to integrate other types of data obtained from high-throughput assays. PMID:27403564

  14. Depth Analogy: Data-Driven Approach for Single Image Depth Estimation Using Gradient Samples.

    PubMed

    Choi, Sunghwan; Min, Dongbo; Ham, Bumsub; Kim, Youngjung; Oh, Changjae; Sohn, Kwanghoon

    2015-12-01

    Inferring scene depth from a single monocular image is a highly ill-posed problem in computer vision. This paper presents a new gradient-domain approach, called depth analogy, that uses analogy as a means of synthesizing a target depth field when a collection of RGB-D image pairs is given as training data. Specifically, the proposed method employs a non-parametric learning process that creates an analogous depth field by sampling reliable depth gradients using visual correspondence established on training image pairs. Unlike existing data-driven approaches that directly select depth values from training data, our framework transfers depth gradients as reconstruction cues, which are then integrated by Poisson reconstruction. The performance of most conventional approaches relies heavily on the training RGB-D data used in the process, and such a dependency severely degrades the quality of reconstructed depth maps when the desired depth distribution of an input image is quite different from that of the training data, e.g., outdoor versus indoor scenes. Our key observation is that using depth gradients in the reconstruction is less sensitive to scene characteristics, providing better cues for depth recovery. Thus, our gradient-domain approach can support a great variety of training range datasets that involve substantial appearance and geometric variations. The experimental results demonstrate that our (depth) gradient-domain approach outperforms existing data-driven approaches working directly in the depth domain, even when only uncorrelated training datasets are available. PMID:26529766
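
    The reconstruction step, integrating a transferred gradient field by solving a Poisson equation, can be sketched with an FFT solver under periodic-boundary assumptions (a common stand-in; the abstract does not specify the solver used). The round-trip below checks that a smooth synthetic depth map is recovered from its own gradients.

    ```python
    import numpy as np

    def poisson_integrate(gx, gy):
        """Recover a depth field (up to a constant) whose gradients best match
        (gx, gy), via an FFT solve of the Poisson equation with periodic
        boundaries."""
        h, w = gx.shape
        div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
        u = 2 * np.pi * np.fft.fftfreq(w)
        v = 2 * np.pi * np.fft.fftfreq(h)
        denom = (2 * np.cos(u)[None, :] - 2) + (2 * np.cos(v)[:, None] - 2)
        denom[0, 0] = 1.0                     # avoid dividing the DC term by 0
        Z = np.fft.fft2(div) / denom
        Z[0, 0] = 0.0                         # pin the (unrecoverable) mean
        return np.real(np.fft.ifft2(Z))

    # Round-trip check on a smooth, periodic synthetic depth map.
    y, x = np.mgrid[0:64, 0:64] / 64.0
    depth = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
    rec = poisson_integrate(np.gradient(depth, axis=1),
                            np.gradient(depth, axis=0))
    print(f"max reconstruction error: {np.abs(rec - depth).max():.4f}")
    ```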

  15. Sampling for area estimation - A comparison of full-frame sampling with the sample segment approach. [from classifications of Landsat data

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The objective of this investigation was to evaluate the effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of Landsat MSS data. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different size sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  16. Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne

    We present a new approach to improving the convergence of Monte Carlo (MC) simulations of molecular systems belonging to complex energetic landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked with respect to conventional procedures on a model for melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies can also offer interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.
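
    A single-process caricature of the dynamic allocation idea (the scheme above is an evolutionary algorithm over parallel simulations with 'immortal' individuals): move frequencies are periodically reweighted toward move types with higher past efficiency, with a floor so no move type is eliminated. Everything below is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    moves = ["translate", "rotate", "reptation", "pivot"]
    weights = np.ones(len(moves))      # move frequencies, adapted over time
    scores = np.zeros(len(moves))      # accumulated efficiency per move type
    counts = np.zeros(len(moves))

    def attempt(i):
        """Stand-in for a real MC move: returns a sampling-efficiency score
        (e.g., accepted displacement); here some moves are simply 'better'."""
        return max(0.0, rng.normal([0.1, 0.2, 0.05, 0.4][i], 0.05))

    for sweep in range(50):
        for _ in range(100):
            i = rng.choice(len(moves), p=weights / weights.sum())
            scores[i] += attempt(i)
            counts[i] += 1
        # Reallocate frequencies toward moves with higher mean efficiency,
        # keeping a floor so every move type keeps being tried.
        mean_eff = scores / np.maximum(counts, 1)
        weights = 0.05 + mean_eff / mean_eff.sum()

    for m, w in zip(moves, weights / weights.sum()):
        print(f"{m:10s} frequency {w:.2f}")
    ```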

  17. An Adaptive Approach for Identifying Cocaine Dependent Patients Who Benefit from Extended Continuing Care

    PubMed Central

    McKay, James R.; Van Horn, Deborah; Lynch, Kevin G.; Ivey, Megan; Cary, Mark S.; Drapkin, Michelle; Coviello, Donna M.; Plebani, Jennifer G.

    2014-01-01

    Objective: This study tested whether cocaine dependent patients using cocaine or alcohol at intake or in the first few weeks of intensive outpatient treatment would benefit more from extended continuing care than patients abstinent during this period. The effect of incentives for continuing care attendance was also examined. Methods: Participants (N=321) were randomized to: treatment as usual (TAU), TAU and Telephone Monitoring and Counseling (TMC), or TAU and TMC plus incentives (TMC+). The primary outcomes were: (1) abstinence from all drugs and heavy alcohol use, and (2) cocaine urine toxicology. Follow-ups were at 3, 6, 9, 12, 18, and 24 months. Results: Cocaine and alcohol use at intake or early in treatment predicted worse outcomes on both measures (ps ≤ .0002). Significant effects favoring TMC over TAU on the abstinence composite were obtained in participants who used cocaine (OR=1.95 [1.02, 3.73]) or alcohol (OR=2.47 [1.28, 4.78]) at intake or early in treatment. A significant effect favoring TMC+ over TAU on cocaine urine toxicology was obtained in those using cocaine during that period (OR= 0.55 [0.31, 0.95]). Conversely, there were no treatment effects in participants abstinent at baseline, and no overall treatment main effects. Incentives almost doubled the number of continuing care sessions received, but did not further improve outcomes. Conclusion: An adaptive approach for cocaine dependence in which extended continuing care is provided only to patients who are using cocaine or alcohol at intake or early in treatment improves outcomes in this group while reducing burden and costs in lower risk patients. PMID:24041231

  18. Coupled modeling approach to assess climate change impacts on groundwater recharge and adaptation in arid areas

    NASA Astrophysics Data System (ADS)

    Hashemi, H.; Uvo, C. B.; Berndtsson, R.

    2015-10-01

    The effect of future climate scenarios on surface and groundwater resources was simulated using a modeling approach for an artificial recharge area in arid southern Iran. Future climate data for the periods of 2010-2030 and 2030-2050 were acquired from the Canadian Global Coupled Model (CGCM 3.1) for scenarios A1B, A2, and B1. These scenarios were adapted to the studied region using the delta-change method. A conceptual rainfall-runoff model (Qbox) was used to simulate runoff in a flash flood prone catchment. The model was calibrated and validated for the period 2002-2011 using daily discharge data. The projected climate variables were used to simulate future runoff. The rainfall-runoff model was then coupled to a calibrated groundwater flow and recharge model (MODFLOW) to simulate future recharge and groundwater hydraulic heads. As a result of the rainfall-runoff modeling, under the B1 scenario the number of floods is projected to increase slightly in the area. This in turn calls for proper management, as this is the only source of fresh water supply in the studied region. The results of the groundwater recharge modeling showed no significant difference between present and future recharge for all scenarios. Accordingly, four abstraction and recharge scenarios were assumed to simulate the groundwater level and recharge amount in the studied aquifer. The results showed that the abstraction scenarios have the most substantial effect on the groundwater level, and that continuation of the current pumping rate would lead to a groundwater decline of 18 m by 2050.
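
    The delta-change adaptation step itself is a simple per-month scaling of an observed climatology by GCM change factors; the sketch below uses the multiplicative form usual for precipitation, with invented numbers (temperature would typically use an additive delta instead).

    ```python
    import numpy as np

    def delta_change(obs_monthly_p, gcm_baseline_p, gcm_future_p):
        """Scale an observed monthly precipitation climatology by the ratio
        of GCM future to GCM baseline means, month by month."""
        factors = np.asarray(gcm_future_p) / np.asarray(gcm_baseline_p)
        return np.asarray(obs_monthly_p) * factors

    obs = [40, 35, 30, 18, 6, 1, 1, 1, 2, 10, 25, 38]    # mm/month, illustrative
    base = [40, 34, 31, 20, 7, 2, 1, 1, 3, 11, 24, 36]   # GCM control run
    fut = [36, 30, 26, 15, 5, 1, 1, 1, 2, 9, 23, 37]     # GCM 2030-2050 run
    print(np.round(delta_change(obs, base, fut), 1))
    ```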

  19. a Local Adaptive Approach for Dense Stereo Matching in Architectural Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2013-02-01

    In recent years, a demand for 3D models of various scales and precisions has been growing for a wide range of applications; among them, cultural heritage recording is a particularly important and challenging field. We outline an automatic 3D reconstruction pipeline, mainly focusing on dense stereo-matching which relies on a hierarchical, local optimization scheme. Our matching framework consists of a combination of robust cost measures, extracted via an intuitive cost aggregation support area and set within a coarse-to-fine strategy. The cost function is formulated by combining three individual costs: a cost computed on an extended census transformation of the images; the absolute difference cost, taking into account information from colour channels; and a cost based on the principal image derivatives. An efficient adaptive method of aggregating matching cost for each pixel is then applied, relying on linearly expanded cross skeleton support regions. Aggregated cost is smoothed via a 3D Gaussian function. Finally, a simple "winner-takes-all" approach extracts the disparity value with minimum cost. This keeps algorithmic complexity and system computational requirements acceptably low for high resolution images (or real-time applications), when compared to complex matching functions of global formulations. The stereo algorithm adopts a hierarchical scheme to accommodate high-resolution images and complex scenes. In a final step, a robust post-processing work-flow is applied to enhance the disparity map and, consequently, the geometric quality of the reconstructed scene. Successful results from our implementation, which combines pre-existing algorithms and novel considerations, are presented and evaluated on the Middlebury platform.
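
    A stripped-down, grayscale version of the cost construction and the "winner-takes-all" step, combining a census (Hamming) cost, an absolute-difference cost, and an x-derivative cost with illustrative weights; the aggregation over cross-skeleton regions and the hierarchical scheme described above are omitted.

    ```python
    import numpy as np

    def popcount32(x):
        """Per-element population count for a uint32 array."""
        x = x.copy()
        c = np.zeros(x.shape, dtype=np.uint32)
        for _ in range(32):
            c += x & np.uint32(1)
            x >>= np.uint32(1)
        return c

    def census5(im):
        """5x5 census transform: 24-bit neighbourhood signature per pixel."""
        sig = np.zeros(im.shape, dtype=np.uint32)
        for dy in range(-2, 3):
            for dx in range(-2, 3):
                if dy == 0 and dx == 0:
                    continue
                shifted = np.roll(np.roll(im, dy, axis=0), dx, axis=1)
                sig = (sig << np.uint32(1)) | (shifted < im).astype(np.uint32)
        return sig

    def wta_disparity(left, right, dmax, w_census=1.0, w_ad=0.3, w_grad=0.7):
        """Winner-takes-all over a combined census + absolute-difference +
        x-derivative cost volume (weights are illustrative)."""
        cl, cr = census5(left), census5(right)
        gl, gr = np.gradient(left, axis=1), np.gradient(right, axis=1)
        h, w = left.shape
        cost = np.full((dmax + 1, h, w), np.inf)
        for d in range(dmax + 1):
            ham = popcount32(cl[:, d:] ^ cr[:, :w - d])
            ad = np.abs(left[:, d:] - right[:, :w - d])
            gd = np.abs(gl[:, d:] - gr[:, :w - d])
            cost[d, :, d:] = w_census * ham + w_ad * ad + w_grad * gd
        return cost.argmin(axis=0)

    rng = np.random.default_rng(8)
    left = rng.random((60, 80))
    right = np.roll(left, -3, axis=1)   # synthetic pair, true disparity = 3 px
    disp = wta_disparity(left, right, dmax=8)
    print("median interior disparity:", np.median(disp[5:-5, 5:-5]))
    ```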

  20. Bioassessment Tools for Stony Corals: Monitoring Approaches and Proposed Sampling Plan for the U.S. Virgin Islands

    EPA Science Inventory

    This document describes three general approaches to the design of a sampling plan for biological monitoring of coral reefs. Status assessment, trend detection and targeted monitoring each require a different approach to site selection and statistical analysis. For status assessm...

  1. A comprehensive approach to the determination of two benzimidazoles in environmental samples.

    PubMed

    Wagil, Marta; Maszkowska, Joanna; Białk-Bielińska, Anna; Stepnowski, Piotr; Kumirska, Jolanta

    2015-01-01

    Among the various pharmaceuticals regarded as emerging pollutants, benzimidazoles--represented by flubendazole and fenbendazole--are of particular concern because of their large-scale use in veterinary medicine and their health effects on aquatic organisms. For this reason, it is essential to have reliable analytical methods which can be used to simultaneously monitor their appearance in environmental matrices such as water, sediment and tissue samples. To date, however, such methods covering these three matrices have not been available. In this paper we present a comprehensive approach to the determination of both drugs in the above-mentioned matrices using liquid chromatography-ion trap mass spectrometry (LC-MS/MS). Special attention was paid to the sample preparation step. The optimal extraction methods were further validated by experiments with spiked water, sediment and fish tissue samples. Matrix effects were established. The following absolute recoveries of flubendazole and fenbendazole were achieved: 96.2% and 95.4% from waters, 103.4% and 98.3% from sediments, and 98.3% and 97.6% from fish tissue samples, respectively. Validation of the LC-MS/MS methods enables flubendazole and fenbendazole to be determined with method detection limits of 1.6 ng L(-1) and 1.7 ng L(-1) in water samples, 0.3 ng g(-1) for both compounds in sediment samples, and 3.3 ng g(-1) and 3.5 ng g(-1) in tissue samples, respectively. The proposed methods were successfully used for analysing the selected pharmaceuticals in real samples collected in northern Poland. These are the first data on the environmental concentrations of the target compounds in Poland. PMID:24890838

  2. INTEGRATING EVOLUTIONARY AND FUNCTIONAL APPROACHES TO INFER ADAPTATION AT SPECIFIC LOCI

    PubMed Central

    Storz, Jay F.; Wheat, Christopher W.

    2010-01-01

    Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation. PMID:20500215

  3. Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
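
    As a concrete anchor for the Riccati-equation machinery, here is a minimal sketch that solves a continuous-time algebraic Riccati equation for an invented double-integrator plant using SciPy; the patent's parameter-dependent formulation would re-solve this equation as the parameter estimate evolves, which is not shown here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant and quadratic weights (all invented)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)           # state weight
R = np.array([[1.0]])   # control weight

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PB R^-1 B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # state-feedback gain, u = -K x
print(K)
```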

  4. Diagnosing Intellectual Disability in a Forensic Sample: Gender and Age Effects on the Relationship between Cognitive and Adaptive Functioning

    ERIC Educational Resources Information Center

    Hayes, Susan C.

    2005-01-01

    Background: The relationship between adaptive behaviour and cognitive functioning in offenders with intellectual disabilities is not well researched. This study aims to examine gender and age effects on the relationship between these two areas of functioning. Method: The "Vineland Adaptive Behavior Scales" (VABS) and the "Kaufman Brief…

  5. Developing an Instructional Material Using a Concept Cartoon Adapted to the 5E Model: A Sample of Teaching Erosion

    ERIC Educational Resources Information Center

    Birisci, Salih; Metin, Mustafa

    2010-01-01

    Using different instructional materials adapted within the constructivist learning theory will enhance students' conceptual understanding. From this point of view, an instructional instrument using a concept cartoon adapted with 5E model has developed and introduced in this study. The study has some deficiencies in investigating students'…

  6. MRI-based treatment plan simulation and adaptation for ion radiotherapy using a classification-based approach

    PubMed Central

    2013-01-01

    Background In order to benefit from the highly conformal irradiation of tumors in ion radiotherapy, sophisticated treatment planning and simulation are required. The purpose of this study was to investigate the potential of MRI for ion radiotherapy treatment plan simulation and adaptation using a classification-based approach. Methods Firstly, a voxelwise tissue classification was applied to derive pseudo CT numbers from MR images using up to 8 contrasts. Appropriate MR sequences and parameters were evaluated in cross-validation studies of three phantoms. Secondly, ion radiotherapy treatment plans were optimized using both MRI-based pseudo CT and reference CT and recalculated on reference CT. Finally, a target shift was simulated and a treatment plan adapted to the shift was optimized on a pseudo CT and compared to reference CT optimizations without plan adaptation. Results The derivation of pseudo CT values led to mean absolute errors in the range of 81 - 95 HU. The most significant deviations appeared at borders between air and different tissue classes and originated from partial volume effects. Simulations of ion radiotherapy treatment plans using pseudo CT for optimization revealed only small underdosages in distal regions of a target volume, with deviations of the mean PTV dose between 1.4 - 3.1% compared to reference CT optimizations. A plan adapted to the target volume shift and optimized on the pseudo CT exhibited target dose coverage comparable to that of a non-adapted plan optimized on a reference CT. Conclusions We were able to show that an MRI-based derivation of pseudo CT values using a purely statistical classification approach is feasible although no physical relationship exists. Large errors appeared for compact bone classes and came from an imperfect distinction of bones and other tissue types in MRI. In simulations of treatment plans, it was demonstrated that these deviations are comparable to the uncertainties of a target volume shift of 2 mm in two directions.

  7. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    PubMed

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Mixed-methods research currently lacks methods that more fully integrate qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
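
    A small sketch of the core idea: clustering binary code profiles with both K-means and hierarchical clustering, using simulated data in place of coded interviews. The matrix X, the code probabilities, and the choice of three clusters are assumptions for illustration, not the authors' data.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical binary code matrix: rows = participants, cols = code present
rng = np.random.default_rng(0)
X = (rng.random((50, 12)) < 0.3).astype(float)   # 50 interviews, 12 codes

# K-means on binary profiles (0/1 treated as numeric, as in the simulations)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Hierarchical clustering with a binary-friendly metric (Jaccard)
Z = linkage(X, method="average", metric="jaccard")
hc_labels = fcluster(Z, t=3, criterion="maxclust")

print(km.labels_[:10], hc_labels[:10])   # compare the two assignments
```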

  8. New Approach for IIR Adaptive Lattice Filter Structure Using Simultaneous Perturbation Algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Jorge Ivan Medina; Nakano, Kazushi; Higuchi, Kohji

    Adaptive infinite impulse response (IIR), or recursive, filters are less attractive mainly because of stability issues and the difficulties associated with their adaptive algorithms. Therefore, in this paper the adaptive IIR lattice filters are studied in order to devise algorithms that preserve the stability of the corresponding direct-form schemes. We analyze the local properties of stationary points and suggest a transformation achieving this goal, which yields algorithms that can be efficiently implemented. Application to the Steiglitz-McBride (SM) and Simple Hyperstable Adaptive Recursive Filter (SHARF) algorithms is presented. A modified version of Simultaneous Perturbation Stochastic Approximation (SPSA) is also presented in order to obtain the coefficients in lattice form more efficiently and with lower computational cost and complexity. The results are compared with previous lattice versions of these algorithms; these previous versions may fail to preserve the stability of stationary points.
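
    A minimal, generic SPSA sketch (not the authors' modified lattice version): each iteration estimates the full gradient from only two loss evaluations, whatever the dimension. The quadratic toy loss stands in for the filter-coefficient matching error; all constants are conventional defaults, not values from the paper.

```python
import numpy as np

def spsa_minimize(loss, theta, a=0.1, c=0.1, alpha=0.602, gamma=0.101,
                  n_iter=500, seed=0):
    """Simultaneous Perturbation Stochastic Approximation (SPSA)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k**alpha                 # decaying step size
        ck = c / k**gamma                 # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher
        # two loss evaluations give a full gradient estimate
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g_hat
    return theta

# Toy use: recover two coefficients by matching a reference
target = np.array([0.5, -0.3])
sol = spsa_minimize(lambda th: np.sum((th - target) ** 2), np.zeros(2))
print(sol)   # approaches [0.5, -0.3]
```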

  9. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
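
    The matrix measure at the heart of the estimate is easy to compute. The sketch below shows only the 2-norm matrix measure and one classical delay-independent sufficient condition from which a bound can be derived; the matrices are invented, and the paper's actual margin computation is more elaborate.

```python
import numpy as np

def mu2(A):
    """2-norm matrix measure (logarithmic norm): lambda_max((A + A^T)/2)."""
    return np.max(np.linalg.eigvalsh((A + A.T) / 2.0))

# For x'(t) = A x(t) + B x(t - tau), the condition mu2(A) + ||B||_2 < 0 is
# a classical delay-independent sufficient condition for stability;
# delay-dependent bounds refine it. Matrices here are illustrative only.
A = np.array([[-3.0, 1.0], [0.0, -2.0]])
B = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mu2(A) + np.linalg.norm(B, 2))   # negative => stable for any delay
```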

  10. A New Trans-Disciplinary Approach to Regional Integrated Assessment of Climate Impact and Adaptation in Agricultural Systems (Invited)

    NASA Astrophysics Data System (ADS)

    Antle, J. M.; Valdivia, R. O.; Jones, J.; Rosenzweig, C.; Ruane, A. C.

    2013-12-01

    This presentation provides an overview of the new methods developed by researchers in the Agricultural Model Inter-comparison and Improvement Project (AgMIP) for regional climate impact assessment and analysis of adaptation in agricultural systems. This approach represents a departure from approaches in the literature in several dimensions. First, the approach is based on the analysis of agricultural systems (not individual crops) and is inherently trans-disciplinary: it is based on a deep collaboration among a team of climate scientists, agricultural scientists and economists to design and implement regional integrated assessments of agricultural systems. Second, in contrast to previous approaches that have imposed future climate on models based on current socio-economic conditions, this approach combines bio-physical and economic models with a new type of pathway analysis (Representative Agricultural Pathways) to parameterize models consistent with a plausible future world in which climate change would be occurring. Third, adaptation packages for the agricultural systems in a region are designed by the research team with a level of detail that is useful to decision makers, such as research administrators and donors, who are making agricultural R&D investment decisions. The approach is illustrated with examples from AgMIP's projects currently being carried out in Africa and South Asia.

  11. Enhancement in xylose utilization using Kluyveromyces marxianus NIRE-K1 through evolutionary adaptation approach.

    PubMed

    Sharma, Nilesh Kumar; Behera, Shuvashish; Arora, Richa; Kumar, Sachin

    2016-05-01

    The evolutionary adaptation was carried out on the thermotolerant yeast Kluyveromyces marxianus NIRE-K1 at 45 °C for up to 60 batches to enhance its xylose utilization capability. The adapted strain showed a higher specific growth rate, a 3-fold higher xylose uptake rate and a shorter lag phase compared to the native strain. During aerobic growth, the adapted yeast showed 2.81-fold higher xylose utilization than the native strain. In anaerobic batch fermentation, the adapted yeast utilized about 91% of xylose in 72 h and produced 2.88 and 18.75 g l⁻¹ of ethanol and xylitol, respectively, which were 5.11- and 5.71-fold higher than those of the native strain. Ethanol yield, xylitol yield and specific sugar consumption rate obtained with the adapted cells were 1.57-, 1.65- and 4.84-fold higher than those of the native yeast, respectively. These results suggest that evolutionary adaptation will be a very effective strategy in the near future for economic lignocellulosic ethanol production. PMID:26886223

  12. Statistical approaches to account for false-positive errors in environmental DNA samples.

    PubMed

    Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid

    2016-05-01

    Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. PMID:26558345
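
    A toy maximum-likelihood version of the kind of false-positive occupancy model reviewed here: each site is occupied with probability psi, and detections arise with probability p11 at occupied sites and p10 (false positives) at unoccupied ones. All data are simulated and the code is a sketch, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(params, y, K):
    """Site-occupancy likelihood with false-positive detections.

    psi: occupancy prob.; p11: detection prob. at occupied sites;
    p10: false-positive detection prob. at unoccupied sites.
    """
    psi, p11, p10 = expit(params)                # logit scale -> (0, 1)
    occ = psi * p11**y * (1 - p11)**(K - y)
    unocc = (1 - psi) * p10**y * (1 - p10)**(K - y)
    return -np.sum(np.log(occ + unocc))

# Simulated detection histories: 200 sites, 5 PCR replicates each
rng = np.random.default_rng(1)
z = rng.random(200) < 0.6                        # true occupancy states
y = rng.binomial(5, np.where(z, 0.7, 0.05))      # detections per site

# Start with p11 > p10 to resolve the label-switching ambiguity
fit = minimize(neg_log_lik, x0=np.array([0.0, 1.0, -2.0]), args=(y, 5))
print(expit(fit.x))                              # estimates (psi, p11, p10)
```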

  13. A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications

    NASA Technical Reports Server (NTRS)

    Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco

    2015-01-01

    This paper presents a method for safe spacecraft autonomous maneuvering that applies robotic motion-planning techniques to spacecraft control. Specifically, the scenario we consider is an in-plane rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy-Wiltshire-Hill frame. The trajectory for the chaser spacecraft is generated in a receding-horizon fashion by executing a sampling-based robotic motion-planning algorithm named Fast Marching Trees (FMT), which efficiently grows a tree of trajectories over a set of probabilistically drawn samples in the state space. To enforce safety, the tree is only grown over actively safe samples, for which there exists a one-burn collision avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thruster failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generation process and appears amenable to real-time implementation on orbit. Simulation results are presented for a two-fault-tolerant spacecraft during autonomous approach to a single client in Low Earth Orbit.

  14. A Principled Approach to Deriving Approximate Conditional Sampling Distributions in Population Genetics Models with Recombination

    PubMed Central

    Paul, Joshua S.; Song, Yun S.

    2010-01-01

    The multilocus conditional sampling distribution (CSD) describes the probability that an additionally sampled DNA sequence is of a certain type, given that a collection of sequences has already been observed. The CSD has a wide range of applications in both computational biology and population genomics analysis, including phasing genotype data into haplotype data, imputing missing data, estimating recombination rates, inferring local ancestry in admixed populations, and importance sampling of coalescent genealogies. Unfortunately, the true CSD under the coalescent with recombination is not known, so approximations, formulated as hidden Markov models, have been proposed in the past. These approximations have led to a number of useful statistical tools, but it is important to recognize that they were not derived from, though were certainly motivated by, principles underlying the coalescent process. The goal of this article is to develop a principled approach to derive improved CSDs directly from the underlying population genetics model. Our approach is based on the diffusion process approximation and the resulting mathematical expressions admit intuitive genealogical interpretations, which we utilize to introduce further approximations and make our method scalable in the number of loci. The general algorithm presented here applies to an arbitrary number of loci and an arbitrary finite-alleles recurrent mutation model. Empirical results are provided to demonstrate that our new CSDs are in general substantially more accurate than previously proposed approximations. PMID:20592264

  15. A Bayesian approach for the estimation of patient compliance based on the last sampling information.

    PubMed

    Barrière, Olivier; Li, Jun; Nekka, Fahima

    2011-06-01

    Poor adherence to a drug prescription significantly impacts the efficacy and safety of a planned therapy. The relationship between drug intake and pharmacokinetics (PK) is only partially known. In this work, we focus on the so-called "inverse problem", concerned with the issue of retracing the patient compliance scenario using limited clinical knowledge. Using a reported Pop-PK model of imatinib, and accounting for the variability around its PK parameters, we were able to simulate a whole range of drug concentration values at a specific sampling point for a population of patients with all possible drug compliance profiles. Using a Bayesian decision rule, we developed a methodology for determining the compliance profile preceding a given sampling value. The adopted approach allows, for the first time, quantitative knowledge to be acquired about the compliance patterns having a causal effect on a given PK. Moreover, using a simulation approach, we were able to evaluate the evolution of the success rate of the retracing process in terms of the considered time period before sampling as well as the model-inherited variability. In conclusion, this work proposes, from a probability viewpoint, a solution to this inverse problem of compliance determination. PMID:21445612

  16. Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems

    SciTech Connect

    Omitaomu, Olufemi A; Ganguly, Auroop R; Jiao, Yu

    2009-01-01

    In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information-theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class, and mutual information as a measure of the information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social-simulation system. The first AB model, called ABM-1, is a complex AB model built with 10,000 agents in a desktop environment using aggregate data; the second AB model, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, with fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents with one of three behavioral modes (pro-insurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the test simulations focused on a test of three leadership theories - legitimacy, coercion, and representative - and two social mobilization theories - social influence and repression. The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required, not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.
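
    The two information measures are simple to compute from count data. Below is a minimal sketch with an invented joint count table of behavioral modes across two social classes; only the textbook definitions H(X) and I(X;Y) = H(X) + H(Y) - H(X,Y) are assumed, not the authors' data.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution."""
    p = np.asarray(counts, dtype=float).ravel()
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint count table."""
    joint = np.asarray(joint, dtype=float)
    return entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint)

# Invented joint counts of behavioral modes (pro-insurgent, neutral,
# pro-government) observed in two social classes
joint = np.array([[30, 12, 8],
                  [10, 40, 15],
                  [5, 13, 37]])
print(mutual_information(joint))   # bits shared between the two classes
```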

  17. Ball-and-Stick Local Elevation Umbrella Sampling: Molecular Simulations Involving Enhanced Sampling within Conformational or Alchemical Subspaces of Low Internal Dimensionalities, Minimal Irrelevant Volumes, and Problem-Adapted Geometries.

    PubMed

    Hansen, Halvor S; Hünenberger, Philippe H

    2010-09-14

    A new method, ball-and-stick local elevation umbrella sampling (B&S-LEUS), is proposed to enhance the sampling in computer simulations of (bio)molecular systems. It enables the calculation of conformational free-energy differences between states (or alchemical free-energy differences between molecules), even in situations where the definition of these states relies on a conformational subspace involving more than a few degrees of freedom. The B&S-LEUS method consists of the following steps: (A) choice of a reduced conformational subspace; (B) representation of the relevant states by means of spheres ("balls"), each associated with a biasing potential involving a one-dimensional radial memory-based term and a radial confinement term; (C) definition of a set of lines ("sticks") connecting these spheres, each associated with a biasing potential involving a one-dimensional longitudinal memory-based term and a transverse confinement term; (D) unification of the biasing potentials corresponding to the union of all of the spheres and lines (active subspace) into a single biasing potential according to the enveloping distribution sampling (EDS) scheme; (E) build-up of the memory using the local elevation (LE) procedure, leading to a biasing potential enabling a nearly uniform sampling (radially within the spheres, longitudinally within the lines) of the active subspace; (F) generation of a biased ensemble of configurations using this preoptimized biasing potential, following an umbrella sampling (US) approach; and (G) calculation of the relative free energies of the states via reweighting and state assignment. The main characteristics of this approach are: (i) a low internal dimensionality, that is, the memory only involves one-dimensional grids (acceptable memory requirements); (ii) a minimal irrelevant volume, that is, the conformational volume opened to sampling includes a minimal fraction of irrelevant regions in terms of the free energy of the physical system or of

  18. Micro-TLC Approach for Fast Screening of Environmental Samples Derived from Surface and Sewage Waters.

    PubMed

    Zarzycki, Paweł K; Slączka, Magdalena M; Włodarczyk, Elżbieta; Baran, Michał J

    2013-01-01

    In this work we demonstrated the analytical capability of the micro-planar (micro-TLC) technique, comprising one-dimensional and two-dimensional (2D) separation modes, to generate fingerprints of environmental samples originating from sewage and ecosystem waters. We showed that the elaborated separation and detection protocols are complementary to a previously invented HPLC method based on temperature-dependent inclusion chromatography and UV-DAD detection. The presented 1D and 2D micro-TLC chromatograms of SPE (solid-phase extraction) extracts were optimized for fast and low-cost screening of water samples collected from lakes and rivers located in the area of Middle Pomerania in the northern part of Poland. Moreover, we studied highly organic compounds loaded in the treated and untreated sewage waters obtained from the municipal wastewater treatment plant "Jamno" near Koszalin City (Poland). The analyzed environmental samples contained a number of substances characterized by a polarity range from estetrol to progesterone, as well as chlorophyll-related dyes previously isolated and pre-purified by a simple SPE protocol involving C18 cartridges. Optimization of the micro-TLC separation and quantification protocols for such samples is discussed from a practical point of view using simple separation efficiency criteria including total peak number, log(product ΔhR F), signal intensity and peak asymmetry. Outcomes of the presented analytical approach, especially using detection involving direct fluorescence (UV366/Vis) and phosphomolybdic acid (PMA) visualization, are compared with the UV-DAD HPLC-generated data reported previously. Chemometric investigation based on principal components analysis revealed that SPE extracts separated by micro-TLC and detected under fluorescence and PMA visualization modes can be used for robust sample fingerprinting even after long-term storage of the extracts (up to 4 years) at subambient temperature (-20 °C). Such an approach allows characterization of a wide range of sample components that

  19. Developing Coastal Adaptation to Climate Change in the New York City Infrastructure-Shed: Process, Approach, Tools, and Strategies

    NASA Technical Reports Server (NTRS)

    Rosenzweig, Cynthia; Solecki, William D.; Blake, Reginald; Bowman, Malcolm; Faris, Craig; Gornitz, Vivien; Horton, Radley; Jacob, Klaus; LeBlanc, Alice; Leichenko, Robin; Linkin, Megan; Major, David; O'Grady, Megan; Patrick, Lesley; Sussman, Edna; Yohe, Gary; Zimmerman, Rae

    2010-01-01

    While current rates of sea level rise and associated coastal flooding in the New York City region appear to be manageable by stakeholders responsible for communications, energy, transportation, and water infrastructure, projections for sea level rise and associated flooding in the future, especially those associated with rapid icemelt of the Greenland and West Antarctic Icesheets, may be beyond the range of current capacity because an extreme event might cause flooding and inundation beyond the planning and preparedness regimes. This paper describes the comprehensive process, approach, and tools developed by the New York City Panel on Climate Change (NPCC) in conjunction with the region's stakeholders who manage its critical infrastructure, much of which lies near the coast. It presents the adaptation approach and the sea-level rise and storm projections related to coastal risks developed through the stakeholder process. Climate change adaptation planning in New York City is characterized by a multi-jurisdictional stakeholder-scientist process, state-of-the-art scientific projections and mapping, and development of adaptation strategies based on a risk-management approach.

  20. Practical approach to determine sample size for building logistic prediction models using high-throughput data.

    PubMed

    Son, Dae-Soon; Lee, DongHyuk; Lee, Kyusang; Jung, Sin-Ho; Ahn, Taejin; Lee, Eunjin; Sohn, Insuk; Chung, Jongsuk; Park, Woongyang; Huh, Nam; Lee, Jae Won

    2015-02-01

    An empirical method of sample size determination for building prediction models was proposed recently. The permutation method used in this procedure is commonly applied to address the problem of overfitting during cross-validation while evaluating the performance of prediction models constructed from microarray data. But a major drawback of such methods, which include bootstrapping and full permutation, is the prohibitively high cost of the computation required for calculating the sample size. In this paper, we propose that a single representative null distribution can be used instead of a full permutation, using both simulated and real data sets. During simulation, we used a dataset with zero effect size and confirmed that the empirical type I error approaches 0.05. Hence this method can be confidently applied to reduce the overfitting problem during cross-validation. We observed that a pilot data set generated by random sampling from real data could be successfully used for sample size determination. We present our results using an experiment that was repeated 300 times while producing results comparable to those of the full permutation method. Since we eliminate the full permutation, sample size estimation time is not a function of pilot data size. In our experiment we observed that this process takes around 30 min. With the increasing number of clinical studies, developing efficient sample size determination methods for building prediction models is critical. But empirical methods using bootstrap and permutation usually involve high computing costs. In this study, we propose a method that can reduce the required computing time drastically by using a representative null distribution of permutations. We use data from pilot experiments to apply this method for designing clinical studies efficiently for high-throughput data. PMID:25555898
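
    A sketch of the underlying idea: build one representative null distribution of cross-validated performance and reuse it, rather than re-permuting for every candidate design. The classifier, data sizes, and 200 permutations are illustrative assumptions, not the authors' protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def cv_auc(X, y):
    """Mean 5-fold cross-validated AUC of a simple linear classifier."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

# Pilot data with zero effect size, echoing the authors' simulation check
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)

# One representative null distribution of CV performance, built once and
# then reused across candidate sample sizes
null = np.array([cv_auc(X, rng.permutation(y)) for _ in range(200)])
print(np.quantile(null, 0.95))   # empirical cut-off for a 5% type I error
```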

  1. A Hierarchical Distance Sampling Approach to Estimating Mortality Rates from Opportunistic Carcass Surveillance Data

    PubMed Central

    Bellan, Steve E.; Gimenez, Olivier; Choquet, Rémi; Getz, Wayne M.

    2012-01-01

    Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of transects during survey design. Because mortality events are rare, however, it is often not possible to obtain precise estimates in this way without infeasible levels of effort. A great deal of wildlife data, including mortality data, is available via road-based surveys. Interpreting these data in a distance sampling framework requires accounting for the non-uniform sampling. Additionally, analyses of opportunistic mortality data must account for the decline in carcass detectability through time. We develop several extensions to distance sampling theory to address these problems. We build mortality estimators in a hierarchical framework that integrates animal movement data, surveillance effort data, and motion-sensor camera trap data, respectively, to relax the uniformity assumption, account for spatiotemporal variation in surveillance effort, and explicitly model carcass detection and disappearance as competing ongoing processes. Analysis of simulated data showed that our estimators were unbiased and that their confidence intervals had good coverage. We also illustrate our approach on opportunistic carcass surveillance data acquired in 2010 during an anthrax outbreak in the plains zebra of Etosha National Park, Namibia. The methods developed here will allow researchers and managers to infer mortality rates from opportunistic surveillance data. PMID:24224079
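
    For orientation, the conventional building block that these extensions start from is a half-normal detection function with uniformly distributed perpendicular distances. The sketch below fits that baseline by maximum likelihood on invented distances; the hierarchical movement, effort, and camera-trap components are beyond a short example.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Hypothetical perpendicular carcass distances (m), truncated at w = 100 m
d = np.array([3.0, 8.0, 12.0, 15.0, 21.0, 27.0, 34.0, 40.0, 55.0, 62.0])
w = 100.0

def neg_log_lik(log_sigma):
    """Half-normal detection g(x) = exp(-x^2 / (2 sigma^2)), with distances
    assumed uniform on [0, w] (the very assumption the paper relaxes)."""
    sigma = np.exp(log_sigma)
    # normalizing constant: integral of g over [0, w]
    integral = sigma * np.sqrt(2 * np.pi) * (norm.cdf(w / sigma) - 0.5)
    return -np.sum(-d**2 / (2 * sigma**2) - np.log(integral))

fit = minimize_scalar(neg_log_lik, bounds=(2.0, 6.0), method="bounded")
print(np.exp(fit.x))   # sigma-hat; effective strip width follows from it
```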

  2. A Hierarchical Distance Sampling Approach to Estimating Mortality Rates from Opportunistic Carcass Surveillance Data.

    PubMed

    Bellan, Steve E; Gimenez, Olivier; Choquet, Rémi; Getz, Wayne M

    2013-04-01

    Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of transects during survey design. Because mortality events are rare, however, it is often not possible to obtain precise estimates in this way without infeasible levels of effort. A great deal of wildlife data, including mortality data, is available via road-based surveys. Interpreting these data in a distance sampling framework requires accounting for the non-uniform sampling. Additionally, analyses of opportunistic mortality data must account for the decline in carcass detectability through time. We develop several extensions to distance sampling theory to address these problems. We build mortality estimators in a hierarchical framework that integrates animal movement data, surveillance effort data, and motion-sensor camera trap data, respectively, to relax the uniformity assumption, account for spatiotemporal variation in surveillance effort, and explicitly model carcass detection and disappearance as competing ongoing processes. Analysis of simulated data showed that our estimators were unbiased and that their confidence intervals had good coverage. We also illustrate our approach on opportunistic carcass surveillance data acquired in 2010 during an anthrax outbreak in the plains zebra of Etosha National Park, Namibia. The methods developed here will allow researchers and managers to infer mortality rates from opportunistic surveillance data. PMID:24224079

  3. Improving Ramsey spectroscopy in the extreme-ultraviolet region with a random-sampling approach

    SciTech Connect

    Eramo, R.; Bellini, M.; Corsi, C.; Liontos, I.; Cavalieri, S.

    2011-04-15

    Ramsey-like techniques, based on the coherent excitation of a sample by delayed and phase-correlated pulses, are promising tools for high-precision spectroscopic tests of QED in the extreme-ultraviolet (xuv) spectral region, but currently suffer experimental limitations related to long acquisition times and critical stability issues. Here we propose a random subsampling approach to Ramsey spectroscopy that, by allowing experimentalists to reach a given spectral resolution goal in a fraction of the usual acquisition time, leads to substantial improvements in high-resolution spectroscopy and may open the way to a widespread application of Ramsey-like techniques to precision measurements in the xuv spectral region.
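
    One standard way to turn randomly subsampled delay scans into a spectrum is a least-squares periodogram for irregular sampling (Lomb-Scargle), sketched below on a synthetic Ramsey-like fringe. This illustrates the random-subsampling idea only; it is not the authors' exact estimator, and all numbers are invented.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Full delay scan of a synthetic fringe at angular frequency 2.5 (arb. units)
t_full = np.linspace(0.0, 100.0, 2000)
s_full = 1.0 + np.cos(2.5 * t_full)

# Random subsampling: keep roughly 10% of the delay steps
keep = rng.random(t_full.size) < 0.10
t, s = t_full[keep], s_full[keep] - s_full[keep].mean()  # zero-mean input

freqs = np.linspace(0.1, 5.0, 4000)   # angular frequencies to test
power = lombscargle(t, s, freqs)
print(freqs[np.argmax(power)])        # recovers ~2.5 from ~10% of the scan
```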

  4. Perfluoroalkyl substances in aquatic environment-comparison of fish and passive sampling approaches.

    PubMed

    Cerveny, Daniel; Grabic, Roman; Fedorova, Ganna; Grabicova, Katerina; Turek, Jan; Kodes, Vit; Golovko, Oksana; Zlabek, Vladimir; Randak, Tomas

    2016-01-01

    The concentrations of seven perfluoroalkyl substances (PFASs) were investigated in 36 European chub (Squalius cephalus) individuals from six localities in the Czech Republic. Chub muscle and liver tissue were analysed at all sampling sites. In addition, analyses of 16 target PFASs were performed in Polar Organic Chemical Integrative Samplers (POCISs) deployed in the water at the same sampling sites. We evaluated the possibility of using passive samplers as a standardized method for monitoring PFAS contamination in aquatic environments and the mutual relationships between determined concentrations. Only perfluorooctane sulphonate was above the LOQ in fish muscle samples and 52% of the analysed fish individuals exceeded the Environmental Quality Standard for water biota. Fish muscle concentration is also particularly important for risk assessment of fish consumers. The comparison of fish tissue results with published data showed the similarity of the Czech results with those found in Germany and France. However, fish liver analysis and the passive sampling approach resulted in different fish exposure scenarios. The total concentration of PFASs in fish liver tissue was strongly correlated with POCIS data, but pollutant patterns differed between these two matrices. The differences could be attributed to the metabolic activity of the living organism. In addition to providing a different view regarding the real PFAS cocktail to which the fish are exposed, POCISs fulfil the Three Rs strategy (replacement, reduction, and refinement) in animal testing. PMID:26599587

  5. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach.

    PubMed

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-01-01

    This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing-based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from the MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that So
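
    The optimizer at the core of the approach is standard simulated annealing. Below is a minimal sketch with a stand-in cost that penalizes mismatch against an objective map; glider kinematics, ocean currents, and the actual Aη criterion are omitted, and every name is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def mismatch(waypoints, objective_map):
    """Stand-in cost: squared mismatch between the objective map and the
    uncertainty reduction credited to visited grid cells."""
    ij = np.clip(waypoints.astype(int), 0, objective_map.shape[0] - 1)
    achieved = np.zeros_like(objective_map)
    achieved[ij[:, 0], ij[:, 1]] = 1.0
    return np.sum((objective_map - achieved) ** 2)

def anneal(cost, x0, n_iter=5000, T0=1.0):
    x, best = x0.copy(), x0.copy()
    for k in range(n_iter):
        T = max(T0 * (1.0 - k / n_iter), 1e-9)        # linear cooling
        cand = x + rng.normal(scale=1.0, size=x.shape) # perturb waypoints
        dE = cost(cand) - cost(x)
        if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis rule
            x = cand
        if cost(x) < cost(best):
            best = x.copy()
    return best

obj = (rng.random((20, 20)) > 0.8).astype(float)   # cells needing coverage
best = anneal(lambda wp: mismatch(wp, obj), rng.uniform(0, 20, size=(5, 2)))
print(mismatch(best, obj))
```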

  6. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach

    PubMed Central

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-01-01

    This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that So

  7. Assessing Model Structural Uncertainty Using a Split Sample Approach for a Distributed Water Quality Model

    NASA Astrophysics Data System (ADS)

    Meixner, T.; van Griensven, A.

    2003-12-01

    A method for assessing model structural uncertainty as opposed to the more commonly investigated parameter uncertainty is presented that should aid in the development of improved water quality models. Elsewhere (see van Griensven and Meixner abstract, this session) we have developed a methodology (ParaSol) to estimate model parameter uncertainty. Uncertainty is typically estimated with a specific time period of data. However from experience with model calibration problems we know that we need to employ split sample and other evaluation tests to estimate the confidence we should have in our models and our methods. Evaluation tests generally give us qualitative data about confidence in our models. Here we propose a method that uses the split sample approach to generate a quantitative estimate of model structural uncertainty. The Sources of Uncertainty Global Assessment using Split SamplES (SUNGLASSES) method is designed to assess predictive uncertainty that is not captured by parameter or physical input uncertainty. We assume that this additional uncertainty represents model structural error in how the model represents the physical, chemical, and biological processes incorporated into water quality models. This method operates by selecting a threshold for a sample statistic (bias in our case), when the sample statistic for a model simulation is below the threshold the simulation is acceptable. Where this methodology differs from others is that the threshold is determined by evaluating whether the chosen threshold will capture simulations during an evaluation time period (hence split sample) that was not used to initially calibrate the model and generate parameter estimates. Most existing methods rely solely on sample statistics during a calibration period. The new method thus captures an element of predictive error that originates in the structural conception of the processes controlling water quality. The described method is applied on a Soil Water Assessment Tool

  8. Evaluation of Online/Offline Image Guidance/Adaptation Approaches for Prostate Cancer Radiation Therapy

    SciTech Connect

    Qin, An; Sun, Ying; Liang, Jian; Yan, Di

    2015-04-01

    Purpose: To evaluate online/offline image-guided/adaptive treatment techniques for prostate cancer radiation therapy with daily cone-beam CT (CBCT) imaging. Methods and Materials: Three treatment techniques were evaluated retrospectively using daily pre- and posttreatment CBCT images on 22 prostate cancer patients. Prostate, seminal vesicles (SV), rectal wall, and bladder were delineated on all CBCT images. For each patient, a pretreatment intensity modulated radiation therapy plan with clinical target volume (CTV) = prostate + SV and planning target volume (PTV) = CTV + 3 mm was created. The 3 treatment techniques were as follows: (1) Daily Correction: The pretreatment intensity modulated radiation therapy plan was delivered after online CBCT imaging and position correction; (2) Online Planning: Daily online inverse plans with 3-mm CTV-to-PTV margin were created using online CBCT images, and delivered; and (3) Hybrid Adaptation: Daily Correction plus an offline adaptive inverse planning performed after the first week of treatment. The adaptive plan was delivered for all remaining 15 fractions. Treatment dose for each technique was constructed using the daily posttreatment CBCT images via deformable image registration. Evaluation was performed using treatment dose distribution in target and critical organs. Results: Treatment equivalent uniform dose (EUD) for the CTV was within [85.6%, 100.8%] of the pretreatment planned target EUD for Daily Correction; [98.7%, 103.0%] for Online Planning; and [99.2%, 103.4%] for Hybrid Adaptation. Eighteen percent of the 22 patients in Daily Correction had a target dose deficiency >5%. For rectal wall, the mean ± SD of the normalized EUD was 102.6% ± 2.7% for Daily Correction, 99.9% ± 2.5% for Online Planning, and 100.6% ± 2.1% for Hybrid Adaptation. The mean ± SD of the normalized bladder EUD was 108.7% ± 8.2% for Daily Correction, 92.7% ± 8.6% for Online Planning, and 89.4% ± 10.8% for Hybrid Adaptation.
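
    The EUD figures above follow from the generalized equivalent uniform dose, EUD = (mean of d_i^a)^(1/a) over voxel doses d_i. A minimal sketch, with invented voxel doses and the usual sign convention for the exponent a:

```python
import numpy as np

def eud(dose, a):
    """Generalized EUD = (mean(d_i^a))^(1/a), equal voxel volumes assumed.
    a > 1 stresses hot spots (serial organs such as rectal wall);
    a < 0 stresses cold spots (targets such as the CTV)."""
    dose = np.asarray(dose, dtype=float)
    return np.mean(dose**a) ** (1.0 / a)

ctv = np.array([78.0, 77.5, 79.1, 74.0, 78.4])   # invented voxel doses (Gy)
print(eud(ctv, a=-10), eud(ctv, a=1))            # EUD vs plain mean dose
```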

  9. Synchronization of a class of chaotic systems with fully unknown parameters using adaptive sliding mode approach

    NASA Astrophysics Data System (ADS)

    Roopaei, M.; Zolghadri Jahromi, M.

    2008-12-01

    In this paper, an adaptive sliding mode control method for synchronization of a class of chaotic systems with fully unknown parameters is introduced. In this method, no knowledge of the bounds of the parameters is required in advance, and the parameters are updated through an adaptive control process. We use the proposed method to synchronize two chaotic gyros, a problem that has been the subject of intense study in recent years for its applications in the navigational, aeronautical, and space engineering domains. The effectiveness of our method is demonstrated in a simulation environment, and the results are compared with some recent schemes proposed in the literature for the same task.
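
    A scalar toy version of the mechanism (not the chaotic gyro system of the paper): the slave tracks the master through a sliding surface s, with the switching gain adapted online as k' = gamma*|s|, so that no parameter bounds are needed in advance. All constants are invented for illustration.

```python
import numpy as np

# Scalar master/slave pair with mismatched ("unknown") linear terms
dt, steps = 1e-3, 20000
x1, x2, k = 1.0, -2.0, 0.0        # master state, slave state, adaptive gain
gamma = 5.0                        # adaptation rate

for _ in range(steps):
    s = x2 - x1                    # sliding surface = synchronization error
    u = -k * np.sign(s)            # sliding-mode control
    dx1 = -1.7 * x1 + np.sin(x1)   # master ("unknown" coefficient 1.7)
    dx2 = -1.0 * x2 + np.sin(x2) + u
    k += gamma * abs(s) * dt       # adaptive law: gain grows until it
    x1 += dx1 * dt                 # dominates the unknown mismatch
    x2 += dx2 * dt

print(abs(x2 - x1))                # synchronization error driven near zero
```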

  10. An adaptive approach to the dynamic allocation of buffer storage. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Crooke, S. C.

    1970-01-01

    Several strategies for the dynamic allocation of buffer storage are simulated and compared. The basic algorithms investigated, using actual statistics observed in the Univac 1108 EXEC 8 System, include the buddy method and the first-fit method. Modifications are made to the basic methods in an effort to improve and to measure allocation performance. A simulation model of an adaptive strategy is developed which permits interchanging the two methods, the buddy method and the first-fit method with some modifications. Using an adaptive strategy, each method may be employed in the statistical environment in which its performance is superior to the other method.
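
    A minimal first-fit free-list sketch, for flavor; the buddy method would instead split and merge power-of-two blocks. The pool size and the (offset, size) bookkeeping are illustrative assumptions, not the thesis's simulation.

```python
# Minimal first-fit allocator over a fixed-size buffer pool.
free_list = [(0, 1024)]   # (offset, size) holes, kept sorted by offset

def first_fit_alloc(size):
    """Take the first hole large enough; leave the remainder as a hole."""
    for i, (off, sz) in enumerate(free_list):
        if sz >= size:
            if sz == size:
                free_list.pop(i)
            else:
                free_list[i] = (off + size, sz - size)
            return off
    return None            # allocation failure

def free(off, size):
    """Return a block and coalesce it with adjacent holes."""
    free_list.append((off, size))
    free_list.sort()
    merged = [free_list[0]]
    for o, s in free_list[1:]:
        lo, ls = merged[-1]
        if lo + ls == o:
            merged[-1] = (lo, ls + s)   # adjacent holes: coalesce
        else:
            merged.append((o, s))
    free_list[:] = merged

a = first_fit_alloc(100)   # -> offset 0
b = first_fit_alloc(200)   # -> offset 100
free(a, 100)
print(free_list)           # [(0, 100), (300, 724)]
```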

  11. A Laser-Deposition Approach to Compositional-Spread Discovery of Materials on Conventional Sample Sizes

    SciTech Connect

    Christen, Hans M; Okubo, Isao; Rouleau, Christopher M; Jellison Jr, Gerald Earle; Puretzky, Alexander A; Geohegan, David B; Lowndes, Douglas H

    2005-01-01

    Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (Sr{sub x}Ba{sub 1-x}Nb{sub 2}O{sub 6}) and magnetic perovskites (Sr{sub 1-x}Ca{sub x}RuO{sub 3}), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.

  12. SARA: a self-adaptive and resource-aware approach towards secure wireless ad hoc and sensor networks

    NASA Astrophysics Data System (ADS)

    Chigan, Chunxiao; Li, Leiyuan

    2005-05-01

    Providing security is essential for mission-critical Wireless Ad Hoc and Sensor Networks (WAHSN) applications. Often a highly secure mechanism inevitably consumes a rather large amount of system resources, which in turn may unintentionally cause a Security Service Denial of Service (SSDoS) attack. This paper proposes a self-adaptive resource-aware (SARA) security provisioning approach for WAHSNs. For resource-scarce WAHSNs, SARA strives to provide the optimal tradeoff between sufficient security (reflected by the Security Index (SI)) and acceptable network performance degradation (reflected by the Performance Index (PI)). With the support of the offline optimal secure protocol selection module and the online self-adaptive security control module, SARA is capable of employing different combinations of secure protocol sets to satisfy different security needs under different conditions for different applications. To determine the security index SI of a secure protocol set, a heuristic cross-layer security-service mapping mechanism is presented. Furthermore, we evaluate the performance index PI of a secure protocol set via simulation followed by Analysis of Variance (ANOVA). Consequently, the proposed self-adaptive security provisioning based on both SI and PI achieves the maximum overall network security and performance services without causing an SSDoS attack. Furthermore, this self-adaptive mechanism is capable of switching from one secure protocol set to another while keeping a similar level of security and performance; it thus provides additional security through security service hopping.

  13. Peers as Resources for Learning: A Situated Learning Approach to Adapted Physical Activity in Rehabilitation

    ERIC Educational Resources Information Center

    Standal, Oyvind F.; Jespersen, Ejgil

    2008-01-01

    The purpose of this study was to investigate the learning that takes place when people with disabilities interact in a rehabilitation context. Data were generated through in-depth interviews and close observations in a two-and-one-half-week-long rehabilitation program, where the participants learned both wheelchair skills and adapted physical…

  14. Can Approaches to Research in Art and Design Be Beneficially Adapted for Research into Higher Education?

    ERIC Educational Resources Information Center

    Trowler, Paul

    2013-01-01

    This paper examines the research practices in Art and Design that are distinctively different from those common in research into higher education outside those fields. It considers whether and what benefit could be derived from their adaptation by the latter. The paper also examines the factors that are conducive and obstructive to adaptive…

  15. Constructive, Self-Regulated, Situated, and Collaborative Learning: An Approach for the Acquisition of Adaptive Competence

    ERIC Educational Resources Information Center

    de Corte, Erik

    2012-01-01

    In today's learning society, education must focus on fostering adaptive competence (AC) defined as the ability to apply knowledge and skills flexibly in different contexts. In this article, four major types of learning are discussed--constructive, self-regulated, situated, and collaborative--in relation to what students must learn in order to…

  16. A Multiple Objective Test Assembly Approach for Exposure Control Problems in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.; Verschoor, Angela J.; Eggen, Theo J. H. M.

    2010-01-01

    Overexposure and underexposure of items in the bank are serious problems in operational computerized adaptive testing (CAT) systems. These exposure problems might result in item compromise, or point at a waste of investments. The exposure control problem can be viewed as a test assembly problem with multiple objectives. Information in the test has…

  17. An Approach for Automatic Generation of Adaptive Hypermedia in Education with Multilingual Knowledge Discovery Techniques

    ERIC Educational Resources Information Center

    Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana

    2007-01-01

    This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…

  18. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function

    PubMed Central

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A.; Lu, Zhong-Lin; Myung, Jay I.

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. Standard entropy-based Bayesian adaptive testing procedures address the issue by selecting the most informative stimulus in sequential experimental trials; noninformative, diffuse priors are commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inferences about contrast sensitivity, compared to the quick contrast sensitivity function method (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. The results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias. PMID:27105061
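
    The standard entropy-based loop that HADO builds on can be sketched compactly: pick the stimulus that minimizes expected posterior entropy, observe, update. HADO's contribution is to replace the flat prior below with a hierarchical, data-informed one that the same loop then consumes unchanged; the psychometric function and all constants here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.linspace(0.1, 2.0, 200)              # parameter grid (threshold)
post = np.ones_like(theta) / theta.size         # diffuse prior
stimuli = np.linspace(0.1, 2.0, 60)
true_theta = 0.9                                # simulated observer

def p_correct(s, th):
    """Toy 2AFC psychometric function."""
    return 0.5 + 0.5 / (1.0 + np.exp(-8.0 * (s - th)))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for trial in range(40):
    exp_H = np.empty(stimuli.size)
    for j, s in enumerate(stimuli):
        like1 = p_correct(s, theta)
        m1 = np.sum(post * like1)               # predictive P(correct)
        post1 = post * like1 / m1               # posterior if correct
        post0 = post * (1 - like1) / (1 - m1)   # posterior if incorrect
        exp_H[j] = m1 * entropy(post1) + (1 - m1) * entropy(post0)
    s = stimuli[np.argmin(exp_H)]               # most informative stimulus
    y = rng.random() < p_correct(s, true_theta) # simulated response
    like = p_correct(s, theta) if y else 1 - p_correct(s, theta)
    post = post * like
    post /= post.sum()                          # Bayes update

print(theta[np.argmax(post)])                   # posterior mode near 0.9
```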

  19. An Adaptive Approach to Teaching the Use of the Sonicguide with Modifications for Orthopedic Involvement.

    ERIC Educational Resources Information Center

    Kitzhoffer, Gerald J.

    1983-01-01

    Use of the Sonicguide, a binaural sensory aid, by a quadriplegic, totally blind 18-year-old student is described. The rationale for training, device adaptations, and the eventual use of the device as a primary mobility aid in areas familiar to the student are explained. (Author/MC)

  20. An adaptive approach to computing the spectrum and mean frequency of Doppler signals.

    PubMed

    Herment, A; Giovannelli, J F

    1995-01-01

    Modern ultrasound Doppler systems are facing the problem of processing increasingly shorter data sets. Spectral analysis of the strongly nonstationary Doppler signal needs to shorten the analysis window while maintaining a low variance and high resolution spectrum. Color flow imaging requires estimation of the Doppler mean frequency from even shorter Doppler data sets to obtain both a high frame rate and high spatial resolution. We reconsider these two estimation problems in light of adaptive methods. A regularized parametric method for spectral analysis as well as an adapted mean frequency estimator are developed. The choice of the adaptive criterion is then addressed, and adaptive spectral and mean frequency estimators are developed to minimize the mean square error of estimation in the presence of noise. Two suboptimal spectral and mean-frequency estimators are then derived for real-time applications. Finally, their performance is compared to that of both the FFT-based periodogram and AR parametric spectral analysis for the spectral estimator, and to both the correlation angle and Kristoffersen's [8] estimators for the mean frequency estimator, using Doppler data recorded in vitro. PMID:7638930
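
    For reference, the correlation-angle estimator used above as a baseline, which reads the mean frequency off the phase of the lag-one autocorrelation, takes only a few lines. The sketch below assumes complex baseband (I/Q) Doppler samples with invented parameter values; it is the classical baseline, not the adaptive estimators developed in the paper.

```python
import numpy as np

def mean_frequency_corr_angle(iq, prf):
    """Correlation-angle (lag-one autocorrelation) estimator of the Doppler
    mean frequency from complex baseband samples.

    iq  : 1-D complex array of Doppler samples
    prf : pulse repetition frequency in Hz
    """
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))    # lag-one autocorrelation
    return np.angle(r1) * prf / (2.0 * np.pi)  # Hz, in (-prf/2, prf/2)

# Quick self-check on a short synthetic narrowband Doppler signal in noise.
rng = np.random.default_rng(0)
prf, f_true, n = 5000.0, 800.0, 64             # short data set, as in color flow
t = np.arange(n) / prf
iq = np.exp(2j * np.pi * f_true * t) + 0.3 * (rng.standard_normal(n)
                                              + 1j * rng.standard_normal(n))
print(f"estimated mean frequency: {mean_frequency_corr_angle(iq, prf):.1f} Hz")
```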

  1. Difference, Adapted Physical Activity and Human Development: Potential Contribution of Capabilities Approach

    ERIC Educational Resources Information Center

    Silva, Carla Filomena; Howe, P. David

    2012-01-01

    This paper is a call to Adapted Physical Activity (APA) professionals to increase the reflexive nature of their practice. Drawing upon Foucault's concept of governmentality (1977) APA action may work against its own publicized goals of empowerment and self-determination. To highlight these inconsistencies, we will draw upon historical and social…

  2. Assessing Skin Blood Flow Dynamics in Older Adults Using a Modified Sample Entropy Approach

    PubMed Central

    Liao, Fuyuan; Jan, Yih-Kuen

    2015-01-01

    The aging process may result in attenuated microvascular reactivity in response to environmental stimuli, which can be evaluated by analyzing skin blood flow (SBF) signals. Among various methods for analyzing physiological signals, sample entropy (SE) is commonly used to quantify the degree of regularity of time series. However, we found that for temporally correlated data, the SE value depends on the sampling rate. When data are oversampled, SE may give misleading results. To address this problem, we propose to modify the definition of SE by using time-lagged vectors in the calculation of the conditional probability that two sequences of data points which are similar (i.e., within a tolerance r) for m points remain similar at the next point. The lag can be chosen as the first minimum of the auto mutual information function. We tested the performance of modified SE using simulated signals and SBF data. The results showed that modified SE is able to quantify the degree of regularity of the signals regardless of sampling rate. Using this approach, we observed a more regular behavior of blood flow oscillations (BFO) during the local heating-induced maximal vasodilation period compared to baseline in young and older adults, and a more regular behavior of BFO in older adults compared to young adults. These results suggest that modified SE may be useful in the study of SBF dynamics. PMID:25570060
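
    A minimal sketch of the lagged modification is given below, assuming the lag is supplied directly rather than derived from the auto mutual information function; embedding dimension and tolerance are the conventional choices. It illustrates the sampling-rate effect on an oversampled signal.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2, lag=1):
    """Sample entropy with time-lagged template vectors; lag=1 recovers the
    standard definition. r is a fraction of the signal SD. In the paper the
    lag is chosen at the first minimum of the auto mutual information
    function; here it is passed in directly."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x) - m * lag                   # common number of template vectors

    def match_count(mm):
        vecs = np.array([x[i:i + mm * lag:lag]
                         for i in range(len(x) - (mm - 1) * lag)])[:n]
        count = 0
        for i in range(n - 1):
            dist = np.max(np.abs(vecs[i + 1:] - vecs[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
white = rng.standard_normal(3000)
oversampled = np.repeat(rng.standard_normal(375), 8)   # crude 8x oversampling
print("white noise, lag 1 :", round(sample_entropy(white, lag=1), 3))
print("oversampled, lag 1 :", round(sample_entropy(oversampled, lag=1), 3))
print("oversampled, lag 8 :", round(sample_entropy(oversampled, lag=8), 3))
```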

  3. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    PubMed

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions. PMID:26098967
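
    Under a Beta prior, the quantities described have closed forms via the Beta posterior. A minimal sketch follows, assuming a Jeffreys Beta(0.5, 0.5) prior and SciPy's Beta distribution; the paper does not prescribe these particular choices.

```python
from scipy import stats

def ae_summaries(x, n, threshold=0.05, conf=0.90, a=0.5, b=0.5):
    """Posterior summaries for an AE incidence proportion p, with x events in
    n patients, under a Beta(a, b) prior (Jeffreys by default)."""
    post = stats.beta(a + x, b + n - x)
    prob_below = post.cdf(threshold)       # confidence that p < threshold
    upper = post.ppf(conf)                 # upper bound on p at level conf
    return prob_below, upper

# Example: 1 AE among 25 patients.
p_below, upper = ae_summaries(x=1, n=25)
print(f"P(p < 5%) = {p_below:.3f}, 90% upper bound on p = {upper:.3f}")

# Minimum number of AEs before we are 90% sure that p exceeds 5%.
for x in range(0, 26):
    post = stats.beta(0.5 + x, 0.5 + 25 - x)
    if post.sf(0.05) >= 0.90:              # P(p > 0.05) >= 0.90
        print("minimum AE count for 90% confidence that p > 5%:", x)
        break
```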

  4. Semi-Supervised Approach to Phase Identification from Combinatorial Sample Diffraction Patterns

    NASA Astrophysics Data System (ADS)

    Bunn, Jonathan Kenneth; Hu, Jianjun; Hattrick-Simpers, Jason R.

    2016-07-01

    Manual attribution of crystallographic phases from high-throughput x-ray diffraction studies is an arduous task, and represents a rate-limiting step in high-throughput exploration of new materials. Here, we demonstrate a semi-supervised machine learning technique, SS-AutoPhase, which uses a two-step approach to automatically identify phases from diffraction data. First, clustering analysis is used to automatically select a representative subset of samples for human analysis. Second, an AdaBoost classifier uses the labeled samples to identify the presence of the different phases in diffraction data. SS-AutoPhase was used to identify the metallographic phases in 278 diffraction patterns from a FeGaPd composition spread sample. The accuracy of SS-AutoPhase was >82.6% for all phases when 15% of the diffraction patterns were used for training. The SS-AutoPhase predicted phase diagram showed excellent agreement with human expert analysis. Furthermore, it correctly identified a previously unreported phase.
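
    The two-step workflow can be imitated on synthetic data with standard tools: cluster the patterns, hand the cluster representatives to an "expert" for labeling, then train one binary AdaBoost classifier per phase. The sketch below illustrates the workflow only, not the SS-AutoPhase implementation; the data, cluster count, and labeling budget are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import AdaBoostClassifier

# Synthetic "diffraction patterns": rows are spectra, columns intensity bins.
rng = np.random.default_rng(0)
n, d = 278, 60
true_phase = rng.integers(0, 3, size=n)             # hidden phase labels
centers = rng.random((3, d))
X = centers[true_phase] + 0.05 * rng.standard_normal((n, d))

# Step 1: clustering picks a representative subset for human labeling.
k = 12
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
reps = [int(np.argmin(np.linalg.norm(X - c, axis=1)))
        for c in km.cluster_centers_]
labeled_idx = np.array(reps)

# Step 2: one binary AdaBoost classifier per phase flags its presence.
for phase in range(3):
    y = (true_phase[labeled_idx] == phase).astype(int)  # "expert" labels
    clf = AdaBoostClassifier(n_estimators=50, random_state=0)
    clf.fit(X[labeled_idx], y)
    acc = np.mean(clf.predict(X) == (true_phase == phase))
    print(f"phase {phase}: accuracy on all patterns = {acc:.3f}")
```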

  5. Domain adaptation from multiple sources: a domain-dependent regularization approach.

    PubMed

    Duan, Lixin; Xu, Dong; Tsang, Ivor Wai-Hung

    2012-03-01

    In this paper, we propose a new framework called domain adaptation machine (DAM) for the multiple source domain adaptation problem. Under this framework, we learn a robust decision function (referred to as target classifier) for label prediction of instances from the target domain by leveraging a set of base classifiers which are prelearned by using labeled instances either from the source domains or from the source domains and the target domain. With the base classifiers, we propose a new domain-dependent regularizer based on a smoothness assumption, which enforces that the target classifier shares similar decision values with the relevant base classifiers on the unlabeled instances from the target domain. This newly proposed regularizer can be readily incorporated into many kernel methods (e.g., support vector machines (SVM), support vector regression, and least-squares SVM (LS-SVM)). For domain adaptation, we also develop two new domain adaptation methods referred to as FastDAM and UniverDAM. In FastDAM, we introduce our proposed domain-dependent regularizer into LS-SVM as well as employ a sparsity regularizer to learn a sparse target classifier with the support vectors only from the target domain, which thus makes the label prediction on any test instance very fast. In UniverDAM, we additionally make use of the instances from the source domains as Universum to further enhance the generalization ability of the target classifier. We evaluate our two methods on the challenging TRECVID 2005 dataset for the large-scale video concept detection task as well as on the 20 newsgroups and email spam datasets for document retrieval. Comprehensive experiments demonstrate that FastDAM and UniverDAM outperform the existing multiple source domain adaptation methods for the two applications. PMID:24808555
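
    The heart of the domain-dependent regularizer can be illustrated with a linear least-squares stand-in for LS-SVM: the target classifier is fit to a few labeled target points while its outputs on unlabeled target points are pulled toward a relevance-weighted combination of prelearned base classifiers. The sketch below is a simplified caricature of the idea, not the DAM formulation; the relevance weights gamma, the data, and all trade-off constants are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_src = [rng.standard_normal(d) for _ in range(3)]  # prelearned base classifiers
gamma = np.array([0.6, 0.3, 0.1])                   # source relevances (assumed)

Xl = rng.standard_normal((10, d))                   # few labeled target points
w_true = w_src[0] + 0.1 * rng.standard_normal(d)    # target concept near source 0
yl = np.sign(Xl @ w_true)
Xu = rng.standard_normal((200, d))                  # many unlabeled target points
f_base = sum(g * (Xu @ w) for g, w in zip(gamma, w_src))

lam, c_lab, c_smooth = 1.0, 1.0, 0.1
# Minimize lam*||w||^2 + c_lab*||Xl w - yl||^2 + c_smooth*||Xu w - f_base||^2;
# the last term is the smoothness (domain-dependent) regularizer.
A = lam * np.eye(d) + c_lab * Xl.T @ Xl + c_smooth * Xu.T @ Xu
b = c_lab * Xl.T @ yl + c_smooth * Xu.T @ f_base
w_target = np.linalg.solve(A, b)

acc = np.mean(np.sign(Xu @ w_target) == np.sign(Xu @ w_true))
print(f"target-domain agreement with true labels: {acc:.2f}")
```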

  6. Flow Cell Sampling Technique: A new approach to analyze physical soil and particle surface properties of undisturbed soil samples

    NASA Astrophysics Data System (ADS)

    Krueger, Jiem; Leue, Martin; Heinze, Stefanie; Bachmann, Jörg

    2016-04-01

    Under unsaturated conditions, water moves through soil mainly as film flow, which depends on moisture content and pore surface properties. Increasing attention is being paid to coatings enclosing soil particles, which may affect wetting properties as well as hydraulic soil functions. Particle coatings are most likely responsible for many adsorption processes and are expected to favor locally heterogeneous microstructure with enhanced biological activity. Many of the effects described cannot be detected with conventional soil column experiments, which are usually designed to study soil hydraulic processes or surface-soil solution exchange processes. The general objective of this study was to develop a new field sampling method, based on modified Plexiglas flow cells, to unravel small-scale heterogeneous flow processes in undisturbed soil under controlled lab conditions. Besides measurements within a flow cell, such as breakthrough curves, the developed technique has several additional advantages over common columns or existing flow chamber/cell designs. The direct conversion of the sampling frame into the flow cell makes it possible to combine several analyses. The new technique enables cutting up to 5 thin undisturbed soil slices (quasi-replicates) down to 10 and/or 5 mm; relatively large particles, for instance, may limit this sampling method. The large observation area of up to 150 cm2 allows the characterization of particle surface properties at high spatial resolution within an undisturbed soil sample. As shown in our study, this sampling technique makes it possible to link soil wetting, hydraulic properties, and several particle surface properties to spatial soil heterogeneities. This was demonstrated with tracer experiments, small-scale contact angle measurements, and analyses of the spatial distribution of functional groups of soil organic matter via DRIFT mapping.

  7. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1994-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
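
    The grid-generation step, recursive subdivision of a single Cartesian cell with the hierarchy stored in a tree, is easy to sketch. The fragment below uses a 2-D quadtree and a circle as a stand-in body, refining any cell the body surface cuts; it illustrates the data structure only, and omits the cut-cell polygon clipping and the flow solver.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Cell:
    x: float
    y: float
    size: float
    children: list = field(default_factory=list)

def cuts_circle(cell, cx=0.5, cy=0.5, r=0.3):
    """True when the circle boundary passes through the cell's box."""
    nearx = min(max(cx, cell.x), cell.x + cell.size)
    neary = min(max(cy, cell.y), cell.y + cell.size)
    d_min = math.hypot(nearx - cx, neary - cy)      # closest point of the box
    d_max = max(math.hypot(px - cx, py - cy)        # farthest corner
                for px in (cell.x, cell.x + cell.size)
                for py in (cell.y, cell.y + cell.size))
    return d_min <= r <= d_max

def refine(cell, depth=0, max_depth=6):
    """Recursively split cells cut by the body surface into four children."""
    if depth >= max_depth or not cuts_circle(cell):
        return
    h = cell.size / 2
    for dx in (0.0, h):
        for dy in (0.0, h):
            child = Cell(cell.x + dx, cell.y + dy, h)
            refine(child, depth + 1, max_depth)
            cell.children.append(child)

def leaves(cell):
    if not cell.children:
        return [cell]
    return [leaf for ch in cell.children for leaf in leaves(ch)]

root = Cell(0.0, 0.0, 1.0)      # one cell encompassing the whole domain
refine(root)
print("leaf cells after adaptive refinement:", len(leaves(root)))
```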

  8. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.

  9. Adaptive fuzzy output-feedback controller design for nonlinear systems via backstepping and small-gain approach.

    PubMed

    Liu, Zhi; Wang, Fang; Zhang, Yun; Chen, Xin; Chen, C L Philip

    2014-10-01

    This paper focuses on an input-to-state practical stability (ISpS) problem for nonlinear systems that possess unmodeled dynamics in the presence of unstructured uncertainties and dynamic disturbances. The dynamic disturbances depend on the states and the measured output of the system, and the assumptions imposed on them are relaxed compared with the common restrictions. Based on an input-driven filter, fuzzy logic systems are directly used to approximate the unknown and desired control signals instead of the unknown nonlinear functions, and an integrated backstepping technique is used to design an adaptive output-feedback controller that ensures robustness with respect to unknown parameters and uncertain nonlinearities. By applying the ISpS theory and the generalized small-gain approach, this paper shows that the proposed adaptive fuzzy controller guarantees that the closed-loop system is semi-globally uniformly ultimately bounded. A main advantage of the proposed controller is that it contains only three adaptive parameters that need to be updated online, no matter how many states there are in the system. Finally, the effectiveness of the proposed approach is illustrated by two simulation examples. PMID:25222716

  10. Adaptive antennas

    NASA Astrophysics Data System (ADS)

    Barton, P.

    1987-04-01

    The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal-to-noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications, there is insufficient information to achieve the above solution and thus non-optimum constrained null steering algorithms are also described, together with a summary of methods for preventing wanted signals from being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation, and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe canceller technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple beam and phase scan array antennas are covered, with a brief discussion of performance factors.
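
    Of the three weight-control approaches, the direct solution is the simplest to sketch: estimate the covariance matrix R from K snapshots and solve R w = s for the look-direction steering vector s. Below is a minimal sample matrix inversion example with an invented array geometry and jammer scenario; it illustrates the principle, not any particular system described in the overview.

```python
import numpy as np

# Minimal sample matrix inversion (SMI) adaptive beamformer for an
# 8-element, half-wavelength-spaced linear array (scenario values invented).
rng = np.random.default_rng(0)
N, K, d = 8, 200, 0.5         # elements, snapshots, spacing in wavelengths

def steering(theta_deg):
    phase = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * phase * np.arange(N))

s_look = steering(0.0)        # wanted-signal direction
a_jam = steering(30.0)        # jammer direction

# Signal-free training snapshots: strong jammer plus unit-power noise.
X = (10.0 * a_jam[:, None] * rng.standard_normal(K)
     + (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
     / np.sqrt(2))
R = X @ X.conj().T / K                 # sample covariance matrix
w = np.linalg.solve(R, s_look)         # direct solution: w = R^-1 s
w /= np.abs(np.vdot(w, s_look))        # unit gain toward the look direction

def gain_db(theta_deg):
    return 20 * np.log10(np.abs(np.vdot(w, steering(theta_deg))))

print(f"gain at look direction: {gain_db(0.0):+.1f} dB")
print(f"gain at jammer (30 deg): {gain_db(30.0):+.1f} dB")
```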

  11. The genomic architecture and association genetics of adaptive characters using a candidate SNP approach in boreal black spruce

    PubMed Central

    2013-01-01

    Background The genomic architecture of adaptive traits remains poorly understood in non-model plants. Various approaches can be used to bridge this gap, including the mapping of quantitative trait loci (QTL) in pedigrees, and genetic association studies in non-structured populations. Here we present results on the genomic architecture of adaptive traits in black spruce, which is a widely distributed conifer of the North American boreal forest. As an alternative to the usual candidate gene approach, a candidate SNP approach was developed for association testing. Results A genetic map containing 231 gene loci was used to identify QTL that were related to budset timing and to tree height assessed over multiple years and sites. Twenty-two unique genomic regions were identified, including 20 that were related to budset timing and 6 that were related to tree height. From results of outlier detection and bulk segregant analysis for adaptive traits using DNA pool sequencing of 434 genes, 52 candidate SNPs were identified and subsequently tested in genetic association studies for budset timing and tree height assessed over multiple years and sites. A total of 34 (65%) SNPs were significantly associated with budset timing, or tree height, or both. Although the percentages of explained variance (PVE) by individual SNPs were small, several significant SNPs were shared between sites and among years. Conclusions The sharing of genomic regions and significant SNPs between budset timing and tree height indicates pleiotropic effects. Significant QTLs and SNPs differed quite greatly among years, suggesting that different sets of genes for the same characters are involved at different stages in the tree’s life history. The functional diversity of genes carrying significant SNPs and low observed PVE further indicated that a large number of polymorphisms are involved in adaptive genetic variation. Accordingly, for undomesticated species such as black spruce with natural populations

  12. Testing Set-Point Theory in a Swiss National Sample: Reaction and Adaptation to Major Life Events

    PubMed Central

    Anusic, Ivana; Yap, Stevie C. Y.; Lucas, Richard E.

    2014-01-01

    Set-point theory posits that individuals react to the experience of major life events, but quickly adapt back to pre-event baseline levels of subjective well-being in the years following the event. A large, nationally representative panel study of Swiss households was used to examine set-point theory by investigating the extent of adaptation following the experience of marriage, childbirth, widowhood, unemployment, and disability. Our results demonstrate that major life events are associated with marked change in life satisfaction and, for some events (e.g., marriage, disability), these changes are relatively long lasting even when accounting for normative, age related change. PMID:25419036

  13. Testing Set-Point Theory in a Swiss National Sample: Reaction and Adaptation to Major Life Events.

    PubMed

    Anusic, Ivana; Yap, Stevie C Y; Lucas, Richard E

    2014-12-01

    Set-point theory posits that individuals react to the experience of major life events, but quickly adapt back to pre-event baseline levels of subjective well-being in the years following the event. A large, nationally representative panel study of Swiss households was used to examine set-point theory by investigating the extent of adaptation following the experience of marriage, childbirth, widowhood, unemployment, and disability. Our results demonstrate that major life events are associated with marked change in life satisfaction and, for some events (e.g., marriage, disability), these changes are relatively long lasting even when accounting for normative, age related change. PMID:25419036

  14. Sampling variability and estimates of density dependence: a composite-likelihood approach.

    PubMed

    Lele, Subhash R

    2006-01-01

    It is well known that sampling variability, if not properly taken into account, affects various ecologically important analyses. Statistical inference for stochastic population dynamics models is difficult when, in addition to the process error, there is also sampling error. The standard maximum-likelihood approach suffers from large computational burden. In this paper, I discuss an application of the composite-likelihood method for estimation of the parameters of the Gompertz model in the presence of sampling variability. The main advantage of the method of composite likelihood is that it reduces the computational burden substantially with little loss of statistical efficiency. Missing observations are a common problem with many ecological time series. The method of composite likelihood can accommodate missing observations in a straightforward fashion. Environmental conditions also affect the parameters of stochastic population dynamics models. This method is shown to handle such nonstationary population dynamics processes as well. Many ecological time series are short, and statistical inferences based on such short time series tend to be less precise. However, spatial replications of short time series provide an opportunity to increase the effective sample size. Application of likelihood-based methods for spatial time-series data for population dynamics models is computationally prohibitive. The method of composite likelihood is shown to have significantly less computational burden, making it possible to analyze large spatial time-series data. After discussing the methodology in general terms, I illustrate its use by analyzing a time series of counts of American Redstart (Setophaga ruticilla) from the Breeding Bird Survey data, San Joaquin kit fox (Vulpes macrotis mutica) population abundance data, and spatial time series of Bull trout (Salvelinus confluentus) redds count data. PMID:16634310
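
    The pairwise composite likelihood is straightforward to sketch for the Gompertz model with sampling error: under stationarity, each pair of log-abundance observations at a given lag is bivariate normal, and the composite likelihood is the product of these pairwise densities. The sketch below, with invented parameter values, is a bare-bones illustration of that idea (pairs at lags 1 and 2 are combined so that process and sampling variances are both identifiable), not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

# Stochastic Gompertz model with sampling error (log scale):
#   X[t+1] = a + c*X[t] + N(0, s2)    (process noise)
#   Y[t]   = X[t] + N(0, t2)          (sampling error)
rng = np.random.default_rng(0)
a, c, s2, t2, n = 1.0, 0.7, 0.05, 0.04, 300
x = np.empty(n)
x[0] = a / (1 - c)
for t in range(n - 1):
    x[t + 1] = a + c * x[t] + rng.normal(0.0, np.sqrt(s2))
y = x + rng.normal(0.0, np.sqrt(t2), size=n)

def neg_cl(params):
    """Negative pairwise composite log-likelihood over lag-1 and lag-2 pairs."""
    a_, c_, ls2, lt2 = params
    if not -0.99 < c_ < 0.99:
        return np.inf
    s2_, t2_ = np.exp(ls2), np.exp(lt2)
    mu = a_ / (1 - c_)
    v = s2_ / (1 - c_ ** 2)            # stationary process variance
    total = 0.0
    for lag in (1, 2):
        cov = np.array([[v + t2_, (c_ ** lag) * v],
                        [(c_ ** lag) * v, v + t2_]])
        pairs = np.column_stack([y[:-lag], y[lag:]])
        total += multivariate_normal(mean=[mu, mu], cov=cov).logpdf(pairs).sum()
    return -total

fit = minimize(neg_cl, x0=[0.5, 0.5, np.log(0.1), np.log(0.1)],
               method="Nelder-Mead", options={"maxiter": 4000})
print("estimates (a, c, s2, t2):",
      np.round([fit.x[0], fit.x[1], np.exp(fit.x[2]), np.exp(fit.x[3])], 3))
```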

  15. A CT reconstruction approach from sparse projection with adaptive-weighted diagonal total-variation in biomedical application.

    PubMed

    Deng, Luzhen; Mi, Deling; He, Peng; Feng, Peng; Yu, Pengwei; Chen, Mianyi; Li, Zhichao; Wang, Jian; Wei, Biao

    2015-01-01

    Total Variation (TV) lacks directivity, as it uses only the x- and y-direction gradient transforms as its sparse representation during the iteration process. To address this, this paper introduces Adaptive-weighted Diagonal Total Variation (AwDTV), which constrains the reconstructed image using diagonal-direction gradients and adds associated weights, expressed as an exponential function and adaptively adjusted by the local image-intensity diagonal gradient, so as to preserve edge details; the steepest descent method is then used to solve the optimization problem. Finally, we performed two sets of numerical simulations; the results show that the proposed algorithm can reconstruct high-quality CT images from few-view projections, with lower Root Mean Square Error (RMSE) and higher Universal Quality Index (UQI) than the Algebraic Reconstruction Technique (ART) and a TV-based reconstruction method. PMID:26405935
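
    The diagonal-gradient regularizer with exponential adaptive weights can be sketched as a standalone smoothing step; embedding it in a few-view reconstruction loop is omitted here. All parameter values below are invented, and the descent scheme is a simple subgradient construction, not necessarily the paper's exact update.

```python
import numpy as np

def awdtv_step(u, step=0.01, delta=0.2, eps=1e-8):
    """One steepest-descent step on an adaptive-weighted diagonal TV term."""
    g1 = u[1:, 1:] - u[:-1, :-1]              # main-diagonal difference
    g2 = u[1:, :-1] - u[:-1, 1:]              # anti-diagonal difference
    w1 = np.exp(-(np.abs(g1) / delta) ** 2)   # adaptive weights: ~1 in flat
    w2 = np.exp(-(np.abs(g2) / delta) ** 2)   # regions, ~0 across strong edges
    t1 = w1 * g1 / (np.abs(g1) + eps)         # (sub)gradient of sum w*|g|
    t2 = w2 * g2 / (np.abs(g2) + eps)
    d = np.zeros_like(u)
    d[1:, 1:] += t1
    d[:-1, :-1] -= t1
    d[1:, :-1] += t2
    d[:-1, 1:] -= t2
    return u - step * d

# Noisy piecewise-constant phantom: smoothing flattens the noise while the
# vertical edge (where the weights vanish) stays sharp.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
x = noisy.copy()
for _ in range(200):
    x = awdtv_step(x)
print("flat-region noise std before/after:",
      round(noisy[:, :30].std(), 4), round(x[:, :30].std(), 4))
```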

  16. A Monte Carlo simulation based two-stage adaptive resonance theory mapping approach for offshore oil spill vulnerability index classification.

    PubMed

    Li, Pu; Chen, Bing; Li, Zelin; Zheng, Xiao; Wu, Hongjing; Jing, Liang; Lee, Kenneth

    2014-09-15

    In this paper, a Monte Carlo simulation based two-stage adaptive resonance theory mapping (MC-TSAM) model was developed to classify a given site into distinguished zones representing different levels of offshore Oil Spill Vulnerability Index (OSVI). It consisted of an adaptive resonance theory (ART) module, an ART Mapping module, and a centroid determination module. Monte Carlo simulation was integrated with the TSAM approach to address uncertainties that widely exist in site conditions. The applicability of the proposed model was validated by classifying a large coastal area, which was surrounded by potential oil spill sources, based on 12 features. Statistical analysis of the results indicated that the classification process was affected by multiple features instead of one single feature. The classification results also provided the least or desired number of zones which can sufficiently represent the levels of offshore OSVI in an area under uncertainty and complexity, saving time and budget in spill monitoring and response. PMID:25044043

  17. [Targeted chemotherapy for breast cancer: patients' perception of the use of tumor gene profiling approaches to better adapt treatments].

    PubMed

    Pellegrini, Isabelle; Rapti, Myrto; Extra, Jean-Marc; Petri-Cal, Anouk; Apostolidis, Themis; Ferrero, Jean-Marc; Bachelot, Thomas; Viens, Patrice; Bertucci, François; Julian-Reynier, Claire

    2012-03-01

    The purpose of this review of the literature is to document how breast cancer patients perceive the use of tumor gene profiling approaches to better adapt treatments, and to identify the features of these approaches that may impact their clinical application. In general, the use of tumor genomic analysis was perceived by patients as an approach facilitating personalized medicine and received considerable support. Nevertheless, a number of confusions and worries about these practices were also identified. Improving the quality of provider/patient communication should enable patients to play a more active part in decision-making about their treatment. This will ensure that those who agree to analysis of their tumor genes have realistic expectations and a sound understanding of the final result disclosure process. PMID:22494653

  18. A Neural Network Approach to Intention Modeling for User-Adapted Conversational Agents.

    PubMed

    Griol, David; Callejas, Zoraida

    2016-01-01

    Spoken dialogue systems have been proposed to enable a more natural and intuitive interaction with the environment and human-computer interfaces. In this contribution, we present a framework based on neural networks that allows modeling of the user's intention during the dialogue and uses this prediction to dynamically adapt the dialogue model of the system taking into consideration the user's needs and preferences. We have evaluated our proposal to develop a user-adapted spoken dialogue system that facilitates tourist information and services and provide a detailed discussion of the positive influence of our proposal in the success of the interaction, the information and services provided, and the quality perceived by the users. PMID:26819592

  19. A Neural Network Approach to Intention Modeling for User-Adapted Conversational Agents

    PubMed Central

    Griol, David

    2016-01-01

    Spoken dialogue systems have been proposed to enable a more natural and intuitive interaction with the environment and human-computer interfaces. In this contribution, we present a framework based on neural networks that allows modeling of the user's intention during the dialogue and uses this prediction to dynamically adapt the dialogue model of the system taking into consideration the user's needs and preferences. We have evaluated our proposal to develop a user-adapted spoken dialogue system that facilitates tourist information and services and provide a detailed discussion of the positive influence of our proposal in the success of the interaction, the information and services provided, and the quality perceived by the users. PMID:26819592

  20. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-10-01

    To address the computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of best management practices (BMPs).
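
    One common way to make a genetic algorithm auto-adaptive is to let each individual carry and evolve its own mutation step size, removing the manual calibration the abstract mentions. The sketch below illustrates that general idea on a toy objective; it is not the authors' algorithm, and the watershed model and economic module are replaced by a stand-in fitness function.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    """Toy objective (maximize): negative sphere function."""
    return -np.sum(x ** 2, axis=1)

pop, dim, gens = 60, 8, 150
x = rng.uniform(-5, 5, (pop, dim))
sigma = np.full(pop, 1.0)          # per-individual, self-adapting step size

for g in range(gens):
    order = np.argsort(fitness(x))[::-1]          # best first
    x, sigma = x[order], sigma[order]
    elite = pop // 2
    parents = rng.integers(0, elite, size=(pop - elite, 2))
    # uniform crossover of solution vectors, geometric blend of step sizes
    mask = rng.random((pop - elite, dim)) < 0.5
    child = np.where(mask, x[parents[:, 0]], x[parents[:, 1]])
    child_sigma = np.sqrt(sigma[parents[:, 0]] * sigma[parents[:, 1]])
    # self-adaptation: mutate the step size first, then mutate with it
    child_sigma *= np.exp(0.2 * rng.standard_normal(pop - elite))
    child += child_sigma[:, None] * rng.standard_normal((pop - elite, dim))
    x = np.vstack([x[:elite], child])
    sigma = np.concatenate([sigma[:elite], child_sigma])

print("best objective value:", round(float(-fitness(x).max()), 6))
print("mean adapted step size:", round(float(sigma.mean()), 4))
```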

  1. Toward a systems-oriented approach to the role of the extended amygdala in adaptive responding.

    PubMed

    Waraczynski, Meg

    2016-09-01

    Research into the structure and function of the basal forebrain macrostructure called the extended amygdala (EA) has recently seen considerable growth. This paper reviews that work, with the objectives of identifying underlying themes and developing a common goal towards which investigators of EA function might work. The paper begins with a brief review of the structure and the ontogenetic and phylogenetic origins of the EA. It continues with a review of research into the role of the EA in both aversive and appetitive states, noting that these two seemingly disparate avenues of research converge on the concept of reinforcement - either negative or positive - of adaptive responding. These reviews lead to a proposal as to where the EA may fit in the organization of the basal forebrain, and an invitation to investigators to place their findings in a unifying conceptual framework of the EA as a collection of neural ensembles that mediate adaptive responding. PMID:27216212

  2. A New Approach to Parallel Dynamic Partitioning for Adaptive Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.

    1999-01-01

    Classical mesh partitioning algorithms were designed for rather static situations, and their straightforward application in a dynamical framework may lead to unsatisfactory results, e.g., excessive data migration among processors. Furthermore, special attention should be paid to their amenability to parallelization. In this paper, a novel parallel method for the dynamic partitioning of adaptive unstructured meshes is described. It is based on a linear representation of the mesh using self-avoiding walks.

  3. Adaptive Developmental Delay in Chagas Disease Vectors: An Evolutionary Ecology Approach

    PubMed Central

    Menu, Frédéric; Ginoux, Marine; Rajon, Etienne; Lazzari, Claudio R.; Rabinovich, Jorge E.

    2010-01-01

    Background The developmental time of vector insects is important in population dynamics, evolutionary biology, epidemiology and in their responses to global climatic change. In the triatomines (Triatominae, Reduviidae), vectors of Chagas disease, evolutionary ecology concepts that may allow for a better understanding of their biology have not yet been applied. Although delayed molting has been observed in some triatomine individuals, no effort had been made to explain this variability. Methodology We applied four methods: (1) an e-mail survey sent to 30 researchers with experience in triatomines, (2) a statistical description of the developmental time of eleven triatomine species, (3) a relationship between development time pattern and climatic inter-annual variability, (4) a mathematical optimization model of the evolution of developmental delay (diapause). Principal Findings 85.6% of responses reported prolonged developmental times in 5th-instar nymphs, with 20 species identified as showing remarkable developmental delays. The developmental time analysis showed some degree of bi-modal pattern in the development time of the 5th instars in nine out of eleven species, but no trend between development time pattern and climatic inter-annual variability was observed. Our optimization model predicts that the developmental delays could be due to an adaptive risk-spreading diapause strategy, but only if survival throughout the diapause period and the probability of random occurrence of “bad” environmental conditions are sufficiently high. Conclusions/Significance Developmental delay may not be a simple non-adaptive phenotypic plasticity in development time, and could be a form of adaptive diapause associated with a physiological mechanism related to the postponement of the initiation of reproduction, as an adaptation to environmental stochasticity through a spreading-of-risk (bet-hedging) strategy. We identify a series of parameters that can be measured in the field and laboratory to test

  4. Polar Microalgae: New Approaches towards Understanding Adaptations to an Extreme and Changing Environment

    PubMed Central

    Lyon, Barbara R.; Mock, Thomas

    2014-01-01

    Polar Regions are unique and highly prolific ecosystems characterized by extreme environmental gradients. Photosynthetic autotrophs, the base of the food web, have had to adapt physiological mechanisms to maintain growth, reproduction and metabolic activity despite environmental conditions that would shut-down cellular processes in most organisms. High latitudes are characterized by temperatures below the freezing point, complete darkness in winter and continuous light and high UV in the summer. Additionally, sea-ice, an ecological niche exploited by microbes during the long winter seasons when the ocean and land freezes over, is characterized by large salinity fluctuations, limited gas exchange, and highly oxic conditions. The last decade has been an exciting period of insights into the molecular mechanisms behind adaptation of microalgae to the cryosphere facilitated by the advancement of new scientific tools, particularly “omics” techniques. We review recent insights derived from genomics, transcriptomics, and proteomics studies. Genes, proteins and pathways identified from these highly adaptable polar microbes have far-reaching biotechnological applications. Furthermore, they may provide insights into life outside this planet, as well as glimpses into the past. High latitude regions also have disproportionately large inputs into global biogeochemical cycles and are the region most sensitive to climate change. PMID:24833335

  5. Local adaptive approach toward segmentation of microscopic images of activated sludge flocs

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Lo, Po Kim; Yap, Vooi Voon

    2015-11-01

    The activated sludge process is a widely used method to treat domestic and industrial effluents. The conditions of an activated sludge wastewater treatment plant (AS-WWTP) are related to the morphological properties of flocs (microbial aggregates) and filaments, and need to be monitored for normal operation of the plant. Image processing and analysis is a potentially time-efficient monitoring tool for AS-WWTPs. Local adaptive segmentation algorithms are proposed for bright-field microscopic images of activated sludge flocs. Two basic modules are suggested for Otsu thresholding-based local adaptive algorithms with irregular illumination compensation. The performance of the algorithms has been compared with the state-of-the-art local adaptive algorithms of Sauvola, Bradley, Feng, and c-mean. The comparisons were done using a number of region- and nonregion-based metrics at different microscopic magnifications and for the quantification of flocs. The performance metrics show that the proposed algorithms performed better and, in some cases, were comparable to the state-of-the-art algorithms. The performance metrics were also assessed subjectively for their suitability for segmentation of activated sludge images. Region-based metrics such as false negative ratio, sensitivity, and negative predictive value gave inconsistent results compared to other segmentation assessment metrics.
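
    A minimal block-wise version of Otsu-based local adaptive thresholding, one threshold per tile so that a global illumination gradient does not bias the segmentation, can be sketched with scikit-image. The paper's illumination-compensation modules and evaluation metrics are omitted, and the low-contrast guard below is an assumption, not a value from the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu

def local_otsu_segment(img, block=64, min_contrast=0.2):
    """Block-wise Otsu thresholding: each tile gets its own threshold.
    Tiles with too little contrast (background only) are skipped."""
    mask = np.zeros(img.shape, dtype=bool)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            tile = img[i:i + block, j:j + block]
            if tile.max() - tile.min() > min_contrast:
                mask[i:i + block, j:j + block] = tile < threshold_otsu(tile)
    return mask

# Synthetic stand-in for a bright-field floc image: dark objects on a
# background with a strong illumination gradient.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:256, 0:256]
img = 0.6 + 0.3 * (xx / 255.0) + 0.01 * rng.standard_normal((256, 256))
img[100:140, 60:110] -= 0.25               # "floc" 1 (2000 px)
img[30:60, 180:220] -= 0.25                # "floc" 2 (1200 px)
mask = local_otsu_segment(img)
print("segmented object pixels:", int(mask.sum()), "(true: 3200)")
```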

  6. Testing Local Adaptation in a Natural Great Tit-Malaria System: An Experimental Approach

    PubMed Central

    Jenkins, Tania; Delhaye, Jessica; Christe, Philippe

    2015-01-01

    Finding out whether Plasmodium spp. are coevolving with their vertebrate hosts is of both theoretical and applied interest and can influence our understanding of the effects and dynamics of malaria infection. In this study, we tested for local adaptation as a signature of coevolution between malaria blood parasites, Plasmodium spp. and its host, the great tit, Parus major. We conducted a reciprocal transplant experiment of birds in the field, where we exposed birds from two populations to Plasmodium parasites. This experimental set-up also provided a unique opportunity to study the natural history of malaria infection in the wild and to assess the effects of primary malaria infection on juvenile birds. We present three main findings: i) there was no support for local adaptation; ii) there was a male-biased infection rate; iii) infection occurred towards the end of the summer and differed between sites. There were also site-specific effects of malaria infection on the hosts. Taken together, we present one of the few experimental studies of parasite-host local adaptation in a natural malaria system, and our results shed light on the effects of avian malaria infection in the wild. PMID:26555892

  7. Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.

    PubMed

    Candy, J V

    2015-09-01

    The shallow ocean is a changing environment, primarily due to temperature variations in its upper layers that directly affect sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data, along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements. PMID:26428765
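
    The machinery of a bootstrap particle filter (predict, weight by the measurement likelihood, resample) is compact enough to sketch on a scalar toy problem. The drifting state below stands in loosely for a slowly varying modal coefficient; nothing in the sketch reproduces the paper's pressure-field model, and all noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_particles = 200, 500
q, r = 0.01, 0.5                  # process / measurement noise std (assumed)

# Simulate a slowly varying "true" state and noisy measurements of it.
truth = np.cumsum(rng.normal(0, q, T)) + 1.0
z = truth + rng.normal(0, r, T)

particles = rng.normal(1.0, 1.0, n_particles)
estimates = np.empty(T)
for t in range(T):
    particles += rng.normal(0, q, n_particles)            # predict
    w = np.exp(-0.5 * ((z[t] - particles) / r) ** 2)      # weight by likelihood
    w /= w.sum()
    estimates[t] = np.sum(w * particles)                  # MMSE estimate
    idx = rng.choice(n_particles, n_particles, p=w)       # resample
    particles = particles[idx]

rmse = np.sqrt(np.mean((estimates - truth) ** 2))
print(f"tracking RMSE: {rmse:.3f} (measurement noise std was {r})")
```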

  8. A morphological adaptation approach to path planning inspired by slime mould

    NASA Astrophysics Data System (ADS)

    Jones, Jeff

    2015-04-01

    Path planning is a classic problem in computer science and robotics which has recently been implemented in unconventional computing substrates such as chemical reaction-diffusion computers. These novel computing schemes utilise the parallel spatial propagation of information and often use a two-stage method involving diffusive propagation to discover all paths and a second stage to highlight or visualise the path between two particular points in the arena. The true slime mould Physarum polycephalum is known to construct efficient transport networks between nutrients in its environment. These networks are continuously remodelled as the organism adapts its body plan to changing spatial stimuli. It can be guided towards attractant stimuli (nutrients, warm regions) and it avoids locations containing hazardous stimuli (light irradiation, repellents, or regions occupied by predatory threats). Using a particle model of slime mould we demonstrate scoping experiments which explore how path planning may be performed by morphological adaptation. We initially demonstrate simple path planning by a shrinking blob of virtual plasmodium between two attractant sources within a polygonal arena. We examine the case where multiple paths are required and the subsequent selection of a single path from multiple options. Collision-free paths are implemented via repulsion from the borders of the arena. Finally, obstacle avoidance is implemented by repulsion from obstacles as they are uncovered by the shrinking blob. These examples show proof-of-concept results of path planning by morphological adaptation which complement existing research on path planning in novel computing substrates.

  9. Phenotypic and genotypic approach to characterize Arcanobacterium pluranimalium isolated from bovine milk samples.

    PubMed

    Wickhorst, Jörn-Peter; Hassan, Abdulwahed Ahmed; Sammra, Osama; Huber-Schlenstedt, Reglindis; Lämmler, Christoph; Prenger-Berninghoff, Ellen; Timke, Markus; Abdulmawjood, Amir

    2016-09-01

    In the present study, three Arcanobacterium pluranimalium strains isolated from bovine milk samples of three cows of three farms (two cows with subclinical mastitis) could successfully be identified by phenotypical investigations, by matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) analysis and genotypically by sequencing the molecular targets 16S rDNA, 16S-23S rDNA intergenic spacer region (ISR), the β subunit of bacterial RNA polymerase encoding gene rpoB, the glyceraldehyde 3-phosphate dehydrogenase encoding gene gap, the elongation factor tu encoding gene tuf, and the pluranimaliumlysin encoding gene pla. The latter could also be identified by a loop-mediated isothermal amplification (LAMP) assay. The presented phenotypic and genotypic approaches might support the identification of A. pluranimalium in future and might help to understand the role this species plays in bovine mastitis. PMID:26883140

  10. Determination of avermectins: a QuEChERS approach to the analysis of food samples.

    PubMed

    Rúbies, A; Antkowiak, S; Granados, M; Companyó, R; Centrich, F

    2015-08-15

    We present a simple method for extracting avermectins from meat, based on a QuEChERS approach followed by liquid chromatography (LC) coupled to triple quadrupole (QqQ) tandem mass spectrometry (MS/MS). The compounds considered are ivermectin, abamectin, emamectin, eprinomectin, doramectin and moxidectin. The new method has been fully validated according to the requirements of European Decision 657/2002/CE (EU, 2002). The method is suitable for the analysis of avermectins at concentrations as low as 2.5 μg kg⁻¹, and allows high sample throughput. In addition, the detection of avermectins by high resolution mass spectrometry using a quadrupole-Orbitrap (Q-Orbitrap) hybrid instrument has been explored, and the targeted Selected Ion Monitoring data-dependent MS/MS (t-SIM-dd MS/MS) mode has been found to provide excellent performance for residue determination of the target analytes. PMID:25794721

  11. Optimal unified approach for rare-variant association testing with application to small-sample case-control whole-exome sequencing studies.

    PubMed

    Lee, Seunggeun; Emond, Mary J; Bamshad, Michael J; Barnes, Kathleen C; Rieder, Mark J; Nickerson, Deborah A; Christiani, David C; Wurfel, Mark M; Lin, Xihong

    2012-08-10

    We propose in this paper a unified approach for testing the association between rare variants and phenotypes in sequencing association studies. This approach maximizes power by adaptively using the data to optimally combine the burden test and the nonburden sequence kernel association test (SKAT). Burden tests are more powerful when most variants in a region are causal and the effects are in the same direction, whereas SKAT is more powerful when a large fraction of the variants in a region are noncausal or the effects of causal variants are in different directions. The proposed unified test maintains the power in both scenarios. We show that the unified test corresponds to the optimal test in an extended family of SKAT tests, which we refer to as SKAT-O. The second goal of this paper is to develop a small-sample adjustment procedure for the proposed methods for the correction of conservative type I error rates of SKAT family tests when the trait of interest is dichotomous and the sample size is small. Both small-sample-adjusted SKAT and the optimal unified test (SKAT-O) are computationally efficient and can easily be applied to genome-wide sequencing association studies. We evaluate the finite sample performance of the proposed methods using extensive simulation studies and illustrate their application using the acute-lung-injury exome-sequencing data of the National Heart, Lung, and Blood Institute Exome Sequencing Project. PMID:22863193
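
    The unifying idea is a one-parameter family of statistics, Q_rho = (1 - rho) * Q_SKAT + rho * Q_burden, with the best rho chosen over a grid. The sketch below illustrates the combination on simulated genotypes; it uses permutations in place of the paper's analytic (small-sample-adjusted) null distribution and omits the variant weights used in practice, so it is an illustration of the principle rather than SKAT-O itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 12                                  # subjects, rare variants
G = (rng.random((n, m)) < 0.03).astype(float)   # 0/1 rare genotypes
beta = np.zeros(m)
beta[:4] = 1.2                                  # a few causal variants, same sign
y = G @ beta + rng.standard_normal(n)
y -= y.mean()

rhos = np.linspace(0.0, 1.0, 11)

def q_rho(y_):
    """Family of statistics: (1-rho)*SKAT-type + rho*burden-type, all rhos."""
    s = G.T @ y_                                # per-variant score vector
    return (1 - rhos) * np.sum(s ** 2) + rhos * np.sum(s) ** 2

obs = q_rho(y)
n_perm = 2000
perm = np.array([q_rho(rng.permutation(y)) for _ in range(n_perm)])

p_obs = (perm >= obs).mean(axis=0)              # per-rho permutation p-values
# Calibrate the min-p over rho against the same permutations (rank trick).
ranks = perm.argsort(axis=0).argsort(axis=0)    # 0 = smallest within a column
p_perm = 1.0 - ranks / n_perm                   # approx. fraction >= each value
p_min_null = p_perm.min(axis=1)
p_combined = (p_min_null <= p_obs.min()).mean()
print("per-rho p-values:", np.round(p_obs, 4))
print("SKAT-O-style min-p p-value:", round(float(p_combined), 4))
```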

  12. Optimal Unified Approach for Rare-Variant Association Testing with Application to Small-Sample Case-Control Whole-Exome Sequencing Studies

    PubMed Central

    Lee, Seunggeun; Emond, Mary J.; Bamshad, Michael J.; Barnes, Kathleen C.; Rieder, Mark J.; Nickerson, Deborah A.; Christiani, David C.; Wurfel, Mark M.; Lin, Xihong

    2012-01-01

    We propose in this paper a unified approach for testing the association between rare variants and phenotypes in sequencing association studies. This approach maximizes power by adaptively using the data to optimally combine the burden test and the nonburden sequence kernel association test (SKAT). Burden tests are more powerful when most variants in a region are causal and the effects are in the same direction, whereas SKAT is more powerful when a large fraction of the variants in a region are noncausal or the effects of causal variants are in different directions. The proposed unified test maintains the power in both scenarios. We show that the unified test corresponds to the optimal test in an extended family of SKAT tests, which we refer to as SKAT-O. The second goal of this paper is to develop a small-sample adjustment procedure for the proposed methods for the correction of conservative type I error rates of SKAT family tests when the trait of interest is dichotomous and the sample size is small. Both small-sample-adjusted SKAT and the optimal unified test (SKAT-O) are computationally efficient and can easily be applied to genome-wide sequencing association studies. We evaluate the finite sample performance of the proposed methods using extensive simulation studies and illustrate their application using the acute-lung-injury exome-sequencing data of the National Heart, Lung, and Blood Institute Exome Sequencing Project. PMID:22863193

  13. A Framework Approach to Evaluate Cross-Cultural Adaptation of Public Engagement Strategies for Radioactive Waste Management - 13430

    SciTech Connect

    Hermann, Laura

    2013-07-01

    The complex interplay of politics, economics and culture undermines attempts to define universal best practices for public engagement in the management of nuclear materials. In the international context, communicators must rely on careful adaptation and creative execution to make standard communication techniques succeed in their local communities. Nuclear professionals need an approach to assess and adapt culturally specific public engagement strategies to meet the demands of their particular political, economic and social structures. Using participant interviews and public sources, the Potomac Communications Group reviewed country-specific examples of nuclear-related communication efforts to provide insight into a proposed approach. The review considered a spectrum of cultural dimensions related to diversity, authority, conformity, proximity and time. Comparisons help to identify cross-cultural influences of various public engagement tactics and to inform a framework for communicators. While not prescriptive in its application, the framework offers a way for communicators to assess the salience of outreach tactics in specific situations. The approach can guide communicators to evaluate and tailor engagement strategies to achieve localized public outreach goals. (authors)

  14. A Novel Quantitative Approach for Eliminating Sample-To-Sample Variation Using a Hue Saturation Value Analysis Program

    PubMed Central

    McMullen, Eri; Figueiredo, Jose Luiz; Aikawa, Masanori; Aikawa, Elena

    2014-01-01

    Objectives As computing technology and image analysis techniques have advanced, the practice of histology has grown from a purely qualitative method to one that is highly quantified. Current image analysis software is imprecise and prone to wide variation due to common artifacts and histological limitations. In order to minimize the impact of these artifacts, a more robust method for quantitative image analysis is required. Methods and Results Here we present a novel image analysis software, based on the hue saturation value color space, to be applied to a wide variety of histological stains and tissue types. By using hue, saturation, and value variables instead of the more common red, green, and blue variables, our software offers some distinct advantages over other commercially available programs. We tested the program by analyzing several common histological stains, performed on tissue sections that ranged from 4 µm to 10 µm in thickness, using both a red green blue color space and a hue saturation value color space. Conclusion We demonstrated that our new software is a simple method for quantitative analysis of histological sections, which is highly robust to variations in section thickness, sectioning artifacts, and stain quality, eliminating sample-to-sample variation. PMID:24595280
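
    The advantage of a hue-saturation-value analysis can be seen in a few lines: section thickness mostly rescales brightness (value), leaving hue and saturation nearly unchanged, so a hue/saturation gate is stable where an RGB intensity threshold would drift. The sketch below uses an invented red "stain" band on synthetic images; the thresholds are hypothetical and this is not the authors' program.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def stained_fraction(rgb, hue_lo=0.85, hue_hi=1.0, sat_min=0.2):
    """Fraction of pixels whose hue lies in a (hypothetical) red stain band
    with at least sat_min saturation; rgb is (H, W, 3), floats in [0, 1]."""
    hsv = rgb_to_hsv(rgb)
    h, s = hsv[..., 0], hsv[..., 1]
    return float(np.mean((h >= hue_lo) & (h <= hue_hi) & (s >= sat_min)))

# Two synthetic "sections" with the same stained area; the thicker one is
# darker overall. Value changes, hue and saturation barely do.
rng = np.random.default_rng(0)
img = np.full((100, 100, 3), 0.9)
img[30:70, 30:70] = (0.8, 0.15, 0.2)            # red-stained region (16%)
thin = np.clip(img + 0.02 * rng.standard_normal(img.shape), 0, 1)
thick = np.clip(img * 0.6 + 0.02 * rng.standard_normal(img.shape), 0, 1)

print(f"thin section stained fraction : {stained_fraction(thin):.3f}")
print(f"thick section stained fraction: {stained_fraction(thick):.3f}")
```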

  15. A novel four-dimensional analytical approach for analysis of complex samples.

    PubMed

    Stephan, Susanne; Jakob, Cornelia; Hippler, Jörg; Schmitz, Oliver J

    2016-05-01

    A two-dimensional LC (2D-LC) method, based on the work of Erni and Frei in 1978, was developed and coupled to an ion mobility-high-resolution mass spectrometer (IM-MS), which enabled the separation of complex samples in four dimensions (2D-LC, ion mobility spectrometry (IMS), and mass spectrometry (MS)). This approach works as a continuous multiheart-cutting LC system, using a long modulation time of 4 min, which allows the complete transfer of most of the first-dimension peaks to the second-dimension column without fractionation, in comparison to comprehensive two-dimensional liquid chromatography. Hence, each compound delivers only one peak in the second dimension, which simplifies the data handling even when ion mobility spectrometry as a third and mass spectrometry as a fourth dimension are introduced. The analysis of a plant extract from Ginkgo biloba shows the separation power of this four-dimensional separation method, with a calculated total peak capacity of more than 8700. Furthermore, the advantage of ion mobility for characterizing unknown compounds by their collision cross section (CCS) and accurate mass in a non-target approach is shown for different matrices like plant extracts and coffee. Graphical abstract: Principle of the four-dimensional separation. PMID:27038056

  16. A non-iterative sampling approach using noise subspace projection for EIT

    NASA Astrophysics Data System (ADS)

    Bellis, Cédric; Constantinescu, Andrei; Coquet, Thomas; Jaravel, Thomas; Lechleiter, Armin

    2012-07-01

    This study concerns the problem of the reconstruction of inclusions embedded in a conductive medium in the context of electrical impedance tomography (EIT), which is investigated within the framework of a non-iterative sampling approach. This type of identification strategy relies on the construction of a special indicator function that takes, roughly speaking, small values outside the inclusion and large values inside. Such a function is constructed in this paper from the projection of a fundamental singular solution onto the space spanned by the singular vectors associated with some of the smallest singular values of the data-to-measurement operator. The behavior of the novel indicator function is analyzed. For a subsequent implementation in a discrete setting, the quality of classical finite-dimensional approximations of the measurement operator is discussed. The robustness of this approach is also analyzed when only noisy spectral information is available. Finally, this identification method is implemented numerically and experimentally, and its efficiency is discussed on a set of, partly experimental, examples.

  17. Approaching Ultimate Intrinsic SNR in a Uniform Spherical Sample with Finite Arrays of Loop Coils

    PubMed Central

    Vaidya, Manushka V.; Sodickson, Daniel K.; Lattanzi, Riccardo

    2015-01-01

    We investigated to what degree and at what rate the ultimate intrinsic (UI) signal-to-noise ratio (SNR) may be approached using finite radiofrequency detector arrays. We used full-wave electromagnetic field simulations based on dyadic Green’s functions to compare the SNR of arrays of loops surrounding a uniform sphere with the ultimate intrinsic SNR (UISNR), for increasing numbers of elements over a range of magnetic field strengths, voxel positions, sphere sizes, and acceleration factors. We evaluated the effect of coil conductor losses and the performance of a variety of distinct geometrical arrangements such as “helmet” and “open-pole” configurations in multiple imaging planes. Our results indicate that UISNR at the center is rapidly approached with encircling arrays and performance is substantially lower near the surface, where a quadrature detection configuration tailored to voxel position is optimal. Coil noise is negligible at high field, where sample noise dominates. Central SNR for practical array configurations such as the helmet is similar to that of close-packed arrangements. The observed trends can provide physical insights to improve coil design. PMID:26097442

  18. Novel approach for the development of axenic microalgal cultures from environmental samples.

    PubMed

    Cho, Dae-Hyun; Ramanan, Rishiram; Kim, Byung-Hyuk; Lee, Jimin; Kim, Sora; Yoo, Chan; Choi, Gang-Guk; Oh, Hee-Mock; Kim, Hee-Sik

    2013-08-01

    We demonstrated a comprehensive approach for development of axenic cultures of microalgae from environmental samples. A combination of ultrasonication, fluorescence-activated cell sorting (FACS), and micropicking was used to isolate axenic cultures of Chlorella vulgaris Beyerinck (Beijerinck) and Chlorella sorokiniana Shihira & R.W. Krauss from swine wastewater, and Scenedesmus sp. YC001 from an open pond. Ultrasonication dispersed microorganisms attached to microalgae and reduced the bacterial population by 70%, and when followed by cell sorting yielded 99.5% pure microalgal strains. The strains were rendered axenic by the novel method of micropicking and were tested for purity in both solid and liquid media under different trophic states. Denaturing gradient gel electrophoresis (DGGE) of 16S rRNA gene confirmed the absence of unculturable bacteria, whereas fluorescence microscopy and scanning electron microscopy (SEM) further confirmed the axenicity. This is the most comprehensive approach developed to date for obtaining axenic microalgal strains without the use of antibiotics and repetitive subculturing. PMID:27007211

  19. Virtual MEG Helmet: Computer Simulation of an Approach to Neuromagnetic Field Sampling.

    PubMed

    Medvedovsky, Mordekhay; Nenonen, Jukka; Koptelova, Alexandra; Butorina, Anna; Paetau, Ritva; Makela, Jyrki P; Ahonen, Antti; Simola, Juha; Gazit, Tomer; Taulu, Samu

    2016-03-01

    Head movements during an MEG recording are commonly considered an obstacle. In this computer simulation study, we introduce an approach, the virtual MEG helmet (VMH), which employs the head movements for data quality improvement. With a VMH, a denser MEG helmet is constructed by adding new sensors corresponding to different head positions. Based on Shannon's theory of communication, we calculated the total information as a figure of merit for comparing the actual 306-sensor Elekta Neuromag helmet to several types of the VMH. As source models, we used simulated randomly distributed source current (RDSC), simulated auditory evoked fields and simulated somatosensory evoked fields. Using the RDSC model with the simulation of 360 recorded events, the total information (bits/sample) was 989 for the most informative single head position and up to 1272 for the VMH (an addition of 28.6%). Using simulated AEFs, the additional contribution of a VMH was 12.6%, and using simulated SEFs it was only 1.1%. For distributed and bilateral sources, a VMH can provide a more informative sampling of the neuromagnetic field during the same recording time than measuring the MEG from one head position. The VMH can, in some situations, improve source localization of neuromagnetic fields related to normal and pathological brain activity. This should be investigated further employing real MEG recordings. PMID:25616085

  20. Matrix compatible solid phase microextraction coating, a greener approach to sample preparation in vegetable matrices.

    PubMed

    Naccarato, Attilio; Pawliszyn, Janusz

    2016-09-01

    This work proposes the novel PDMS/DVB/PDMS fiber as a greener strategy for analysis by direct immersion solid phase microextraction (SPME) in vegetables. SPME is an established sample preparation approach that has not yet been adequately explored for food analysis in direct immersion mode, due to the limitations of the available commercial coatings. The robustness and endurance of this new coating were investigated by direct immersion extractions in raw blended vegetables without any further sample preparation steps. The PDMS/DVB/PDMS coating exhibited superior features related to the capability of the external PDMS layer to protect the commercial coating, and showed improvements in terms of extraction capability and the cleanability of the coating surface. Besides contributing to the recognition of the superior features of this new fiber concept before commercialization, the outcomes of this work confirm advancements in the matrix compatibility of the PDMS-modified fiber, and open new prospects for the development of greener, high-throughput analytical methods in food analysis using solid phase microextraction in the near future. PMID:27041299