Adaptive Sampling approach to environmental site characterization: Phase 1 demonstration
Floran, R.J.; Bujewski, G.E.; Johnson, R.L.
1995-07-01
A technology demonstration that optimizes sampling strategies and real-time data collection was carried out at the Kirtland Air Force Base (KAFB) RB-11 Radioactive Burial Site, Albuquerque, New Mexico in August 1994. The project, which was funded by the Strategic Environmental Research and Development Program (SERDP), involved the application of a geostatistical-based Adaptive Sampling methodology and software with on-site field screening of soils for radiation, organic compounds and metals. The software, known as Plume{trademark}, was developed at Argonne National Laboratory as part of the DOE/OTD-funded Mixed Waste Landfill Integrated Demonstration (MWLID). The objective of the investigation was to compare an innovative Adaptive Sampling approach that stressed real-time decision-making with a conventional RCRA-driven site characterization carried out by the Air Force. The latter investigation used a standard drilling and sampling plan as mandated by the Environmental Protection Agency (EPA). To make the comparison realistic, the same contractors and sampling equipment (Geoprobe{reg_sign} soil samplers) were used. In both investigations, soil samples were collected at several depths at numerous locations adjacent to burial trenches that contain low-level radioactive waste and animal carcasses; some trenches may also contain mixed waste. Neither study revealed the presence of contaminants appreciably above risk based action levels, indicating that minimal to no migration has occurred away from the trenches. The combination of Adaptive Sampling with field screening achieved a similar level of confidence compared to the Resource Conservation and Recovery Act (RCRA) investigation regarding the potential migration of contaminants at the site.
A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification
Li, Weixuan; Zhang, Dongxiao; Lin, Guang
2015-02-25
A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield the simulation results matching the measurements. For such ill-posed history matching problems, Bayesian theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. Multimodal posterior of the history matching
Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach
Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei
2016-01-01
Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need of releasing series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composibility of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
Adaptive Sampling Proxy Application
Energy Science and Technology Software Center (ESTSC)
2012-10-22
ASPA is an implementation of an adaptive sampling algorithm [1-3], which is used to reduce the computational expense of computer simulations that couple disparate physical scales. The purpose of ASPA is to encapsulate the algorithms required for adaptive sampling independently from any specific application, so that alternative algorithms and programming models for exascale computers can be investigated more easily.
Adaptive sampling for noisy problems
Cantu-Paz, E
2004-03-26
The usual approach to deal with noise present in many real-world optimization problems is to take an arbitrary number of samples of the objective function and use the sample average as an estimate of the true objective value. The number of samples is typically chosen arbitrarily and remains constant for the entire optimization process. This paper studies an adaptive sampling technique that varies the number of samples based on the uncertainty of deciding between two individuals. Experiments demonstrate the effect of adaptive sampling on the final solution quality reached by a genetic algorithm and the computational cost required to find the solution. The results suggest that the adaptive technique can effectively eliminate the need to set the sample size a priori, but in many cases it requires high computational costs.
Benner, Philipp; Elze, Tobias
2012-01-01
We present a predictive account on adaptive sequential sampling of stimulus-response relations in psychophysical experiments. Our discussion applies to experimental situations with ordinal stimuli when there is only weak structural knowledge available such that parametric modeling is no option. By introducing a certain form of partial exchangeability, we successively develop a hierarchical Bayesian model based on a mixture of Pólya urn processes. Suitable utility measures permit us to optimize the overall experimental sampling process. We provide several measures that are either based on simple count statistics or more elaborate information theoretic quantities. The actual computation of information theoretic utilities often turns out to be infeasible. This is not the case with our sampling method, which relies on an efficient algorithm to compute exact solutions of our posterior predictions and utility measures. Finally, we demonstrate the advantages of our framework on a hypothetical sampling problem. PMID:22822269
Bujewski, G.E.; Johnson, R.L.
1996-04-01
Adaptive sampling programs provide real opportunities to save considerable time and money when characterizing hazardous waste sites. This Strategic Environmental Research and Development Program (SERDP) project demonstrated two decision-support technologies, SitePlanner{trademark} and Plume{trademark}, that can facilitate the design and deployment of an adaptive sampling program. A demonstration took place at Joliet Army Ammunition Plant (JAAP), and was unique in that it was tightly coupled with ongoing Army characterization work at the facility, with close scrutiny by both state and federal regulators. The demonstration was conducted in partnership with the Army Environmental Center`s (AEC) Installation Restoration Program and AEC`s Technology Development Program. AEC supported researchers from Tufts University who demonstrated innovative field analytical techniques for the analysis of TNT and DNT. SitePlanner{trademark} is an object-oriented database specifically designed for site characterization that provides an effective way to compile, integrate, manage and display site characterization data as it is being generated. Plume{trademark} uses a combination of Bayesian analysis and geostatistics to provide technical staff with the ability to quantitatively merge soft and hard information for an estimate of the extent of contamination. Plume{trademark} provides an estimate of contamination extent, measures the uncertainty associated with the estimate, determines the value of additional sampling, and locates additional samples so that their value is maximized.
ERIC Educational Resources Information Center
Flournoy, Nancy
Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
Adaptive Peer Sampling with Newscast
NASA Astrophysics Data System (ADS)
Tölgyesi, Norbert; Jelasity, Márk
The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates which can result in very poor random sampling especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters, namely—without involving failure detectors—nodes passively monitor local protocol events using them as feedback for a local control loop for self-tuning the protocol parameters. The proposed solution is evaluated by simulation experiments.
Adaptive Sampling in Hierarchical Simulation
Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R
2007-07-09
We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) Characterization of MISR Rahman model parameters conditioned upon MODIS landcover. c) Rigorous non-parametric Bayesian approach to analytic mixture model determination. d) Unique U.S. asset for science product validation and verification.
Adaptive approaches to biosecurity governance.
Cook, David C; Liu, Shuang; Murphy, Brendan; Lonsdale, W Mark
2010-09-01
This article discusses institutional changes that may facilitate an adaptive approach to biosecurity risk management where governance is viewed as a multidisciplinary, interactive experiment acknowledging uncertainty. Using the principles of adaptive governance, evolved from institutional theory, we explore how the concepts of lateral information flows, incentive alignment, and policy experimentation might shape Australia's invasive species defense mechanisms. We suggest design principles for biosecurity policies emphasizing overlapping complementary response capabilities and the sharing of invasive species risks via a polycentric system of governance. PMID:20561262
The Limits to Adaptation; A Systems Approach
The Limits to Adaptation: A Systems Approach. The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...
Adaptive sampling program support for expedited site characterization
Johnson, R.
1993-10-01
Expedited site characterizations offer substantial savings in time and money when assessing hazardous waste sites. Key to some of these savings is the ability to adapt a sampling program to the ``real-time`` data generated by an expedited site characterization. This paper presents a two-prong approach to supporting adaptive sampling programs: a specialized object-oriented database/geographical information system for data fusion, management and display; and combined Bayesian/geostatistical methods for contamination extent estimation and sample location selection.
Adaptive Sampling for High Throughput Data Using Similarity Measures
Bulaevskaya, V.; Sales, A. P.
2015-05-06
The need for adaptive sampling arises in the context of high throughput data because the rates of data arrival are many orders of magnitude larger than the rates at which they can be analyzed. A very fast decision must therefore be made regarding the value of each incoming observation and its inclusion in the analysis. In this report we discuss one approach to adaptive sampling, based on the new data point’s similarity to the other data points being considered for inclusion. We present preliminary results for one real and one synthetic data set.
Feature Adaptive Sampling for Scanning Electron Microscopy.
Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp
2016-01-01
A new method for the image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a-posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning. PMID:27150131
Feature Adaptive Sampling for Scanning Electron Microscopy
Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp
2016-01-01
A new method for the image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a-posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning. PMID:27150131
Feature Adaptive Sampling for Scanning Electron Microscopy
NASA Astrophysics Data System (ADS)
Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp
2016-05-01
A new method for the image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a-posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning.
Adapting Courses to Distance Delivery: Three Approaches.
ERIC Educational Resources Information Center
Landis, Melodee
1999-01-01
Describes three approaches to adapting courses to distance delivery: the most common "dive-in" technique (little preparation other than adapting print on transparencies, practicing with technology controls, and test-running); the "chunking" approach (considering how the major "chunks" of teaching can be transported to new technologies); and the…
Acquiring case adaptation knowledge: A hybrid approach
Leake, D.B.; Kinley, A.; Wilson, D.
1996-12-31
The ability of case-based reasoning (CBR) systems to apply cases to novel situations depends on their case adaptation knowledge. However, endowing CBR systems with adequate adaptation knowledge has proven to be a very difficult task. This paper describes a hybrid method for performing case adaptation, using a combination of rule-based and case-based reasoning. It shows how this approach provides a framework for acquiring flexible adaptation knowledge from experiences with autonomous adaptation and suggests its potential as a basis for acquisition of adaptation knowledge from interactive user guidance. It also presents initial experimental results examining the benefits of the approach and comparing the relative contributions of case learning and adaptation learning to reasoning performance.
Sampling and surface reconstruction with adaptive-size meshes
NASA Astrophysics Data System (ADS)
Huang, Wen-Chen; Goldgof, Dmitry B.
1992-03-01
This paper presents a new approach to sampling and surface reconstruction which uses the physically based models. We introduce adaptive-size meshes which automatically update the size of the meshes as the distance between the nodes changes. We have implemented the adaptive-size algorithm to the following three applications: (1) Sampling of the intensity data. (2) Surface reconstruction of the range data. (3) Surface reconstruction of the 3-D computed tomography left ventricle data. The LV data was acquired by the 3-D computed tomography (CT) scanner. It was provided by Dr. Eric Hoffman at University of Pennsylvania Medical school and consists of 16 volumetric (128 X 128 X 118) images taken through the heart cycle.
Flight Test Approach to Adaptive Control Research
NASA Technical Reports Server (NTRS)
Pavlock, Kate Maureen; Less, James L.; Larson, David Nils
2011-01-01
The National Aeronautics and Space Administration s Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.
A Predictive Analysis Approach to Adaptive Testing.
ERIC Educational Resources Information Center
Kirisci, Levent; Hsu, Tse-Chi
The predictive analysis approach to adaptive testing originated in the idea of statistical predictive analysis suggested by J. Aitchison and I.R. Dunsmore (1975). The adaptive testing model proposed is based on parameter-free predictive distribution. Aitchison and Dunsmore define statistical prediction analysis as the use of data obtained from an…
Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data
Liu, Zitao; Hauskrecht, Milos
2016-01-01
Building accurate predictive models of clinical multivariate time series is crucial for understanding of the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive to reflect well patient-specific temporal behaviors and this also in the case when the available patient-specific data are sparse and short span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189
Phobos Sample Return: Next Approach
NASA Astrophysics Data System (ADS)
Zelenyi, Lev; Martynov, Maxim; Zakharov, Alexander; Korablev, Oleg; Ivanov, Alexey; Karabadzak, George
The Martian moons still remain a mystery after numerous studies by Mars orbiting spacecraft. Their study cover three major topics related to (1) Solar system in general (formation and evolution, origin of planetary satellites, origin and evolution of life); (2) small bodies (captured asteroid, or remnants of Mars formation, or reaccreted Mars ejecta); (3) Mars (formation and evolution of Mars; Mars ejecta at the satellites). As reviewed by Galimov [2010] most of the above questions require the sample return from the Martian moon, while some (e.g. the characterization of the organic matter) could be also answered by in situ experiments. There is the possibility to obtain the sample of Mars material by sampling Phobos: following to Chappaz et al. [2012] a 200-g sample could contain 10-7 g of Mars surface material launched during the past 1 mln years, or 5*10-5 g of Mars material launched during the past 10 mln years, or 5*1010 individual particles from Mars, quantities suitable for accurate laboratory analyses. The studies of Phobos have been of high priority in the Russian program on planetary research for many years. Phobos-88 mission consisted of two spacecraft (Phobos-1, Phobos-2) and aimed the approach to Phobos at 50 m and remote studies, and also the release of small landers (long-living stations DAS). This mission implemented the program incompletely. It was returned information about the Martian environment and atmosphere. The next profect Phobos Sample Return (Phobos-Grunt) initially planned in early 2000 has been delayed several times owing to budget difficulties; the spacecraft failed to leave NEO in 2011. The recovery of the science goals of this mission and the delivery of the samples of Phobos to Earth remain of highest priority for Russian scientific community. The next Phobos SR mission named Boomerang was postponed following the ExoMars cooperation, but is considered the next in the line of planetary exploration, suitable for launch around 2022. A
Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks
Xu, Yunfei; Choi, Jongeun
2011-01-01
This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785
Adaptive Sampling of Time Series During Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.
2012-01-01
This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonsta tion - ary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
Connectionist approach to adaptive reasoning
NASA Astrophysics Data System (ADS)
Reddy, Mohan S.; Pandya, Abhijit S.; Reddy, D. V.
1995-06-01
This paper illustrates the neural net approach to constructing a fuzzy logic decision system. This technique employs an artificial neural network (ANN) to recognize the relationships that exist between the various inputs and outputs. An ANN is constructed based on the variable present in the application. The network is trained and tested. After successful testing, the ANN is exposed to new data and the results are grouped into fuzzy membership sets. This data grouping forms the basis of a new ANN. The network is now trained and tested with the fuzzy membership data. New data is presented to the trained network and the results from the fuzzy implications. This approach is used to compute skid resistance values from G-analyst accelerometer readings on open grid bridge decks.
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph
2015-03-18
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our predictionmore » scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.« less
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph
2015-03-18
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
Distributed Database Kriging for Adaptive Sampling (D2 KAS)
NASA Astrophysics Data System (ADS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph
2015-07-01
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5-25, while retaining high accuracy for various choices of the algorithm parameters.
Flight Approach to Adaptive Control Research
NASA Technical Reports Server (NTRS)
Pavlock, Kate Maureen; Less, James L.; Larson, David Nils
2011-01-01
The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The testbed served as a full-scale vehicle to test and validate adaptive flight control research addressing technical challenges involved with reducing risk to enable safe flight in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.
Estimation of cosmological parameters using adaptive importance sampling
Wraith, Darren; Kilbinger, Martin; Benabed, Karim; Prunet, Simon; Cappe, Olivier; Fort, Gersende; Cardoso, Jean-Francois; Robert, Christian P.
2009-07-15
We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time scale reduces from days for MCMC to hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analyzed and discussed.
A modular approach to adaptive structures.
Pagitz, Markus; Pagitz, Manuel; Hühne, Christian
2014-01-01
A remarkable property of nastic, shape changing plants is their complete fusion between actuators and structure. This is achieved by combining a large number of cells whose geometry, internal pressures and material properties are optimized for a given set of target shapes and stiffness requirements. An advantage of such a fusion is that cell walls are prestressed by cell pressures which increases, decreases the overall structural stiffness, weight. Inspired by the nastic movement of plants, Pagitz et al (2012 Bioinspir. Biomim. 7) published a novel concept for pressure actuated cellular structures. This article extends previous work by introducing a modular approach to adaptive structures. An algorithm that breaks down any continuous target shapes into a small number of standardized modules is presented. Furthermore it is shown how cytoskeletons within each cell enhance the properties of adaptive modules. An adaptive passenger seat and an aircrafts leading, trailing edge is used to demonstrate the potential of a modular approach. PMID:25289521
Cross-Cultural Adaptation: Current Approaches.
ERIC Educational Resources Information Center
Kim, Young Yun, Ed.; Gudykunst, William B., Ed.
1988-01-01
Reflecting multidisciplinary and multisocietal approaches, this collection presents 14 theoretical or research-based essays dealing with cross-cultural adaptation of individuals who are born and raised in one culture and find themselves in need of modifying their customary life patterns in a foreign culture. Papers in the collection are:…
Adaptive Importance Sampling for Control and Inference
NASA Astrophysics Data System (ADS)
Kappen, H. J.; Ruiz, H. C.
2016-03-01
Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows to learn feed-back controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high- order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and considerable implementation has been made in our contributions in the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
Averaging analysis for discrete time and sampled data adaptive systems
NASA Technical Reports Server (NTRS)
Fu, Li-Chen; Bai, Er-Wei; Sastry, Shankar S.
1986-01-01
Earlier continuous time averaging theorems are extended to the nonlinear discrete time case. Theorems for the study of the convergence analysis of discrete time adaptive identification and control systems are used. Instability theorems are also derived and used for the study of robust stability and instability of adaptive control schemes applied to sampled data systems. As a by product, the effects of sampling on unmodeled dynamics in continuous time systems are also studied.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-01-01
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling based approaches can control upload frequency directly, however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC) which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason of high performance in the proposed approach. PMID:27589758
Adaptive importance sampling of random walks on continuous state spaces
Baggerly, K.; Cox, D.; Picard, R.
1998-11-01
The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.
Adaptive video compressed sampling in the wavelet domain
NASA Astrophysics Data System (ADS)
Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi
2016-07-01
In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary DMD and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.
Approaching neuropsychological tasks through adaptive neurorobots
NASA Astrophysics Data System (ADS)
Gigliotta, Onofrio; Bartolomeo, Paolo; Miglino, Orazio
2015-04-01
Neuropsychological phenomena have been modelized mainly, by the mainstream approach, by attempting to reproduce their neural substrate whereas sensory-motor contingencies have attracted less attention. In this work, we introduce a simulator based on the evolutionary robotics platform Evorobot* in order to setting up in silico neuropsychological tasks. Moreover, in this study we trained artificial embodied neurorobotic agents equipped with a pan/tilt camera, provided with different neural and motor capabilities, to solve a well-known neuropsychological test: the cancellation task in which an individual is asked to cancel target stimuli surrounded by distractors. Results showed that embodied agents provided with additional motor capabilities (a zooming/attentional actuator) outperformed simple pan/tilt agents, even those equipped with more complex neural controllers and that the zooming ability is exploited to correctly categorising presented stimuli. We conclude that since the sole neural computational power cannot explain the (artificial) cognition which emerged throughout the adaptive process, such kind of modelling approach can be fruitful in neuropsychological modelling where the importance of having a body is often neglected.
The Limits to Adaptation: A Systems Approach
The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed th...
An Adaptive Critic Approach to Reference Model Adaptation
NASA Technical Reports Server (NTRS)
Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.
2003-01-01
Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C- 17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.
Postolache, Dragos; Lascoux, Martin; Drouzas, Andreas D.; Källman, Thomas; Leonarduzzi, Cristina; Liepelt, Sascha; Piotti, Andrea; Popescu, Flaviu; Roschanski, Anna M.; Zhelev, Peter; Fady, Bruno; Vendramin, Giovanni Giuseppe
2016-01-01
Background Local adaptation is a key driver of phenotypic and genetic divergence at loci responsible for adaptive traits variations in forest tree populations. Its experimental assessment requires rigorous sampling strategies such as those involving population pairs replicated across broad spatial scales. Methods A hierarchical Bayesian model of selection (HBM) that explicitly considers both the replication of the environmental contrast and the hierarchical genetic structure among replicated study sites is introduced. Its power was assessed through simulations and compared to classical ‘within-site’ approaches (FDIST, BAYESCAN) and a simplified, within-site, version of the model introduced here (SBM). Results HBM demonstrates that hierarchical approaches are very powerful to detect replicated patterns of adaptive divergence with low false-discovery (FDR) and false-non-discovery (FNR) rates compared to the analysis of different sites separately through within-site approaches. The hypothesis of local adaptation to altitude was further addressed by analyzing replicated Abies alba population pairs (low and high elevations) across the species’ southern distribution range, where the effects of climatic selection are expected to be the strongest. For comparison, a single population pair from the closely related species A. cephalonica was also analyzed. The hierarchical model did not detect any pattern of adaptive divergence to altitude replicated in the different study sites. Instead, idiosyncratic patterns of local adaptation among sites were detected by within-site approaches. Conclusion Hierarchical approaches may miss idiosyncratic patterns of adaptation among sites, and we strongly recommend the use of both hierarchical (multi-site) and classical (within-site) approaches when addressing the question of adaptation across broad spatial scales. PMID:27392065
Johnson, R.L.
1993-11-01
Adaptive sampling programs offer substantial savings in time and money when assessing hazardous waste sites. Key to some of these savings is the ability to adapt a sampling program to the real-time data generated by an adaptive sampling program. This paper presents a two-prong approach to supporting adaptive sampling programs: a specialized object-oriented database/geographical information system (SitePlanner{trademark} ) for data fusion, management, and display and combined Bayesian/geostatistical methods (PLUME) for contamination-extent estimation and sample location selection. This approach is applied in a retrospective study of a subsurface chromium plume at Sandia National Laboratories` chemical waste landfill. Retrospective analyses suggest the potential for characterization cost savings on the order of 60% through a reduction in the number of sampling programs, total number of soil boreholes, and number of samples analyzed from each borehole.
Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations
Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer
2013-09-01
Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspirations from topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus local topological view of the simulation space, comparing several different strategies for adaptive sampling in both
Anomalous human behavior detection: an adaptive approach
NASA Astrophysics Data System (ADS)
van Leeuwen, Coen; Halma, Arvid; Schutte, Klamer
2013-05-01
Detection of anomalies (outliers or abnormal instances) is an important element in a range of applications such as fault, fraud, suspicious behavior detection and knowledge discovery. In this article we propose a new method for anomaly detection and performed tested its ability to detect anomalous behavior in videos from DARPA's Mind's Eye program, containing a variety of human activities. In this semi-unsupervised task a set of normal instances is provided for training, after which unknown abnormal behavior has to be detected in a test set. The features extracted from the video data have high dimensionality, are sparse and inhomogeneously distributed in the feature space making it a challenging task. Given these characteristics a distance-based method is preferred, but choosing a threshold to classify instances as (ab)normal is non-trivial. Our novel aproach, the Adaptive Outlier Distance (AOD) is able to detect outliers in these conditions based on local distance ratios. The underlying assumption is that the local maximum distance between labeled examples is a good indicator of the variation in that neighborhood, and therefore a local threshold will result in more robust outlier detection. We compare our method to existing state-of-art methods such as the Local Outlier Factor (LOF) and the Local Distance-based Outlier Factor (LDOF). The results of the experiments show that our novel approach improves the quality of the anomaly detection.
Adaptive sample map for Monte Carlo ray tracing
NASA Astrophysics Data System (ADS)
Teng, Jun; Luo, Lixin; Chen, Zhibo
2010-07-01
Monte Carlo ray tracing algorithm is widely used by production quality renderers to generate synthesized images in films and TV programs. Noise artifact exists in synthetic images generated by Monte Carlo ray tracing methods. In this paper, a novel noise artifact detection and noise level representation method is proposed. We first apply discrete wavelet transform (DWT) on a synthetic image; the high frequency sub-bands of the DWT result encode the noise information. The sub-bands coefficients are then combined to generate a noise level description of the synthetic image, which is called noise map in the paper. This noise map is then subdivided into blocks for robust noise level metric calculation. Increasing the samples per pixel in Monte Carlo ray tracer can reduce the noise of a synthetic image to visually unnoticeable level. A noise-to-sample number mapping algorithm is thus performed on each block of the noise map, higher noise value is mapped to larger sample number, and lower noise value is mapped to smaller sample number, the result of mapping is called sample map. Each pixel in a sample map can be used by Monte Carlo ray tracer to reduce the noise level in the corresponding block of pixels in a synthetic image. However, this block based scheme produces blocky artifact as appeared in video and image compression algorithms. We use Gaussian filter to smooth the sample map, the result is adaptive sample map (ASP). ASP serves two purposes in rendering process; its statistics information can be used as noise level metric in synthetic image, and it can also be used by a Monte Carlo ray tracer to refine the synthetic image adaptively in order to reduce the noise to unnoticeable level but with less rendering time than the brute force method.
Russian Loanword Adaptation in Persian; Optimal Approach
ERIC Educational Resources Information Center
Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat
2011-01-01
In this paper we analyzed some of the phonological rules of Russian loanword adaptation in Persian, on the view of Optimal Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of phonological process on Russian loanwords adaptation in Persian. By gathering about 50 current Russian loanwords, we selected some of them to analyze. We…
Improving Wang-Landau sampling with adaptive windows
NASA Astrophysics Data System (ADS)
Cunha-Netto, A. G.; Caparica, A. A.; Tsai, Shan-Ho; Dickman, Ronald; Landau, D. P.
2008-11-01
Wang-Landau sampling (WLS) of large systems requires dividing the energy range into “windows” and joining the results of simulations in each window. The resulting density of states (and associated thermodynamic functions) is shown to suffer from boundary effects in simulations of lattice polymers and the five-state Potts model. Here, we implement WLS using adaptive windows. Instead of defining fixed energy windows (or windows in the energy-magnetization plane for the Potts model), the boundary positions depend on the set of energy values on which the histogram is flat at a given stage of the simulation. Shifting the windows each time the modification factor f is reduced, we eliminate border effects that arise in simulations using fixed windows. Adaptive windows extend significantly the range of system sizes that may be studied reliably using WLS.
Binary hologram generation based on shape adaptive sampling
NASA Astrophysics Data System (ADS)
Tsang, P. W. M.; Pan, Y.; Poon, T.-C.
2014-05-01
Past research has revealed that by down-sampling the projected intensity profile of a source object scene with a regular sampling lattice, a binary Fresnel hologram can be generated swiftly to preserve favorable quality on its reconstructed image. However, this method also results in a prominent textural pattern which is conflicting to the geometrical profile of the object scene, leading to an unnatural visual perception. In this paper, we shall overcome this problem with a down-sampling process that is adaptive to the geometry of the object. Experimental results demonstrate that by applying our proposed method to generate a binary hologram, the reconstructed image is rendered with a texture which abides with the shape of the three-dimensional object(s).
Smith, David R; Gray, Brian R; Newton, Teresa J; Nichols, Doug
2010-11-01
Adaptive sampling designs are recommended where, as is typical with freshwater mussels, the outcome of interest is rare and clustered. However, the performance of adaptive designs has not been investigated when outcomes are not only rare and clustered but also imperfectly detected. We address this combination of challenges using data simulated to mimic properties of freshwater mussels from a reach of the upper Mississippi River. Simulations were conducted under a range of sample sizes and detection probabilities. Under perfect detection, efficiency of the adaptive sampling design increased relative to the conventional design as sample size increased and as density decreased. Also, the probability of sampling occupied habitat was four times higher for adaptive than conventional sampling of the lowest density population examined. However, imperfect detection resulted in substantial biases in sample means and variances under both adaptive sampling and conventional designs. The efficiency of adaptive sampling declined with decreasing detectability. Also, the probability of encountering an occupied unit during adaptive sampling, relative to conventional sampling declined with decreasing detectability. Thus, the potential gains in the application of adaptive sampling to rare and clustered populations relative to conventional sampling are reduced when detection is imperfect. The results highlight the need to increase or estimate detection to improve performance of conventional and adaptive sampling designs. PMID:19946742
Smith, D.R.; Gray, B.R.; Newton, T.J.; Nichols, D.
2010-01-01
Adaptive sampling designs are recommended where, as is typical with freshwater mussels, the outcome of interest is rare and clustered. However, the performance of adaptive designs has not been investigated when outcomes are not only rare and clustered but also imperfectly detected. We address this combination of challenges using data simulated to mimic properties of freshwater mussels from a reach of the upper Mississippi River. Simulations were conducted under a range of sample sizes and detection probabilities. Under perfect detection, efficiency of the adaptive sampling design increased relative to the conventional design as sample size increased and as density decreased. Also, the probability of sampling occupied habitat was four times higher for adaptive than conventional sampling of the lowest density population examined. However, imperfect detection resulted in substantial biases in sample means and variances under both adaptive sampling and conventional designs. The efficiency of adaptive sampling declined with decreasing detectability. Also, the probability of encountering an occupied unit during adaptive sampling, relative to conventional sampling declined with decreasing detectability. Thus, the potential gains in the application of adaptive sampling to rare and clustered populations relative to conventional sampling are reduced when detection is imperfect. The results highlight the need to increase or estimate detection to improve performance of conventional and adaptive sampling designs.
Elucidating Microbial Adaptation Dynamics via Autonomous Exposure and Sampling
NASA Astrophysics Data System (ADS)
Grace, J. M.; Verseux, C.; Gentry, D.; Moffet, A.; Thayabaran, R.; Wong, N.; Rothschild, L.
2013-12-01
The adaptation of micro-organisms to their environments is a complex process of interaction between the pressures of the environment and of competition. Reducing this multifactorial process to environmental exposure in the laboratory is a common tool for elucidating individual mechanisms of evolution, such as mutation rates[Wielgoss et al., 2013]. Although such studies inform fundamental questions about the way adaptation and even speciation occur, they are often limited by labor-intensive manual techniques[Wassmann et al., 2010]. Current methods for controlled study of microbial adaptation limit the length of time, the depth of collected data, and the breadth of applied environmental conditions. Small idiosyncrasies in manual techniques can have large effects on outcomes; for example, there are significant variations in induced radiation resistances following similar repeated exposure protocols[Alcántara-Díaz et al., 2004; Goldman and Travisano, 2011]. We describe here a project under development to allow rapid cycling of multiple types of microbial environmental exposure. The system allows continuous autonomous monitoring and data collection of both single species and sampled communities, independently and concurrently providing multiple types of controlled environmental pressure (temperature, radiation, chemical presence or absence, and so on) to a microbial community in dynamic response to the ecosystem's current status. When combined with DNA sequencing and extraction, such a controlled environment can cast light on microbial functional development, population dynamics, inter- and intra-species competition, and microbe-environment interaction. The project's goal is to allow rapid, repeatable iteration of studies of both natural and artificial microbial adaptation. As an example, the same system can be used both to increase the pH of a wet soil aliquot over time while periodically sampling it for genetic activity analysis, or to repeatedly expose a culture of
Charney, Noah D; Kubel, Jacob E; Eiseman, Charles S
2015-01-01
Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224
Charney, Noah D.; Kubel, Jacob E.; Eiseman, Charles S.
2015-01-01
Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224
Using continuous in-situ measurements to adaptively trigger urban storm water samples
NASA Astrophysics Data System (ADS)
Wong, B. P.; Kerkez, B.
2015-12-01
Until cost-effective in-situ sensors are available for biological parameters, nutrients and metals, automated samplers will continue to be the primary source of reliable water quality measurements. Given limited samples bottles, however, autosamplers often obscure insights on nutrient sources and biogeochemical processes which would otherwise be captured using a continuous sampling approach. To that end, we evaluate the efficacy a novel method to measure first-flush nutrient dynamics in flashy, urban watersheds. Our approach reduces the number of samples required to capture water quality dynamics by leveraging an internet-connected sensor node, which is equipped with a suite of continuous in-situ sensors and an automated sampler. To capture both the initial baseflow as well as storm concentrations, a cloud-hosted adaptive algorithm analyzes the high-resolution sensor data along with local weather forecasts to optimize a sampling schedule. The method was tested in a highly developed urban catchment in Ann Arbor, Michigan and collected samples of nitrate, phosphorus, and suspended solids throughout several storm events. Results indicate that the watershed does not exhibit first flush dynamics, a behavior that would have been obscured when using a non-adaptive sampling approach.
The iterative adaptive approach in medical ultrasound imaging.
Jensen, Are Charles; Austeng, Andreas
2014-10-01
Many medical ultrasound imaging systems are based on sweeping the image plane with a set of narrow beams. Usually, the returning echo from each of these beams is used to form one or a few azimuthal image samples. We model, for each radial distance, jointly the full azimuthal scanline. The model consists of the amplitudes of a set of densely placed potential reflectors (or scatterers), cf. sparse signal representation. To fit the model, we apply the iterative adaptive approach (IAA) on data formed by a sequenced time delay and phase shift. The performance of the IAA in combination with our time-delayed and phase-shifted data are studied on both simulated data of scenes consisting of point targets and hollow cyst-like structures, and recorded ultrasound phantom data from a specially adapted commercially available scanner. The results show that the proposed IAA is more capable of resolving point targets and gives better defined and more geometrically correct cyst-like structures in speckle images compared with the conventional delay-and-sum (DAS) approach. Compared with a Capon beamformer, the IAA showed an improved rendering of cyst-like structures and a similar point-target resolvability. Unlike the Capon beamformer, the IAA has no user parameters and seems unaffected by signal cancellation. The disadvantage of the IAA is a high computational load. PMID:25265177
POF-Darts: Geometric adaptive sampling for probability of failure
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; Romero, Vicente J.; Rushdi, Ahmad A.
2016-06-18
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions uncovered with spheres will shrink,more » improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction one, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.« less
Passive and active adaptive management: Approaches and an example
Williams, B.K.
2011-01-01
Adaptive management is a framework for resource conservation that promotes iterative learning-based decision making. Yet there remains considerable confusion about what adaptive management entails, and how to actually make resource decisions adaptively. A key but somewhat ambiguous distinction in adaptive management is between active and passive forms of adaptive decision making. The objective of this paper is to illustrate some approaches to active and passive adaptive management with a simple example involving the drawdown of water impoundments on a wildlife refuge. The approaches are illustrated for the drawdown example, and contrasted in terms of objectives, costs, and potential learning rates. Some key challenges to the actual practice of AM are discussed, and tradeoffs between implementation costs and long-term benefits are highlighted. ?? 2010 Elsevier Ltd.
On adaptive robustness approach to Anti-Jam signal processing
NASA Astrophysics Data System (ADS)
Poberezhskiy, Y. S.; Poberezhskiy, G. Y.
An effective approach to exploiting statistical differences between desired and jamming signals named adaptive robustness is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combining strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.
Structured estimation - Sample size reduction for adaptive pattern classification
NASA Technical Reports Server (NTRS)
Morgera, S.; Cooper, D. B.
1977-01-01
The Gaussian two-category classification problem with known category mean value vectors and identical but unknown category covariance matrices is considered. The weight vector depends on the unknown common covariance matrix, so the procedure is to estimate the covariance matrix in order to obtain an estimate of the optimum weight vector. The measure of performance for the adapted classifier is the output signal-to-interference noise ratio (SIR). A simple approximation for the expected SIR is gained by using the general sample covariance matrix estimator; this performance is both signal and true covariance matrix independent. An approximation is also found for the expected SIR obtained by using a Toeplitz form covariance matrix estimator; this performance is found to be dependent on both the signal and the true covariance matrix.
Concept Based Approach for Adaptive Personalized Course Learning System
ERIC Educational Resources Information Center
Salahli, Mehmet Ali; Özdemir, Muzaffer; Yasar, Cumali
2013-01-01
One of the most important factors for improving the personalization aspects of learning systems is to enable adaptive properties to them. The aim of the adaptive personalized learning system is to offer the most appropriate learning path and learning materials to learners by taking into account their profiles. In this paper, a new approach to…
Responsiveness-to-Intervention: A "Systems" Approach to Instructional Adaptation
ERIC Educational Resources Information Center
Fuchs, Douglas; Fuchs, Lynn S.
2016-01-01
Classroom research on adaptive teaching indicates few teachers modify instruction for at-risk students in a manner that benefits them. Responsiveness-To-Intervention, with its tiers of increasingly intensive instruction, represents an alternative approach to adaptive instruction that may prove more workable in today's schools.
Superresolution restoration of an image sequence: adaptive filtering approach.
Elad, M; Feuer, A
1999-01-01
This paper presents a new method based on adaptive filtering theory for superresolution restoration of continuous image sequences. The proposed methodology suggests least squares (LS) estimators which adapt in time, based on adaptive filters, least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space and time-variant blurring and arbitrary motion, both of them assumed known. The proposed new approach is shown to be of relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented. PMID:18262881
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-08-01
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-03-21
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-03-21
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle thesemore » challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.« less
Approach for reconstructing anisoplanatic adaptive optics images.
Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J
2007-08-20
Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle theta(0). For field angles larger than theta(0), the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement of the mean squared error between the reconstructed image and the object compared to the deconvolution performed using the on-axis PSF. PMID:17712366
Sample Sealing Approaches for Mars Sample Return Caching
NASA Technical Reports Server (NTRS)
Younse, Paulo; deAlwis, Thimal; Backes, Paul; Trebi-Ollennu, Ashitey
2012-01-01
Objective ot this project was to investigate sealing methods for encapsulating samples in 1 cm diameter thin-walled sample tubes applicable to future proposed Mars Sample Return Techniques implemented include a spring energized Teflon sleeve plug, a crimped tube seal, a heat-activated shape memory alloy plug, a shape memory alloy activated cap, a solder-based plug, and a solder-based cap
Innovation and adaptation in a Turkish sample: a preliminary study.
Oner, B
2000-11-01
The aim of this study was to examine the representations of adaptation and innovation among adults in Turkey. Semi-structured interviews were carried out with a sample of 20 Turkish adults (10 men, 10 women) from various occupations. The participants' ages ranged from 21 to 58 years. Results of content analysis showed that the representation of innovation varied with the type of context. Innovation was not preferred within the family and interpersonal relationship contexts, whereas it was relatively more readily welcomed within the contexts of work, science, and technology. This finding may indicate that the concept of innovation that is assimilated in traditional Turkish culture has limits. Contents of the interviews were also analyzed with respect to M. J. Kirton's (1976) subscales of originality, efficiency, and rule-group conformity. The participants favored efficient innovators, whereas they thought that the risk of failure was high in cases of inefficient innovation. The reasons for and indications of the representations of innovativeness among Turkish people are discussed in relation to their social structure and cultural expectations. PMID:11092420
Adapting to the Digital Age: A Narrative Approach
ERIC Educational Resources Information Center
Cousins, Sarah; Bissar, Dounia
2012-01-01
The article adopts a narrative inquiry approach to foreground informal learning and exposes a collection of stories from tutors about how they adapted comfortably to the digital age.We were concerned that despite substantial evidence that bringing about changes in pedagogic practices can be difficult, there is a gap in convincing approaches to…
The AdaptiV Approach to Verification of Adaptive Systems
Rouff, Christopher; Buskens, Richard; Pullum, Laura L; Cui, Xiaohui; Hinchey, Mike
2012-01-01
Adaptive systems are critical for future space and other unmanned and intelligent systems. Verification of these systems is also critical for their use in systems with potential harm to human life or with large financial investments. Due to their nondeterministic nature and extremely large state space, current methods for verification of software systems are not adequate to provide a high level of assurance. The combination of stabilization science, high performance computing simulations, compositional verification and traditional verification techniques, plus operational monitors, provides a complete approach to verification and deployment of adaptive systems that has not been used before. This paper gives an overview of this approach.
Adaptive millimeter-wave synthetic aperture imaging for compressive sampling of sparse scenes.
Mrozack, Alex; Heimbeck, Martin; Marks, Daniel L; Richard, Jonathan; Everitt, Henry O; Brady, David J
2014-06-01
We apply adaptive sensing techniques to the problem of locating sparse metallic scatterers using high-resolution, frequency modulated continuous wave W-band RADAR. Using a single detector, a frequency stepped source, and a lateral translation stage, inverse synthetic aperture RADAR reconstruction techniques are used to search for one or two wire scatterers within a specified range, while an adaptive algorithm determined successive sampling locations. The two-dimensional location of each scatterer is thereby identified with sub-wavelength accuracy in as few as 1/4 the number of lateral steps required for a simple raster scan. The implications of applying this approach to more complex scattering geometries are explored in light of the various assumptions made. PMID:24921545
A Monte Carlo Approach to the Design, Assembly, and Evaluation of Multistage Adaptive Tests
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.
2008-01-01
This article presents an application of Monte Carlo methods for developing and assembling multistage adaptive tests (MSTs). A major advantage of the Monte Carlo assembly over other approaches (e.g., integer programming or enumerative heuristics) is that it provides a uniform sampling from all MSTs (or MST paths) available from a given item pool.…
A new approach to adaptive control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
An approach in which the manipulator inverse is used as a feedforward controller is employed in the adaptive control of manipulators in order to achieve trajectory tracking by the joint angles. The desired trajectory is applied as an input to the feedforward controller, and the controller output is used as the driving torque for the manipulator. An adaptive algorithm obtained from MRAC theory is used to update the controller gains to cope with variations in the manipulator inverse due to changes of the operating point. An adaptive feedback controller and an auxiliary signal enhance closed-loop stability and achieve faster adaptation. Simulation results demonstrate the effectiveness of the proposed control scheme for different reference trajectories, and despite large variations in the payload.
Approach to nonparametric cooperative multiband segmentation with adaptive threshold.
Sebari, Imane; He, Dong-Chen
2009-07-10
We present a new nonparametric cooperative approach to multiband image segmentation. It is based on cooperation between region-growing segmentation and edge segmentation. This approach requires no input data other than the images to be processed. It uses a spectral homogeneity criterion whose threshold is determined automatically. The threshold is adaptive and varies depending on the objects to be segmented. Applying this new approach to very high resolution satellite imagery has yielded satisfactory results. The approach demonstrated its performance on images of varied complexity and was able to detect objects of great spatial and spectral heterogeneity. PMID:19593349
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach
Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.
2014-01-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.
Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A
2013-02-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
Searching for adaptive traits in genetic resources - phenology based approach
NASA Astrophysics Data System (ADS)
Bari, Abdallah
2015-04-01
Searching for adaptive traits in genetic resources - phenology based approach Abdallah Bari, Kenneth Street, Eddy De Pauw, Jalal Eddin Omari, and Chandra M. Biradar International Center for Agricultural Research in the Dry Areas, Rabat Institutes, Rabat, Morocco Phenology is an important plant trait not only for assessing and forecasting food production but also for searching in genebanks for adaptive traits. Among the phenological parameters we have been considering to search for such adaptive and rare traits are the onset (sowing period) and the seasonality (growing period). Currently an application is being developed as part of the focused identification of germplasm strategy (FIGS) approach to use climatic data in order to identify crop growing seasons and characterize them in terms of onset and duration. These approximations of growing period characteristics can then be used to estimate flowering and maturity dates for dryland crops, such as wheat, barley, faba bean, lentils and chickpea, and assess, among others, phenology-related traits such as days to heading [dhe] and grain filling period [gfp]. The approach followed here is based on first calculating long term average daily temperatures by fitting a curve to the monthly data over days from beginning of the year. Prior to the identification of these phenological stages the onset is extracted first from onset integer raster GIS layers developed based on a model of the growing period that considers both moisture and temperature limitations. The paper presents some examples of real applications of the approach to search for rare and adaptive traits.
Sampling of Complex Networks: A Datamining Approach
NASA Astrophysics Data System (ADS)
Loecher, Markus; Dohrmann, Jakob; Bauer, Gernot
2007-03-01
Efficient and accurate sampling of big complex networks is still an unsolved problem. As the degree distribution is one of the most commonly used attributes to characterize a network, there have been many attempts in recent papers to derive the original degree distribution from the data obtained during a traceroute- like sampling process. This talk describes a strategy for predicting the original degree of a node using the data obtained from a network by traceroute-like sampling making use of datamining techniques. Only local quantities (the sampled degree k, the redundancy of node detection r, the time of the first discovery of a node t and the distance to the sampling source d) are used as input for the datamining models. Global properties like the betweenness centrality are ignored. These local quantities are examined theoretically and in simulations to increase their value for the predictions. The accuracy of the models is discussed as a function of the number of sources used in the sampling process and the underlying topology of the network. The purpose of this work is to introduce the techniques of the relatively young field of datamining to the discussion on network sampling.
Lotterhos, Katie E; Whitlock, Michael C
2015-03-01
Although genome scans have become a popular approach towards understanding the genetic basis of local adaptation, the field still does not have a firm grasp on how sampling design and demographic history affect the performance of genome scans on complex landscapes. To explore these issues, we compared 20 different sampling designs in equilibrium (i.e. island model and isolation by distance) and nonequilibrium (i.e. range expansion from one or two refugia) demographic histories in spatially heterogeneous environments. We simulated spatially complex landscapes, which allowed us to exploit local maxima and minima in the environment in 'pair' and 'transect' sampling strategies. We compared F(ST) outlier and genetic-environment association (GEA) methods for each of two approaches that control for population structure: with a covariance matrix or with latent factors. We show that while the relative power of two methods in the same category (F(ST) or GEA) depended largely on the number of individuals sampled, overall GEA tests had higher power in the island model and F(ST) had higher power under isolation by distance. In the refugia models, however, these methods varied in their power to detect local adaptation at weakly selected loci. At weakly selected loci, paired sampling designs had equal or higher power than transect or random designs to detect local adaptation. Our results can inform sampling designs for studies of local adaptation and have important implications for the interpretation of genome scans based on landscape data. PMID:25648189
An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. ?? 2008 The Society of Population Ecology and Springer.
An information theoretic approach of designing sparse kernel adaptive filters.
Liu, Weifeng; Park, Il; Principe, José C
2009-12-01
This paper discusses an information theoretic approach of designing sparse kernel adaptive filters. To determine useful data to be learned and remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains which is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme, which can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short term chaotic time-series prediction, and long term time-series forecasting examples are presented. PMID:19923047
Variable neural adaptive robust control: a switched system approach.
Lian, Jianming; Hu, Jianghai; Żak, Stanislaw H
2015-05-01
Variable neural adaptive robust control strategies are proposed for the output tracking control of a class of multiinput multioutput uncertain systems. The controllers incorporate a novel variable-structure radial basis function (RBF) network as the self-organizing approximator for unknown system dynamics. It can determine the network structure online dynamically by adding or removing RBFs according to the tracking performance. The structure variation is systematically considered in the stability analysis of the closed-loop system using a switched system approach with the piecewise quadratic Lyapunov function. The performance of the proposed variable neural adaptive robust controllers is illustrated with simulations. PMID:25881366
Novel Approaches to Adaptive Angular Approximations in Computational Transport
Marvin L. Adams; Igor Carron; Paul Nelson
2006-06-04
The particle-transport equation is notoriously difficult to discretize accurately, largely because the solution can be discontinuous in every variable. At any given spatial position and energy E, for example, the transport solution can be discontinuous at an arbitrary number of arbitrary locations in the direction domain. Even if the solution is continuous it is often devoid of smoothness. This makes the direction variable extremely difficult to discretize accurately. We have attacked this problem with adaptive discretizations in the angle variables, using two distinctly different approaches. The first approach used wavelet function expansions directly and exploited their ability to capture sharp local variations. The second used discrete ordinates with a spatially varying quadrature set that adapts to the local solution. The first approach is very different from that in today’s transport codes, while the second could conceivably be implemented in such codes. Both approaches succeed in reducing angular discretization error to any desired level. The work described and results presented in this report add significantly to the understanding of angular discretization in transport problems and demonstrate that it is possible to solve this important long-standing problem in deterministic transport. Our results show that our adaptive discrete-ordinates (ADO) approach successfully: 1) Reduces angular discretization error to user-selected “tolerance” levels in a variety of difficult test problems; 2) Achieves a given error with significantly fewer unknowns than non-adaptive discrete ordinates methods; 3) Can be implemented within standard discrete-ordinates solution techniques, and thus could generate a significant impact on the field in a relatively short time. Our results show that our adaptive wavelet approach: 1) Successfully reduces the angular discretization error to arbitrarily small levels in a variety of difficult test problems, even when using the
Application of adaptive cluster sampling to low-density populations of freshwater mussels
Smith, D.R.; Villella, R.F.; Lemarie, D.P.
2003-01-01
Freshwater mussels appear to be promising candidates for adaptive cluster sampling because they are benthic macroinvertebrates that cluster spatially and are frequently found at low densities. We applied adaptive cluster sampling to estimate density of freshwater mussels at 24 sites along the Cacapon River, WV, where a preliminary timed search indicated that mussels were present at low density. Adaptive cluster sampling increased yield of individual mussels and detection of uncommon species; however, it did not improve precision of density estimates. Because finding uncommon species, collecting individuals of those species, and estimating their densities are important conservation activities, additional research is warranted on application of adaptive cluster sampling to freshwater mussels. However, at this time we do not recommend routine application of adaptive cluster sampling to freshwater mussel populations. The ultimate, and currently unanswered, question is how to tell when adaptive cluster sampling should be used, i.e., when is a population sufficiently rare and clustered for adaptive cluster sampling to be efficient and practical? A cost-effective procedure needs to be developed to identify biological populations for which adaptive cluster sampling is appropriate.
Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids
Stoyanov, Miroslav K
2013-09-01
We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grids points and functions that are induced from a one dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy selective refinement techniques. We consider the more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov Chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
Camera calibration approach based on adaptive active target
NASA Astrophysics Data System (ADS)
Zhang, Yalin; Zhou, Fuqiang; Deng, Peng
2011-12-01
Aiming at calibrating camera on site, where the lighting condition is hardly controlled and the quality of target images would be declined when the angle between camera and target changes, an adaptive active target is designed and the camera calibration approach based on the target is proposed. The active adaptive target in which LEDs are embedded is flat, providing active feature point. Therefore the brightness of the feature point can be modified via adjusting the electricity, judging from the threshold of image feature criteria. In order to extract features of the image accurately, the concept of subpixel-precise thresholding is also proposed. It converts the discrete representation of the digital image to continuous function by bilinear interpolation, and the sub-pixel contours are acquired by the intersection of the continuous function and the appropriate selection of threshold. According to analysis of the relationship between the features of the image and the brightness of the target, the area ratio of convex hulls and the grey value variance are adopted as the criteria. Result of experiments revealed that the adaptive active target accommodates well to the changing of the illumination in the environment, the camera calibration approach based on adaptive active target can obtain high level of accuracy and fit perfectly for image targeting in various industrial sites.
An Approach to V&V of Embedded Adaptive Systems
NASA Technical Reports Server (NTRS)
Liu, Yan; Yerramalla, Sampath; Fuller, Edgar; Cukic, Bojan; Gururajan, Srikaruth
2004-01-01
Rigorous Verification and Validation (V&V) techniques are essential for high assurance systems. Lately, the performance of some of these systems is enhanced by embedded adaptive components in order to cope with environmental changes. Although the ability of adapting is appealing, it actually poses a problem in terms of V&V. Since uncertainties induced by environmental changes have a significant impact on system behavior, the applicability of conventional V&V techniques is limited. In safety-critical applications such as flight control system, the mechanisms of change must be observed, diagnosed, accommodated and well understood prior to deployment. In this paper, we propose a non-conventional V&V approach suitable for online adaptive systems. We apply our approach to an intelligent flight control system that employs a particular type of Neural Networks (NN) as the adaptive learning paradigm. Presented methodology consists of a novelty detection technique and online stability monitoring tools. The novelty detection technique is based on Support Vector Data Description that detects novel (abnormal) data patterns. The Online Stability Monitoring tools based on Lyapunov's Stability Theory detect unstable learning behavior in neural networks. Cases studies based on a high fidelity simulator of NASA's Intelligent Flight Control System demonstrate a successful application of the presented V&V methodology. ,
NASA Astrophysics Data System (ADS)
Zhang, Xiaofeng; Badea, Cristian T.; Hood, Greg; Wetzel, Arthur W.; Stiles, Joel R.; Johnson, G. Allan
2010-02-01
Image reconstruction is one of the main challenges for fluorescence tomography. For in vivo experiments on small animals, in particular, the inhomogeneous optical properties and irregular surface of the animal make free-space image reconstruction challenging because of the difficulties in accurately modeling the forward problem and the finite dynamic range of the photodetector. These two factors are fundamentally limited by the currently available forward models and photonic technologies. Nonetheless, both limitations can be significantly eased using a signal processing approach. We have recently constructed a free-space panoramic fluorescence diffuse optical tomography system to take advantage of co-registered microCT data acquired from the same animal. In this article, we present a data processing strategy that adaptively selects the optical sampling points in the raw 2-D fluorescent CCD images. Specifically, the general sampling area and sampling density are initially specified to create a set of potential sampling points sufficient to cover the region of interest. Based on 3-D anatomical information from the microCT and the fluorescent CCD images, data points are excluded from the set when they are located in an area where either the forward model is known to be problematic (e.g., large wrinkles on the skin) or where the signal is unreliable (e.g., saturated or low signal-to-noise ratio). Parallel Monte Carlo software was implemented to compute the sensitivity function for image reconstruction. Animal experiments were conducted on a mouse cadaver with an artificial fluorescent inclusion. Compared to our previous results using a finite element method, the newly developed parallel Monte Carlo software and the adaptive sampling strategy produced favorable reconstruction results.
The adaptive, cut-cell Cartesian approach (warts and all)
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.
1995-01-01
Solution-adaptive methods based on cutting bodies out of Cartesian grids are gaining popularity now that the ways of circumventing the accuracy problems associated with small cut cells have been developed. Researchers are applying Cartesian-based schemes to a broad class of problems now, and, although there is still development work to be done, it is becoming clearer which problems are best suited to the approach (and which are not). The purpose of this paper is to give a candid assessment, based on applying Cartesian schemes to a variety of problems, of the strengths and weaknesses of the approach as it is currently implemented.
Allaby, Robin G.; Gutaker, Rafal; Clarke, Andrew C.; Pearson, Neil; Ware, Roselyn; Palmer, Sarah A.; Kitchen, James L.; Smith, Oliver
2015-01-01
Our understanding of the evolution of domestication has changed radically in the past 10 years, from a relatively simplistic rapid origin scenario to a protracted complex process in which plants adapted to the human environment. The adaptation of plants continued as the human environment changed with the expansion of agriculture from its centres of origin. Using archaeogenomics and computational models, we can observe genome evolution directly and understand how plants adapted to the human environment and the regional conditions to which agriculture expanded. We have applied various archaeogenomics approaches as exemplars to study local adaptation of barley to drought resistance at Qasr Ibrim, Egypt. We show the utility of DNA capture, ancient RNA, methylation patterns and DNA from charred remains of archaeobotanical samples from low latitudes where preservation conditions restrict ancient DNA research to within a Holocene timescale. The genomic level of analyses that is now possible, and the complexity of the evolutionary process of local adaptation means that plant studies are set to move to the genome level, and account for the interaction of genes under selection in systems-level approaches. This way we can understand how plants adapted during the expansion of agriculture across many latitudes with rapidity. PMID:25487329
Allaby, Robin G; Gutaker, Rafal; Clarke, Andrew C; Pearson, Neil; Ware, Roselyn; Palmer, Sarah A; Kitchen, James L; Smith, Oliver
2015-01-19
Our understanding of the evolution of domestication has changed radically in the past 10 years, from a relatively simplistic rapid origin scenario to a protracted complex process in which plants adapted to the human environment. The adaptation of plants continued as the human environment changed with the expansion of agriculture from its centres of origin. Using archaeogenomics and computational models, we can observe genome evolution directly and understand how plants adapted to the human environment and the regional conditions to which agriculture expanded. We have applied various archaeogenomics approaches as exemplars to study local adaptation of barley to drought resistance at Qasr Ibrim, Egypt. We show the utility of DNA capture, ancient RNA, methylation patterns and DNA from charred remains of archaeobotanical samples from low latitudes where preservation conditions restrict ancient DNA research to within a Holocene timescale. The genomic level of analyses that is now possible, and the complexity of the evolutionary process of local adaptation means that plant studies are set to move to the genome level, and account for the interaction of genes under selection in systems-level approaches. This way we can understand how plants adapted during the expansion of agriculture across many latitudes with rapidity. PMID:25487329
Lange, Oliver F; Baker, David
2012-01-01
Recent work has shown that NMR structures can be determined by integrating sparse NMR data with structure prediction methods such as Rosetta. The experimental data serve to guide the search for the lowest energy state towards the deep minimum at the native state which is frequently missed in Rosetta de novo structure calculations. However, as the protein size increases, sampling again becomes limiting; for example, the standard Rosetta protocol involving Monte Carlo fragment insertion starting from an extended chain fails to converge for proteins over 150 amino acids even with guidance from chemical shifts (CS-Rosetta) and other NMR data. The primary limitation of this protocol—that every folding trajectory is completely independent of every other—was recently overcome with the development of a new approach involving resolution-adapted structural recombination (RASREC). Here we describe the RASREC approach in detail and compare it to standard CS-Rosetta. We show that the improved sampling of RASREC is essential in obtaining accurate structures over a benchmark set of 11 proteins in the 15-25 kDa size range using chemical shifts, backbone RDCs and HN-HN NOE data; in a number of cases the improved sampling methodology makes a larger contribution than incorporation of additional experimental data. Experimental data are invaluable for guiding sampling to the vicinity of the global energy minimum, but for larger proteins, the standard Rosetta fold-from-extended-chain protocol does not converge on the native minimum even with experimental data and the more powerful RASREC approach is necessary to converge to accurate solutions. PMID:22423358
Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.
2008-01-01
Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of
SAR imaging via iterative adaptive approach and sparse Bayesian learning
NASA Astrophysics Data System (ADS)
Xue, Ming; Santiago, Enrique; Sedehi, Matteo; Tan, Xing; Li, Jian
2009-05-01
We consider sidelobe reduction and resolution enhancement in synthetic aperture radar (SAR) imaging via an iterative adaptive approach (IAA) and a sparse Bayesian learning (SBL) method. The nonparametric weighted least squares based IAA algorithm is a robust and user parameter-free adaptive approach originally proposed for array processing. We show that it can be used to form enhanced SAR images as well. SBL has been used as a sparse signal recovery algorithm for compressed sensing. It has been shown in the literature that SBL is easy to use and can recover sparse signals more accurately than the l 1 based optimization approaches, which require delicate choice of the user parameter. We consider using a modified expectation maximization (EM) based SBL algorithm, referred to as SBL-1, which is based on a three-stage hierarchical Bayesian model. SBL-1 is not only more accurate than benchmark SBL algorithms, but also converges faster. SBL-1 is used to further enhance the resolution of the SAR images formed by IAA. Both IAA and SBL-1 are shown to be effective, requiring only a limited number of iterations, and have no need for polar-to-Cartesian interpolation of the SAR collected data. This paper characterizes the achievable performance of these two approaches by processing the complex backscatter data from both a sparse case study and a backhoe vehicle in free space with different aperture sizes.
Adaptive optics for deeper imaging of biological samples.
Girkin, John M; Poland, Simon; Wright, Amanda J
2009-02-01
Optical microscopy has been a cornerstone of life science investigations since its first practical application around 400 years ago with the goal being subcellular resolution, three-dimensional images, at depth, in living samples. Nonlinear microscopy brought this dream a step closer, but as one images more deeply the material through which you image can greatly distort the view. By using optical devices, originally developed for astronomy, whose optical properties can be changed in real time, active compensation for sample-induced aberrations is possible. Submicron resolution images are now routinely recorded from depths over 1mm into tissue. Such active optical elements can also be used to keep conventional microscopes, both confocal and widefield, in optimal alignment. PMID:19272766
Variable Neural Adaptive Robust Control: A Switched System Approach
Lian, Jianming; Hu, Jianghai; Zak, Stanislaw H.
2015-05-01
Variable neural adaptive robust control strategies are proposed for the output tracking control of a class of multi-input multi-output uncertain systems. The controllers incorporate a variable-structure radial basis function (RBF) network as the self-organizing approximator for unknown system dynamics. The variable-structure RBF network solves the problem of structure determination associated with fixed-structure RBF networks. It can determine the network structure on-line dynamically by adding or removing radial basis functions according to the tracking performance. The structure variation is taken into account in the stability analysis of the closed-loop system using a switched system approach with the aid of the piecewise quadratic Lyapunov function. The performance of the proposed variable neural adaptive robust controllers is illustrated with simulations.
Adaptive virulence evolution: the good old fitness-based approach.
Alizon, Samuel; Michalakis, Yannis
2015-05-01
Infectious diseases could be expected to evolve towards complete avirulence to their hosts if given enough time. However, this is not the case. Often, virulence is maintained because it is linked to adaptive advantages to the parasite, a situation that is often associated with the hypothesis known as the transmission-virulence trade-off hypothesis. Here, we argue that this hypothesis has three limitations, which are related to how virulence is defined, the possibility of multiple trade-offs, and the difficulty of testing the hypothesis empirically. By adopting a fitness-based approach, where the relation between virulence and the fitness of the parasite throughout its life cycle is directly assessed, it is possible to address these limitations and to determine directly whether virulence is adaptive. PMID:25837917
Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach
NASA Technical Reports Server (NTRS)
Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different size sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling size unit.
Sample Size Reassessment and Hypothesis Testing in Adaptive Survival Trials.
Magirr, Dominic; Jaki, Thomas; Koenig, Franz; Posch, Martin
2016-01-01
Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies. PMID:26863139
Sample Size Reassessment and Hypothesis Testing in Adaptive Survival Trials
Magirr, Dominic; Jaki, Thomas; Koenig, Franz; Posch, Martin
2016-01-01
Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies. PMID:26863139
Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Thompson, David R.; Hsiang, Kian
2013-01-01
This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently, the objective function from each potential sense. These candidate sensings were then crosschecked against operation costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.
Adaptive Wing Camber Optimization: A Periodic Perturbation Approach
NASA Technical Reports Server (NTRS)
Espana, Martin; Gilyard, Glenn
1994-01-01
Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.
A ``Limited First Sample'' Approach to Mars Sample Return — Lessons from the Apollo Program
NASA Astrophysics Data System (ADS)
Eppler, D. B.; Draper, D.; Gruener, J.
2012-06-01
Complex, multi-opportunity Mars sample return approaches have failed to be selected as a new start twice since 1985. We advocate adopting a simpler strategy of "grab-and-go" for the initial sample return, similar to the approach taken on Apollo 11.
Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology
NASA Technical Reports Server (NTRS)
Mandic, Milan; Acikmese, Behcet; Blackmore, Lars
2011-01-01
The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then to ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused to address comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal
Bosson, Maël; Grudinin, Sergei; Redon, Stephane
2013-03-01
We present a novel Block-Adaptive Quantum Mechanics (BAQM) approach to interactive quantum chemistry. Although quantum chemistry models are known to be computationally demanding, we achieve interactive rates by focusing computational resources on the most active parts of the system. BAQM is based on a divide-and-conquer technique and constrains some nucleus positions and some electronic degrees of freedom on the fly to simplify the simulation. As a result, each time step may be performed significantly faster, which in turn may accelerate attraction to the neighboring local minima. By applying our approach to the nonself-consistent Atom Superposition and Electron Delocalization Molecular Orbital theory, we demonstrate interactive rates and efficient virtual prototyping for systems containing more than a thousand of atoms on a standard desktop computer. PMID:23108532
Hwang, Wei-Chin
2010-01-01
How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed. The majority of these frameworks take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves 5-phases that target developing, testing, and reformulating therapy modifications. These phases include: (a) generating knowledge and collaborating with stakeholders (b) integrating generated information with theory and empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising the culturally adapted intervention, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but can also be readily applied to modify therapy for other ethnic groups. PMID:20625458
Kim, Hyejung; Van Hoof, Chris; Yazicioglu, Refet Firat
2011-01-01
This paper describes a mixed-signal ECG processing platform with an 12-bit ADC architecture that can adapt its sampling rate according to the input signals rate of change. This enables the sampling of ECG signals with significantly reduced data rate without loss of information. The presented adaptive sampling scheme reduces the ADC power consumption, enables the processing of ECG signals with lower power consumption, and reduces the power consumption of the radio while streaming the ECG signals. The test results show that running a CWT-based R peak detection algorithm using the adaptively sampled ECG signals consumes only 45.6 μW and it leads to 36% less overall system power consumption. PMID:22254775
Pi sampling: a methodical and flexible approach to initial macromolecular crystallization screening
Gorrec, Fabrice Palmer, Colin M.; Lebon, Guillaume; Warne, Tony
2011-05-01
Pi sampling, derived from the incomplete factorial approach, is an effort to maximize the diversity of macromolecular crystallization conditions and to facilitate the preparation of 96-condition initial screens. The Pi sampling method is derived from the incomplete factorial approach to macromolecular crystallization screen design. The resulting ‘Pi screens’ have a modular distribution of a given set of up to 36 stock solutions. Maximally diverse conditions can be produced by taking into account the properties of the chemicals used in the formulation and the concentrations of the corresponding solutions. The Pi sampling method has been implemented in a web-based application that generates screen formulations and recipes. It is particularly adapted to screens consisting of 96 different conditions. The flexibility and efficiency of Pi sampling is demonstrated by the crystallization of soluble proteins and of an integral membrane-protein sample.
A fast approach for accurate content-adaptive mesh generation.
Yang, Yongyi; Wernick, Miles N; Brankov, Jovan G
2003-01-01
Mesh modeling is an important problem with many applications in image processing. A key issue in mesh modeling is how to generate a mesh structure that well represents an image by adapting to its content. We propose a new approach to mesh generation, which is based on a theoretical result derived on the error bound of a mesh representation. In the proposed method, the classical Floyd-Steinberg error-diffusion algorithm is employed to place mesh nodes in the image domain so that their spatial density varies according to the local image content. Delaunay triangulation is next applied to connect the mesh nodes. The result of this approach is that fine mesh elements are placed automatically in regions of the image containing high-frequency features while coarse mesh elements are used to represent smooth areas. The proposed algorithm is noniterative, fast, and easy to implement. Numerical results demonstrate that, at very low computational cost, the proposed approach can produce mesh representations that are more accurate than those produced by several existing methods. Moreover, it is demonstrated that the proposed algorithm performs well with images of various kinds, even in the presence of noise. PMID:18237961
NASA Astrophysics Data System (ADS)
Bo, Yizhou; Shifa, Naima
2013-09-01
An estimator for finding the abundance of a rare, clustered and mobile population has been introduced. This model is based on adaptive cluster sampling (ACS) to identify the location of the population and negative binomial distribution to estimate the total in each site. To identify the location of the population we consider both sampling with replacement (WR) and sampling without replacement (WOR). Some mathematical properties of the model are also developed.
Adaptive Neuro-fuzzy approach in friction identification
NASA Astrophysics Data System (ADS)
Zaiyad Muda @ Ismail, Muhammad
2016-05-01
Friction is known to affect the performance of motion control system, especially in terms of its accuracy. Therefore, a number of techniques or methods have been explored and implemented to alleviate the effects of friction. In this project, the Artificial Intelligent (AI) approach is used to model the friction which will be then used to compensate the friction. The Adaptive Neuro-Fuzzy Inference System (ANFIS) is chosen among several other AI methods because of its reliability and capabilities of solving complex computation. ANFIS is a hybrid AI-paradigm that combines the best features of neural network and fuzzy logic. This AI method (ANFIS) is effective for nonlinear system identification and compensation and thus, being used in this project.
NASA Astrophysics Data System (ADS)
Huda, J.; Kauneckis, D. L.
2013-12-01
Climate change adaptation represents a number of unique policy-making challenges. Foremost among these is dealing with the range of future climate impacts to a wide scope of inter-related natural systems, their interaction with social and economic systems, and uncertainty resulting from the variety of downscaled climate model scenarios and climate science projections. These cascades of uncertainty have led to a number of new approaches as well as a reexamination of traditional methods for evaluating risk and uncertainty in policy-making. Policy makers are required to make decisions and formulate policy irrespective of the level of uncertainty involved and while a debate continues regarding the level of scientific certainty required in order to make a decision, incremental change in the climate policy continues at multiple governance levels. This project conducts a comparative analysis of the range of methodological approaches that are evolving to address uncertainty in climate change policy. It defines 'methodologies' to include a variety of quantitative and qualitative approaches involving both top-down and bottom-up policy processes that attempt to enable policymakers to synthesize climate information into the policy process. The analysis examines methodological approaches to decision-making in climate policy based on criteria such as sources of policy choice information, sectors to which the methodology has been applied, sources from which climate projections were derived, quantitative and qualitative methods used to deal with uncertainty, and the benefits and limitations of each. A typology is developed to better categorize the variety of approaches and methods, examine the scope of policy activities they are best suited for, and highlight areas for future research and development.
DiffeRential Evolution Adaptive Metropolis with Sampling From Past States
NASA Astrophysics Data System (ADS)
Vrugt, J. A.; Laloy, E.; Ter Braak, C.
2010-12-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov Chain. In a previous paper te{vrugt_1} we have presented the {D}iffe{R}ential {E}volution {A}daptive {M}etropolis (DREAM) MCMC scheme that automatically tunes the scale and orientation of the proposal distribution during evolution to the posterior target distribution. In the same paper, detailed balance and ergodicity of DREAM have been proved, and various examples involving nonlinearity, high-dimensionality, and multimodality have shown that DREAM is generally superior to other adaptive MCMC sampling approaches. Standard DREAM requires at least N = d chains to be run in parallel, where d is the dimensionality of the posterior. Unfortunately, running many parallel chains is a potential source of inefficiency, as each individual chain must travel to high density region of the posterior. The lower the number of parallel chains required, the greater the practical applicability of DREAM for computationally demanding problems. This paper extends DREAM with a snooker updater and shows by simulation and real examples that DREAM can work for d up to 50-100 with far fewer parallel chains (e.g. N = 3) by generating jumps using differences of pairs of past states
Bouman, A. C.; ten Cate-Hoek, A. J.; Ramaekers, B. L. T.; Joore, M. A.
2015-01-01
Background Non-inferiority trials are performed when the main therapeutic effect of the new therapy is expected to be not unacceptably worse than that of the standard therapy, and the new therapy is expected to have advantages over the standard therapy in costs or other (health) consequences. These advantages however are not included in the classic frequentist approach of sample size calculation for non-inferiority trials. In contrast, the decision theory approach of sample size calculation does include these factors. The objective of this study is to compare the conceptual and practical aspects of the frequentist approach and decision theory approach of sample size calculation for non-inferiority trials, thereby demonstrating that the decision theory approach is more appropriate for sample size calculation of non-inferiority trials. Methods The frequentist approach and decision theory approach of sample size calculation for non-inferiority trials are compared and applied to a case of a non-inferiority trial on individually tailored duration of elastic compression stocking therapy compared to two years elastic compression stocking therapy for the prevention of post thrombotic syndrome after deep vein thrombosis. Results The two approaches differ substantially in conceptual background, analytical approach, and input requirements. The sample size calculated according to the frequentist approach yielded 788 patients, using a power of 80% and a one-sided significance level of 5%. The decision theory approach indicated that the optimal sample size was 500 patients, with a net value of €92 million. Conclusions This study demonstrates and explains the differences between the classic frequentist approach and the decision theory approach of sample size calculation for non-inferiority trials. We argue that the decision theory approach of sample size estimation is most suitable for sample size calculation of non-inferiority trials. PMID:26076354
An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Maldague, Xavier
2016-01-01
A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, compressed sensing (CS) technique, total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion computation of infrared and visible images. Compared with wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale, multi-direction, and translation invariance. As is known, a fuzzy set is characterized by its membership function (MF), while the commonly known Gaussian fuzzy membership degree can be introduced to establish an adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information in a certain sampling rate, and the sparse signal can be recovered by solving a convex problem employing gradient descent based iterative algorithm(s). In the proposed fusion process, the pre-enhanced infrared image and the visible image are decomposed into low-frequency subbands and high-frequency subbands, respectively, via the NSCT method as a first step. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by employing the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation methods, such as the standard deviation, Shannon entropy, root-mean-square error, mutual information and edge-based similarity index.
Schnöller, Johannes; Aschenbrenner, Philipp; Hahn, Manuel; Fellner, Johann; Rechberger, Helmut
2014-11-01
The biogenic fraction of a simple solid recovered fuel (SRF) mixture (80 wt% printer paper/20 wt% high density polyethylene) is analyzed with the in-house developed adapted balance method (aBM). This fairly new approach is a combination of combustion elemental analysis (CHNS) and a data reconciliation algorithm based on successive linearisation for evaluation of the analysis results. This method shows a great potential as an alternative way to determine the biomass content in SRF. However, the employed analytical technique (CHNS elemental analysis) restricts the probed sample mass to low amounts in the range of a few hundred milligrams. This requires sample comminution to small grain sizes (<200 μm) to generate representative SRF specimen. This is not easily accomplished for certain material mixtures (e.g. SRF with rubber content) by conventional means of sample size reduction. This paper presents a proof of principle investigation of the sample preparation and analysis of an SRF model mixture with the use of cryogenic impact milling (final sample comminution) and the adapted balance method (determination of biomass content). The so derived sample preparation methodology (cutting mills and cryogenic impact milling) shows a better performance in accuracy and precision for the determination of the biomass content than one solely based on cutting mills. The results for the determination of the biogenic fraction are within 1-5% of the data obtained by the reference methods, selective dissolution method (SDM) and (14)C-method ((14)C-M). PMID:25060675
Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach
ERIC Educational Resources Information Center
Wang, Yuling
2010-01-01
Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.
Schnöller, Johannes Aschenbrenner, Philipp; Hahn, Manuel; Fellner, Johann; Rechberger, Helmut
2014-11-15
Highlights: • An alternative sample comminution procedure for SRF is tested. • Proof of principle is shown on a SRF model mixture. • The biogenic content of the SRF is analyzed with the adapted balance method. • The novel method combines combustion analysis and a data reconciliation algorithm. • Factors for the variance of the analysis results are statistically quantified. - Abstract: The biogenic fraction of a simple solid recovered fuel (SRF) mixture (80 wt% printer paper/20 wt% high density polyethylene) is analyzed with the in-house developed adapted balance method (aBM). This fairly new approach is a combination of combustion elemental analysis (CHNS) and a data reconciliation algorithm based on successive linearisation for evaluation of the analysis results. This method shows a great potential as an alternative way to determine the biomass content in SRF. However, the employed analytical technique (CHNS elemental analysis) restricts the probed sample mass to low amounts in the range of a few hundred milligrams. This requires sample comminution to small grain sizes (<200 μm) to generate representative SRF specimen. This is not easily accomplished for certain material mixtures (e.g. SRF with rubber content) by conventional means of sample size reduction. This paper presents a proof of principle investigation of the sample preparation and analysis of an SRF model mixture with the use of cryogenic impact milling (final sample comminution) and the adapted balance method (determination of biomass content). The so derived sample preparation methodology (cutting mills and cryogenic impact milling) shows a better performance in accuracy and precision for the determination of the biomass content than one solely based on cutting mills. The results for the determination of the biogenic fraction are within 1–5% of the data obtained by the reference methods, selective dissolution method (SDM) and {sup 14}C-method ({sup 14}C-M)
A sampling approach for protein backbone fragment conformations.
Yu, J Y; Zhang, W
2013-01-01
In protein structure prediction, backbone fragment bias information can narrow down the conformational space of the whole polypeptide chain significantly. Unlike existing methods that use fragments as building blocks, the paper presents a probabilistic sampling approach for protein backbone torsion angles by modelling angular correlation of (phi, psi) with a directional statistics distribution. Given a protein sequence and secondary structure information, this method samples backbone fragments conformations by using a backtrack sampling algorithm for the hidden Markov model with multiple inputs and a single output. The proposed approach is applied to a fragment library, and some well-known structural motifs are sampled very well on the optimal path. Computational results show that the method can help to obtain native-like backbone fragments conformations. PMID:23777175
ERIC Educational Resources Information Center
Rossier, Jerome; Zecca, Gregory; Stauffer, Sarah D.; Maggiori, Christian; Dauwalder, Jean-Pierre
2012-01-01
The aim of this study was to analyze the psychometric properties of the Career Adapt-Abilities Scale (CAAS) in a French-speaking Swiss sample and its relationship with personality dimensions and work engagement. The heterogeneous sample of 391 participants (M[subscript age] = 39.59, SD = 12.30) completed the CAAS-International and a short version…
Adaptation of the Athlete Burnout Questionnaire in a Spanish sample of athletes.
Arce, Constantino; De Francisco, Cristina; Andrade, Elena; Seoane, Gloria; Raedeke, Thomas
2012-11-01
In this paper, we offer a general version of the Spanish adaptation of Athlete Burnout Questionnaire (ABQ) designed to measure the syndrome of burnout in athletes of different sports. In previous works, the Spanish version of ABQ was administered to different samples of soccer players. Its psychometric properties were appropriate and similar to the findings in original ABQ. The purpose of this study was to examine the generalization to others sports of the Spanish adaptation. We started from this adaptation, but we included three alternative statements (one for each dimension of the questionnaire), and we replaced the word "soccer" with the word "sport". An 18-item version was administered to a sample of 487 athletes aged 13 and 29 years old. Confirmatory factor analyses replicated the factor structure, but two items modification were necessary in order to obtain a good overall fit of the model. The internal consistency and test-retest reliability of the questionnaire were satisfactory. PMID:23156955
McCay, Paul; Fuszard, Matthew; Botting, Catherine H.; Abram, Florence; O'Flaherty, Vincent
2013-01-01
Low-temperature anaerobic digestion (LTAD) technology is underpinned by a diverse microbial community. The methanogenic archaea represent a key functional group in these consortia, undertaking CO2 reduction as well as acetate and methylated C1 metabolism with subsequent biogas (40 to 60% CH4 and 30 to 50% CO2) formation. However, the cold adaptation strategies, which allow methanogens to function efficiently in LTAD, remain unclear. Here, a pure-culture proteomic approach was employed to study the functional characteristics of Methanosarcina barkeri (optimum growth temperature, 37°C), which has been detected in LTAD bioreactors. Two experimental approaches were undertaken. The first approach aimed to characterize a low-temperature shock response (LTSR) of M. barkeri DSMZ 800T grown at 37°C with a temperature drop to 15°C, while the second experimental approach aimed to examine the low-temperature adaptation strategies (LTAS) of the same strain when it was grown at 15°C. The latter experiment employed cell viability and growth measurements (optical density at 600 nm [OD600]), which directly compared M. barkeri cells grown at 15°C with those grown at 37°C. During the LTSR experiment, a total of 127 proteins were detected in 37°C and 15°C samples, with 20 proteins differentially expressed with respect to temperature, while in the LTAS experiment 39% of proteins identified were differentially expressed between phases of growth. Functional categories included methanogenesis, cellular information processing, and chaperones. By applying a polyphasic approach (proteomics and growth studies), insights into the low-temperature adaptation capacity of this mesophilically characterized methanogen were obtained which suggest that the metabolically diverse Methanosarcinaceae could be functionally relevant for LTAD systems. PMID:23645201
Region and edge-adaptive sampling and boundary completion for segmentation
Dillard, Scott E; Prasad, Lakshman; Grazzini, Jacopo A
2010-01-01
Edge detection produces a set of points that are likely to lie on discontinuities between objects within an image. We consider faces of the Gabriel graph of these points, a sub-graph of the Delaunay triangulation. Features are extracted by merging these faces using size, shape and color cues. We measure regional properties of faces using a novel shape-dependant sampling method that overcomes undesirable sampling bias of the Delaunay triangles. Instead, sampling is biased so as to smooth regional statistics within the detected object boundaries, and this smoothing adapts to local geometric features of the shape such as curvature, thickness and straightness.
Mehta, Cyrus; Liu, Lingyun
2016-02-10
Over the past 25 years, adaptive designs have gradually gained acceptance and are being used with increasing frequency in confirmatory clinical trials. Recent surveys of submissions to the regulatory agencies reveal that the most popular type of adaptation is unblinded sample size re-estimation. Concerns have nevertheless been raised that this type of adaptation is inefficient.We intend to show in our discussion that such concerns are greatly exaggerated in any practical setting and that the advantages of adaptive sample size re-estimation usually outweigh any minor loss of efficiency. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26757953
A novel approach for SEMG signal classification with adaptive local binary patterns.
Ertuğrul, Ömer Faruk; Kaya, Yılmaz; Tekin, Ramazan
2016-07-01
Feature extraction plays a major role in the pattern recognition process, and this paper presents a novel feature extraction approach, adaptive local binary pattern (aLBP). aLBP is built on the local binary pattern (LBP), which is an image processing method, and one-dimensional local binary pattern (1D-LBP). In LBP, each pixel is compared with its neighbors. Similarly, in 1D-LBP, each data in the raw is judged against its neighbors. 1D-LBP extracts feature based on local changes in the signal. Therefore, it has high a potential to be employed in medical purposes. Since, each action or abnormality, which is recorded in SEMG signals, has its own pattern, and via the 1D-LBP these (hidden) patterns may be detected. But, the positions of the neighbors in 1D-LBP are constant depending on the position of the data in the raw. Also, both LBP and 1D-LBP are very sensitive to noise. Therefore, its capacity in detecting hidden patterns is limited. To overcome these drawbacks, aLBP was proposed. In aLBP, the positions of the neighbors and their values can be assigned adaptively via the down-sampling and the smoothing coefficients. Therefore, the potential to detect (hidden) patterns, which may express an illness or an action, is really increased. To validate the proposed feature extraction approach, two different datasets were employed. Achieved accuracies by the proposed approach were higher than obtained results by employed popular feature extraction approaches and the reported results in the literature. Obtained accuracy results were brought out that the proposed method can be employed to investigate SEMG signals. In summary, this work attempts to develop an adaptive feature extraction scheme that can be utilized for extracting features from local changes in different categories of time-varying signals. PMID:26718556
Hejase, Hussein A; Liu, Kevin J
2016-01-01
Recent studies of eukaryotes including human and Neandertal, mice, and butterflies have highlighted the major role that interspecific introgression has played in adaptive trait evolution. A common question arises in each case: what is the genomic architecture of the introgressed traits? One common approach that can be used to address this question is association mapping, which looks for genotypic markers that have significant statistical association with a trait. It is well understood that sample relatedness can be a confounding factor in association mapping studies if not properly accounted for. Introgression and other evolutionary processes (e.g., incomplete lineage sorting) typically introduce variation among local genealogies, which can also differ from global sample structure measured across all genomic loci. In contrast, state-of-the-art association mapping methods assume fixed sample relatedness across the genome, which can lead to spurious inference. We therefore propose a new association mapping method called Coal-Map, which uses coalescent-based models to capture local genealogical variation alongside global sample structure. Using simulated and empirical data reflecting a range of evolutionary scenarios, we compare the performance of Coal-Map against EIGENSTRAT, a leading association mapping method in terms of its popularity, power, and type I error control. Our empirical data makes use of hundreds of mouse genomes for which adaptive interspecific introgression has recently been described. We found that Coal-Map's performance is comparable or better than EIGENSTRAT in terms of statistical power and false positive rate. Coal-Map's performance advantage was greatest on model conditions that most closely resembled empirically observed scenarios of adaptive introgression. These conditions had: (1) causal SNPs contained in one or a few introgressed genomic loci and (2) varying rates of gene flow - from high rates to very low rates where incomplete lineage
Novel Approaches for Fungal Transcriptomics from Host Samples
Amorim-Vaz, Sara; Sanglard, Dominique
2016-01-01
Candida albicans adaptation to the host requires a profound reprogramming of the fungal transcriptome as compared to in vitro laboratory conditions. A detailed knowledge of the C. albicans transcriptome during the infection process is necessary in order to understand which of the fungal genes are important for host adaptation. Such genes could be thought of as potential targets for antifungal therapy. The acquisition of the C. albicans transcriptome is, however, technically challenging due to the low proportion of fungal RNA in host tissues. Two emerging technologies were used recently to circumvent this problem. One consists of the detection of low abundance fungal RNA using capture and reporter gene probes which is followed by emission and quantification of resulting fluorescent signals (nanoString). The other is based first on the capture of fungal RNA by short biotinylated oligonucleotide baits covering the C. albicans ORFome permitting fungal RNA purification. Next, the enriched fungal RNA is amplified and subjected to RNA sequencing (RNA-seq). Here we detail these two transcriptome approaches and discuss their advantages and limitations and future perspectives in microbial transcriptomics from host material. PMID:26834721
Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging
NASA Astrophysics Data System (ADS)
Kellogg, Robert L.; Escuti, Michael J.
2007-09-01
New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang
2016-02-01
Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses
NASA Astrophysics Data System (ADS)
Herfort, L.; Seaton, C. M.; Wilkin, M.; Baptista, A. M.; Roman, B.; Preston, C. M.; Scholin, C. A.; Melançon, C.; Simon, H. M.
2013-12-01
An autonomous microbial sampling device was integrated with a long-term (endurance) environmental sensor system to investigate variation in microbial composition and activities related to complex estuarine dynamics. This integration was a part of ongoing efforts in the Center for Coastal Margin Observation and Prediction (CMOP) to study estuarine carbon and nitrogen cycling using an observation and prediction system (SATURN, http://www.stccmop.org/saturn) as foundational infrastructure. The two endurance stations fitted with physical and biogeochemical sensors that were used in this study are located in the SATURN observation network. The microbial sampler is the Environmental Sample Processor (ESP), a commercially available electromechanical/fluidic system designed for automated collection, preservation and in situ analyses of marine water samples. The primary goal of the integration was to demonstrate that the ESP, developed for sampling of pelagic oceanic environments, could be successfully deployed for autonomous sample acquisition in the highly dynamic and turbid Columbia River estuary. The ability of the ESP to collect material at both pre-determined times and automatically in response to local conditions was tested. Pre-designated samples were acquired at specific times to capture variability in the tidal cycle. Autonomous, adaptive sampling was triggered when conditions associated with specific water masses were detected in real-time by the SATURN station's sensors and then communicated to the ESP via the station computer to initiate sample collection. Triggering criteria were based on our understanding of estuary dynamics, as provided by the analysis of extensive archives of high-resolution, long-term SATURN observations and simulations. In this manner, we used the ESP to selectively sample various microbial consortia in the estuary to facilitate the study of ephemeral microbial-driven processes. For example, during the summer of 2013 the adaptive sampling
Non-adaptive and adaptive hybrid approaches for enhancing water quality management
NASA Astrophysics Data System (ADS)
Kalwij, Ineke M.; Peralta, Richard C.
2008-09-01
SummaryUsing optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, that combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal for optimization commencement can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates. The parameters are useful starting values because using trial-and-error to identify an ideal combination of control
A Variational Approach to Enhanced Sampling and Free Energy Calculations
NASA Astrophysics Data System (ADS)
Parrinello, Michele
2015-03-01
The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling which is based on the addition of an external bias which helps overcoming the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.
Variational Approach to Enhanced Sampling and Free Energy Calculations
NASA Astrophysics Data System (ADS)
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, beside being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
Adaptation and Validation of the Sexual Assertiveness Scale (SAS) in a Sample of Male Drug Users.
Vallejo-Medina, Pablo; Sierra, Juan Carlos
2015-01-01
The aim of the present study was to adapt and validate the Sexual Assertiveness Scale (SAS) in a sample of male drug users. A sample of 326 male drug users and 322 non-clinical males was selected by cluster sampling and convenience sampling, respectively. Results showed that the scale had good psychometric properties and adequate internal consistency reliability (Initiation = .66, Refusal = .74 and STD-P = .79). An evaluation of the invariance showed strong factor equivalence between both samples. A high and moderate effect of Differential Item Functioning was only found in items 1 and 14 (∆R 2 Nagelkerke = .076 and .037, respectively). We strongly recommend not using item 1 if the goal is to compare the scores of both groups, otherwise the comparison will be biased. Correlations obtained between the CSFQ-14 and the safe sex ratio and the SAS subscales were significant (CI = 95%) and indicated good concurrent validity. Scores of male drug users were similar to those of non-clinical males. Therefore, the adaptation of the SAS to drug users provides enough guarantees for reliable and valid use in both clinical practice and research, although care should be taken with item 1. PMID:25896498
Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach
Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio
2015-01-01
This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely-used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043
ERIC Educational Resources Information Center
Reinschmidt, Kerstin M.; Teufel-Shone, Nicolette I.; Bradford, Gail; Drummond, Rebecca L.; Torres, Emma; Redondo, Floribella; Elenes, Jo Jean; Sanders, Alicia; Gastelum, Sylvia; Moore-Monroy, Martha; Barajas, Salvador; Fernandez, Lourdes; Alvidrez, Rosy; de Zapien, Jill Guernsey; Staten, Lisa K.
2010-01-01
Diabetes health disparities among Hispanic populations have been countered with federally funded health promotion and disease prevention programs. Dissemination has focused on program adaptation to local cultural contexts for greater acceptability and sustainability. Taking a broader approach and drawing on our experience in Mexican American…
New approaches to nanoparticle sample fabrication for atom probe tomography.
Felfer, P; Li, T; Eder, K; Galinski, H; Magyar, A P; Bell, D C; Smith, G D W; Kruse, N; Ringer, S P; Cairney, J M
2015-12-01
Due to their unique properties, nano-sized materials such as nanoparticles and nanowires are receiving considerable attention. However, little data is available about their chemical makeup at the atomic scale, especially in three dimensions (3D). Atom probe tomography is able to answer many important questions about these materials if the challenge of producing a suitable sample can be overcome. In order to achieve this, the nanomaterial needs to be positioned within the end of a tip and fixed there so the sample possesses sufficient structural integrity for analysis. Here we provide a detailed description of various techniques that have been used to position nanoparticles on substrates for atom probe analysis. In some of the approaches, this is combined with deposition techniques to incorporate the particles into a solid matrix, and focused ion beam processing is then used to fabricate atom probe samples from this composite. Using these approaches, data has been achieved from 10-20 nm core-shell nanoparticles that were extracted directly from suspension (i.e. with no chemical modification) with a resolution of better than ± 1 nm. PMID:25980894
Mars sample return, updated to a groundbreaking approach
NASA Technical Reports Server (NTRS)
Mattingly, R.; Matovsek, S.; Jordan, F.
2002-01-01
A Mars Sample Return (MSR) mission is a goal of the Mars Program. Recently, NASA and JPL have been studying the possibility of a Mars Sample Return some time in the next decade of Mars exploration. In 2001, JPL commissioned four industry teams to make a fresh examination of MSR architectures. Six papers on these studies were presented at last year's conference. As new fiscal realities of a cost-capped Mars Exploration Program unfolded, it was evident that these MSR concepts, which included mobility and subsurface sample acquisition, did not fit reasonably within a balanced program. Therefore, at the request of NASA and the science community, JPL asked the four industry teams plus JPL's Team X to explore ways to reduce the cost of a MSR. A NASA-created MSR Science Steering Group (SSG) established a reduced set of requirements for these new studies that built upon the previous year's work. As a result, a new 'Groundbreaking' approach to MSR was established that is well understood based on the studies and independent cost assessments by Aerospace Corporation and SAIC. The Groundbreaking approach appears to be what a contemporary, balanced Mars Exploration Program can afford, has turned out to be justifiable by the MSR Science Steering Group, and has been endorsed by the Mars science community at large. This paper gives a brief overview of the original 2001 study results and discusses the process leading to the new studies, the studies themselves, and the results.
Enhancing Adaptive Filtering Approaches for Land Data Assimilation Systems
Technology Transfer Automated Retrieval System (TEKTRAN)
Recent work has presented the initial application of adaptive filtering techniques to land surface data assimilation systems. Such techniques are motivated by our current lack of knowledge concerning the structure of large-scale error in either land surface modeling output or remotely-sensed estima...
The Canadian approach to the settlement and adaptation of immigrants.
1986-01-01
Canada has been the host to over 400,000 refugees since World War II. The settlement and adaptation process is supported by the federal government and by the majority of provincial governments. Under the national and regional Employment and Immigration Commission CEIC) settlement organizations the major programs administered to effect the adaptation of newcomers are: 1) the Adjustment Assistance Program, 2) the Immigrant Settlement and Adaptation Program, 3) the Language/Skill Training Program, and 4) the Employment Services Program. Ontario, the recipient of more than 1/2 the newcomers that arrive in Canada each year, pursues active programs in the reception of newcomers through their Welcome House Program which offers a wide range of reception services to the newcomers. The employment and unemployment experiences of refugees is very much influenced by the prevailing labor market conditions, the refugees' proficiency in the country's official languages, the amount of sympathy evoked by the media reports on the plight of refugees, the availability of people of the same ethnic origin already well settled in the country, and the adaptability of the refugees themselves. The vast majority of refugee groups that came to Canada during the last 1/4 century seem to have adjusted well economically, despite having had difficulty in entering the occupations they intended to join. It is calculated that an average of $6607 per arrival is needed to cover the CEIC program costs of 1983-1984. PMID:12178937
The Detroit Approach to Adapted Physical Education and Recreation.
ERIC Educational Resources Information Center
Elkins, Bruce; Czapski, Stephen
The report describes Detroit's Adaptive Physical Education Consortium Project in Michigan. Among the main objectives of the project are to coordinate all physical education and recreation services to the handicapped in the Detroit area; to facilitate the mainstreaming of capable handicapped individuals into existing "regular" physical education…
Adaptive E-Learning Environments: Research Dimensions and Technological Approaches
ERIC Educational Resources Information Center
Di Bitonto, Pierpaolo; Roselli, Teresa; Rossano, Veronica; Sinatra, Maria
2013-01-01
One of the most closely investigated topics in e-learning research has always been the effectiveness of adaptive learning environments. The technological evolutions that have dramatically changed the educational world in the last six decades have allowed ever more advanced and smarter solutions to be proposed. The focus of this paper is to depict…
A Monte Carlo Approach for Adaptive Testing with Content Constraints
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander
2008-01-01
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Design of Adaptive Hypermedia Learning Systems: A Cognitive Style Approach
ERIC Educational Resources Information Center
Mampadi, Freddy; Chen, Sherry Y.; Ghinea, Gheorghita; Chen, Ming-Puu
2011-01-01
In the past decade, a number of adaptive hypermedia learning systems have been developed. However, most of these systems tailor presentation content and navigational support solely according to students' prior knowledge. On the other hand, previous research suggested that cognitive styles significantly affect student learning because they refer to…
Dissociating Conflict Adaptation from Feature Integration: A Multiple Regression Approach
ERIC Educational Resources Information Center
Notebaert, Wim; Verguts, Tom
2007-01-01
Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on…
Assessing confidence in management adaptation approaches for climate-sensitive ecosystems
NASA Astrophysics Data System (ADS)
West, J. M.; Julius, S. H.; Weaver, C. P.
2012-03-01
A number of options are available for adapting ecosystem management to improve resilience in the face of climatic changes. However, uncertainty exists as to the effectiveness of these options. A report prepared for the US Climate Change Science Program reviewed adaptation options for a range of federally managed systems in the United States. The report included a qualitative uncertainty analysis of conceptual approaches to adaptation derived from the review. The approaches included reducing anthropogenic stressors, protecting key ecosystem features, maintaining representation, replicating, restoring, identifying refugia and relocating organisms. The results showed that the expert teams had the greatest scientific confidence in adaptation options that reduce anthropogenic stresses. Confidence in other approaches was lower because of gaps in understanding of ecosystem function, climate change impacts on ecosystems, and management effectiveness. This letter discusses insights gained from the confidence exercise and proposes strategies for improving future assessments of confidence for management adaptations to climate change.
High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM
Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne
2015-01-01
Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids. PMID:26576666
High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM
NASA Astrophysics Data System (ADS)
Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne
2015-11-01
Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids.
An Adaptive Sampling System for Sensor Nodes in Body Area Networks.
Rieger, R; Taylor, J
2014-04-23
The importance of body sensor networks to monitor patients over a prolonged period of time has increased with an advance in home healthcare applications. Sensor nodes need to operate with very low-power consumption and under the constraint of limited memory capacity. Therefore, it is wasteful to digitize the sensor signal at a constant sample rate, given that the frequency contents of the signals vary with time. Adaptive sampling is established as a practical method to reduce the sample data volume. In this paper a low-power analog system is proposed, which adjusts the converter clock rate to perform a peak-picking algorithm on the second derivative of the input signal. The presented implementation does not require an analog-to-digital converter or a digital processor in the sample selection process. The criteria for selecting a suitable detection threshold are discussed, so that the maximum sampling error can be limited. A circuit level implementation is presented. Measured results exhibit a significant reduction in the average sample frequency and data rate of over 50% and 38% respectively. PMID:24760918
An approach to fabrication of large adaptive optics mirrors
NASA Astrophysics Data System (ADS)
Schwartz, Eric; Rey, Justin; Blaszak, David; Cavaco, Jeffrey
2014-07-01
For more than two decades, Northrop Grumman Xinetics has been the principal supplier of small deformable mirrors that enable adaptive optical (AO) systems for the ground-based astronomical telescope community. With today's drive toward extremely large aperture systems, and the desire of telescope designers to include adaptive optics in the main optical path of the telescope, Xinetics has recognized the need for large active mirrors with the requisite bandwidth and actuator stoke. Presented in this paper is the proposed use of Northrop Grumman Xinetics' large, ultra-lightweight Silicon Carbide substrates with surface parallel actuation of sufficient spatial density and bandwidth to meet the requirements of tomorrow's AO systems, while reducing complexity and cost.
A Hierarchical Adaptive Approach to Optimal Experimental Design
Kim, Woojae; Pitt, Mark A.; Lu, Zhong-Lin; Steyvers, Mark; Myung, Jay I.
2014-01-01
Experimentation is at the core of research in the behavioral and neural sciences, yet observations can be expensive and time-consuming to acquire (e.g., MRI scans, responses from infant participants). A major interest of researchers is designing experiments that lead to maximal accumulation of information about the phenomenon under study with the fewest possible number of observations. In addressing this challenge, statisticians have developed adaptive design optimization methods. This letter introduces a hierarchical Bayes extension of adaptive design optimization that provides a judicious way to exploit two complementary schemes of inference (with past and future data) to achieve even greater accuracy and efficiency in information gain. We demonstrate the method in a simulation experiment in the field of visual perception. PMID:25149697
Design of an Adaptive Secondary Mirror: A Global Approach
NASA Astrophysics Data System (ADS)
Brusa, Guido; del Vecchio, Ciro
1998-07-01
We present the mechanical and actuator design of an adaptive secondary mirror that matches the optical requirements of the active and adaptive corrections. Conceived for the particular implementation for the 6.5-m conversion of the multiple-mirror telescope, with small variations of the input parameters this study is suitable for applications for telescopes of the same class. We found that a three-layer structure, i.e., a thin deformable shell, a thick reference plate, and a third plate that acts as actuator support and heat sink, is able to provide the required mechanical stability and actuator density. We also found that a simple electromagnetic actuator can be used. This actuator, when optimized, will dissipate a typical power of a few tenths of watts.
Nie Xiaobo; Liang Jian; Yan Di
2012-12-15
Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principle component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the function form of the coefficients. Eleven h and n cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during h and n cancer radiotherapy can be represented using the first 3 {approx} 4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
The adaptive significance of adult neurogenesis: an integrative approach
Konefal, Sarah; Elliot, Mick; Crespi, Bernard
2013-01-01
Adult neurogenesis in mammals is predominantly restricted to two brain regions, the dentate gyrus (DG) of the hippocampus and the olfactory bulb (OB), suggesting that these two brain regions uniquely share functions that mediate its adaptive significance. Benefits of adult neurogenesis across these two regions appear to converge on increased neuronal and structural plasticity that subserves coding of novel, complex, and fine-grained information, usually with contextual components that include spatial positioning. By contrast, costs of adult neurogenesis appear to center on potential for dysregulation resulting in higher risk of brain cancer or psychological dysfunctions, but such costs have yet to be quantified directly. The three main hypotheses for the proximate functions and adaptive significance of adult neurogenesis, pattern separation, memory consolidation, and olfactory spatial, are not mutually exclusive and can be reconciled into a simple general model amenable to targeted experimental and comparative tests. Comparative analysis of brain region sizes across two major social-ecological groups of primates, gregarious (mainly diurnal haplorhines, visually-oriented, and in large social groups) and solitary (mainly noctural, territorial, and highly reliant on olfaction, as in most rodents) suggest that solitary species, but not gregarious species, show positive associations of population densities and home range sizes with sizes of both the hippocampus and OB, implicating their functions in social-territorial systems mediated by olfactory cues. Integrated analyses of the adaptive significance of adult neurogenesis will benefit from experimental studies motivated and structured by ecologically and socially relevant selective contexts. PMID:23882188
An integrated sampling and analysis approach for improved biodiversity monitoring
DeWan, Amielle A.; Zipkin, Elise F.
2010-01-01
Successful biodiversity conservation requires high quality monitoring data and analyses to ensure scientifically defensible policy, legislation, and management. Although monitoring is a critical component in assessing population status and trends, many governmental and non-governmental organizations struggle to develop and implement effective sampling protocols and statistical analyses because of the magnitude and diversity of species in conservation concern. In this article we describe a practical and sophisticated data collection and analysis framework for developing a comprehensive wildlife monitoring program that includes multi-species inventory techniques and community-level hierarchical modeling. Compared to monitoring many species individually, the multi-species approach allows for improved estimates of individual species occurrences, including rare species, and an increased understanding of the aggregated response of a community to landscape and habitat heterogeneity. We demonstrate the benefits and practicality of this approach to address challenges associated with monitoring in the context of US state agencies that are legislatively required to monitor and protect species in greatest conservation need. We believe this approach will be useful to regional, national, and international organizations interested in assessing the status of both common and rare species.
Wellmann, Robin; Bennewitz, Jörn; Meuwissen, Theo H E
2014-01-01
As extinction of local domestic breeds and of isolated subpopulations of wild species continues, and the resources available for conservation programs are limited, prioritizing subpopulations for conservation is of high importance to halt the erosion of genetic diversity observed in endangered species. Current approaches usually only take neutral genetic diversity into account. However, adaptation of subpopulations to different environments also contributes to the diversity found in the species. This paper introduces two notions of adaptive variation. The adaptive diversity in a trait is the excess of variance found in genotypic values relative to the variance that would have been expected in the absence of selection. The adaptivity coverage of a set of subpopulations quantifies how well the subpopulations could adapt to a large range of environments within a limited time span. Additionally, genome-based notions of neutral diversities were obtained that correspond to well known pedigree-based definitions. The values of subpopulations for conservation of adaptivity coverage were compared with their conservation values for adaptive diversity and neutral diversities using simulated data. Conservation values for adaptive diversity and neutral diversities were only slightly correlated, but the values for conservation of adaptivity coverage showed a reasonable correlation with both kinds if the time span was chosen appropriately. Hence, maintaining adaptivity coverage is a promising approach to prioritize subpopulations for conservation decisions. PMID:25578300
Li, Hongdong; Liang, Yizeng; Xu, Qingsong; Cao, Dongsheng
2009-08-19
By employing the simple but effective principle 'survival of the fittest' on which Darwin's Evolution Theory is based, a novel strategy for selecting an optimal combination of key wavelengths of multi-component spectral data, named competitive adaptive reweighted sampling (CARS), is developed. Key wavelengths are defined as the wavelengths with large absolute coefficients in a multivariate linear regression model, such as partial least squares (PLS). In the present work, the absolute values of regression coefficients of PLS model are used as an index for evaluating the importance of each wavelength. Then, based on the importance level of each wavelength, CARS sequentially selects N subsets of wavelengths from N Monte Carlo (MC) sampling runs in an iterative and competitive manner. In each sampling run, a fixed ratio (e.g. 80%) of samples is first randomly selected to establish a calibration model. Next, based on the regression coefficients, a two-step procedure including exponentially decreasing function (EDF) based enforced wavelength selection and adaptive reweighted sampling (ARS) based competitive wavelength selection is adopted to select the key wavelengths. Finally, cross validation (CV) is applied to choose the subset with the lowest root mean square error of CV (RMSECV). The performance of the proposed procedure is evaluated using one simulated dataset together with one near infrared dataset of two properties. The results reveal an outstanding characteristic of CARS that it can usually locate an optimal combination of some key wavelengths which are interpretable to the chemical property of interest. Additionally, our study shows that better prediction is obtained by CARS when compared to full spectrum PLS modeling, Monte Carlo uninformative variable elimination (MC-UVE) and moving window partial least squares regression (MWPLSR). PMID:19616692
Advances in adaptive control theory: Gradient- and derivative-free approaches
NASA Astrophysics Data System (ADS)
Yucelen, Tansel
In this dissertation, we present new approaches to improve standard designs in adaptive control theory, and novel adaptive control architectures. We first present a novel Kalman filter based approach for approximately enforcing a linear constraint in standard adaptive control design. One application is that this leads to alternative forms for well known modification terms such as e-modification. In addition, it leads to smaller tracking errors without incurring significant oscillations in the system response and without requiring high modification gain. We derive alternative forms of e- and adaptive loop recovery (ALR-) modifications. Next, we show how to use Kalman filter optimization to derive a novel adaptation law. This results in an optimization-based time-varying adaptation gain that reduces the need for adaptation gain tuning. A second major contribution of this dissertation is the development of a novel derivative-free, delayed weight update law for adaptive control. The assumption of constant unknown ideal weights is relaxed to the existence of time-varying weights, such that fast and possibly discontinuous variation in weights are allowed. This approach is particulary advantageous for applications to systems that can undergo a sudden change in dynamics, such as might be due to reconfiguration, deployment of a payload, docking, or structural damage, and for rejection of external disturbance processes. As a third and final contribution, we develop a novel approach for extending all the methods developed in this dissertation to the case of output feedback. The approach is developed only for the case of derivative-free adaptive control, and the extension of the other approaches developed previously for the state feedback case to output feedback is left as a future research topic. The proposed approaches of this dissertation are illustrated in both simulation and flight test.
Yan, Shengye; Xu, Xinxing; Xu, Dong; Lin, Stephen; Li, Xuelong
2015-03-01
We present a framework for image classification that extends beyond the window sampling of fixed spatial pyramids and is supported by a new learning algorithm. Based on the observation that fixed spatial pyramids sample a rather limited subset of the possible image windows, we propose a method that accounts for a comprehensive set of windows densely sampled over location, size, and aspect ratio. A concise high-level image feature is derived to effectively deal with this large set of windows, and this higher level of abstraction offers both efficient handling of the dense samples and reduced sensitivity to misalignment. In addition to dense window sampling, we introduce generalized adaptive l(p)-norm multiple kernel learning (GA-MKL) to learn a robust classifier based on multiple base kernels constructed from the new image features and multiple sets of prelearned classifiers from other classes. With GA-MKL, multiple levels of image features are effectively fused, and information is shared among different classifiers. Extensive evaluation on benchmark datasets for object recognition (Caltech256 and Caltech101) and scene recognition (15Scenes) demonstrate that the proposed method outperforms the state-of-the-art under a broad range of settings. PMID:24968365
Adaptive sampling in two-phase designs: a biomarker study for progression in arthritis
McIsaac, Michael A; Cook, Richard J
2015-01-01
Response-dependent two-phase designs are used increasingly often in epidemiological studies to ensure sampling strategies offer good statistical efficiency while working within resource constraints. Optimal response-dependent two-phase designs are difficult to implement, however, as they require specification of unknown parameters. We propose adaptive two-phase designs that exploit information from an internal pilot study to approximate the optimal sampling scheme for an analysis based on mean score estimating equations. The frequency properties of estimators arising from this design are assessed through simulation, and they are shown to be similar to those from optimal designs. The design procedure is then illustrated through application to a motivating biomarker study in an ongoing rheumatology research program. Copyright © 2015 © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25951124
Adaptive leadership: a novel approach for family decision making.
Adams, Judith; Bailey, Donald E; Anderson, Ruth A; Galanos, Anthony N
2013-03-01
Family members of intensive care unit (ICU) patients want to be involved in decision making, but they may not be best served by being placed in the position of having to solve problems for which they lack knowledge and skills. This case report presents an exemplar family meeting in the ICU led by a palliative care specialist, with discussion about the strategies used to improve the capacity of the family to make a decision consistent with the patient's goals. These strategies are presented through the lens of Adaptive Leadership. PMID:22663140
PFC design via FRIT Approach for Adaptive Output Feedback Control of Discrete-time Systems
NASA Astrophysics Data System (ADS)
Mizumoto, Ikuro; Takagi, Taro; Fukui, Sota; Shah, Sirish L.
This paper deals with a design problem of an adaptive output feedback control for discrete-time systems with a parallel feedforward compensator (PFC) which is designed for making the augmented controlled system ASPR. A PFC design scheme by a FRIT approach with only using an input/output experimental data set will be proposed for discrete-time systems in order to design an adaptive output feedback control system. Furthermore, the effectiveness of the proposed PFC design method will be confirmed through numerical simulations by designing adaptive control system with adaptive NN (Neural Network) for an uncertain discrete-time system.
Adaptive sampling dual terahertz comb spectroscopy using dual free-running femtosecond lasers
Yasui, Takeshi; Ichikawa, Ryuji; Hsieh, Yi-Da; Hayashi, Kenta; Cahyadi, Harsono; Hindle, Francis; Sakaguchi, Yoshiyuki; Iwata, Tetsuo; Mizutani, Yasuhiro; Yamamoto, Hirotsugu; Minoshima, Kaoru; Inaba, Hajime
2015-01-01
Terahertz (THz) dual comb spectroscopy (DCS) is a promising method for high-accuracy, high-resolution, broadband THz spectroscopy because the mode-resolved THz comb spectrum includes both broadband THz radiation and narrow-line CW-THz radiation characteristics. In addition, all frequency modes of a THz comb can be phase-locked to a microwave frequency standard, providing excellent traceability. However, the need for stabilization of dual femtosecond lasers has often hindered its wide use. To overcome this limitation, here we have demonstrated adaptive-sampling THz-DCS, allowing the use of free-running femtosecond lasers. To correct the fluctuation of the time and frequency scales caused by the laser timing jitter, an adaptive sampling clock is generated by dual THz-comb-referenced spectrum analysers and is used for a timing clock signal in a data acquisition board. The results not only indicated the successful implementation of THz-DCS with free-running lasers but also showed that this configuration outperforms standard THz-DCS with stabilized lasers due to the slight jitter remained in the stabilized lasers. PMID:26035687
Making CORBA objects persistent: The object database adapter approach
Reverbel, F.C.R.
1997-05-01
In spite of its remarkable successes in promoting standards for distributed object systems, the Object Management Group (OMG) has not yet settled the issue of object persistence in the Object Request Broker (ORB) environment. The Common Object Request Broker Architecture (CORBA) specification briefly mentions an Object-Oriented Database Adapter that makes objects stored in an object-oriented database accessible through the ORB. This idea is pursued in the Appendix B of the ODMG standard, which identifies a number of issues involved in using an Object Database Management System (ODBMS) in a CORBA environment, and proposes an Object Database Adapter (ODA) to realize the integration of the ORB with the ODBMS. This paper discusses the design and implementation of an ODA that integrates an ORB and an ODBMS with C++ bindings. For the author`s purposes, an ODBMS is a system with programming interfaces. It may be a pure object-oriented DBMS (an OODBMS), or a combination of a relational DBMS and an object-relational mapper.
Vrabie, Draguna; Lewis, Frank
2009-04-01
In this paper we present in a continuous-time framework an online approach to direct adaptive optimal control with infinite horizon cost for nonlinear systems. The algorithm converges online to the optimal control solution without knowledge of the internal system dynamics. Closed-loop dynamic stability is guaranteed throughout. The algorithm is based on a reinforcement learning scheme, namely Policy Iterations, and makes use of neural networks, in an Actor/Critic structure, to parametrically represent the control policy and the performance of the control system. The two neural networks are trained to express the optimal controller and optimal cost function which describes the infinite horizon control performance. Convergence of the algorithm is proven under the realistic assumption that the two neural networks do not provide perfect representations for the nonlinear control and cost functions. The result is a hybrid control structure which involves a continuous-time controller and a supervisory adaptation structure which operates based on data sampled from the plant and from the continuous-time performance dynamics. Such control structure is unlike any standard form of controllers previously seen in the literature. Simulation results, obtained considering two second-order nonlinear systems, are provided. PMID:19362449
Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen
2016-01-01
We aimed to identify serum biomarkers for screening individuals who could adapt to high-altitude hypoxia at sea level. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiling between HHI and HHA groups collected at sea level. Routine blood tests revealed the concentration of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling showed that ten significantly differentially expressed peaks between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further sequence identified as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347–356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205–214), and isoform 1 of fibrinogen α chain precursor (FGA 588–624). Expression of their full proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provided a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes. PMID:27150491
Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen
2016-01-01
We aimed to identify serum biomarkers for screening individuals who could adapt to high-altitude hypoxia at sea level. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiling between HHI and HHA groups collected at sea level. Routine blood tests revealed the concentration of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling showed that ten significantly differentially expressed peaks between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further sequence identified as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347-356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205-214), and isoform 1 of fibrinogen α chain precursor (FGA 588-624). Expression of their full proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provided a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes. PMID:27150491
NASA Astrophysics Data System (ADS)
Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen
2016-05-01
We aimed to identify serum biomarkers for screening individuals who could adapt to high-altitude hypoxia at sea level. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiling between HHI and HHA groups collected at sea level. Routine blood tests revealed the concentration of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling showed that ten significantly differentially expressed peaks between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further sequence identified as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347–356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205–214), and isoform 1 of fibrinogen α chain precursor (FGA 588–624). Expression of their full proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provided a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes.
Analytical approach to an integrate-and-fire model with spike-triggered adaptation
NASA Astrophysics Data System (ADS)
Schwalger, Tilo; Lindner, Benjamin
2015-12-01
The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology
NASA Technical Reports Server (NTRS)
Blackmore, Lars James C.; Acikmese, Behcet; Mandic, Milan
2012-01-01
A software tool is used to demonstrate the feasibility of Touch and Go (TAG) sampling for Asteroid Sample Return missions. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. Previous work at JPL developed the G-TAG simulation tool, which provides a software environment for fast, multi-body simulations of the TAG event. G-TAG is described in Multibody Simulation Software Testbed for Small-Body Exploration and Sampling, (NPO-47196) NASA Tech Briefs, Vol. 35, No. 11 (November 2011), p.54. This current innovation adapts this tool to a mission that intends to return a sample from the surface of an asteroid. In order to demonstrate the feasibility of the TAG concept, the new software tool was used to generate extensive simulations that demonstrate the designed spacecraft meets key requirements. These requirements state that contact force and duration must be sufficient to ensure that enough material from the surface is collected in the brushwheel sampler (BWS), and that the spacecraft must survive the contact and must be able to recover and ascend to a safe position, and maintain velocity and orientation after the contact.
An Evidence-Based Public Health Approach to Climate Change Adaptation
Eidson, Millicent; Tlumak, Jennifer E.; Raab, Kristin K.; Luber, George
2014-01-01
Background: Public health is committed to evidence-based practice, yet there has been minimal discussion of how to apply an evidence-based practice framework to climate change adaptation. Objectives: Our goal was to review the literature on evidence-based public health (EBPH), to determine whether it can be applied to climate change adaptation, and to consider how emphasizing evidence-based practice may influence research and practice decisions related to public health adaptation to climate change. Methods: We conducted a substantive review of EBPH, identified a consensus EBPH framework, and modified it to support an EBPH approach to climate change adaptation. We applied the framework to an example and considered implications for stakeholders. Discussion: A modified EBPH framework can accommodate the wide range of exposures, outcomes, and modes of inquiry associated with climate change adaptation and the variety of settings in which adaptation activities will be pursued. Several factors currently limit application of the framework, including a lack of higher-level evidence of intervention efficacy and a lack of guidelines for reporting climate change health impact projections. To enhance the evidence base, there must be increased attention to designing, evaluating, and reporting adaptation interventions; standardized health impact projection reporting; and increased attention to knowledge translation. This approach has implications for funders, researchers, journal editors, practitioners, and policy makers. Conclusions: The current approach to EBPH can, with modifications, support climate change adaptation activities, but there is little evidence regarding interventions and knowledge translation, and guidelines for projecting health impacts are lacking. Realizing the goal of an evidence-based approach will require systematic, coordinated efforts among various stakeholders. Citation: Hess JJ, Eidson M, Tlumak JE, Raab KK, Luber G. 2014. An evidence-based public
A context-adaptable approach to clinical guidelines.
Terenziani, Paolo; Montani, Stefania; Bottrighi, Alessio; Torchio, Mauro; Molino, Gianpaolo; Correndo, Gianluca
2004-01-01
One of the most relevant obstacles to the use and dissemination of clinical guidelines is the gap between the generality of guidelines (as defined, e.g., by physicians' committees) and the peculiarities of the specific context of application. In particular, general guidelines do not take into account the fact that the tools needed for laboratory and instrumental investigations might be unavailable at a given hospital. Moreover, computer-based guideline managers must also be integrated with the Hospital Information System (HIS), and usually different DBMS are adopted by different hospitals. The GLARE (Guideline Acquisition, Representation and Execution) system addresses these issues by providing a facility for automatic resource-based adaptation of guidelines to the specific context of application, and by providing a modular architecture in which only limited and well-localised changes are needed to integrate the system with the HIS at hand. PMID:15360797
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
CROWDER, STEPHEN V.
1999-09-01
In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper we address the issue of low volume statistical process control. We investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. We develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, we study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. We show that far fewer data values are needed than is typically recommended for process control applications. We also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.
Crowder, S.V.; Eshleman, L.
1998-08-01
In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards white building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper the authors address the issue of low volume statistical process control. They investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. The authors develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, they study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. They show that far fewer data values are needed than is typically recommended for process control applications. And they demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.
Kim, Namhee; Zahran, Mai; Schlick, Tamar
2015-01-01
The modular organization of RNA structure has been exploited in various computational and theoretical approaches to identify RNA tertiary (3D) motifs and assemble RNA structures. Riboswitches exemplify this modularity in terms of both structural and functional adaptability of RNA components. Here, we extend our computational approach based on tree graph sampling to the prediction of riboswitch topologies by defining additional edges to mimick pseudoknots. Starting from a secondary (2D) structure, we construct an initial graph deduced from predicted junction topologies by our data-mining algorithm RNAJAG trained on known RNAs; we sample these graphs in 3D space guided by knowledge-based statistical potentials derived from bending and torsion measures of internal loops as well as radii of gyration for known RNAs. We present graph sampling results for 10 representative riboswitches, 6 of them with pseudoknots, and compare our predictions to solved structures based on global and local RMSD measures. Our results indicate that the helical arrangements in riboswitches can be approximated using our combination of modified 3D tree graph representations for pseudoknots, junction prediction, graph moves, and scoring functions. Future challenges in the field of riboswitch prediction and design are also discussed. PMID:25726463
Adaptively Managing Wildlife for Climate Change: A Fuzzy Logic Approach
NASA Astrophysics Data System (ADS)
Prato, Tony
2011-07-01
Wildlife managers have little or no control over climate change. However, they may be able to alleviate potential adverse impacts of future climate change by adaptively managing wildlife for climate change. In particular, wildlife managers can evaluate the efficacy of compensatory management actions (CMAs) in alleviating potential adverse impacts of future climate change on wildlife species using probability-based or fuzzy decision rules. Application of probability-based decision rules requires managers to specify certain probabilities, which is not possible when they are uncertain about the relationships between observed and true ecological conditions for a species. Under such uncertainty, the efficacy of CMAs can be evaluated and the best CMA selected using fuzzy decision rules. The latter are described and demonstrated using three constructed cases that assume: (1) a single ecological indicator (e.g., population size for a species) in a single time period; (2) multiple ecological indicators for a species in a single time period; and (3) multiple ecological conditions for a species in multiple time periods.
Adaptation to floods in future climate: a practical approach
NASA Astrophysics Data System (ADS)
Doroszkiewicz, Joanna; Romanowicz, Renata; Radon, Radoslaw; Hisdal, Hege
2016-04-01
In this study some aspects of the application of the 1D hydraulic model are discussed with a focus on its suitability for flood adaptation under future climate conditions. The Biała Tarnowska catchment is used as a case study. A 1D hydraulic model is developed for the evaluation of inundation extent and risk maps in future climatic conditions. We analyse the following flood indices: (i) extent of inundation area; (ii) depth of water on flooded land; (iii) the flood wave duration; (iv) the volume of a flood wave over the threshold value. In this study we derive a model cross-section geometry following the results of primary research based on a 500-year flood inundation extent. We compare two methods of localisation of cross-sections from the point of view of their suitability to the derivation of the most precise inundation outlines. The aim is to specify embankment heights along the river channel that would protect the river valley in the most vulnerable locations under future climatic conditions. We present an experimental design for scenario analysis studies and uncertainty reduction options for future climate projections obtained from the EUROCORDEX project. Acknowledgements: This work was supported by the project CHIHE (Climate Change Impact on Hydrological Extremes), carried out in the Institute of Geophysics Polish Academy of Sciences, funded by Norway Grants (contract No. Pol-Nor/196243/80/2013). The hydro-meteorological observations were provided by the Institute of Meteorology and Water Management (IMGW), Poland.
Sadler, Georgia Robins; Lee, Hau-Chen; Seung-Hwan Lim, Rod; Fullerton, Judith
2011-01-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population based sample is not essential. PMID:20727089
Adaptation of a Weighted Regression Approach to Evaluate Water Quality Trends in an Estuary
To improve the description of long-term changes in water quality, we adapted a weighted regression approach to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach, originally developed to resolve pollutant transport trends in rivers...
Adaptation of a weighted regression approach to evaluate water quality trends in anestuary
To improve the description of long-term changes in water quality, a weighted regression approach developed to describe trends in pollutant transport in rivers was adapted to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach allows...
Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items
ERIC Educational Resources Information Center
Penfield, Randall D.
2006-01-01
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
ERIC Educational Resources Information Center
Dorça, Fabiano Azevedo; Lima, Luciano Vieira; Fernandes, Márcia Aparecida; Lopes, Carlos Roberto
2012-01-01
Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and…
Adaptive Role Playing Games: An Immersive Approach for Problem Based Learning
ERIC Educational Resources Information Center
Sancho, Pilar; Moreno-Ger, Pablo; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar
2009-01-01
In this paper we present a general framework, called NUCLEO, for the application of socio-constructive educational approaches in higher education. The underlying pedagogical approach relies on an adaptation model in order to improve group dynamics, as this has been identified as one of the key features in the success of collaborative learning…
Vogel, Thomas; Perez, Danny
2015-08-28
We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The methodmore » is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.« less
Vogel, Thomas; Perez, Danny
2015-08-28
We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T (U) or, equivalently, of the density of states g(U) over a wide range of energies.
An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors
Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel
2016-01-01
Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
NASA Astrophysics Data System (ADS)
Blaen, Phillip; Khamis, Kieran; Lloyd, Charlotte; Bradley, Chris
2016-04-01
Excessive nutrient concentrations in river waters threaten aquatic ecosystem functioning and can pose substantial risks to human health. Robust monitoring strategies are therefore required to generate reliable estimates of river nutrient loads and to improve understanding of the catchment processes that drive spatiotemporal patterns in nutrient fluxes. Furthermore, these data are vital for prediction of future trends under changing environmental conditions and thus the development of appropriate mitigation measures. In recent years, technological developments have led to an increase in the use of continuous in-situ nutrient analysers, which enable measurements at far higher temporal resolutions than can be achieved with discrete sampling and subsequent laboratory analysis. However, such instruments can be costly to run and difficult to maintain (e.g. due to high power consumption and memory requirements), leading to trade-offs between temporal and spatial monitoring resolutions. Here, we highlight how adaptive monitoring strategies, comprising a mixture of temporal sample frequencies controlled by one or more 'trigger variables' (e.g. river stage, turbidity, or nutrient concentration), can advance our understanding of catchment nutrient dynamics while simultaneously overcoming many of the practical and economic challenges encountered in typical in-situ river nutrient monitoring applications. We present examples of short-term variability in river nutrient dynamics, driven by complex catchment behaviour, which support our case for the development of monitoring systems that can adapt in real-time to rapid environmental changes. In addition, we discuss the advantages and disadvantages of current nutrient monitoring techniques, and suggest new research directions based on emerging technologies and highlight how these might improve: 1) monitoring strategies, and 2) understanding of linkages between catchment processes and river nutrient fluxes.
An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.
Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel
2016-01-01
Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
Machine Learning Approaches to Rare Events Sampling and Estimation
NASA Astrophysics Data System (ADS)
Elsheikh, A. H.
2014-12-01
Given the severe impacts of rare events, we try to quantitatively answer the following two questions: How can we estimate the probability of a rare event? And what are the factors affecting these probabilities? We utilize machine learning classification methods to define the failure boundary (in the stochastic space) corresponding to a specific threshold of a rare event. The training samples for the classification algorithm are obtained using multilevel splitting and Monte Carlo (MC) simulations. Once the training of the classifier is performed, a full MC simulation can be performed efficiently using the classifier as a reduced order model replacing the full physics simulator.We apply the proposed method on a standard benchmark for CO2 leakage through an abandoned well. In this idealized test case, CO2 is injected into a deep aquifer and then spreads within the aquifer and, upon reaching an abandoned well; it rises to a shallower aquifer. In current study, we try to evaluate the probability of leakage of a pre-defined amount of the injected CO2 given a heavy tailed distribution of the leaky well permeability. We show that machine learning based approaches significantly outperform direct MC and multi-level splitting methods in terms of efficiency and precision. The proposed algorithm's efficiency and reliability enabled us to perform a sensitivity analysis to the different modeling assumptions including the different prior distributions on the probability of CO2 leakage.
NASA Astrophysics Data System (ADS)
Ma, Xiang; Zabaras, Nicholas
2009-03-01
A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. Hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media.
An adaptive management approach to controlling suburban deer
Nielson, C.K.; Porter, W.F.; Underwood, H.B.
1997-01-01
Distance sight-resight sampling has particular relevance to aerial surveys, in which height above ground and aircraft speed make the critical assumption of certain detection on the track-line unrealistic. Recent developments in distance sight-resight theory have left practical issues related to data collection as the major impediment to widespread use of distance sight-resight sampling in aerial surveys. We describe and evaluate a system to automatically log, store, and process data from distance sight-resight aerial surveys. The system has a primary digital system and a secondary audio system. The digital system comprises a sighting 'gun' and small keypad for each observer, a global positioning system (GPS) receiver, and an altimeter interface, all linked to a central laptop computer. The gun is used to record time and angle of declination from the horizon of sighted groups of animals as they pass the aircraft. The keypad is used to record information on species and group size. The altimeter interface records altitude from the aircraft's radar altimeter, and the GPS receiver provides location data at user-definable intervals. We wrote software to import data into a database and convert it into a form appropriate for distance sight-resight analyses. Perpendicular distance of sighted groups of animals from the flight path is calculated from altitude and angle of declination. Time, angle of declination, species, and group size of sightings by independent observers on the same side of the aircraft are used as criteria to classify single and duplicate sightings, allowing testing of the critical distance sampling assumption (g(0)=1) and estimation of g(0) if that assumption fails. An audio system comprising headphones for each observer and a 4-track tape recorder allows recording of data that are difficult to accommodate in the digital system and provides a backup to the digital system. We evaluated the system by conducting experimental surveys and reviewing results
Farms adaptation to changes in flood risk: a management approach
NASA Astrophysics Data System (ADS)
Pivot, Jean-Marc; Martin, Philippe
2002-10-01
Creating flood expansion areas e.g. for the protection of urban areas from flooding involves a localised increase in risk which may require farmers to be compensated for crop damage or other losses. With this in mind, the paper sets out the approach used to study the problem and gives results obtained from a survey of farms liable to flooding in central France. The approach is based on a study of decisions made by farmers in situations of uncertainty, using the concept of 'model of action'. The results show that damage caused to farming areas by flooding should be considered both at field level and at farm level. The damage caused to the field depends on the flood itself, the fixed characteristics of the field, and the plant species cultivated. However, the losses to the farm taken as a whole can differ considerably from those for the flooded field, due to 'knock-on' effects on farm operations which depend on the internal organization, the availability of production resources, and the farmer's objectives, both for the farm as a whole and for its individual enterprises. Three main strategies regarding possible flood events were identified. Reasons for choosing one of these include the way the farmer perceives the risk and the size of the area liable to flooding. Finally, the formalisation of farm system management in the face of uncertainty, especially due to flooding, enables compensation to be calculated for farmers whose land is affected by the creation of flood expansion areas.
NASA Technical Reports Server (NTRS)
Hixson, M. M.; Bauer, M. E.; Davis, B. J.
1979-01-01
The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plants. Four sampling schemes involving different numbers of samples and different size sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
A global sampling approach to designing and reengineering RNA secondary structures
Levin, Alex; Lis, Mieszko; Ponty, Yann; O’Donnell, Charles W.; Devadas, Srinivas; Berger, Bonnie; Waldispühl, Jérôme
2012-01-01
The development of algorithms for designing artificial RNA sequences that fold into specific secondary structures has many potential biomedical and synthetic biology applications. To date, this problem remains computationally difficult, and current strategies to address it resort to heuristics and stochastic search techniques. The most popular methods consist of two steps: First a random seed sequence is generated; next, this seed is progressively modified (i.e. mutated) to adopt the desired folding properties. Although computationally inexpensive, this approach raises several questions such as (i) the influence of the seed; and (ii) the efficiency of single-path directed searches that may be affected by energy barriers in the mutational landscape. In this article, we present RNA-ensign, a novel paradigm for RNA design. Instead of taking a progressive adaptive walk driven by local search criteria, we use an efficient global sampling algorithm to examine large regions of the mutational landscape under structural and thermodynamical constraints until a solution is found. When considering the influence of the seeds and the target secondary structures, our results show that, compared to single-path directed searches, our approach is more robust, succeeds more often and generates more thermodynamically stable sequences. An ensemble approach to RNA design is thus well worth pursuing as a complement to existing approaches. RNA-ensign is available at http://csb.cs.mcgill.ca/RNAensign. PMID:22941632
Adaptive Thouless-Anderson-Palmer approach to inverse Ising problems with quenched random fields.
Huang, Haiping; Kabashima, Yoshiyuki
2013-06-01
The adaptive Thouless-Anderson-Palmer equation is derived for inverse Ising problems in the presence of quenched random fields. We test the proposed scheme on Sherrington-Kirkpatrick, Hopfield, and random orthogonal models and find that the adaptive Thouless-Anderson-Palmer approach allows accurate inference of quenched random fields whose distribution can be either Gaussian or bimodal. In particular, another competitive method for inferring external fields, namely, the naive mean field method with diagonal weights, is compared and discussed. PMID:23848649
Design of Field Experiments for Adaptive Sampling of the Ocean with Autonomous Vehicles
NASA Astrophysics Data System (ADS)
Zheng, H.; Ooi, B. H.; Cho, W.; Dao, M. H.; Tkalich, P.; Patrikalakis, N. M.
2010-05-01
Due to the highly non-linear and dynamical nature of oceanic phenomena, the predictive capability of various ocean models depends on the availability of operational data. A practical method to improve the accuracy of the ocean forecast is to use a data assimilation methodology to combine in-situ measured and remotely acquired data with numerical forecast models of the physical environment. Autonomous surface and underwater vehicles with various sensors are economic and efficient tools for exploring and sampling the ocean for data assimilation; however there is an energy limitation to such vehicles, and thus effective resource allocation for adaptive sampling is required to optimize the efficiency of exploration. In this paper, we use physical oceanography forecasts of the coastal zone of Singapore for the design of a set of field experiments to acquire useful data for model calibration and data assimilation. The design process of our experiments relied on the oceanography forecast including the current speed, its gradient, and vorticity in a given region of interest for which permits for field experiments could be obtained and for time intervals that correspond to strong tidal currents. Based on these maps, resources available to our experimental team, including Autonomous Surface Craft (ASC) are allocated so as to capture the oceanic features that result from jets and vortices behind bluff bodies (e.g., islands) in the tidal current. Results are summarized from this resource allocation process and field experiments conducted in January 2009.
Learning approach to sampling optimization: Applications in astrodynamics
NASA Astrophysics Data System (ADS)
Henderson, Troy Allen
A new, novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA) is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models---with varying complexity---of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.
Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil
France, Brian; Bell, William; Chang, Emily; Scholten, Trudy
2015-01-01
Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analysis that must be performed during anthrax spore remediation. PMID:26714315
Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil.
France, Brian; Bell, William; Chang, Emily; Scholten, Trudy
2015-01-01
Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analysis that must be performed during anthrax spore remediation. PMID:26714315
NASA Astrophysics Data System (ADS)
Kust, German; Andreeva, Olga
2015-04-01
A number of new concepts and paradigms appeared during last decades, such as sustainable land management (SLM), climate change (CC) adaptation, environmental services, ecosystem health, and others. All of these initiatives still not having the common scientific platform although some agreements in terminology were reached, schemes of links and feedback loops created, and some models developed. Nevertheless, in spite of all these scientific achievements, the land related issues are still not in the focus of CC adaptation and mitigation. The last did not grow much beyond the "greenhouse gases" (GHG) concept, which makes land degradation as the "forgotten side of climate change" The possible decision to integrate concepts of climate and desertification/land degradation could be consideration of the "GHG" approach providing global solution, and "land" approach providing local solution covering other "locally manifesting" issues of global importance (biodiversity conservation, food security, disasters and risks, etc.) to serve as a central concept among those. SLM concept is a land-based approach, which includes the concepts of both ecosystem-based approach (EbA) and community-based approach (CbA). SLM can serve as in integral CC adaptation strategy, being based on the statement "the more healthy and resilient the system is, the less vulnerable and more adaptive it will be to any external changes and forces, including climate" The biggest scientific issue is the methods to evaluate the SLM and results of the SLM investments. We suggest using the approach based on the understanding of the balance or equilibrium of the land and nature components as the major sign of the sustainable system. Prom this point of view it is easier to understand the state of the ecosystem stress, size of the "health", range of adaptive capacity, drivers of degradation and SLM nature, as well as the extended land use, and the concept of environmental land management as the improved SLM approach
Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach
2014-01-01
Background Cross-cultural adaptation is a necessary process to effectively use existing instruments in other cultural and language settings. The process of cross-culturally adapting, including translation, of existing instruments is considered a critical set to establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice in achieving cultural and semantic equivalence of the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods The Iowa Infant Feeding Attitudes Scale, Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted utilizing a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn’s recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin’s back-translation approach was utilized followed by the committee review to address any discrepancies that emerged from translation. Results Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions Undertaking rigorous steps to effectively ensure cross-cultural adaptation increases our confidence that the conclusions we make based on our self-report instrument(s) will be stronger. In this way, our aim to achieve strong cross-cultural adaptation of our consolidated instruments was achieved while also providing a clear framework for other researchers choosing to utilize existing instruments for work in other cultural, geographic and population
NASA Astrophysics Data System (ADS)
Klein, R.; Gordon, E.
2010-12-01
Scholars and policy analysts often contend that an effective climate adaptation strategy must entail "mainstreaming," or incorporating responses to possible climate impacts into existing planning and management decision frameworks. Such an approach, however, makes it difficult to assess the degree to which decisionmaking entities are engaging in adaptive activities that may or may not be explicitly framed around a changing climate. For example, a drought management plan may not explicitly address climate change, but the activities and strategies outlined in it may reduce vulnerabilities posed by a variable and changing climate. Consequently, to generate a strategic climate adaptation plan requires identifying the entire suite of activities that are implicitly linked to climate and may affect adaptive capacity within the system. Here we outline a novel, two-pronged approach, leveraging social science methods, to understanding adaptation throughout state government in Colorado. First, we conducted a series of interviews with key actors in state and federal government agencies, non-governmental organizations, universities, and other entities engaged in state issues. The purpose of these interviews was to elicit information about current activities that may affect the state’s adaptive capacity and to identify future climate-related needs across the state. Second, we have developed an interactive database cataloging organizations, products, projects, and people actively engaged in adaptive planning and policymaking that are relevant to the state of Colorado. The database includes a wiki interface, helping create a dynamic component that will enable frequent updating as climate-relevant information emerges. The results of this project are intended to paint a clear picture of sectors and agencies with higher and lower levels of adaptation awareness and to provide a roadmap for the next gubernatorial administration to pursue a more sophisticated climate adaptation agenda
The Application of Adaptive Sampling and Analysis Program (ASAP) Techniques to NORM Sites
Johnson, Robert; Smith, Karen P.; Quinn, John
1999-10-29
The results from the Michigan demonstration establish that this type of approach can be very effective for NORM sites. The advantages include (1) greatly reduced per sample analytical costs; (2) a reduced reliance on soil sampling and ex situ gamma spectroscopy analyses; (3) the ability to combine characterization with remediation activities in one fieldwork cycle; (4) improved documentation; and (5) ultimately better remediation, as measured by greater precision in delineating soils that are not in compliance with requirements from soils that are in compliance. In addition, the demonstration showed that the use of real-time technologies, such as the RadInSoil, can facilitate the implementation of a Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM)-based final status survey program
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding assess URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model
NASA Technical Reports Server (NTRS)
Mathe, Nathalie; Chen, James
1994-01-01
Rapid and effective to information in large electronic documentation systems can be facilitated if information relevant in an individual user's content can be automatically supplied to this user. However most of this knowledge on contextual relevance is not found within the contents of documents, it is rather established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge nor training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the • number of samples required to achieve a specified confidence in characterization and clearance decisions • confidence in making characterization and clearance decisions for a specified number of samples for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phrase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
Novel Sample-handling Approach for XRD Analysis with Minimal Sample Preparation
NASA Technical Reports Server (NTRS)
Sarrazin, P.; Chipera, S.; Bish, D.; Blake, D.; Feldman, S.; Vaniman, D.; Bryson, C.
2004-01-01
Sample preparation and sample handling are among the most critical operations associated with X-ray diffraction (XRD) analysis. These operations require attention in a laboratory environment, but they become a major constraint in the deployment of XRD instruments for robotic planetary exploration. We are developing a novel sample handling system that dramatically relaxes the constraints on sample preparation by allowing characterization of coarse-grained material that would normally be impossible to analyze with conventional powder-XRD techniques.
An enhanced adaptive management approach for remediation of legacy mercury in the South River.
Foran, Christy M; Baker, Kelsie M; Grosso, Nancy R; Linkov, Igor
2015-01-01
Uncertainties about future conditions and the effects of chosen actions, as well as increasing resource scarcity, have been driving forces in the utilization of adaptive management strategies. However, many applications of adaptive management have been criticized for a number of shortcomings, including a limited ability to learn from actions and a lack of consideration of stakeholder objectives. To address these criticisms, we supplement existing adaptive management approaches with a decision-analytical approach that first informs the initial selection of management alternatives and then allows for periodic re-evaluation or phased implementation of management alternatives based on monitoring information and incorporation of stakeholder values. We describe the application of this enhanced adaptive management (EAM) framework to compare remedial alternatives for mercury in the South River, based on an understanding of the loading and behavior of mercury in the South River near Waynesboro, VA. The outcomes show that the ranking of remedial alternatives is influenced by uncertainty in the mercury loading model, by the relative importance placed on different criteria, and by cost estimates. The process itself demonstrates that a decision model can link project performance criteria, decision-maker preferences, environmental models, and short- and long-term monitoring information with management choices to help shape a remediation approach that provides useful information for adaptive, incremental implementation. PMID:25665032
120nm resolution in thick samples with structured illumination and adaptive optics
NASA Astrophysics Data System (ADS)
Thomas, Benjamin; Sloan, Megan; Wolstenholme, Adrian J.; Kner, Peter
2014-03-01
μLinear Structured Illumination Microscopy (SIM) provides a two-fold increase over the diffraction limited resolution. SIM produces excellent images with 120nm resolution in tissue culture cells in two and three dimensions. For SIM to work correctly, the point spread function (PSF) and optical transfer function (OTF) must be known, and, ideally, should be unaberrated. When imaging through thick samples, aberrations will be introduced into the optical system which will reduce the peak intensity and increase the width of the PSF. This will lead to reduced resolution and artifacts in SIM images. Adaptive optics can be used to correct the optical wavefront restoring the PSF to its unaberrated state, and AO has been used in several types of fluorescence microscopy. We demonstrate that AO can be used with SIM to achieve 120nm resolution through 25m of tissue by imaging through the full thickness of an adult C. elegans roundworm. The aberrations can be corrected over a 25μm × 45μm field of view with one wavefront correction setting, demonstrating that AO can be used effectively with widefield superresolution techniques.
Preschoolers' narrative representations and childhood adaptation in an ethnoracially diverse sample.
Grey, Izabela K; Yates, Tuppett M
2014-01-01
This investigation evaluated relations between preschoolers' representational content and coherence in the MacArthur Story Stem Battery (MSSB) at age four as related to child adjustment at age six. A community sample of 250 preschoolers (50% female; M(age) = 49.05 months, SD = 2.9; 46% Hispanic, 18% Black, 11.2% White, 0.4% Asian, and 24.4% multiracial) completed assessments of relational representations using the MSSB at age four and of child adjustment at age six, including a measure of child-reported depressive symptomatology and observer ratings of child aggression during a Bobo doll task and inhibitory control during a delay of gratification task. Regression analyses demonstrated prospective relations between negative mother representation and less inhibitory control, negative child representation and higher aggression, and narrative coherence and more inhibitory control. Interactive analyses revealed relations between negative mother representation and difficulties in inhibitory control among White children and weaker relations among Black children. Prospective relations between narrative coherence and increased inhibitory control were less pronounced for Hispanic children. Findings indicate that preschoolers' narratives can reveal the thematic content and structural coherence of their internalized beliefs and expectations of self and (m)other. Associations between representations and children's adaptation have clear implications for representational processes and interventions in development. PMID:25299891
NASA Astrophysics Data System (ADS)
Ziaja, Beata; Saxena, Vikrant; Son, Sang-Kil; Medvedev, Nikita; Barbrel, Benjamin; Woloncewicz, Bianca; Stransky, Michal
2016-05-01
We report on the kinetic Boltzmann approach adapted for simulations of highly ionized matter created from a solid by its x-ray irradiation. X rays can excite inner-shell electrons, which leads to the creation of deeply lying core holes. Their relaxation, especially in heavier elements, can take complicated paths, leading to a large number of active configurations. Their number can be so large that solving the set of respective evolution equations becomes computationally inefficient and another modeling approach should be used instead. To circumvent this complexity, the commonly used continuum models employ a superconfiguration scheme. Here, we propose an alternative approach which still uses "true" atomic configurations but limits their number by restricting the sample relaxation to the predominant relaxation paths. We test its reliability, performing respective calculations for a bulk material consisting of light atoms and comparing the results with a full calculation including all relaxation paths. Prospective application for heavy elements is discussed.
Ziaja, Beata; Saxena, Vikrant; Son, Sang-Kil; Medvedev, Nikita; Barbrel, Benjamin; Woloncewicz, Bianca; Stransky, Michal
2016-05-01
We report on the kinetic Boltzmann approach adapted for simulations of highly ionized matter created from a solid by its x-ray irradiation. X rays can excite inner-shell electrons, which leads to the creation of deeply lying core holes. Their relaxation, especially in heavier elements, can take complicated paths, leading to a large number of active configurations. Their number can be so large that solving the set of respective evolution equations becomes computationally inefficient and another modeling approach should be used instead. To circumvent this complexity, the commonly used continuum models employ a superconfiguration scheme. Here, we propose an alternative approach which still uses "true" atomic configurations but limits their number by restricting the sample relaxation to the predominant relaxation paths. We test its reliability, performing respective calculations for a bulk material consisting of light atoms and comparing the results with a full calculation including all relaxation paths. Prospective application for heavy elements is discussed. PMID:27300998
NASA Astrophysics Data System (ADS)
Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.
2015-10-01
We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
Neural Network Aided Adaptive Extended Kalman Filtering Approach for DGPS Positioning
NASA Astrophysics Data System (ADS)
Jwo, Dah-Jing; Huang, Hung-Chih
2004-09-01
The extended Kalman filter, when employed in the GPS receiver as the navigation state estimator, provides optimal solutions if the noise statistics for the measurement and system are completely known. In practice, the noise varies with time, which results in performance degradation. The covariance matching method is a conventional adaptive approach for estimation of noise covariance matrices. The technique attempts to make the actual filter residuals consistent with their theoretical covariance. However, this innovation-based adaptive estimation shows very noisy results if the window size is small. To resolve the problem, a multilayered neural network is trained to identify the measurement noise covariance matrix, in which the back-propagation algorithm is employed to iteratively adjust the link weights using the steepest descent technique. Numerical simulations show that based on the proposed approach the adaptation performance is substantially enhanced and the positioning accuracy is substantially improved.
A Time-Critical Adaptive Approach for Visualizing Natural Scenes on Different Devices
Dong, Tianyang; Liu, Siyuan; Xia, Jiajia; Fan, Jing; Zhang, Ling
2015-01-01
To automatically adapt to various hardware and software environments on different devices, this paper presents a time-critical adaptive approach for visualizing natural scenes. In this method, a simplified expression of a tree model is used for different devices. The best rendering scheme is intelligently selected to generate a particular scene by estimating the rendering time of trees based on their visual importance. Therefore, this approach can ensure the reality of natural scenes while maintaining a constant frame rate for their interactive display. To verify its effectiveness and flexibility, this method is applied in different devices, such as a desktop computer, laptop, iPad and smart phone. Applications show that the method proposed in this paper can not only adapt to devices with different computing abilities and system resources very well but can also achieve rather good visual realism and a constant frame rate for natural scenes. PMID:25723177
Adaptation to heat health risk among vulnerable urban residents: a multi-city approach
NASA Astrophysics Data System (ADS)
Wilhelmi, O.; Hayden, M.; Brenkert-Smith, H.
2010-12-01
Recent studies on climate impacts demonstrate that climate change will have differential consequences in the U.S. at the regional and local scales. Changing climate is predicted to increase the frequency, intensity and impacts of extreme heat events prompting the need to develop preparedness and adaptation strategies that reduce societal vulnerability. Central to understanding societal vulnerability, is population’s adaptive capacity, which, in turn, influences adaptation, the actual adjustments made to cope with the impacts from current and future hazardous heat events. To-date, few studies have considered the complexity of vulnerability and its relationship to capacity to cope with or adapt to extreme heat. In this presentation we will discuss a pilot project conducted in 2009 in Phoenix, AZ, which explored urban societal vulnerability and adaptive capacity to extreme heat in several neighborhoods. Household-level surveys revealed differential adaptive capacity among the neighborhoods and social groups. In response to this pilot project, and in order to develop a methodological framework that could be used across locales, we also present an expansion of this project into Houston, TX and Toronto, Canada with the goal of furthering our understanding of adaptive capacity to extreme heat in very different urban settings. This presentation will communicate the results of the extreme heat vulnerability survey in Phoenix as well as the multidisciplinary, multi-model framework that will be used to explore urban vulnerability and adaptation strategies to heat in Houston and Toronto. We will outline challenges and opportunities in furthering our understanding of adaptive capacity and the need to approach these problems from a macro to a micro level.
ERIC Educational Resources Information Center
Rule, Audrey C.; Barrera, Manuel T., III
2008-01-01
Integration of subject areas with technology and thinking skills is a way to help teachers cope with today's overloaded curriculum and to help students see the connectedness of different curriculum areas. This study compares three authentic approaches to teaching a science unit on bird adaptations for habitat that integrate thinking skills and…
ERIC Educational Resources Information Center
Meisels, Samuel J.; Atkins-Burnett, Sally; Nicholson, Julie
Prepared in support of the Early Childhood Longitudinal Study (ECLS), which will examine children's early school experiences beginning with kindergarten, this working paper focuses on research regarding the measurement of young children's social competence, adaptive behavior, and approaches to learning. The paper reviews the key variables and…
AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT
A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. pplication of this approach to two bench-mark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...
Adaptive leadership and person-centered care: a new approach to solving problems.
Corazzini, Kirsten N; Anderson, Ruth A
2014-01-01
Successfully transitioning to person-centered care in nursing homes requires a new approach to solving care issues. The adaptive leadership framework suggests that expert providers must support frontline caregivers in their efforts to develop high-quality, person-centered solutions. PMID:25237881
ERIC Educational Resources Information Center
Storey, Brian; Butler, Joy
2013-01-01
Background: This article draws on the literature relating to game-centred approaches (GCAs), such as Teaching Games for Understanding, and dynamical systems views of motor learning to demonstrate a convergence of ideas around games as complex adaptive learning systems. This convergence is organized under the title "complexity thinking"…
EXSPRT: An Expert Systems Approach to Computer-Based Adaptive Testing.
ERIC Educational Resources Information Center
Frick, Theodore W.; And Others
Expert systems can be used to aid decision making. A computerized adaptive test (CAT) is one kind of expert system, although it is not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. EXSPRT-R uses random selection of test items,…
An Enhanced Approach to Combine Item Response Theory with Cognitive Diagnosis in Adaptive Testing
ERIC Educational Resources Information Center
Wang, Chun; Zheng, Chanjin; Chang, Hua-Hua
2014-01-01
Computerized adaptive testing offers the possibility of gaining information on both the overall ability and cognitive profile in a single assessment administration. Some algorithms aiming for these dual purposes have been proposed, including the shadow test approach, the dual information method (DIM), and the constraint weighted method. The…
NASA Technical Reports Server (NTRS)
Starks, Scott; Abdel-Hafeez, Saleh; Usevitch, Bryan
1997-01-01
This paper discusses the implementation of a fuzzy logic system using an ASICs design approach. The approach is based upon combining the inherent advantages of symmetric triangular membership functions and fuzzy singleton sets to obtain a novel structure for fuzzy logic system application development. The resulting structure utilizes a fuzzy static RAM to store the rule-base and the end-points of the triangular membership functions. This provides advantages over other approaches in which all sampled values of membership functions for all universes must be stored. The fuzzy coprocessor structure implements the fuzzification and defuzzification processes through a two-stage parallel pipeline architecture which is capable of executing complex fuzzy computations in less than 0.55us with an accuracy of more than 95%, thus making it suitable for a wide range of applications. Using the approach presented in this paper, a fuzzy logic rule-base can be directly downloaded via a host processor to an onchip rule-base memory with a size of 64 words. The fuzzy coprocessor's design supports up to 49 rules for seven fuzzy membership functions associated with each of the chip's two input variables. This feature allows designers to create fuzzy logic systems without the need for additional on-board memory. Finally, the paper reports on simulation studies that were conducted for several adaptive filter applications using the least mean squared adaptive algorithm for adjusting the knowledge rule-base.
Selecting a Sample for Your Experiment: A Non-Random Stratified Sampling Approach
ERIC Educational Resources Information Center
Tipton, Elizabeth
2012-01-01
The purpose of this paper is to develop a more general method for sample recruitment in experiments that is purposive (not random) and that results in a sample that is compositionally similar to the generalization population. This work builds on Tipton et al. (2011) by offering solutions to a larger class of problems than the non-overlapping…
The role of adaptive management as an operational approach for resource management agencies
Johnson, B.L.
1999-01-01
In making resource management decisions, agencies use a variety of approaches that involve different levels of political concern, historical precedence, data analyses, and evaluation. Traditional decision-making approaches have often failed to achieve objectives for complex problems in large systems, such as the Everglades or the Colorado River. I contend that adaptive management is the best approach available to agencies for addressing this type of complex problem, although its success has been limited thus far. Traditional decision-making approaches have been fairly successful at addressing relatively straightforward problems in small, replicated systems, such as management of trout in small streams or pulp production in forests. However, this success may be jeopardized as more users place increasing demands on these systems. Adaptive management has received little attention from agencies for addressing problems in small-scale systems, but I suggest that it may be a useful approach for creating a holistic view of common problems and developing guidelines that can then be used in simpler, more traditional approaches to management. Although adaptive management may be more expensive to initiate than traditional approaches, it may be less expensive in the long run if it leads to more effective management. The overall goal of adaptive management is not to maintain an optimal condition of the resource, but to develop an optimal management capacity. This is accomplished by maintaining ecological resilience that allows the system to react to inevitable stresses, and generating flexibility in institutions and stakeholders that allows managers to react when conditions change. The result is that, rather than managing for a single, optimal state, we manage within a range of acceptable outcomes while avoiding catastrophes and irreversible negative effects. Copyright ?? 1999 by The Resilience Alliance.
Sroufe, L A; Egeland, B; Kreutzer, T
1990-10-01
2 strategies were used to investigate the continued impact of early experience and adaptation given subsequent experience and/or developmental change in a poverty sample (N = 190). Groups were defined whose adaptation was similar during the preschool years but consistently different earlier; then these 2 groups were compared in elementary school. In addition, a series of regression analyses was performed in which variance accounted for by near-in or contemporary predictors of adaptation in middle childhood was removed before adding earlier adaptation in subsequent steps. Children showing positive adaptation in the infant/toddler period showed greater rebound in the elementary school years, despite poor functioning in the preschool period. Regression analyses revealed some incremental power of early predictors with intermediate predictors removed. The results were interpreted as supporting Bowlby's thesis that adaptation is always a product of both developmental history and current circumstances. While this research cannot resolve such a complicated issue, it does point to the need for complex formulations to guide research on individual development. PMID:2245730
Experimental Approaches to Microarray Analysis of Tumor Samples
ERIC Educational Resources Information Center
Furge, Laura Lowe; Winter, Michael B.; Meyers, Jacob I.; Furge, Kyle A.
2008-01-01
Comprehensive measurement of gene expression using high-density nucleic acid arrays (i.e. microarrays) has become an important tool for investigating the molecular differences in clinical and research samples. Consequently, inclusion of discussion in biochemistry, molecular biology, or other appropriate courses of microarray technologies has…
NEW APPROACHES TO THE PRESERVATION OF CONTAMINANTS IN WATER SAMPLES
The potential of antibiotics, chemical biocides and lytic enzymes in preserving nutrients, biological oxygen demand and oil and grease in water and sewage effluents was studied. Preliminary studies concerning the effect of drugs on cell growth and oxygen utilization in samples st...
An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.
Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf
2016-01-01
Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762
An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures
Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf
2016-01-01
Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762
ERIC Educational Resources Information Center
Hess, Markus; Scheithauer, Herbert; Kleiber, Dieter; Wille, Nora; Erhart, Michael; Ravens-Sieberer, Ulrike
2014-01-01
The Social Skills Rating System (SSRS) developed by Gresham and Elliott (1990) is a multirater, norm-referenced instrument measuring social skills and adaptive behavior in preschool children. The aims of the present study were (a) to test the factorial structure of the Parent Form of the SSRS for the first time with a German preschool sample (391…
ERIC Educational Resources Information Center
Blais, Jean-Guy; Raiche, Gilles
This paper examines some characteristics of the statistics associated with the sampling distribution of the proficiency level estimate when the Rasch model is used. These characteristics allow the judgment of the meaning to be given to the proficiency level estimate obtained in adaptive testing, and as a consequence, they can illustrate the…
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-01
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/. PMID:25083512
An adaptive online learning approach for Support Vector Regression: Online-SVR-FID
NASA Astrophysics Data System (ADS)
Liu, Jie; Zio, Enrico
2016-08-01
Support Vector Regression (SVR) is a popular supervised data-driven approach for building empirical models from available data. Like all data-driven methods, under non-stationary environmental and operational conditions it needs to be provided with adaptive learning capabilities, which might become computationally burdensome with large datasets cumulating dynamically. In this paper, a cost-efficient online adaptive learning approach is proposed for SVR by combining Feature Vector Selection (FVS) and Incremental and Decremental Learning. The proposed approach adaptively modifies the model only when different pattern drifts are detected according to proposed criteria. Two tolerance parameters are introduced in the approach to control the computational complexity, reduce the influence of the intrinsic noise in the data and avoid the overfitting problem of SVR. Comparisons of the prediction results is made with other online learning approaches e.g. NORMA, SOGA, KRLS, Incremental Learning, on several artificial datasets and a real case study concerning time series prediction based on data recorded on a component of a nuclear power generation system. The performance indicators MSE and MARE computed on the test dataset demonstrate the efficiency of the proposed online learning method.
NASA Astrophysics Data System (ADS)
Yang, Yueneng; Wu, Jie; Zheng, Wei
2013-04-01
This paper presents a novel approach for station-keeping control of a stratospheric airship platform in the presence of parametric uncertainty and external disturbance. First, conceptual design of the stratospheric airship platform is introduced, including the target mission, configuration, energy sources, propeller and payload. Second, the dynamics model of the airship platform is presented, and the mathematical model of its horizontal motion is derived. Third, a fuzzy adaptive backstepping control approach is proposed to develop the station-keeping control system for the simplified horizontal motion. The backstepping controller is designed assuming that the airship model is accurately known, and a fuzzy adaptive algorithm is used to approximate the uncertainty of the airship model. The stability of the closed-loop control system is proven via the Lyapunov theorem. Finally, simulation results illustrate the effectiveness and robustness of the proposed control approach.
A new approach for designing self-organizing systems and application to adaptive control
NASA Technical Reports Server (NTRS)
Ramamoorthy, P. A.; Zhang, Shi; Lin, Yueqing; Huang, Song
1993-01-01
There is tremendous interest in the design of intelligent machines capable of autonomous learning and skillful performance under complex environments. A major task in designing such systems is to make the system plastic and adaptive when presented with new and useful information and stable in response to irrelevant events. A great body of knowledge, based on neuro-physiological concepts, has evolved as a possible solution to this problem. Adaptive resonance theory (ART) is a classical example under this category. The system dynamics of an ART network is described by a set of differential equations with nonlinear functions. An approach for designing self-organizing networks characterized by nonlinear differential equations is proposed.
Southam-Gerow, Michael A.; Hourigan, Shannon E.; Allin, Robert B.
2009-01-01
This paper describes the application of a university-community partnership model to the problem of adapting evidence-based treatment approaches in a community mental health setting. Background on partnership research is presented, with consideration of methodological and practical issues related to this kind of research. Then, a rationale for using partnerships as a basis for conducting mental health treatment research is presented. Finally, an ongoing partnership research project concerned with the adaptation of evidence-based mental health treatments for childhood internalizing problems in community settings is presented, with preliminary results of the ongoing effort discussed. PMID:18697917
An Efficient Adaptive Angle-Doppler Compensation Approach for Non-Sidelooking Airborne Radar STAP.
Shen, Mingwei; Yu, Jia; Wu, Di; Zhu, Daiyin
2015-01-01
In this study, the effects of non-sidelooking airborne radar clutter dispersion on space-time adaptive processing (STAP) is considered, and an efficient adaptive angle-Doppler compensation (EAADC) approach is proposed to improve the clutter suppression performance. In order to reduce the computational complexity, the reduced-dimension sparse reconstruction (RDSR) technique is introduced into the angle-Doppler spectrum estimation to extract the required parameters for compensating the clutter spectral center misalignment. Simulation results to demonstrate the effectiveness of the proposed algorithm are presented. PMID:26053755
Approaches to retrospective sampling for longitudinal transition regression models
Hunsberger, Sally; Albert, Paul S.; Thoma, Marie
2016-01-01
For binary diseases that relapse and remit, it is often of interest to estimate the effect of covariates on the transition process between disease states over time. The transition process can be characterized by modeling the probability of the binary event given the individual’s history. Designing studies that examine the impact of time varying covariates over time can lead to collection of extensive amounts of data. Sometimes it may be possible to collect and store tissue, blood or images and retrospectively analyze this covariate information. In this paper we consider efficient sampling designs that do not require biomarker measurements on all subjects. We describe appropriate estimation methods for transition probabilities and functions of these probabilities, and evaluate efficiency of the estimates from the proposed sampling designs. These new methods are illustrated with data from a longitudinal study of bacterial vaginosis, a common relapsing-remitting vaginal infection of women of child bearing age.
Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa
2013-01-01
Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9 % prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to un-sampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.
Mass Spectrometry Imaging Using the Stretched Sample Approach
Zimmerman, Tyler A.; Rubakhin, Stanislav S.; Sweedler, Jonathan V.
2011-01-01
Matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI) can determine tissue localization for a variety of analytes with high sensitivity, chemical specificity, and spatial resolution. MS image quality typically depends on the MALDI matrix application method used, particularly when the matrix solution or powder is applied directly to the tissue surface. Improper matrix application results in spatial redistribution of analytes and reduced MS signal quality. Here we present a stretched sample imaging protocol that removes the dependence of MS image quality on the matrix application process and improves analyte extraction and sample desalting. First, the tissue sample is placed on a monolayer of solid support beads that are embedded in a hydrophobic membrane. Stretching the membrane fragments the tissue into thousands of nearly single-cell sized islands, with the pieces physically isolated from each other by the membrane. This spatial isolation prevents analyte transfer between beads, allowing for longer exposure of the tissue fragments to the MALDI matrix, thereby improving detectability of small analyte quantities without sacrificing spatial resolution. When using this method to reconstruct chemical images, complications result from non-uniform stretching of the supporting membrane. Addressing this concern, several computational tools enable automated data acquisition at individual bead locations and allow reconstruction of ion images corresponding to the original spatial conformation of the tissue section. Using mouse pituitary, we demonstrate the utility of this stretched imaging technique for characterizing peptide distributions in heterogeneous tissues at nearly single-cell resolution. PMID:20680608
Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures
Calyam, Prasad
2014-09-15
The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.
A Direct Adaptive Control Approach in the Presence of Model Mismatch
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.; Tao, Gang; Khong, Thuan
2009-01-01
This paper considers the problem of direct model reference adaptive control when the plant-model matching conditions are violated due to abnormal changes in the plant or incorrect knowledge of the plant's mathematical structure. The approach consists of direct adaptation of state feedback gains for state tracking, and simultaneous estimation of the plant-model mismatch. Because of the mismatch, the plant can no longer track the state of the original reference model, but may be able to track a new reference model that still provides satisfactory performance. The reference model is updated if the estimated plant-model mismatch exceeds a bound that is determined via robust stability and/or performance criteria. The resulting controller is a hybrid direct-indirect adaptive controller that offers asymptotic state tracking in the presence of plant-model mismatch as well as parameter deviations.