NASA Astrophysics Data System (ADS)
Khodabakhshi, M.; Jafarpour, B.
2013-12-01
Characterization of complex geologic patterns that create preferential flow paths in certain reservoir systems requires higher-order geostatistical modeling techniques. Multipoint statistics (MPS) provides a flexible grid-based approach for simulating such complex geologic patterns from a conceptual prior model known as a training image (TI). In this approach, a stationary TI that encodes the higher-order spatial statistics of the expected geologic patterns is used to represent the shape and connectivity of the underlying lithofacies. While MPS is quite powerful for describing complex geologic facies connectivity, the nonlinear and complex relation between the flow data and facies distribution makes flow data conditioning quite challenging. We propose an adaptive technique for conditioning facies simulation from a prior TI to nonlinear flow data. Non-adaptive strategies for conditioning facies simulation to flow data can involve many forward flow model solutions, which can be computationally very demanding. To improve the conditioning efficiency, we develop an adaptive sampling approach through a data feedback mechanism based on the sampling history. In this approach, after a short burn-in period in which unconditional samples are generated and passed through an acceptance/rejection test, an ensemble of accepted samples is identified and used to generate a facies probability map. This facies probability map contains the common features of the accepted samples and provides conditioning information about facies occurrence in each grid block, which is used to guide the conditional facies simulation process. As the sampling progresses, the initial probability map is updated according to the collective information about the facies distribution in the chain of accepted samples to increase the acceptance rate and efficiency of the conditioning. This conditioning process can be viewed as an optimization approach where each new sample is proposed based on the sampling history to improve the data mismatch objective function. We extend the application of this adaptive conditioning approach to the case where multiple training images are proposed to describe the geologic scenario in a given formation. We discuss the advantages and limitations of the proposed adaptive conditioning scheme and use numerical experiments from fluvial channel formations to demonstrate its applicability and performance compared to non-adaptive conditioning techniques.
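The feedback mechanism described above fits in a few lines. Below is a minimal sketch of the accept/reject loop with a probability-map update, assuming hypothetical callables mps_simulate (an MPS engine that optionally honors a facies probability map) and simulate_flow (the forward flow model); the actual thresholds, MPS engine, and update schedule are not specified in the abstract.

```python
import numpy as np

def adaptive_conditioning(mps_simulate, simulate_flow, d_obs, tol,
                          n_iter=500, burn_in=50):
    """Sketch of adaptive flow-data conditioning of MPS facies realizations:
    mps_simulate(prob_map) draws a facies grid from the training image,
    optionally guided by a facies probability map (None = unconditional);
    simulate_flow(facies) returns predicted flow data for a candidate."""
    accepted, prob_map = [], None
    for i in range(n_iter):
        candidate = mps_simulate(prob_map)              # propose a realization
        mismatch = np.linalg.norm(simulate_flow(candidate) - d_obs)
        if mismatch < tol:                              # acceptance/rejection test
            accepted.append(candidate)
        if i >= burn_in and accepted:                   # distill sampling history
            prob_map = np.mean(accepted, axis=0)        # facies frequency per block
    return accepted, prob_map
```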
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data has an upper frequency. To reduce the upload frequency, most existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they cannot ensure that the upload frequency stays within the upper frequency. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms: an adaptive sampling probability algorithm that computes sampling probabilities for different sensed values, and a multiple uniform sampling algorithm that provides uniform sampling for values in different intervals. Experiments based on a real dataset show that the proposed approach achieves higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion explains the underlying reason for the high performance of the proposed approach.
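As a rough illustration of the information-aware idea, the sketch below assigns higher keep-probabilities to rare (information-rich) sensed values while keeping the expected upload rate within the link's upper frequency. The formula is our own stand-in, not the paper's algorithm; f_sense and f_max are assumed names for the sensing rate and the upload-frequency limit.

```python
import numpy as np

def sampling_probabilities(history, bins, f_sense, f_max):
    """Toy information-aware sampling: rare sensed values carry more Shannon
    information, so they get higher keep-probabilities, scaled so that the
    expected upload rate stays within the link's upper frequency f_max."""
    counts, _ = np.histogram(history, bins=bins)
    p_val = counts / counts.sum()                    # empirical value distribution
    info = -np.log(np.clip(p_val, 1e-12, None))      # information per interval
    q = info / info.sum()
    scale = f_max / (f_sense * np.sum(p_val * q))    # expected-rate constraint
    return np.clip(scale * q, 0.0, 1.0)
```

A sensor would then upload a new reading falling in interval i with probability equal to the i-th entry of the returned array.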
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies that emphasize 3D non-uniform inspection capability: (a) adaptive sampling, (b) local adjustment sampling, and (c) finite element centroid sampling. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this strategy: one uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that initial point sets preprocessed by adaptive sampling using triangle patches are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches. Adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
A Novel Approach to Adaptive Flow Separation Control
2016-09-03
In particular, it considers control of flow separation over a NACA-0025 airfoil using microjet actuators and develops Adaptive Sampling Based Model Predictive Control (Adaptive SBMPC), a novel approach to Nonlinear Model Predictive Control that applies the Minimal Resource Allocation Network…
Spatial adaptive sampling in multiscale simulation
NASA Astrophysics Data System (ADS)
Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.
2014-07-01
In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
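A one-dimensional caricature of the idea follows, with a generic (and costly) fine_scale callable standing in for an MD closure and x assumed to be a sorted NumPy array of macro grid coordinates: evaluate the model on a coarse spatial stride and recursively refine only where interpolation over the macro domain cannot be trusted. The recursion and tolerance are illustrative, not the paper's scheme.

```python
import numpy as np

def adaptive_field(x, inputs, fine_scale, tol, stride=8):
    """Spatial adaptive sampling sketch (1-D macro domain, x sorted): run the
    costly fine-scale model on a coarse stride, then recursively refine only
    where spatial interpolation of its results cannot be trusted."""
    values = {i: fine_scale(inputs[i]) for i in range(0, len(x), stride)}
    if (len(x) - 1) not in values:
        values[len(x) - 1] = fine_scale(inputs[-1])

    def refine(a, b):
        if b - a < 2:
            return
        m = (a + b) // 2
        guess = np.interp(x[m], [x[a], x[b]], [values[a], values[b]])
        truth = fine_scale(inputs[m])            # one verification call
        if abs(guess - truth) > tol:             # interpolation not trusted here
            values[m] = truth
            refine(a, m)
            refine(m, b)

    anchors = sorted(values)
    for a, b in zip(anchors[:-1], anchors[1:]):
        refine(a, b)
    keys = sorted(values)
    return np.interp(x, x[keys], [values[k] for k in keys])
```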
NASA Astrophysics Data System (ADS)
Saqib, Najam us; Faizan Mysorewala, Muhammad; Cheded, Lahouari
2017-12-01
In this paper, we propose a novel monitoring strategy for a wireless sensor networks (WSNs)-based water pipeline network. Our strategy uses a multi-pronged approach to reduce energy consumption based on the use of two types of vibration sensors and pressure sensors, all having different energy levels, and a hierarchical adaptive sampling mechanism to determine the sampling frequency. The sampling rate of the sensors is adjusted according to the bandwidth of the vibration signal being monitored by using a wavelet-based adaptive thresholding scheme that calculates the new sampling frequency for the following cycle. In this multimodal sensing scheme, the duty-cycling approach is used for all sensors to reduce the sampling instances, such that the high-energy, high-precision (HE-HP) vibration sensors have low duty cycles, and the low-energy, low-precision (LE-LP) vibration sensors have high duty cycles. The low duty-cycling (HE-HP) vibration sensor adjusts the sampling frequency of the high duty-cycling (LE-LP) vibration sensor. The simulated test bed considered here consists of a water pipeline network that uses pressure and vibration sensors, with the latter having different energy consumptions and precision levels, at various locations in the network. This makes the scheme all the more useful for energy conservation in extended monitoring. It is shown that by using the novel features of our proposed scheme, a significant reduction in energy consumption is achieved and the leak is effectively detected by the sensor node that is closest to it. Finally, both the total energy consumed by monitoring and the time to detect the leak by a WSN node are computed, showing the superiority of our proposed hierarchical adaptive sampling algorithm over a non-adaptive sampling approach.
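The bandwidth-driven rate adaptation can be sketched with a plain Haar decomposition standing in for the paper's wavelet-based adaptive thresholding: estimate which octave bands carry the signal energy and set the next cycle's sampling frequency just above the corresponding Nyquist rate. All parameters below are illustrative.

```python
import numpy as np

def next_sampling_rate(window, fs, frac=0.99, f_min=0.1):
    """Estimate the occupied bandwidth of the last monitoring cycle from
    Haar detail energies per octave and return the sampling frequency for
    the next cycle (twice the highest octave needed to retain `frac` of
    the signal energy). A stand-in for the paper's thresholding scheme."""
    x = np.asarray(window, dtype=float)
    bands, f_hi = [], fs / 2.0
    while len(x) >= 2:
        if len(x) % 2:
            x = x[:-1]                                    # Haar needs even length
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)            # approximation coeffs
        bands.append((f_hi, float(np.sum(detail ** 2))))  # octave [f_hi/2, f_hi]
        f_hi /= 2.0
    total = sum(e for _, e in bands) + float(np.sum(x ** 2))
    kept, f_needed = float(np.sum(x ** 2)), f_min         # start from lowest band
    for f, e in sorted(bands, key=lambda b: b[0]):        # sweep octaves upward
        kept += e
        f_needed = f
        if kept >= frac * total:
            break
    return max(2.0 * f_needed, f_min)                     # Nyquist rate, next cycle
```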
Strategies for informed sample size reduction in adaptive controlled clinical trials
NASA Astrophysics Data System (ADS)
Arandjelović, Ognjen
2017-12-01
Clinical trial adaptation refers to any adjustment of the trial protocol after the onset of the trial. The main goal is to make the process of introducing new medical interventions to patients more efficient. The principal challenge, which remains an open research problem, lies in how adaptation should be performed so as to minimize the chance of distorting the outcome of the trial. In this paper, we propose a novel method for achieving this. Unlike most of the previously published work, our approach focuses on trial adaptation by sample size adjustment, i.e. by reducing the number of trial participants in a statistically informed manner. Our key idea is to select the sample subset for removal in a manner which minimizes the associated loss of information. We formalize this notion and describe three algorithms which approach the problem in different ways, respectively, using (i) repeated random draws, (ii) a genetic algorithm, and (iii) what we term pair-wise sample compatibilities. Experiments on simulated data demonstrate the effectiveness of all three approaches, with consistently superior performance exhibited by the method based on pair-wise sample compatibilities.
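Of the three algorithms, repeated random draws is the simplest to sketch. The version below scores each candidate removal subset by how little it perturbs the retained sample's mean and variance, a stand-in criterion for the paper's formal information-loss measure.

```python
import numpy as np

def reduce_sample(data, k_remove, n_draws=2000, rng=None):
    """Repeated-random-draws sketch: among many candidate subsets to remove,
    keep the reduction whose retained sample least perturbs the mean and
    variance (a proxy for minimal information loss)."""
    rng = np.random.default_rng() if rng is None else rng
    data = np.asarray(data, dtype=float)
    mu, var = data.mean(), data.var()
    best_keep, best_loss = None, np.inf
    for _ in range(n_draws):
        drop = rng.choice(data.size, size=k_remove, replace=False)
        keep = np.delete(data, drop)
        loss = (keep.mean() - mu) ** 2 + (keep.var() - var) ** 2
        if loss < best_loss:
            best_keep, best_loss = keep, loss
    return best_keep
```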
Pupil-segmentation-based adaptive optics for microscopy
NASA Astrophysics Data System (ADS)
Ji, Na; Milkie, Daniel E.; Betzig, Eric
2011-03-01
Inhomogeneous optical properties of biological samples make it difficult to obtain diffraction-limited resolution in depth. Correcting the sample-induced optical aberrations requires adaptive optics (AO). However, the direct wavefront-sensing approach commonly used in astronomy is not suitable for most biological samples because they strongly scatter light. We developed an image-based AO approach that is insensitive to sample scattering. By comparing images of the sample taken with different segments of the pupil illuminated, the local tilt in the wavefront is measured from image shift. The aberrated wavefront is then obtained either by measuring the local phase directly using interference or with phase reconstruction algorithms similar to those used in astronomical AO. We implemented this pupil-segmentation-based approach in a two-photon fluorescence microscope and demonstrated that diffraction-limited resolution can be recovered from nonbiological and biological samples.
Wu, Xiaolin; Zhang, Xiangjun; Wang, Xiaohan
2009-03-01
Recently, many researchers have started to challenge a long-standing practice of digital photography, oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform down-sampling in image space that nevertheless makes the sampling adaptive by spatially varying, directional low-pass prefiltering. The resulting down-sampled prefiltered image remains a conventional square sample grid and thus can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR measure at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach seems to suggest that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality given a tight bit budget.
Using continuous in-situ measurements to adaptively trigger urban storm water samples
NASA Astrophysics Data System (ADS)
Wong, B. P.; Kerkez, B.
2015-12-01
Until cost-effective in-situ sensors are available for biological parameters, nutrients, and metals, automated samplers will continue to be the primary source of reliable water quality measurements. Given limited sample bottles, however, autosamplers often obscure insights on nutrient sources and biogeochemical processes that would otherwise be captured using a continuous sampling approach. To that end, we evaluate the efficacy of a novel method to measure first-flush nutrient dynamics in flashy, urban watersheds. Our approach reduces the number of samples required to capture water quality dynamics by leveraging an internet-connected sensor node, which is equipped with a suite of continuous in-situ sensors and an automated sampler. To capture both the initial baseflow and storm concentrations, a cloud-hosted adaptive algorithm analyzes the high-resolution sensor data along with local weather forecasts to optimize a sampling schedule. The method was tested in a highly developed urban catchment in Ann Arbor, Michigan, and collected samples of nitrate, phosphorus, and suspended solids throughout several storm events. Results indicate that the watershed does not exhibit first-flush dynamics, a behavior that would have been obscured by a non-adaptive sampling approach.
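A toy version of such a trigger rule, with illustrative thresholds: spend bottles readily on a rising limb or ahead of forecast rain, and become stricter as the bottle supply depletes. The actual cloud-hosted algorithm and its inputs are not described at this level of detail in the abstract.

```python
def should_trigger(d_level_dt, bottles_left, rain_prob, rise_threshold=0.05):
    """Toy adaptive-trigger rule for an autosampler: sample on a rising limb
    or ahead of forecast rain, and hoard the last few bottles."""
    if bottles_left == 0:
        return False
    strictness = 1.0 + 1.0 / bottles_left        # stricter as bottles deplete
    on_rising_limb = d_level_dt > rise_threshold * strictness
    storm_forecast = rain_prob > 0.6 and bottles_left > 2
    return on_rising_limb or storm_forecast
```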
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS land cover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; and d) a unique U.S. asset for science product validation and verification.
Statistical Inference for Data Adaptive Target Parameters.
Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J
2016-05-01
Suppose one observes n i.i.d. copies of a random variable with a probability distribution known to be an element of a particular statistical model. In order to define our statistical target, we partition the sample into V equal-size subsamples and use this partitioning to define V splits, each consisting of an estimation sample (one of the V subsamples) and a complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V sample-specific target parameters. We present an estimator (and corresponding central limit theorem) for this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for greater involvement of statistical inference in problems that are increasingly addressed by clever, yet ad hoc, pattern-finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules, are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
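The V-fold construction translates directly into code. In the sketch below, learn_parameter and estimate are hypothetical user-supplied callables: the first maps a parameter-generating sample to a target parameter, the second evaluates that parameter on the held-out estimation sample.

```python
import numpy as np

def data_adaptive_target(data, learn_parameter, estimate, V=10, rng=None):
    """Sample-split data adaptive target parameter: for each split, the
    parameter-generating sample (V-1 folds) defines the target via
    `learn_parameter`, the held-out estimation sample supplies its estimate
    via `estimate`, and the final parameter averages over the V splits."""
    rng = np.random.default_rng() if rng is None else rng
    data = np.asarray(data)
    folds = np.array_split(rng.permutation(len(data)), V)
    estimates = []
    for v in range(V):
        est_sample = data[folds[v]]                      # estimation sample
        gen_idx = np.concatenate([folds[u] for u in range(V) if u != v])
        psi_v = learn_parameter(data[gen_idx])           # data-adaptive target
        estimates.append(estimate(psi_v, est_sample))    # held-out evaluation
    return float(np.mean(estimates))
```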
Introducing sampling entropy in repository based adaptive umbrella sampling
NASA Astrophysics Data System (ADS)
Zheng, Han; Zhang, Yingkai
2009-12-01
Determining free energy surfaces along chosen reaction coordinates is a common and important task in simulating complex systems. Due to the complexity of energy landscapes and the existence of high barriers, one widely pursued objective to develop efficient simulation methods is to achieve uniform sampling among thermodynamic states of interest. In this work, we have demonstrated sampling entropy (SE) as an excellent indicator for uniform sampling as well as for the convergence of free energy simulations. By introducing SE and the concentration theorem into the biasing-potential-updating scheme, we have further improved the adaptivity, robustness, and applicability of our recently developed repository based adaptive umbrella sampling (RBAUS) approach [H. Zheng and Y. Zhang, J. Chem. Phys. 128, 204106 (2008)]. Besides simulations of one dimensional free energy profiles for various systems, the generality and efficiency of this new RBAUS-SE approach have been further demonstrated by determining two dimensional free energy surfaces for the alanine dipeptide in gas phase as well as in water.
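A minimal version of the indicator: sampling entropy over the visit histogram of the thermodynamic states of interest, here normalized by ln(N) so that a value of 1 indicates perfectly uniform sampling (the normalization is our choice, not necessarily the paper's).

```python
import numpy as np

def sampling_entropy(visit_counts):
    """Sampling entropy SE = -sum_i p_i ln p_i over the visit histogram of
    the states of interest, normalized so SE = 1 means perfectly uniform
    sampling. A plateau of SE near 1 across biasing-potential updates can
    serve as the convergence signal described in the abstract."""
    counts = np.asarray(visit_counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(counts.size))
```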
Practical guidelines for implementing adaptive optics in fluorescence microscopy
NASA Astrophysics Data System (ADS)
Wilding, Dean; Pozzi, Paolo; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel
2018-02-01
In life sciences, interest in the microscopic imaging of increasingly complex three-dimensional samples, such as cell spheroids, zebrafish embryos, and in vivo applications in small animals, is growing quickly. Due to the increasing complexity of samples, more and more life scientists are considering the implementation of adaptive optics in their experimental setups. While several approaches to adaptive optics in microscopy have been reported, it is often difficult and confusing for the microscopist to choose from the array of techniques and equipment. In this poster presentation, we offer a small guide to adaptive optics, providing general guidelines for a successful implementation.
Shore, Sabrina; Henderson, Jordana M; Lebedev, Alexandre; Salcedo, Michelle P; Zon, Gerald; McCaffrey, Anton P; Paul, Natasha; Hogrefe, Richard I
2016-01-01
For most sample types, the automation of RNA and DNA sample preparation workflows enables high throughput next-generation sequencing (NGS) library preparation. Greater adoption of small RNA (sRNA) sequencing has been hindered by high sample input requirements and inherent ligation side products formed during library preparation. These side products, known as adapter dimer, are very similar in size to the tagged library. Most sRNA library preparation strategies thus employ a gel purification step to isolate tagged library from adapter dimer contaminants. At very low sample inputs, adapter dimer side products dominate the reaction and limit the sensitivity of this technique. Here we address the need for improved specificity of sRNA library preparation workflows with a novel library preparation approach that uses modified adapters to suppress adapter dimer formation. This workflow allows for lower sample inputs and elimination of the gel purification step, which in turn allows for an automatable sRNA library preparation protocol.
Adaptive enhanced sampling by force-biasing using neural networks
NASA Astrophysics Data System (ADS)
Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.
2018-04-01
A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to yield shorter confidence intervals on average and higher probabilities of P-values below important thresholds than alternative approaches. The bias-adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
Adaptive sampling strategies with high-throughput molecular dynamics
NASA Astrophysics Data System (ADS)
Clementi, Cecilia
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources toward unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high-dimensional dynamical systems and on optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
Adaptive measurements of urban runoff quality
NASA Astrophysics Data System (ADS)
Wong, Brandon P.; Kerkez, Branko
2016-11-01
An approach to adaptively measure runoff water quality dynamics is introduced, focusing specifically on characterizing the timing and magnitude of urban pollutographs. Rather than relying on a static schedule or flow-weighted sampling, which can miss important water quality dynamics if parameterized inadequately, novel Internet-enabled sensor nodes are used to autonomously adapt their measurement frequency to real-time weather forecasts and hydrologic conditions. This dynamic approach has the potential to significantly improve the use of constrained experimental resources, such as automated grab samplers, which continue to provide a strong alternative to sampling water quality dynamics when in situ sensors are not available. Compared to conventional flow-weighted or time-weighted sampling schemes, which rely on preset thresholds, a major benefit of the approach is the ability to dynamically adapt to features of an underlying hydrologic signal. A 28 km² urban watershed was studied to characterize concentrations of total suspended solids (TSS) and total phosphorus. Water quality samples were autonomously triggered in response to features in the underlying hydrograph and real-time weather forecasts. The study watershed did not exhibit a strong first flush, and intraevent concentration variability was driven by flow acceleration, wherein the largest loadings of TSS and total phosphorus corresponded with the steepest rising limbs of the storm hydrograph. The scalability of the proposed method is discussed in the context of larger sensor network deployments, as well as the potential to improve control of urban water quality.
A sub-sampled approach to extremely low-dose STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A.; Luzi, L.; Yang, H.
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤1 e⁻ Å⁻²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
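The random (non-adaptive) variant is easy to emulate: acquire a random fraction of pixels and inpaint the rest. The sketch below uses plain linear interpolation in place of the sparse-recovery inpainting used in the paper.

```python
import numpy as np
from scipy.interpolate import griddata

def subsample_and_inpaint(image, fraction=0.1, rng=None):
    """Random sub-sampling followed by inpainting: dose only `fraction` of
    the pixels, then fill the rest by linear interpolation, with a
    nearest-neighbor fill outside the convex hull of acquired pixels."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(image.shape) < fraction        # pixels actually dosed
    points = np.argwhere(mask)
    vals = image[mask]
    grid_y, grid_x = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    recon = griddata(points, vals, (grid_y, grid_x), method='linear')
    hole = np.isnan(recon)                           # outside the convex hull
    recon[hole] = griddata(points, vals, (grid_y, grid_x), method='nearest')[hole]
    return recon, mask
```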
Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure
Salehi, M.; Smith, D.R.
2005-01-01
Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
Luce, Bryan R; Connor, Jason T; Broglio, Kristine R; Mullins, C Daniel; Ishak, K Jack; Saunders, Elijah; Davis, Barry R
2016-09-20
Background: Bayesian and adaptive clinical trial designs offer the potential for more efficient processes that result in lower sample sizes and shorter trial durations than traditional designs. Objective: To explore the use and potential benefits of Bayesian adaptive clinical trial designs in comparative effectiveness research. Design: Virtual execution of ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) as if it had been done according to a Bayesian adaptive trial design. Setting: Comparative effectiveness trial of antihypertensive medications. Patients: Patient data sampled from the more than 42 000 patients enrolled in ALLHAT with publicly available data. Measurements: Number of patients randomly assigned between groups, trial duration, observed numbers of events, and overall trial results and conclusions. Results: The Bayesian adaptive approach and original design yielded similar overall trial conclusions. The Bayesian adaptive trial randomly assigned more patients to the better-performing group and would probably have ended slightly earlier. Limitations: This virtual trial execution required limited resampling of ALLHAT patients for inclusion in RE-ADAPT (REsearch in ADAptive methods for Pragmatic Trials). Involvement of a data monitoring committee and other trial logistics were not considered. Conclusion: In a comparative effectiveness research trial, Bayesian adaptive trial designs are a feasible approach and can potentially generate earlier results and allocate more patients to better-performing groups. Funding: National Heart, Lung, and Blood Institute.
NASA Astrophysics Data System (ADS)
Wietsma, T.; Minsker, B. S.
2012-12-01
Increased sensor throughput combined with decreasing hardware costs has led to a disruptive growth in data volume. This disruption, popularly termed "the data deluge," has placed new demands on cyberinfrastructure and information technology skills among researchers in many academic fields, including the environmental sciences. Adaptive sampling is well established as an effective means of improving network resource efficiency (energy, bandwidth) without sacrificing sample set quality relative to traditional uniform sampling. However, using adaptive sampling for the explicit purpose of improving resolution over events, situations displaying intermittent dynamics and unique hydrogeological signatures, is relatively new. In this paper, we define hot spots and hot moments in terms of sensor signal activity as measured through discrete Fourier analysis. Following this frequency-based approach, we apply the Nyquist-Shannon sampling theorem, a fundamental contribution from signal processing that led to the field of information theory, to the analysis of uni- and multivariate environmental signal data. In the scope of multi-scale environmental sensor networks, we present several sampling control algorithms, derived from the Nyquist-Shannon theorem, that operate at local (field sensor), regional (base station for aggregation of field sensor data), and global (Cloud-based, computationally intensive models) scales. When evaluated on soil moisture data, the algorithms achieve significantly greater sample density during precipitation events while reducing overall sample volume. Using these algorithms as indicators rather than control mechanisms, we also discuss opportunities for spatio-temporal modeling as a tool for planning and modifying sensor network deployments. Highlights include a locally adaptive model based on the Nyquist-Shannon sampling theorem and Pareto frontiers for local, regional, and global models relative to uniform sampling, with the objectives of (1) overall sampling efficiency and (2) sampling efficiency during hot moments as identified using a heuristic approach.
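A minimal controller in the spirit described: estimate the occupied bandwidth of the latest window from its spectrum and set the next sampling rate by the Nyquist-Shannon criterion. The energy fraction and rate floor are illustrative parameters.

```python
import numpy as np

def nyquist_rate(window, fs, energy_frac=0.99, f_floor=1.0 / 3600.0):
    """Nyquist-Shannon-based rate control sketch: find the smallest frequency
    band holding `energy_frac` of the non-DC spectral energy of the latest
    window and sample at twice that frequency."""
    x = np.asarray(window, dtype=float)
    x = x - x.mean()                                 # remove DC before the DFT
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    cum = np.cumsum(spec)
    if cum[-1] == 0.0:                               # flat signal: idle rate
        return f_floor
    f_occupied = freqs[np.searchsorted(cum, energy_frac * cum[-1])]
    return max(2.0 * f_occupied, f_floor)            # Nyquist-Shannon criterion
```

Run locally on a field sensor, this raises the rate during precipitation events (broadband activity) and drops toward the floor during quiescent periods.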
Quality based approach for adaptive face recognition
NASA Astrophysics Data System (ADS)
Abboud, Ali J.; Sellahewa, Harin; Jassim, Sabah A.
2009-05-01
Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build systems that have low recognition errors and are less affected by variation in recording conditions. Recognition errors are often attributed to the use of low quality biometric samples. Hence, there is a need to develop new intelligent techniques and strategies to automatically measure/quantify the quality of biometric image samples and, if necessary, restore image quality according to the needs of the intended application. In this paper, we present no-reference image quality measures in the spatial domain that have an impact on face recognition. The first is called the symmetrical adaptive local quality index (SALQI) and the second is called middle halve (MH). Also, an adaptive strategy has been developed to select the best way to restore the image quality, called symmetrical adaptive histogram equalization (SAHE). The main benefits of using quality measures for the adaptive strategy are: (1) avoidance of excessive unnecessary enhancement procedures that may cause undesired artifacts, and (2) reduced computational complexity, which is essential for real-time applications. We test the success of the proposed measures and adaptive approach on a wavelet-based face recognition system that uses the nearest neighbor classifier. We demonstrate noticeable improvements in the performance of the adaptive face recognition system over the corresponding non-adaptive scheme.
Predicting protein interactions by Brownian dynamics simulations.
Meng, Xuan-Yu; Xu, Yu; Zhang, Hong-Xing; Mezei, Mihaly; Cui, Meng
2012-01-01
We present a newly adapted Brownian-Dynamics (BD)-based protein docking method for predicting native protein complexes. The approach includes global BD conformational sampling, compact complex selection, and local energy minimization. In order to reduce the computational costs for energy evaluations, a shell-based grid force field was developed to represent the receptor protein and solvation effects. The performance of this BD protein docking approach has been evaluated on a test set of 24 crystal protein complexes. Reproduction of experimental structures in the test set indicates the adequate conformational sampling and accurate scoring of this BD protein docking approach. Furthermore, we have developed an approach to account for the flexibility of proteins, which has been successfully applied to reproduce the experimental complex structure from the structure of two unbounded proteins. These results indicate that this adapted BD protein docking approach can be useful for the prediction of protein-protein interactions.
Rackauckas, Christopher; Nie, Qing
2017-01-01
Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
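The sketch below conveys the two key ideas, rejection of time steps and reuse of the Brownian path, in the simplest possible setting: an Euler-Maruyama step-doubling pair stands in for the paper's embedded strong order 1.5/1.0 SRK pair, and a plain Python list plays the role of the RSwM stack.

```python
import numpy as np

def adaptive_em(f, g, y0, t_end, h0=0.01, tol=1e-2, rng=None):
    """Adaptive SDE stepping in the spirit of RSwM: local error from step
    doubling; on rejection the step is halved and the unused half of the
    Brownian increment is pushed onto a stack, so no information about the
    future path is lost. (Step growth is omitted for brevity.)"""
    rng = np.random.default_rng() if rng is None else rng
    t, y = 0.0, float(y0)
    stack = []                                    # saved (dt, dW) path segments
    h = min(h0, t_end)
    dW = rng.normal(0.0, np.sqrt(h))
    while t < t_end:
        # Brownian bridge: midpoint of a known increment dW over [t, t+h]
        dW1 = rng.normal(dW / 2.0, np.sqrt(h) / 2.0)
        dW2 = dW - dW1
        y_full = y + f(t, y) * h + g(t, y) * dW
        y_mid = y + f(t, y) * h / 2 + g(t, y) * dW1
        y_two = y_mid + f(t + h / 2, y_mid) * h / 2 + g(t + h / 2, y_mid) * dW2
        if abs(y_full - y_two) <= tol:            # accept the more accurate value
            t, y = t + h, y_two
            if stack:                             # reuse saved future increments
                h, dW = stack.pop()
            elif t < t_end:
                h = min(h, t_end - t)
                dW = rng.normal(0.0, np.sqrt(h))
        else:                                     # reject: halve, remember tail
            stack.append((h / 2.0, dW2))
            h, dW = h / 2.0, dW1
    return y
```

For example, adaptive_em(lambda t, y: 0.1 * y, lambda t, y: 0.2 * y, 1.0, 1.0) integrates a geometric Brownian motion over the unit interval.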
Domain Adaptation for Pedestrian Detection Based on Prediction Consistency
Huan-ling, Tang; Zhi-yong, An
2014-01-01
Pedestrian detection is an active area of research in computer vision. It remains a quite challenging problem in many applications where many factors cause a mismatch between the source dataset used to train the pedestrian detector and samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, derived from prediction consistency with the source classifications, to selectively choose the samples in the source domains that show positive transferability to the target domain. Experimental results show that our approach improves the detection rate, especially when labeled data in the target scene are insufficient. PMID:25013850
Rozenberg, Andrey; Leese, Florian; Weiss, Linda C; Tollrian, Ralph
2016-01-01
Tag-Seq is a high-throughput approach used for discovering SNPs and characterizing gene expression. In comparison to RNA-Seq, Tag-Seq eases data processing and allows detection of rare mRNA species using only one tag per transcript molecule. However, reduced library complexity raises the issue of PCR duplicates, which distort gene expression levels. Here we present a novel Tag-Seq protocol that uses the least biased methods for RNA library preparation combined with a novel approach for joint PCR template and sample labeling. In our protocol, input RNA is fragmented by hydrolysis, and poly(A)-bearing RNAs are selected and directly ligated to mixed DNA-RNA P5 adapters. The P5 adapters contain i5 barcodes composed of sample-specific (moderately) degenerate base regions (mDBRs), which later allow detection of PCR duplicates. The P7 adapter is attached via reverse transcription with individual i7 barcodes added during the amplification step. The resulting libraries can be sequenced on an Illumina sequencer. After sample demultiplexing and PCR duplicate removal with a free software tool we designed, the data are ready for downstream analysis. Our protocol was tested on RNA samples from predator-induced and control Daphnia microcrustaceans.
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
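A rough sketch of the re-estimation step: at the interim look, plug the updated variance into a standard two-sided, two-sample normal-approximation sample-size formula for the diseased subgroup, then inflate by the interim prevalence estimate to obtain the total screening cohort. The paper's exact power calculation and its small-sample critical-value adjustment are not reproduced here.

```python
import math
from scipy.stats import norm

def reestimate_total_n(sigma_hat, prevalence_hat, delta, alpha=0.05, power=0.9):
    """Internal-pilot re-estimation sketch: interim variance and disease
    prevalence estimates feed a standard normal-approximation formula.
    n_cases is the required number of diseased participants per group;
    dividing by prevalence gives the total number to screen."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    n_cases = 2 * (z_alpha + z_beta) ** 2 * sigma_hat ** 2 / delta ** 2
    return math.ceil(n_cases / prevalence_hat)
```

For instance, reestimate_total_n(sigma_hat=1.2, prevalence_hat=0.02, delta=0.5) would be called once the planned fraction of subjects has accrued.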
NASA Astrophysics Data System (ADS)
Choi, Jinhyeok; Kim, Hyeonjin
2016-12-01
To improve the efficacy of undersampled MRI, a method of designing adaptive sampling functions is proposed that is simple to implement on an MR scanner and yet effectively improves the performance of the sampling functions. An approximation of the energy distribution of an image (E-map) is estimated from highly undersampled k-space data acquired in a prescan and efficiently recycled in the main scan. An adaptive probability density function (PDF) is generated by combining the E-map with a modeled PDF. A set of candidate sampling functions are then prepared from the adaptive PDF, among which the one with maximum energy is selected as the final sampling function. To validate its computational efficiency, the proposed method was implemented on an MR scanner, and its robust performance in Fourier-transform (FT) MRI and compressed sensing (CS) MRI was tested by simulations and in a cherry tomato. The proposed method consistently outperforms the conventional modeled PDF approach for undersampling ratios of 0.2 or higher in both FT-MRI and CS-MRI. To fully benefit from undersampled MRI, it is preferable that the design of adaptive sampling functions be performed online immediately before the main scan. In this way, the proposed method may further improve the efficacy of the undersampled MRI.
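The mask-design loop can be sketched directly from the description: blend the prescan energy map with a modeled PDF (an equal-weight blend here, which is an assumption), draw several candidate sampling functions from the blend, and keep the one with maximum captured energy.

```python
import numpy as np

def design_sampling_function(emap, modeled_pdf, n_keep, n_candidates=50, rng=None):
    """Adaptive sampling-function design sketch: combine the E-map with a
    modeled PDF, draw candidate undersampling masks from the blend, and
    keep the candidate capturing the most k-space energy."""
    rng = np.random.default_rng() if rng is None else rng
    pdf = emap / emap.sum() + modeled_pdf / modeled_pdf.sum()
    pdf = (pdf / pdf.sum()).ravel()                  # adaptive probability density
    best_idx, best_energy = None, -1.0
    for _ in range(n_candidates):
        idx = rng.choice(pdf.size, size=n_keep, replace=False, p=pdf)
        energy = emap.ravel()[idx].sum()             # energy captured by this mask
        if energy > best_energy:
            best_idx, best_energy = idx, energy
    mask = np.zeros(emap.size, dtype=bool)
    mask[best_idx] = True
    return mask.reshape(emap.shape)
```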
Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology
NASA Technical Reports Server (NTRS)
Mandic, Milan; Acikmese, Behcet; Blackmore, Lars
2011-01-01
The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then to ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused to address comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal effort by indicating problems and/or benefits of different approaches and designs.
Adaptation of ATI-R Scale to Turkish Samples: Validity and Reliability Analyses
ERIC Educational Resources Information Center
Tezci, Erdogan
2017-01-01
Teachers' teaching approaches have become an important issue in the search for quality in education and teaching because of their effect on students' learning. Improvements in teachers' knowledge and awareness of their own teaching approaches enable them to adapt the teaching process in accordance with their students' learning styles. The Approaches to…
Charney, Noah D.; Kubel, Jacob E.; Eiseman, Charles S.
2015-01-01
Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224
Angular velocity estimation from measurement vectors of star tracker.
Liu, Hai-bo; Yang, Jun-cai; Yi, Wen-jun; Wang, Jiong-qi; Yang, Jian-kun; Li, Xiu-jian; Tan, Ji-chun
2012-06-01
In most spacecraft, there is a need to know the craft's angular rate. Approaches with least squares and an adaptive Kalman filter are proposed for estimating the angular rate directly from the star tracker measurements. In these approaches, only knowledge of the vector measurements and sampling interval is required. The designed adaptive Kalman filter can filter out noise without information of the dynamic model and inertia dyadic. To verify the proposed estimation approaches, simulations based on the orbit data of the challenging minisatellite payload (CHAMP) satellite and experimental tests with night-sky observation are performed. Both the simulations and experimental testing results have demonstrated that the proposed approach performs well in terms of accuracy, robustness, and performance.
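The least-squares estimator follows from the kinematics of inertially fixed star directions seen in the rotating body frame, db/dt = −ω × b = b × ω. Stacking finite-difference measurements gives an overdetermined linear system in ω; the sketch below assumes unit direction vectors sampled at a fixed interval dt.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]x with [v]x @ w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def angular_rate(b_prev, b_next, dt):
    """Least-squares angular rate from consecutive body-frame star unit
    vectors: for inertially fixed stars, db/dt = -w x b = b x w, so
    stacking finite differences yields a linear system in w."""
    A = np.vstack([skew(b) for b in b_prev])
    y = np.concatenate([(bn - bp) / dt for bp, bn in zip(b_prev, b_next)])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w
```

The resulting estimate can serve as the measurement input to the adaptive Kalman filter described in the abstract.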
Mouse EEG spike detection based on the adapted continuous wavelet transform
NASA Astrophysics Data System (ADS)
Tieng, Quang M.; Kharatishvili, Irina; Chen, Min; Reutens, David C.
2016-04-01
Objective. Electroencephalography (EEG) is an important tool in the diagnosis of epilepsy. Interictal spikes on EEG are used to monitor the development of epilepsy and the effects of drug therapy. EEG recordings are generally long and the data voluminous. Thus developing a sensitive and reliable automated algorithm for analyzing EEG data is necessary. Approach. A new algorithm for detecting and classifying interictal spikes in mouse EEG recordings is proposed, based on the adapted continuous wavelet transform (CWT). The construction of the adapted mother wavelet is founded on a template obtained from a sample comprising the first few minutes of an EEG data set. Main Result. The algorithm was tested with EEG data from a mouse model of epilepsy and experimental results showed that the algorithm could distinguish EEG spikes from other transient waveforms with a high degree of sensitivity and specificity. Significance. Differing from existing approaches, the proposed approach combines wavelet denoising, to isolate transient signals, with adapted CWT-based template matching, to detect true interictal spikes. Using the adapted wavelet constructed from a predefined template, the adapted CWT is calculated on small EEG segments to fit dynamical changes in the EEG recording.
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...
2015-03-18
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
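A compact rendering of the lookup-or-simulate decision, with a zero-mean simple-kriging surrogate (squared-exponential covariance, unit prior variance) standing in for the full D²KAS machinery; the database here is just a pair of Python lists seeded beforehand with a few MD results.

```python
import numpy as np

def kriging_predict(X, y, x_star, length=1.0, noise=1e-8):
    """Zero-mean simple kriging: weighted-average prediction and its
    variance at x_star under a squared-exponential covariance."""
    X = np.asarray(X, dtype=float)
    if X.ndim == 1:
        X = X[:, None]                            # (n,) -> (n, 1) inputs
    x_star = np.atleast_1d(np.asarray(x_star, dtype=float))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / length**2) + noise * np.eye(len(X))
    k = np.exp(-0.5 * ((X - x_star) ** 2).sum(-1) / length**2)
    w = np.linalg.solve(K, k)                     # kriging weights
    return w @ np.asarray(y, float), 1.0 - k @ w  # mean, variance

def closure_flux(x_star, db_X, db_y, run_md, var_tol=1e-3):
    """D2KAS-style decision: answer from the surrogate when its predicted
    uncertainty is small enough; otherwise run the costly micro-scale MD
    simulation and grow the database."""
    mean, var = kriging_predict(db_X, db_y, x_star)
    if var < var_tol:
        return mean                               # fast lookup/prediction path
    value = run_md(x_star)                        # expensive fine-scale call
    db_X.append(x_star)
    db_y.append(value)
    return value
```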
LMC: Logarithmantic Monte Carlo
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.
Azzari, Lucio; Foi, Alessandro
2014-08-01
We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-03-21
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
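A stripped-down rendering of ingredient 1): draw from the current GM proposal, compute self-normalized importance weights against the target posterior, and refit the mixture moments to the weighted sample with one weighted-EM-style update per iteration. The PC surrogate (ingredient 2) and the automatic choice of the number of modes are omitted; log_post is an assumed user-supplied log-posterior.

```python
import numpy as np
from scipy.stats import multivariate_normal

def adaptive_is(log_post, dim, K=3, n=500, iters=10, rng=None):
    """Adaptive importance sampling with a Gaussian-mixture proposal,
    sketched: sample, weight, and refit the GM to the weighted sample."""
    rng = np.random.default_rng() if rng is None else rng
    mus = rng.normal(scale=2.0, size=(K, dim))       # spread-out initial modes
    covs = np.array([np.eye(dim) for _ in range(K)])
    pis = np.full(K, 1.0 / K)
    for _ in range(iters):
        comp = rng.choice(K, size=n, p=pis)
        x = np.array([rng.multivariate_normal(mus[c], covs[c]) for c in comp])
        comp_pdf = np.stack([pis[k] * multivariate_normal.pdf(x, mus[k], covs[k])
                             for k in range(K)], axis=1)       # (n, K)
        dens = comp_pdf.sum(axis=1)                            # proposal density
        logw = np.array([log_post(xi) for xi in x]) - np.log(dens)
        w = np.exp(logw - logw.max())
        w /= w.sum()                                           # importance weights
        u = w[:, None] * comp_pdf / dens[:, None]              # weighted resp.
        pis = u.sum(0) / u.sum()
        for k in range(K):
            uk = u[:, k].sum() + 1e-300
            mus[k] = (u[:, k, None] * x).sum(0) / uk
            d = x - mus[k]
            covs[k] = (u[:, k, None, None] * d[:, :, None] * d[:, None, :]).sum(0) / uk
            covs[k] += 1e-6 * np.eye(dim)                      # regularization
    return x, w, (pis, mus, covs)
```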
Adaptive Sampling of Time Series During Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.
2012-01-01
This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models are stationary, i.e., the covariance relationships are time-invariant. In such cases, information gain is independent of previously collected data, and the optimal solution can always be computed in advance. Information-optimal sampling of a stationary GP time series thus reduces to even spacing, and such models are not appropriate for tracking localized anomalies. Additionally, GP model inference can be computationally expensive.
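A minimal sketch of information-driven sample selection with a GP, under the simplifying assumption of a stationary unit-variance kernel (the article's point is precisely that nonstationary kernels make this data-dependent); names are illustrative:

```python
import numpy as np

def next_sample_time(t_obs, y_obs, t_candidates, length_scale=1.0):
    """Pick the future sample time with maximum GP predictive variance.

    For a GP, predictive variance (and hence information gain) depends
    only on the sampling locations, so each candidate time is scored by
    posterior variance and the most uncertain one is returned.
    """
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

    K = k(t_obs, t_obs) + 1e-6 * np.eye(len(t_obs))
    K_inv = np.linalg.inv(K)
    k_star = k(t_obs, t_candidates)                  # (n_obs, n_candidates)
    var = 1.0 - np.einsum('ij,ik,kj->j', k_star, K_inv, k_star)
    return t_candidates[np.argmax(var)]              # most informative time
```

With this stationary kernel the chosen times converge toward even spacing, which illustrates the article's argument for nonstationary models when tracking localized anomalies.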
ERIC Educational Resources Information Center
Dimitrios, Voutsas; Dimitrios, Kokaridas
2004-01-01
The purpose of this action research study was to examine the effect of an adapted swimming program in terms of improving the performance and behaviour of an individual with kyphosis-scoliosis, with the use of an individualised education approach. The sample consisted of an adult woman with kyphosis-scoliosis. The pre-swimming phase included a…
Qin, Lei; Snoussi, Hichem; Abdallah, Fahed
2014-01-01
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE builds on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
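A minimal sketch of the LHS option for generating parameter samples, assuming NumPy's Generator API; note that the sample count is independent of the number of parameters, which is the property highlighted above:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin Hypercube Sample: one stratum per sample in each dimension.

    Unlike a full grid (whose size grows exponentially with dimension),
    the number of LHS models stays n_samples no matter how many
    parameters are added.
    """
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # One shuffled stratum index per dimension, jittered within each stratum
    u = (rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
         + rng.random((n_samples, d))) / n_samples
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

# 10 parameter vectors for parallel filter models over 3 parameters
samples = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0), (0.1, 2.0)])
```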
Genetics of climate change adaptation.
Franks, Steven J; Hoffmann, Ary A
2012-01-01
The rapid rate of current global climate change is having strong effects on many species and, at least in some cases, is driving evolution, particularly when changes in conditions alter patterns of selection. Climate change thus provides an opportunity for the study of the genetic basis of adaptation. Such studies include a variety of observational and experimental approaches, such as sampling across clines, artificial evolution experiments, and resurrection studies. These approaches can be combined with a number of techniques in genetics and genomics, including association and mapping analyses, genome scans, and transcription profiling. Recent research has revealed a number of candidate genes potentially involved in climate change adaptation and has also illustrated that genetic regulatory networks and epigenetic effects may be particularly relevant for evolution driven by climate change. Although genetic and genomic data are rapidly accumulating, we still have much to learn about the genetic architecture of climate change adaptation.
Zhuang, Mingke; She, Zhuolin; Cai, Zijun; Huang, Zheng; Xiang, Qian; Wang, Ping; Zhu, Fei
2018-01-01
Although career construction theory attends to individuals' subjective careers and provides a useful lens for studying well-being, extant research has yielded limited insights into the mechanisms through which career construction variables influence individual well-being. To address this important gap, the present study examined a mediation model that links indicators of career adaptivity (big-five personality and approach/avoidance traits) to psychological well-being (psychological flourishing and life satisfaction) through career adaptability and subsequent meaning of life (presence of life meaning and search for life meaning) among a sample of Chinese university students (N = 165). The results of a two-wave survey study showed that career adaptability and presence of life meaning mediated the effects of openness to experience, conscientiousness, approach trait, and avoidance trait on individual well-being in sequence. The results also showed that the approach trait's effect on presence of meaning was partially mediated by career adaptability; career adaptability's effect on psychological flourishing was partially mediated by presence of meaning. These findings advance understanding of antecedents to individual well-being from a career construction perspective, and carry implications for career education and counseling practices.
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control scheme for single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graf, Peter; Damiani, Rick R.; Dykes, Katherine
2017-01-09
A new adaptive stratified importance sampling (ASIS) method is proposed as an alternative approach for the calculation of the 50-year extreme load under operational conditions, as in design load case 1.1 of the International Electrotechnical Commission design standard. ASIS combines elements of the binning and extrapolation technique, currently described by the standard, and of the importance sampling (IS) method to estimate load probability of exceedances (POEs). Whereas a Monte Carlo (MC) approach would lead to the sought level of POE with a daunting number of simulations, IS-based techniques are promising as they target the sampling of the input parameters on the parts of the distributions that are most responsible for the extreme loads, thus reducing the number of runs required. We compared the various methods on select load channels as output from FAST, an aero-hydro-servo-elastic tool for the design and analysis of wind turbines developed by the National Renewable Energy Laboratory (NREL). Our newly devised method, although still in its infancy in terms of tuning of the subparameters, is comparable to the others in terms of load estimation and its variance versus computational cost, and offers great promise going forward due to the incorporation of adaptivity into the already powerful importance sampling concept.
Corron, Louise; Marchal, François; Condemi, Silvana; Chaumoître, Kathia; Adalian, Pascal
2017-01-01
Juvenile age estimation methods used in forensic anthropology generally lack methodological consistency and/or statistical validity. Considering this, a standard approach using nonparametric Multivariate Adaptive Regression Splines (MARS) models was tested to predict age from iliac biometric variables of male and female juveniles from Marseilles, France, aged 0-12 years. Models using unidimensional (length and width) and bidimensional iliac data (module and surface) were constructed on a training sample of 176 individuals and validated on an independent test sample of 68 individuals. Results show that MARS prediction models using iliac width, module and area give overall better and statistically valid age estimates. These models integrate punctual nonlinearities of the relationship between age and osteometric variables. By constructing valid prediction intervals whose size increases with age, MARS models take into account the normal increase of individual variability. MARS models can qualify as a practical and standardized approach for juvenile age estimation. © 2016 American Academy of Forensic Sciences.
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
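A minimal sketch of AIS for a failure probability, assuming standard-normal inputs and a Gaussian importance density recentered on the failure points found so far; the paper's incremental growth of the sampling domain and its sensitivity coefficients are not reproduced:

```python
import numpy as np
from scipy import stats

def ais_failure_probability(g, d, stages=4, n=2000, seed=0):
    """Adaptive importance sampling estimate of P(g(X) <= 0), X ~ N(0, I).

    Each stage draws from a Gaussian importance density and re-centers it
    on the failure points found so far, approaching the failure domain
    from an initial approximation; the final stage yields the IS estimate.
    """
    rng = np.random.default_rng(seed)
    center = np.zeros(d)
    for _ in range(stages):
        x = rng.normal(center, 1.0, size=(n, d))
        fail = g(x) <= 0.0
        if fail.any():                        # shift density toward failure region
            center = x[fail].mean(axis=0)
    w = stats.multivariate_normal.pdf(x, np.zeros(d)) / \
        stats.multivariate_normal.pdf(x, center)
    return np.mean(fail * w)

# Linear limit state g(x) = 3 - sum(x)/sqrt(d); true pf = Phi(-3) ~ 1.35e-3
pf = ais_failure_probability(lambda x: 3 - x.sum(axis=1) / np.sqrt(2), d=2)
```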
In vitro adaptation of Plasmodium falciparum reveals variations in cultivability.
White, John; Mascarenhas, Anjali; Pereira, Ligia; Dash, Rashmi; Walke, Jayashri T; Gawas, Pooja; Sharma, Ambika; Manoharan, Suresh Kumar; Guler, Jennifer L; Maki, Jennifer N; Kumar, Ashwani; Mahanta, Jagadish; Valecha, Neena; Dubhashi, Nagesh; Vaz, Marina; Gomes, Edwin; Chery, Laura; Rathod, Pradipsinh K
2016-01-22
Culture-adapted Plasmodium falciparum parasites can offer deeper understanding of geographic variations in drug resistance, pathogenesis and immune evasion. To help ground population-based calculations and inferences from culture-adapted parasites, the complete range of parasites from a study area must be well represented in any collection. To this end, standardized adaptation methods and determinants of successful in vitro adaptation were sought. Venous blood was collected from 33 P. falciparum-infected individuals at Goa Medical College and Hospital (Bambolim, Goa, India). Culture variables such as whole blood versus washed blood, heat-inactivated plasma versus Albumax, and different starting haematocrit levels were tested on fresh blood samples from patients. In vitro adaptation was considered successful when two four-fold or greater increases in parasitaemia were observed within, at most, 33 days of attempted culture. Subsequently, parasites from the same patients, which were originally cryopreserved following blood draw, were retested for adaptability for 45 days using identical host red blood cells (RBCs) and culture media. At a new endemic area research site, ~65% of tested patient samples, with varied patient history and clinical presentation, were successfully culture-adapted immediately after blood collection. Cultures set up at 1% haematocrit and 0.5% Albumax adapted most rapidly, but no single test condition was uniformly fatal to culture adaptation. Success was not limited by low patient parasitaemia nor by patient age. Some parasites emerged even after significant delays in sample processing and even after initiation of treatment with anti-malarials. When 'day 0' cryopreserved samples were retested in parallel many months later using identical host RBCs and media, speed to adaptation appeared to be an intrinsic property of the parasites collected from individual patients. Culture adaptation of P. falciparum in a field setting is formally shown to be robust. Parasites were found to have intrinsic variations in adaptability to culture conditions, with some lines requiring longer attempt periods for successful adaptation. Quantitative approaches described here can help describe phenotypic diversity of field parasite collections with precision. This is expected to improve population-based extrapolations of findings from field-derived fresh culture-adapted parasites to broader questions of public health importance.
Accelerated Adaptive Integration Method
2015-01-01
Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (bromocyclohexane) and the more complex biomolecule thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
Context-aware adaptive spelling in motor imagery BCI
NASA Astrophysics Data System (ADS)
Perdikis, S.; Leeb, R.; Millán, J. d. R.
2016-06-01
Objective. This work presents a first motor imagery-based, adaptive brain-computer interface (BCI) speller, which is able to exploit application-derived context for improved, simultaneous classifier adaptation and spelling. Online spelling experiments with ten able-bodied users evaluate the ability of our scheme, first, to alleviate non-stationarity of brain signals for restoring the subject’s performance, second, to guide naive users into BCI control avoiding initial offline BCI calibration and, third, to outperform regular unsupervised adaptation. Approach. Our co-adaptive framework combines the BrainTree speller with smooth-batch linear discriminant analysis adaptation. The latter enjoys contextual assistance through BrainTree’s language model to improve online expectation-maximization maximum-likelihood estimation. Main results. Our results verify the possibility to restore single-sample classification and BCI command accuracy, as well as spelling speed for expert users. Most importantly, context-aware adaptation performs significantly better than its unsupervised equivalent and similarly to the supervised one. Although no significant differences are found with respect to the state-of-the-art PMean approach, the proposed algorithm is shown to be advantageous for 30% of the users. Significance. We demonstrate the possibility to circumvent supervised BCI recalibration, saving time without compromising the adaptation quality. On the other hand, we show that this type of classifier adaptation is not as efficient for BCI training purposes.
An adaptive confidence limit for periodic non-steady conditions fault detection
NASA Astrophysics Data System (ADS)
Wang, Tianzhen; Wu, Hao; Ni, Mengqi; Zhang, Milu; Dong, Jingjing; Benbouzid, Mohamed El Hachemi; Hu, Xiong
2016-05-01
System monitoring has become a major concern in batch processes due to the fact that the failure rate in non-steady conditions is much higher than in steady ones. A series of approaches based on PCA have already solved problems such as data dimensionality reduction, multivariable decorrelation, and processing of non-changing signals. However, if the data follows a non-Gaussian distribution or the variables contain some signal changes, the above approaches are not applicable. To deal with these concerns and to enhance performance in multiperiod data processing, this paper proposes a fault detection method using an adaptive confidence limit (ACL) in periodic non-steady conditions. The proposed ACL method achieves four main enhancements: Longitudinal-Standardization converts non-Gaussian sampling data to Gaussian ones; the multiperiod PCA algorithm reduces dimensionality, removes correlation, and improves the monitoring accuracy; the adaptive confidence limit detects faults under non-steady conditions; the fault sections determination procedure selects the appropriate parameter of the adaptive confidence limit. Analysis of the achieved results clearly shows that the proposed ACL method is superior to other fault detection approaches under periodic non-steady conditions.
Adaptive AFM scan speed control for high aspect ratio fast structure tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Ahmad; Schuh, Andreas; Rangelow, Ivo W.
2014-10-15
Improved imaging rates in Atomic Force Microscopes (AFM) are of high interest for disciplines such as life sciences and failure analysis of semiconductor wafers, where the sample topology shows high aspect ratios. Also, fast imaging is necessary to cover a large surface under investigation in reasonable times. Since AFMs are composed of mechanical components, they are associated with comparably low resonance frequencies that undermine the effort to increase the acquisition rates. In particular, high and steep structures are difficult to follow, which causes the cantilever to temporarily lose contact with or crash into the sample. Here, we report on a novel approach that does not affect the scanner dynamics, but adapts the lateral scanning speed of the scanner. The controller monitors the control error signal and, only when necessary, decreases the scan speed to allow the z-piezo more time to react to changes in the sample's topography. In this case, the overall imaging rate can be significantly increased, because a general scan speed trade-off decision is not needed and smooth areas are scanned fast. In contrast to methods trying to increase the z-piezo bandwidth, our method is a comparably simple approach that can be easily adapted to standard systems.
Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach
Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei
2016-01-01
Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need to release series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
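A minimal sketch of the fixed-threshold (DSFT) release logic, assuming Laplace noise and hypothetical names; a rigorous implementation would also have to spend privacy budget on the distance comparison itself, and DSAT would adjust the threshold by feedback control:

```python
import numpy as np

def dsft_release(snapshots, eps_per_release, threshold, sensitivity=1.0):
    """DSFT-style dynamic release: publish a noisy histogram only when the
    current snapshot differs enough from the last released one.

    The distance test is shown on raw counts for clarity; a production
    scheme must also privatize that comparison (e.g., via sparse-vector).
    """
    released = []
    last = None
    for hist in snapshots:
        if last is None or np.abs(hist - last).sum() > threshold:
            noisy = hist + np.random.laplace(0.0, sensitivity / eps_per_release,
                                             size=hist.shape)
            last = hist
            released.append(noisy)
        else:
            released.append(released[-1])   # re-publish the previous release
    return released
```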
An adaptive interpolation scheme for molecular potential energy surfaces
NASA Astrophysics Data System (ADS)
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement makes it possible to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
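A minimal sketch of the node-refinement loop, using SciPy's thin-plate (polyharmonic) spline interpolator; for brevity the error is measured against the true function at candidate points, whereas the paper uses a local error estimate to avoid extra electronic-structure evaluations:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def adaptive_refine(f, nodes, candidates, tol=1e-3, max_iter=20):
    """Adaptive node refinement for polyharmonic-spline interpolation.

    The worst candidate point (largest interpolation error) is promoted
    to a node until the error estimate falls below tol.
    """
    values = f(nodes)
    for _ in range(max_iter):
        interp = RBFInterpolator(nodes, values, kernel='thin_plate_spline')
        err = np.abs(interp(candidates) - f(candidates))
        if err.max() < tol:
            break
        worst = np.argmax(err)                 # refine where the error is largest
        nodes = np.vstack([nodes, candidates[worst]])
        values = np.append(values, f(candidates[worst:worst + 1]))
        candidates = np.delete(candidates, worst, axis=0)
    return interp
```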
Dynamic Event Tree advancements and control logic improvements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego
The RAVEN code has been under development at the Idaho National Laboratory since 2012. Its main goal is to create a multi-purpose platform for the deploying of all the capabilities needed for Probabilistic Risk Assessment, uncertainty quantification, data mining analysis and optimization studies. RAVEN is currently equipped with three different sampling categories: Forward samplers (Monte Carlo, Latin Hyper Cube, Stratified, Grid Sampler, Factorials, etc.), Adaptive Samplers (Limit Surface search, Adaptive Polynomial Chaos, etc.) and Dynamic Event Tree (DET) samplers (Deterministic and Adaptive Dynamic Event Trees). The main subject of this document is to report the activities that have been done in order to: start the migration of the RAVEN/RELAP-7 control logic system into MOOSE, and develop advanced dynamic sampling capabilities based on the Dynamic Event Tree approach. In order to provide to all MOOSE-based applications a control logic capability, an initial migration activity has been initiated this Fiscal Year, moving the control logic system, designed for RELAP-7 by the RAVEN team, into the MOOSE framework. In this document, a brief explanation of what has been done is reported. The second and most important subject of this report is the development of a Dynamic Event Tree (DET) sampler named “Hybrid Dynamic Event Tree” (HDET) and its Adaptive variant “Adaptive Hybrid Dynamic Event Tree” (AHDET). As other authors have already reported, among the different types of uncertainties, it is possible to discern two principal types: aleatory and epistemic uncertainties. The classical Dynamic Event Tree is in charge of treating the first class (aleatory) of uncertainties; the dependence of the probabilistic risk assessment and analysis on the epistemic uncertainties is treated by an initial Monte Carlo sampling (MCDET). From each Monte Carlo sample, a DET analysis is run (in total, N trees). The Monte Carlo employs a pre-sampling of the input space characterized by epistemic uncertainties. The consequent Dynamic Event Tree performs the exploration of the aleatory space. In the RAVEN code, a more general approach has been developed, not limiting the exploration of the epistemic space through a Monte Carlo method but using all the forward sampling strategies RAVEN currently employs. The user can combine a Latin Hyper Cube, Grid, Stratified and Monte Carlo sampling in order to explore the epistemic space, without any limitation. From this pre-sampling, the Dynamic Event Tree sampler starts its aleatory space exploration. As reported by the authors, the Dynamic Event Tree is a good fit to develop a goal-oriented sampling strategy. The DET is used to drive a Limit Surface search. The methodology that was developed by the authors last year performs a Limit Surface search in the aleatory space only. This report documents how this approach has been extended in order to consider the epistemic space interacting with the Hybrid Dynamic Event Tree methodology.
Saha, Arindam; Jana, Nikhil R
2015-01-14
Although the microfluidic approach is widely used in various point-of-care diagnostics, its implementation in surface-enhanced Raman spectroscopy (SERS)-based detection is challenging. This is because the SERS signal depends on the generation of stable electromagnetic hot spots induced by plasmonic nanoparticle aggregation, a condition that is difficult to achieve in currently available microfluidic platforms. Here we show that SERS can be adapted using a simple paper-based microfluidic system where both the plasmonic nanomaterials and the analyte are used in the mobile phase. This approach allows analyte-induced controlled particle aggregation and electromagnetic hot spot generation inside the microfluidic channel, with a resultant SERS signal that is highly reproducible and sensitive. This approach has been used for reproducible detection of protein at picomolar to femtomolar concentrations. The presented approach is simple, rapid, and cost-effective, and requires low sample volume. The method can be extended for SERS-based detection of other biomolecules.
A Dynamic Time Warping Approach to Real-Time Activity Recognition for Food Preparation
NASA Astrophysics Data System (ADS)
Pham, Cuong; Plötz, Thomas; Olivier, Patrick
We present a dynamic time warping based activity recognition system for the analysis of low-level food preparation activities. Accelerometers embedded into kitchen utensils provide continuous sensor data streams while people are using them for cooking. The recognition framework analyzes frames of contiguous sensor readings in real-time with low latency. It thereby adapts to the idiosyncrasies of utensil use by automatically maintaining a template database. We demonstrate the effectiveness of the classification approach by a number of real-world practical experiments on a publicly available dataset. The adaptive system shows superior performance compared to a static recognizer. Furthermore, we demonstrate the generalization capabilities of the system by gradually reducing the amount of training samples. The system achieves excellent classification results even if only a small number of training samples is available, which is especially relevant for real-world scenarios.
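A minimal sketch of the core DTW distance and nearest-template classification, with hypothetical template names; the paper's frame extraction and automatic template-database maintenance are omitted:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    Classic O(len(a) * len(b)) dynamic program; frames of accelerometer
    data are compared against each template in the database and the
    frame is classified by its nearest template.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Nearest-template classification over a hypothetical template database
templates = {'stir': np.sin(np.linspace(0, 6, 50)),
             'chop': np.sign(np.sin(np.linspace(0, 25, 50)))}
frame = np.sin(np.linspace(0, 6, 40))
label = min(templates, key=lambda k: dtw_distance(frame, templates[k]))
```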
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
Huang, Jian; Horowitz, Joel L.; Wei, Fengrong
2010-01-01
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
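A minimal sketch of a group-Lasso solver by proximal gradient descent (ISTA), assuming squared-error loss and one group per B-spline block; the adaptive weights are passed in, as in the two-stage procedure described above:

```python
import numpy as np

def group_lasso(X, y, groups, lam, weights=None, n_iter=500):
    """Proximal-gradient (ISTA) solver for the (adaptive) group Lasso.

    groups[j] gives the group index of coefficient j; weights carries the
    adaptive penalty weights, defaulting to the ordinary group Lasso.
    """
    n, p = X.shape
    beta = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    n_groups = groups.max() + 1
    w = np.ones(n_groups) if weights is None else weights
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y) / n)
        for g in range(n_groups):                   # blockwise soft-thresholding
            idx = groups == g
            norm = np.linalg.norm(z[idx])
            scale = max(0.0, 1.0 - step * lam * w[g] / norm) if norm > 0 else 0.0
            beta[idx] = scale * z[idx]
    return beta

# Two-stage adaptive fit: unit weights first, then weights 1/||beta_g||
groups = np.repeat(np.arange(10), 6)               # 10 components, 6 splines each
X = np.random.randn(100, 60)
y = X[:, :6] @ np.ones(6) + 0.1 * np.random.randn(100)
beta0 = group_lasso(X, y, groups, lam=0.1)
norms = np.array([np.linalg.norm(beta0[groups == g]) for g in range(10)])
beta = group_lasso(X, y, groups, lam=0.1, weights=1.0 / np.maximum(norms, 1e-6))
```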
Yan, Wei; Yang, Yanlong; Tan, Yu; Chen, Xun; Li, Yang; Qu, Junle; Ye, Tong
2018-01-01
Stimulated emission depletion microscopy (STED) is one of the far-field optical microscopy techniques that can provide sub-diffraction spatial resolution. The spatial resolution of STED microscopy is determined by the specially engineered beam profile of the depletion beam and its power. However, the beam profile of the depletion beam may be distorted due to aberrations of optical systems and inhomogeneity of specimens’ optical properties, resulting in a compromised spatial resolution. The situation deteriorates when thick samples are imaged. In the worst case, severe distortion of the depletion beam profile may cause complete loss of the super-resolution effect no matter how much depletion power is applied to specimens. Previously, several adaptive optics approaches have been explored to compensate for aberrations of systems and specimens. However, it is hard to correct the complicated high-order optical aberrations of specimens. In this report, we demonstrate that the complicated distorted wavefront from a thick phantom sample can be measured by using the coherent optical adaptive technique (COAT). The full correction can effectively maintain and improve the spatial resolution in imaging thick samples. PMID:29400356
Dawson, Ree; Lavori, Philip W
2012-01-01
Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because often there is no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominantly reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Fraley, Stephanie I; Hardick, Justin; Masek, Billie J; Jo Masek, Billie; Athamanolap, Pornpat; Rothman, Richard E; Gaydos, Charlotte A; Carroll, Karen C; Wakefield, Teresa; Wang, Tza-Huei; Yang, Samuel
2013-10-01
Comprehensive profiling of nucleic acids in genetically heterogeneous samples is important for clinical and basic research applications. Universal digital high-resolution melt (U-dHRM) is a new approach to broad-based PCR diagnostics and profiling technologies that can overcome issues of poor sensitivity due to contaminating nucleic acids and poor specificity due to primer or probe hybridization inaccuracies for single nucleotide variations. The U-dHRM approach uses broad-based primers or ligated adapter sequences to universally amplify all nucleic acid molecules in a heterogeneous sample, which have been partitioned, as in digital PCR. Extensive assay optimization enables direct sequence identification by algorithm-based matching of melt curve shape and Tm to a database of known sequence-specific melt curves. We show that single-molecule detection and single nucleotide sensitivity is possible. The feasibility and utility of U-dHRM is demonstrated through detection of bacteria associated with polymicrobial blood infection and microRNAs (miRNAs) associated with host response to infection. U-dHRM using broad-based 16S rRNA gene primers demonstrates universal single cell detection of bacterial pathogens, even in the presence of larger amounts of contaminating bacteria; U-dHRM using universally adapted Lethal-7 miRNAs in a heterogeneous mixture showcases the single copy sensitivity and single nucleotide specificity of this approach.
NASA Astrophysics Data System (ADS)
Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.
2016-12-01
Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing the collection in unexplored regions and refinement in interesting areas. We define an efficient and effective evaluation metric based on Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee the achievement of the desired accuracy. The numerical results of several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Nature Science Foundation of China grants No. 41030746 and 41172206.
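A minimal sketch of the explore/refine selection loop; the scoring rule here (nearest-sample distance times inter-iteration surrogate change) is an illustrative stand-in for the paper's Taylor-expansion-based metric, and the accuracy-based stopping criterion is omitted:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def adaptive_design(f, X0, candidates, n_add=30):
    """Grow a surrogate design by balancing exploration and refinement.

    Score each candidate by its distance to the nearest existing sample
    (exploration) times the local change of the surrogate between
    iterations (a crude refinement proxy), then sample the best one.
    """
    X, y = X0.copy(), f(X0)
    prev = None
    for _ in range(n_add):
        surr = RBFInterpolator(X, y)
        pred = surr(candidates)
        change = (np.ones(len(candidates)) if prev is None
                  else np.abs(pred - prev) + 1e-12)
        dist = np.min(np.linalg.norm(candidates[:, None, :] - X[None, :, :],
                                     axis=-1), axis=1)
        pick = np.argmax(dist * change)      # balance unexplored vs. interesting
        X = np.vstack([X, candidates[pick]])
        y = np.append(y, f(candidates[pick:pick + 1]))
        prev = np.delete(pred, pick)
        candidates = np.delete(candidates, pick, axis=0)
    return X, y
```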
Hybrid selection for sequencing pathogen genomes from clinical samples
2011-01-01
We have adapted a solution hybrid selection protocol to enrich pathogen DNA in clinical samples dominated by human genetic material. Using mock mixtures of human and Plasmodium falciparum malaria parasite DNA as well as clinical samples from infected patients, we demonstrate an average of approximately 40-fold enrichment of parasite DNA after hybrid selection. This approach will enable efficient genome sequencing of pathogens from clinical samples, as well as sequencing of endosymbiotic organisms such as Wolbachia that live inside diverse metazoan phyla. PMID:21835008
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
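A minimal simulation sketch of the inflation mechanism, assuming a z-test and an illustrative adaptation rule (shrink the second stage after a promising interim to lock in the early signal, enlarge it otherwise); this is not the paper's worst-case search:

```python
import numpy as np
from scipy import stats

def naive_two_stage_type1(n1=50, n2=50, alpha=0.025, reps=20_000, seed=1):
    """Monte Carlo check of the type 1 error rate when the stage-2 sample
    size reacts to the interim estimate but the final z-test ignores the
    adaptation (i.e., treats the data as if no modification had been made).
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x1 = rng.normal(0.0, 1.0, n1)              # H0 true: effect = 0
        n2_mod = n2 // 2 if x1.mean() > 0 else 2 * n2
        x2 = rng.normal(0.0, 1.0, n2_mod)
        n = n1 + n2_mod
        z = np.sqrt(n) * np.concatenate([x1, x2]).mean()
        rejections += z > stats.norm.ppf(1 - alpha)
    return rejections / reps                       # exceeds alpha -> inflation
```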
Weed, Keri; Keogh, Deborah; Borkowski, John G.; Whitman, Thomas; Noria, Christine W.
2010-01-01
A person-centered approach was used to explore the mediating role of self-regulation between learner typology at age 8 and academic achievement at age 14 while controlling for domain-specific achievement in a longitudinal sample of 113 children born to adolescent mothers. Children were classified into one of 5 learner typologies at age 8 based on interactive patterns of intellectual, achievement, and adaptive abilities. Typology classification explained significant variance in both reading and mathematics achievement at age 14. A bootstrapping approach confirmed that self-regulation mediated the relationship between typology and reading and mathematical achievement for children from all typologies except those classified as Cognitively and Adaptively Challenged. Implications of person-centered approaches for understanding processes involved with achievement are discussed. PMID:21278904
Williams, Tim D; Turan, Nil; Diab, Amer M; Wu, Huifeng; Mackenzie, Carolynn; Bartie, Katie L; Hrydziuszko, Olga; Lyons, Brett P; Stentiford, Grant D; Herbert, John M; Abraham, Joseph K; Katsiadaki, Ioanna; Leaver, Michael J; Taggart, John B; George, Stephen G; Viant, Mark R; Chipman, Kevin J; Falciani, Francesco
2011-08-01
The acquisition and analysis of datasets including multi-level omics and physiology from non-model species, sampled from field populations, is a formidable challenge, which so far has prevented the application of systems biology approaches. If successful, these could contribute enormously to improving our understanding of how populations of living organisms adapt to environmental stressors relating to, for example, pollution and climate. Here we describe the first application of a network inference approach integrating transcriptional, metabolic and phenotypic information representative of wild populations of the European flounder fish, sampled at seven estuarine locations in northern Europe with different degrees and profiles of chemical contaminants. We identified network modules, whose activity was predictive of environmental exposure and represented a link between molecular and morphometric indices. These sub-networks represented both known and candidate novel adverse outcome pathways representative of several aspects of human liver pathophysiology such as liver hyperplasia, fibrosis, and hepatocellular carcinoma. At the molecular level these pathways were linked to TNF alpha, TGF beta, PDGF, AGT and VEGF signalling. More generally, this pioneering study has important implications as it can be applied to model molecular mechanisms of compensatory adaptation to a wide range of scenarios in wild populations.
Wideband FM Demodulation and Multirate Frequency Transformations
2016-12-15
FM signals. 2.2.1 Adaptive Linear Predictive IF Tracking. For a pure FM signal, the IF demodulation approach employing adaptive filters was proposed...desired signal. As summarized in [5], the prediction error filter is given by $E(z) = 1 - \sum_{l=1}^{L} g_l^{\mathrm{opt}} z^{-l}$ (8). ...assumption and the further assumption that the message signal remains essentially invariant over the sampling range of the linear prediction filter, we end
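A minimal sketch of fitting the optimal predictor coefficients g_l by least squares and forming the prediction error of E(z); the FM test signal and all names are illustrative, not taken from the report:

```python
import numpy as np

def prediction_error_filter(x, L):
    """Fit one-step linear predictor coefficients g_l by least squares and
    return the prediction error e[n] = x[n] - sum_l g_l x[n-l], i.e., the
    output of the filter E(z) = 1 - sum_l g_l z^{-l}.
    """
    # Row n of the regression matrix holds x[n-1], ..., x[n-L]
    rows = np.column_stack([x[L - l:len(x) - l] for l in range(1, L + 1)])
    target = x[L:]
    g_opt, *_ = np.linalg.lstsq(rows, target, rcond=None)
    error = target - rows @ g_opt
    return g_opt, error

# For an FM signal, the predictor coefficients track the instantaneous frequency
fs = 1000
t = np.arange(0, 1, 1 / fs)
sig = np.cos(2 * np.pi * (50 * t + 10 * np.sin(2 * np.pi * 2 * t)))
g, e = prediction_error_filter(sig, L=4)
```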
Cooper, R.J.; Mordecai, Rua S.; Mattsson, B.G.; Conroy, M.J.; Pacifici, K.; Peterson, J.T.; Moore, C.T.
2008-01-01
We describe a survey design and field protocol for the Ivory-billed Woodpecker (Campephilus principalis) search effort that will: (1) allow estimation of occupancy, use, and detection probability for habitats at two spatial scales within the bird’s former range, (2) assess relationships between occupancy, use, and habitat characteristics at those scales, (3) eventually allow the development of a population viability model that depends on patch occupancy instead of difficult-to-measure demographic parameters, and (4) be adaptive, allowing newly collected information to update the above models and search locations. The approach features random selection of patches to be searched from a sampling frame stratified and weighted by patch quality, and requires multiple visits per patch. It is adaptive within a season in that increased search activity is allowed in and around locations of strong visual and/or aural evidence, and adaptive among seasons in that habitat associations allow modification of stratum weights. This statistically rigorous approach is an improvement over simply visiting the “best” habitat in an ad hoc fashion because we can learn from prior effort and modify the search accordingly. Results from the 2006-07 search season indicate weak relationships between occupancy and habitat (although we suggest modifications of habitat measurement protocols), and a very low detection probability, suggesting more visits per patch are required. Sample size requirements will be discussed.
Evolutionary Quantitative Genomics of Populus trichocarpa
McKown, Athena D.; La Mantia, Jonathan; Guy, Robert D.; Ingvarsson, Pär K.; Hamelin, Richard; Mansfield, Shawn D.; Ehlting, Jürgen; Douglas, Carl J.; El-Kassaby, Yousry A.
2015-01-01
Forest trees generally show high levels of local adaptation and efforts focusing on understanding adaptation to climate will be crucial for species survival and management. Here, we address fundamental questions regarding the molecular basis of adaptation in undomesticated forest tree populations to past climatic environments by employing an integrative quantitative genetics and landscape genomics approach. Using this comprehensive approach, we studied the molecular basis of climate adaptation in 433 Populus trichocarpa (black cottonwood) genotypes originating across western North America. Variation in 74 field-assessed traits (growth, ecophysiology, phenology, leaf stomata, wood, and disease resistance) was investigated for signatures of selection (comparing $Q_{ST}$–$F_{ST}$) using clustering of individuals by climate of origin (temperature and precipitation). 29,354 SNPs were investigated employing three different outlier detection methods and marker-inferred relatedness was estimated to obtain the narrow-sense estimate of population differentiation in wild populations. In addition, we compared our results with previously assessed selection of candidate SNPs using the 25 topographical units (drainages) across the P. trichocarpa sampling range as population groupings. Narrow-sense $Q_{ST}$ for 53% of distinct field traits was significantly divergent from expectations of neutrality (indicating adaptive trait variation); 2,855 SNPs showed signals of diversifying selection and of these, 118 SNPs (within 81 genes) were associated with adaptive traits (based on significant $Q_{ST}$). Many SNPs were putatively pleiotropic for functionally uncorrelated adaptive traits, such as autumn phenology, height, and disease resistance. Evolutionary quantitative genomics in P. trichocarpa provides an enhanced understanding regarding the molecular basis of climate-driven selection in forest trees and we highlight that important loci underlying adaptive trait variation also show relationship to climate of origin. We consider our approach the most comprehensive, as it uncovers the molecular mechanisms of adaptation using multiple methods and tests. We also provide a detailed outline of the required analyses for studying adaptation to the environment in a population genomics context to better understand the species’ potential adaptive capacity to future climatic scenarios. PMID:26599762
Particle systems for adaptive, isotropic meshing of CAD models
Levine, Joshua A.; Whitaker, Ross T.
2012-01-01
We present a particle-based approach for generating adaptive triangular surface and tetrahedral volume meshes from computer-aided design models. Input shapes are treated as a collection of smooth, parametric surface patches that can meet non-smoothly on boundaries. Our approach uses a hierarchical sampling scheme that places particles on features in order of increasing dimensionality. These particles reach a good distribution by minimizing an energy computed in 3D world space, with movements occurring in the parametric space of each surface patch. Rather than using a pre-computed measure of feature size, our system automatically adapts to both curvature as well as a notion of topological separation. It also enforces a measure of smoothness on these constraints to construct a sizing field that acts as a proxy to piecewise-smooth feature size. We evaluate our technique with comparisons against other popular triangular meshing techniques for this domain. PMID:23162181
Towards psychologically adaptive brain-computer interfaces
NASA Astrophysics Data System (ADS)
Myrden, A.; Chau, T.
2016-12-01
Objective. Brain-computer interface (BCI) performance is sensitive to short-term changes in psychological states such as fatigue, frustration, and attention. This paper explores the design of a BCI that can adapt to these short-term changes. Approach. Eleven able-bodied individuals participated in a study during which they used a mental task-based EEG-BCI to play a simple maze navigation game while self-reporting their perceived levels of fatigue, frustration, and attention. In an offline analysis, a regression algorithm was trained to predict changes in these states, yielding Pearson correlation coefficients in excess of 0.45 between the self-reported and predicted states. Two means of fusing the resultant mental state predictions with mental task classification were investigated. First, single-trial mental state predictions were used to predict correct classification by the BCI during each trial. Second, an adaptive BCI was designed that retrained a new classifier for each testing sample using only those training samples for which predicted mental state was similar to that predicted for the current testing sample. Main results. Mental state-based prediction of BCI reliability exceeded chance levels. The adaptive BCI exhibited significant, but practically modest, increases in classification accuracy for five of 11 participants and no significant difference for the remaining six despite a smaller average training set size. Significance. Collectively, these findings indicate that adaptation to psychological state may allow the design of more accurate BCIs.
Sunderland, Matthew; Batterham, Philip J; Calear, Alison L; Carragher, Natacha
2017-12-01
A series of static and adaptive screeners for panic disorder, social anxiety disorder (SAD), and obsessive compulsive disorder (OCD) were developed and compared using data-driven methods to facilitate the measurement of each disorder in community samples. Data comprised 3175 respondents for the development sample and 3755 respondents for the validation sample, recruited independently using Facebook advertising. Item Response Theory (IRT) was utilized to develop static continuous screeners and to simulate computerized adaptive algorithms. The screeners consisted of a small subset of items from each bank (79% reduction in items for panic disorder, 85% reduction in items for SAD, and 84% reduction in items for OCD) that provided similar scores (r = 0.88-0.96). Both static and adaptive screeners were valid with respect to existing scales that purportedly measure similar constructs (r > 0.70 for panic disorder, r > 0.76 for SAD, and r > 0.68 for OCD). The adaptive scales were able to maintain a higher level of precision in comparison to the static scales and evidenced slightly higher concordance with scores generated by the full item banks. The screeners for panic disorder, SAD, and OCD could be used as a flexible approach to measure and monitor the severity of psychopathology in tailored treatment protocols. Copyright © 2017 John Wiley & Sons, Ltd.
Kone, Cheick Tidjane; Mathias, Jean-Denis; De Sousa, Gil
2017-01-01
Designing a Wireless Sensor Network (WSN) to achieve a high Quality of Service (QoS) (network performance and durability) is a challenging problem. We address it by focusing on the performance of the 802.15.4 communication protocol, because the IEEE 802.15.4 standard is considered one of the reference technologies for WSNs. In this paper, we propose to control the sustainable use of resources (i.e., energy consumption, reliability and timely packet transmission) of a wireless sensor node equipped with photovoltaic cells by adaptively tuning not only the MAC (Medium Access Control) parameters but also the sampling frequency of the node. To do this, we use one of the existing control approaches, namely viability theory, which aims to maintain the functions and controls of a dynamic system within a set of desirable states. An analytical model describing the evolution of nodal resources over time is derived and used by a viability algorithm for the adaptive tuning of the IEEE 802.15.4 MAC protocol. The simulation analysis shows that, in the absence of hardware failure, our solution indefinitely sustains the operation (lifetime, reliability and timely packet transmission) of an 802.15.4 WSN, and that the sampling frequency of the node can temporarily be increased beyond the regular rate. The latter is advantageous for agricultural and environmental applications such as precision agriculture and flood or fire prevention: our approach enables the node to send more information when critical events occur without running out of energy. Finally, we argue that our approach is generic and can be applied to other types of WSN.
The role of career adaptability and courage on life satisfaction in adolescence.
Ginevra, Maria Cristina; Magnano, Paola; Lodi, Ernesto; Annovazzi, Chiara; Camussi, Elisabetta; Patrizi, Patrizia; Nota, Laura
2018-01-01
The present study aimed to extend understanding of the relationship between career adaptability, courage, and life satisfaction in a sample of Italian adolescents. It was hypothesized that courage partially mediated the relationship between career adaptability and life satisfaction. Specifically, 1202 Italian high school students aged 14 to 20 years (M = 16.87, SD = 1.47), including 600 (49.9%) boys and 602 (50.1%) girls, were involved. Using a multigroup approach across gender, it was found that courage partially mediated the relationship between career adaptability and life satisfaction in both boys and girls. Results suggest the relevance of career interventions that promote career adaptability and courage for strengthening life satisfaction in adolescence. Copyright © 2017 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Multilattice sampling strategies for region of interest dynamic MRI.
Rilling, Gabriel; Tao, Yuehui; Marshall, Ian; Davies, Mike E
2013-08-01
A multilattice sampling approach is proposed for dynamic MRI with Cartesian trajectories. It relies on the use of sampling patterns composed of several different lattices and exploits an image model where only some parts of the image are dynamic, whereas the rest is assumed static. Given the parameters of such an image model, the methodology followed for the design of a multilattice sampling pattern adapted to the model is described. The multilattice approach is compared to single-lattice sampling, as used by traditional acceleration methods such as UNFOLD (UNaliasing by Fourier-Encoding the Overlaps using the temporal Dimension) or k-t BLAST, and to the random sampling used by modern compressed sensing-based methods. On the considered image model, it allows more flexibility and higher accelerations than single-lattice sampling and better performance than random sampling. The method is illustrated on a phase-contrast carotid blood velocity mapping MR experiment. Combining the multilattice approach with the KEYHOLE technique allows up to 12× acceleration factors. Simulation and in vivo undersampling results validate the method. Compared to lattice and random sampling, multilattice sampling provides significant gains at high acceleration factors. © 2012 Wiley Periodicals, Inc.
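As a rough illustration of the idea (not the authors' design procedure; the lattice periods and per-frame shifts below are our assumptions), a Cartesian k-t sampling mask can be composed of a sparse, frame-shifted lattice for the static background plus a denser lattice for the dynamic region:

import numpy as np

def multilattice_mask(n_pe=256, n_frames=32, static_period=8, dynamic_period=4):
    """Boolean k-t mask (frames x phase-encode lines); True = line acquired."""
    mask = np.zeros((n_frames, n_pe), dtype=bool)
    for t in range(n_frames):
        # Sparse sheared lattice: the static background is fully sampled
        # once every `static_period` frames.
        mask[t, (t % static_period)::static_period] = True
        # Denser lattice interleaved on top of it for the dynamic region.
        mask[t, (t % dynamic_period)::dynamic_period] = True
    return mask

mask = multilattice_mask()
print(f"acceleration ~ {mask.size / mask.sum():.1f}x per frame")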
STANDARD OPERATING PROCEDURE FOR QUALITY ASSURANCE IN ANALYTICAL CHEMISTRY METHODS DEVELOPMENT
The Environmental Protection Agency's (EPA) Office of Research and Development (ORD) is engaged in the development, demonstration, and validation of new or newly adapted methods of analysis for environmentally related samples. Recognizing that a "one size fits all" approach to qu...
High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force.
Ren, Juan; Zou, Qingze
2014-07-01
In this paper, an adaptive contact-mode imaging approach is proposed to replace the traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted on the sample. The speed of the traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated into the topography feedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rates of 75 Hz and 130 Hz) and large sizes (e.g., scan sizes of 30 μm and 80 μm). The experimental results show that compared to the traditional constant-force contact-mode imaging, the imaging speed can be increased by over 30-fold (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.
Veljanova, Irena; Schabrun, Siobhan; Chipchase, Lucinda
2017-01-01
Introduction There is strong evidence that biopsychosocial approaches are efficacious in the management of chronic pain. However, implementation of these approaches in clinical practice is known not to account for the beliefs and values of culturally and linguistically diverse (CALD) patients. This limitation in the translation of research contributes to disparities in outcomes for CALD patients with chronic pain, adding to the socioeconomic burden of this prevalent condition. Cultural adaptation of chronic pain assessment and management is urgently required. Thus, the aim of this pilot randomised controlled trial (RCT) is to determine the feasibility, participant acceptance and clinical effectiveness of a culturally adapted physiotherapy assessment and treatment approach when contrasted with 'usual evidence-based physiotherapy care' for three CALD communities. Methods and analysis Using a participant-blinded and assessor-blinded randomised controlled pilot design, patients with chronic pain who self-identify as Assyrian, Mandaean or Vietnamese will be randomised to either 'culturally adapted physiotherapy assessment and treatment' or 'evidence-informed usual physiotherapy care'. We will recruit 16 participants from each ethnocultural community, giving a total of 24 participants in each treatment arm. Both groups will receive physiotherapy treatment for up to 10 sessions over 3 months. Outcomes including feasibility data, acceptance of the culturally adapted intervention, and functional and pain-related measures will be collected at baseline and 3 months by a blinded assessor. Analysis will be descriptive for feasibility outcomes, while measures of clinical effectiveness will be explored using independent-samples t-tests and repeated measures analysis of variance. This analysis will inform sample size estimates while also allowing for identification of revisions to the protocol or intervention prior to a larger scale RCT. Ethics and dissemination This trial has full ethical approval (HREC/16/LPOOL/194). The results from this pilot RCT will be presented at scientific meetings and published in peer-reviewed journals. Trial registration number ACTRN12616000857404 PMID:28501812
Guimaraes, S; Pruvost, M; Daligault, J; Stoetzel, E; Bennett, E A; Côté, N M-L; Nicolas, V; Lalis, A; Denys, C; Geigl, E-M; Grange, T
2017-05-01
We present a cost-effective metabarcoding approach, aMPlex Torrent, which relies on an improved multiplex PCR adapted to highly degraded DNA, combining barcoding and next-generation sequencing to simultaneously analyse many heterogeneous samples. We demonstrate the strength of these improvements by generating a phylochronology through the genotyping of ancient rodent remains from a Moroccan cave whose stratigraphy covers the last 120 000 years. Rodents are important for epidemiology, agronomy and ecological investigations and can act as bioindicators for human- and/or climate-induced environmental changes. Efficient and reliable genotyping of ancient rodent remains has the potential to deliver valuable phylogenetic and paleoecological information. The analysis of multiple ancient skeletal remains of very small size with poor DNA preservation, however, requires a sensitive high-throughput method to generate sufficient data. We show this approach to be particularly well suited to accessing this otherwise difficult taxonomic and genetic resource. As a highly scalable, lower-cost and less labour-intensive alternative to targeted sequence capture approaches, we propose the aMPlex Torrent strategy as a useful tool for the genetic analysis of multiple degraded samples in studies involving ecology, archaeology, conservation and evolutionary biology. © 2016 John Wiley & Sons Ltd.
Domain-Invariant Partial-Least-Squares Regression.
Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne
2018-05-11
Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental condition, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we here introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.
Flexible sequential designs for multi-arm clinical trials.
Magirr, D; Stallard, N; Jaki, T
2014-08-30
Adaptive designs that are based on group-sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
Adaptive sampling in research on risk-related behaviors.
Thompson, Steven K; Collins, Linda M
2002-11-01
This article introduces adaptive sampling designs to substance use researchers. Adaptive sampling is particularly useful when the population of interest is rare, unevenly distributed, hidden, or hard to reach. Examples of such populations are injection drug users, individuals at high risk for HIV/AIDS, and young adolescents who are nicotine dependent. In conventional sampling, the sampling design is based entirely on a priori information, and is fixed before the study begins. By contrast, in adaptive sampling, the sampling design adapts based on observations made during the survey; for example, drug users may be asked to refer other drug users to the researcher. In the present article several adaptive sampling designs are discussed. Link-tracing designs such as snowball sampling, random walk methods, and network sampling are described, along with adaptive allocation and adaptive cluster sampling. It is stressed that special estimation procedures taking the sampling design into account are needed when adaptive sampling has been used. These procedures yield estimates that are considerably better than conventional estimates. For rare and clustered populations adaptive designs can give substantial gains in efficiency over conventional designs, and for hidden populations link-tracing and other adaptive procedures may provide the only practical way to obtain a sample large enough for the study objectives.
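To make the adaptive cluster sampling idea concrete, here is a minimal Python sketch; the grid size, clustered population and the expansion condition (count > 0) are illustrative assumptions:

import numpy as np
from collections import deque

rng = np.random.default_rng(0)
pop = np.zeros((20, 20), dtype=int)       # counts per spatial unit
pop[4:6, 4:7] = rng.poisson(5, (2, 3))    # a rare, clustered subpopulation

def adaptive_cluster_sample(pop, n_initial=15, condition=lambda y: y > 0):
    rows, cols = pop.shape
    start = rng.choice(rows * cols, size=n_initial, replace=False)
    sampled = set()
    queue = deque((i // cols, i % cols) for i in start)
    while queue:
        r, c = queue.popleft()
        if (r, c) in sampled or not (0 <= r < rows and 0 <= c < cols):
            continue
        sampled.add((r, c))
        if condition(pop[r, c]):           # expand around units meeting the condition
            queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return sampled

units = adaptive_cluster_sample(pop)
print(len(units), "units sampled,", sum(pop[u] for u in units), "individuals found")

As stressed above, a design-based estimator (e.g., Horvitz-Thompson weighting by inclusion probabilities) must then replace the plain sample mean, since the adaptive expansion over-represents occupied units.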
Beamspace fast fully adaptive brain source localization for limited data sequences
NASA Astrophysics Data System (ADS)
Ravan, Maryam
2017-05-01
In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the observations are taken over a short time interval, especially when the number of electrodes is large. To address this issue, in a previous study we developed a multistage adaptive processing scheme called the fast fully adaptive (FFA) approach, which can significantly reduce the required sample support while still processing all available degrees of freedom (DOFs). This approach processes the observed data in stages through a decimation procedure. In this study, we introduce a new form of the FFA approach called beamspace FFA. We first divide the brain into smaller regions and transform the measured data from the source space to the beamspace in each region. The FFA approach is then applied to the beamspaced data of each region. The goal of this modification is to reduce the sensitivity to correlation between sources in different brain regions. To demonstrate the performance of the beamspace FFA approach in the limited data scenario, simulation results with multiple deep and cortical sources as well as experimental results are compared with the regular FFA and the widely used FINE approaches. Both simulation and experimental results demonstrate that the beamspace FFA method can localize different types of multiple correlated brain sources at low signal-to-noise ratios more accurately with limited data.
Future lab-on-a-chip technologies for interrogating individual molecules.
Craighead, Harold
2006-07-27
Advances in technology have allowed chemical sampling with high spatial resolution and the manipulation and measurement of individual molecules. Adaptation of these approaches to lab-on-a-chip formats is providing a new class of research tools for the investigation of biochemistry and life processes.
NASA Astrophysics Data System (ADS)
ten Veldhuis, Marie-Claire; Schleiss, Marc
2017-04-01
In this study, we introduced an alternative approach for the analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference from conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount, instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km2 to 238 km2 in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are most ubiquitous in flow time series, IAT sampling gives relatively more weight to high flow values, for which a given flow amount is accumulated in a shorter time. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
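A minimal sketch of the IAT construction (the synthetic flow series and unit amount are illustrative assumptions): the cumulative volume curve is inverted so that each fixed flow amount maps to the time needed to accumulate it.

import numpy as np

def inter_amount_times(flow, dt, amount):
    """Time needed to accumulate each successive fixed flow `amount`."""
    cum = np.cumsum(flow) * dt                     # cumulative volume
    targets = np.arange(amount, cum[-1], amount)   # volume milestones
    # Interpolate the crossing time of each milestone, then difference.
    t = np.interp(targets, cum, np.arange(1, len(flow) + 1) * dt)
    return np.diff(t, prepend=0.0)

rng = np.random.default_rng(1)
flow = rng.gamma(0.3, 2.0, 1000)                   # a flashy synthetic flow series
iats = inter_amount_times(flow, dt=1.0, amount=50.0)
print(f"shortest IAT (peak flows): {iats.min():.1f}, longest IAT (dry spells): {iats.max():.1f}")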
Adaptive Peer Sampling with Newscast
NASA Astrophysics Data System (ADS)
Tölgyesi, Norbert; Jelasity, Márk
The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate, then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback in a local control loop for self-tuning the protocol parameters. The proposed solution is evaluated by simulation experiments.
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm are compared with the state-of-the-art approaches.
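A minimal NumPy sketch of the underlying idea, solving the MVDR equations with a plain conjugate gradient loop rather than an explicit matrix inverse (array size, snapshot count and steering vector are illustrative; this is not the MICCG/SICCG code itself):

import numpy as np

def cg_solve(R, a, n_iter=50, tol=1e-12):
    """Conjugate gradients for R x = a, R Hermitian positive definite."""
    x = np.zeros_like(a)
    r = a.copy()
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Rp = R @ p
        alpha = rs / np.vdot(p, Rp).real
        x = x + alpha * p
        r = r - alpha * Rp
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
M, N = 8, 200                                   # sensors, snapshots
snap = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
R = snap @ snap.conj().T / N                    # sample autocorrelation matrix
a = np.exp(-1j * np.pi * np.arange(M) * 0.3)    # steering vector (assumed look direction)
x = cg_solve(R, a)
w = x / np.vdot(a, x)                           # MVDR: w = R^{-1} a / (a^H R^{-1} a)
print("distortionless response w^H a =", np.round(np.vdot(w, a), 6))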
Chang, Min; Li, Yongchao; Angeles, Reginald; Khan, Samina; Chen, Lian; Kaplan, Julia; Yang, Liyu
2011-08-01
Two approaches to monitor the matrix effect on ionization in study samples are described. One approach is the addition of multiple reaction monitoring transitions to the bioanalytical method to monitor the presence of known ionization-modifying components of the matrix; for example, m/z 184→125 (or m/z 184→184) and m/z 133→89 may be used for phospholipids and polyethylene oxide-containing surfactants, respectively. This approach requires no additional equipment and can be readily adapted for most methods. The approach detects only the intended interfering compounds and provides little quantitative indication of whether the matrix effect is within the tolerable range (±15%). The other approach requires the addition of an infusion pump and identifies an appropriate surrogate of the analyte to be infused for the determination of modification of the ionization of the analyte. The second approach detects interferences in the sample regardless of the source (i.e., dosing vehicle components, co-administered drugs, their metabolites, phospholipids, plasticizers and endogenous components introduced due to disease stage).
Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.
2008-01-01
Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of abundance of rare and patchily distributed species and is particularly appropriate when sampling in all patches is impossible, but a global estimate of abundance is required.
A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Zhang, Dongxiao; Lin, Guang
A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solution of history matching problems. We aim to deal with two commonly encountered issues: (1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; (2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. The multimodal posterior of the history matching problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm shows a great improvement in computational efficiency compared with previously studied approaches for the sample problem.
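The sampling idea admits a compact sketch (a hedged toy, not the reservoir workflow: a bimodal one-dimensional posterior stands in for the surrogate-evaluated likelihood, and the mixture parameters are assumed): an independence Metropolis-Hastings sampler with a Gaussian mixture proposal visits both modes, which a single-Gaussian assumption would miss.

import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Stand-in bimodal (unnormalized) posterior, e.g. two history matches.
    return np.logaddexp(-0.5 * ((x + 2.0) / 0.5) ** 2,
                        -0.5 * ((x - 2.0) / 0.5) ** 2)

# Gaussian mixture proposal roughly covering both modes.
means = np.array([-2.0, 2.0]); sigmas = np.array([0.7, 0.7]); weights = np.array([0.5, 0.5])

def gmm_sample():
    k = rng.choice(2, p=weights)
    return rng.normal(means[k], sigmas[k])

def gmm_logpdf(x):
    comps = -0.5 * ((x - means) / sigmas) ** 2 - np.log(sigmas)
    return np.logaddexp.reduce(np.log(weights) + comps)

x, chain = 0.0, []
for _ in range(20000):
    y = gmm_sample()                 # independence proposal from the mixture
    log_acc = (log_post(y) - log_post(x)) + (gmm_logpdf(x) - gmm_logpdf(y))
    if np.log(rng.uniform()) < log_acc:
        x = y
    chain.append(x)
print("fraction of samples in the right-hand mode:", np.mean(np.array(chain) > 0))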
Protein-ligand docking using FFT based sampling: D3R case study.
Padhorny, Dzmitry; Hall, David R; Mirzaei, Hanieh; Mamonov, Artem B; Moghadasi, Mohammad; Alekseenko, Andrey; Beglov, Dmitri; Kozakov, Dima
2018-01-01
Fast Fourier transform (FFT) based approaches have been successful in application to modeling of relatively rigid protein-protein complexes. Recently, we have been able to adapt the FFT methodology to treatment of flexible protein-peptide interactions. Here, we report our latest attempt to expand the capabilities of the FFT approach to treatment of flexible protein-ligand interactions in application to the D3R PL-2016-1 challenge. Based on the D3R assessment, our FFT approach in conjunction with Monte Carlo minimization off-grid refinement was among the top performing methods in the challenge. The potential advantage of our method is its ability to globally sample the protein-ligand interaction landscape, which will be explored in further applications.
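The core FFT trick can be sketched independently of the docking details (the grids below are toy stand-ins, not the actual scoring function): the correlation of a receptor grid with a ligand grid over every translation is obtained in a single inverse FFT.

import numpy as np

rng = np.random.default_rng(0)
receptor = rng.normal(size=(32, 32, 32))          # stand-in receptor grid (shape/energy terms)
ligand = np.zeros_like(receptor)
ligand[:5, :5, :5] = rng.normal(size=(5, 5, 5))   # stand-in ligand grid

# scores[t] = sum_x receptor(x + t) * ligand(x), for every translation t at once
scores = np.fft.ifftn(np.fft.fftn(receptor) * np.conj(np.fft.fftn(ligand))).real
best = np.unravel_index(np.argmax(scores), scores.shape)
print("best translation:", best, "score:", round(float(scores[best]), 3))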
Lindgren, Kristen P.; Wiers, Reinout W.; Teachman, Bethany A.; Gasser, Melissa L.; Westgate, Erin C.; Cousijn, Janna; Enkema, Matthew C.; Neighbors, Clayton
2015-01-01
There is preliminary evidence that approach avoid training can shift implicit alcohol associations and improve treatment outcomes. We sought to replicate and extend those findings in US undergraduate social drinkers (Study 1) and at-risk drinkers (Study 2). Three adaptations of the approach avoid task (AAT) were tested. The first adaptation – the approach avoid training – was a replication and targeted implicit alcohol approach associations. The remaining two adaptations – the general identity and personalized identity trainings – targeted implicit drinking identity associations, which are robust predictors of hazardous drinking in US undergraduates. Study 1 included 300 undergraduate social drinkers. They were randomly assigned to real or sham training conditions for one of the three training adaptations, and completed two training sessions, spaced one week apart. Study 2 included 288 undergraduates at risk for alcohol use disorders. The same training procedures were used, but the two training sessions occurred within a single week. Results were not as expected. Across both studies, the approach avoid training yielded no evidence of training effects on implicit alcohol associations or alcohol outcomes. The general identity training also yielded no evidence of training effects on implicit alcohol associations or alcohol outcomes with one exception; individuals who completed real training demonstrated no changes in drinking refusal self-efficacy whereas individuals who completed sham training had reductions in self-efficacy. Finally, across both studies, the personalized identity training yielded no evidence of training effects on implicit alcohol associations or alcohol outcomes. Despite having relatively large samples and using a well-validated training task, study results indicated all three training adaptations were ineffective at this dose in US undergraduates. These findings are important because training studies are costly and labor-intensive. Future research may benefit from focusing on more severe populations, pairing training with other interventions, increasing training dose, and increasing gamification of training tasks. PMID:26241316
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
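A minimal sketch of the design criterion (a 1-D toy stand-in for the transport solver; the grid, noise level and candidate locations are assumptions): for each candidate well, the expected relative entropy between posterior and prior is estimated by Monte Carlo, and the maximizer is selected. In the actual approach, a sparse-grid surrogate would replace the forward model below.

import numpy as np

rng = np.random.default_rng(0)
thetas = np.linspace(0.0, 10.0, 201)           # discretized unknown source location
prior = np.full(thetas.size, 1.0 / thetas.size)
sigma = 0.05                                   # measurement noise level

def forward(theta, s):                         # stand-in for the transport solver
    return np.exp(-0.5 * (s - theta) ** 2)     # concentration at well location s

def expected_relative_entropy(s, n_mc=300):
    gain = 0.0
    for _ in range(n_mc):
        theta_true = rng.choice(thetas, p=prior)
        y = forward(theta_true, s) + rng.normal(0.0, sigma)
        like = np.exp(-0.5 * ((y - forward(thetas, s)) / sigma) ** 2)
        post = like * prior
        post /= post.sum()
        nz = post > 0
        gain += np.sum(post[nz] * np.log(post[nz] / prior[nz]))
    return gain / n_mc

candidates = np.linspace(0.0, 10.0, 21)
best = max(candidates, key=expected_relative_entropy)
print("most informative well location:", best)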
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
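The reliability idea admits a compact sketch (the models and the pointwise error bound below are illustrative assumptions): when the surrogate's error bound cannot flip the event indicator, the surrogate is trusted; otherwise the high-fidelity model is evaluated, so the resulting estimate matches a full high-fidelity Monte Carlo estimate.

import numpy as np

rng = np.random.default_rng(0)
threshold = 1.5                                 # event of interest: Q(x) > threshold

def high_fidelity(x):
    return x ** 2 + 0.1 * np.sin(20.0 * x)

def surrogate(x):
    return x ** 2                               # cheap approximation of Q

err_bound = 0.1                                 # assumed pointwise bound on |Q - q|

xs = rng.uniform(0.0, 2.0, 10000)
q = surrogate(xs)
reliable = np.abs(q - threshold) > err_bound    # the bound cannot flip the indicator
indicator = q > threshold
unreliable = ~reliable
indicator[unreliable] = high_fidelity(xs[unreliable]) > threshold

print(f"P(event) ~ {indicator.mean():.4f}, "
      f"high-fidelity calls: {unreliable.sum()} of {xs.size}")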
Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging
Cua, Michelle; Wahl, Daniel J.; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J.; Jian, Yifan; Sarunic, Marinko V.
2016-01-01
Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems. PMID:27599635
NASA Technical Reports Server (NTRS)
Thau, F. E.; Montgomery, R. C.
1980-01-01
Techniques developed for the control of aircraft under changing operating conditions are used to develop a learning control system structure for a multi-configuration, flexible space vehicle. A configuration identification subsystem that is to be used with a learning algorithm and a memory and control process subsystem is developed. Adaptive gain adjustments can be achieved by this learning approach without prestoring of large blocks of parameter data and without dither signal inputs which will be suppressed during operations for which they are not compatible. The Space Shuttle Solar Electric Propulsion (SEP) experiment is used as a sample problem for the testing of adaptive/learning control system algorithms.
Ziaja, Beata; Saxena, Vikrant; Son, Sang-Kil; Medvedev, Nikita; Barbrel, Benjamin; Woloncewicz, Bianca; Stransky, Michal
2016-05-01
We report on the kinetic Boltzmann approach adapted for simulations of highly ionized matter created from a solid by its x-ray irradiation. X rays can excite inner-shell electrons, which leads to the creation of deeply lying core holes. Their relaxation, especially in heavier elements, can take complicated paths, leading to a large number of active configurations. Their number can be so large that solving the set of respective evolution equations becomes computationally inefficient and another modeling approach should be used instead. To circumvent this complexity, the commonly used continuum models employ a superconfiguration scheme. Here, we propose an alternative approach which still uses "true" atomic configurations but limits their number by restricting the sample relaxation to the predominant relaxation paths. We test its reliability, performing respective calculations for a bulk material consisting of light atoms and comparing the results with a full calculation including all relaxation paths. Prospective application for heavy elements is discussed.
Adapted random sampling patterns for accelerated MRI.
Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf
2011-02-01
Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out and yields superior results to non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
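The core of the method fits in a short sketch (synthetic reference image; the undersampling factor is an assumption): the normalized power spectrum of reference data serves directly as the probability density from which sampling positions are drawn, with no polynomial model or parameter tuning.

import numpy as np

rng = np.random.default_rng(0)
ny, nx = 128, 128
yy, xx = np.mgrid[-1:1:1j * ny, -1:1:1j * nx]
reference = (xx ** 2 + yy ** 2 < 0.5).astype(float)      # toy reference image

power = np.abs(np.fft.fftshift(np.fft.fft2(reference))) ** 2
pdf = power + 1e-12                                      # avoid exact zeros
pdf /= pdf.sum()                                         # sampling density over k-space

n_samples = int(0.25 * ny * nx)                          # 4-fold undersampling
idx = rng.choice(ny * nx, size=n_samples, replace=False, p=pdf.ravel())
mask = np.zeros(ny * nx, dtype=bool)
mask[idx] = True
mask = mask.reshape(ny, nx)
print("sampling density near k-space centre:",
      round(mask[ny//2-8:ny//2+8, nx//2-8:nx//2+8].mean(), 2))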
Boulgarides, L K; Barakatt, E; Coleman-Salgado, B
2014-01-01
Parkinson's disease (PD) is a neurodegenerative disease that affects muscle tone, strength, flexibility, motor control, psychological outlook, cognition, and function. Exercise has been found to improve physical ability and psychological outlook, but the effect of yoga on individuals with PD has not been well researched. The purposes of this study were to identify outcome measures that were responsive to change in individuals with PD after an 8-week adaptive yoga program and to determine appropriate sample sizes for future studies. In a repeated measures design, 10 participants with a Hoehn and Yahr stage of 2 or 3 were tested prior to and after an 8-week control phase and again after they underwent an 8-week adaptive yoga program. Analysis of variance (ANOVA) tests revealed differences in time of measure that approached significance for the depression subscale of the Hospital Anxiety and Depression Scale (HADS) (p = 0.008) and the 30-Second Chair Stand (TSCS) (p = 0.013). The interaction between time of measure and gender approached significance for the Sit-and-Reach Test (SRT) (p = 0.08 and 0.03, right and left respectively), with male participants improving in sit-and-reach flexibility compared with female participants after intervention. The interaction between time of measure and age approached significance for the Single-Leg Balance test (SLB) (p = 0.007), with younger participants improving in SLB time after intervention. Power calculations found that a sample size ranging from 33 to 153 would be required to achieve significance at the 0.01 level in the various outcome measures in a future study of this design. The depression subscale of the HADS, the TSCS, the SLB, and the right and left SRT were the measures that changed following the yoga intervention and are recommended as outcome measures in future studies investigating the effectiveness of yoga for individuals with PD. This preliminary study supports further investigation of adaptive yoga using a randomized design and a larger sample size of individuals with PD.
Knowledge-based nonuniform sampling in multidimensional NMR.
Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C
2011-07-01
The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
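Envelope matched sampling is easy to sketch (the grid size, sample budget and T2 are illustrative assumptions): evolution times are drawn without replacement with probability matched to the expected signal decay.

import numpy as np

rng = np.random.default_rng(0)
n_grid, n_samples = 1024, 128          # Nyquist grid points, NUS samples
dt, T2 = 1.0e-4, 0.05                  # Nyquist interval and decay time (s)
t = np.arange(n_grid) * dt

p = np.exp(-t / T2)                    # sampling density matched to the envelope
p /= p.sum()
picked = np.sort(rng.choice(n_grid, size=n_samples, replace=False, p=p))

print("fraction of samples taken before one T2:", np.mean(t[picked] < T2))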
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
Ash, A; Schwartz, M; Payne, S M; Restuccia, J D
1990-11-01
Medical record review is increasing in importance as the need to identify and monitor utilization and quality-of-care problems grows. To conserve resources, reviews are usually performed on a subset of cases. If judgment is used to identify subgroups for review, this raises the following questions: How should subgroups be determined, particularly since the locus of problems can change over time? What standard of comparison should be used in interpreting rates of problems found in subgroups? How can population problem rates be estimated from observed subgroup rates? How can the bias be avoided that arises because reviewers know that selected cases are suspected of having problems? How can changes in problem rates over time be interpreted when evaluating intervention programs? Simple random sampling, an alternative to subgroup review, overcomes the problems implied by these questions but is inefficient. The Self-Adapting Focused Review System (SAFRS), introduced and described here, provides an adaptive approach to record selection that is based upon model-weighted probability sampling. It retains the desirable inferential properties of random sampling while allowing reviews to be concentrated on the cases currently thought most likely to be problematic. Model development and evaluation are illustrated using hospital data to predict inappropriate admissions.
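A minimal sketch of model-weighted probability sampling with design-based estimation (the risk model here is a random stand-in, not SAFRS itself): review effort concentrates on high-risk records, and Horvitz-Thompson weighting recovers an unbiased population problem rate.

import numpy as np

rng = np.random.default_rng(0)
N, n = 5000, 400                               # population of records, review budget
risk = rng.beta(1, 6, N)                       # model-predicted problem probability
problem = rng.uniform(size=N) < risk           # true (normally unknown) status

pi = np.clip(n * risk / risk.sum(), 1e-6, 1.0) # inclusion probabilities, higher for risky cases
selected = rng.uniform(size=N) < pi            # Poisson probability sampling

# Horvitz-Thompson weighting (1/pi) undoes the focusing bias.
ht_rate = np.sum(problem[selected] / pi[selected]) / N
print(f"true problem rate {problem.mean():.3f}, "
      f"HT estimate {ht_rate:.3f}, records reviewed {selected.sum()}")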
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoresis measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified: higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on the migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
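A minimal sketch of a velocity-adaptive moving average (the window scaling and test signal are our assumptions, not the published algorithm's constants): the averaging window widens with migration time, so late, low-frequency peaks receive stronger smoothing than early, high-frequency ones.

import numpy as np

def adaptive_moving_average(signal, dt, base_window_s=0.5, growth=0.02):
    """Average each point over a window that widens with migration time."""
    out = np.empty_like(signal, dtype=float)
    for i in range(signal.size):
        half = int((base_window_s + growth * i * dt) / (2.0 * dt))
        lo, hi = max(0, i - half), min(signal.size, i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out

rng = np.random.default_rng(0)
dt = 0.05                                    # 20 Hz sampling
t = np.arange(0.0, 120.0, dt)
# Early (fast) analyte: narrow peak; late (slow) analyte: broad peak.
clean = np.exp(-0.5 * ((t - 30.0) / 0.4) ** 2) + np.exp(-0.5 * ((t - 90.0) / 1.2) ** 2)
noisy = clean + rng.normal(0.0, 0.05, t.size)
smoothed = adaptive_moving_average(noisy, dt)
print("baseline noise std before/after:",
      round(float(noisy[:200].std()), 3), round(float(smoothed[:200].std()), 3))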
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
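The lookup-predict-compute cascade can be sketched compactly (a Python dict stands in for Redis, and the kernel, tolerance and micro-scale model are illustrative assumptions): a kriging-style predictor returns an estimate with a variance, and the expensive model runs only when that variance exceeds the tolerance.

import numpy as np

db = {}                                        # stand-in for the Redis table

def micro_scale_model(x):                      # stand-in for a costly MD evaluation
    return np.sin(3.0 * x) + 0.5 * x

def kriging_predict(x, pts, vals, length=0.2):
    """Kriging-style estimate and variance from stored points (unit-variance RBF)."""
    k = np.exp(-0.5 * ((x - pts) / length) ** 2)
    K = np.exp(-0.5 * ((pts[:, None] - pts[None, :]) / length) ** 2)
    K += 1e-8 * np.eye(pts.size)
    w = np.linalg.solve(K, k)
    return w @ vals, max(1.0 - w @ k, 0.0)     # mean, predictive variance

def flux(x, tol=1e-3):
    if db:
        pts = np.array(list(db.keys())); vals = np.array(list(db.values()))
        mean, var = kriging_predict(x, pts, vals)
        if var < tol:
            return mean                        # fast path: prediction suffices
    y = micro_scale_model(x)                   # slow path: run the micro-scale model
    db[x] = y
    return y

rng = np.random.default_rng(0)
queries = rng.uniform(0.0, 1.0, 300)
results = [flux(float(x)) for x in queries]
print("micro-scale evaluations:", len(db), "out of", len(queries), "queries")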
Grizzle, R E; Ward, L G; Fredriksson, D W; Irish, J D; Langan, R; Heinig, C S; Greene, J K; Abeels, H A; Peter, C R; Eberhardt, A L
2014-11-15
The seafloor at an open ocean finfish aquaculture facility in the western Gulf of Maine, USA was monitored from 1999 to 2008 by sampling sites inside a predicted impact area modeled by oceanographic conditions and fecal and food settling characteristics, and nearby reference sites. Univariate and multivariate analyses of benthic community measures from box core samples indicated minimal or no significant differences between impact and reference areas. These findings resulted in the development of an adaptive monitoring protocol involving initial low-cost methods that required more intensive and costly efforts only when negative impacts were initially indicated. The continued growth of marine aquaculture is dependent on further development of farming methods that minimize negative environmental impacts, as well as effective monitoring protocols. Adaptive monitoring protocols, such as the one described herein, coupled with mathematical modeling approaches, have the potential to provide effective protection of the environment while minimizing monitoring effort and costs. Copyright © 2014 Elsevier Ltd. All rights reserved.
Learning free energy landscapes using artificial neural networks.
Sidky, Hythem; Whitmer, Jonathan K
2018-03-14
Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.
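A minimal sketch of the ANN-biasing idea under stated assumptions: a one-dimensional collective variable s with kT = 1, a small PyTorch network fit to the running free-energy estimate F(s) = -ln p(s) from the biased histogram, and L2 weight decay standing in for the paper's Bayesian regularization. The network architecture and update loop are illustrative, not the authors' exact scheme:

```python
import numpy as np
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
# weight_decay is a crude stand-in for Bayesian regularization of the weights
opt = torch.optim.Adam(net.parameters(), lr=1e-2, weight_decay=1e-4)

def update_bias(samples_s, bin_edges):
    """Fit the network to the current free-energy estimate from biased
    samples; -net(s) then serves as the next smooth biasing potential."""
    hist, edges = np.histogram(samples_s, bins=bin_edges, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    mask = hist > 0
    s = torch.tensor(centers[mask], dtype=torch.float32).unsqueeze(1)
    F = torch.tensor(-np.log(hist[mask]), dtype=torch.float32).unsqueeze(1)
    for _ in range(500):
        opt.zero_grad()
        loss = torch.mean((net(s) - F) ** 2)
        loss.backward(); opt.step()
    return lambda s_val: -net(torch.tensor([[s_val]],
                                           dtype=torch.float32)).item()
```

Because the bias is a smooth global function rather than a sum of fixed kernels, it can conform to sharp and flat regions of the landscape alike, which is the property the abstract emphasizes.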
How fisheries management can benefit from genomics?
Valenzuela-Quiñonez, Fausto
2016-09-01
Fisheries genomics is an emerging field that advocates the application of genomic tools to address questions in fisheries management. Genomic approaches bring a new paradigm for fisheries management by making it possible to integrate adaptive diversity to understand fundamental aspects of fisheries resources. Hence, this review is focused on the relevance of genomic approaches to solve fisheries-specific questions. In particular, the detection of adaptive diversity (outlier loci) provides an unprecedented opportunity to understand bio-complexity, increased power to trace the origin of processed samples for enforcement purposes, and the potential to understand the genetic basis of micro-evolutionary effects of fisheries-induced evolution and climate change. The understanding of adaptive diversity patterns will be the cornerstone of the future links between fisheries and genomics. These studies will help stakeholders anticipate the potential effects of fishing or climate change on the resilience of fisheries stocks; consequently, in the near future, fisheries sciences might integrate evolutionary principles with fisheries management. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Mixture-based gatekeeping procedures in adaptive clinical trials.
Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji
2018-01-01
Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
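The combination function approach mentioned above has a standard concrete form; the sketch below shows the widely used inverse-normal combination of stage-wise p-values with pre-specified weights. This is one common combination function for adaptive multistage designs, not necessarily the exact one used in the paper:

```python
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=0.5):
    """Combine stage 1 and stage 2 p-values for one hypothesis with
    pre-specified weights satisfying w1**2 + w2**2 = 1; because the weights
    are fixed in advance, the type I error rate is controlled even after
    data-driven adaptations such as a sample size increase."""
    w2 = (1 - w1 ** 2) ** 0.5
    z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
    return 1 - norm.cdf(z)          # combined p-value

print(inverse_normal_combination(0.04, 0.03))
```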
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bujewski, G.E.; Johnson, R.L.
1996-04-01
Adaptive sampling programs provide real opportunities to save considerable time and money when characterizing hazardous waste sites. This Strategic Environmental Research and Development Program (SERDP) project demonstrated two decision-support technologies, SitePlanner™ and Plume™, that can facilitate the design and deployment of an adaptive sampling program. A demonstration took place at Joliet Army Ammunition Plant (JAAP), and was unique in that it was tightly coupled with ongoing Army characterization work at the facility, with close scrutiny by both state and federal regulators. The demonstration was conducted in partnership with the Army Environmental Center's (AEC) Installation Restoration Program and AEC's Technology Development Program. AEC supported researchers from Tufts University who demonstrated innovative field analytical techniques for the analysis of TNT and DNT. SitePlanner™ is an object-oriented database specifically designed for site characterization that provides an effective way to compile, integrate, manage and display site characterization data as it is being generated. Plume™ uses a combination of Bayesian analysis and geostatistics to provide technical staff with the ability to quantitatively merge soft and hard information for an estimate of the extent of contamination. Plume™ provides an estimate of contamination extent, measures the uncertainty associated with the estimate, determines the value of additional sampling, and locates additional samples so that their value is maximized.
NASA Astrophysics Data System (ADS)
Fasnacht, Marc
We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well-suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration Method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state-of-the-art method, and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With contributions of the solvent determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the Adaptive Integration Method. The results demonstrate a high degree of consistency of the method.
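A minimal sketch of the Adaptive Histogram Method's core update, assuming a one-dimensional parameter of interest and kT = 1; the flattening rule (bias proportional to the log of the accumulated histogram) is the general idea, with constants and interpolation details left illustrative:

```python
import numpy as np

def histogram_bias(samples, bins, kT=1.0):
    """Build a biasing potential from the histogram of the parameter of
    interest recorded so far: adding kT*ln p(s) as a bias suppresses
    over-visited values and pushes the next cycle toward uniform sampling."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    bias = np.where(hist > 0, kT * np.log(hist / hist.max()), 0.0)
    return centers, bias   # interpolate bias(s) in the next simulation cycle
```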
Bayesian selective response-adaptive design using the historical control.
Kim, Mi-Ok; Harun, Nusrat; Liu, Chunyan; Khoury, Jane C; Broderick, Joseph P
2018-06-13
High quality historical control data, if incorporated, may reduce sample size, trial cost, and duration. An overly optimistic use of the data, however, may result in bias under prior-data conflict. Motivated by well-publicized two-arm comparative trials in stroke, we propose a Bayesian design that both adaptively incorporates historical control data and selectively adapts the treatment allocation ratios within an ongoing trial in response to the relative treatment effects. The proposed design differs from existing designs that borrow from historical controls. As opposed to reducing the number of subjects assigned to the control arm blindly, this design does so adaptively to the relative treatment effects only if evaluation of cumulated current trial data combined with the historical control suggests the superiority of the intervention arm. We used the effective historical sample size approach to quantify borrowed information on the control arm and modified the treatment allocation rules of the doubly adaptive biased coin design to incorporate the quantity. The modified allocation rules were then implemented under the Bayesian framework with commensurate priors addressing prior-data conflict. Trials were also more frequently concluded early, in line with the underlying truth, reducing trial cost and duration, and yielded parameter estimates with smaller standard errors. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons, Ltd.
Waldispühl, Jérôme; Ponty, Yann
2011-11-01
The analysis of the relationship between sequences and structures (i.e., how mutations affect structures and reciprocally how structures influence mutations) is essential to decipher the principles driving molecular evolution, to infer the origins of genetic diseases, and to develop bioengineering applications such as the design of artificial molecules. Because their structures can be predicted from the sequence data only, RNA molecules provide a good framework to study this sequence-structure relationship. We recently introduced a suite of algorithms called RNAmutants, which allows a complete exploration of RNA sequence-structure maps in polynomial time and space. Formally, RNAmutants takes an input sequence (or seed) to compute the Boltzmann-weighted ensembles of mutants with exactly k mutations, and sample mutations from these ensembles. However, this approach suffers from major limitations. Indeed, since the Boltzmann probabilities of the mutations depend on the free energy of the structures, RNAmutants has difficulty sampling mutant sequences with low G+C-contents. In this article, we introduce an unbiased adaptive sampling algorithm that enables RNAmutants to sample regions of the mutational landscape poorly covered by classical algorithms. We applied these methods to sample mutations with low G+C-contents. These adaptive sampling techniques can be easily adapted to explore other regions of the sequence and structural landscapes which are difficult to sample. Importantly, these algorithms come at a minimal computational cost. We demonstrate the insights offered by these techniques on studies of complete RNA sequence-structure maps of sizes up to 40 nucleotides. Our results indicate that the G+C-content has a strong influence on the size and shape of the evolutionary accessible sequence and structural spaces. In particular, we show that low G+C-contents favor the appearance of internal loops and thus possibly the synthesis of tertiary structure motifs. On the other hand, high G+C-contents significantly reduce the size of the evolutionary accessible mutational landscapes.
How to Assess Vulnerabilities of Water Policies to Global Change?
NASA Astrophysics Data System (ADS)
Kumar, A.; Haasnoot, M.; Weijs, S.
2017-12-01
Water managers are confronted with uncertainties arising from hydrological, societal, economical and political drivers. To manage these uncertainties two paradigms have been identified: top-down and bottom-up approaches. Top-down or prediction-based approaches use socio-economic scenarios together with a discrete set of GCM projections (often downscaled) to assess the expected impact of drivers and policies on water resource system through various hydrological and social systems models. Adaptation strategies to alleviate these impacts are then identified and tested against the scenarios. To address GCM and downscaling uncertainties, these approaches put more focus on climate predictions, rather than the decision problem itself. Triggered by the wish to have a more scenario-neutral approach and address downscaling uncertainties, recent analyses have shifted towards vulnerability-based (bottom-up or decision-centric) approaches. They begin at the local scale by addressing socio-economic responses to climate, often involving stakeholders' input; identify vulnerabilities under a larger sample of plausible futures and evaluate sensitivity and robustness of possible adaptation options. Several bottom-up approaches have emerged so far and are increasingly recommended. Fundamentally they share several core ideas, however, subtle differences exist in vulnerability assessment, visualization tools for exploring vulnerabilities and computational methods used for identifying robust water policies. Through this study, we try to identify how these approaches are progressing, how the climate and non-climate uncertainties are being confronted and how to integrate existing and new tools. We find that the choice of a method may depend on the number of vulnerability drivers identified and type of threshold levels (environmental conditions or policy objectives) defined. Certain approaches are suited well for assessing adaptive capacities, tipping points and sequencing of decisions. However, visualization of the vulnerability domain is still challenging if multiple drivers are present. New emerging tools are focused on generating synthetic scenarios, addressing multiple objectives, linking decision-making frameworks to adaptation pathways and communicating risks to the stakeholders.
Zeller, Fabian; Zacharias, Martin
2014-02-11
The accurate calculation of potentials of mean force for ligand-receptor binding is one of the most important applications of molecular simulation techniques. Typically, the separation distance between ligand and receptor is chosen as a reaction coordinate along which a PMF can be calculated with the aid of umbrella sampling (US) techniques. In addition, restraints can be applied on the relative position and orientation of the partner molecules to reduce accessible phase space. An approach combining such phase space reduction with flattening of the free energy landscape and configurational exchanges has been developed, which significantly improves the convergence of PMF calculations in comparison with standard umbrella sampling. The free energy surface along the reaction coordinate is smoothened by iteratively adapting biasing potentials corresponding to previously calculated PMFs. Configurations are allowed to exchange between the umbrella simulation windows via the Hamiltonian replica exchange method. The application to a DNA molecule in complex with a minor groove binding ligand indicates significantly improved convergence and complete reversibility of the sampling along the pathway. The calculated binding free energy is in excellent agreement with experimental results. In contrast, the application of standard US resulted in large differences between PMFs calculated for association and dissociation pathways. The approach could be a useful alternative to standard US for computational studies on biomolecular recognition processes.
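The configurational-exchange step described above has a standard Metropolis form; a minimal sketch follows, assuming neighboring umbrella windows at the same temperature whose window potentials U_i (harmonic restraint plus any adaptive bias) can be cross-evaluated on each other's configurations. Function names are illustrative:

```python
import numpy as np

def exchange_accept(x_i, x_j, U_i, U_j, beta=1.0,
                    rng=np.random.default_rng()):
    """Hamiltonian replica exchange between neighboring umbrella windows:
    accept a swap of configurations x_i, x_j with Metropolis probability
    min(1, exp(-beta * [U_i(x_j) + U_j(x_i) - U_i(x_i) - U_j(x_j)]))."""
    delta = beta * (U_i(x_j) + U_j(x_i) - U_i(x_i) - U_j(x_j))
    return rng.random() < np.exp(-delta) if delta > 0 else True

# e.g. harmonic umbrella windows along a separation coordinate r(x):
# U_i = lambda x: 0.5 * k * (r(x) - r0_i) ** 2 + bias_i(r(x))
```

Allowing configurations to migrate between windows is what restores reversibility along the pathway and removes the association/dissociation hysteresis seen with standard umbrella sampling.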
Separation of high-resolution samples of overlapping latent fingerprints using relaxation labeling
NASA Astrophysics Data System (ADS)
Qian, Kun; Schott, Maik; Schöne, Werner; Hildebrandt, Mario
2012-06-01
The analysis of latent fingerprint patterns generally requires clearly recognizable friction ridge patterns. Currently, overlapping latent fingerprints pose a major problem for traditional crime scene investigation. This is due to the fact that these fingerprints usually have very similar optical properties. Consequently, the distinction of two or more overlapping fingerprints from each other is not trivially possible. While it is possible to employ chemical imaging to separate overlapping fingerprints, the corresponding methods require sophisticated fingerprint acquisition methods and are not compatible with conventional forensic fingerprint data. A separation technique based purely on the local orientation of the ridge patterns of overlapping fingerprints was proposed by Chen et al. and quantitatively evaluated using off-the-shelf fingerprint matching software, mostly with artificially composed overlapping fingerprint samples, a choice motivated by the scarce availability of authentic test samples. The work described in this paper adapts the approach presented by Chen et al. for its application on authentic high resolution fingerprint samples acquired by a contactless measurement device based on a Chromatic White Light (CWL) sensor. An evaluation of the work is also given, with an analysis of all adapted parameters. Additionally, the separability requirement proposed by Chen et al. is also evaluated for practical feasibility. Our results show promising tendencies for the application of this approach on high-resolution data, yet the separability requirement still poses a further challenge.
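For readers unfamiliar with relaxation labeling, a minimal sketch of the classic Rosenfeld-Hummel-Zucker style iteration is shown below, applied abstractly to the task of assigning each local orientation element to fingerprint 1 or fingerprint 2. The dense compatibility tensor is purely illustrative (a practical implementation would restrict support to spatial neighbors), and the scheme assumes neighborhood supports scaled to stay above -1:

```python
import numpy as np

def relaxation_labeling(p, R, n_iter=50):
    """Iteratively sharpen label probabilities using neighborhood support.
    p: (n_elements, n_labels) initial label probabilities
    R: (n_elements, n_labels, n_elements, n_labels) compatibility
       coefficients between elements' labels, scaled into (-1, 1)."""
    for _ in range(n_iter):
        q = np.einsum('iajb,jb->ia', R, p)      # support from neighbors
        p = p * (1.0 + q)                       # reinforce compatible labels
        p /= p.sum(axis=1, keepdims=True)       # renormalize per element
    return p
```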
Wilson, Sylia; Schalet, Benjamin D; Hicks, Brian M; Zucker, Robert A
2013-08-01
The present study used an empirical, "bottom-up" approach to delineate the structure of the California Child Q-Set (CCQ), a comprehensive set of personality descriptors, in a sample of 373 preschool-aged children. This approach yielded two broad trait dimensions, Adaptive Socialization (emotional stability, compliance, intelligence) and Anxious Inhibition (emotional/behavioral introversion). Results demonstrate the value of using empirical derivation to investigate the structure of personality in young children, speak to the importance of early-evident personality traits for adaptive development, and are consistent with a growing body of evidence indicating that personality structure in young children is similar, but not identical to, that in adults, suggesting a model of broad personality dimensions in childhood that evolve into narrower traits in adulthood.
Application of adaptive cluster sampling to low-density populations of freshwater mussels
Smith, D.R.; Villella, R.F.; Lemarie, D.P.
2003-01-01
Freshwater mussels appear to be promising candidates for adaptive cluster sampling because they are benthic macroinvertebrates that cluster spatially and are frequently found at low densities. We applied adaptive cluster sampling to estimate density of freshwater mussels at 24 sites along the Cacapon River, WV, where a preliminary timed search indicated that mussels were present at low density. Adaptive cluster sampling increased yield of individual mussels and detection of uncommon species; however, it did not improve precision of density estimates. Because finding uncommon species, collecting individuals of those species, and estimating their densities are important conservation activities, additional research is warranted on application of adaptive cluster sampling to freshwater mussels. However, at this time we do not recommend routine application of adaptive cluster sampling to freshwater mussel populations. The ultimate, and currently unanswered, question is how to tell when adaptive cluster sampling should be used, i.e., when is a population sufficiently rare and clustered for adaptive cluster sampling to be efficient and practical? A cost-effective procedure needs to be developed to identify biological populations for which adaptive cluster sampling is appropriate.
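A minimal sketch of adaptive cluster sampling on a quadrat grid, under the usual textbook design: start from a simple random sample of cells and, whenever a cell meets the condition (here, count at or above a threshold), also survey its four neighbors, repeating until no newly added cell qualifies. Unbiased density estimation from such a sample requires modified (Horvitz-Thompson type) weights, which this sketch omits:

```python
import numpy as np
from collections import deque

def adaptive_cluster_sample(counts, n_initial, threshold=1,
                            rng=np.random.default_rng(3)):
    """Return the set of surveyed grid cells under adaptive cluster
    sampling with a 4-neighborhood expansion rule."""
    nr, nc = counts.shape
    initial = [divmod(k, nc)
               for k in rng.choice(nr * nc, n_initial, replace=False)]
    sampled, queue = set(), deque(initial)
    while queue:
        r, c = queue.popleft()
        if (r, c) in sampled or not (0 <= r < nr and 0 <= c < nc):
            continue
        sampled.add((r, c))
        if counts[r, c] >= threshold:            # condition met: expand
            queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return sampled

# toy mussel bed: mostly empty quadrats with one small cluster
grid = np.zeros((20, 20), int)
grid[5:7, 5:8] = np.random.default_rng(4).poisson(3, (2, 3))
cells = adaptive_cluster_sample(grid, n_initial=15)
print(len(cells), sum(grid[rc] for rc in cells))
```

The sketch also illustrates the yield/precision trade-off the abstract reports: expansion around occupied cells increases the number of individuals encountered, but the final sample size is random, which complicates variance.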
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. Regions of poor configuration space overlap are detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water. We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
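A minimal sketch of the linear-basis-function idea, with plain single-state importance reweighting standing in for full MBAR for brevity; the basis functions and coupling terms are hypothetical. The key point is that once the per-sample basis energies H[i, n] are stored, any new parameter set can be evaluated without rerunning the simulation:

```python
import numpy as np

def potential(theta, H):
    """U_theta(x_n) = sum_i f_i(theta) * h_i(x_n): the energy at any
    parameter set theta is a linear combination of stored basis energies
    H[i, n], so no new simulation is needed to evaluate it."""
    f = np.array([theta[0], theta[0] ** 2, theta[1]])  # hypothetical couplings
    return f @ H

def reweighted_average(obs, theta_new, theta_samp, H, beta=1.0):
    """Estimate <obs> at unsampled theta_new from samples drawn at
    theta_samp by exponential reweighting of the energy difference."""
    dU = potential(theta_new, H) - potential(theta_samp, H)
    w = np.exp(-beta * (dU - dU.min()))      # shift for numerical stability
    return (w * obs).sum() / w.sum()
```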
Adaptive sampling in behavioral surveys.
Thompson, S K
1997-01-01
Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
Widefield compressive multiphoton microscopy.
Alemohammad, Milad; Shin, Jaewook; Tran, Dung N; Stroud, Jasper R; Chin, Sang Peter; Tran, Trac D; Foster, Mark A
2018-06-15
A single-pixel compressively sensed architecture is exploited to simultaneously achieve a 10× reduction in acquired data compared with the Nyquist rate, while alleviating limitations faced by conventional widefield temporal focusing microscopes due to scattering of the fluorescence signal. Additionally, we demonstrate an adaptive sampling scheme that further improves the compression and speed of our approach.
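A minimal sketch of single-pixel compressive acquisition and sparse recovery, under stated assumptions: a directly sparse 1-D "scene", random binary illumination patterns, and orthogonal matching pursuit for reconstruction in place of whatever solver the authors use. The roughly 10x undersampling mirrors the compression ratio reported above:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)
n, m = 256, 26                          # ~10x fewer measurements than pixels
x = np.zeros(n)
x[rng.choice(n, 6, replace=False)] = rng.random(6) + 0.5   # sparse scene
Phi = rng.choice([0.0, 1.0], size=(m, n))   # binary single-pixel patterns
y = Phi @ x                                 # one detector reading per pattern

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=6).fit(Phi, y)
x_hat = omp.coef_
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # relative error
```

Because each measurement integrates light over the whole field onto a single detector, scattering of the fluorescence signal degrades the measurement far less than it degrades a pixelated image, which is the advantage the abstract highlights.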
Panigrahi, Ansuman; Das, Sai C; Sahoo, Prabhudarsan
2018-01-01
Adaptive functioning develops throughout early childhood, and limitations in adaptive functioning indicate that the child may have developmental or emotional problems or even mental retardation. Little is known about the adaptive functioning or developmental status of slum children. The present cross-sectional study was undertaken during the year 2014 to assess the status of adaptive functioning among girl children aged between 3 and 9 years residing in slum areas of Bhubaneswar and to explore the factors associated with poor adaptive functioning. A stratified multi-stage cluster random sampling technique was used to select the study population; 256 mother-child pairs from 256 households in selected slum areas were studied. Demographic information was collected, and adaptive functioning was assessed using the modified Vineland Social Maturity Scale. Univariate and multivariate analyses were carried out using Statistical Package for Social Sciences (SPSS) version 21. One-fifth (54, 21%) of the girls sampled had poor adaptive functioning, and 44 (17%) had poor cognitive functioning. Multivariate analysis revealed that the age of the child, parents' education, presence of stunting in children and attending school/early childhood centre were strong predictors of adaptive functioning in slum children. One-fifth of girls from slums are developmentally vulnerable; parental education, stunting and early childhood education or exposure to schooling are modifiable factors influencing children's adaptive functioning. Health, education and welfare sectors need to be aware of this so that a multi-pronged approach can be planned to properly address this issue in one of the most disadvantaged sections of the society. © 2017 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).
Kappenman, Emily S; Luck, Steven J
2012-01-01
Event-related potentials (ERPs) are a powerful tool in understanding and evaluating cognitive, affective, motor, and sensory processing in both healthy and pathological samples. A typical ERP recording session takes considerable time but is designed to isolate only 1-2 components. Although this is appropriate for most basic science purposes, it is an inefficient approach for measuring the broad set of neurocognitive functions that may be disrupted in a neurological or psychiatric disease. The present study provides a framework for more efficiently evaluating multiple neural processes in a single experimental paradigm through the manipulation of functionally orthogonal dimensions. We describe the general MONSTER (Manipulation of Orthogonal Neural Systems Together in Electrophysiological Recordings) approach and explain how it can be adapted to investigate a variety of neurocognitive domains, ERP components, and neural processes of interest. We also demonstrate how this approach can be used to assess group differences by providing data from an implementation of the MONSTER approach in younger (18-30 y of age) and older (65-85 y of age) adult samples. This specific implementation of the MONSTER framework assesses 4 separate neural processes in the visual domain: (1) early sensory processing, using the C1 wave; (2) shifts of covert attention, with the N2pc component; (3) categorization, with the P3 component; and (4) self-monitoring, with the error-related negativity. Although the MONSTER approach is primarily described in the context of ERP experiments, it could also be adapted easily for use with functional magnetic resonance imaging.
Rapid prototyping of an adaptive light-source for mobile manipulators with EasyKit and EasyLab
NASA Astrophysics Data System (ADS)
Wojtczyk, Martin; Barner, Simon; Geisinger, Michael; Knoll, Alois
2008-08-01
While still not common in day-to-day business, mobile robot platforms form a growing market in robotics. Mobile platforms equipped with a manipulator for increased flexibility have been used successfully in biotech laboratories for sample management as shown on the well-known ESACT meetings. Navigation and object recognition are carried out by the utilization of a mounted machine vision camera. To cope with the different illumination conditions in a large laboratory, development of an adaptive light source was indispensable. We present our approach to rapidly developing a computer-controlled adaptive LED light within a single business day, utilizing the hardware toolbox EasyKit and our appropriate software counterpart EasyLab.
Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior
Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.
2008-01-01
In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives. ?? 2008, The International Biometric Society.
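A minimal sketch of the Dirichlet process prior via truncated stick-breaking, assuming a lognormal base distribution for site-level abundance rates; truncation level and hyperparameters are illustrative. The clustering behavior (many sites sharing a few atoms) is exactly the data-adaptive grouping the abstract describes:

```python
import numpy as np

def stick_breaking_abundance(alpha, base_sampler, n_sites, truncation=50,
                             rng=np.random.default_rng(6)):
    """Draw site-level latent abundance rates from a truncated Dirichlet
    process prior: stick-breaking weights over atoms from the base
    distribution, so sites cluster on shared abundance values."""
    v = rng.beta(1, alpha, truncation)
    w = v * np.concatenate([[1.0], np.cumprod(1 - v[:-1])])
    atoms = base_sampler(truncation)          # candidate abundance rates
    return atoms[rng.choice(truncation, n_sites, p=w / w.sum())]

rates = stick_breaking_abundance(
    2.0, lambda k: np.random.default_rng(7).lognormal(1.0, 0.7, k), 30)
print(np.unique(rates).size, "distinct rates across 30 sites")
```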
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
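A minimal sketch of "robust power across an effect size range", assuming a two-sample z-test and a maximin criterion (pick the design maximizing the minimum power over the range). This is one possible robustness criterion for illustration; the paper's optimality criterion also weighs sample size cost:

```python
import numpy as np
from scipy.stats import norm

def power(n_per_arm, delta, sigma=1.0, alpha=0.05):
    """Approximate two-sample z-test power at effect size delta."""
    z = delta / (sigma * np.sqrt(2.0 / n_per_arm)) - norm.ppf(1 - alpha / 2)
    return norm.cdf(z)

deltas = np.linspace(0.2, 0.5, 7)          # plausible effect-size range
candidates = {"n=100": 100, "n=150": 150, "n=200": 200}
best = max(candidates, key=lambda k: power(candidates[k], deltas).min())
print(best, {k: float(power(v, deltas).min()) for k, v in candidates.items()})
```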
Unsupervised active learning based on hierarchical graph-theoretic clustering.
Hu, Weiming; Hu, Wei; Xie, Nianhua; Maybank, Steve
2009-10-01
Most existing active learning approaches are supervised. Supervised active learning has the following problems: inefficiency in dealing with the semantic gap between the distribution of samples in the feature space and their labels, lack of ability in selecting new samples that belong to new categories that have not yet appeared in the training samples, and lack of adaptability to changes in the semantic interpretation of sample categories. To tackle these problems, we propose an unsupervised active learning framework based on hierarchical graph-theoretic clustering. In the framework, two promising graph-theoretic clustering algorithms, namely, dominant-set clustering and spectral clustering, are combined in a hierarchical fashion. Our framework has some advantages, such as ease of implementation, flexibility in architecture, and adaptability to changes in the labeling. Evaluations on data sets for network intrusion detection, image classification, and video classification have demonstrated that our active learning framework can effectively reduce the workload of manual classification while maintaining a high accuracy of automatic classification. It is shown that, overall, our framework outperforms the support-vector-machine-based supervised active learning, particularly in terms of dealing much more efficiently with new samples whose categories have not yet appeared in the training samples.
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the improved VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of a high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the already acquired information on the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
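A minimal sketch of the sequential-sampling component, with plain single-fidelity kriging standing in for the improved hierarchical kriging so that only the adaptive rule is shown: repeatedly add the candidate point where the kriging predictive standard deviation is largest. The toy high-fidelity function is hypothetical:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f_hi = lambda x: np.sin(8 * x) + 0.3 * x        # stand-in high-fidelity model
X = np.array([[0.1], [0.5], [0.9]])
y = f_hi(X.ravel())
cand = np.linspace(0, 1, 201).reshape(-1, 1)    # candidate sampling points

# sequential sampling: refit, then query where predictive uncertainty peaks
for _ in range(8):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-8).fit(X, y)
    _, sd = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(sd)]
    X = np.vstack([X, x_new])
    y = np.append(y, f_hi(x_new[0]))
```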
Location tests for biomarker studies: a comparison using simulations for the two-sample case.
Scheinhardt, M O; Ziegler, A
2013-01-01
Gene, protein, or metabolite expression levels are often non-normally distributed, heavy tailed and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in the case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy tailed or heavily skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy tailed distributions.
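For reference, the g-and-k family mentioned above is simulated through its quantile function; a minimal sketch follows, with the conventional fixed c = 0.8 and illustrative default parameters. The parameter g controls skewness and k tail heaviness (g = k = 0 recovers the normal):

```python
import numpy as np
from scipy.stats import norm

def g_and_k_sample(n, a=0.0, b=1.0, g=0.5, k=0.2, c=0.8,
                   rng=np.random.default_rng(8)):
    """Draw n variates from the g-and-k distribution by transforming
    standard normal quantiles through Q(z) = a + b*(1 + c*tanh(g*z/2))
    * z * (1 + z**2)**k."""
    z = norm.ppf(rng.uniform(size=n))
    return a + b * (1 + c * np.tanh(g * z / 2)) * z * (1 + z ** 2) ** k

x = g_and_k_sample(10_000, g=1.0, k=0.5)   # right-skewed, heavy-tailed group
```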
1989-04-01
strain-specific identification of HAV in human fecal samples was a major aim of the original contract application, as clinical trials of live and...derived materials and human and primate fecal specimens. 4. We molecularly cloned and partially sequenced the genome of PA21 strain HAV, a virus...antibody. This approach revealed that 99% of the infectious virus particles present in disrupted cell lysates from the 23rd passage of persistently
Adaptive measurement selection for progressive damage estimation
NASA Astrophysics Data System (ADS)
Zhou, Wenfan; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Chattopadhyay, Aditi; Peralta, Pedro
2011-04-01
Noise and interference in sensor measurements degrade the quality of data and have a negative impact on the performance of structural damage diagnosis systems. In this paper, a novel adaptive measurement screening approach is presented to automatically select the most informative measurements and use them intelligently for structural damage estimation. The method is implemented efficiently in a sequential Monte Carlo (SMC) setting using particle filtering. The noise suppression and improved damage estimation capability of the proposed method is demonstrated by an application to the problem of estimating progressive fatigue damage in an aluminum compact-tension (CT) sample using noisy PZT sensor measurements.
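A minimal sketch of one sequential Monte Carlo update with a simple measurement screen, under stated assumptions: a scalar damage state with a random-walk growth model, Gaussian measurement noise, and a fixed-width gate around the predicted measurement standing in for the paper's adaptive screening rule. All constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

def smc_damage_step(particles, weights, measurements, sigma=0.1, gate=3.0):
    """One particle-filter step for a slowly growing damage state. Each
    sensor reading is screened before use: readings outside a gate*sigma
    band around the predicted measurement are treated as noise or
    interference and skipped."""
    particles = particles + rng.normal(0.01, 0.005, particles.size)  # growth
    pred = particles.mean()
    for z in measurements:
        if abs(z - pred) > gate * sigma:
            continue                              # screen out this reading
        weights = weights * np.exp(-0.5 * ((z - particles) / sigma) ** 2)
    weights = weights / weights.sum()
    idx = rng.choice(particles.size, particles.size, p=weights)      # resample
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

particles = rng.normal(0.2, 0.05, 500)
weights = np.full(500, 1.0 / 500)
particles, weights = smc_damage_step(particles, weights, [0.21, 0.95, 0.19])
```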
Husereau, Don; Henshall, Chris; Jivraj, Jamil
2014-07-01
Adaptive approaches to the introduction of drugs and medical devices involve the use of an evolving evidence base rather than conventional single-point-in-time evaluations as a proposed means to promote patient access to innovation, reduce clinical uncertainty, ensure effectiveness, and improve the health technology development process. This report summarizes a Health Technology Assessment International (HTAi) Policy Forum discussion, drawing on presentations from invited experts, discussions among attendees about real-world case examples, and a background paper. For adaptive approaches to be understood, accepted, and implemented, the Forum identified several key issues that must be addressed. These include the need to define the goals of and to set priorities for adaptive approaches; to examine evidence collection approaches; to clarify the roles and responsibilities of stakeholders; to understand the implications of adaptive approaches on current legal and ethical standards; to determine costs of such approaches and how they will be met; and to identify differences in applying adaptive approaches to drugs versus medical devices. The Forum also explored the different implications of adaptive approaches for various stakeholders, including patients, regulators, HTA/coverage bodies, health systems, clinicians, and industry. A key outcome of the meeting was a clearer understanding of the opportunities and challenges adaptive approaches present. Furthermore, the Forum brought to light the critical importance of recognizing and including a full range of stakeholders as contributors to a shared decision-making model implicit in adaptive pathways in future discussions on, and implementation of, adaptive approaches.
Adaptive-numerical-bias metadynamics.
Khanjari, Neda; Eslami, Hossein; Müller-Plathe, Florian
2017-12-05
A metadynamics scheme is presented in which the free energy surface is filled with progressively adding adaptive biasing potentials, obtained from the accumulated probability distribution of the collective variables. Instead of adding Gaussians with assigned height and width, as in the conventional metadynamics method, here we add a more realistic adaptive biasing potential to the Hamiltonian of the system. The shape of the adaptive biasing potential is adjusted on the fly by sampling over the visited states. As the top of the barrier is approached, the biasing potentials become wider. This decreases the problem of trapping the system in the niches, introduced by the addition of Gaussians of fixed height in metadynamics. Our results for the free energy profiles of three test systems show that this method is more accurate and converges more quickly than the conventional metadynamics, and is quite comparable (in accuracy and convergence rate) with the well-tempered metadynamics method. © 2017 Wiley Periodicals, Inc.
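A minimal sketch of one bias update in this spirit, assuming a one-dimensional collective variable and kT = 1; the prefactor and binning are illustrative, not the paper's exact prescription. The increment is shaped by the accumulated probability distribution of recently visited values rather than by a fixed-width Gaussian:

```python
import numpy as np

grid = np.linspace(-np.pi, np.pi, 100)     # collective-variable grid
bias = np.zeros_like(grid)

def update_adaptive_bias(bias, visits_s, grid, kT=1.0, prefactor=0.2):
    """Raise the bias in proportion to the probability distribution
    accumulated over the collective-variable values visited in the last
    stride; frequently visited regions are pushed up the most."""
    hist, _ = np.histogram(visits_s, bins=len(grid),
                           range=(grid[0], grid[-1]), density=True)
    return bias + prefactor * kT * hist / hist.max()

# after each sampling stride:
# bias = update_adaptive_bias(bias, recent_cv_values, grid)
```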
Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James C.
2016-01-01
Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
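The detection analysis described above is naturally expressed as a logistic regression of detection outcomes on the environmental and scat covariates; a minimal sketch follows on simulated data, with all coefficients and covariate ranges hypothetical and chosen only so the signs mirror the reported direction of effects:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 300
X = np.column_stack([
    rng.uniform(0, 100, n),      # percent vegetative ground cover
    rng.integers(1, 60, n),      # number of fecal pellets
    rng.integers(0, 2, n),       # recent rain event (0/1)
])
# illustrative truth: cover and rain hurt detection, more pellets help
logits = -1.0 - 0.03 * X[:, 0] + 0.06 * X[:, 1] - 0.8 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.params)              # fitted detection-model coefficients
```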
Bojovic, Dragana; Bonzanigo, Laura; Giupponi, Carlo; Maziotis, Alexandros
2015-07-01
The new EU strategy on adaptation to climate change suggests flexible and participatory approaches. Face-to-face contact, although it involves time-consuming procedures with a limited audience, has often been considered the most effective participatory approach. In recent years, however, there has been an increase in the visibility of different citizens' initiatives in the online world, which strengthens the possibility of greater citizen agency. This paper investigates whether the Internet can ensure efficient public participation with meaningful engagement in climate change adaptation. In elucidating issues regarding climate change adaptation, we developed an eParticipation framework to explore adaptation capacity of agriculture to climate change in Northern Italy. Farmers were mobilised using a pre-existing online network. First they took part in an online questionnaire for revealing their perceptions of and reactions to the impacts of ongoing changes in agriculture. We used these results to suggest a portfolio of policy measures and to set evaluation criteria. Farmers then evaluated these policy options, using a multi criteria analysis tool with a simple user-friendly interface. Our results showed that eParticipation is efficient: it supports rapid data collection while involving a high number of participants. Moreover, we demonstrated that the digital divide is decreasingly an obstacle for using online spaces for public engagement. This research does not present eParticipation as a panacea. Rather, eParticipation was implemented with well-established participatory approaches to both validate the results and, consequently, communicate meaningful messages on local agricultural adaptation practices to regional decision-makers. Feedback from the regional decision-makers showed their interest in using eParticipation to improve communication with farmers in the future. We expect that, with further Internet proliferation, eParticipation may allow the inclusion of more representative samples, which would contribute to an informed and legitimate decision-making process. Copyright © 2015 Elsevier Ltd. All rights reserved.
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
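For orientation, a minimal sketch of the standard (single-model) stepping-stone estimator that the direct approach builds on, assuming MCMC samples of the log-likelihood are available at each power-posterior rung beta_k; computing the log Bayes factor as the difference of two such estimates is exactly the indirect route whose compounded errors the paper seeks to avoid:

```python
import numpy as np

def stepping_stone_logZ(loglik_samples_per_beta, betas):
    """Stepping-stone estimate of a log marginal likelihood:
    log Z = sum_k log E_{beta_k}[ L^(beta_{k+1} - beta_k) ],
    estimated with a log-mean-exp for numerical stability."""
    logZ = 0.0
    for k in range(len(betas) - 1):
        d = betas[k + 1] - betas[k]
        ll = loglik_samples_per_beta[k]          # samples at rung beta_k
        logZ += d * ll.max() + np.log(np.mean(np.exp(d * (ll - ll.max()))))
    return logZ

# indirect log Bayes factor (the error-compounding route):
# log_BF = stepping_stone_logZ(samps_M1, betas) \
#          - stepping_stone_logZ(samps_M0, betas)
```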
NASA Technical Reports Server (NTRS)
Starks, Scott; Abdel-Hafeez, Saleh; Usevitch, Bryan
1997-01-01
This paper discusses the implementation of a fuzzy logic system using an ASIC design approach. The approach is based upon combining the inherent advantages of symmetric triangular membership functions and fuzzy singleton sets to obtain a novel structure for fuzzy logic system application development. The resulting structure utilizes a fuzzy static RAM to store the rule-base and the end-points of the triangular membership functions. This provides advantages over other approaches in which all sampled values of membership functions for all universes must be stored. The fuzzy coprocessor structure implements the fuzzification and defuzzification processes through a two-stage parallel pipeline architecture which is capable of executing complex fuzzy computations in less than 0.55 µs with an accuracy of more than 95%, thus making it suitable for a wide range of applications. Using the approach presented in this paper, a fuzzy logic rule-base can be directly downloaded via a host processor to an on-chip rule-base memory with a size of 64 words. The fuzzy coprocessor's design supports up to 49 rules for seven fuzzy membership functions associated with each of the chip's two input variables. This feature allows designers to create fuzzy logic systems without the need for additional on-board memory. Finally, the paper reports on simulation studies that were conducted for several adaptive filter applications using the least-mean-squares adaptive algorithm for adjusting the knowledge rule-base.
The influence of approach-avoidance motivational orientation on conflict adaptation.
Hengstler, Maikel; Holland, Rob W; van Steenbergen, Henk; van Knippenberg, Ad
2014-06-01
To deal effectively with a continuously changing environment, our cognitive system adaptively regulates resource allocation. Earlier findings showed that an avoidance orientation (induced by arm extension), relative to an approach orientation (induced by arm flexion), enhanced sustained cognitive control. In avoidance conditions, performance on a cognitive control task was enhanced, as indicated by a reduced congruency effect, relative to approach conditions. Extending these findings, in the present behavioral studies we investigated dynamic adaptations in cognitive control-that is, conflict adaptation. We proposed that an avoidance state recruits more resources in response to conflicting signals, and thereby increases conflict adaptation. Conversely, in an approach state, conflict processing diminishes, which consequently weakens conflict adaptation. As predicted, approach versus avoidance arm movements affected both behavioral congruency effects and conflict adaptation: As compared to approach, avoidance movements elicited reduced congruency effects and increased conflict adaptation. These results are discussed in line with a possible underlying neuropsychological model.
ERIC Educational Resources Information Center
Moran, Tracy E.; Larrieu, Julie A.; Zeanah, Paula; Evenson, Amber; Valliere, Jean
2013-01-01
Postpartum depression (PPD) affects a significant portion of women and has serious negative short- and long-term consequences for the woman, infant, and family. This article highlights the feasibility and acceptability of group interpersonal psychotherapy (IPT-G), a manualized approach to PPD treatment, with a high-risk and underserved sample of…
Wilson, Sylia; Schalet, Benjamin D.; Hicks, Brian M.; Zucker, Robert A.
2013-01-01
The present study used an empirical, “bottom-up” approach to delineate the structure of the California Child Q-Set (CCQ), a comprehensive set of personality descriptors, in a sample of 373 preschool-aged children. This approach yielded two broad trait dimensions, Adaptive Socialization (emotional stability, compliance, intelligence) and Anxious Inhibition (emotional/behavioral introversion). Results demonstrate the value of using empirical derivation to investigate the structure of personality in young children, speak to the importance of early-evident personality traits for adaptive development, and are consistent with a growing body of evidence indicating that personality structure in young children is similar, but not identical to, that in adults, suggesting a model of broad personality dimensions in childhood that evolve into narrower traits in adulthood. PMID:24223448
Quasi‐steady centrifuge method for unsaturated hydraulic properties
Caputo, Maria C.; Nimmo, John R.
2005-01-01
We have developed the quasi‐steady centrifuge (QSC) method as a variation of the steady state centrifuge method that can be implemented simply and inexpensively with greater versatility in terms of sample size and other features. It achieves these advantages by somewhat relaxing the criterion for steadiness of flow through the sample. This compromise entails an increase in measurement uncertainty but to a degree that is tolerable in most applications. We have tested this new approach with an easily constructed apparatus to establish a quasi‐steady flow of water in unsaturated porous rock samples spinning in a centrifuge, obtaining measurements of unsaturated hydraulic conductivity and water retention that agree with results of other methods. The QSC method is adaptable to essentially any centrifuge suitable for hydrogeologic applications, over a wide range of sizes and operating speeds. The simplified apparatus and greater adaptability of this method expands the potential for exploring situations that are common in nature but have been the subject of few laboratory investigations.
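For context, the relation typically exploited by steady-state and quasi-steady centrifuge methods is Darcy's law in the centrifugal field. The following is a sketch under the standard assumption that the matric-potential gradient across the sample is negligible compared to the centrifugal driving force:

```latex
% flux density q through the spinning sample; \omega = angular speed,
% r = radial distance from the rotation axis, g = gravitational acceleration
q = K(\theta)\,\frac{\omega^{2} r}{g}
\qquad\Longrightarrow\qquad
K(\theta) = \frac{q\,g}{\omega^{2} r}
```

Measuring the quasi-steady flux q at a known water content θ then yields the unsaturated hydraulic conductivity K(θ) directly.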
Plasma Exosome Profiling of Cancer Patients by a Next Generation Systems Biology Approach.
Domenyuk, Valeriy; Zhong, Zhenyu; Stark, Adam; Xiao, Nianqing; O'Neill, Heather A; Wei, Xixi; Wang, Jie; Tinder, Teresa T; Tonapi, Sonal; Duncan, Janet; Hornung, Tassilo; Hunter, Andrew; Miglarese, Mark R; Schorr, Joachim; Halbert, David D; Quackenbush, John; Poste, George; Berry, Donald A; Mayer, Günter; Famulok, Michael; Spetzler, David
2017-02-20
Technologies capable of characterizing the full breadth of cellular systems need to be able to measure millions of proteins, isoforms, and complexes simultaneously. We describe an approach that fulfils this criterion: Adaptive Dynamic Artificial Poly-ligand Targeting (ADAPT). ADAPT employs an enriched library of single-stranded oligodeoxynucleotides (ssODNs) to profile complex biological samples, thus achieving an unprecedented coverage of system-wide, native biomolecules. We used ADAPT as a highly specific profiling tool that distinguishes women with or without breast cancer based on circulating exosomes in their blood. To develop ADAPT, we enriched a library of ~10¹¹ ssODNs for those associating with exosomes from breast cancer patients or controls. The resulting 10⁶ enriched ssODNs were then profiled against plasma from independent groups of healthy and breast cancer-positive women. ssODN-mediated affinity purification and mass spectrometry identified low-abundance exosome-associated proteins and protein complexes, some with known significance in both normal homeostasis and disease. Sequencing of the recovered ssODNs provided quantitative measures that were used to build highly accurate multi-analyte signatures for patient classification. Probing plasma from 500 subjects with a smaller subset of 2000 resynthesized ssODNs stratified healthy, breast biopsy-negative, and -positive women. An AUC of 0.73 was obtained when comparing healthy donors with biopsy-positive patients.
da Silva, Wesley Pereira; de Oliveira, Luiz Henrique; Santos, André Luiz Dos; Ferreira, Valdir Souza; Trindade, Magno Aparecido Gonçalves
2018-06-01
A procedure based on liquid-liquid extraction (LLE) and phase separation using magnetically stirred salt-induced high-temperature liquid-liquid extraction (PS-MSSI-HT-LLE) was developed to extract and pre-concentrate ciprofloxacin (CIPRO) and enrofloxacin (ENRO) from animal food samples before electroanalysis. Firstly, simple LLE was used to extract the fluoroquinolones (FQs) from animal food samples, in which dilution was performed to reduce interference effects to below a tolerable threshold. Then, adapted PS-MSSI-HT-LLE protocols allowed re-extraction and further pre-concentration of target analytes in the diluted acid samples for simultaneous electrochemical quantification at low concentration levels. To improve peak separation in simultaneous detection, a baseline-corrected second-order derivative approach was applied. These approaches allowed quantification of target FQs from animal food samples spiked at levels of 0.80 to 2.00 µmol L⁻¹ in chicken meat, with recovery values always higher than 80.5%, as well as in milk samples spiked at 4.00 µmol L⁻¹, with recovery values close to 70.0%. Copyright © 2018 Elsevier Ltd. All rights reserved.
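The baseline-correction step can be pictured with a second-order derivative filter. The sketch below is a stand-in for the record's unspecified derivative routine: it applies a Savitzky-Golay second derivative to a synthetic two-peak voltammogram, which removes a linear baseline exactly (its second derivative is zero) and sharpens the overlapping peaks; all numbers are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

# synthetic voltammogram: two overlapping Gaussian peaks on a sloping baseline
E = np.linspace(0.8, 1.4, 600)                       # potential axis (illustrative)
i = (np.exp(-((E - 1.05) / 0.03) ** 2)               # peak 1 (e.g. CIPRO)
     + 0.7 * np.exp(-((E - 1.12) / 0.03) ** 2)       # peak 2 (e.g. ENRO)
     + 0.5 * E + 0.1)                                # linear baseline

# Savitzky-Golay second derivative: the baseline term vanishes and the peaks
# become sharp negative lobes that are easier to separate and quantify
d2 = savgol_filter(i, window_length=51, polyorder=3, deriv=2)
peaks, _ = find_peaks(-d2, prominence=1e-4)          # minima of d2 = peak positions
print(E[peaks])
```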
NASA Astrophysics Data System (ADS)
Wang, Zhihua; Yang, Xiaomei; Lu, Chen; Yang, Fengshuo
2018-07-01
Automatic updating of land use/cover change (LUCC) databases using high spatial resolution images (HSRI) is important for environmental monitoring and policy making, especially for coastal areas, which connect land and sea and tend to change frequently. Many object-based change detection methods have been proposed, especially those combining historical LUCC with HSRI. However, the scale parameter(s) used to segment the serial temporal images, which directly determines the average object size, is hard to choose without expert intervention. The samples transferred from historical LUCC also need expert intervention to avoid insufficient or wrong samples. With respect to choosing the scale parameter(s), a Scale Self-Adapting Segmentation (SSAS) approach, based on exponential sampling of the scale parameter and the location of the local maximum of a weighted local variance, was proposed to address the scale selection problem when segmenting images constrained by LUCC for detecting changes. With respect to transferring samples, Knowledge Transfer (KT), a classifier trained on historical images with LUCC and applied to the classification of updated images, was also proposed. Comparison experiments were conducted in a coastal area of Zhujiang, China, using SPOT 5 images acquired in 2005 and 2010. The results reveal that (1) SSAS can segment images effectively without expert intervention, and (2) KT can also reach the maximum accuracy of sample transfer without expert intervention. The strategy SSAS + KT would be a good choice if the temporal historical image and LUCC match, and the historical image and updated image are obtained from the same source.
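A rough sketch of the scale-selection idea: sample the scale parameter exponentially, segment at each scale, and track an area-weighted local variance, picking the scale at its maximum. The segmenter, image, and weighting below are stand-ins (skimage's Felzenszwalb segmentation on a bundled sample image), not the authors' SPOT 5 pipeline, and a global argmax stands in for the paper's local-maximum search.

```python
import numpy as np
from skimage import data, segmentation

img = data.astronaut()                       # stand-in for an HSRI tile
scales = np.geomspace(10, 5000, 15)          # exponential sampling of the scale parameter

def weighted_local_variance(image, labels):
    """Area-weighted mean of per-segment gray-level variance."""
    gray = image.mean(axis=2)
    total = labels.size
    return sum((labels == lab).sum() / total * gray[labels == lab].var()
               for lab in np.unique(labels))

wlv = [weighted_local_variance(img, segmentation.felzenszwalb(img, scale=s))
       for s in scales]

best_scale = scales[int(np.argmax(wlv))]     # scale at the variance maximum
print(best_scale)
```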
Adaptive skin segmentation via feature-based face detection
NASA Astrophysics Data System (ADS)
Taylor, Michael J.; Morris, Tim
2014-05-01
Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
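A condensed sketch of this pipeline, under stated assumptions: OpenCV's stock Haar cascade stands in for the paper's tuned Viola-Jones detector, the luma percentile band and the 0.5 probability threshold are illustrative values, and `people.jpg` is a hypothetical input.

```python
import cv2
import numpy as np

img = cv2.imread("people.jpg")                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, 1.1, minNeighbors=8)  # favor precision
assert len(faces) > 0, "need at least one detected face to build the model"

samples = []
for (x, y, w, h) in faces:
    patch = img[y:y + h, x:x + w].reshape(-1, 3).astype(float)
    luma = patch @ np.array([0.114, 0.587, 0.299])   # BGR luma weights
    lo, hi = np.percentile(luma, [20, 90])           # drop dark features and glints
    samples.append(patch[(luma > lo) & (luma < hi)])
pix = np.vstack(samples)

# unimodal Gaussian skin model in normalised rg chromaticity space
rg = pix[:, [2, 1]] / np.clip(pix.sum(1, keepdims=True), 1e-6, None)
mu, icov = rg.mean(0), np.linalg.inv(np.cov(rg.T))

flat = img.reshape(-1, 3).astype(float)
all_rg = flat[:, [2, 1]] / np.clip(flat.sum(1, keepdims=True), 1e-6, None)
d = all_rg - mu
prob = np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, icov, d))
skin_mask = (prob > 0.5).reshape(img.shape[:2])      # binary threshold on probability
```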
Shaping field for deep tissue microscopy
NASA Astrophysics Data System (ADS)
Colon, J.; Lim, H.
2015-05-01
Information capacity of a lossless image-forming system is a conserved property determined by two imaging parameters - the resolution and the field of view (FOV). Adaptive optics improves the former by manipulating the phase, or wavefront, in the pupil plane. Here we describe a homologous approach, namely adaptive field microscopy, which aims to enhance the FOV by controlling the phase, or defocus, in the focal plane. In deep tissue imaging, the useful FOV can be severely limited if the region of interest is buried in a thick sample and not perpendicular to the optic axis. One must acquire many z-scans and reconstruct by post-processing, which exposes tissue to excessive radiation and is also time consuming. We demonstrate that the effective FOV can be substantially enhanced by dynamic control of the image plane. Specifically, the tilt of the image plane is continuously adjusted in situ to match the oblique orientation of the sample plane within tissue. The utility of adaptive field microscopy is tested for imaging tissue with non-planar morphology. Ocular tissue of small animals was imaged by two-photon excited fluorescence. Our results show that adaptive field microscopy can utilize the full FOV. The freedom to adjust the image plane to account for geometrical variations of the sample could be extremely useful for 3D biological imaging. Furthermore, it could facilitate rapid surveillance of cellular features within deep tissue while avoiding photodamage, making it suitable for in vivo imaging.
Context-aware adaptive spelling in motor imagery BCI.
Perdikis, S; Leeb, R; Millán, J D R
2016-06-01
This work presents the first motor imagery-based, adaptive brain-computer interface (BCI) speller, which is able to exploit application-derived context for improved, simultaneous classifier adaptation and spelling. Online spelling experiments with ten able-bodied users evaluate the ability of our scheme, first, to alleviate non-stationarity of brain signals for restoring the subject's performance, second, to guide naive users into BCI control avoiding initial offline BCI calibration and, third, to outperform regular unsupervised adaptation. Our co-adaptive framework combines the BrainTree speller with smooth-batch linear discriminant analysis adaptation. The latter enjoys contextual assistance through BrainTree's language model to improve online expectation-maximization maximum-likelihood estimation. Our results verify the possibility to restore single-sample classification and BCI command accuracy, as well as spelling speed for expert users. Most importantly, context-aware adaptation performs significantly better than its unsupervised equivalent and similarly to the supervised one. Although no significant differences are found with respect to the state-of-the-art PMean approach, the proposed algorithm is shown to be advantageous for 30% of the users. We demonstrate the possibility to circumvent supervised BCI recalibration, saving time without compromising the adaptation quality. On the other hand, we show that this type of classifier adaptation is not as efficient for BCI training purposes.
NASA Astrophysics Data System (ADS)
Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques
2015-12-01
In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to those coming from the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.
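The recycling at the heart of AMIS can be shown in a few lines. This is a hedged toy: a 2-D Gaussian stands in for the true unnormalised posterior over source location (which in the paper comes from a dispersion model and sensor data), and a Gaussian proposal is re-fitted each iteration using deterministic-mixture weights over all samples drawn so far.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def log_target(x):
    # stand-in unnormalised posterior over the 2-D source location
    return stats.multivariate_normal.logpdf(x, [2.0, -1.0], [[1.0, 0.6], [0.6, 2.0]])

mu, cov = np.zeros(2), 4.0 * np.eye(2)       # initial (deliberately broad) proposal
X = np.empty((0, 2))
past = []                                    # all proposal parameters used so far
for _ in range(10):
    past.append((mu.copy(), cov.copy()))
    X = np.vstack([X, rng.multivariate_normal(mu, cov, 500)])
    # deterministic-mixture weights: every sample is weighted against the
    # average of ALL proposals -- this is the AMIS recycling step
    qmix = np.mean([stats.multivariate_normal.pdf(X, m, c) for m, c in past], axis=0)
    w = np.exp(log_target(X)) / qmix
    w /= w.sum()
    mu = w @ X                               # adapt the proposal to the weighted sample
    cov = (X - mu).T @ ((X - mu) * w[:, None]) + 1e-6 * np.eye(2)

print("posterior-mean estimate of the source location:", mu)
```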
Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data
Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Sung, Yunsick
2017-01-01
Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced samples in class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets, where the positive samples make up only a minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and bat algorithm, and apply them to empower the effects of synthetic minority over-sampling technique (SMOTE) for pre-processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former approach do not scale to larger datasets, on which it fails, whereas the latter methods, which we call Adaptive Swarm Balancing Algorithms, lead to significant efficiency and effectiveness improvements. We also find the latter more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods lead to more credible classifier performance and shorten the run time compared to the brute-force method. PMID:28753613
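The segment-wise idea is easy to sketch. The following is a simplified stand-in: it fixes SMOTE's two key parameters instead of tuning them with particle swarm or bat algorithms as in the paper, and uses a synthetic imbalanced dataset.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# synthetic binary dataset with a ~3% minority class
X, y = make_classification(n_samples=20000, weights=[0.97], random_state=0)

# rebalance one segment at a time instead of the full dataset at once
parts = []
for Xs, ys in zip(np.array_split(X, 10), np.array_split(y, 10)):
    if ys.sum() > 5:  # SMOTE needs more minority points than k_neighbors
        Xs, ys = SMOTE(k_neighbors=5, random_state=0).fit_resample(Xs, ys)
    parts.append((Xs, ys))

Xb = np.vstack([p[0] for p in parts])
yb = np.concatenate([p[1] for p in parts])
print(f"minority share: {y.mean():.3f} -> {yb.mean():.3f}")
```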
RNAblueprint: flexible multiple target nucleic acid sequence design.
Hammer, Stefan; Tschiatschek, Birgit; Flamm, Christoph; Hofacker, Ivo L; Findeiß, Sven
2017-09-15
Realizing the value of synthetic biology in biotechnology and medicine requires the design of molecules with specialized functions. Due to its close structure-to-function relationship, and the availability of good structure prediction methods and energy models, RNA is perfectly suited to be synthetically engineered with predefined properties. However, currently available RNA design tools cannot be easily adapted to accommodate new design specifications. Furthermore, complicated sampling and optimization methods are often developed to suit a specific RNA design goal, adding to their inflexibility. We developed a C++ library implementing a graph coloring approach to stochastically sample sequences compatible with structural and sequence constraints from the typically very large solution space. The approach allows the solution space to be specified and explored in a well-defined way. Our library also guarantees uniform sampling, which makes optimization runs performant by not only avoiding re-evaluation of already found solutions, but also by raising the probability of finding better solutions in long optimization runs. We show that our software can be combined with any other software package to allow diverse RNA design applications. Scripting interfaces allow the easy adaptation of existing code to accommodate new scenarios, making the whole design process very flexible. We implemented example design approaches written in Python to demonstrate these advantages. RNAblueprint, Python implementations and benchmark datasets are available on GitHub: https://github.com/ViennaRNA. s.hammer@univie.ac.at, ivo@tbi.univie.ac.at or sven@tbi.univie.ac.at. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
The search for loci under selection: trends, biases and progress.
Ahrens, Collin W; Rymer, Paul D; Stow, Adam; Bragg, Jason; Dillon, Shannon; Umbers, Kate D L; Dudaniec, Rachael Y
2018-03-01
Detecting genetic variants under selection using FST outlier analysis (OA) and environmental association analyses (EAAs) are popular approaches that provide insight into the genetic basis of local adaptation. Despite the frequent use of OA and EAA approaches and their increasing attractiveness for detecting signatures of selection, their application to field-based empirical data has not been synthesized. Here, we review 66 empirical studies that use Single Nucleotide Polymorphisms (SNPs) in OA and EAA. We report trends and biases across biological systems, sequencing methods, approaches, parameters, environmental variables and their influence on detecting signatures of selection. We found striking variability in both the use and reporting of environmental data and statistical parameters. For example, linkage disequilibrium among SNPs and numbers of unique SNP associations identified with EAA were rarely reported. The proportion of putatively adaptive SNPs detected varied widely among studies, and decreased with the number of SNPs analysed. We found that genomic sampling effort had a greater impact than biological sampling effort on the proportion of identified SNPs under selection. OA identified a higher proportion of outliers when more individuals were sampled, but this was not the case for EAA. To facilitate repeatability, interpretation and synthesis of studies detecting selection, we recommend that future studies consistently report geographical coordinates, environmental data, model parameters, linkage disequilibrium, and measures of genetic structure. Identifying standards for how OA and EAA studies are designed and reported will aid future transparency and comparability of SNP-based selection studies and help to progress landscape and evolutionary genomics. © 2018 John Wiley & Sons Ltd.
Adapting populations in space: clonal interference and genetic diversity
NASA Astrophysics Data System (ADS)
Weissman, Daniel; Barton, Nick
Most species inhabit ranges much larger than the scales over which individuals interact. How does this spatial structure interact with adaptive evolution? We consider a simple model of a spatially-extended, adapting population and show that, while clonal interference severely limits the adaptation of purely asexual populations, even rare recombination is enough to allow adaptation at rates approaching those of well-mixed populations. We also find that the genetic hitchhiking produced by the adaptive alleles sweeping through the population has strange effects on the patterns of genetic diversity. In large spatial ranges, even low rates of adaptation cause all individuals in the population to rapidly trace their ancestry back to individuals living in a small region in the center of the range. The probability of fixation of an allele is thus strongly dependent on the allele's spatial location, with alleles from the center favored. Surprisingly, these effects are seen genome-wide (instead of being localized to the regions of the genome undergoing the sweeps). The spatial concentration of ancestry produces a power-law dependence of relatedness on distance, so that even individuals sampled far apart are likely to be fairly closely related, masking the underlying spatial structure.
NASA Astrophysics Data System (ADS)
Gong, W.; Meyer, F. J.
2013-12-01
It is well known that spatio-temporal tropospheric phase signatures complicate the interpretation and detection of small-magnitude deformation signals and unstudied motion fields. Several advanced time-series InSAR techniques were developed in the last decade that make assumptions about the stochastic properties of the signal components in interferometric phases to reduce atmospheric delay effects on surface deformation estimates. However, their need for large datasets to successfully separate the different phase contributions limits their performance if data is scarce and irregularly sampled. Limited SAR data coverage is the reality for many areas affected by geophysical deformation, whether due to their low priority in mission programming, unfavorable ground coverage conditions, or turbulent seasonal weather effects. In this paper, we present new adaptive atmospheric phase filtering algorithms that are specifically designed to reconstruct surface deformation signals from atmosphere-affected and irregularly sampled InSAR time series. The filters take advantage of auxiliary atmospheric delay information that is extracted from various sources, e.g. atmospheric weather models. They are embedded into a model-free Persistent Scatterer Interferometry (PSI) approach that was selected to accommodate non-linear deformation patterns that are often observed near volcanoes and earthquake zones. Two types of adaptive phase filters were developed that operate in the time dimension and separate atmosphere from deformation based on their different temporal correlation properties. Both filter types use the fact that atmospheric models can reliably predict the spatial statistics and signal power of atmospheric phase delay fields in order to automatically optimize the filter's shape parameters. In essence, both filter types attempt to maximize the linear correlation between a-priori and extracted atmospheric phase information. Topography-related phase components, orbit errors and the master atmospheric delays are first removed in a pre-processing step before the atmospheric filters are applied. The first adaptive filter type uses a filter kernel of Gaussian shape and adaptively adjusts the width (defined in days) of this filter until the correlation of extracted and modeled atmospheric signal power is maximized. If atmospheric properties vary along the time series, this approach leads to filter settings that are adapted to best reproduce atmospheric conditions at each observation epoch. Despite the good performance of this first filter design, its Gaussian shape imposes non-physical relative weights on acquisitions that ignore the known atmospheric noise in the data. Hence, in our second approach we use atmospheric a-priori information to adaptively define the full shape of the atmospheric filter. For this process, we use a so-called normalized convolution (NC) approach that is often used in image reconstruction. Several NC designs are presented in this paper and studied for relative performance. A cross-validation of all developed algorithms was done using both synthetic and real data. This validation showed that the designed filters outperform conventional filtering methods and are particularly useful for regions with limited data coverage or lacking a prior deformation model.
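The first filter type can be caricatured in a few lines: sweep the Gaussian kernel width, extract the atmospheric estimate as the high-pass residual, and keep the width whose estimate agrees best with the modeled atmosphere. Everything below is synthetic (a linear deformation ramp, white atmospheric noise, an imperfect model), standing in for real PSI time series and weather-model output.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(2)
t = np.arange(60)                                    # acquisition epochs
defo = 0.02 * t                                      # slow deformation (synthetic)
atmo = rng.normal(0.0, 1.0, t.size)                  # temporally uncorrelated atmosphere
phase = defo + atmo                                  # pre-processed phase time series
atmo_model = atmo + 0.3 * rng.normal(0.0, 1.0, t.size)  # imperfect a-priori model

best_width, best_r = None, -np.inf
for width in np.arange(1.0, 15.0, 0.5):              # kernel width "in days"
    est_atmo = phase - gaussian_filter1d(phase, width)   # high-pass residual
    r = np.corrcoef(est_atmo, atmo_model)[0, 1]
    if r > best_r:
        best_width, best_r = width, r                # width best matching the model

defo_est = gaussian_filter1d(phase, best_width)      # filtered deformation estimate
```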
Climate Change: From Science to Practice.
Wheeler, Nicola; Watts, Nick
2018-03-01
Climate change poses a significant threat to human health. Understanding how climate science can be translated into public health practice is an essential first step in enabling robust adaptation and improving resiliency to climate change. Recent research highlights the importance of iterative approaches to public health adaptation to climate change, enabling uncertainties of health impacts and barriers to adaptation to be accounted for. There are still significant barriers to adaptation, which are context-specific and thus present unique challenges to public health practice. The implementation of flexible adaptation approaches, using frameworks targeted for public health, is key to ensuring robust adaptation to climate change in public health practice. The BRACE framework provides an excellent approach for health adaptation to climate change. Combining this with the insights provided by the adaptation pathways approach allows for more deliberate accounting of long-term uncertainties. The mainstreaming of climate change adaptation into public health practice and planning is important in facilitating this approach and overcoming the significant barriers to effective adaptation. Yet, the immediate and future limits to adaptation provide clear justification for urgent and accelerated efforts to mitigate climate change.
Amini, Parisa; Ettlin, Julia; Opitz, Lennart; Clementi, Elena; Malbon, Alexandra; Markkanen, Enni
2017-08-23
Formalin-fixed paraffin embedded (FFPE) tissue constitutes a vast treasury of samples for biomedical research. Thus far, however, extraction of RNA from FFPE tissue has proved challenging due to chemical RNA-protein crosslinking and RNA fragmentation, both of which heavily impact RNA quantity and quality for downstream analysis. With very small sample sizes, e.g. when performing laser-capture microdissection (LCM) to isolate specific subpopulations of cells, recovery of sufficient RNA for analysis with reverse-transcription quantitative PCR (RT-qPCR) or next-generation sequencing (NGS) becomes very cumbersome and difficult. We excised matched cancer-associated stroma (CAS) and normal stroma from clinical specimens of FFPE canine mammary tumours using LCM, and compared the commonly used protease-based RNA isolation procedure with an adapted novel technique that additionally incorporates a focused ultrasonication step. We successfully adapted a protocol that uses focused ultrasonication to isolate RNA from small amounts of deparaffinised, stained, clinical LCM samples. Using this approach, we found that total RNA yields could be increased by 8- to 12-fold compared to a commonly used protease-based extraction technique. Surprisingly, RNA extracted using this new approach was qualitatively at least equal if not superior to that from the old approach, as Cq values in RT-qPCR were on average 2.3-fold lower using the new method. Finally, we demonstrate that RNA extracted using the new method performs comparably in NGS as well. We present a successful isolation protocol for extraction of RNA from difficult and limiting FFPE tissue samples that enables successful analysis of small sections of clinically relevant specimens. The possibility to study gene expression signatures in specific small sections of archival FFPE tissue, which often entails large amounts of highly relevant clinical follow-up data, unlocks a new dimension of hitherto difficult-to-analyse samples which now become amenable for investigation.
Biochemical and Anatomical Characteristics of Dolphin Muscles.
1984-01-01
the Bioengineering Branch (Code 5143) of the Naval Ocean Systems Center and the Kinesiology Department of the University of California, Los Angeles...such a sample. TENDON ANALYSES The biochemistry of the dolphin tendon suggests that this tissue is well adapted to withstand large forces and significant...neuromuscular physiology, connective tissue, and muscle biochemistry. A detailed proposal outlining the goals, approach, milestones, and costs for
ERIC Educational Resources Information Center
Amedu, Odagboyi Isaiah; Gudi, Kreni Comfort
2017-01-01
This study is aimed at investigating the attitude of students toward the cooperative learning approach. A quasiexperimental design was used for the study. The sample was made of 179 SS 1 students drawn from three public secondary schools in Nasarawa state. The Jigsaw Attitude Questionnaire (JAQ) was adapted from Koprowski and Perigo (2000) and was…
Hinsu, Ankit T; Parmar, Nidhi R; Nathani, Neelam M; Pandit, Ramesh J; Patel, Anand B; Patel, Amrutlal K; Joshi, Chaitanya G
2017-04-01
Recent advances in next generation sequencing technology have enabled analysis of complex microbial communities from the genome to the transcriptome level. In the present study, a metatranscriptomic approach was applied to elucidate the functionally active bacteria and their biological processes in the rumen of buffalo (Bubalus bubalis) adapted to different dietary treatments. Buffaloes were adapted to diets containing 50:50, 75:25 and 100:0 forage to concentrate ratios, each for 6 weeks, before ruminal content sample collection. Metatranscriptomes from rumen fiber-adherent and fiber-free active bacteria were sequenced using the Ion Torrent PGM platform followed by annotation using the MG-RAST server and the CAZymes (Carbohydrate-Active enZymes) analysis toolkit. In all the samples, Bacteroidetes was the most abundant phylum followed by Firmicutes. Functional analysis using the KEGG Orthology database revealed Metabolism as the most abundant category at level 1, within which Carbohydrate metabolism was dominant. Diet treatments also exerted significant differences in the proportions of enzymes involved in metabolic pathways for VFA production. Carbohydrate-Active Enzyme (CAZy) analysis revealed an abundance of genes encoding glycoside hydrolases, with the highest representation of the GH13 CAZy family in all the samples. The findings provide an overview of the activities occurring in the rumen as well as the active bacterial population and the changes occurring through different dietary treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroeer, Alexander; Veitch, John
The Laser Interferometer Space Antenna (LISA) defines new demands on data analysis efforts in its all-sky gravitational wave survey, recording simultaneously thousands of galactic compact object binary foreground sources and tens to hundreds of background sources like binary black hole mergers and extreme-mass ratio inspirals. We approach this problem with an adaptive and fully automatic Reversible Jump Markov Chain Monte Carlo sampler, able to sample from the joint posterior density function (as established by Bayes theorem) for a given mixture of signals 'out of the box', handling the total number of signals as an additional unknown parameter beside the unknown parameters of each individual source and the noise floor. We show in examples from the LISA Mock Data Challenge implementing the full response of LISA in its TDI description that this sampler is able to extract monochromatic Double White Dwarf signals out of colored instrumental noise and additional foreground and background noise successfully in a global fitting approach. We introduce two examples with a fixed number of signals (MCMC sampling) and one example with an unknown number of signals (RJ-MCMC), the latter further promoting the idea behind an experimental adaptation of the model indicator proposal densities in the main sampling stage. We note that the experienced runtimes and degeneracies in parameter extraction limit the shown examples to the extraction of a low but realistic number of signals.
Robust online tracking via adaptive samples selection with saliency detection
NASA Astrophysics Data System (ADS)
Yan, Jia; Chen, Xi; Zhu, QiuPing
2013-12-01
Online tracking has been shown to be successful in tracking previously unknown objects. However, there are two important factors which lead to the drift problem of online tracking: one is how to select correctly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors which have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To deal with the problem of degrading the classifiers using misaligned samples, we introduce the saliency detection method to our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as the negative samples, we propose a reasonable selection criterion, in which both the saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
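The spectral-residual saliency step mentioned above is a published, compact algorithm (Hou and Zhang's spectral residual); a minimal sketch of it follows. How the resulting map is fused with the boosting classifier's confidence is the paper's contribution and is not reproduced here.

```python
import numpy as np
import cv2

def spectral_residual_saliency(gray):
    """Saliency map from the spectral residual of the log amplitude spectrum."""
    f = np.fft.fft2(gray.astype(float))
    log_amp, phase = np.log1p(np.abs(f)), np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (3, 3))   # remove the smooth spectrum part
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)         # smooth the saliency map
    return sal / sal.max()                           # normalised to [0, 1]
```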
Personality Subtypes of Suicidal Adults
Westen, Drew; Bradley, Rebekah
2009-01-01
Research into personality factors related to suicidality suggests substantial variability among suicide attempters. A potentially useful approach that accounts for this complexity is personality subtyping. As part of a large study of personality pathology, this study used Q-factor analysis to identify subtypes of 311 adult suicide attempters using SWAP-II personality profiles. Identified subtypes included Internalizing, Emotionally Dysregulated, Dependent, Hostile-Isolated, Psychopathic, and Anxious-Somatizing. Subtypes differed in hypothesized ways on criterion variables that address their construct validity, including adaptive functioning, Axis I and II comorbidity, and etiology-related variables (e.g., history of abuse). Furthermore, dimensional ratings of the subtypes predicted adaptive functioning above DSM-based diagnoses and symptoms. PMID:19752649
Sampling-free Bayesian inversion with adaptive hierarchical tensor representations
NASA Astrophysics Data System (ADS)
Eigel, Martin; Marschall, Manuel; Schneider, Reinhold
2018-03-01
A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.
Tehrani, Kayvan F.; Zhang, Yiwen; Shen, Ping; Kner, Peter
2017-01-01
Stochastic optical reconstruction microscopy (STORM) can achieve resolutions better than 20 nm when imaging single fluorescently labeled cells. However, when optical aberrations induced by larger biological samples degrade the point spread function (PSF), the localization accuracy and number of localizations are both reduced, destroying the resolution of STORM. Adaptive optics (AO) can be used to correct the wavefront, restoring the high resolution of STORM. A challenge for AO-STORM microscopy is the development of robust optimization algorithms which can efficiently correct the wavefront from stochastic raw STORM images. Here we present the implementation of a particle swarm optimization (PSO) approach with a Fourier metric for real-time correction of wavefront aberrations during STORM acquisition. We apply our approach to imaging boutons 100 μm deep inside the central nervous system (CNS) of Drosophila melanogaster larvae, achieving a resolution of 146 nm. PMID:29188105
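A bare-bones PSO loop of the kind used for such wavefront searches is sketched below. The metric is a toy quadratic surrogate: in the actual system the fitness of a candidate mirror setting would be a Fourier-domain sharpness score of freshly acquired frames, and `TRUE_ABERRATION` exists only to make the toy runnable.

```python
import numpy as np

rng = np.random.default_rng(3)
TRUE_ABERRATION = np.array([0.5, -0.3, 0.2, 0.1])    # hypothetical Zernike modes

def fitness(coeffs):
    # toy surrogate for a Fourier image-quality metric evaluated on live frames
    return -np.sum((coeffs - TRUE_ABERRATION) ** 2)

n, dim = 20, 4
x = rng.uniform(-1.0, 1.0, (n, dim))                 # particle positions
v = np.zeros((n, dim))                               # particle velocities
pbest, pval = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pval.argmax()]

for _ in range(100):
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    val = np.array([fitness(p) for p in x])
    improved = val > pval                            # update personal bests
    pbest[improved], pval[improved] = x[improved], val[improved]
    gbest = pbest[pval.argmax()]                     # update global best

print("recovered aberration estimate:", gbest)
```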
Campbell, Aimee N. C.; Turrigiano, Eva; Moore, Michelle; Miele, Gloria M.; Rieckmann, Traci; Hu, Mei-Chen; Kropp, Frankie; Ringor-Carty, Roz; Nunes, Edward V.
2014-01-01
Longstanding disparities in substance use disorders and treatment access exist among American Indian/Alaska Natives (AI/AN). Computerized, web-delivered interventions have potential to increase access to quality treatment and improve patient outcomes. Prior research supports the efficacy of a web-based version (Therapeutic Education System [TES]) of the Community Reinforcement Approach to improve outcomes among outpatients in substance abuse treatment; however, TES has not been tested among AI/AN. The results from this mixed method acceptability study among a diverse sample of urban AI/AN (N=40) show that TES was acceptable across seven indices (range=7.8 to 9.4 on 0 to 10 scales with 10 indicating highest acceptability). Qualitative interviews suggest adaptation specific to AI/AN culture could improve adoption. Additional efforts to adapt TES and conduct a larger effectiveness study are warranted. PMID:25022913
Discriminative clustering on manifold for adaptive transductive classification.
Zhang, Zhao; Jia, Lei; Zhang, Min; Li, Bing; Zhang, Li; Li, Fanzhang
2017-10-01
In this paper, we propose a novel adaptive transductive label propagation approach that performs joint discriminative clustering on manifolds for representing and classifying high-dimensional data. Our framework seamlessly combines unsupervised manifold learning, discriminative clustering and adaptive classification into a unified model. Our method also incorporates adaptive graph weight construction with label propagation. Specifically, our method is capable of propagating label information using adaptive weights over low-dimensional manifold features, which is different from most existing studies that usually predict the labels and construct the weights in the original Euclidean space. For transductive classification by our formulation, we first perform joint discriminative K-means clustering and manifold learning to capture the low-dimensional nonlinear manifolds. Then, we construct adaptive weights over the learnt manifold features, where the adaptive weights are calculated by jointly minimizing the reconstruction errors over features and soft labels so that the graph weights can be jointly optimal for data representation and classification. Using the adaptive weights, we can easily estimate the unknown labels of samples. After that, our method returns the updated weights for further updating the manifold features. Extensive simulations on image classification and segmentation show that our proposed algorithm can deliver state-of-the-art performance on several public datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ghiglietti, Andrea; Scarale, Maria Giovanna; Miceli, Rosalba; Ieva, Francesca; Mariani, Luigi; Gavazzi, Cecilia; Paganoni, Anna Maria; Edefonti, Valeria
2018-03-22
Recently, response-adaptive designs have been proposed in randomized clinical trials to achieve ethical and/or cost advantages by using sequential accrual information collected during the trial to dynamically update the probabilities of treatment assignments. In this context, urn models, where the probability of assigning patients to treatments is interpreted as the proportion of balls of different colors available in a virtual urn, have been used as response-adaptive randomization rules. We propose the use of Randomly Reinforced Urn (RRU) models in a simulation study based on a published randomized clinical trial on the efficacy of home enteral nutrition in cancer patients after major gastrointestinal surgery. We compare results with the RRU design to those previously published with the non-adaptive approach. We also provide code written in R to implement the RRU design in practice. In detail, we simulate 10,000 trials based on the RRU model in three set-ups of different total sample sizes. We report information on the number of patients allocated to the inferior treatment and on the empirical power of the t-test for the treatment coefficient in the ANOVA model. We carry out a sensitivity analysis to assess the effect of different urn compositions. For each sample size, in approximately 75% of the simulation runs, the number of patients allocated to the inferior treatment by the RRU design is lower, as compared to the non-adaptive design. The empirical power of the t-test for the treatment effect is similar in the two designs.
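The paper supplies R code; a hedged Python analogue of the core RRU mechanism is sketched below, with made-up response distributions. The defining property is that the urn is reinforced by a non-negative function of the observed response of the drawn arm, so allocation drifts toward the better treatment.

```python
import numpy as np

rng = np.random.default_rng(4)

def rru_trial(n, mu_a=1.0, mu_b=0.7, urn=(1.0, 1.0)):
    """Randomly Reinforced Urn allocation for a two-arm trial (toy responses)."""
    urn = np.array(urn, dtype=float)
    assignments = np.empty(n, dtype=bool)
    for i in range(n):
        arm_a = rng.random() < urn[0] / urn.sum()     # draw a ball from the urn
        y = rng.normal(mu_a if arm_a else mu_b, 1.0)  # observe the response
        urn[0 if arm_a else 1] += max(y, 0.0)         # non-negative reinforcement
        assignments[i] = arm_a
    return assignments

arm = rru_trial(200)
print(f"share allocated to the superior arm: {arm.mean():.2f}")
```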
Kumbhare, Shreyas V; Dhotre, Dhiraj P; Dhar, Sunil Kumar; Jani, Kunal; Apte, Deepak A; Shouche, Yogesh S; Sharma, Avinash
2015-01-01
Marine microbes play a key role and contribute largely to the global biogeochemical cycles. This study aims to explore microbial diversity from one such ecological hotspot, the continental shelf of Agatti Island. Sediment samples from various depths of the continental shelf were analyzed for bacterial diversity using deep sequencing technology along with the culturable approach. Additionally, imputed metagenomic approach was carried out to understand the functional aspects of microbial community especially for microbial genes important in nutrient uptake, survival and biogeochemical cycling in the marine environment. Using culturable approach, 28 bacterial strains representing 9 genera were isolated from various depths of continental shelf. The microbial community structure throughout the samples was dominated by phylum Proteobacteria and harbored various bacterioplanktons as well. Significant differences were observed in bacterial diversity within a short region of the continental shelf (1-40 meters) i.e. between upper continental shelf samples (UCS) with lesser depths (i.e. 1-20 meters) and lower continental shelf samples (LCS) with greater depths (i.e. 25-40 meters). By using imputed metagenomic approach, this study also discusses several adaptive mechanisms which enable microbes to survive in nutritionally deprived conditions, and also help to understand the influence of nutrition availability on bacterial diversity.
NASA Astrophysics Data System (ADS)
Ren, Juan
Nanoscale morphological characterization and mechanical property quantification of soft and biological materials play an important role in areas ranging from nano-composite material synthesis and characterization, cellular mechanics, to drug design. Frontier studies in these areas demand coordination between nanoscale morphological evolution and mechanical behavior variations through simultaneous measurement of these two aspects of properties. The atomic force microscope (AFM) is very promising for achieving such simultaneous measurements at high speed and broadband owing to its unique capability of applying force stimuli and then measuring the response at specific locations in a physiologically friendly environment with pico-newton force and nanometer spatial resolution. Challenges, however, arise as current AFM systems are unable to account for the complex and coupled dynamics of the measurement system and probe-sample interaction during high-speed imaging and broadband measurements. In this dissertation, the creation of a set of dynamics and control tools for probe-based high-speed imaging and rapid broadband nanomechanical spectroscopy of soft and biological materials is presented. Firstly, advanced control-based approaches are presented to improve the performance of AFM imaging both in air and in liquid. An adaptive contact mode (ACM) imaging scheme is proposed to replace traditional contact mode (CM) imaging by addressing the major concerns in both the speed and the force exerted on the sample. In this work, the image distortion caused by the topography tracking error is accounted for in the topography quantification, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining a stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line tracking is implemented to enhance the sample topography tracking. An adaptive multi-loop mode (AMLM) imaging approach is proposed to substantially increase the imaging speed of tapping mode (TM) while preserving the advantages of TM over CM, by integrating an inner-outer feedback control loop to regulate the TM deflection on top of the conventional TM amplitude feedback control to improve the sample topography tracking. Experiments demonstrated that the proposed ACM and AMLM are capable of increasing the imaging speed by at least 20 times for conventional contact and tapping mode imaging, respectively, with no loss of imaging quality and with well-controlled tip-sample interaction force. In addition, an adaptive mode imaging scheme for in-liquid topography quantification on live cells is presented. The experimental results demonstrated that, instead of keeping a constant scanning speed, the proposed speed optimization scheme is able to increase the imaging speed on live human prostate cancer cells by at least eight-fold with no loss of imaging quality. Secondly, control-based approaches to accurate nanomechanical quantification of soft materials for both broadband and in-liquid force-curve measurements are proposed to address the adverse effects caused by the system coupling dynamics and the cantilever acceleration, which were not compensated for by the conventional AFM measurement approach.
The proposed nanomechanical measurement approaches are demonstrated through experiments measuring the viscoelastic properties of different polymer samples in air and of live human cells in liquid, to study the variation of the rate-dependent elastic modulus of cervical cancer cells during the epithelial-mesenchymal transition process.
Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme
2013-07-01
The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most of the current software uses similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to control explicitly the nucleotide distribution, such as the GC-content, in their sequences. However, the latter is an important criterion for large-scale applications as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. running time comparable to or better than local search methods), seedless (we remove the bias of the seed in local search heuristics) and successfully generates high-quality sequences (i.e. thermodynamically stable) for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our glocal methodology outperforms both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
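The GC-content control can be illustrated with a deliberately stripped-down weighted-sampling loop: structure constraints and the global sampling machinery are omitted, so this is not IncaRNAtion's algorithm, only the reweighting intuition, with made-up update constants.

```python
import numpy as np

rng = np.random.default_rng(6)

def gc_weighted_sample(n, gc_target, iters=40):
    """Tune a multiplicative weight on G/C until sampled GC-content hits the target."""
    w = 1.0
    for _ in range(iters):
        p = np.array([w, w, 1.0, 1.0]); p /= p.sum()         # order: G, C, A, U
        gc = (rng.choice(4, size=(500, n), p=p) < 2).mean()  # GC fraction of a batch
        w *= np.exp(gc_target - gc)                          # nudge weight toward target
    p = np.array([w, w, 1.0, 1.0]); p /= p.sum()
    return "".join("GCAU"[i] for i in rng.choice(4, size=n, p=p))

print(gc_weighted_sample(60, 0.40))
```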
ECCM Scheme against Interrupted Sampling Repeater Jammer Based on Parameter-Adjusted Waveform Design
Wei, Zhenhua; Peng, Bo; Shen, Rui
2018-01-01
Interrupted sampling repeater jamming (ISRJ) is an effective way of deceiving coherent radar sensors, especially linear frequency modulated (LFM) radar. In this paper, for a simplified scenario with a single jammer, we propose a dynamic electronic counter-countermeasure (ECCM) scheme based on jammer parameter estimation and transmitted signal design. First, the LFM waveform is transmitted to estimate the main jamming parameters by investigating the discontinuities of the ISRJ's time-frequency (TF) characteristics. Then, a parameter-adjusted intra-pulse frequency-coded signal, whose ISRJ signal after matched filtering forms only a single false target, is designed adaptively according to the estimated parameters, i.e., sampling interval, sampling duration and repeater times. Finally, for typical jamming scenes with different jamming-to-signal ratios (JSR) and duty cycles, we propose two particular ISRJ suppression approaches. Simulation results validate the effective performance of the proposed scheme for countering the ISRJ, and the trade-off relationship between the two approaches is demonstrated. PMID:29642508
Eigenvector method for umbrella sampling enables error analysis
Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.
2016-01-01
Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage of this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912
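A minimal numerical sketch of the eigenproblem view, assuming the windowed simulation data have already been reduced to a row-stochastic overlap matrix F (here a toy matrix; the paper defines the actual estimator): the window weights are the left eigenvector of F with eigenvalue one. The function name is illustrative.

```python
import numpy as np

def window_weights(F):
    """Solve z = z F for the window weights z (the eigenproblem step).

    F is a row-stochastic overlap matrix whose entry F[i, j] estimates,
    from the samples of window i, the relative weight of window j.
    """
    vals, vecs = np.linalg.eig(F.T)
    k = np.argmin(np.abs(vals - 1.0))       # eigenvalue closest to 1
    z = np.real(vecs[:, k])
    return z / z.sum()

# Toy 3-window overlap matrix (rows sum to 1).
F = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
print(window_weights(F))
```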
A New High-Throughput Approach to Genotype Ancient Human Gastrointestinal Parasites.
Côté, Nathalie M L; Daligault, Julien; Pruvost, Mélanie; Bennett, E Andrew; Gorgé, Olivier; Guimaraes, Silvia; Capelli, Nicolas; Le Bailly, Matthieu; Geigl, Eva-Maria; Grange, Thierry
2016-01-01
Human gastrointestinal parasites are good indicators for hygienic conditions and health status of past and present individuals and communities. While microscopic analysis of eggs in sediments of archeological sites often allows their taxonomic identification, this method is rarely effective at the species level, and requires both the survival of intact eggs and their proper identification. Genotyping via PCR-based approaches has the potential to achieve a precise species-level taxonomic determination. However, so far it has mostly been applied to individual eggs isolated from archeological samples. To increase the throughput and taxonomic accuracy, as well as reduce costs of genotyping methods, we adapted a PCR-based approach coupled with next-generation sequencing to perform precise taxonomic identification of parasitic helminths directly from archeological sediments. Our study of twenty-five 100 to 7,200 year-old archeological samples proved this to be a powerful, reliable and efficient approach for species determination even in the absence of preserved eggs, either as a stand-alone method or as a complement to microscopic studies.
Application of Adaptive Autopilot Designs for an Unmanned Aerial Vehicle
NASA Technical Reports Server (NTRS)
Shin, Yoonghyun; Calise, Anthony J.; Motter, Mark A.
2005-01-01
This paper summarizes the application of two adaptive approaches to autopilot design, and presents an evaluation and comparison of the two approaches in simulation for an unmanned aerial vehicle. One approach employs two-stage dynamic inversion and the other employs feedback dynamic inversions based on a command augmentation system. Both are augmented with neural network based adaptive elements. The approaches permit adaptation to both parametric uncertainty and unmodeled dynamics, and incorporate a method that permits adaptation during periods of control saturation. Simulation results for an FQM-117B radio controlled miniature aerial vehicle are presented to illustrate the performance of the neural network based adaptation.
An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
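As a hedged caricature of the adaptive allocation step (assumptions mine: proportional allocation with +1 smoothing; the paper's sequential design also preserves design-unbiased estimation, which this toy ignores), second-stage effort is directed to the primary units with the highest first-stage counts:

```python
import numpy as np

def allocate_second_stage(first_stage_counts, total_effort, min_units=1):
    """Split a fixed second-stage budget across primary units.

    After an initial sample in each primary unit, extra survey effort
    goes preferentially to units where more of the rare, clustered
    population was detected.  Rounding may shift the realized total
    slightly; this is only an illustration of the allocation idea.
    """
    counts = np.asarray(first_stage_counts, dtype=float)
    weights = counts + 1.0               # +1 keeps empty units searchable
    alloc = np.round(total_effort * weights / weights.sum())
    return np.maximum(min_units, alloc).astype(int)

# Units with the highest first-stage counts receive most of the effort.
print(allocate_second_stage([0, 2, 9, 0, 1], total_effort=30))
```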
Multidimensional Adaptation in MAS Organizations.
Alberola, Juan M; Julian, Vicente; Garcia-Fornes, Ana
2013-04-01
Organization adaptation requires determining the consequences of applying changes not only in terms of the benefits provided but also measuring the adaptation costs as well as the impact that these changes have on all of the components of the organization. In this paper, we provide an approach for adaptation in multiagent systems based on a multidimensional transition deliberation mechanism (MTDM). This approach considers transitions in multiple dimensions and is aimed at obtaining the adaptation with the highest potential for improvement in utility based on the costs of adaptation. The approach provides an accurate measurement of the impact of the adaptation since it determines the organization that is to be transitioned to as well as the changes required to carry out this transition. We show an example of adaptation in a service provider network environment in order to demonstrate that the measurement of the adaptation consequences taken by the MTDM improves the organization performance more than the other approaches.
A quality improvement approach to capacity building in low- and middle-income countries.
Bardfield, Joshua; Agins, Bruce; Akiyama, Matthew; Basenero, Apollo; Luphala, Patience; Kaindjee-Tjituka, Francina; Natanael, Salomo; Hamunime, Ndapewa
2015-07-01
To describe the HEALTHQUAL framework consisting of the following three components: performance measurement, quality improvement and the quality management program, representing an adaptive approach to building capacity in national quality management programs in low and middle-income countries. We present a case study from Namibia illustrating how this approach is adapted to country context. HEALTHQUAL partners with Ministries of Health to build knowledge and expertise in modern improvement methods, including data collection, analysis and reporting, process analysis and the use of data to implement quality improvement projects that aim to improve systems and processes of care. Clinical performance measures are selected in each country by the Ministry of Health on the basis of national guidelines. Patient records are sampled using a standardized statistical table to achieve a minimum confidence interval of 90%, with a spread of ±8% in participating facilities. Data are routinely reviewed to identify gaps in patient care, and aggregated to produce facility mean scores that are trended over time. A formal organizational assessment is conducted at facility and national levels to review the implementation progress. Aggregate mean rates of performance for 10 of 11 indicators of HIV care improved for adult HIV-positive patients between 2008 and 2013. Quality improvement is an approach to capacity building and health systems strengthening that offers adaptive methodology. Synergistic implementation of elements of a national quality program can lead to improvements in care, in parallel with systematic capacity development for measurement, improvement and quality management throughout the healthcare delivery system.
Hwang, Wei-Chin
2010-01-01
How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed. The majority of these frameworks take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves five phases that target developing, testing, and reformulating therapy modifications: (a) generating knowledge and collaborating with stakeholders, (b) integrating generated information with theory and empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising the culturally adapted intervention, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but it can also be readily applied to modify therapy for other ethnic groups. PMID:20625458
A novel approach for SEMG signal classification with adaptive local binary patterns.
Ertuğrul, Ömer Faruk; Kaya, Yılmaz; Tekin, Ramazan
2016-07-01
Feature extraction plays a major role in the pattern recognition process, and this paper presents a novel feature extraction approach, the adaptive local binary pattern (aLBP). aLBP is built on the local binary pattern (LBP), which is an image processing method, and the one-dimensional local binary pattern (1D-LBP). In LBP, each pixel is compared with its neighbors. Similarly, in 1D-LBP, each data point in the raw signal is compared with its neighbors. 1D-LBP extracts features based on local changes in the signal, so it has a high potential for medical applications: each action or abnormality recorded in SEMG signals has its own pattern, and via 1D-LBP these (hidden) patterns may be detected. However, the positions of the neighbors in 1D-LBP are fixed by the position of the data point in the signal, and both LBP and 1D-LBP are very sensitive to noise, which limits their capacity to detect hidden patterns. To overcome these drawbacks, aLBP is proposed. In aLBP, the positions of the neighbors and their values can be assigned adaptively via down-sampling and smoothing coefficients, which substantially increases the potential to detect (hidden) patterns that may express an illness or an action. To validate the proposed feature extraction approach, two different datasets were employed. The accuracies achieved by the proposed approach were higher than those obtained with popular feature extraction approaches and than results reported in the literature, indicating that the proposed method can be used to investigate SEMG signals. In summary, this work attempts to develop an adaptive feature extraction scheme that can be utilized for extracting features from local changes in different categories of time-varying signals.
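A small sketch of plain 1D-LBP with a fixed down-sampling distance d standing in for aLBP's adaptive neighbor placement (in the actual method the neighbor positions and values are assigned adaptively via down-sampling and smoothing coefficients; here d is a constant, and all names are illustrative):

```python
import numpy as np

def lbp_1d(signal, p=4, d=1):
    """One-dimensional local binary pattern features of a signal.

    Each sample is compared with its p nearest neighbors taken every d
    samples (d > 1 mimics down-sampled neighborhoods); the comparison
    bits form a code, and the normalized code histogram is the feature.
    """
    x = np.asarray(signal, dtype=float)
    half, codes = p // 2, []
    for i in range(half * d, len(x) - half * d):
        neighbors = [x[i + k * d] for k in range(-half, half + 1) if k != 0]
        bits = [1 if n >= x[i] else 0 for n in neighbors]
        codes.append(sum(b << j for j, b in enumerate(bits)))
    hist, _ = np.histogram(codes, bins=2 ** p, range=(0, 2 ** p))
    return hist / max(len(codes), 1)

print(lbp_1d(np.sin(np.linspace(0, 6, 200)), p=4, d=2))
```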
Echolocation in Blainville's beaked whales (Mesoplodon densirostris).
Madsen, P T; de Soto, N Aguilar; Arranz, P; Johnson, M
2013-06-01
Here we use sound and movement recording tags to study how deep-diving Blainville's beaked whales (Mesoplodon densirostris) use echolocation to forage in their natural mesopelagic habitat. These whales ensonify thousands of organisms per dive but select only about 25 prey for capture. They negotiate their cluttered environment by radiating sound in a narrow 20° field of view which they sample with 1.5-3 clicks per metre travelled requiring only some 60 clicks to locate, select and approach each prey. Sampling rates do not appear to be defined by the range to individual targets, but rather by the movement of the predator. Whales sample faster when they encounter patches of prey allowing them to search new water volumes while turning rapidly to stay within a patch. This implies that the Griffin search-approach-capture model of biosonar foraging must be expanded to account for sampling behaviours adapted to the overall prey distribution. Beaked whales can classify prey at more than 15 m range adopting stereotyped motor patterns when approaching some prey. This long detection range relative to swimming speed facilitates a deliberate mode of sensory-motor operation in which prey and capture tactics can be selected to optimize energy returns during long breath-hold dives.
Domain adaptation via transfer component analysis.
Pan, Sinno Jialin; Tsang, Ivor W; Kwok, James T; Yang, Qiang
2011-02-01
Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.
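As a minimal building block, the sketch below computes the squared maximum mean discrepancy (MMD) between a source and a target sample with an RBF kernel; TCA goes further and learns a projection that shrinks this quantity, which the sketch does not attempt. The function name and gamma value are illustrative.

```python
import numpy as np

def mmd_rbf(Xs, Xt, gamma=1.0):
    """Squared maximum mean discrepancy between two samples (RBF kernel).

    A small value means the two empirical distributions look similar to
    the kernel; TCA searches for a low-dimensional projection that makes
    this small while preserving data properties.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (100, 3))     # source sample
Xt = rng.normal(0.5, 1.0, (120, 3))     # shifted target sample
print(mmd_rbf(Xs, Xt, gamma=0.5))
```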
How prevention curricula are taught under real-world conditions
Miller-Day, Michelle; Pettigrew, Jonathan; Hecht, Michael L.; Shin, YoungJu; Graham, John; Krieger, Janice
2015-01-01
Purpose As interventions are disseminated widely, issues of fidelity and adaptation become increasingly critical to understand. This study aims to describe the types of adaptations made by teachers delivering a school-based substance use prevention curriculum and their reasons for adapting program content. Design/methodology/approach To determine the degree to which implementers adhere to a prevention curriculum, naturally adapt the curriculum, and the reasons implementers give for making adaptations, the study examined lesson adaptations made by the 31 teachers who implemented the keepin' it REAL drug prevention curriculum in 7th grade classrooms (n = 25 schools). Data were collected from teacher self-reports after each lesson and observer coding of videotaped lessons. From the total sample, 276 lesson videos were randomly selected for observational analysis. Findings Teachers self-reported adapting more than 68 percent of prevention lessons, while independent observers reported more than 97 percent of the observed lessons were adapted in some way. Types of adaptations included: altering the delivery of the lesson by revising the delivery timetable or delivery context; changing content of the lesson by removing, partially covering, revising, or adding content; and altering the designated format of the lesson (such as assigning small group activities to students as individual work). Reasons for adaptation included responding to constraints (time, institutional, personal, and technical), and responding to student needs (students' abilities to process curriculum content, to enhance student engagement with material). Research limitations/implications The study sample was limited to rural schools in the US mid-Atlantic; however, the results suggest that if programs are to be effectively implemented, program developers need a better understanding of the types of adaptations and reasons implementers provide for adapting curricula. Practical implications These descriptive data suggest that prevention curricula be developed in shorter teaching modules, developers reconsider the usefulness of homework, and implementer training and ongoing support might benefit from more attention to different implementation styles. Originality/value With nearly half of US public schools implementing some form of evidence-based substance use prevention program, issues of implementation fidelity and adaptation have become paramount in the field of prevention. The findings from this study reveal the complexity of the types of adaptations teachers make naturally in the classroom to evidence-based curricula and provide reasons for these adaptations. This information should prove useful for prevention researchers, program developers, and health educators alike. PMID:26290626
A Climate Change Adaptation Strategy for Management of ...
Sea level rise is causing shoreline erosion, increased coastal flooding, and marsh vulnerability to the impact of storms. Coastal marshes provide flood abatement, carbon and nutrient sequestration, water quality maintenance, and habitat for fish, shellfish, and wildlife, including species of concern, such as the saltmarsh sparrow (Ammodramus caudacutus). We present a climate change adaptation strategy (CCAS) adopted by scientific, management, and policy stakeholders for managing coastal marshes and enhancing system resiliency. A common adaptive management approach previously used for restoration projects was modified to identify climate-related vulnerabilities and plan climate change adaptive actions. As an example of implementation of the CCAS, we describe the stakeholder plans and management actions the US Fish and Wildlife Service and partners developed to build coastal resiliency in the Narrow River Estuary, RI, in the aftermath of Superstorm Sandy. When possible, an experimental BACI (before-after, control-impact) design, described as pre- and post-sampling at the impact site and one or more control sites, was incorporated into the climate change adaptation and implementation plans. Specific climate change adaptive actions and monitoring plans are described and include shoreline stabilization, restoring marsh drainage, increasing marsh elevation, and enabling upland marsh migration. The CCAS provides a framework and methodology for successfully managing coastal marshes.
Covariance Matrix Adaptation Evolutionary Strategy for Drift Correction of Electronic Nose Data
NASA Astrophysics Data System (ADS)
Di Carlo, S.; Falasconi, M.; Sanchez, E.; Sberveglieri, G.; Scionti, A.; Squillero, G.; Tonda, A.
2011-09-01
Electronic Noses (ENs) might represent a simple, fast, high-sample-throughput and economic alternative to conventional analytical instruments [1]. However, gas sensor drift still limits EN adoption in real industrial setups due to high recalibration effort and cost [2]. In fact, pattern recognition (PaRC) models built in the training phase become useless after a period of time, in some cases a few weeks. Although algorithms to mitigate the drift date back to the early 1990s, this is still a challenging issue for the chemical sensor community [3]. Among other approaches, adaptive drift correction methods adjust the PaRC model in parallel with data acquisition without need of periodic calibration. Self-Organizing Maps (SOMs) [4] and Adaptive Resonance Theory (ART) networks [5] have already been tested in the past with fair success. This paper presents and discusses an original methodology based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [6], suited for stochastic optimization of complex problems.
ERIC Educational Resources Information Center
Hardison, Debra M.
2014-01-01
Research on the effectiveness of short-term study-abroad (SA) programs for improving oral skills has shown mixed results. In this study, 24 L2 German learners (L1 English) provided pre- and post-SA speech samples addressing a hypothetical situation and completed surveys on cross-cultural interest and adaptability; L2 communication affect,…
PlasFlow: predicting plasmid sequences in metagenomic data using genome signatures
Lipinski, Leszek; Dziembowski, Andrzej
2018-01-01
Plasmids are mobile genetic elements that play an important role in the environmental adaptation of microorganisms. Although plasmids are usually analyzed in cultured microorganisms, there is a need for methods that allow for the analysis of pools of plasmids (plasmidomes) in environmental samples. To that end, several molecular biology and bioinformatics methods have been developed; however, they are limited to environments with low diversity and cannot recover large plasmids. Here, we present PlasFlow, a novel tool based on genomic signatures that employs a neural network approach for identification of bacterial plasmid sequences in environmental samples. PlasFlow can recover plasmid sequences from assembled metagenomes without any prior knowledge of the taxonomical or functional composition of samples with an accuracy up to 96%. It can also recover sequences of both circular and linear plasmids and can perform initial taxonomical classification of sequences. Compared to other currently available tools, PlasFlow demonstrated significantly better performance on test datasets. Analysis of two samples from heavy metal-contaminated microbial mats revealed that plasmids may constitute an important fraction of their metagenomes and carry genes involved in heavy-metal homeostasis, proving the pivotal role of plasmids in microorganism adaptation to environmental conditions. PMID:29346586
Simultaneous Spectral Temporal Adaptive Raman Spectrometer - SSTARS
NASA Technical Reports Server (NTRS)
Blacksberg, Jordana
2010-01-01
Raman spectroscopy is a prime candidate for the next generation of planetary instruments, as it addresses the primary goal of mineralogical analysis, which is structure and composition. However, large fluorescence return from many mineral samples under visible light excitation can render Raman spectra unattainable. Using the described approach, Raman and fluorescence, which occur on different time scales, can be simultaneously obtained from mineral samples using a compact instrument in a planetary environment. This new approach is based on the use of time-resolved spectroscopy for removing the fluorescence background from Raman spectra in the laboratory. In the SSTARS instrument, a visible excitation source (a green, pulsed laser) is used to generate Raman and fluorescence signals in a mineral sample. A spectral notch filter eliminates the directly reflected beam. A grating then disperses the signal spectrally, and a streak camera provides temporal resolution. The output of the streak camera is imaged on the CCD (charge-coupled device), and the data are read out electronically. By adjusting the sweep speed of the streak camera, anywhere from picoseconds to milliseconds, it is possible to resolve Raman spectra from numerous fluorescence spectra in the same sample. The key features of SSTARS include a compact streak tube capable of picosecond time resolution for collection of simultaneous spectral and temporal information; adaptive streak tube electronics that can rapidly change from one sweep rate to another over ranges of picoseconds to milliseconds, enabling collection of both Raman and fluorescence signatures versus time and wavelength; and Synchroscan integration that allows for a compact, low-power laser without compromising ultimate sensitivity.
Martinez, Omar; Wu, Elwin; Levine, Ethan C; Muñoz-Laboy, Miguel; Fernandez, M Isabel; Bass, Sarah Bauerle; Moya, Eva M; Frasca, Timothy; Chavez-Baray, Silvia; Icard, Larry D; Ovejero, Hugo; Carballo-Diéguez, Alex; Rhodes, Scott D
2016-01-01
Successful HIV prevention and treatment requires evidence-based approaches that combine biomedical strategies with behavioral interventions that are socially and culturally appropriate for the population or community being prioritized. Although there has been a push for a combination approach, how best to integrate different strategies into existing behavioral HIV prevention interventions remains unclear. The need to develop effective combination approaches is of particular importance for men who have sex with men (MSM), who face a disproportionately high risk of HIV acquisition. We collaborated with Latino male couples and providers to adapt Connect 'n Unite, an evidence-based intervention for Black male couples, for Latino male couples. We conducted a series of three focus groups, each with two cohorts of couples, and one focus group with providers. A purposive stratified sample of 20 couples (N = 40, divided into two cohorts) and 10 providers provided insights into how to adapt and integrate social, cultural, and biomedical approaches in a couples-based HIV/AIDS behavioral intervention. The majority (N = 37) of the couple participants had no prior knowledge of the following new biomedical strategies: non-occupational post-exposure prophylaxis (nPEP); pre-exposure prophylaxis (PrEP); and HIV self-testing kits. After they were introduced to these biomedical interventions, all participants expressed a need for information and empowerment through knowledge and awareness of these interventions. In particular, participants suggested that we provide PrEP and HIV self-testing kits by the middle or end of the intervention. Providers suggested a need to address behavioral, social and structural issues, such as language barriers; and the promotion of client-centered approaches to increase access to, adaptation of, and adherence to biomedical strategies. Corroborating what couple participants suggested, providers agreed that biomedical strategies should be offered after providing information about these tools. Regarding culturally sensitive and responsive approaches, participants identified stigma and discrimination associated with HIV and sexual identity as barriers to care, language barriers and documentation status as further barriers to care, the couple-based approach as ideal to health promotion, and the need to include family topics in the intervention. We successfully adapted an evidence-based behavioral HIV prevention intervention for Latino male couples. The adapted intervention, called Conectando Latinos en Pareja, integrates social, cultural, behavioral and biomedical strategies to address the HIV epidemic among Latino MSM. The study highlights the promise regarding the feasibility of implementing a combination approach to HIV prevention in this population.
A sub-sampled approach to extremely low-dose STEM
Stevens, A.; Luzi, L.; Yang, H.; ...
2018-01-22
The inpainting of deliberately and randomly sub-sampled images offers a potential means to image specimens at high resolution and under extremely low-dose conditions (≤1 e⁻/Å²) using a scanning transmission electron microscope. We show that deliberate sub-sampling acquires images at least an order of magnitude faster than conventional low-dose methods for an equivalent electron dose. More importantly, when adaptive sub-sampling is implemented to acquire the images, there is a significant increase in the resolution and sensitivity which accompanies the increase in imaging speed. Lastly, we demonstrate the potential of this method for beam-sensitive materials and in-situ observations by experimentally imaging the node distribution in a metal-organic framework.
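A toy version of the acquire-then-inpaint pipeline, with assumptions labeled: the paper reconstructs images with dictionary-learning-based inpainting, whereas this sketch substitutes plain cubic interpolation (scipy's griddata) over a 15% random pixel mask of a synthetic frame, just to show the mechanics.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic ground truth stands in for a frame we cannot fully dose.
yy, xx = np.mgrid[0:64, 0:64]
image = np.sin(xx / 5.0) * np.cos(yy / 7.0)

# "Acquire" only 15% of the pixels at random, then inpaint the rest.
mask = rng.random(image.shape) < 0.15
points = np.argwhere(mask)                 # coordinates of measured pixels
values = image[mask]                       # measured intensities
grid = np.argwhere(np.ones_like(mask, dtype=bool))   # all pixel coordinates
recon = griddata(points, values, grid, method='cubic').reshape(image.shape)

# NaNs fall outside the convex hull of the measured points.
print(np.nanmean((recon - image) ** 2))    # reconstruction error
```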
Bueno, Juan M; Skorsetz, Martin; Palacios, Raquel; Gualda, Emilio J; Artal, Pablo
2014-01-01
Despite the inherent confocality and optical sectioning capabilities of multiphoton microscopy, three-dimensional (3-D) imaging of thick samples is limited by the specimen-induced aberrations. The combination of immersion objectives and sensorless adaptive optics (AO) techniques has been suggested to overcome this difficulty. However, a complex plane-by-plane correction of aberrations is required, and its performance depends on a set of image-based merit functions. We propose here an alternative approach to increase penetration depth in 3-D multiphoton microscopy imaging. It is based on the manipulation of the spherical aberration (SA) of the incident beam with an AO device while performing fast tomographic multiphoton imaging. When inducing SA, the image quality at best focus is reduced; however, better quality images are obtained from deeper planes within the sample. This is a compromise that enables registration of improved 3-D multiphoton images using nonimmersion objectives. Examples on ocular tissues and nonbiological samples providing different types of nonlinear signal are presented. The implementation of this technique in a future clinical instrument might provide a better visualization of corneal structures in living eyes.
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the singular value decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
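The random projection step is easy to state in a few lines. The sketch below projects high-dimensional feature vectors onto a random lower-dimensional subspace with a Gaussian matrix (one standard construction; the paper's spherically random rotation plus coordinate sampling variant differs in detail, and the names here are illustrative):

```python
import numpy as np

def random_project(X, k, seed=0):
    """Project rows of X onto a random k-dimensional subspace.

    Data-independent Gaussian random projections approximately preserve
    pairwise distances (Johnson-Lindenstrauss), which is what allows the
    policy iteration to run in the compressed feature space.
    """
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ R

X = np.random.default_rng(1).normal(size=(500, 1000))   # high-dim features
print(random_project(X, k=50).shape)                     # (500, 50)
```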
Microfluidic approaches to malaria detection
Gascoyne, Peter; Satayavivad, Jutamaad; Ruchirawat, Mathuros
2009-01-01
Microfluidic systems are under development to address a variety of medical problems. Key advantages of micrototal analysis systems based on microfluidic technology are the promise of small size and the integration of sample handling and measurement functions within a single, automated device having low mass-production costs. Here, we review the spectrum of methods currently used to detect malaria, consider their advantages and disadvantages, and discuss their adaptability towards integration into small, automated micro total analysis systems. Molecular amplification methods emerge as leading candidates for chip-based systems because they offer extremely high sensitivity, the ability to recognize malaria species and strain, and they will be adaptable to the detection of new genotypic signatures that will emerge from current genomic-based research of the disease. Current approaches to the development of chip-based molecular amplification are considered with special emphasis on flow-through PCR, and we present for the first time the method of malaria specimen preparation by dielectrophoretic field-flow-fractionation. Although many challenges must be addressed to realize a micrototal analysis system for malaria diagnosis, it is concluded that the potential benefits of the approach are well worth pursuing. PMID:14744562
Development of an adaptive bilateral filter for evaluating color image difference
NASA Astrophysics Data System (ADS)
Wang, Zhaohui; Hardeberg, Jon Yngve
2012-04-01
Spatial filtering, which aims to mimic the contrast sensitivity function (CSF) of the human visual system (HVS), has previously been combined with color difference formulae for measuring color image reproduction errors. These spatial filters attenuate imperceptible information in images, unfortunately including high-frequency edges, which are believed to be crucial in the process of scene analysis by the HVS. The adaptive bilateral filter represents a novel approach that avoids the undesirable loss of edge information introduced by CSF-based filtering. The bilateral filter employs two Gaussian smoothing filters in different domains, i.e., the spatial domain and the intensity domain. We propose a method to set the filter parameters, which are designed to adapt to the corresponding viewing conditions and to the quantity and homogeneity of information contained in an image. Experiments and discussions are given to support the proposal. A series of perceptual experiments were conducted to evaluate the performance of our approach. The experimental sample images were reproduced with variations in six image attributes: lightness, chroma, hue, compression, noise, and sharpness/blurriness. Pearson's correlation between the model-predicted image difference and the observed difference was employed to evaluate the performance and compare it with that of spatial CIELAB and an image appearance model.
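For orientation, here is a plain (non-adaptive) bilateral filter on a grayscale image, showing the two Gaussian weights in the spatial and intensity domains; the paper's contribution is the adaptive choice of the two sigmas from viewing conditions and image content, which this sketch leaves as fixed parameters with illustrative values.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Bilateral filter for a grayscale image with values in [0, 1].

    Weights combine spatial closeness and intensity similarity, so
    smooth regions are averaged while strong edges survive.
    """
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            intensity = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            weights = spatial * intensity
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

img = np.random.default_rng(0).random((32, 32))
print(bilateral_filter(img).shape)   # (32, 32)
```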
Hobbs, Brian P.; Carlin, Bradley P.; Mandrekar, Sumithra J.; Sargent, Daniel J.
2011-01-01
Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected Type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this paper, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is elaborating the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen. PMID:21361892
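To ground the power prior machinery the paper elaborates, here is a hedged Gaussian sketch under assumptions of my own (known variance, flat initial prior, fixed weight a0): raising the historical likelihood to the power a0 in [0, 1] acts like having observed only a0 * n0 historical data points. The commensurate-prior methods effectively let the data choose this weight; the function below only shows the mechanics for a fixed a0.

```python
import numpy as np

def power_prior_posterior(y, y0, sigma, a0):
    """Posterior of a Gaussian mean with a0-down-weighted historical data.

    With a flat initial prior and known sigma, the power prior posterior
    is Gaussian with the current data carrying weight n and the historical
    data weight a0 * n0.  a0 = 0 ignores history, a0 = 1 pools it fully.
    """
    n, n0 = len(y), len(y0)
    mean = (n * np.mean(y) + a0 * n0 * np.mean(y0)) / (n + a0 * n0)
    sd = sigma / np.sqrt(n + a0 * n0)
    return mean, sd

rng = np.random.default_rng(0)
y0 = rng.normal(1.0, 1.0, 200)   # historical data
y = rng.normal(0.2, 1.0, 50)     # current data with a conflicting mean
for a0 in (0.0, 0.5, 1.0):
    print(a0, power_prior_posterior(y, y0, sigma=1.0, a0=a0))
```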
Fuzzy adaptive interacting multiple model nonlinear filter for integrated navigation sensor fusion.
Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing
2011-01-01
In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for maneuvering vehicles is presented. The unscented Kalman filter (UKF) employs a set of sigma points obtained through deterministic sampling, so that a linearization process is not necessary, and the errors caused by linearization in the traditional extended Kalman filter (EKF) can therefore be avoided. Nonlinear filters naturally suffer, to some extent, from the same problem as the EKF: uncertainty in the process noise and measurement noise degrades the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized to determine an adequate value of the process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through a fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem of vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in navigation estimation accuracy compared to relatively conventional approaches such as the UKF and IMMUKF.
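For readers unfamiliar with the deterministic sampling step, the sketch below constructs the 2n + 1 sigma points of the unscented transform in one common parameterization with a single spread parameter kappa (the paper's filter embeds this inside IMM model switching and fuzzy noise adaptation, which the sketch does not attempt):

```python
import numpy as np
from scipy.linalg import cholesky

def sigma_points(mean, cov, kappa=1.0):
    """Deterministic sigma points of the unscented transform.

    Returns 2n + 1 points whose weighted sample mean and covariance match
    (mean, cov); propagating them through a nonlinearity replaces the
    Jacobian linearization used by the EKF.
    """
    n = len(mean)
    S = cholesky((n + kappa) * cov, lower=True)   # matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    weights = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    return np.array(pts), weights

pts, w = sigma_points(np.zeros(2), np.eye(2), kappa=1.0)
print(pts.shape, w.sum())   # (5, 2) 1.0
```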
The Limits to Adaptation; A Systems Approach
The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...
Advances in adaptive control theory: Gradient- and derivative-free approaches
NASA Astrophysics Data System (ADS)
Yucelen, Tansel
In this dissertation, we present new approaches to improve standard designs in adaptive control theory, and novel adaptive control architectures. We first present a novel Kalman filter based approach for approximately enforcing a linear constraint in standard adaptive control design. One application is that this leads to alternative forms for well known modification terms such as e-modification. In addition, it leads to smaller tracking errors without incurring significant oscillations in the system response and without requiring high modification gain. We derive alternative forms of e- and adaptive loop recovery (ALR-) modifications. Next, we show how to use Kalman filter optimization to derive a novel adaptation law. This results in an optimization-based time-varying adaptation gain that reduces the need for adaptation gain tuning. A second major contribution of this dissertation is the development of a novel derivative-free, delayed weight update law for adaptive control. The assumption of constant unknown ideal weights is relaxed to the existence of time-varying weights, such that fast and possibly discontinuous variation in weights is allowed. This approach is particularly advantageous for applications to systems that can undergo a sudden change in dynamics, such as might be due to reconfiguration, deployment of a payload, docking, or structural damage, and for rejection of external disturbance processes. As a third and final contribution, we develop a novel approach for extending all the methods developed in this dissertation to the case of output feedback. The approach is developed only for the case of derivative-free adaptive control, and the extension of the other approaches developed previously for the state feedback case to output feedback is left as a future research topic. The proposed approaches of this dissertation are illustrated in both simulation and flight test.
Design of telehealth trials--introducing adaptive approaches.
Law, Lisa M; Wason, James M S
2014-12-01
The field of telehealth and telemedicine is expanding as the need to improve efficiency of health care becomes more pressing. The decision to implement a telehealth system is generally an expensive undertaking that impacts a large number of patients and other stakeholders. It is therefore extremely important that the decision is fully supported by accurate evaluation of telehealth interventions. Numerous reviews of telehealth have described the evidence base as inconsistent. In response, they call for larger, more rigorously controlled trials, and trials which go beyond evaluation of clinical effectiveness alone. The aim of this paper is to discuss various ways in which evaluation of telehealth could be improved by the use of adaptive trial designs. We discuss various adaptive design options, such as sample size reviews and changing the study hypothesis to address uncertain parameters, group sequential trials and multi-arm multi-stage trials to improve efficiency, and enrichment designs to maximise the chances of obtaining clear evidence about the telehealth intervention. There is potential to address the flaws discussed in the telehealth literature through the adoption of adaptive approaches to trial design. Such designs could lead to improvements in efficiency, allow the evaluation of multiple telehealth interventions in a cost-effective way, or accurately assess a range of endpoints that are important in the overall success of a telehealth programme. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Gradient-free MCMC methods for dynamic causal modelling.
Sengupta, Biswa; Friston, Karl J; Penny, Will D
2015-05-15
In this technical note we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with the random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density, albeit at an almost 1000% increase in computational time in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler). Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
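As one concrete instance of the adaptive MCMC sampling referred to above, here is a sketch of random walk Metropolis with a Robbins-Monro adaptation of the proposal scale toward a target acceptance rate; this is a common textbook variant, not necessarily the exact sampler benchmarked in the note, and all names are illustrative.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_steps=5000, target=0.234, seed=0):
    """Random walk Metropolis with an adapted scalar proposal scale.

    The step size is nudged after every proposal so the acceptance rate
    drifts toward `target`; the adaptation decays with iteration count so
    the chain's stationary distribution is preserved asymptotically.
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.array(x0, dtype=float))
    lp, scale, chain = log_post(x), 1.0, []
    for t in range(1, n_steps + 1):
        prop = x + scale * rng.normal(size=x.shape)
        lp_prop = log_post(prop)
        accept = np.log(rng.random()) < lp_prop - lp
        if accept:
            x, lp = prop, lp_prop
        scale *= np.exp((float(accept) - target) / t ** 0.6)  # Robbins-Monro
        chain.append(x.copy())
    return np.array(chain)

# Standard 2-D Gaussian target: mean near 0, std near 1.
samples = adaptive_metropolis(lambda z: -0.5 * float(z @ z), np.zeros(2))
print(samples.mean(axis=0), samples.std(axis=0))
```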
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
Desired Accuracy Estimation of Noise Function from ECG Signal by Fuzzy Approach
Vahabi, Zahra; Kermani, Saeed
2012-01-01
Unknown noise and artifacts present in medical signals are estimated and then removed with a non-linear fuzzy filter. An adaptive neuro-fuzzy inference system, which has a non-linear structure, is presented for predicting the noise function from previous samples. This paper describes a neuro-fuzzy method to estimate the unknown noise of an electrocardiogram signal: an adaptive neural network is combined with a fuzzy system to construct a fuzzy predictor. The system's settings, such as the number of membership functions (MFs) for each input and output, the number of training epochs, the type of MFs for each input and output, and the learning algorithm, are determined from the training data. Finally, simulated experimental results are presented for validation. PMID:23717810
An adaptive signal-processing approach to online adaptive tutoring.
Bergeron, Bryan; Cline, Andrew
2011-01-01
Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.
ERIC Educational Resources Information Center
Rossier, Jerome; Zecca, Gregory; Stauffer, Sarah D.; Maggiori, Christian; Dauwalder, Jean-Pierre
2012-01-01
The aim of this study was to analyze the psychometric properties of the Career Adapt-Abilities Scale (CAAS) in a French-speaking Swiss sample and its relationship with personality dimensions and work engagement. The heterogeneous sample of 391 participants (M[subscript age] = 39.59, SD = 12.30) completed the CAAS-International and a short version…
Namroud, Marie-Claire; Beaulieu, Jean; Juge, Nicolas; Laroche, Jérôme; Bousquet, Jean
2008-01-01
Conifers are characterized by a large genome size and a rapid decay of linkage disequilibrium, most often within gene limits. Genome scans based on noncoding markers are less likely to detect molecular adaptation linked to genes in these species. In this study, we assessed the effectiveness of a genome-wide single nucleotide polymorphism (SNP) scan focused on expressed genes in detecting local adaptation in a conifer species. Samples were collected from six natural populations of white spruce (Picea glauca) moderately differentiated for several quantitative characters. A total of 534 SNPs representing 345 expressed genes were analysed. Genes potentially under natural selection were identified by estimating the differentiation in SNP frequencies among populations (FST) and identifying outliers, and by estimating local differentiation using a Bayesian approach. Both average expected heterozygosity and population differentiation estimates (HE = 0.270 and FST = 0.006) were comparable to those obtained with other genetic markers. Of all genes, 5.5% were identified as outliers with FST at the 95% confidence level, while 14% were identified as candidates for local adaptation with the Bayesian method. There was some overlap between the two gene sets. More than half of the candidate genes for local adaptation were specific to the warmest population, about 20% to the most arid population, and 15% to the coldest and most humid higher altitude population. These adaptive trends were consistent with the genes’ putative functions and the divergence in quantitative traits noted among the populations. The results suggest that an approach separating the locus and population effects is useful to identify genes potentially under selection. These candidates are worth exploring in more details at the physiological and ecological levels. PMID:18662225
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
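A serial, Euclidean stand-in for the priority idea (the paper works with the intrinsic geodesic metric on surfaces and processes candidates on many threads; here we stay in the unit square and process candidates in priority order, which yields the same accepted set):

```python
import numpy as np

def poisson_disk(n_candidates, r, seed=0):
    """Priority-driven dart throwing in the unit square.

    Every candidate gets a random, unique priority; a candidate is kept
    iff no kept candidate of higher priority lies within distance r.
    Because each test only consults higher-priority neighbors, candidates
    could be processed by many threads concurrently -- simulated here by
    a serial sweep in priority order.
    """
    rng = np.random.default_rng(seed)
    pts = rng.random((n_candidates, 2))
    order = rng.permutation(n_candidates)    # random unique priorities
    kept = []
    for i in order:
        if all(np.linalg.norm(pts[i] - pts[j]) >= r for j in kept):
            kept.append(i)
    return pts[kept]

print(len(poisson_disk(2000, r=0.05)))
```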
Multiple speckle illumination for optical-resolution photoacoustic imaging
NASA Astrophysics Data System (ADS)
Poisson, Florian; Stasio, Nicolino; Moser, Christophe; Psaltis, Demetri; Bossy, Emmanuel
2017-03-01
Optical-resolution photoacoustic microscopy offers exquisite and specific contrast to optical absorption. Conventional approaches generally involve raster scanning a focused spot over the sample. Here, we demonstrate that a full-field illumination approach with multiple speckle illumination can also provide diffraction-limited optical-resolution photoacoustic images. Two different proofs of concept are demonstrated with micro-structured test samples. The first approach follows the principle of correlation/ghost imaging,1, 2 and is based on cross-correlating photoacoustic signals under multiple speckle illumination with known speckle patterns measured during a calibration step. The second approach is a speckle scanning microscopy technique, which adapts the technique proposed in fluorescence microscopy by Bertolotti et al.:3 in our work, spatially unresolved photoacoustic measurements are performed for various translations of unknown speckle patterns. A phase-retrieval algorithm is used to reconstruct the object from the knowledge of the modulus of its Fourier transform yielded by the measurements. Because speckle patterns naturally appear in many various situations, including propagation through biological tissue or multi-mode fibers (for which focusing light is very demanding, if not impossible), speckle-illumination-based photoacoustic microscopy provides a powerful framework for the development of novel reconstruction approaches, well suited to compressed sensing approaches.2
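A toy numerical illustration of the first (correlation/ghost imaging) approach, under strong simplifying assumptions of mine: the speckle patterns are known, statistically independent pixel-wise, and each one yields a single unresolved scalar signal proportional to the total absorbed energy. The object then appears as the covariance between the scalar signals and the local illumination.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy absorber and K known "speckle" patterns on a 32x32 grid.
obj = np.zeros((32, 32))
obj[10:14, 8:24] = 1.0
K = 4000
patterns = rng.random((K, 32, 32))

# One unresolved scalar per pattern: total absorbed energy.
signals = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

# Correlation (ghost) image: covariance of signal with local illumination.
recon = np.tensordot(signals - signals.mean(),
                     patterns - patterns.mean(axis=0), axes=(0, 0)) / K
print(np.unravel_index(recon.argmax(), recon.shape))  # lands in the absorber
```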
Programmed LWR metrology by multi-techniques approach
NASA Astrophysics Data System (ADS)
Reche, Jérôme; Besacier, Maxime; Gergaud, Patrice; Blancquaert, Yoann; Freychet, Guillaume; Labbaye, Thibault
2018-03-01
Nowadays, roughness control presents a huge challenge for the lithography step. For advanced nodes, this morphological aspect reaches the same order of magnitude as the Critical Dimension. Hence, the control of roughness needs an adapted metrology. In this study, specific samples with designed roughness have been manufactured using e-beam lithography. These samples have been characterized with three different methodologies: CD-SEM, OCD and SAXS. The main goal of the project is to compare the capability of each of these techniques in terms of reliability, type of information obtained, time to obtain the measurements and level of maturity for the industry.
NASA Astrophysics Data System (ADS)
Krell, Mario Michael; Wilshusen, Nils; Seeland, Anett; Kim, Su Kyoung
2017-04-01
Objective. Classifier transfers usually come with dataset shifts. To overcome dataset shifts in practical applications, we consider in this paper the limitations in computational resources for the adaptation of batch learning algorithms, like the support vector machine (SVM). Approach. We focus on data selection strategies which limit the size of the stored training data by different inclusion, exclusion, and further dataset manipulation criteria, like handling class imbalance with two new approaches. We provide a comparison of the strategies with linear SVMs on several synthetic datasets with different data shifts as well as on different transfer settings with electroencephalographic (EEG) data. Main results. For the synthetic data, adding only misclassified samples performed astoundingly well. Here, balancing criteria were very important when the other criteria were not well chosen. For the transfer setups, the results show that the best strategy depends on the intensity of the drift during the transfer. For large drifts, adding all samples and removing the oldest results in the best performance, whereas for smaller drifts it can be sufficient to only add samples near the decision boundary of the SVM, which reduces processing resources. Significance. For brain-computer interfaces based on EEG data, models trained on data from a calibration session, a previous recording session, or even from a recording session with another subject are used. We show that, by using the right combination of data selection criteria, it is possible to adapt the SVM classifier to overcome the performance drop from the transfer.
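Two of the data selection strategies discussed can be sketched concretely. The snippet below is a toy illustration under stated assumptions (synthetic 2D Gaussian data, a bounded training buffer, scikit-learn's LinearSVC as the linear SVM); the helper name and buffer size are invented, and this is not the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import LinearSVC

def update_buffer(X_buf, y_buf, x_new, y_new, clf, strategy, max_size=200):
    if strategy == "misclassified":
        # Keep a sample only if the current classifier gets it wrong.
        if clf.predict(x_new.reshape(1, -1))[0] == y_new:
            return X_buf, y_buf
        X_buf = np.vstack([X_buf, x_new]); y_buf = np.append(y_buf, y_new)
    elif strategy == "add_all_remove_oldest":
        X_buf = np.vstack([X_buf, x_new]); y_buf = np.append(y_buf, y_new)
        if len(y_buf) > max_size:                 # drop the oldest sample
            X_buf, y_buf = X_buf[1:], y_buf[1:]
    return X_buf, y_buf

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[150:] += 0.8                                    # simulate a dataset shift
X_buf, y_buf = X[:50], y[:50]
clf = LinearSVC(dual=False).fit(X_buf, y_buf)
for i in range(50, 300):
    X_buf, y_buf = update_buffer(X_buf, y_buf, X[i], y[i], clf,
                                 "add_all_remove_oldest")
    clf.fit(X_buf, y_buf)                         # retrain on the buffer
print("final buffer size:", len(y_buf))
```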
Dynamics of multirate sampled data control systems. [for space shuttle boost vehicle
NASA Technical Reports Server (NTRS)
Naylor, J. R.; Hynes, R. J.; Molnar, D. O.
1974-01-01
The effect of the synthesis approach (single or multirate) on the machine requirements for a digital control system for the space shuttle boost vehicle was investigated. The study encompassed four major work areas: synthesis approach trades, machine requirements trades, design analysis requirements, and multirate adaptive control techniques. The primary results are two multirate autopilot designs for the low Q and maximum Q flight conditions that exhibit equal or better performance than the analog and single rate system designs. Also included is a preferred technique for analyzing and synthesizing multirate digital control systems.
Visual Tracking Using 3D Data and Region-Based Active Contours
2016-09-28
adaptive control strategies which explicitly take uncertainty into account. Filtering methods ranging from the classical Kalman filters valid for...linear systems to the much more general particle filters also fit into this framework in a very natural manner. In particular, the particle filtering ...the number of samples required for accurate filtering increases with the dimension of the system noise. In our approach, we approximate curve
Stochastic detection of enantiomers.
Kang, Xiao-Feng; Cheley, Stephen; Guan, Xiyun; Bayley, Hagan
2006-08-23
The rapid quantification of the enantiomers of small chiral molecules is very important, notably in pharmacology. Here, we show that the enantiomers of drug molecules can be distinguished by stochastic sensing, a single-molecule detection technique. The sensing element is an engineered alpha-hemolysin protein pore, fitted with a beta-cyclodextrin adapter. Using this approach, the enantiomeric composition of samples of ibuprofen and thalidomide can be determined in less than 1 s.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan
In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic amplitude versus angle (AVA) and controlled source electromagnetic (CSEM) data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo (MCMC) sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis (DREAM) and Adaptive Metropolis (AM) samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and CSEM data. The multi-chain MCMC is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic AVA and CSEM joint inversion provides better estimation of reservoir saturations than the seismic AVA-only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated - reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.
NASA Astrophysics Data System (ADS)
Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Bao, Jie; Swiler, Laura
2017-12-01
In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic Amplitude Versus Angle and Controlled-Source Electromagnetic joint inversion provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated - reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.
Thompson, Steven K
2006-12-01
A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and over the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
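The sequential mixture selection can be pictured with a toy sketch. Everything below is invented for illustration (a five-node network, a binary inclusion criterion, a fixed mixture weight); it shows the mechanism of mixing link-traced and random draws via an active set, not Thompson's designs or inference procedure.

```python
import random

def adaptive_network_sample(adjacency, values, n, p_adaptive=0.7, seed=1):
    rng = random.Random(seed)
    units = list(adjacency)
    sample, active = [], set()
    while len(sample) < n:
        # Unsampled neighbors of the current active set.
        neighbors = [v for u in active for v in adjacency[u]
                     if v not in sample]
        if neighbors and rng.random() < p_adaptive:
            u = rng.choice(neighbors)     # adaptive, link-traced draw
        else:
            u = rng.choice([v for v in units if v not in sample])
        sample.append(u)
        if values[u] > 0:                 # unit meets the criterion:
            active.add(u)                 # its links become traceable
    return sample

adjacency = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
values = {0: 0, 1: 1, 2: 0, 3: 1, 4: 1}
print(adaptive_network_sample(adjacency, values, 4))
```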
Adaptive Oceanographic Sampling in a Coastal Environment Using Autonomous Gliding Vehicles
2003-08-01
cost autonomous vehicles with near-global range and modular sensor payload. Particular emphasis is placed on the development of adaptive sampling...environment. Secondary objectives include continued development of adaptive sampling strategies suitable for large fleets of slow-moving autonomous ... vehicles, and development and implementation of new oceanographic sensors and sampling methodologies. The main task completed was a complete redesign of
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
An Approach to Stable Gradient-Descent Adaptation of Higher Order Neural Units.
Bukovsky, Ivo; Homma, Noriyasu
2017-09-01
Stability evaluation of a weight-update system of higher order neural units (HONUs) with polynomial aggregation of neural inputs (also known as classes of polynomial neural networks) for adaptation of both feedforward and recurrent HONUs by a gradient descent method is introduced. An essential core of the approach is based on the spectral radius of the weight-update system, and it allows stability monitoring and maintenance at every adaptation step individually. Assuring the stability of the weight-update system (at every single adaptation step) naturally results in the adaptation stability of the whole neural architecture that adapts to the target data. As an aside, the approach highlights the fact that the weight optimization of a HONU is a linear problem, so the proposed approach can be generally extended to any neural architecture that is linear in its adaptable parameters.
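Because a HONU is linear in its weights, the per-step stability check is cheap. The sketch below illustrates the general idea under simplifying assumptions, not the paper's exact algorithm: for a rank-one gradient step, the weight-update matrix I - mu*phi*phi^T has eigenvalue 1 - mu*||phi||^2 along phi (and 1 in all other directions), so keeping its magnitude below 1 keeps the step contractive. The quadratic feature map, toy target, and learning-rate halving are hypothetical choices.

```python
import numpy as np

def honu_features(x):
    # Second-order HONU: bias, linear terms, and quadratic aggregations.
    return np.concatenate(([1.0], x,
                           np.outer(x, x)[np.triu_indices(len(x))]))

rng = np.random.default_rng(0)
w = np.zeros(6)                         # 1 bias + 2 linear + 3 quadratic
mu = 0.5                                # initial learning rate
for _ in range(500):
    x = rng.normal(size=2)
    target = 1.0 + x[0] * x[1]          # toy nonlinear target
    phi = honu_features(x)
    # The update w_new = (I - mu*phi*phi^T) w + mu*phi*target has
    # eigenvalue lam = 1 - mu*||phi||^2 along phi; all others equal 1.
    lam = 1.0 - mu * (phi @ phi)
    while abs(lam) >= 1.0:              # maintain stability at this step
        mu *= 0.5
        lam = 1.0 - mu * (phi @ phi)
    w += mu * phi * (target - phi @ w)  # stable gradient-descent step
print("learned weights:", np.round(w, 2))  # approaches [1, 0, 0, 0, 1, 0]
```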
Enhancement of gold recovery using bioleaching from gold concentrate
NASA Astrophysics Data System (ADS)
Choi, S. H.; Cho, K. H.; Kim, B. J.; Choi, N. C.; Park, C. Y.
2012-04-01
The gold in refractory ores is encapsulated as fine particles (sometimes at a molecular level) in the crystal structure of the sulfide matrix (typically pyrite, with or without arsenopyrite). This makes it impossible to extract a significant amount of refractory gold by cyanidation, since the cyanide solution cannot penetrate the pyrite/arsenopyrite crystals and dissolve gold particles, even after fine grinding. To effectively extract gold from these ores, an oxidative pretreatment is necessary to break down the sulfide matrix. The most popular methods of pretreatment include nitric acid oxidation, roasting, pressure oxidation and biological oxidation by microorganisms. This study investigated the bioleaching efficiency of Au concentrate under batch experimental conditions (adaptation cycles and chemical composition adaptation) using indigenous acidophilic bacteria collected from gold mine leachate at the Sunsin gold mine, Korea. We conducted the batch experiments with two different chemical compositions (CuSO4 and ZnSO4) and two adaptation cycles, the first of 3 weeks and the second of 6 weeks. The results showed that, in the bacteria-inoculated samples, the pH decreased relative to the initial condition and the Eh increased. In the chemical composition adaptation case, the accumulated leached content of Fe and Pb was higher in the CuSO4-adapted bacteria samples than in the ZnSO4-adapted samples, possibly due to a pre-adaptation effect on the chalcopyrite (CuFeS2) in the gold concentrate. After 21 days in the CuSO4 adaptation-cycle case, the Fe and Pb content was lower in the first-cycle samples (Fe 1.82 and Pb 25.81 times the control sample) than in the second-cycle samples (Fe 2.87 and Pb 62.05 times the control sample). This study indicates that the adaptation chemical composition and the number of adaptation cycles can play an important role in the bioleaching of gold concentrate in eco-friendly and economical metallurgical processes.
Adaptive Learning and Risk Taking
ERIC Educational Resources Information Center
Denrell, Jerker
2007-01-01
Humans and animals learn from experience by reducing the probability of sampling alternatives with poor past outcomes. Using simulations, J. G. March (1996) illustrated how such adaptive sampling could lead to risk-averse as well as risk-seeking behavior. In this article, the author develops a formal theory of how adaptive sampling influences risk…
Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.; ...
2018-02-06
Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts that introduce complex nonlinear factors and hamper the potential impact of machine learning classifiers. Here, this article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based on multiple modules to adapt a pretrained CNN to address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extent areas we introduce several submodules: first, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source examples on the target domain. The workflow presents a novel and practical approach to achieve large-scale domain adaptation with binary classifiers that are based on CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensor, multitemporal, and multiangular conditions. Domain adaptation is assessed on source-target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.
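The second and third submodules can be sketched in isolation. The locality-sensitive hashing scheme and centroid-distance relevance score below are illustrative stand-ins (the article does not specify these exact choices), and all names and feature dimensions are hypothetical.

```python
import numpy as np

def dedup_by_hash(features, n_bits=16, seed=0):
    # Random-projection LSH: near-duplicate features share a bit code.
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, features.shape[1]))
    codes = features @ planes.T > 0
    _, keep = np.unique(codes, axis=0, return_index=True)
    return np.sort(keep)                  # indices of unique samples

def rank_by_relevance(source, target):
    # Rank source examples by closeness to the target-domain prototype.
    centroid = target.mean(axis=0)
    dists = np.linalg.norm(source - centroid, axis=1)
    return np.argsort(dists)              # most relevant first

src = np.random.default_rng(1).normal(size=(100, 8))   # source features
tgt = np.random.default_rng(2).normal(loc=0.5, size=(40, 8))
kept = dedup_by_hash(src)
order = rank_by_relevance(src[kept], tgt)
print(len(kept), "unique samples; top-5 relevant:", kept[order[:5]])
```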
Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation
NASA Technical Reports Server (NTRS)
Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet
2015-01-01
When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as the first attempt, the Extended Kalman filter (EKF) provides sufficient solutions to handling issues arising from nonlinear and non-Gaussian estimation problems. But these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf, but the filter suffers at the update step from the individual component weight selections. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have gained popularity recently for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they will theoretically converge to the exact, equivalent representation of the desired pdf. When the estimated qth moment is needed, the samples are used for its construction, allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique for PF methods, decreasing the complexity of the system to one linear and one nonlinear state estimation problem. The marginalization theory was originally developed by Rao and Blackwell independently. According to Ref. [6], it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supportive statistic. In other words, Rao-Blackwellization allows for smaller but separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, any supporting statistic can be removed and its variance determined.
Next, any other information that depends on the supporting statistic is found, along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation in Ref. [2] and a marginalization approach used for PF methods found in Ref. [7]. In the following sections a modified filtering approach is presented based on a special state-space model within nonlinear systems to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained, and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
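To make the marginalization idea concrete, here is a minimal Rao-Blackwellized particle filter on a toy scalar model, not the paper's marginalized Gaussian sum propagation: particles carry the nonlinear state theta, while a conditionally linear bias state b is integrated out by a per-particle Kalman filter instead of being sampled. The model, noise levels, and particle count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 500
q_theta, q_b, r = 0.05, 0.01, 0.1      # process / measurement noise vars

# Simulate truth for the model y_k = sin(theta_k) + b_k + v_k.
theta_true, b_true, ys = 0.0, 0.5, []
for _ in range(T):
    theta_true += rng.normal(scale=np.sqrt(q_theta))
    b_true += rng.normal(scale=np.sqrt(q_b))
    ys.append(np.sin(theta_true) + b_true + rng.normal(scale=np.sqrt(r)))

theta = rng.normal(size=N)             # particles for the nonlinear state
b_mean, b_var = np.zeros(N), np.ones(N)  # per-particle Kalman stats for b
for y in ys:
    theta += rng.normal(scale=np.sqrt(q_theta), size=N)  # propagate
    b_var += q_b
    # Conditional on each particle, y - sin(theta) = b + v is linear in b.
    S = b_var + r                                        # innovation var
    innov = y - (np.sin(theta) + b_mean)
    w = np.exp(-0.5 * innov**2 / S) / np.sqrt(S)         # particle weights
    w /= w.sum()
    K = b_var / S                                        # Kalman gain
    b_mean += K * innov                                  # analytic update
    b_var *= 1.0 - K
    idx = rng.choice(N, size=N, p=w)                     # resample
    theta, b_mean, b_var = theta[idx], b_mean[idx], b_var[idx]

print("theta estimate:", theta.mean(), " true:", theta_true)
print("bias estimate: ", b_mean.mean(), " true:", b_true)
```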
Automatic motor task selection via a bandit algorithm for a brain-controlled button
NASA Astrophysics Data System (ADS)
Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen
2013-02-01
Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield the best performance, specific to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. For this purpose, we develop an adaptive algorithm, UCB-classif, based on the stochastic bandit theory, and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
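The bandit mechanism can be sketched with the standard UCB1 rule, from the same family of algorithms as UCB-classif (this is not the paper's exact variant); the synthetic per-task accuracies and trial budget below are illustrative assumptions. Trials concentrate on the task with the highest upper confidence bound, so weak tasks are quickly abandoned.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.55, 0.65, 0.80]      # unknown per-task success rates
counts = np.ones(3)                  # one initial trial per task
successes = np.array([rng.random() < p for p in true_rates], dtype=float)

for t in range(1, 300):
    means = successes / counts
    ucb = means + np.sqrt(2 * np.log(t + 3) / counts)  # UCB1 bonus
    k = int(np.argmax(ucb))          # most promising task right now
    successes[k] += rng.random() < true_rates[k]       # run one trial
    counts[k] += 1

print("trials per task:", counts,
      "-> selected task:", int(np.argmax(counts)))
```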
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.
2018-05-01
We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.
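The core surrogate step can be illustrated on a toy 2-parameter model: fit an orthogonal polynomial expansion by least-squares regression on sampled points, then read variance contributions off the coefficients. This sketch uses independent uniform inputs and a Legendre tensor basis (the paper's correlated-input mCov-SI machinery is not reproduced here); the model function and sample count are invented.

```python
import numpy as np
from numpy.polynomial import legendre

def model(x1, x2):
    return x1 + 0.5 * x2**2 + 0.2 * x1 * x2   # "expensive" model stand-in

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(400, 2))         # sampling points in [-1,1]^2

# Tensor basis of Legendre polynomials up to total degree 2.
degs = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
def basis(x):
    return np.column_stack([
        legendre.legval(x[:, 0], np.eye(3)[i]) *
        legendre.legval(x[:, 1], np.eye(3)[j]) for i, j in degs])

coef, *_ = np.linalg.lstsq(basis(x), model(x[:, 0], x[:, 1]), rcond=None)

# Variance contribution of each term: coef^2 * E[P_i^2] * E[P_j^2],
# with E[P_k^2] = 1/(2k+1) for Legendre polynomials on [-1, 1].
norm = lambda k: 1.0 / (2 * k + 1)
terms = [(d, c**2 * norm(d[0]) * norm(d[1]))
         for c, d in zip(coef, degs) if d != (0, 0)]    # skip the mean
total = sum(v for _, v in terms)
for (i, j), v in terms:
    print(f"term P{i}(x1)P{j}(x2): sensitivity {v / total:.2f}")
```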
Analysis of penicillin G in milk by liquid chromatography.
Boison, J O; Keng, L J; MacNeil, J D
1994-01-01
A liquid chromatographic (LC) method that was previously developed for penicillin G residues in animal tissues has been adapted to milk and milk products. After protein precipitation with sodium tungstate, samples are applied to a C18 solid-phase extraction cartridge, from which penicillin is eluted, derivatized with 1,2,4-triazole-mercuric chloride solution, and analyzed by isocratic LC on a C18 column with UV detection at 325 nm. Quantitation is done with reference to penicillin V as an internal standard. Penicillin G recoveries were determined to be > 70% on standards fortified at 3-60 ppb. Accuracy approached 100% using the penicillin V internal standard. The detection limit for penicillin G residues was 3 ppb in fluid milk. Samples may be confirmed by thermospray/LC at concentrations approaching the detection limit of the UV method.
Behavioral and Neural Adaptation in Approach Behavior.
Wang, Shuo; Falvello, Virginia; Porter, Jenny; Said, Christopher P; Todorov, Alexander
2018-06-01
People often make approachability decisions based on perceived facial trustworthiness. However, it remains unclear how people learn trustworthiness from a population of faces and whether this learning influences their approachability decisions. Here we investigated the neural underpinning of approach behavior and tested two important hypotheses: whether the amygdala adapts to different trustworthiness ranges and whether the amygdala is modulated by task instructions and evaluative goals. We showed that participants adapted to the stimulus range of perceived trustworthiness when making approach decisions and that these decisions were further modulated by the social context. The right amygdala showed both linear response and quadratic response to trustworthiness level, as observed in prior studies. Notably, the amygdala's response to trustworthiness was not modulated by stimulus range or social context, a possible neural dynamic adaptation. Together, our data have revealed a robust behavioral adaptation to different trustworthiness ranges as well as a neural substrate underlying approach behavior based on perceived facial trustworthiness.
A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy
Wen, Hui; Xie, Weixin; Pei, Jihong
2016-01-01
This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a BP network in cascade, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network is used for nonlinear classification. The optimized learning strategy is as follows: firstly, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and the node parameters, and a form of heterogeneous-sample repulsive force is designed to further optimize each generated RBF hidden node's parameters; the optimized structure-adaptive RBF network is then used for adaptive nonlinear mapping of the sample space. Next, according to the number of adaptively generated RBF hidden nodes, the number of subsequent BP input nodes can be determined, and the overall SAHRBF-BP classifier is built up. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP. Especially on most low-dimensional and large-sample data sets, the classification performance of SAHRBF-BP outperforms other SLFN training algorithms. PMID:27792737
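One way to realize the potential-function idea for adaptive node selection is subtractive clustering: rank samples by a density-like potential, pick the peak as a center, suppress the potential nearby, and repeat until the remaining potential is small. This sketch is a stand-in under that assumption, not necessarily the paper's exact potential function; the radii and stopping ratio are invented.

```python
import numpy as np

def select_rbf_centers(X, ra=1.0, stop_ratio=0.15):
    # Potential of each sample: density of neighbors within radius ~ra.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    pot = np.exp(-4.0 * d2 / ra**2).sum(axis=1)
    centers, first = [], pot.max()
    while pot.max() > stop_ratio * first:       # node count adapts to data
        k = int(np.argmax(pot))
        centers.append(X[k])
        # Suppress potential near the chosen center to spread nodes out.
        pot -= pot[k] * np.exp(-4.0 * d2[k] / (1.5 * ra) ** 2)
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(60, 2))
               for c in ([0, 0], [3, 0], [0, 3])])   # three clusters
print("selected RBF centers:\n", select_rbf_centers(X).round(2))
```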
Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod
Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.
2008-01-01
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether the chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or when limited budgets dictate that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining the precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as the sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.
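The design-evaluation logic can be mimicked in a few lines of simulation. This is a toy stand-in for the SAMPLE program, with an invented clustered population and only two of the five designs, meant only to show the "repeat each design many times and compare estimator spread" pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.zeros(400)                    # 400 plots, aggregated species
patches = rng.choice(400, size=8, replace=False)
for p in patches:                       # clustered occupancy counts
    grid[p:p + 10] = rng.poisson(20, size=len(grid[p:p + 10]))
true_density = grid.mean()

def simple_random(n):
    return grid[rng.choice(400, size=n, replace=False)].mean()

def systematic(n):
    start = rng.integers(400 // n)      # random start, fixed step
    return grid[start::400 // n][:n].mean()

for design in (simple_random, systematic):
    est = [design(40) for _ in range(2000)]
    print(f"{design.__name__:14s} mean={np.mean(est):5.2f} "
          f"(true {true_density:.2f}), sd={np.std(est):.2f}")
```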
NASA Astrophysics Data System (ADS)
Benkler, Erik; Telle, Harald R.
2007-06-01
An improved phase-locked loop (PLL) for versatile synchronization of a sampling pulse train to an optical data stream is presented. It enables optical sampling of the true waveform of repetitive high bit-rate optical time division multiplexed (OTDM) data words such as pseudorandom bit sequences. Visualization of the true waveform can reveal details which cause systematic bit errors. Such errors cannot be inferred from eye diagrams and require word-synchronous sampling. The programmable direct-digital-synthesis circuit used in our novel PLL approach allows flexible adaptation to virtually any problem-specific synchronization scenario, including those required for waveform sampling, for jitter measurements by slope detection, and for classical eye diagrams. Phase comparison of the PLL is performed at the 10-GHz OTDM base clock rate, leading to a residual synchronization jitter of less than 70 fs.
Lateral Coherence and Mixing in the Coastal Ocean: Adaptive Sampling using Gliders
2012-09-30
Adaptive Sampling using Gliders R. Kipp Shearman Jonathan D. Nash James N. Moum John A. Barth College of Oceanic & Atmospheric Sciences Oregon State...persistent on O (3 day) timescales, so are ideally suited to be adaptively sampled by autonomous gliders that actively report both turbulent and...plan to deploy 4 AUV gliders to perform intensive, adaptive surveys. Newly-enhanced to measure turbulent mixing, water-column currents and dye
Lateral Coherence and Mixing in the Coastal Ocean: Adaptive Sampling using Gliders
2011-09-30
Coherence and Mixing in the Coastal Ocean: Adaptive Sampling using Gliders R. Kipp Shearman Jonathan D. Nash James N. Moum John A. Barth College of...These structures evolve yet are often persistent on O (3 day) timescales, so are ideally suited to be adaptively sampled by autonomous gliders that...processes driving lateral dispersion, we plan to deploy 4 AUV gliders to perform intensive, adaptive surveys. Newly-enhanced to measure turbulent mixing
An Incremental Weighted Least Squares Approach to Surface Lights Fields
NASA Astrophysics Data System (ADS)
Coombe, Greg; Lastra, Anselmo
An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
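The incremental aspect can be sketched compactly: each node accumulates weighted normal-equation sums, so new observations (images) are folded in one at a time with no batch re-processing. The sketch below uses scalar values instead of RGB radiance, a single node, and a linear basis instead of the paper's low-degree polynomials; the class and parameter names are invented for illustration.

```python
import numpy as np

class WLSNode:
    def __init__(self, center, bandwidth=0.5, dim=3):
        self.c, self.h = center, bandwidth
        self.AtA = np.zeros((dim, dim))   # accumulated normal equations
        self.Atb = np.zeros(dim)

    def add_observation(self, x, value):
        # Gaussian locality weight around the node center.
        w = np.exp(-np.sum((x - self.c) ** 2) / self.h**2)
        phi = np.array([1.0, *(x - self.c)])        # linear basis
        self.AtA += w * np.outer(phi, phi)
        self.Atb += w * phi * value

    def evaluate(self, x):
        coef = np.linalg.solve(self.AtA + 1e-9 * np.eye(3), self.Atb)
        return coef @ np.array([1.0, *(x - self.c)])

node = WLSNode(center=np.array([0.0, 0.0]))
rng = np.random.default_rng(0)
for _ in range(200):                      # observations arrive one by one
    x = rng.uniform(-1, 1, size=2)
    node.add_observation(x, 1.0 + 2.0 * x[0] - x[1])
print("reconstruction at (0.2, 0.1):", node.evaluate(np.array([0.2, 0.1])))
# expected ~1.3 for this linear test signal
```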
Reversible Cryopreservation of Living Cells Using an Electron Microscopy Cryo-Fixation Method.
Huebinger, Jan; Han, Hong-Mei; Grabenbauer, Markus
2016-01-01
Rapid cooling of aqueous solutions is a useful approach for two important biological applications: (I) cryopreservation of cells and tissues for long-term storage, and (II) cryofixation for ultrastructural investigations by electron and cryo-electron microscopy. Usually, both approaches are very different in methodology. Here we show that a novel, fast and easy-to-use cryofixation technique called self-pressurized rapid freezing (SPRF) is, after some adaptations, also a useful and versatile technique for cryopreservation. Sealed metal tubes with high thermal diffusivity containing the samples are plunged into liquid cryogen. Internal pressure builds up, reducing ice crystal formation and therefore supporting reversible cryopreservation through vitrification of cells. After rapid rewarming of pressurized samples, viability rates of > 90% can be reached, comparable to the best-performing of the established rapid cooling devices tested. In addition, the small SPRF tubes allow for space-saving sample storage, and the sealed containers prevent contamination from or into the cryogen during freezing, storage, or thawing.
Marti, Guillaume; Boccard, Julien; Mehl, Florence; Debrus, Benjamin; Marcourt, Laurence; Merle, Philippe; Delort, Estelle; Baroux, Lucie; Sommer, Horst; Rudaz, Serge; Wolfender, Jean-Luc
2014-05-01
The detailed characterization of cold-pressed lemon oils (CPLOs) is of great importance for the flavor and fragrance (F&F) industry. Since a control of authenticity by standard analytical techniques can be bypassed using elaborate adulterated oils to feign a higher quality, a combination of advanced orthogonal methods has been developed. The present study describes a combined metabolomic approach based on UHPLC-TOF-MS profiling and ¹H NMR fingerprinting to highlight metabolite differences in a set of representative samples used in the F&F industry. A new protocol was set up and adapted to the use of CPLO residues. Multivariate analysis based on both fingerprinting methods showed significant chemical variations between Argentinian and Italian samples. Discriminating markers identified in mixtures belong to the furocoumarins, flavonoids, terpenoids and fatty acids. Quantitative NMR revealed low citropten and high bergamottin content in Italian samples. The developed metabolomic approach applied to CPLO residues gives some new perspectives for authenticity assessment. Copyright © 2013 Elsevier Ltd. All rights reserved.
Passive and active adaptive management: Approaches and an example
Williams, B.K.
2011-01-01
Adaptive management is a framework for resource conservation that promotes iterative, learning-based decision making. Yet there remains considerable confusion about what adaptive management entails, and how to actually make resource decisions adaptively. A key but somewhat ambiguous distinction in adaptive management is between active and passive forms of adaptive decision making. The objective of this paper is to illustrate some approaches to active and passive adaptive management with a simple example involving the drawdown of water impoundments on a wildlife refuge. The approaches are illustrated for the drawdown example, and contrasted in terms of objectives, costs, and potential learning rates. Some key challenges to the actual practice of adaptive management are discussed, and tradeoffs between implementation costs and long-term benefits are highlighted. © 2010 Elsevier Ltd.
Chorlton, Kathryn; Hess, Stephane; Jamson, Samantha; Wardman, Mark
2012-09-01
Given the burden of injury, economic, environmental and social consequences associated with speeding, reducing road traffic speed remains a major priority. Intelligent speed adaptation (ISA) is a promising but controversial new in-vehicle system that provides drivers with support on the speed-control task. In order to model potential system uptake, this paper explores drivers' preferences for two different types of ISA given a number of alternative fiscal incentives and non-fiscal measures, using a stated preference approach. As would be expected with such a contentious issue, the analysis revealed the presence of significant variations in sensitivities and preferences in the sample. While a non-negligible part of the sample population has such strong opposition to ISA that no reasonable discounts or incentives would lead to them buying or accepting such a system, there is also a large part of the population that, if given the right incentives, would be willing or even keen to equip their vehicle with an ISA device. Copyright © 2011. Published by Elsevier Ltd.
Yagmur, Sengul; Mesman, Judi; Malda, Maike; Bakermans-Kranenburg, Marian J; Ekmekci, Hatice
2014-01-01
Using a randomized control trial design we tested the effectiveness of a culturally sensitive adaptation of the Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) in a sample of 76 Turkish minority families in the Netherlands. The VIPP-SD was adapted based on a pilot with feedback of the target mothers, resulting in the VIPP-TM (VIPP-Turkish Minorities). The sample included families with 20-47-month-old children with high levels of externalizing problems. Maternal sensitivity, nonintrusiveness, and discipline strategies were observed during pretest and posttest home visits. The VIPP-TM was effective in increasing maternal sensitivity and nonintrusiveness, but not in enhancing discipline strategies. Applying newly learned sensitivity skills in discipline situations may take more time, especially in a cultural context that favors more authoritarian strategies. We conclude that the VIPP-SD program and its video-feedback approach can be successfully applied in immigrant families with a non-Western cultural background, with demonstrated effects on parenting sensitivity and nonintrusiveness.
NASA Astrophysics Data System (ADS)
Evans, William J.; Yoo, Choong-Shik; Lee, Geun Woo; Cynn, Hyunchae; Lipp, Magnus J.; Visbeck, Ken
2007-07-01
We have developed a unique device, a dynamic diamond anvil cell (dDAC), which repetitively applies a time-dependent load/pressure profile to a sample. This capability allows studies of the kinetics of phase transitions and metastable phases at compression (strain) rates of up to 500 GPa/s (~0.16 s⁻¹ for a metal). Our approach adapts electromechanical piezoelectric actuators to a conventional diamond anvil cell design, which enables precise specification and control of a time-dependent applied load/pressure. Existing DAC instrumentation and experimental techniques are easily adapted to the dDAC to measure the properties of a sample under the varying load/pressure conditions. This capability addresses the sparsely studied regime of dynamic phenomena between static research (diamond anvil cells and large volume presses) and dynamic shock-driven experiments (gas guns, explosives, and laser shock). We present an overview of a variety of experimental measurements that can be made with this device.
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
NASA Astrophysics Data System (ADS)
Kesiman, Made Windu Antara; Valy, Dona; Burie, Jean-Christophe; Paulus, Erick; Sunarya, I. Made Gede; Hadi, Setiawan; Sok, Kim Heng; Ogier, Jean-Marc
2017-01-01
Due to their specific characteristics, palm leaf manuscripts provide new challenges for text line segmentation tasks in document analysis. We investigated the performance of six text line segmentation methods by conducting comparative experimental studies for the collection of palm leaf manuscript images. The image corpus used in this study comes from the sample images of palm leaf manuscripts of three different Southeast Asian scripts: Balinese script from Bali and Sundanese script from West Java, both from Indonesia, and Khmer script from Cambodia. For the experiments, four text line segmentation methods that work on binary images are tested: the adaptive partial projection line segmentation approach, the A* path planning approach, the shredding method, and our proposed energy function for shredding method. Two other methods that can be directly applied on grayscale images are also investigated: the adaptive local connectivity map method and the seam carving-based method. The evaluation criteria and tool provided by ICDAR2013 Handwriting Segmentation Contest were used in this experiment.
Is it really theoretical? A review of sampling in grounded theory studies in nursing journals.
McCrae, Niall; Purssell, Edward
2016-10-01
Grounded theory is a distinct method of qualitative research, where core features are theoretical sampling and constant comparative analysis. However, inconsistent application of these activities has been observed in published studies. This review assessed the use of theoretical sampling in grounded theory studies in nursing journals. An adapted systematic review was conducted. Three leading nursing journals (2010-2014) were searched for studies stating grounded theory as the method. Sampling was assessed using a concise rating tool. A high proportion (86%) of the 134 articles described an iterative process of data collection and analysis. However, half of the studies did not demonstrate theoretical sampling, with many studies declaring or indicating a purposive sampling approach throughout. Specific reporting guidelines for grounded theory studies should be developed to ensure that study reports describe an iterative process of fieldwork and theoretical development. © 2016 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Benardini, James N.; Koukol, Robert C.; Schubert, Wayne W.; Morales, Fabian; Klatte, Marlin F.
2012-01-01
A report describes an adaptation of a filter assembly, previously used for particulates greater than 2 micrometers, to enable it to filter out microorganisms from a propulsion system. Projects that utilize large volumes of nonmetallic materials of planetary protection concern pose a challenge to their bioburden budget, as a conservative specification value of 30 spores per cubic centimeter is typically used. Helium was collected utilizing an adapted filtration approach employing an existing Millipore filter assembly apparatus used by the propulsion team for particulate analysis. The filter holder on the assembly has a 47-mm diameter, and typically a 1.2-5 micrometer pore-size filter is used for particulate analysis, making it compatible with commercially available sterilization filters (0.22 micrometers) that are necessary for biological sampling. This adaptation of an existing technology provides a proof-of-concept and a demonstration of successful use in a ground equipment system, showing that the Millipore filter assembly can be used to filter out microorganisms from a propulsion system rather than only particulates greater than 2 micrometers.
Approach for Using Learner Satisfaction to Evaluate the Learning Adaptation Policy
ERIC Educational Resources Information Center
Jeghal, Adil; Oughdir, Lahcen; Tairi, Hamid; Radouane, Abdelhay
2016-01-01
The learning adaptation is a very important phase in a learning situation in human learning environments. This paper presents the authors' approach used to evaluate the effectiveness of learning adaptive systems. This approach is based on the analysis of learner satisfaction notices collected by a questionnaire on a learning situation; to analyze…
Tong, Shao Cheng; Li, Yong Ming; Zhang, Hua-Guang
2011-07-01
In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with immeasurable states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the immeasurable states. By combining the adaptive backstepping technique with decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of "explosion of complexity" inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches.
McAuliffe, Alan; McGann, Marek
2016-01-01
Speelman and McGann’s (2013) examination of the uncritical way in which the mean is often used in psychological research raises questions both about the average’s reliability and its validity. In the present paper, we argue that interrogating the validity of the mean involves, amongst other things, a better understanding of the person’s experiences, the meaning of their actions, at the time that the behavior of interest is carried out. Recently emerging approaches within Psychology and Cognitive Science have argued strongly that experience should play a more central role in our examination of behavioral data, but the relationship between experience and behavior remains very poorly understood. We outline some of the history of the science on this fraught relationship, as well as arguing that contemporary methods for studying experience fall into one of two categories. “Wide” approaches tend to incorporate naturalistic behavior settings, but sacrifice accuracy and reliability in behavioral measurement. “Narrow” approaches maintain controlled measurement of behavior, but involve too specific a sampling of experience, which obscures crucial temporal characteristics. We therefore argue for a novel, mid-range sampling technique, that extends Hurlburt’s descriptive experience sampling, and adapts it for the controlled setting of the laboratory. This controlled descriptive experience sampling may be an appropriate tool to help calibrate both the mean and the meaning of an experimental situation with one another. PMID:27242588
Climate change adaptation and Integrated Water Resource Management in the water sector
NASA Astrophysics Data System (ADS)
Ludwig, Fulco; van Slobbe, Erik; Cofino, Wim
2014-10-01
Integrated Water Resources Management (IWRM) was introduced in the 1980s to better optimise water use among different water-demanding sectors. However, since it was introduced, water systems have become more complicated due to changes in the global water cycle as a result of climate change. The realization that climate change will have a significant impact on water availability and flood risks has driven research and policy making on adaptation. This paper discusses the main similarities and differences between climate change adaptation and IWRM. The main difference between the two is the focus of IWRM on current and historic issues, compared to the (long-term) future focus of adaptation. One of the main problems in implementing climate change adaptation is the large uncertainty in future projections. Two completely different approaches to adaptation have been developed in response to these large uncertainties. The first is a top-down approach based on large-scale biophysical impact analyses, which focuses on quantifying and minimizing uncertainty by using a large range of scenarios and different climate and impact models. The main problem with this approach is the propagation of uncertainties within the modelling chain. The opposite is the bottom-up approach, which basically ignores uncertainty. It focuses on reducing vulnerabilities, often at the local scale, by developing resilient water systems. Both of these approaches, however, are unsuitable for integration into water management. The bottom-up approach focuses too much on socio-economic vulnerability and too little on developing (technical) solutions. The top-down approach often results in an “explosion” of uncertainty and therefore complicates decision making. A more promising direction for adaptation would be a risk-based approach. Future research should further develop and test an approach which starts with developing adaptation strategies based on current and future risks. These strategies should then be evaluated using a range of future scenarios in order to develop robust adaptation measures and strategies.
Adaptive Local Spatiotemporal Features from RGB-D Data for One-Shot Learning Gesture Recognition
Lin, Jia; Ruan, Xiaogang; Yu, Naigong; Yang, Yee-Hong
2016-01-01
Noise and constant empirical motion constraints affect the extraction of distinctive spatiotemporal features from one or a few samples per gesture class. To tackle these problems, an adaptive local spatiotemporal feature (ALSTF) using fused RGB-D data is proposed. First, motion regions of interest (MRoIs) are adaptively extracted using grayscale and depth velocity variance information to greatly reduce the impact of noise. Then, corners are used as keypoints if their depth, and velocities of grayscale and of depth meet several adaptive local constraints in each MRoI. With further filtering of noise, an accurate and sufficient number of keypoints is obtained within the desired moving body parts (MBPs). Finally, four kinds of multiple descriptors are calculated and combined in extended gradient and motion spaces to represent the appearance and motion features of gestures. The experimental results on the ChaLearn gesture, CAD-60 and MSRDailyActivity3D datasets demonstrate that the proposed feature achieves higher performance compared with published state-of-the-art approaches under the one-shot learning setting and comparable accuracy under the leave-one-out cross validation. PMID:27999337
Brain Slice Staining and Preparation for Three-Dimensional Super-Resolution Microscopy
German, Christopher L.; Gudheti, Manasa V.; Fleckenstein, Annette E.; Jorgensen, Erik M.
2018-01-01
Localization microscopy techniques – such as photoactivation localization microscopy (PALM), fluorescent PALM (FPALM), ground state depletion (GSD), and stochastic optical reconstruction microscopy (STORM) – provide the highest precision for single molecule localization currently available. However, localization microscopy has been largely limited to cell cultures due to the difficulties that arise in imaging thicker tissue sections. Sample fixation and antibody staining, background fluorescence, fluorophore photoinstability, light scattering in thick sections, and sample movement create significant challenges for imaging intact tissue. We have developed a sample preparation and image acquisition protocol to address these challenges in rat brain slices. The sample preparation combined multiple fixation steps, saponin permeabilization, and tissue clarification. Together, these preserve intracellular structures, promote antibody penetration, reduce background fluorescence and light scattering, and allow acquisition of images deep in a 30 μm thick slice. Image acquisition challenges were resolved by overlaying samples with a permeable agarose pad and custom-built stainless steel imaging adapter, and sealing the imaging chamber. This approach kept slices flat, immobile, bathed in imaging buffer, and prevented buffer oxidation during imaging. Using this protocol, we consistently obtained single molecule localizations of synaptic vesicle and active zone proteins in three-dimensions within individual synaptic terminals of the striatum in rat brain slices. These techniques may be easily adapted to the preparation and imaging of other tissues, substantially broadening the application of super-resolution imaging. PMID:28924666
A practical guide to environmental association analysis in landscape genomics.
Rellstab, Christian; Gugerli, Felix; Eckert, Andrew J; Hancock, Angela M; Holderegger, Rolf
2015-09-01
Landscape genomics is an emerging research field that aims to identify the environmental factors that shape adaptive genetic variation and the gene variants that drive local adaptation. Its development has been facilitated by next-generation sequencing, which allows for screening thousands to millions of single nucleotide polymorphisms in many individuals and populations at reasonable costs. In parallel, data sets describing environmental factors have greatly improved and increasingly become publicly accessible. Accordingly, numerous analytical methods for environmental association studies have been developed. Environmental association analysis identifies genetic variants associated with particular environmental factors and has the potential to uncover adaptive patterns that are not discovered by traditional tests for the detection of outlier loci based on population genetic differentiation. We review methods for conducting environmental association analysis including categorical tests, logistic regressions, matrix correlations, general linear models and mixed effects models. We discuss the advantages and disadvantages of different approaches, provide a list of dedicated software packages and their specific properties, and stress the importance of incorporating neutral genetic structure in the analysis. We also touch on additional important aspects such as sampling design, environmental data preparation, pooled and reduced-representation sequencing, candidate-gene approaches, linearity of allele-environment associations and the combination of environmental association analyses with traditional outlier detection tests. We conclude by summarizing expected future directions in the field, such as the extension of statistical approaches, environmental association analysis for ecological gene annotation, and the need for replication and post hoc validation studies. © 2015 John Wiley & Sons Ltd.
Providing Adaptivity in Moodle LMS Courses
ERIC Educational Resources Information Center
Despotovic-Zrakic, Marijana; Markovic, Aleksandar; Bogdanovic, Zorica; Barac, Dusan; Krco, Srdjan
2012-01-01
In this paper, we describe an approach to providing adaptivity in e-education courses. The primary goal of the paper is to enhance an existing e-education system, namely Moodle LMS, by developing a method for creating adaptive courses, and to compare its effectiveness with a non-adaptive education approach. First, we defined the basic requirements…
Systems and methods for self-synchronized digital sampling
NASA Technical Reports Server (NTRS)
Samson, Jr., John R. (Inventor)
2008-01-01
Systems and methods for self-synchronized data sampling are provided. In one embodiment, a system for capturing synchronous data samples is provided. The system includes an analog to digital converter adapted to capture signals from one or more sensors and convert the signals into a stream of digital data samples at a sampling frequency determined by a sampling control signal; and a synchronizer coupled to the analog to digital converter and adapted to receive a rotational frequency signal from a rotating machine, wherein the synchronizer is further adapted to generate the sampling control signal, and wherein the sampling control signal is based on the rotational frequency signal.
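The core of this record is a sampling clock slaved to a rotational frequency signal. As a rough illustration (not the patented system; the tachometer callback and samples-per-revolution figure are our own assumptions), the control logic can be sketched in a few lines of Python:

```python
import numpy as np

def synchronized_sample_times(rot_freq_hz, duration_s, samples_per_rev=64):
    """Generate sample instants locked to a (possibly varying) rotation rate.

    rot_freq_hz: callable t -> rotational frequency in Hz (e.g. a tachometer
    reading); samples_per_rev: fixed number of samples per revolution.
    """
    times, t = [], 0.0
    while t < duration_s:
        times.append(t)
        # The sampling control signal tracks the rotational frequency signal,
        # so the sample stream stays synchronous with shaft angle.
        fs = samples_per_rev * rot_freq_hz(t)
        t += 1.0 / fs
    return np.array(times)

# Example: a shaft spinning up from 10 Hz to 20 Hz over one second.
ramp = lambda t: 10.0 + 10.0 * t
print(len(synchronized_sample_times(ramp, 1.0)), "samples in one second")
```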
2010-12-21
Missile Defense: European Phased Adaptive Approach Acquisitions Face Synchronization, Transparency, and Accountability Challenges (report to the House of Representatives). However, we found that DOD has not fully implemented a management process that synchronizes EPAA acquisition activities and ensures transparency and accountability.
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures of interest are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV than for SRS, and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend simulations tailored to the application of interest as a highly useful tool for evaluating designs in preparation for sampling rare and clustered populations.
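To make the TSSAV idea concrete, here is a small, self-contained simulation sketch in the spirit of the abstract. Everything in it (grid size, auxiliary condition, thresholds) is an illustrative assumption, and the naive detection summary at the end ignores the design-based weighting a real TSSAV estimator would use:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical landscape: a rare species clustered where a common "auxiliary"
# species is abundant. 400 secondary units arranged in 20 rows (primary units).
N = 400
aux = rng.poisson(2.0, N)                        # common-species counts
rare = (aux > 4) & (rng.random(N) < 0.5)         # rare species tied to auxiliary

def tssav_sample(n1=8, k=5, bonus=5):
    """TSSAV sketch: survey k units in each of n1 primary units (rows), then
    adaptively add up to `bonus` more units wherever the auxiliary condition
    is met. Final sample size is bounded by n1 * (k + bonus)."""
    seen = []
    for p in rng.choice(20, size=n1, replace=False):
        units = rng.choice(20, size=k, replace=False)
        seen.extend(p * 20 + units)
        if aux[p * 20 + units].max() > 4:        # auxiliary condition triggers
            extra = rng.choice(np.setdiff1d(np.arange(20), units), size=bonus,
                               replace=False)
            seen.extend(p * 20 + extra)
    return np.array(seen)

# Naive detection comparison against SRS of a comparable average sample size.
hit_tssav = np.mean([rare[tssav_sample()].any() for _ in range(200)])
hit_srs = np.mean([rare[rng.choice(N, 60, replace=False)].any()
                   for _ in range(200)])
print(f"P(detect rare species): TSSAV {hit_tssav:.2f} vs SRS {hit_srs:.2f}")
```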
Brown, Patrick O.
2013-01-01
Background High throughput molecular-interaction studies using immunoprecipitations (IP) or affinity purifications are powerful and widely used in biology research. One of many important applications of this method is to identify the set of RNAs that interact with a particular RNA-binding protein (RBP). Here, the unique statistical challenge presented is to delineate a specific set of RNAs that are enriched in one sample relative to another, typically a specific IP compared to a non-specific control to model background. The choice of normalization procedure critically impacts the number of RNAs that will be identified as interacting with an RBP at a given significance threshold – yet existing normalization methods make assumptions that are often fundamentally inaccurate when applied to IP enrichment data. Methods In this paper, we present a new normalization methodology that is specifically designed for identifying enriched RNA or DNA sequences in an IP. The normalization (called adaptive or AD normalization) uses a basic model of the IP experiment and is not a variant of mean, quantile, or other methodology previously proposed. The approach is evaluated statistically and tested with simulated and empirical data. Results and Conclusions The adaptive (AD) normalization method results in a greatly increased range in the number of enriched RNAs identified, fewer false positives, and overall better concordance with independent biological evidence, for the RBPs we analyzed, compared to median normalization. The approach is also applicable to the study of pairwise RNA, DNA and protein interactions such as the analysis of transcription factors via chromatin immunoprecipitation (ChIP) or any other experiments where samples from two conditions, one of which contains an enriched subset of the other, are studied. PMID:23349766
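The abstract does not spell out the algorithm, so the following is only a hedged sketch of the general idea behind such adaptive normalization: estimate the IP-to-control scaling from RNAs that currently look unenriched, then re-flag enriched RNAs and iterate. All names and thresholds are ours, not the published AD procedure:

```python
import numpy as np

def ad_normalize(ip, ctrl, n_iter=10, z=2.0):
    """Iterative background-based normalization (illustrative sketch only).
    Fit the IP/control scaling on RNAs currently flagged as background, then
    re-flag as enriched everything sitting far above that background trend."""
    log_ratio = np.log2(np.asarray(ip, float) + 1) - np.log2(np.asarray(ctrl, float) + 1)
    background = np.ones(log_ratio.size, bool)       # start: all background
    for _ in range(n_iter):
        shift = np.median(log_ratio[background])     # scaling from background
        resid = log_ratio - shift
        background = resid < z * resid[background].std()
    return log_ratio - shift, ~background            # normalized ratios, enriched

rng = np.random.default_rng(0)
ctrl = rng.poisson(50, 2000)
ip = rng.poisson(25, 2000)                 # IP sequenced at half the depth
ip[:100] += rng.poisson(200, 100)          # 100 truly enriched RNAs
_, enriched = ad_normalize(ip, ctrl)
print(enriched[:100].mean(), enriched[100:].mean())   # high vs. near zero
```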
Marfeo, Elizabeth E; Ni, Pengsheng; Haley, Stephen M; Bogusz, Kara; Meterko, Mark; McDonough, Christine M; Chan, Leighton; Rasch, Elizabeth K; Brandt, Diane E; Jette, Alan M
2013-09-01
Objective: To use item response theory (IRT) data simulations to construct and perform initial psychometric testing of a newly developed instrument, the Social Security Administration Behavioral Health Function (SSA-BH) instrument, which aims to assess behavioral health functioning relevant to the context of work. Design: Cross-sectional survey followed by IRT calibration data simulations. Setting: Community. Participants: Sample of individuals applying for Social Security Administration disability benefits: claimants (n=1015) and a normative comparative sample of U.S. adults (n=1000). Interventions: None. Main Outcome Measure: SSA-BH measurement instrument. Results: IRT analyses supported the unidimensionality of 4 SSA-BH scales: mood and emotions (35 items), self-efficacy (23 items), social interactions (6 items), and behavioral control (15 items). All SSA-BH scales demonstrated strong psychometric properties including reliability, accuracy, and breadth of coverage. High correlations of the simulated 5- or 10-item computer adaptive tests with the full item bank indicated robust ability of the computer adaptive testing approach to comprehensively characterize behavioral health function along 4 distinct dimensions. Conclusions: Initial testing and evaluation of the SSA-BH instrument demonstrated good accuracy, reliability, and content coverage along all 4 scales. Behavioral function profiles of Social Security Administration claimants were generated and compared with age- and sex-matched norms along 4 scales: mood and emotions, behavioral control, social interactions, and self-efficacy. Using the computer adaptive test-based approach offers the ability to collect standardized, comprehensive functional information about claimants in an efficient way, which may prove useful in the context of the Social Security Administration's work disability programs. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Katigbak, Carina; Flaherty, Erin; Chao, Ying-Yu; Nguyen, Tam; Cheung, Daphne; Yiu-Cho Kwan, Rick
Physical activity (PA) is a significant modifiable risk factor for cardiovascular disease. For older adults, engaging in PA is shown to improve cardiac status, reduce cognitive and functional decline, and improve overall quality of life. However, only 17% of Asian American adults meet the 2008 federal recommended guidelines for aerobic and muscle strengthening activity, and there is a paucity of data reporting on older Asian Americans, a rapidly growing, underserved group. While data pertaining to Asian Americans are frequently reported at the aggregate level, this masks differences (eg, language, culture, income) among Asian ethnic subgroups that may impact health behaviors. The purpose of this review was to identify intervention and cultural adaptation strategies in studies promoting PA for older Asian Americans. A comprehensive literature search was performed to identify interventions published between 1996 and 2016 focused on improving PA among older Asian Americans (> 60 years old). Data were abstracted to examine intervention study designs, cultural adaptation strategies, theoretical frameworks, and physical activity measures. Nine studies met the review's inclusion criteria. Community-based recruitment approaches were widely used, and all studies employed cultural adaptation to varying degrees. Most studies reported improvements in PA outcomes, focused on Chinese Americans, and relied on self-reports of PA, while few aimed to increase PA using a multi-component approach. Future studies would benefit from larger sample sizes, a wider representation of Asian ethnic subgroups, and concentrated efforts to implement deep level adaptations that may increase the salience and sustainability of these interventions.
HIFI-C: a robust and fast method for determining NMR couplings from adaptive 3D to 2D projections.
Cornilescu, Gabriel; Bahrami, Arash; Tonelli, Marco; Markley, John L; Eghbalnia, Hamid R
2007-08-01
We describe a novel method for the robust, rapid, and reliable determination of J couplings in multi-dimensional NMR coupling data, including small couplings from larger proteins. The method, "High-resolution Iterative Frequency Identification of Couplings" (HIFI-C), is an extension of the adaptive and intelligent data collection approach introduced earlier in HIFI-NMR. HIFI-C collects one or more optimally tilted two-dimensional (2D) planes of a 3D experiment, identifies peaks, and determines couplings with high resolution and precision. The HIFI-C approach, demonstrated here for the 3D quantitative J method, offers vital features that advance the goal of rapid and robust collection of NMR coupling data. (1) Tilted plane residual dipolar coupling (RDC) data are collected adaptively in order to offer an intelligent trade-off between data collection time and accuracy. (2) Data from independent planes can provide a statistical measure of reliability for each measured coupling. (3) Fast data collection enables measurements in cases where sample stability is a limiting factor (for example, in the presence of an orienting medium required for residual dipolar coupling measurements). (4) For samples that are stable, or in experiments involving relatively stronger couplings, robust data collection enables more reliable determination of couplings in a shorter time, particularly for larger biomolecules. As a proof of principle, we have applied the HIFI-C approach to the 3D quantitative J experiment to determine N-C' RDC values for three proteins ranging from 56 to 159 residues (including a homodimer with 111 residues in each subunit). A number of factors influence the robustness and speed of data collection, including the size of the protein, the experimental setup, and the coupling being measured. To exhibit a lower bound on robustness and the potential for time saving, the measurement of dipolar couplings for the N-C' vector represents a realistic "worst case analysis": these couplings are among the smallest currently measured, and their determination in both isotropic and anisotropic media demands the highest measurement precision. The new approach yielded excellent quantitative agreement with values determined independently by the conventional 3D quantitative J NMR method (in cases where sample stability in oriented media permitted these measurements) but with a factor of 2-5 in time savings. The statistical measure of reliability, measuring the quality of each RDC value, offers valuable adjunct information even in cases where modest time savings may be realized.
On adaptive robustness approach to Anti-Jam signal processing
NASA Astrophysics Data System (ADS)
Poberezhskiy, Y. S.; Poberezhskiy, G. Y.
An effective approach to exploiting statistical differences between desired and jamming signals, named adaptive robustness, is proposed and analyzed in this paper. It combines the conventional Bayesian, adaptive, and robust approaches, which are complementary to each other. This combination strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for the realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.
NASA Astrophysics Data System (ADS)
Eric, L.; Vrugt, J. A.
2010-12-01
Spatially distributed hydrologic models potentially contain hundreds of parameters that need to be derived by calibration against a historical record of input-output data. The quality of this calibration strongly determines the predictive capability of the model and thus its usefulness for science-based decision making and forecasting. Unfortunately, high-dimensional optimization problems are typically difficult to solve. Here we present our recent developments to the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al., 2009) to warrant efficient solution of high-dimensional parameter estimation problems. The algorithm samples from an archive of past states (Ter Braak and Vrugt, 2008) and uses multiple-try Metropolis sampling (Liu et al., 2000) to decrease the required burn-in time for each individual chain and increase the efficiency of posterior sampling. This approach is hereafter referred to as MT-DREAM. We present results for 2 synthetic mathematical case studies and 2 real-world examples involving 10 to 240 parameters. Results for those cases show that our multiple-try sampler, MT-DREAM, can consistently find better solutions than other Bayesian MCMC methods. Moreover, MT-DREAM is admirably suited to be implemented and run on a parallel machine and is therefore a powerful method for posterior inference.
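The multiple-try ingredient of MT-DREAM can be illustrated on a toy one-dimensional target. This is a plain multiple-try Metropolis step with a symmetric Gaussian proposal, not the full MT-DREAM sampler (which instead generates proposals from an archive of past states via differential evolution):

```python
import numpy as np

rng = np.random.default_rng(0)
log_pi = lambda x: -0.5 * x**2          # toy target: standard normal (unnormalized)

def mtm_step(x, k=5, step=2.0):
    """One multiple-try Metropolis move with a symmetric Gaussian proposal."""
    ys = x + step * rng.standard_normal(k)         # k trial points
    wy = np.exp(log_pi(ys))
    y = rng.choice(ys, p=wy / wy.sum())            # select one trial by weight
    xs = y + step * rng.standard_normal(k - 1)     # reference points around y
    wx = np.exp(log_pi(xs)).sum() + np.exp(log_pi(x))
    return y if rng.random() < min(1.0, wy.sum() / wx) else x

x, chain = 3.0, []
for _ in range(5000):
    x = mtm_step(x)
    chain.append(x)
print(round(np.mean(chain), 2), round(np.var(chain), 2))   # ~0.0 and ~1.0
```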
Shryock, Daniel F.; Havrilla, Caroline A.; DeFalco, Lesley; Esque, Todd C.; Custer, Nathan; Wood, Troy E.
2015-01-01
Local adaptation influences plant species' responses to climate change and their performance in ecological restoration. Fine-scale physiological or phenological adaptations that direct demographic processes may drive intraspecific variability when baseline environmental conditions change. Landscape genomics characterizes adaptive differentiation by identifying environmental drivers of adaptive genetic variability and mapping the associated landscape patterns. We applied such an approach to Sphaeralcea ambigua, an important restoration plant in the arid southwestern United States, by analyzing variation at 153 amplified fragment length polymorphism loci in the context of environmental gradients separating 47 Mojave Desert populations. We identified 37 potentially adaptive loci through a combination of genome scan approaches. We then used a generalized dissimilarity model (GDM) to relate variability in potentially adaptive loci to spatial gradients in temperature, precipitation, and topography. We identified non-linear thresholds in loci frequencies driven by summer maximum temperature and water stress, along with continuous variation corresponding to temperature seasonality. Two GDM-based approaches for mapping predicted patterns of local adaptation are compared. Additionally, we assess uncertainty in spatial interpolations through a novel spatial bootstrapping approach. Our study presents robust, accessible methods for deriving spatially explicit models of adaptive genetic variability in non-model species that will inform climate change modelling and ecological restoration.
Kaiser, Marie; Kuwert, Philipp; Glaesmer, Heide
2015-01-01
To date, the experiences of German occupation children (GOC) have been described solely in historical studies; empirical research on the psychosocial consequences of growing up as a German occupation child was lacking. This paper provides an introduction to the background, methodological approaches and descriptive information on the sample for the first German-based empirical study on this topic. It also touches on methodological challenges and how they were addressed. Children born of war represent a target group that is difficult to reach (a hidden population). An investigation therefore requires consultation with both people from the target group and scientific experts (a participatory approach), as well as specific methodological approaches. The questionnaire utilized contains adaptations of established and psychometrically validated instruments as well as adapted self-developed items. N = 146 occupation children (mean age 63.4 years, 63.0% women) were surveyed, recruited via press releases and contact with platforms for children born of war. Despite methodological challenges, an instrument to assess the target group was developed through participatory methods. The instrument shows high relevance for the target group and is well accepted. The high rates of American and French participants show the influence of networking via such platforms on successful recruitment.
Lee, Michael S; Olson, Mark A
2011-06-28
Temperature-based replica exchange (T-ReX) enhances sampling of molecular dynamics simulations by autonomously heating and cooling simulation clients via a Metropolis exchange criterion. A pathological case for T-ReX can occur when a change in state (e.g., folding to unfolding of a protein) has a large energetic difference over a short temperature interval leading to insufficient exchanges amongst replica clients near the transition temperature. One solution is to allow the temperature set to dynamically adapt in the temperature space, thereby enriching the population of clients near the transition temperature. In this work, we evaluated two approaches for adapting the temperature set: a method that equalizes exchange rates over all neighbor temperature pairs and a method that attempts to induce clients to visit all temperatures (dubbed "current maximization") by positioning many clients at or near the transition temperature. As a test case, we simulated the 57-residue SH3 domain of alpha-spectrin. Exchange rate equalization yielded the same unfolding-folding transition temperature as fixed-temperature ReX with much smoother convergence of this value. Surprisingly, the current maximization method yielded a significantly lower transition temperature, in close agreement with experimental observation, likely due to more extensive sampling of the transition state.
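The exchange-rate equalization idea lends itself to a compact sketch. The re-spacing rule below (treating one minus the acceptance rate as a "resistance" to be spread evenly across the ladder) is our own simplification, not the paper's exact update:

```python
import numpy as np

def equalize_temperatures(temps, acc):
    """One adaptation sweep toward equal neighbor exchange rates (sketch).
    temps: sorted replica temperatures; acc: observed acceptance rate for each
    neighboring pair. Pairs exchanging too rarely are pulled closer together,
    pairs exchanging too often are pushed apart, preserving the endpoints."""
    temps, acc = np.asarray(temps, float), np.asarray(acc, float)
    # Treat (1 - acc) as a "resistance" per interval, then re-space the ladder
    # so every interval carries the same resistance.
    resistance = np.maximum(1.0 - acc, 1e-3)
    cum = np.concatenate([[0.0], np.cumsum(resistance)])
    targets = np.linspace(0.0, cum[-1], len(temps))
    return np.interp(targets, cum, temps)

ladder = [300, 320, 340, 360, 380, 400]
rates = [0.6, 0.5, 0.05, 0.5, 0.6]       # bottleneck near a folding transition
print(equalize_temperatures(ladder, rates))   # temperatures crowd the bottleneck
```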
Adaptive Resampling Particle Filters for GPS Carrier-Phase Navigation and Collision Avoidance System
NASA Astrophysics Data System (ADS)
Hwang, Soon Sik
This dissertation addresses three problems: 1) an adaptive resampling technique (ART) for Particle Filters, 2) precise relative positioning using Global Positioning System (GPS) Carrier-Phase (CP) measurements, applied to the nonlinear integer resolution problem for GPS CP navigation using Particle Filters, and 3) a collision detection system based on GPS CP broadcasts. First, Monte Carlo filters, called Particle Filters (PF), are widely used where the system is non-linear and non-Gaussian. In real-time applications, their estimation accuracies and efficiencies are significantly affected by the number of particles and the scheduling of relocating weights and samples, the so-called resampling step. In this dissertation, the appropriate number of particles is estimated adaptively such that the errors of the sample mean and variance stay within bounds. These bounds are given by the confidence interval of a normal probability distribution for a multi-variate state. Two required sample numbers, maintaining the mean and variance errors within the bounds, are derived. The time of resampling is determined when the required sample number for the variance error crosses the required sample number for the mean error. Second, the PF using GPS CP measurements with adaptive resampling is applied to precise relative navigation between two GPS antennas. In order to make use of CP measurements for navigation, the unknown number of cycles between GPS antennas, the so-called integer ambiguity, should be resolved. The PF is applied to this integer ambiguity resolution problem, where the relative navigation state estimation involves nonlinear observations and nonlinear dynamics equations. Using the PF, the probability density function of the states is estimated by sampling from the position and velocity space, and the integer ambiguities are resolved without using the usual hypothesis tests to search for the integer ambiguity. The ART manages the number of position samples and the frequency of the resampling step for real-time kinematic GPS navigation. The experimental results demonstrate the performance of the ART and the insensitivity of the proposed approach to GPS CP cycle-slips. Third, the GPS has great potential for the development of new collision avoidance systems and is being considered for the next generation Traffic alert and Collision Avoidance System (TCAS). The current TCAS equipment is capable of broadcasting GPS code information to nearby airplanes, and collision avoidance systems using GPS code-based navigation information have been studied by researchers. In this dissertation, an aircraft collision detection system using GPS CP information is addressed. The PF with position samples is employed for the CP-based relative position estimation problem, and the same algorithm can be used to determine the vehicle attitude if multiple GPS antennas are used. For a reliable and enhanced collision avoidance system, three-dimensional trajectories are projected using the estimates of the relative position, velocity, and attitude. It is shown that the performance of the GPS CP-based collision detection algorithm meets the accuracy requirements for a precision approach for auto landing, with significantly fewer false alarms and no missed alarms.
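The ART criterion can be paraphrased in code. The formulas below are a plausible reading of the abstract (normal-approximation confidence bounds on the weighted mean and variance), not the dissertation's exact derivation:

```python
import numpy as np

def required_particles(weights, particles, eps_mean, eps_var, z=1.96):
    """Particle counts needed to keep the weighted mean and variance estimates
    inside +/- eps bounds at confidence level z (illustrative sketch)."""
    w = weights / weights.sum()
    mu = np.sum(w * particles)
    var = np.sum(w * (particles - mu) ** 2)
    n_mean = (z * np.sqrt(var) / eps_mean) ** 2     # CI half-width on the mean
    n_var = 2.0 * var**2 * (z / eps_var) ** 2       # Var(s^2) ~ 2 sigma^4 / N
    return n_mean, n_var

# Resampling is triggered when the particle count demanded by the variance
# error crosses the count demanded by the mean error.
w = np.random.default_rng(3).random(500)
x = np.random.default_rng(4).normal(size=500)
nm, nv = required_particles(w, x, eps_mean=0.05, eps_var=0.1)
print(int(nm), int(nv), "resample" if nv > nm else "keep")
```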
Müllenbroich, M. Caroline; McGhee, Ewan J; Wright, Amanda J; Anderson, Kurt I; Mathieson, Keith
2014-01-01
We have developed a nonlinear adaptive optics microscope utilizing a deformable membrane mirror (DMM) and demonstrated its use in compensating for system- and sample-induced aberrations. The optimum shape of the DMM was determined with a random search algorithm optimizing on either two-photon fluorescence or second harmonic signals as merit factors. We present here several strategies to overcome the photobleaching issues associated with lengthy optimization routines by adapting the search algorithm and the experimental methodology. Optimizations were performed on extrinsic fluorescent dyes, fluorescent beads loaded into organotypic tissue cultures and the intrinsic second harmonic signal of these cultures. We validate the approach of using these preoptimized mirror shapes to compile a robust look-up table that can be applied for imaging over several days and through a variety of tissues. In this way, the photon exposure of the fluorescent cells under investigation is limited to imaging. Using our look-up table approach, we show signal intensity improvement factors ranging from 1.7 to 4.1 in organotypic tissue cultures and freshly excised mouse tissue. Imaging zebrafish in vivo, we demonstrate signal improvement by a factor of 2. This methodology is easily reproducible and could be applied to many photon-starved experiments, for example fluorescence lifetime imaging, or when photobleaching is a concern.
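A greedy random search of the kind described is straightforward to sketch. The merit function, actuator count and step size below are illustrative stand-ins for the real two-photon or second-harmonic signal:

```python
import numpy as np

def optimize_mirror(merit, n_actuators=32, iters=300, sigma=0.1, seed=0):
    """Random search over deformable-mirror actuator values (sketch of the
    strategy): perturb the current shape and keep the perturbation only if
    the merit signal improves. The optimized shapes would then be stored in
    a look-up table and reused, limiting photobleaching to imaging itself."""
    rng = np.random.default_rng(seed)
    shape = np.zeros(n_actuators)
    best = merit(shape)
    for _ in range(iters):
        trial = shape + sigma * rng.standard_normal(n_actuators)
        score = merit(trial)
        if score > best:                  # greedy accept on brighter signal
            shape, best = trial, score
    return shape, best

# Toy merit: the signal is maximal when the aberration (target shape) is nulled.
target = np.random.default_rng(1).standard_normal(32)
merit = lambda v: -np.sum((v - target) ** 2)
shape, best = optimize_mirror(merit)
print(round(best, 3))
```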
NASA Astrophysics Data System (ADS)
Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen
2016-05-01
We aimed to identify serum biomarkers for screening individuals at sea level who could adapt to high-altitude hypoxia. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, and routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiles of the HHI and HHA groups collected at sea level. Routine blood tests revealed that the concentrations of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling revealed ten peaks that were significantly differentially expressed between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further sequence-identified as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347-356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205-214), and isoform 1 of the fibrinogen α chain precursor (FGA 588-624). Expression of the corresponding full-length proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provides a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes.
NASA Astrophysics Data System (ADS)
Cho, K.; Kim, B.; Lee, D.; Choi, N.; Park, C.
2011-12-01
Adaptation to the environment is a natural phenomenon that takes place in many animals, plants and microorganisms. Adapted organisms achieve stronger capabilities than unadapted organisms after inhabiting a specific environment for a long time. In the biohydrometallurgical industry, adaptation to special environmental conditions by selective culturing is the most popular method for improving the bioleaching activity of strains, although it is time consuming. This study investigated the bioleaching efficiency of mine waste under batch experimental conditions (adaptation and pulp density) using indigenous acidophilic bacteria collected from acid mine drainage in Go-seong and Yeon-hwa, Korea. We conducted batch experiments examining the influence of parameters such as the adaptation of the bacteria and the pulp density of the mine waste. In the adaptation experiments, the pH of the first-adaptation bacteria sample was lower than that of the second-adaptation sample, and the Cu and Zn contents of the first-adaptation sample were likewise lower than those of the second. In the SEM analysis, rod-shaped bacteria about 1 μm in length were observed on the filter paper (pore size 0.45 μm). The pulp density experiments revealed that the Cu and Zn contents increased with increasing pulp density, since higher pulp density enhanced the bioleaching capacity.
How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation
ERIC Educational Resources Information Center
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard
2006-01-01
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
Adaptive trial designs: a review of barriers and opportunities
2012-01-01
Adaptive designs allow planned modifications based on data accumulating within a study. The promise of greater flexibility and efficiency stimulates increasing interest in adaptive designs from clinical, academic, and regulatory parties. When adaptive designs are used properly, efficiencies can include a smaller sample size, a more efficient treatment development process, and an increased chance of correctly answering the clinical question of interest. However, improper adaptations can lead to biased studies. A broad definition of adaptive designs allows for countless variations, which creates confusion as to the statistical validity and practical feasibility of many designs. Determining the properties of a particular adaptive design requires careful consideration of the scientific context and statistical assumptions. We first review several adaptive designs that garner the most current interest. We focus on the design principles and research issues that lead to particular designs being appealing or unappealing in particular applications. We discuss exploratory and confirmatory stage designs separately in order to account for the differences in regulatory concerns. We include adaptive seamless designs, which combine stages in a unified approach. We also highlight a number of applied areas, such as comparative effectiveness research, that would benefit from the use of adaptive designs. Finally, we describe a number of current barriers and provide initial suggestions for overcoming them in order to promote wider use of appropriate adaptive designs. Given the breadth of the coverage, all mathematical and most implementation details are omitted for the sake of brevity. However, the interested reader will find that we provide current references to focused reviews and original theoretical sources which provide details of the current state of the art in theory and practice. PMID:22917111
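Among the adaptations reviewed, sample size reassessment is the most easily illustrated. A minimal sketch of blinded sample size re-estimation for a two-arm mean comparison follows; the formula is the standard normal-approximation one, and the interim data are simulated:

```python
import numpy as np
from scipy import stats

def reestimated_n(interim_values, delta, alpha=0.05, power=0.8):
    """Blinded sample size re-estimation (sketch of one common adaptation).
    At an interim look the pooled variance is re-estimated and the per-arm
    sample size for a two-arm mean comparison is recomputed; delta is the
    clinically relevant difference assumed at the design stage."""
    sigma2 = np.var(interim_values, ddof=1)        # blinded pooled variance
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return int(np.ceil(2 * sigma2 * (z_a + z_b) ** 2 / delta ** 2))

# Interim data whose variance turned out larger than planned for.
interim = np.random.default_rng(7).normal(0, 12.0, size=80)
print("revised per-arm n:", reestimated_n(interim, delta=5.0))
```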
Kim, Hyejung; Van Hoof, Chris; Yazicioglu, Refet Firat
2011-01-01
This paper describes a mixed-signal ECG processing platform with a 12-bit ADC architecture that can adapt its sampling rate according to the input signal's rate of change. This enables the sampling of ECG signals at a significantly reduced data rate without loss of information. The presented adaptive sampling scheme reduces the ADC power consumption, enables the processing of ECG signals with lower power consumption, and reduces the power consumption of the radio while streaming the ECG signals. The test results show that running a CWT-based R peak detection algorithm on the adaptively sampled ECG signals consumes only 45.6 μW and leads to 36% less overall system power consumption.
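The adaptive sampling principle (sample fast where the ECG changes quickly, slowly elsewhere) can be sketched in software, ignoring everything specific to the mixed-signal ADC implementation. Thresholds and rates below are illustrative:

```python
import numpy as np

def adaptive_sample(signal, slow_div=8, thresh=None):
    """Keep every sample where the signal changes quickly (QRS complexes),
    otherwise keep only every slow_div-th sample. Returns kept indices."""
    slope = np.abs(np.diff(signal, prepend=signal[0]))
    if thresh is None:
        thresh = 3.0 * np.median(slope)            # adaptive slope threshold
    keep = [0]
    for i in range(1, len(signal)):
        if slope[i] > thresh or (i - keep[-1]) >= slow_div:
            keep.append(i)
    return np.array(keep)

t = np.arange(0, 2, 1 / 360)                       # ~2 s of synthetic "ECG" at 360 Hz
ecg = np.sin(2 * np.pi * 1.2 * t) + 2.0 * (np.abs(((t * 1.2) % 1) - 0.5) < 0.02)
idx = adaptive_sample(ecg)
print(f"kept {len(idx)} of {len(ecg)} samples")
```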
NASA Technical Reports Server (NTRS)
Drusano, George L.
1991-01-01
Optimal sampling theory is evaluated in applications to studies of the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988), and the resulting pharmacokinetic parameter values are compared with results obtained by a traditional ten-sample design. The impact of the use of optimal sampling is demonstrated in conjunction with the NONMEM approach (Sheiner et al., 1977), in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
Integrating diffusion maps with umbrella sampling: Application to alanine dipeptide
NASA Astrophysics Data System (ADS)
Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.; Kevrekidis, Ioannis G.
2011-04-01
Nonlinear dimensionality reduction techniques can be applied to molecular simulation trajectories to systematically extract a small number of variables with which to parametrize the important dynamical motions of the system. For molecular systems exhibiting free energy barriers exceeding a few kBT, inadequate sampling of the barrier regions between stable or metastable basins can lead to a poor global characterization of the free energy landscape. We present an adaptation of a nonlinear dimensionality reduction technique known as the diffusion map that extends its applicability to biased umbrella sampling simulation trajectories in which restraining potentials are employed to drive the system into high free energy regions and improve sampling of phase space. We then propose a bootstrapped approach to iteratively discover good low-dimensional parametrizations by interleaving successive rounds of umbrella sampling and diffusion mapping, and we illustrate the technique through a study of alanine dipeptide in explicit solvent.
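For readers unfamiliar with the base technique, a plain diffusion map is sketched below on random data; the paper's actual contribution, reweighting pairwise kernels to undo the umbrella-sampling bias, is deliberately omitted here:

```python
import numpy as np

def diffusion_map(X, eps, n_evecs=2):
    """Plain (unweighted) diffusion map on configurations X (n_samples x dim).
    The umbrella-sampling extension additionally reweights each pair to undo
    the bias of the restraining potentials; that weighting is omitted here."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)                      # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)       # row-normalized Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector; the next ones parametrize the
    # slow collective motions of the system.
    return vecs[:, order[1:n_evecs + 1]].real

X = np.random.default_rng(5).normal(size=(200, 3))
print(diffusion_map(X, eps=2.0).shape)         # (200, 2)
```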
NASA Astrophysics Data System (ADS)
Liu, Yuefeng; Duan, Zhuoyi; Chen, Song
2017-10-01
Aerodynamic shape optimization aimed at improving the efficiency of an aircraft has always been a challenging task, especially when the configuration is complex. In this paper, a hybrid FFD-RBF surface parameterization approach is proposed for designing a civil transport wing-body configuration. This approach is simple and efficient, with the FFD technique used for parameterizing the wing shape and the RBF interpolation approach used for updating the wing-body junction. Furthermore, combined with the Cuckoo Search algorithm and a Kriging surrogate model with an expected improvement adaptive sampling criterion, an aerodynamic shape optimization design system has been established. Finally, aerodynamic shape optimization of the DLR F4 wing-body configuration was carried out as a study case, and the results show that the proposed approach is effective.
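The expected improvement criterion used for adaptive sampling of the Kriging model is standard and easy to state in code. The drag numbers in the example are made up:

```python
import numpy as np
from scipy import stats

def expected_improvement(mu, sigma, f_best):
    """Expected improvement (minimization) at points with Kriging predictive
    mean mu and standard deviation sigma; f_best is the incumbent optimum."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * stats.norm.cdf(z) + sigma * stats.norm.pdf(z)

# The candidate with the largest EI is evaluated with the flow solver and
# added to the surrogate model, balancing exploitation against uncertainty.
mu = np.array([0.031, 0.029, 0.034])       # hypothetical predicted drag values
sd = np.array([0.0005, 0.0030, 0.0010])
print(expected_improvement(mu, sd, f_best=0.030).argmax())   # -> index 1
```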
Small target pre-detection with an attention mechanism
NASA Astrophysics Data System (ADS)
Wang, Yuehuan; Zhang, Tianxu; Wang, Guoyou
2002-04-01
We introduce the concept of pre-detection based on an attention mechanism to improve the efficiency of small-target detection by limiting the image region of detection. According to the characteristics of small-target detection, local contrast is taken as the only feature in pre-detection, and a nonlinear sampling model is adopted to make the pre-detection adaptive to small targets of different sizes. To simplify the pre-detection itself and decrease the false alarm probability, neighboring nodes in the sampling grid are used to generate a saliency map, and a short-term memory is adopted to accelerate the "pop-out" of targets. The proposed approach has low computational complexity, and even in a cluttered background, attention is drawn to targets within a satisfyingly small number of iterations, which ensures that detection efficiency is not degraded by false alarms. Experimental results are presented to demonstrate the applicability of the approach.
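A crude version of the local-contrast saliency computation (without the nonlinear sampling grid or short-term memory described above) can be sketched as two box filters; window sizes are illustrative:

```python
import numpy as np

def local_contrast_map(img, inner=3, outer=9):
    """Local-contrast saliency for small-target pre-detection (sketch):
    mean of a small inner window minus mean of a larger surrounding window.
    Integral-image box filtering keeps the cost linear in pixel count."""
    def box_mean(a, k):
        p = k // 2
        padded = np.pad(a.astype(float), p, mode="edge")
        c = np.cumsum(np.cumsum(padded, 0), 1)
        c = np.pad(c, ((1, 0), (1, 0)))            # zero row/col for the sums
        return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / k**2
    return box_mean(img, inner) - box_mean(img, outer)

img = np.random.default_rng(2).random((64, 64))
img[30:32, 40:42] += 1.5                     # a dim 2x2 "target" in clutter
s = local_contrast_map(img)
print(np.unravel_index(np.argmax(s), s.shape))   # near (30, 40)
```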
Exuberant and inhibited children: Person-centered profiles and links to social adjustment.
Dollar, Jessica M; Stifter, Cynthia A; Buss, Kristin A
2017-07-01
The current study aimed to substantiate and extend our understanding regarding the existence and developmental pathways of 3 distinct temperament profiles (exuberant, inhibited, and average approach) in a sample of 3.5-year-old children (n = 121). The interactions between temperamental styles and specific types of effortful control (inhibitory control and attentional control) were also examined in predicting kindergarten peer acceptance. Latent profile analysis identified 3 temperamental styles: exuberant, inhibited, and average approach. Support was found for the adaptive role of inhibitory control for exuberant children and attentional control for inhibited children in promoting peer acceptance in kindergarten. These findings add to our current understanding of temperamental profiles by using sophisticated methodology in a slightly older, community sample, and underscore the importance of examining specific types of self-regulation to identify which skills lower risk for children of different temperamental styles. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Resonant ultrasound spectroscopy for materials with high damping and samples of arbitrary geometry
Remillieux, Marcel C.; Ulrich, T. J.; Payan, Cédric; ...
2015-07-23
This paper describes resonant ultrasound spectroscopy (RUS) as a powerful and established technique for measuring elastic constants of a material with general anisotropy. The first step of this technique consists of extracting resonance frequencies and damping from the vibrational frequency spectrum measured on a sample with free boundary conditions. An inversion technique is then used to retrieve the elastic tensor from the measured resonance frequencies. As originally developed, RUS has been mostly applicable to (i) materials with small damping such that the resonances of the sample are well separated and (ii) samples with simple geometries for which analytical solutions exist. In this paper, these limitations are addressed with a new RUS approach adapted to materials with high damping and samples of arbitrary geometry. Resonances are extracted by fitting a sum of exponentially damped sinusoids to the measured frequency spectrum. The inversion of the elastic tensor is achieved with a genetic algorithm, which allows searching for a global minimum within a discrete and relatively wide solution space. First, the accuracy of the proposed approach is evaluated against numerical data simulated for samples with isotropic symmetry and transversely isotropic symmetry. Subsequently, the applicability of the approach is demonstrated using experimental data collected on a composite structure consisting of a cylindrical sample of Berea sandstone glued to a large piezoelectric disk. In the proposed experiments, RUS is further enhanced by the use of a 3-D laser vibrometer allowing the visualization of most of the modes in the frequency band studied.
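The resonance extraction step (fitting damped sinusoids, i.e. Lorentzian lines in the frequency domain) can be sketched as a nonlinear least-squares fit. The two-mode synthetic spectrum below imitates the high-damping, overlapping-peak situation the paper targets; the subsequent genetic-algorithm inversion of the elastic tensor is not shown:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, gamma, amp):
    """Magnitude spectrum of one exponentially damped sinusoid: a Lorentzian
    line centered at f0 with half-width gamma (the damping rate)."""
    return amp / np.sqrt((f - f0) ** 2 + gamma ** 2)

def two_modes(f, f1, g1, a1, f2, g2, a2):
    return lorentzian(f, f1, g1, a1) + lorentzian(f, f2, g2, a2)

# Synthetic high-damping spectrum: two strongly overlapping resonances + noise.
f = np.linspace(5.0, 25.0, 600)
spec = lorentzian(f, 11.0, 1.5, 1.0) + lorentzian(f, 14.0, 2.0, 0.8)
spec += 0.01 * np.random.default_rng(9).standard_normal(f.size)

# For overlapping peaks, fitting (rather than peak picking) recovers both
# resonance frequencies and dampings.
p, _ = curve_fit(two_modes, f, spec, p0=[10.0, 1.0, 1.0, 15.0, 1.0, 1.0])
print(np.round(p[[0, 3]], 2), np.round(p[[1, 4]], 2))   # f0s and gammas
```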
ERIC Educational Resources Information Center
Klein, Ronald; And Others
The Alpha Omega Completed Sentence Form (AOCSF) was developed to identify and measure a person's adaptational approaches to information concerning their own death or the possible death of a significant other. In contrast to the Kubler-Ross stage theory, the adaptational approach recognizes a person's capacity to assimilate new information which…
Methods for correcting tilt anisoplanatism in laser-guide-star-based multiconjugate adaptive optics.
Ellerbroek, B L; Rigaut, F
2001-10-01
Multiconjugate adaptive optics (MCAO) is a technique for correcting turbulence-induced phase distortions in three dimensions instead of two, thereby greatly expanding the corrected field of view of an adaptive optics system. This is accomplished with use of multiple deformable mirrors conjugate to distinct ranges in the atmosphere, with actuator commands computed from wave-front sensor (WFS) measurements from multiple guide stars. Laser guide stars (LGSs) must be used (at least for the foreseeable future) to achieve a useful degree of sky coverage in an astronomical MCAO system. Much as a single LGS cannot be used to measure overall wave-front tilt, a constellation of multiple LGSs at a common range cannot detect tilt anisoplanatism. This error alone will significantly degrade the performance of an MCAO system based on a single tilt-only natural guide star (NGS) and multiple tilt-removed LGSs at a common altitude. We present a heuristic, low-order model for the principal source of tilt anisoplanatism that suggests four possible approaches to eliminating this defect in LGS MCAO: (i) tip/tilt measurements from multiple NGSs, (ii) a solution to the LGS tilt uncertainty problem, (iii) additional higher-order WFS measurements from a single NGS, or (iv) higher-order WFS measurements from both sodium and Rayleigh LGSs at different ranges. Sample numerical results for one particular MCAO system configuration indicate that approach (ii), if feasible, would provide the highest degree of tilt anisoplanatism compensation. Approaches (i) and (iv) also provide very useful levels of performance and do not require unrealistically low levels of WFS measurement noise. For a representative set of parameters for an 8-m telescope, the additional laser power required for approach (iv) is on the order of 2 W per Rayleigh LGS.
Apparatus and method for handheld sampling
Staab, Torsten A.
2005-09-20
The present invention includes an apparatus, and corresponding method, for taking a sample. The apparatus is built around a frame designed to be held in at least one hand. A sample media is used to secure the sample. A sample media adapter for securing the sample media is operated by a trigger mechanism connectively attached within the frame to the sample media adapter.
Pateman, K A; Ford, P J; Batstone, M D; Farah, C S
2015-08-01
Oral health is essential to general health and well-being and is severely impacted by head and neck cancer (HNC) and its treatment. This study aimed to describe how people who have been treated for HNC cope with altered oral health and function and to identify their supportive care needs. A qualitative, descriptive approach was used. Data were collected from individual interviews with six participants 6 months after treatment. Data analysis was performed by qualitative content analysis involving inductive and directed approaches. Directed content analysis was guided by the Stress, Appraisal and Coping Model. Three themes describing changed oral health were identified from the data: dimensions of eating, maintaining oral health after treatment and adapting to the chronic side effects of treatment. A strong use of problem-focussed coping was described, in addition to the importance of peer support in adapting to the psychosocial outcomes of treatment. The support needs identified related to increased access to specialist dental oncology services post treatment, information needs and a need for more psychological support. The study findings describe the experience of a sample of people who have received treatment for HNC. Due to the demographically homogeneous sample and the strong use of positive coping strategies, the results presented may not describe the experience of the wider HNC population; however, they provide insight into factors that may influence positive coping.
Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; ...
2017-10-17
In this paper we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic Amplitude Versus Angle and Controlled-Source Electromagnetic joint inversion provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated: reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.
Novel approach for deriving genome wide SNP analysis data from archived blood spots
2012-01-01
Background: The ability to transport and store DNA at room temperature in low volumes has the advantage of optimising cost, time and storage space. Blood spots on adapted filter papers are popular for this, with FTA (Flinders Technology Associates) Whatman™ technology being one of the most recent. Plant material, plasmids, viral particles, bacteria and animal blood have been stored and transported successfully using this technology; however, porcine DNA extraction from FTA Whatman™ cards is a relatively new approach, and its capacity to yield nucleic acids ready for downstream applications such as PCR, whole genome amplification, sequencing and subsequent application to single nucleotide polymorphism microarrays has hitherto been under-explored. Findings: DNA was extracted from FTA Whatman™ cards (following adaptations of the manufacturer's instructions), whole genome amplified and subsequently analysed to validate the integrity of the DNA for downstream SNP analysis. DNA was successfully extracted from 288/288 samples and amplified by WGA. Allele dropout post-WGA was observed in less than 2% of samples, and there was no clear evidence of amplification bias or contamination. Acceptable call rates on porcine SNP chips were also achieved using DNA extracted and amplified in this way. Conclusions: DNA extracted from FTA Whatman™ cards is of high enough quality and quantity following whole genomic amplification to perform meaningful SNP chip studies. PMID:22974252
Davidson, Emma M; Liu, Jing Jing; Bhopal, Raj; White, Martin; Johnson, Mark RD; Netto, Gina; Wabnitz, Cecile; Sheikh, Aziz
2013-01-01
Context Adapting behavior change interventions to meet the needs of racial and ethnic minority populations has the potential to enhance their effectiveness in the target populations. But because there is little guidance on how best to undertake these adaptations, work in this field has proceeded without any firm foundations. In this article, we present our Tool Kit of Adaptation Approaches as a framework for policymakers, practitioners, and researchers interested in delivering behavior change interventions to ethnically diverse, underserved populations in the United Kingdom. Methods We undertook a mixed-method program of research on interventions for smoking cessation, increasing physical activity, and promoting healthy eating that had been adapted to improve salience and acceptability for African-, Chinese-, and South Asian–origin minority populations. This program included a systematic review (reported using PRISMA criteria), qualitative interviews, and a realist synthesis of data. Findings We compiled a richly informative data set of 161 publications and twenty-six interviews detailing the adaptation of behavior change interventions and the contexts in which they were undertaken. On the basis of these data, we developed our Tool Kit of Adaptation Approaches, which contains (1) a forty-six-item Typology of Adaptation Approaches; (2) a Pathway to Adaptation, which shows how to use the Typology to create a generic behavior change intervention; and (3) RESET, a decision tool that provides practical guidance on which adaptations to use in different contexts. Conclusions Our Tool Kit of Adaptation Approaches provides the first evidence-derived suite of materials to support the development, design, implementation, and reporting of health behavior change interventions for minority groups. The Tool Kit now needs prospective, empirical evaluation in a range of intervention and population settings. PMID:24320170
Elsäßer, Amelie; Regnstrom, Jan; Vetter, Thorsten; Koenig, Franz; Hemmings, Robert James; Greco, Martina; Papaluca-Amati, Marisa; Posch, Martin
2014-10-02
Since the first methodological publications on adaptive study design approaches in the 1990s, the application of these approaches in drug development has raised increasing interest among academia, industry and regulators. The European Medicines Agency (EMA) as well as the Food and Drug Administration (FDA) have published guidance documents addressing the potentials and limitations of adaptive designs in the regulatory context. Since there is limited experience in the implementation and interpretation of adaptive clinical trials, early interaction with regulators is recommended. The EMA offers such interactions through scientific advice and protocol assistance procedures. We performed a text search of scientific advice letters issued between 1 January 2007 and 8 May 2012 that contained relevant key terms. Letters containing questions related to adaptive clinical trials in phases II or III were selected for further analysis. From the selected letters, important characteristics of the proposed design and its context in the drug development program, as well as the responses of the Committee for Human Medicinal Products (CHMP)/Scientific Advice Working Party (SAWP), were extracted and categorized. For 41 more recent procedures (1 January 2009 to 8 May 2012), additional details of the trial design and the CHMP/SAWP responses were assessed. In addition, case studies are presented as examples. Over a period of 5½ years, 59 scientific advice procedures were identified that address adaptive study designs in phase II and phase III clinical trials. Almost all were proposed as confirmatory phase III or phase II/III studies. The most frequently proposed adaptation was sample size reassessment, followed by dropping of treatment arms and population enrichment. While 12 (20%) of the 59 proposals for an adaptive clinical trial were not accepted, the great majority of proposals were accepted (15, 25%) or conditionally accepted (32, 54%). In the 41 more recent procedures, the most frequent concerns raised by CHMP/SAWP were insufficient justification of the adaptation strategy, type I error rate control and bias. For the majority of proposed adaptive clinical trials, an overall positive opinion was given, albeit with critical comments. Type I error rate control, bias and the justification of the design are common issues raised by the CHMP/SAWP.
Ravera, Federica; Martín-López, Berta; Pascual, Unai; Drucker, Adam
2016-12-01
This paper examines climate change adaptation and gender issues through an application of a feminist intersectional approach. This approach permits the identification of diverse adaptation responses arising from the existence of multiple and fragmented dimensions of identity (including gender) that intersect with power relations to shape situation-specific interactions between farmers and ecosystems. Based on results from contrasting research cases in Bihar and Uttarakhand, India, this paper demonstrates, inter alia, that there are geographically determined gendered preferences and adoption strategies regarding adaptation options and that these are influenced by the socio-ecological context and institutional dynamics. Intersecting identities, such as caste, wealth, age and gender, influence decisions and reveal power dynamics and negotiation within the household and the community, as well as barriers to adaptation among groups. Overall, the findings suggest that a feminist intersectional approach does appear to be useful and worth further exploration in the context of climate change adaptation. In particular, future research could benefit from more emphasis on a nuanced analysis of the intra-gender differences that shape adaptive capacity to climate change.
NASA Astrophysics Data System (ADS)
Krogh, E.; Gill, C.; Bell, R.; Davey, N.; Martinsen, M.; Thompson, A.; Simpson, I. J.; Blake, D. R.
2012-12-01
The release of hydrocarbons into the environment can have significant environmental and economic consequences. The evolution of smaller, more portable mass spectrometers for field use can provide spatially and temporally resolved information for rapid detection, adaptive sampling and decision support. We have deployed a mobile-platform membrane introduction mass spectrometer (MIMS) for the in-field simultaneous measurement of volatile and semi-volatile organic compounds. In this work, we report instrument and data handling advances that produce geographically referenced data in real time, together with preliminary data in which these improvements have been combined with high-precision ultra-trace VOC analysis to adaptively sample air plumes near oil and gas operations in Alberta, Canada. We have modified a commercially available ion-trap mass spectrometer (Griffin ICX 400) with an in-house temperature-controlled capillary hollow fibre polydimethylsiloxane (PDMS) polymer membrane interface and an in-line permeation tube flow cell for a continuously infused internal standard. The system is powered by 24 VDC for remote operation in a moving vehicle. Software modifications include the ability to run continuous, interlaced tandem mass spectrometry (MS/MS) experiments for multiple contaminants/internal standards. All data are time and location stamped with on-board GPS and meteorological data to facilitate spatial and temporal data mapping. Tandem MS/MS scans were employed to simultaneously monitor ten volatile and semi-volatile analytes, including benzene, toluene, ethylbenzene and xylene (BTEX), reduced sulfur compounds, halogenated organics and naphthalene. Quantification was achieved by calibrating against a continuously infused deuterated internal standard (toluene-d8). Time-referenced MS/MS data were correlated with positional data and processed using Labview and Matlab to produce calibrated, geographical Google Earth data visualizations that enable adaptive sampling protocols. This real-time approach has been employed in a moving vehicle to identify and track downwind plumes of fugitive VOC emissions near hydrocarbon upgrading and chemical processing facilities in Fort Saskatchewan, Alberta. This information was relayed to a trailing vehicle, which collected stationary grab samples in evacuated canisters for ultra-trace analysis of over seventy VOC analytes. In addition, stationary time series data were collected and compared with grab samples co-located with our sampling line. Spatially and temporally resolved, time-referenced MS/MS data for several air contaminants associated with oil and gas processing were processed in real time to produce geospatial data for visualization in Google Earth. This information was used to strategically locate grab samples for high-precision, ultra-trace analysis.
Sampling procedures for inventory of commercial volume tree species in Amazon Forest.
Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R
2017-01-01
The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories; therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. To this end, the present study aims to evaluate conventional sampling procedures and introduce adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that density, spatial distribution and zero-plots affect the consistency of the estimators, and that adaptive cluster sampling allows more accurate volumetric estimates to be obtained. We use data from a census carried out in Jamari National Forest, Brazil, where trees with diameters equal to or greater than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling and adaptive cluster sampling, after which the accuracy of the volumetric estimates and the presence of zero-plots were evaluated. The sampling procedures applied to these species were affected by the low density of trees and the large number of zero-plots, whereas the adaptive clusters concentrated the sampling effort in plots with trees and thus aggregated more representative samples for estimating the commercial volume.
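As a concrete, if toy, illustration of the adaptive design evaluated above, the sketch below runs adaptive cluster sampling over a synthetic grid of plots; the grid layout, threshold and function names are hypothetical and not taken from the study.

```python
import random

def adaptive_cluster_sample(counts, n_initial, threshold=1, seed=0):
    """Adaptive cluster sampling on a grid of per-plot tree counts.

    Starts from a simple random sample of plots; whenever a sampled plot
    meets the threshold, its four neighbours are added, so each cluster
    grows until it is bounded by below-threshold edge units."""
    rng = random.Random(seed)
    sampled = set(rng.sample(list(counts), n_initial))
    frontier = [p for p in sampled if counts[p] >= threshold]
    while frontier:
        r, c = frontier.pop()
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if nb in counts and nb not in sampled:
                sampled.add(nb)
                if counts[nb] >= threshold:
                    frontier.append(nb)
    return sampled

# Toy stand: 20 x 20 plots, mostly zero-plots, with two small clumps of trees.
grid = {(r, c): 0 for r in range(20) for c in range(20)}
for plot in [(3, 4), (3, 5), (4, 4), (15, 12), (15, 13), (16, 12)]:
    grid[plot] = 2
print(len(adaptive_cluster_sample(grid, n_initial=25)))
```

The effect visible even in this toy run is the one the abstract describes: once an initial plot lands in a clump, the sampling effort concentrates there instead of being spent on zero-plots.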
NASA Astrophysics Data System (ADS)
Miorelli, Roberto; Reboud, Christophe
2018-04-01
Pulsed Eddy Current Testing (PECT) is a popular NonDestructive Testing (NDT) technique for applications such as corrosion monitoring in the oil and gas industry or rivet inspection in the aeronautic sector. Its particularity is the use of a transient excitation, which allows more information to be retrieved from the piece than conventional harmonic ECT, in a simpler and cheaper way than multi-frequency ECT setups. Efficient modeling tools are, as usual, very useful for optimizing experimental sensors and devices or evaluating their performance. This paper proposes an efficient simulation of PECT signals based on standard time-harmonic solvers and an Adaptive Sparse Grid (ASG) algorithm. An adaptive sampling of the ECT signal spectrum is performed with this algorithm; the complete spectrum is then interpolated from this sparse representation, and PECT signals are finally synthesized by means of an inverse Fourier transform. Simulation results corresponding to existing industrial configurations are presented, and the performance of the strategy is discussed by comparison to reference results.
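To make the sample-interpolate-synthesize pipeline concrete, here is a much-simplified one-dimensional sketch: a greedy refinement stands in for the sparse-grid algorithm (true ASG works in higher dimensions and estimates error from hierarchical surpluses rather than extra solver calls), and `harmonic_response` is a stand-in for a time-harmonic solver evaluation.

```python
import numpy as np

def harmonic_response(f):
    # Stand-in for a time-harmonic ECT solver call at frequency f
    # (assumption: here just a damped complex resonance).
    return 1.0 / (1.0 + 1j * f / 5e3) ** 2

def adaptive_spectrum(fmax, n_eval=40):
    """Greedy 1D analogue of adaptive spectrum sampling: refine the
    frequency axis where linear interpolation of the sampled spectrum
    disagrees most with the solver."""
    freqs = [0.0, fmax / 2, fmax]
    vals = [harmonic_response(f) for f in freqs]
    while len(freqs) < n_eval:
        errs = []
        for i in range(len(freqs) - 1):
            mid = 0.5 * (freqs[i] + freqs[i + 1])
            interp = 0.5 * (vals[i] + vals[i + 1])
            errs.append((abs(harmonic_response(mid) - interp), mid, i))
        err, mid, i = max(errs)          # refine the worst interval
        freqs.insert(i + 1, mid)
        vals.insert(i + 1, harmonic_response(mid))
    return np.array(freqs), np.array(vals)

# Interpolate the sparse spectrum onto a uniform grid, then synthesize the
# pulsed (time-domain) signal with an inverse real FFT.
f_sparse, v_sparse = adaptive_spectrum(fmax=50e3)
f_full = np.linspace(0, 50e3, 513)
spectrum = np.interp(f_full, f_sparse, v_sparse.real) \
    + 1j * np.interp(f_full, f_sparse, v_sparse.imag)
pect_signal = np.fft.irfft(spectrum)
```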
NASA Astrophysics Data System (ADS)
Chernick, Julian A.; Perlovsky, Leonid I.; Tye, David M.
1994-06-01
This paper describes applications of the maximum likelihood adaptive neural system (MLANS) to the characterization of clutter in IR images and to the identification of targets. The characterization of image clutter is needed to improve target detection and to enhance the ability to compare the performance of different algorithms using diverse imagery data. Enhanced unambiguous IFF is important for fratricide reduction, while automatic cueing and targeting is becoming an ever-increasing part of operations. We utilized MLANS, which is a parametric neural network that combines optimal statistical techniques with a model-based approach. This paper shows that MLANS outperforms classical classifiers, the quadratic classifier and the nearest neighbor classifier, because, on the one hand, it is not limited to the usual Gaussian distribution assumption and can adapt in real time to the image clutter distribution; on the other hand, MLANS learns from fewer samples and is more robust than the nearest neighbor classifier. Future research will address uncooperative IFF using fused IR and MMW data.
Cross-Layer Adaptive Feedback Scheduling of Wireless Control Systems
Xia, Feng; Ma, Longhua; Peng, Chen; Sun, Youxian; Dong, Jinxiang
2008-01-01
There is a trend towards using wireless technologies in networked control systems. However, the adverse properties of radio channels make it difficult to design and implement control systems in wireless environments. To address the uncertainty in available communication resources in wireless control systems closed over a WLAN, a cross-layer adaptive feedback scheduling (CLAFS) scheme is developed, which takes advantage of the co-design of control and wireless communications. By exploiting cross-layer design, CLAFS adjusts the sampling periods of control systems at the application layer based on information about the deadline miss ratio and transmission rate from the physical layer. Within the framework of feedback scheduling, the control performance is maximized through controlling the deadline miss ratio. Key design parameters of the feedback scheduler are adapted to dynamic changes in the channel condition. An event-driven invocation mechanism for the feedback scheduler is also developed. Simulation results show that the proposed approach is efficient in dealing with channel capacity variations and noise interference, thus providing an enabling technology for control over WLAN. PMID:27879934
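The core feedback idea — lengthen sampling periods when the physical layer reports too many deadline misses, shorten them when the channel has slack — can be sketched in a few lines; the multiplicative update rule and all names and bounds below are assumptions for illustration, not the CLAFS algorithm itself.

```python
def adjust_periods(periods, miss_ratio, target=0.05, step=1.1,
                   p_min=0.01, p_max=0.5):
    """Lengthen sampling periods when the measured deadline miss ratio
    exceeds the target (congested channel); shorten them when there is
    slack. The multiplicative rule and the bounds are illustrative only."""
    factor = step if miss_ratio > target else 1.0 / step
    return [min(p_max, max(p_min, p * factor)) for p in periods]

# Three control loops with periods in seconds; the physical layer reports a
# 12% deadline miss ratio, so all loops back off.
periods = adjust_periods([0.05, 0.10, 0.02], miss_ratio=0.12)
```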
Moritz, Steffen; Jahns, Anna Katharina; Schröder, Johanna; Berger, Thomas; Lincoln, Tania M; Klein, Jan Philipp; Göritz, Anja S
2016-02-01
Lack of adaptive and enhanced maladaptive coping with stress and negative emotions are implicated in many psychopathological disorders. We describe the development of a new scale to investigate the relative contribution of different coping styles to psychopathology in a large population sample. We hypothesized that the magnitude of the supposed positive correlation between maladaptive coping and psychopathology would be stronger than the supposed negative correlation between adaptive coping and psychopathology. We also examined whether distinct coping style patterns emerge for different psychopathological syndromes. A total of 2200 individuals from the general population participated in an online survey. The Patient Health Questionnaire-9 (PHQ-9), the Obsessive-Compulsive Inventory revised (OCI-R) and the Paranoia Checklist were administered along with a novel instrument called Maladaptive and Adaptive Coping Styles (MAX) questionnaire. Participants were reassessed six months later. MAX consists of three dimensions representing adaptive coping, maladaptive coping and avoidance. Across all psychopathological syndromes, similar response patterns emerged. Maladaptive coping was more strongly related to psychopathology than adaptive coping both cross-sectionally and longitudinally. The overall number of coping styles adopted by an individual predicted greater psychopathology. Mediation analysis suggests that a mild positive relationship between adaptive and certain maladaptive styles (emotional suppression) partially accounts for the attenuated relationship between adaptive coping and depressive symptoms. Results should be replicated in a clinical population. Results suggest that maladaptive and adaptive coping styles are not reciprocal. Reducing maladaptive coping seems to be more important for outcome than enhancing adaptive coping. The study supports transdiagnostic approaches advocating that maladaptive coping is a common factor across different psychopathologies. Copyright © 2015 Elsevier B.V. All rights reserved.
Group adaptation, formal darwinism and contextual analysis.
Okasha, S; Paternotte, C
2012-06-01
We consider the question: under what circumstances can the concept of adaptation be applied to groups, rather than individuals? Gardner and Grafen (2009, J. Evol. Biol. 22: 659-671) develop a novel approach to this question, building on Grafen's 'formal Darwinism' project, which defines adaptation in terms of links between evolutionary dynamics and optimization. They conclude that only clonal groups, and to a lesser extent groups in which reproductive competition is repressed, can be considered as adaptive units. We re-examine the conditions under which the selection-optimization links hold at the group level. We focus on an important distinction between two ways of understanding the links, which have different implications regarding group adaptationism. We show how the formal Darwinism approach can be reconciled with G.C. Williams' famous analysis of group adaptation, and we consider the relationships between group adaptation, the Price equation approach to multi-level selection, and the alternative approach based on contextual analysis. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.
Adaptive Delta Management: cultural aspects of dealing with uncertainty
NASA Astrophysics Data System (ADS)
Timmermans, Jos; Haasnoot, Marjolijn; Hermans, Leon; Kwakkel, Jan
2016-04-01
Deltas are generally recognized as vulnerable to climate change and are therefore a salient topic in adaptation science. Deltas are also highly dynamic systems viewed from physical (erosion, sedimentation, subsidence), social (demographic), economic (trade), infrastructural (transport, energy, metropolization) and cultural (multi-ethnic) perspectives. This multi-faceted dynamic character of delta areas warrants the emergence of a branch of applied adaptation science, Adaptive Delta Management, which explicitly focuses on climate adaptation of such highly dynamic and deeply uncertain systems. The application of Adaptive Delta Management in the Dutch Delta Program and its active international dissemination by Dutch professionals have resulted in the rapid spread of Adaptive Delta Management to deltas worldwide. This global dissemination raises concerns among professionals in delta management about its applicability in deltas with cultural conditions and historical developments quite different from those found in the Netherlands and the United Kingdom, where the practices now labelled as Adaptive Delta Management first emerged. This research develops an approach and gives a first analysis of the interaction between the characteristics of different approaches in Adaptive Delta Management and their alignment with the cultural conditions encountered in various deltas globally. In this analysis, the different management theories underlying approaches to Adaptive Delta Management, as encountered in both scientific and professional publications, are first identified and characterized on three dimensions: orientation on today, orientation on the future, and decision making (Timmermans, 2015). The underlying management theories encountered are policy analysis, strategic management, transition management, and adaptive management. These four management theories underlying different approaches in Adaptive Delta Management are connected to Hofstede's (1983) cultural dimensions, of which uncertainty avoidance and long-term orientation are of particular relevance for our analysis. Our conclusions address whether approaches in Adaptive Delta Management rooted in different management theories are more suitable for specific delta countries than others. The most striking conclusion is the unsuitability of rational policy-analytic approaches for the Netherlands. Although surprising, this conclusion finds some support in the process-dominated approach taken in the Dutch Delta Program. In addition, the divergence between Vietnam, Bangladesh and Myanmar, all located in South East Asia, is striking. References: Hofstede, G. (1983). The cultural relativity of organizational practices and theories. Journal of International Business Studies, 75-89. Timmermans, J., Haasnoot, M., Hermans, L., Kwakkel, J., Rutten, M. and Thissen, W. (2015). Adaptive Delta Management: Roots and Branches. IAHR The Hague 2015.
Experimental Design and Primary Data Analysis Methods for Comparing Adaptive Interventions
Nahum-Shani, Inbal; Qian, Min; Almirall, Daniel; Pelham, William E.; Gnagy, Beth; Fabiano, Greg; Waxmonsky, Jim; Yu, Jihnhee; Murphy, Susan
2013-01-01
In recent years, research in the area of intervention development is shifting from the traditional fixed-intervention approach to adaptive interventions, which allow greater individualization and adaptation of intervention options (i.e., intervention type and/or dosage) over time. Adaptive interventions are operationalized via a sequence of decision rules that specify how intervention options should be adapted to an individual’s characteristics and changing needs, with the general aim to optimize the long-term effectiveness of the intervention. Here, we review adaptive interventions, discussing the potential contribution of this concept to research in the behavioral and social sciences. We then propose the sequential multiple assignment randomized trial (SMART), an experimental design useful for addressing research questions that inform the construction of high-quality adaptive interventions. To clarify the SMART approach and its advantages, we compare SMART with other experimental approaches. We also provide methods for analyzing data from SMART to address primary research questions that inform the construction of a high-quality adaptive intervention. PMID:23025433
Holliday, Trenton W; Hilton, Charles E
2010-06-01
Given the well-documented fact that human body proportions covary with climate (presumably due to the action of selection), one would expect that the Ipiutak and Tigara Inuit samples from Point Hope, Alaska, would be characterized by an extremely cold-adapted body shape. Comparison of the Point Hope Inuit samples to a large (n > 900) sample of European and European-derived, African and African-derived, and Native American skeletons (including Koniag Inuit from Kodiak Island, Alaska) confirms that the Point Hope Inuit evince a cold-adapted body form, but analyses also reveal some unexpected results. For example, one might suspect that the Point Hope samples would show a more cold-adapted body form than the Koniag, given their more extreme environment, but this is not the case. Additionally, univariate analyses seldom show the Inuit samples to be more cold-adapted in body shape than Europeans, and multivariate cluster analyses that include a myriad of body shape variables such as femoral head diameter, bi-iliac breadth, and limb segment lengths fail to effectively separate the Inuit samples from Europeans. In fact, in terms of body shape, the European and the Inuit samples tend to be cold-adapted and tend to be separated in multivariate space from the more tropically adapted Africans, especially those groups from south of the Sahara. Copyright 2009 Wiley-Liss, Inc.
Adaptive Sampling for Urban Air Quality through Participatory Sensing
Zeng, Yuanyuan; Xiang, Kai
2017-01-01
Air pollution is one of the major problems of the modern world. The widespread adoption and powerful functionality of smartphone applications enable people to participate in urban sensing and learn more about the air quality problems around them. Data sampling is one of the most important factors affecting sensing performance. In this paper, we propose an Adaptive Sampling Scheme for Urban Air Quality (AS-air) through participatory sensing. First, we propose to find pattern rules of air quality from the historical data contributed by participants, based on the Apriori algorithm. Building on these rules, we predict on-line air quality and use the prediction to accelerate the Q-learning process that chooses and adapts the sampling parameter. The evaluation results show that AS-air provides an energy-efficient sampling strategy that adapts to the varying outdoor air environment with good sampling efficiency. PMID:29099766
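A minimal sketch of the reinforcement-learning step is given below, assuming discrete air-quality states and a small set of candidate sampling intervals; the state and action sets, reward design and parameter values are all assumptions, since the abstract does not specify them.

```python
import random

STATES = ["good", "moderate", "unhealthy"]   # coarse predicted air quality
ACTIONS = [5, 15, 30]                        # candidate sampling intervals (min)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose(state, eps=0.1):
    """Epsilon-greedy choice of the next sampling interval."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update; the reward would trade data
    utility against the energy cost of sampling."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])
```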
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted critical value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
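As a sketch of how the dual test combines the two statistics, assume independent stage-wise z-statistics and pre-specified weights with w1² + w2² = 1 (the standard weighted inverse-normal combination); the function below is illustrative, not the authors' code.

```python
from math import sqrt
from scipy.stats import norm

def dual_test(z1, z2, n1, n2, w1, w2, alpha=0.025):
    """Reject H0 only when both the unadjusted statistic computed from the
    actual stage sample sizes and the pre-weighted statistic clear the same
    unadjusted critical value. Assumes independent normal stage statistics
    and design-stage weights with w1**2 + w2**2 == 1."""
    z_crit = norm.ppf(1 - alpha)
    z_naive = (sqrt(n1) * z1 + sqrt(n2) * z2) / sqrt(n1 + n2)
    z_weighted = w1 * z1 + w2 * z2
    return z_naive > z_crit and z_weighted > z_crit

# Second-stage sample size re-estimated upward after an interim look:
print(dual_test(z1=1.2, z2=1.9, n1=100, n2=180,
                w1=sqrt(0.5), w2=sqrt(0.5)))
```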
Kann, Birthe; Windbergs, Maike
2013-04-01
Confocal Raman microscopy is an analytical technique with a steadily increasing impact in the field of pharmaceutics, as the instrumental setup allows for nondestructive visualization of component distribution within drug delivery systems. Here, the attention is mainly focused on classic solid carrier systems like tablets, pellets, or extrudates. Due to the opacity of these systems, Raman analysis is restricted either to exterior surfaces or cross sections. As Raman spectra are only recorded from one focal plane at a time, the sample is usually altered to create a smooth and even surface. However, this manipulation can lead to misinterpretation of the analytical results. Here, we present a trendsetting approach to overcome these analytical pitfalls with a combination of confocal Raman microscopy and optical profilometry. By acquiring a topography profile of the sample area of interest prior to Raman spectroscopy, the profile height information allowed the focal plane to be levelled to the sample surface for each spectrum acquisition. We first demonstrated the basic principle of this complementary approach in a case study using a tilted silica wafer. In a second step, we successfully adapted the two techniques to investigate an extrudate and a lyophilisate as two exemplary solid drug carrier systems. Component distribution analysis with the novel analytical approach was neither hampered by the curvature of the cylindrical extrudate nor the highly structured surface of the lyophilisate. Therefore, the combined analytical approach bears a great potential to be implemented in diversified fields of pharmaceutical sciences.
Lim, Eelin L.; Tomita, Aoy V.; Thilly, William G.; Polz, Martin F.
2001-01-01
A novel quantitative PCR (QPCR) approach, which combines competitive PCR with constant-denaturant capillary electrophoresis (CDCE), was adapted for enumerating microbial cells in environmental samples using the marine nanoflagellate Cafeteria roenbergensis as a model organism. Competitive PCR has been used successfully for quantification of DNA in environmental samples. However, this technique is labor intensive, and its accuracy is dependent on an internal competitor, which must possess the same amplification efficiency as the target yet can be easily discriminated from the target DNA. The use of CDCE circumvented these problems, as its high resolution permitted the use of an internal competitor which differed from the target DNA fragment by a single base and thus ensured that both sequences could be amplified with equal efficiency. The sensitivity of CDCE also enabled specific and precise detection of sequences over a broad range of concentrations. The combined competitive QPCR and CDCE approach accurately enumerated C. roenbergensis cells in eutrophic, coastal seawater at abundances ranging from approximately 10 to 10⁴ cells ml⁻¹. The QPCR cell estimates were confirmed by fluorescent in situ hybridization counts, but estimates of samples with <50 cells ml⁻¹ by QPCR were less variable. This novel approach extends the usefulness of competitive QPCR by demonstrating its ability to reliably enumerate microorganisms at a range of environmentally relevant cell concentrations in complex aquatic samples. PMID:11525983
Absorbance and fluorometric sensing with capillary wells microplates.
Tan, Han Yen; Cheong, Brandon Huey-Ping; Neild, Adrian; Liew, Oi Wah; Ng, Tuck Wah
2010-12-01
Detection and readout from small-volume assays in microplates are a challenge. The capillary wells microplate approach [Ng et al., Appl. Phys. Lett. 93, 174105 (2008)] offers strong advantages in small liquid volume management. An adapted design is described and shown here to be able to detect, in a nonimaging manner, fluorescence and absorbance assays without the error often associated with a meniscus forming at the air-liquid interface. The presence of bubbles in liquid samples residing in microplate wells can cause inaccuracies. Pipetting errors, if not adequately managed, can result in misleading data and wrong interpretations of assay results, particularly in the context of high-throughput screening. We show that the adapted design is also able to detect bubbles and pipetting errors during actual assay runs to ensure accuracy in screening.
Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.
2011-01-01
Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
Wu, Wei Mo; Wang, Jia Qiang; Cao, Qi; Wu, Jia Ping
2017-02-01
Accurate prediction of soil organic carbon (SOC) distribution is crucial for soil resource utilization and conservation, climate change adaptation, and ecosystem health. In this study, we selected a 1300 m × 1700 m solonchak sampling area in the northern Tarim Basin, Xinjiang, China, and collected a total of 144 soil samples (5-10 cm). The objectives of this study were to build a Bayesian geostatistical model to predict SOC content, and to assess the performance of the Bayesian model by comparing it with three other geostatistical approaches [ordinary kriging (OK), sequential Gaussian simulation (SGS), and inverse distance weighting (IDW)]. In the study area, soil organic carbon contents ranged from 1.59 to 9.30 g·kg⁻¹ with a mean of 4.36 g·kg⁻¹ and a standard deviation of 1.62 g·kg⁻¹. The sample semivariogram was best fitted by an exponential model with a nugget-to-sill ratio of 0.57. Using the Bayesian geostatistical approach, we generated the SOC content map and obtained the prediction variance and the upper and lower 95% bounds of SOC content, which were then used to evaluate the prediction uncertainty. The Bayesian geostatistical approach performed better than OK, SGS and IDW, demonstrating the advantages of the Bayesian approach in SOC prediction.
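Of the benchmark interpolators named above, inverse distance weighting is the simplest to sketch; the snippet below is a generic IDW implementation with hypothetical sample locations matching the study's summary statistics, not the authors' code.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse distance weighting: each prediction is a weighted mean of
    observations, with weights decaying as distance**(-power)."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ z_obs) / w.sum(axis=1)

# 144 hypothetical sample points in a 1300 m x 1700 m area, with SOC values
# drawn to match the reported mean (4.36 g/kg) and SD (1.62 g/kg).
rng = np.random.default_rng(1)
xy = rng.uniform([0, 0], [1300, 1700], size=(144, 2))
soc = rng.normal(4.36, 1.62, size=144)
grid = np.stack(np.meshgrid(np.linspace(0, 1300, 27),
                            np.linspace(0, 1700, 35)), -1).reshape(-1, 2)
soc_hat = idw(xy, soc, grid)
```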
Random sampling of elementary flux modes in large-scale metabolic networks.
Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel
2012-09-15
The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well-distributed sample that is representative of the complete set of EMs should be suitable for most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. Contact: dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
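The filtering step that keeps the mode set from exploding can be illustrated in isolation; this fragment assumes the candidate mode combinations for one iteration are already collected in a list, and the function name and keep fraction are illustrative.

```python
import random

def filter_step(new_candidates, keep_fraction, rng=None):
    """From the new mode combinations produced at one iteration of the
    canonical-basis procedure, retain a uniform random subset so that every
    candidate has the same selection probability (keeping the sample
    unbiased) and the working set cannot grow exponentially."""
    rng = rng or random.Random(42)
    k = max(1, int(keep_fraction * len(new_candidates)))
    return rng.sample(new_candidates, k)
```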
ERIC Educational Resources Information Center
Lindsley, Robert Bugden
2011-01-01
A recent movement in international development has seen the expansion of capacity development activities to include adaptive approaches to education. Adaptive approaches are distinct from traditional approaches to education as they seek not only to provide new knowledge, but to cultivate more complex and flexible qualities of mind. Borrowed from…
SKATE: a docking program that decouples systematic sampling from scoring.
Feng, Jianwen A; Marshall, Garland R
2010-11-15
SKATE is a docking prototype that decouples systematic sampling from scoring. This novel approach removes any interdependence between sampling and scoring functions to achieve better sampling and, thus, improves docking accuracy. SKATE systematically samples a ligand's conformational, rotational and translational degrees of freedom, as constrained by a receptor pocket, to find sterically allowed poses. Efficient systematic sampling is achieved by pruning the combinatorial tree using aggregate assembly, discriminant analysis, adaptive sampling, radial sampling, and clustering. Because systematic sampling is decoupled from scoring, the poses generated by SKATE can be ranked by any published, or in-house, scoring function. To test the performance of SKATE, ligands from the Astex/CDCC set, the Surflex set, and the Vertex set, a total of 266 complexes, were redocked to their respective receptors. The results show that SKATE was able to sample poses within 2 Å RMSD of the native structure for 98, 95, and 98% of the cases in the Astex/CDCC, Surflex, and Vertex sets, respectively. Cross-docking accuracy of SKATE was also assessed by docking 10 ligands to thymidine kinase and 73 ligands to cyclin-dependent kinase. 2010 Wiley Periodicals, Inc.
Superresolution restoration of an image sequence: adaptive filtering approach.
Elad, M; Feuer, A
1999-01-01
This paper presents a new method based on adaptive filtering theory for superresolution restoration of continuous image sequences. The proposed methodology suggests least squares (LS) estimators which adapt in time, based on adaptive filters: least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space- and time-variant blurring and arbitrary motion, both of them assumed known. The proposed new approach is shown to have relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented.
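The time-adaptive LS estimators mentioned above build on standard adaptive filters; as a point of reference, here is a generic textbook LMS filter in NumPy (one-dimensional, not the paper's image-sequence formulation).

```python
import numpy as np

def lms(x, d, n_taps=8, mu=0.01):
    """Standard LMS adaptive filter: the weight vector w tracks the
    time-variant relation between input x and desired signal d.
    x, d: 1-D numpy arrays of equal length."""
    w = np.zeros(n_taps)
    y, e = np.zeros(len(x)), np.zeros(len(x))
    for n in range(n_taps, len(x)):
        xn = x[n - n_taps:n][::-1]    # most recent samples first
        y[n] = w @ xn                 # filter output
        e[n] = d[n] - y[n]            # estimation error
        w += 2 * mu * e[n] * xn       # stochastic gradient weight update
    return y, e, w
```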
Biometric templates selection and update using quality measures
NASA Astrophysics Data System (ADS)
Abboud, Ali J.; Jassim, Sabah A.
2012-06-01
To deal with severe variation in recording conditions, most biometric systems acquire multiple biometric samples for the same person at the enrolment stage, then extract the individual biometric feature vectors and store them in the gallery in the form of biometric template(s) labelled with the person's identity. The number of samples/templates and the choice of the most appropriate templates influence the performance of the system. The desired biometric template selection technique must aim to control the run time and storage requirements while improving the recognition accuracy of the biometric system. This paper is devoted to elaborating on and discussing a new two-stage approach for biometric template selection and update. This approach uses quality-based clustering, followed by a special criterion for the selection of an ultimate set of biometric templates from the various clusters. The approach adaptively selects a specific number of templates for each individual, depending mainly on the performance of each individual (i.e., the gallery size is optimised to meet the needs of each target individual). Experiments conducted on two face image databases demonstrate the effectiveness of the proposed quality-guided approach.
An Evidence-Based Public Health Approach to Climate Change Adaptation
Eidson, Millicent; Tlumak, Jennifer E.; Raab, Kristin K.; Luber, George
2014-01-01
Background: Public health is committed to evidence-based practice, yet there has been minimal discussion of how to apply an evidence-based practice framework to climate change adaptation. Objectives: Our goal was to review the literature on evidence-based public health (EBPH), to determine whether it can be applied to climate change adaptation, and to consider how emphasizing evidence-based practice may influence research and practice decisions related to public health adaptation to climate change. Methods: We conducted a substantive review of EBPH, identified a consensus EBPH framework, and modified it to support an EBPH approach to climate change adaptation. We applied the framework to an example and considered implications for stakeholders. Discussion: A modified EBPH framework can accommodate the wide range of exposures, outcomes, and modes of inquiry associated with climate change adaptation and the variety of settings in which adaptation activities will be pursued. Several factors currently limit application of the framework, including a lack of higher-level evidence of intervention efficacy and a lack of guidelines for reporting climate change health impact projections. To enhance the evidence base, there must be increased attention to designing, evaluating, and reporting adaptation interventions; standardized health impact projection reporting; and increased attention to knowledge translation. This approach has implications for funders, researchers, journal editors, practitioners, and policy makers. Conclusions: The current approach to EBPH can, with modifications, support climate change adaptation activities, but there is little evidence regarding interventions and knowledge translation, and guidelines for projecting health impacts are lacking. Realizing the goal of an evidence-based approach will require systematic, coordinated efforts among various stakeholders. Citation: Hess JJ, Eidson M, Tlumak JE, Raab KK, Luber G. 2014. An evidence-based public health approach to climate change adaptation. Environ Health Perspect 122:1177–1186; http://dx.doi.org/10.1289/ehp.1307396 PMID:25003495
ASA-FTL: An adaptive separation aware flash translation layer for solid state drives
Xie, Wei; Chen, Yong; Roth, Philip C
2016-11-03
Here, the flash-memory based Solid State Drive (SSD) presents a promising storage solution for increasingly critical data-intensive applications due to its low latency (high throughput), high bandwidth, and low power consumption. Within an SSD, its Flash Translation Layer (FTL) is responsible for exposing the SSD's flash memory storage to the computer system as a simple block device. The FTL design is one of the dominant factors determining an SSD's lifespan and performance. To reduce the garbage collection overhead and deliver better performance, we propose a new, low-cost, adaptive separation-aware flash translation layer (ASA-FTL) that combines sampling, data clustering and selective caching of recency information to accurately identify and separate hot/cold data while incurring minimal overhead. We use sampling for light-weight identification of separation criteria, and our dedicated selective caching mechanism is designed to save the limited RAM resource in contemporary SSDs. Using simulations of ASA-FTL with both real-world and synthetic workloads, we have shown that our proposed approach reduces the garbage collection overhead by up to 28% and the overall response time by 15% compared to one of the most advanced existing FTLs. We find that the data clustering using a small sample size provides significant performance benefit while only incurring a very small computation and memory cost. In addition, our evaluation shows that ASA-FTL is able to adapt to changes in the access pattern of workloads, which is a major advantage compared to existing fixed data separation methods.
Herrera, Carlos M
2012-01-01
Methods for estimating quantitative trait heritability in wild populations have been developed in recent years which take advantage of the increased availability of genetic markers to reconstruct pedigrees or estimate relatedness between individuals, but their application to real-world data is not exempt from difficulties. This chapter describes a recent marker-based technique which, by adopting a genomic scan approach and focusing on the relationship between phenotypes and genotypes at the individual level, avoids the problems inherent to marker-based estimators of relatedness. This method allows the quantification of the genetic component of phenotypic variance ("degree of genetic determination" or "heritability in the broad sense") in wild populations and is applicable whenever phenotypic trait values and multilocus data for a large number of genetic markers (e.g., amplified fragment length polymorphisms, AFLPs) are simultaneously available for a sample of individuals from the same population. The method proceeds by first identifying those markers whose variation across individuals is significantly correlated with individual phenotypic differences ("adaptive loci"). The proportion of phenotypic variance in the sample that is statistically accounted for by individual differences in adaptive loci is then estimated by fitting a linear model to the data, with trait value as the dependent variable and scores of adaptive loci as independent ones. The method can be easily extended to accommodate quantitative or qualitative information on biologically relevant features of the environment experienced by each sampled individual, in which case estimates of the environmental and genotype × environment components of phenotypic variance can also be obtained.
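A schematic of the two-step estimation could look like the following; the significance threshold, helper names and the use of simple per-marker Pearson tests are simplifying assumptions (the full method involves safeguards such as multiple-testing control that are omitted here).

```python
import numpy as np
from scipy import stats

def broad_sense_h2(markers, trait, alpha=0.05):
    """Sketch of the two-step procedure: (1) keep the markers whose 0/1
    scores correlate with the phenotype ('adaptive loci'); (2) regress the
    trait on those marker scores and report R**2 as the proportion of
    phenotypic variance they statistically account for.
    markers: n_individuals x n_loci 0/1 AFLP matrix; trait: phenotypes."""
    keep = [j for j in range(markers.shape[1])
            if stats.pearsonr(markers[:, j], trait)[1] < alpha]
    X = np.column_stack([np.ones(len(trait)), markers[:, keep]])
    beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
    resid = trait - X @ beta
    return 1 - resid.var() / trait.var(), keep
```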
Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew
2013-09-01
Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of and interpret complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Dynamics of salivary proteins and metabolites during extreme endurance sports - a case study.
Zauber, Henrik; Mosler, Stephan; von Heßberg, Andreas; Schulze, Waltraud X
2012-07-01
As a noninvasively accessible body fluid, saliva is of growing interest in diagnostics. To exemplify the diagnostic potential of saliva, we used a mass spectrometry-based approach to gain insights into the adaptive physiological processes underlying long-lasting endurance work load in a case study. Saliva was collected from a male and a female athlete at four diurnal time points throughout a 1060 km nonstop cycling event. The total sampling time covered 180 h, comprising 62 h of endurance cycling as well as reference samples taken over 3 days before the event and over 2 days after. Altogether, 1405 proteins and 62 metabolites were identified in these saliva samples, of which 203 could be quantified across the majority of the sampling time points. Many proteins show clear diurnal abundance patterns in saliva. In many cases, these patterns were disturbed and altered by the long-term endurance stress. During the stress phase, metabolites of energy mobilization, such as creatinine and glucose, were of high abundance, as were metabolites with antioxidant functions. Lysozyme, amylase, and proteins with redox-regulatory function showed a significant increase in average abundance during the work phase compared to the rest or recovery phases. The recovery phase was characterized by an increased abundance of immunoglobulins. Our work exemplifies the application of high-throughput technologies to understand adaptive processes in human physiology. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Adaptive optics improves multiphoton super-resolution imaging
NASA Astrophysics Data System (ADS)
Zheng, Wei; Wu, Yicong; Winter, Peter; Shroff, Hari
2018-02-01
Three-dimensional (3D) fluorescence microscopy has been essential for biological studies. It allows interrogation of structure and function at spatial scales spanning the macromolecular, cellular, and tissue levels. Critical factors to consider in 3D microscopy include spatial resolution, signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and temporal resolution. Maintaining high-quality imaging becomes progressively more difficult at increasing depth (where optical aberrations, induced by inhomogeneities of refractive index in the sample, degrade resolution and SNR) and in thick or densely labeled samples (where out-of-focus background can swamp the valuable in-focus signal from each plane). In this report, we introduce our new instrumentation to address these problems. A multiphoton structured illumination microscope was modified to integrate an adaptive optics system for optical aberration correction. First, the optical aberrations are determined using direct wavefront sensing with a nonlinear guide star and subsequently corrected using a deformable mirror, restoring super-resolution information. We demonstrate the flexibility of our adaptive optics approach on a variety of semi-transparent samples, including bead phantoms, cultured cells in collagen gels and biological tissues. The performance of our super-resolution microscope is improved in all of these samples, as peak intensity is increased (up to 40-fold) and resolution recovered (up to 176 ± 10 nm laterally and 729 ± 39 nm axially) at depths up to 250 μm from the coverslip surface.
The Impact of Biosampling Procedures on Molecular Data Interpretation*
Sköld, Karl; Alm, Henrik; Scholz, Birger
2013-01-01
The separation between biological and technical variation without extensive use of technical replicates is often challenging, particularly in the context of different forms of protein and peptide modifications. Biosampling procedures in the research laboratory are easier to conduct within a shorter time frame and under controlled conditions as compared with clinical sampling, with the latter often having issues of reproducibility. But is research laboratory biosampling really less variable? Within minutes, biosampling introduces rapid tissue-specific changes in the cellular microenvironment, thus inducing a range of different pathways associated with cell survival. Biosampling involves hypoxia and, depending on the circumstances, hypothermia, circumstances for which there are evolutionarily conserved defense strategies across a range of species and which are also relevant to a range of biomedical conditions. It remains unclear to what extent such adaptive processes are reflected in different biosampling procedures or how important they are for the definition of sample quality. Lately, an increasing number of comparative studies on different biosampling approaches, post-mortem effects, and pre-sampling biological states have investigated such immediate early biosampling effects. Commonalities between biosampling effects and a range of ischemia/reperfusion- and hypometabolism/anoxia-associated biological phenomena indicate that even small variations in post-sampling time intervals are likely to introduce a set of nonrandom and tissue-specific effects of experimental importance (both in vivo and in vitro). This review integrates the information provided by these comparative studies and discusses how an adaptive biological perspective in biosampling procedures may be relevant for sample quality issues. PMID:23382104
Guo, Xiaoting; Sun, Changku; Wang, Peng
2017-08-01
This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high-frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed to handle the sampling rate discrepancy. Between samples of the slow observation data, the observation noise can be regarded as infinite: the Kalman gain is unknown and approaches zero, the residual is also unknown, and so the filter's estimated state cannot be compensated. To obtain compensation at these moments, the state error and residual formulas are modified relative to the moments when observation data are available. A self-propagation equation for the state error is established to propagate this quantity from moments with observations to moments without them. In addition, a multiplicative adjustment factor is introduced as the Kalman gain, which acts on the residual. The filter's estimated state can then be compensated even when no visual observation data are available. The proposed method is tested and verified in a practical setup. Compared with a multi-rate CKF without residual compensation and a single-rate CKF, a significant improvement in attitude measurement is obtained using the proposed multi-rate CKF with inter-sample residual compensation. The experimental results, with superior precision and reliability, show the effectiveness of the proposed method.
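For orientation, a plain multi-rate Kalman filter is shown below: the state is propagated at the fast (inertial) rate and corrected only when a slow (vision) observation arrives; the paper's residual self-propagation and multiplicative gain factor, which compensate the estimate between observations, are not reproduced in this sketch, and the linear model matrices are assumed inputs.

```python
import numpy as np

def multirate_kf(z_slow, ratio, F, H, Q, R, x0, P0):
    """Linear multi-rate Kalman filter: time update at every fast step,
    measurement update only every `ratio`-th step, when a slow
    observation from z_slow is available."""
    x, P, out = x0, P0, []
    for k in range(len(z_slow) * ratio):
        x = F @ x                          # fast-rate time update
        P = F @ P @ F.T + Q
        if (k + 1) % ratio == 0:           # slow-rate measurement update
            z = z_slow[(k + 1) // ratio - 1]
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```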
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite matrix and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches with respect to both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
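A much-simplified stand-in for the first approach is an eigendecomposition-based split of the sample covariance into a rank-r clutter term plus a scaled-identity noise term; the paper instead solves the constrained RMP with trace minimization and a matrix shrinkage operator, so the sketch below is only illustrative of the low-rank-plus-diagonal structure.

```python
import numpy as np

def lowrank_plus_diag(R_sample, clutter_rank):
    """Split a sample covariance into a rank-r clutter component (top-r
    eigenpairs) plus a diagonal noise component (average of the remaining
    eigenvalues). A crude stand-in for the paper's RMP solution."""
    vals, vecs = np.linalg.eigh(R_sample)            # ascending eigenvalues
    top = slice(-clutter_rank, None)
    R_clutter = (vecs[:, top] * vals[top]) @ vecs[:, top].conj().T
    noise_power = vals[:-clutter_rank].mean()
    return R_clutter, noise_power * np.eye(len(R_sample))
```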
Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Burken, John; Ishihara, Abraham
2011-01-01
This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.
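For illustration, the least-squares Chebyshev approximation step can be reproduced with NumPy's polynomial module; this shows only the function-approximation component, not the adaptive control law or the flight-control simulations, and the target nonlinearity is made up.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1, 1, 200)
f = np.exp(x) * np.sin(3 * x)        # made-up unknown nonlinearity to model
coeffs = C.chebfit(x, f, deg=8)      # least-squares Chebyshev coefficients
f_hat = C.chebval(x, coeffs)         # reconstructed approximation
print(np.max(np.abs(f - f_hat)))     # small residual for a smooth target
```

The orthogonality of the Chebyshev basis is what keeps the normal equations well conditioned here, which is the convergence advantage the abstract attributes to orthogonal basis functions.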
Astrobiology Objectives for Mars Sample Return
NASA Astrophysics Data System (ADS)
Meyer, M. A.
2002-05-01
Astrobiology is the study of life in the Universe, and a major objective is to understand the past, present, and future biologic potential of Mars. The current Mars Exploration Program encompasses a series of missions for reconnaissance and in-situ analyses to define, in time and space, the degree of habitability on Mars. Determining whether life ever existed on Mars is a more demanding question, as evidenced by controversies concerning the biogenicity of features in the Mars meteorite ALH84001 and in the earliest rocks on Earth. In-situ studies may find samples of extreme interest, but resolution of the life question would most probably require a sample returned to Earth. A sample returned from Mars has many advantages: state-of-the-art instruments, precision sample handling and processing, scrutiny by different investigators employing different techniques, and adaptation of the approach to any surprises. It is with a returned sample from Mars that Astrobiology has the most to gain in determining whether life did, does, or could exist on Mars.
Sample rotating turntable kit for infrared spectrometers
Eckels, Joel Del [Livermore, CA; Klunder, Gregory L [Oakland, CA
2008-03-04
An infrared spectrometer sample rotating turntable kit has a rotatable sample cup containing the sample. The infrared spectrometer has an infrared spectrometer probe for analyzing the sample and the rotatable sample cup is adapted to receive the infrared spectrometer probe. A reflectance standard is located in the rotatable sample cup. A sleeve is positioned proximate the sample cup and adapted to receive the probe. A rotator rotates the rotatable sample cup. A battery is connected to the rotator.
Repurposing a Benchtop Centrifuge for High-Throughput Single-Molecule Force Spectroscopy.
Yang, Darren; Wong, Wesley P
2018-01-01
We present high-throughput single-molecule manipulation using a benchtop centrifuge, overcoming limitations common in other single-molecule approaches such as high cost, low throughput, technical difficulty, and strict infrastructure requirements. An inexpensive and compact Centrifuge Force Microscope (CFM) adapted to a commercial centrifuge enables use by nonspecialists, and integration with DNA nanoswitches facilitates both reliable measurements and repeated molecular interrogation. Here, we provide detailed protocols for constructing the CFM, creating DNA nanoswitch samples, and carrying out single-molecule force measurements.
Distinguishing ferritin from apoferritin using magnetic force microscopy
NASA Astrophysics Data System (ADS)
Nocera, Tanya M.; Zeng, Yuzhi; Agarwal, Gunjan
2014-11-01
Estimating the amount of iron-replete ferritin versus iron-deficient apoferritin proteins is important in biomedical and nanotechnology applications. This work introduces a simple and novel approach to quantify ferritin by using magnetic force microscopy (MFM). We demonstrate how high magnetic moment probes enhance the magnitude of MFM signal, thus enabling accurate quantitative estimation of ferritin content in ferritin/apoferritin mixtures in vitro. We envisage MFM could be adapted to accurately determine ferritin content in protein mixtures or in small aliquots of clinical samples.
Carbonaceous Chondrite Thin Section Preparation
NASA Technical Reports Server (NTRS)
Harrington, R.; Righter, K.
2017-01-01
Carbonaceous chondrite meteorites have long posed a challenge for thin section makers. The variability in sample hardness among the different types, and sometimes within individual sections, creates the need for an adaptable approach at each step of the thin section making process. This poster will share some of the procedural adjustments that have proven to be successful at the NASA JSC Meteorite Thin Section Laboratory. These adjustments are modifications of preparation methods that have been in use for decades and therefore do not require investment in new technology or materials.
Efficient method for computing the electronic transport properties of a multiterminal system
NASA Astrophysics Data System (ADS)
Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio
2018-04-01
We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.
Autonomous Sensing of Layered Structures in Hawaiian Waters
2008-01-01
layers in the sea. APPROACH: In March of 2007 we were awarded $112,842 for the fabrication of an autonomous profiler (the SeaHorse) for the … detection of thin layers of phytoplankton in the coastal ocean. The SeaHorse (Figures 1, 2) makes use of wave energy to power extended, high-resolution … to adaptively change the sample rate of the SeaHorse profiler itself. For example, if we observe a layer at 10 m depth, we can instruct the profiler
Ion source design for industrial applications
NASA Technical Reports Server (NTRS)
Kaufman, H. R.; Robinson, R. S.
1981-01-01
The design of broad-beam industrial ion sources is described. The approach used emphasizes refractory metal cathodes and permanent-magnet multipole discharge chambers. Design procedures and sample calculations are given for the discharge chamber, ion optics, cathodes, and magnetic circuit. Hardware designs are included for the isolator, cathode supports, anode supports, pole-piece assembly, and ion-optics supports. There are other ways of designing most ion source components, but the designs presented are representative of current technology and adaptable to a wide range of configurations.
Fast Adaptive Least Trimmed Squares for Robust Evaluation of Quality of Experience
2014-07-01
… fact that not every Internet user is trustworthy. In other words, due to the lack of supervision when subjects perform experiments in crowdsourcing, they … [21], [22], etc. However, a major challenge of crowdsourcing QoE evaluation is that not every Internet user is trustworthy. That is, some raters try … regularization paths of the LASSO problem could provide an order on the samples most likely to be outliers. Such an approach is inspired by Huber's celebrated work on …
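Since the record centers on least trimmed squares, a plain LTS fit via random starts and concentration steps (the core of the classic FAST-LTS scheme; the paper's fast adaptive variant builds further machinery on top) can be sketched as follows, with all parameter defaults illustrative.

```python
import numpy as np

def lts_fit(X, y, h=None, n_starts=50, n_csteps=20, seed=0):
    """Least trimmed squares: fit OLS to the h observations with smallest
    squared residuals, iterating C-steps until the active subset (and hence
    the fit) stabilizes; keep the best of several random starts."""
    n, p = X.shape
    h = h or (n + p + 1) // 2
    rng = np.random.default_rng(seed)
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        subset = rng.choice(n, size=p + 1, replace=False)   # elemental start
        for _ in range(n_csteps):
            beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
            resid2 = (y - X @ beta) ** 2
            new_subset = np.argsort(resid2)[:h]             # C-step
            if set(new_subset) == set(subset):
                break
            subset = new_subset
        obj = np.sort((y - X @ beta) ** 2)[:h].sum()        # trimmed SSE
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta
```

Untrustworthy raters then show up as the trimmed-away observations with the largest residuals, which matches the outlier-ordering role the fragment attributes to the LASSO regularization paths.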
Malinina, E S; Andreeva, I G
2013-01-01
The perceptual characteristics of withdrawing and approaching sound sources, and their influence on auditory aftereffects, were studied in the free field. The radial movement of the auditory adapting stimuli was simulated by two methods: (1) by oppositely directed simultaneous amplitude changes of wideband signals at two loudspeakers placed 1.1 and 4.5 m from the listener; (2) by an increase or a decrease of the wideband noise amplitude of the impulses at one of the loudspeakers, whether close or distant. The radial auditory movement of the test stimuli was simulated using the first method. Nine listeners estimated the direction of test stimulus movement without adaptation (control) and after adaptation. Adapting stimuli were stationary, slowly moving with a sound level variation of 2 dB, or rapidly moving with a variation of 12 dB. The percentage of "withdrawing" responses was used to construct psychometric curves. Three perceptual phenomena were found. A growing-louder effect was shown in the control series without adaptation; it was characterized by a decrease in the number of "withdrawing" responses and overestimation of test stimuli as approaching. Position-dependent aftereffects were observed after adaptation to stationary and slowly moving sound stimuli. This aftereffect was manifested as an increase in the number of "withdrawing" responses and overestimation of test stimuli as withdrawing, and it was reduced as the distance between the listener and the loudspeaker increased. Movement aftereffects were revealed after adaptation to rapidly moving stimuli. These aftereffects were direction-dependent: the number of "withdrawing" responses increased after adaptation to approach, whereas after adaptation to withdrawal it decreased relative to control. The movement aftereffects were more pronounced when adapting stimulus movement was simulated by the first method, in which case the listener could determine the starting and finishing points of the movement trajectory. Interaction of movement aftereffects with the growing-louder effect was absent for all modes of presentation of the adapting stimuli. With increasing distance to the source of the adapting stimuli, the approach aftereffect tended to decrease and the withdrawal aftereffect tended to increase.
Time-Based Indicators of Emotional Complexity: Interrelations and Correlates
Grühn, Daniel; Lumley, Mark A.; Diehl, Manfred; Labouvie-Vief, Gisela
2012-01-01
Emotional complexity has been regarded as one correlate of adaptive emotion regulation in adulthood. One novel and potentially valuable approach to operationalizing emotional complexity is to use reports of emotions obtained repeatedly in real time, which can generate a number of potential time-based indicators of emotional complexity. It is not known, however, how these indicators relate to each other, to other measures of affective complexity, such as those derived from a cognitive-developmental view of emotional complexity, or to measures of adaptive functioning, such as well-being. A sample of 109 adults, aged 23 to 90 years, participated in an experience-sampling study and reported their negative and positive affect five times a day for one week. Based on these reports, we calculated nine different time-based indicators potentially reflecting emotional complexity. Analyses showed three major findings: First, the indicators showed a diverse pattern of interrelations suggestive of four distinct components of emotional complexity. Second, age was generally not related to time-based indicators of emotional complexity; however, older adults showed overall low variability in negative affect. Third, time-based indicators of emotional complexity were either unrelated or inversely related to measures of adaptive functioning; that is, these measures tended to predict a less adaptive profile, such as lower subjective and psychological well-being. In sum, time-based indicators of emotional complexity displayed a more complex and less beneficial picture than originally thought. In particular, variability in negative affect seems to indicate suboptimal adjustments. Future research would benefit from collecting empirical data for the interrelations and correlates of time-based indicators of emotional complexity in different contexts. PMID:23163712
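As an illustration of what time-based indicators look like computationally, the sketch below derives four commonly used candidates (mean level, intraindividual variability, PA-NA covariation, and affect instability) from a simulated experience-sampling record. The paper's nine indicators are not enumerated in the abstract, so these four are generic examples from this literature, not necessarily the authors' set.

```python
import numpy as np

# Toy experience-sampling record: 35 observations (5/day x 7 days) of
# positive and negative affect on a 1-5 scale for one participant.
rng = np.random.default_rng(7)
pa = np.clip(rng.normal(3.5, 0.6, 35), 1, 5)
na = np.clip(rng.normal(1.8, 0.4, 35), 1, 5)

indicators = {
    "pa_mean": pa.mean(),
    "na_variability": na.std(ddof=1),                 # intraindividual SD of negative affect
    "pa_na_covariation": np.corrcoef(pa, na)[0, 1],   # bipolarity of PA and NA
    "na_instability": np.mean(np.abs(np.diff(na))),   # mean successive difference
}
print(indicators)
```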
Deep learning with domain adaptation for accelerated projection-reconstruction MR.
Han, Yoseob; Yoo, Jaejun; Kim, Hak Hee; Shin, Hee Jung; Sung, Kyunghyun; Ye, Jong Chul
2018-09-01
The radial k-space trajectory is a well-established sampling trajectory used in conjunction with magnetic resonance imaging. However, the radial k-space trajectory requires a large number of radial lines for high-resolution reconstruction. Increasing the number of radial lines causes longer acquisition time, making it more difficult for routine clinical use. On the other hand, if we reduce the number of radial lines, streaking artifact patterns are unavoidable. To solve this problem, we propose a novel deep learning approach with domain adaptation to restore high-resolution MR images from under-sampled k-space data. The proposed deep network removes the streaking artifacts from the artifact-corrupted images. To address the situation of limited available data, we propose a domain adaptation scheme that employs a network pre-trained using a large number of X-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets. The proposed method outperforms existing compressed sensing algorithms, such as the total variation and PR-FOCUSS methods. In addition, the calculation time is several orders of magnitude faster than the total variation and PR-FOCUSS methods. Moreover, we found that pre-training using CT or MR data from a similar organ is more important than pre-training using data from the same modality for a different organ. We demonstrate the possibility of domain adaptation when only a limited amount of MR data is available. The proposed method surpasses the existing compressed sensing algorithms in terms of image quality and computation time. © 2018 International Society for Magnetic Resonance in Medicine.
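A minimal sketch of the fine-tuning side of such a domain adaptation scheme, assuming a toy artifact-removal CNN and randomly generated stand-in data (the paper's actual architecture and training pipeline are not reproduced here):

```python
import torch
import torch.nn as nn

# Toy artifact-removal CNN standing in for the paper's deep network.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

# Phase 1 (assumed done): pre-training on plentiful CT / synthetic radial data.
# Phase 2: adapt to the target MR domain -- freeze the earliest layer and
# fine-tune the rest on a handful of (streaky, artifact-free) MR pairs.
for p in net[0].parameters():
    p.requires_grad = False
opt = torch.optim.Adam([p for p in net.parameters() if p.requires_grad], lr=1e-4)
loss_fn = nn.MSELoss()

streaky = torch.randn(4, 1, 64, 64)   # stand-in undersampled-reconstruction batch
clean = torch.randn(4, 1, 64, 64)     # stand-in fully sampled targets
for _ in range(10):                   # a few fine-tuning steps
    opt.zero_grad()
    loss = loss_fn(net(streaky), clean)
    loss.backward()
    opt.step()
print(f"final fine-tuning loss: {loss.item():.4f}")
```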
SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry
NASA Astrophysics Data System (ADS)
Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.
2007-03-01
A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach also is advantageous for generating initial structures for global optimizations via genetic algorithm and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.
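The following sketch illustrates the flavor of symmetry-constrained candidate generation, assuming a mirror plane (Cs symmetry) and invented parameters; the actual SASS implementation and its framework-group bookkeeping are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_cs_structure(n_pairs, n_in_plane, box=3.0):
    """Random candidate geometry constrained to Cs symmetry (mirror plane z = 0)."""
    above = rng.uniform(-box, box, size=(n_pairs, 3))
    above[:, 2] = np.abs(above[:, 2]) + 0.1           # keep strictly above the plane
    mirrored = above * np.array([1.0, 1.0, -1.0])     # reflected partner atoms
    in_plane = rng.uniform(-box, box, size=(n_in_plane, 3))
    in_plane[:, 2] = 0.0                              # atoms lying on the mirror plane
    return np.vstack([above, mirrored, in_plane])

coords = random_cs_structure(n_pairs=2, n_in_plane=3)  # a 7-atom Cs candidate
print(coords)
```

Splitting atoms into those on the symmetry element and mirror-related pairs echoes the framework-group partitioning described above.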
NASA Technical Reports Server (NTRS)
Sarture, Charles M.; Chovit, Christopher J.; Chrien, Thomas G.; Eastwood, Michael L.; Green, Robert O.; Kurzwell, Charles G.
1998-01-01
From 1987 through 1997 the Airborne Visible-InfraRed Imaging Spectrometer has matured into a remote sensing instrument capable of producing prodigious amounts of high quality data. Using the NASA/Ames ER-2 high altitude aircraft platform, flight operations have become very reliable as well. Being exclusively dependent on the ER-2, however, has limitations: the ER-2 has a narrow cruise envelope which fixes the AVIRIS ground pixel at 20 meters; it requires a significant support infrastructure; and it has a very limited number of bases it can operate from. In the coming years, the ER-2 will also become less available for AVIRIS flights as NASA Earth Observing System satellite underflights increase. Adapting AVIRIS to lower altitude, less specialized aircraft will create a much broader envelope for data acquisition, i.e., higher ground geometric resolution while maintaining nearly the ideal spatial sampling. This approach will also greatly enhance flexibility while decreasing the overall cost of flight operations and field support. Successful adaptation is expected to culminate with a one-month period of demonstration flights.
Adaptive metric learning with deep neural networks for video-based facial expression recognition
NASA Astrophysics Data System (ADS)
Liu, Xiaofeng; Ge, Yubin; Yang, Chao; Jia, Ping
2018-01-01
Video-based facial expression recognition has become increasingly important for many real-world applications. Although numerous efforts have been made for single sequences, balancing the complex distribution of intra- and interclass variations between sequences remains a great difficulty in this area. We propose the adaptive (N+M)-tuplet clusters loss function and optimize it jointly with the softmax loss during the training phase. The variations introduced by personal attributes are alleviated using similarity measurements of multiple samples in the feature space, with far fewer comparisons than conventional deep metric learning approaches, which enables metric calculations for large-data applications (e.g., videos). Both the spatial and temporal relations are explored by a unified framework that consists of an Inception-ResNet network with long short-term memory and a two-branch fully connected layer structure. Our proposed method has been evaluated on three well-known databases, and the experimental results show that it outperforms many state-of-the-art approaches.
New realisation of Preisach model using adaptive polynomial approximation
NASA Astrophysics Data System (ADS)
Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young
2012-09-01
Modelling systems with hysteresis has received considerable attention recently due to increasingly demanding accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model of hysteresis; it can be represented by an infinite but countable set of first-order reversal curves (FORCs). The use of look-up tables is one way to realise the CPM in practice, with table entries corresponding to samples of a finite number of FORCs. This approach, however, faces two major problems: first, it requires a large amount of memory to obtain an accurate prediction of hysteresis; second, it is difficult to modify the data table efficiently to reflect the time-varying behaviour of elements with hysteresis. To overcome these problems, this article proposes using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory for data storage. Furthermore, the polynomial coefficients can be obtained accurately by least-squares approximation or by an adaptive identification algorithm, which also offers the possibility of accurately tracking the parameters of the hysteresis model.
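A minimal numerical sketch of the memory argument, assuming a synthetic FORC and a hand-picked polynomial order (the article's adaptive identification scheme is not reproduced):

```python
import numpy as np

# Synthetic first-order reversal curve (FORC) samples: field H vs. response M.
H = np.linspace(-1.0, 1.0, 41)
M = np.tanh(3.0 * H) + 0.02 * np.random.default_rng(3).normal(size=H.size)

# Replace the look-up table for this FORC with one low-order polynomial.
coeffs = np.polyfit(H, M, deg=7)      # least-squares coefficients
M_hat = np.polyval(coeffs, H)

rmse = np.sqrt(np.mean((M - M_hat) ** 2))
print(f"7th-order fit stores {coeffs.size} numbers instead of {H.size}; RMSE={rmse:.4f}")
```

An adaptive identification algorithm (e.g., recursive least squares) would update `coeffs` online as new samples arrive, which is what allows tracking of time-varying hysteresis parameters.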
Adapting to living with a mechanical aortic heart valve: a phenomenographic study.
Oterhals, Kjersti; Fridlund, Bengt; Nordrehaug, Jan Erik; Haaverstad, Rune; Norekvål, Tone M
2013-09-01
To describe how patients adapt to living with a mechanical aortic heart valve. Aortic valve replacement with a mechanical prosthesis is preferred for patients with a life expectancy of more than 10 years, as mechanical prostheses are more durable than bioprosthetic valves. Mechanical valves have some disadvantages, such as a higher risk of thrombosis and embolism, an increased risk of bleeding related to lifelong oral anticoagulation treatment, and noise from the valve. An explorative design with a phenomenographic approach was employed. Interviews were conducted over 4 months during 2010-2011 with 20 strategically sampled patients, aged 24-74 years, having undergone aortic valve replacement with a mechanical prosthesis during the last 10 years. Patients adapted to living with a mechanical aortic heart valve in four ways: 'The competent patient' wanted to stay in control of his/her life. 'The adjusted patient' considered the implications of having a mechanical aortic valve as part of his/her daily life. 'The unaware patient' was not aware of warfarin-diet-medication interactions. 'The worried patient' was bothered by the oral anticoagulation and annoyed by the sound of the valve. Patients moved between the different ways of adapting. The oral anticoagulation therapy was considered the most troublesome consequence, but the sound of the valve was also difficult to accept. Patient counselling and adequate follow-up can make patients with mechanical aortic heart valves more confident and competent to manage their own health. We recommend that patients participate in a rehabilitation programme following cardiac surgery. © 2013 Blackwell Publishing Ltd.
Bass, Judith K; Ryder, Robert W; Lammers, Marie-Christine; Mukaba, Thibaut N; Bolton, Paul A
2008-12-01
To determine if a post-partum depression syndrome exists among mothers in Kinshasa, Democratic Republic of Congo, by adapting and validating standard screening instruments. Using qualitative interviewing techniques, we interviewed a convenience sample of 80 women living in a large peri-urban community to better understand local conceptions of mental illness. We used this information to adapt two standard depression screeners, the Edinburgh Post-partum Depression Scale and the Hopkins Symptom Checklist. In a subsequent quantitative study, we identified another 133 women with and without the local depression syndrome and used this information to validate the adapted screening instruments. Based on the qualitative data, we found a local syndrome that closely approximates the Western model of major depressive disorder. The women we interviewed, representative of the local populace, considered this an important syndrome among new mothers because it negatively affects women and their young children. Women (n = 41) identified as suffering from this syndrome had statistically significantly higher depression severity scores on both adapted screeners than women identified as not having this syndrome (n = 20; P < 0.0001). When it is unclear or unknown if Western models of psychopathology are appropriate for use in the local context, these models must be validated to ensure cross-cultural applicability. Using a mixed-methods approach we found a local syndrome similar to depression and validated instruments to screen for this disorder. As the importance of compromised mental health in developing world populations becomes recognized, the methods described in this report will be useful more widely.
Flight Test Approach to Adaptive Control Research
NASA Technical Reports Server (NTRS)
Pavlock, Kate Maureen; Less, James L.; Larson, David Nils
2011-01-01
The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.
Faster and less phototoxic 3D fluorescence microscopy using a versatile compressed sensing scheme
Woringer, Maxime; Darzacq, Xavier; Zimmer, Christophe
2017-01-01
Three-dimensional fluorescence microscopy based on Nyquist sampling of focal planes faces harsh trade-offs between acquisition time, light exposure, and signal-to-noise. We propose a 3D compressed sensing approach that uses temporal modulation of the excitation intensity during axial stage sweeping and can be adapted to fluorescence microscopes without hardware modification. We describe implementations on a lattice light sheet microscope and an epifluorescence microscope, and show that images of beads and biological samples can be reconstructed with a 5-10 fold reduction of light exposure and acquisition time. Our scheme opens a new door towards faster and less damaging 3D fluorescence microscopy. PMID:28788909
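The reconstruction step of such a compressed sensing scheme can be sketched with a generic iterative soft-thresholding (ISTA) solver on a toy sparse signal; the sensing matrix below is a random stand-in, not the paper's temporal-modulation model:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 200, 60                        # signal length, number of measurements
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(0, 1, 8)  # sparse "fluorophores"
A = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in for the modulated sensing operator
y = A @ x_true

# ISTA: iterative soft-thresholding for min ||Ax - y||^2 + lam * ||x||_1
lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = x - step * grad
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold

print("support recovered:", np.nonzero(np.abs(x) > 0.05)[0])
```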
Embedding clinical interventions into observational studies
Newman, Anne B.; Avilés-Santa, M. Larissa; Anderson, Garnet; Heiss, Gerardo; Howard, Wm. James; Krucoff, Mitchell; Kuller, Lewis H.; Lewis, Cora E.; Robinson, Jennifer G.; Taylor, Herman; Treviño, Roberto P.; Weintraub, William
2017-01-01
Novel approaches to observational studies and clinical trials could improve the cost-effectiveness and speed of translation of research. Hybrid designs that combine elements of clinical trials with observational registries or cohort studies should be considered as part of a long-term strategy to transform clinical trials and epidemiology, adapting to the opportunities of big data and the challenges of constrained budgets. Important considerations include study aims, timing, breadth and depth of the existing infrastructure that can be leveraged, participant burden, likely participation rate and available sample size in the cohort, required sample size for the trial, and investigator expertise. Community engagement and stakeholder (including study participants) support are essential for these efforts to succeed. PMID:26611435
González-Bueno, Javier; Calvo-Cidoncha, Elena; Sevilla-Sánchez, Daniel; Espaulella-Panicot, Joan; Codina-Jané, Carles; Santos-Ramos, Bernardo
2017-10-01
To translate the ARMS scale into Spanish, ensuring cross-cultural equivalence for measuring medication adherence in polypathological patients. Translation, cross-cultural adaptation and pilot testing. Secondary hospital. (i) Forward and blind-back translations followed by cross-cultural adaptation through qualitative methodology to ensure conceptual, semantic and content equivalence between the original scale and the Spanish version. (ii) Pilot testing in non-institutionalized polypathological patients to assess the instrument for clarity. The Spanish version of the ARMS scale was obtained. Overall scores from the translators involved in the forward and blind-back translations were consistent with a low difficulty in assuring conceptual equivalence between both languages. Pilot testing (cognitive debriefing) in a sample of 40 non-institutionalized polypathological patients admitted to an internal medicine department of a secondary hospital showed excellent clarity. The ARMS-e scale is a Spanish-adapted version of the ARMS scale, suitable for measuring adherence in polypathological patients. Its structure enables a multidimensional approach to non-adherence, allowing the implementation of individualized interventions guided by the barriers detected in each patient. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
Cross-cultural adaptation of the Work Role Functioning Questionnaire 2.0 to Norwegian and Danish.
Johansen, Thomas; Lund, Thomas; Jensen, Chris; Momsen, Anne-Mette Hedeager; Eftedal, Monica; Øyeflaten, Irene; Braathen, Tore N; Stapelfeldt, Christina M; Amick, Ben; Labriola, Merete
2018-01-01
A healthy and productive working life has attracted attention owing to future employment and demographic challenges. The aim was to translate and adapt the Work Role Functioning Questionnaire (WRFQ) 2.0 to Norwegian and Danish. The WRFQ is a self-administered tool developed to identify health-related work limitations. Standardised cross-cultural adaptation procedures were followed in both countries' translation processes. Direct translation, synthesis, back translation and consolidation were carried out successfully. A pre-test among 78 employees who had returned to work after sickness absence found idiomatic issues requiring reformulation in the instructions, in four items of the Norwegian version, and in three items of the Danish version. In the final versions, seven items were adjusted in each country. Psychometric properties were analysed for the Norwegian sample (n = 40) and preliminary Cronbach's alpha coefficients were satisfactory. A final consensus process was performed to achieve similar titles and introductions. The WRFQ 2.0 cross-cultural adaptation to Norwegian and Danish was performed and consensus was obtained. Future validation studies will examine validity, reliability, responsiveness and differential item response. The WRFQ can be used to elucidate both individual and work-environment factors, leading to a more holistic approach in work rehabilitation.
Tsang, Tawny; Gillespie-Lynch, Kristen; Hutman, Ted
2016-01-01
Subclinical variants of the social-communicative challenges and rigidity that define autism spectrum disorder (ASD) are known as the broader autism phenotype (BAP). The BAP has been conceptualized categorically (as specific to a subset of relatives of individuals with ASD) and dimensionally (as continuously distributed within the general population). The current study examined the compatibility of these two approaches by assessing associations among autism symptoms and social-communicative skills in young school-age children with ASD, children who have a sibling with ASD, and children without a sibling with ASD. Autism symptoms were associated with reduced Theory of Mind (ToM), adaptive skills, cognitive empathy, and language skills across the full sample. Reduced ToM was a core aspect of the BAP in the current sample regardless of whether the BAP was defined categorically (in terms of siblings of children with ASD who exhibited atypical developmental) or dimensionally (in terms of associations with autism symptoms across the entire sample). Early language skills predicted school-age ToM. Findings support the compatibility of categorical and dimensional approaches to the BAP, highlight reduced ToM as a core aspect of the school-age BAP, and suggest that narrative-based approaches to promoting ToM may be beneficial for siblings of children with ASD.
Climate change adaptation frameworks: an evaluation of plans for coastal Suffolk, UK
NASA Astrophysics Data System (ADS)
Armstrong, J.; Wilby, R.; Nicholls, R. J.
2015-11-01
This paper asserts that three principal frameworks for climate change adaptation can be recognised in the literature: scenario-led (SL), vulnerability-led (VL) and decision-centric (DC) frameworks. A criterion is developed to differentiate these frameworks in recent adaptation projects. The criterion features six key hallmarks as follows: (1) use of climate model information; (2) analysis of metrics/units; (3) socio-economic knowledge; (4) stakeholder engagement; (5) adaptation of implementation mechanisms; (6) tier of adaptation implementation. The paper then tests the validity of this approach using adaptation projects on the Suffolk coast, UK. Fourteen adaptation plans were identified in an online survey. They were analysed in relation to the hallmarks outlined above and assigned to an adaptation framework. The results show that while some adaptation plans are primarily SL, VL or DC, the majority are hybrid, showing a mixture of DC/VL and DC/SL characteristics. Interestingly, the SL/VL combination is not observed, perhaps because the DC framework is intermediate and attempts to overcome weaknesses of both SL and VL approaches. The majority (57 %) of adaptation projects generated a risk assessment or advice notes. Further development of this type of framework analysis would allow better guidance on approaches for organisations when implementing climate change adaptation initiatives, and other similar proactive long-term planning.
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL also provides two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
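As a crude illustration of the variogram idea underlying VARS (this is not VARS-TOOL code; the toy model, the lag, and the tolerances are invented), one can draw a Latin hypercube sample and estimate a single-lag directional variogram per parameter:

```python
import numpy as np
from scipy.stats import qmc

def model(X):
    # Toy 3-parameter model standing in for an EESM simulation.
    return np.sin(X[:, 0]) + 5.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]

sampler = qmc.LatinHypercube(d=3, seed=42)
X = sampler.random(512)
Y = model(X)

def directional_gamma(X, Y, dim, h=0.1, tol=0.05):
    """Crude directional variogram at lag h: average squared response
    difference over point pairs separated by ~h along `dim` only."""
    diffs = []
    for i in range(len(X)):
        d = np.abs(X - X[i])
        mask = (np.abs(d[:, dim] - h) < tol) & (np.delete(d, dim, axis=1) < 0.2).all(axis=1)
        diffs.extend(0.5 * (Y[mask] - Y[i]) ** 2)
    return np.mean(diffs) if diffs else np.nan

for dim in range(3):
    print(f"gamma_{dim}(0.1) = {directional_gamma(X, Y, dim):.4f}")
```

IVARS would integrate such variograms across a range of lags; this single-lag estimate only conveys the flavor of the metric.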
Yang, Juan; Li, Wenhua; Liu, Siyuan; Yuan, Dongya; Guo, Yijiao; Jia, Cheng; Song, Tusheng; Huang, Chen
2016-01-01
We aimed to identify serum biomarkers for screening, at sea level, individuals who could adapt to high-altitude hypoxia. HHA (high-altitude hypoxia acclimated; n = 48) and HHI (high-altitude hypoxia illness; n = 48) groups were distinguished at high altitude, and routine blood tests were performed for both groups at high altitude and at sea level. Serum biomarkers were identified by comparing serum peptidome profiles between the HHI and HHA groups collected at sea level. Routine blood tests revealed that the concentrations of hemoglobin and red blood cells were significantly higher in HHI than in HHA at high altitude. Serum peptidome profiling revealed ten significantly differentially expressed peaks between HHA and HHI at sea level. Three potential serum peptide peaks (m/z values: 1061.91, 1088.33, 4057.63) were further identified by sequencing as regions of the inter-α trypsin inhibitor heavy chain H4 fragment (ITIH4 347–356), regions of the inter-α trypsin inhibitor heavy chain H1 fragment (ITIH1 205–214), and isoform 1 of the fibrinogen α chain precursor (FGA 588–624). Expression of their full proteins was also tested by ELISA in HHA and HHI samples collected at sea level. Our study provides a novel approach for identifying potential biomarkers for screening people at sea level who can adapt to high altitudes. PMID:27150491
Peltola, Tomi; Marttinen, Pekka; Vehtari, Aki
2012-01-01
High-dimensional datasets with large amounts of redundant information are nowadays available for hypothesis-free exploration of scientific questions. A particular case is genome-wide association analysis, where variations in the genome are searched for effects on disease or other traits. Bayesian variable selection has been demonstrated as a possible analysis approach, which can account for the multifactorial nature of the genetic effects in a linear regression model. Yet, the computation presents a challenge and application to large-scale data is not routine. Here, we study aspects of the computation using the Metropolis-Hastings algorithm for the variable selection: finite adaptation of the proposal distributions, multistep moves for changing the inclusion state of multiple variables in a single proposal and multistep move size adaptation. We also experiment with a delayed rejection step for the multistep moves. Results on simulated and real data show increase in the sampling efficiency. We also demonstrate that with application specific proposals, the approach can overcome a specific mixing problem in real data with 3822 individuals and 1,051,811 single nucleotide polymorphisms and uncover a variant pair with synergistic effect on the studied trait. Moreover, we illustrate multimodality in the real dataset related to a restrictive prior distribution on the genetic effect sizes and advocate a more flexible alternative. PMID:23166669
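A compact stand-in for the kind of sampler discussed, a Metropolis-Hastings chain over inclusion indicators with multistep flip moves and a single, finite adaptation of the move size, is sketched below. It scores models with BIC rather than the paper's Bayesian marginal likelihood, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(11)
n, p = 150, 40
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[3, 17, 28]] = [1.5, -2.0, 1.0]
y = X @ beta + rng.normal(size=n)

def neg_half_bic(gamma):
    """Model score: -BIC/2 as a crude stand-in for the log marginal likelihood."""
    k = int(gamma.sum())
    if k == 0:
        rss = np.sum((y - y.mean()) ** 2)
    else:
        Xs = X[:, gamma]
        coef, rss_arr, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = rss_arr[0] if rss_arr.size else np.sum((y - Xs @ coef) ** 2)
    return -0.5 * (n * np.log(rss / n) + (k + 1) * np.log(n))

gamma = np.zeros(p, dtype=bool)
score = neg_half_bic(gamma)
move_size, accepts = 3, []
for it in range(3000):
    prop = gamma.copy()
    flip = rng.choice(p, size=rng.integers(1, move_size + 1), replace=False)
    prop[flip] = ~prop[flip]                 # multistep move: flip several bits at once
    s = neg_half_bic(prop)
    if np.log(rng.uniform()) < s - score:    # symmetric proposal => simple MH ratio
        gamma, score = prop, s
        accepts.append(1)
    else:
        accepts.append(0)
    if it == 500:                            # finite adaptation: tune once, then freeze
        move_size = max(1, move_size - 1) if np.mean(accepts) < 0.2 else move_size + 1

print("selected variables:", np.nonzero(gamma)[0])
```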
Klemm, Matthias; Schweitzer, Dietrich; Peters, Sven; Sauer, Lydia; Hammer, Martin; Haueisen, Jens
2015-01-01
Fluorescence lifetime imaging ophthalmoscopy (FLIO) is a new technique for measuring the in vivo autofluorescence intensity decays generated by endogenous fluorophores in the ocular fundus. Here, we present a software package called FLIM eXplorer (FLIMX) for analyzing FLIO data. Specifically, we introduce a new adaptive binning approach as an optimal tradeoff between the spatial resolution and the number of photons required per pixel. We also expand existing decay models (multi-exponential, stretched exponential, spectral global analysis, incomplete decay) to account for the layered structure of the eye and present a method to correct for the influence of the crystalline lens fluorescence on the retina fluorescence. Subsequently, the Holm-Bonferroni method is applied to FLIO measurements to allow for group comparisons between patients and controls on the basis of fluorescence lifetime parameters. The performance of the new approaches was evaluated in five experiments. Specifically, we evaluated static and adaptive binning in a diabetes mellitus patient, we compared the different decay models in a healthy volunteer and performed a group comparison between diabetes patients and controls. An overview of the visualization capabilities and a comparison of static and adaptive binning is shown for a patient with macular hole. FLIMX's applicability to fluorescence lifetime imaging microscopy is shown in the ganglion cell layer of a porcine retina sample, obtained by a laser scanning microscope using two-photon excitation.
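The adaptive binning idea can be sketched as growing a neighborhood around a pixel until enough photons are collected; FLIMX's actual rule may differ (e.g., in neighborhood shape, and in that real FLIO bins per-pixel decay histograms rather than bare counts):

```python
import numpy as np

def adaptive_bin(counts, i, j, target=1000, max_radius=5):
    """Grow a square neighborhood around pixel (i, j) until the summed
    photon count reaches `target`, trading spatial resolution for photons."""
    for r in range(max_radius + 1):
        sl = np.s_[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
        if counts[sl].sum() >= target:
            return counts[sl].sum(), r
    return counts[sl].sum(), max_radius

rng = np.random.default_rng(2)
photons = rng.poisson(40, size=(256, 256))   # toy FLIO photon-count image
total, radius = adaptive_bin(photons, 128, 128)
print(f"binned {total} photons within radius {radius}")
```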
NASA Astrophysics Data System (ADS)
Fang, Y.; Hou, J.; Engel, D.; Lin, G.; Yin, J.; Han, B.; Fang, Z.; Fountoulakis, V.
2011-12-01
In this study, we introduce an uncertainty quantification (UQ) software framework for carbon sequestration, with a focus on the effect of spatial heterogeneity of reservoir properties on CO2 migration. We use a sequential Gaussian simulation method (SGSIM) to generate realizations of permeability fields with various spatial statistical attributes. To deal with the computational difficulties, we integrate the following ideas/approaches: (1) we use three different sampling approaches (probabilistic collocation, quasi-Monte Carlo, and adaptive sampling) to reduce the required forward calculations while exploring the parameter space and quantifying the input uncertainty; (2) we use eSTOMP as the forward modeling simulator. eSTOMP is implemented using the Global Arrays toolkit (GA), which is based on one-sided inter-processor communication and supports a shared-memory programming style on distributed-memory platforms, providing highly scalable performance. It uses a data model to partition most of the large-scale data structures into a relatively small number of distinct classes, and the lower-level simulator infrastructure (e.g., meshing support, associated data structures, and data mapping to processors) is separated from the higher-level physics and chemistry algorithmic routines through a grid component interface; and (3) beyond the faster model and more efficient algorithms to speed up the forward calculation, we built an adaptive system infrastructure to select the best possible data transfer mechanisms, to optimally allocate system resources to improve performance, and to integrate software packages and data for composing carbon sequestration simulation, computation, analysis, estimation and visualization. We demonstrate the framework with a given CO2 injection scenario in a heterogeneous sandstone reservoir.
Gutzweiler, Ludwig; Gleichmann, Tobias; Tanguy, Laurent; Koltay, Peter; Zengerle, Roland; Riegger, Lutz
2017-07-01
Gel electrophoresis is one of the most widely applied and standardized tools for the separation and analysis of macromolecules and their fragments in academic research and in industry. In this work we present a novel approach for conducting on-demand electrophoretic separations of DNA molecules in open microfluidic (OM) systems on planar polymer substrates. The approach combines advantages of slab-gel, capillary- and chip-based methods, offering low consumable costs (<$0.10) by circumventing cost-intensive microfluidic chip fabrication, short process times (5 min per analysis) and high sensitivity (4 ng/μL dsDNA) combined with reasonable resolution (17 bases). The open microfluidic separation system comprises two opposing reservoirs of 2-4 μL in volume, a semi-contact written gel line acting as a separation channel interconnecting the reservoirs, and sample injected into the line via non-contact droplet dispensing, enabling precise control of the injection plug and sample concentration. Evaporation is prevented by covering the aqueous structures with PCR-grade mineral oil while maintaining the surface temperature at 15°C. The liquid gel line exhibits a semi-circular cross section of adaptable width (∼200-600 μm) and height (∼30-80 μm) as well as a typical length of 15-55 mm. The layout of such liquid structures is adaptable on demand, without time-consuming and repetitive fabrication steps. The approach was successfully demonstrated by the separation of a standard label-free DNA ladder (100-1000 bp) at 100 V/cm via in-line staining and laser-induced fluorescence end-point detection using an automated prototype. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Towards a Comparative Index of Seaport Climate-Risk: Development of Indicators from Open Data
NASA Astrophysics Data System (ADS)
McIntosh, R. D.; Becker, A.
2016-02-01
Seaports represent an example of coastal infrastructure that is at once critical to global trade, constrained to the land-sea interface, and exposed to weather and climate hazards. Seaports face impacts associated with projected changes in sea level, sedimentation, ocean chemistry, wave dynamics, temperature, precipitation, and storm frequency and intensity. Port decision-makers have the responsibility to enhance resilience against these impacts. At the multi-port (regional or national) scale, policy-makers must prioritize adaptation efforts to maximize the efficiency of limited physical and financial resources. Prioritization requires comparing across seaports, and comparison requires a standardized assessment method, but efforts to date have either been limited in scope to exposure-only assessments or limited in scale to evaluate one port in isolation from a system of ports. In order to better understand the distribution of risk across ports and to inform transportation resilience policy, we are developing a comparative assessment method to measure the relative climate-risk faced by a sample of ports. Our mixed-methods approach combines a quantitative, data-driven, indicator-based assessment with qualitative data collected via expert-elicitation. In this presentation, we identify and synthesize over 120 potential risk indicators from open data sources. Indicators represent exposure, sensitivity, and adaptive capacity for a pilot sample of 20 ports. Our exploratory data analysis, including Principal Component Analysis, uncovered sources of variance between individual ports and between indicators. Next steps include convening an expert panel representing the perspectives of multiple transportation system agencies to find consensus on a suite of robust indicators and metrics for maritime freight node climate risk assessment. The index will be refined based on expert feedback, the sample size expanded, and additional indicators sought from closed data sources. Developing standardized indicators from available data is an essential step in risk assessment, as robust indicators can help policy-makers monitor resilience strategy implementation, target and justify resource expenditure for adaptation schemes, communicate adaptation to stakeholders, and benchmark progress.
Guo, Zongyi; Chang, Jing; Guo, Jianguo; Zhou, Jun
2018-06-01
This paper focuses on the adaptive twisting sliding mode control for the Hypersonic Reentry Vehicles (HRVs) attitude tracking issue. The HRV attitude tracking model is transformed into the error dynamics in matched structure, whereas an unmeasurable state is redefined by lumping the existing unmatched disturbance with the angular rate. Hence, an adaptive finite-time observer is used to estimate the unknown state. Then, an adaptive twisting algorithm is proposed for systems subject to disturbances with unknown bounds. The stability of the proposed observer-based adaptive twisting approach is guaranteed, and the case of noisy measurement is analyzed. Also, the developed control law avoids the aggressive chattering phenomenon of the existing adaptive twisting approaches because the adaptive gains decrease close to the disturbance once the trajectories reach the sliding surface. Finally, numerical simulations on the attitude control of the HRV are conducted to verify the effectiveness and benefit of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
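A toy simulation of an adaptive twisting law on double-integrator sliding dynamics is sketched below; the gain-adaptation rule and all constants are illustrative, not the paper's certified design:

```python
import numpy as np

dt, T = 1e-3, 5.0
s, s_dot = 1.0, 0.0            # sliding variable and its rate
r1, r2 = 1.0, 0.5              # twisting gains, adapted online
eps = 0.01                     # boundary layer around the sliding surface

for k in range(int(T / dt)):
    d = 0.8 * np.sin(2.0 * np.pi * k * dt)          # bounded disturbance, bound unknown
    u = -r1 * np.sign(s) - r2 * np.sign(s_dot)      # twisting control law
    s_ddot = u + d                                  # double-integrator sliding dynamics
    s, s_dot = s + dt * s_dot, s_dot + dt * s_ddot
    # Gain adaptation: grow until trajectories reach the surface, then decay,
    # which is what suppresses chattering once |s| is small.
    if abs(s) > eps:
        r1 += 2.0 * dt
    else:
        r1 = max(0.6, r1 - 1.0 * dt)
    r2 = 0.5 * r1

print(f"|s| after {T}s: {abs(s):.4f}, final gain r1={r1:.2f}")
```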
Irregular and adaptive sampling for automatic geophysic measure systems
NASA Astrophysics Data System (ADS)
Avagnina, Davide; Lo Presti, Letizia; Mulassano, Paolo
2000-07-01
In this paper a sampling method based on an irregular and adaptive strategy is described. It can be used as an automatic guide for rovers designed to explore terrestrial and planetary environments. Starting from the hypothesis that an exploratory vehicle is equipped with a payload able to measure quantities of interest, the method detects objects of interest from the measured points and realizes adaptive sampling, describing the uninteresting background only coarsely.
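A one-dimensional caricature of such irregular, adaptive sampling (refine the traverse wherever the measured variation is large, leave the uninteresting background sparse) might look like this; the field, the threshold, and the pass count are invented:

```python
import numpy as np

def field(x):
    """Stand-in geophysical quantity: smooth background plus a narrow anomaly."""
    return np.exp(-((x - 0.7) / 0.02) ** 2) + 0.1 * np.sin(3 * x)

# Start from a coarse regular traverse, then insert midpoints wherever the
# measured change between neighbours exceeds a threshold.
xs = list(np.linspace(0.0, 1.0, 21))
for _ in range(4):
    xs.sort()
    new = [0.5 * (a + b) for a, b in zip(xs[:-1], xs[1:])
           if abs(field(b) - field(a)) > 0.05]
    if not new:
        break
    xs.extend(new)

print(f"{len(xs)} samples; densest spacing near the anomaly at x=0.7")
```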
Beyond Reduction: Climate Change Adaptation Planning for Universities and Colleges
ERIC Educational Resources Information Center
Owen, Rochelle; Fisher, Erica; McKenzie, Kyle
2013-01-01
Purpose: The purpose of this paper is to outline a unique six-step process for the inclusion of climate change adaption goals and strategies in a University Climate Change Plan. Design/methodology/approach: A mixed-method approach was used to gather data on campus climate change vulnerabilities and adaption strategies. A literature review…
An Adaptive Approach to Managing Knowledge Development in a Project-Based Learning Environment
ERIC Educational Resources Information Center
Tilchin, Oleg; Kittany, Mohamed
2016-01-01
In this paper we propose an adaptive approach to managing the development of students' knowledge in the comprehensive project-based learning (PBL) environment. Subject study is realized by two-stage PBL. It shapes adaptive knowledge management (KM) process and promotes the correct balance between personalized and collaborative learning. The…
ERIC Educational Resources Information Center
Emmons, Natalie; Smith, Hayley; Kelemen, Deborah
2016-01-01
Research Findings: Educational guidelines recommend a delayed, piecemeal approach to instruction on adaptation by natural selection. This approach is questionable given suggestions that older students' pervasive misunderstandings about adaptation are rooted in cognitive biases that develop early. In response to this, Kelemen et al. (2014) recently…
ERIC Educational Resources Information Center
Reinschmidt, Kerstin M.; Teufel-Shone, Nicolette I.; Bradford, Gail; Drummond, Rebecca L.; Torres, Emma; Redondo, Floribella; Elenes, Jo Jean; Sanders, Alicia; Gastelum, Sylvia; Moore-Monroy, Martha; Barajas, Salvador; Fernandez, Lourdes; Alvidrez, Rosy; de Zapien, Jill Guernsey; Staten, Lisa K.
2010-01-01
Diabetes health disparities among Hispanic populations have been countered with federally funded health promotion and disease prevention programs. Dissemination has focused on program adaptation to local cultural contexts for greater acceptability and sustainability. Taking a broader approach and drawing on our experience in Mexican American…
Method and apparatus for telemetry adaptive bandwidth compression
NASA Technical Reports Server (NTRS)
Graham, Olin L.
1987-01-01
Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and to the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and the bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.
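A toy software analogue of the motion-driven mode selection described above (the patent describes hardware; the thresholds and mode parameters here are invented):

```python
import numpy as np

def choose_mode(prev_field, curr_field, hi=4, lo=1, thresh=2.0):
    """Pick sampling ratio and display depth from detected scene motion:
    near-static scenes get heavy subsampling (freeing bandwidth for data)
    and all stored fields displayed; moving scenes get light subsampling
    and fewer displayed fields."""
    motion = np.mean(np.abs(curr_field.astype(float) - prev_field.astype(float)))
    if motion < thresh:
        return {"sampling_ratio": hi, "fields_displayed": 4}
    return {"sampling_ratio": lo, "fields_displayed": 1}

rng = np.random.default_rng(9)
f0 = rng.integers(0, 255, (64, 64))
print(choose_mode(f0, f0 + rng.integers(0, 2, (64, 64))))   # near-static scene
print(choose_mode(f0, rng.integers(0, 255, (64, 64))))      # strong motion
```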
Ivanich, Jerreed D; Mousseau, Alicia C; Walls, Melissa; Whitbeck, Les; Whitesell, Nancy Rumbaugh
2018-06-06
Indigenous communities often face disproportionate challenges across a variety of health domains, and effective prevention strategies are sorely needed. Unfortunately, evidence is scant regarding what approaches are effective for these communities. A common approach is to take an evidence-based practice or program with documented effectiveness in other populations and implement it with Indigenous populations. While a science of intervention adaptation is emerging, there remains little guidance on processes for adaptation that strategically leverage both existing scientific evidence and Indigenous prevention strategies. In this paper, two case studies illustrate promising practices for adaptation, documenting the approaches of two research teams funded under the National Institutes of Health's initiative to support Intervention Research to Improve Native American Health (IRINAH). These teams worked with distinct Indigenous populations in the USA and Canada to culturally adapt the same prevention program, the Iowa Strengthening Families Program for Parents and Youth 10-14. The approaches of these two teams and the programs that resulted are compared and contrasted, and critical elements of adaptation in partnership with Indigenous communities are discussed.
Kertesz, Vilmos; Van Berkel, Gary J
2010-07-15
In this work, a commercially available autosampler was adapted to perform direct liquid microjunction (LMJ) surface sampling followed by a high-pressure liquid chromatography (HPLC) separation of the extract components and detection with electrospray ionization mass spectrometry (ESI-MS). To illustrate the utility of coupling a separation with this direct liquid extraction based surface sampling approach, four different organs (brain, lung, kidney, and liver) from whole-body thin tissue sections of propranolol dosed and control mice were examined. The parent drug was observed in the chromatograms of the surface sampling extracts from all the organs of the dosed mouse examined. In addition, two isomeric phase II metabolites of propranolol (an aliphatic and an aromatic hydroxypropranolol glucuronide) were observed in the chromatograms of the extracts from lung, kidney, and liver. Confirming the presence of one or the other or both of these glucuronides in the extract from the various organs was not possible without the separation. These drug and metabolite data obtained using the LMJ surface sampling/HPLC-MS method and the results achieved by analyzing similar samples by conventional extraction of the tissues and subsequent HPLC-MS analysis were consistent. The ability to directly and efficiently sample from thin tissue sections via a liquid extraction and then perform a subsequent liquid phase separation increases the utility of this liquid extraction surface sampling approach.
NASA Astrophysics Data System (ADS)
Asfahani, Jamal
2016-05-01
A new alternative approach based on the vertical electrical sounding (VES) technique is proposed for computing the hydraulic conductivity K of an aquifer. The approach takes only the salinity of the groundwater into consideration. It requires VES measurements at locations where water samples are available, in order to calibrate and establish empirical relationships between the Dar-Zarrouk transverse resistance (TR) parameter and the modified transverse resistance (MTR), and between MTR and transmissivity T. Those relationships are thereafter used to extrapolate the transmissivity even at VES points where no water samples exist. This approach is tested and applied in the Khanasser valley, Northern Syria, where the hydraulic conductivity of the Quaternary aquifer is computed. An acceptable agreement is found between the hydraulic conductivity values obtained by the proposed approach and those obtained by pumping tests, which range between 0.864 and 8.64 m/day (10⁻⁵ and 10⁻⁴ m/s). The Quaternary aquifer transmissivity of the Khanasser valley has been characterized by using this approach and by adapting the MTR parameter. The transmissivity varies between a minimum of 79 m2/day and a maximum of 814 m2/day, with an average of 283 m2/day and a standard deviation of 145 m2/day. The easy and inexpensive approach proposed in this paper can be applied in other semi-arid regions.
Review of sampling hard-to-reach and hidden populations for HIV surveillance.
Magnani, Robert; Sabin, Keith; Saidel, Tobi; Heckathorn, Douglas
2005-05-01
Adequate surveillance of hard-to-reach and 'hidden' subpopulations is crucial to containing the HIV epidemic in low prevalence settings and in slowing the rate of transmission in high prevalence settings. For a variety of reasons, however, conventional facility and survey-based surveillance data collection strategies are ineffective for a number of key subpopulations, particularly those whose behaviors are illegal or illicit. This paper critically reviews alternative sampling strategies for undertaking behavioral or biological surveillance surveys of such groups. Non-probability sampling approaches such as facility-based sentinel surveillance and snowball sampling are the simplest to carry out, but are subject to a high risk of sampling/selection bias. Most of the probability sampling methods considered are limited in that they are adequate only under certain circumstances and for some groups. One relatively new method, respondent-driven sampling, an adaptation of chain-referral sampling, appears to be the most promising for general applications. However, as its applicability to HIV surveillance in resource-poor settings has yet to be established, further field trials are needed before a firm conclusion can be reached.
A new hybrid case-based reasoning approach for medical diagnosis systems.
Sharaf-El-Deen, Dina A; Moawad, Ibrahim F; Khalifa, M E
2014-02-01
Case-Based Reasoning (CBR) has been applied in many different medical applications. Due to the complexity and diversity of this domain, most medical CBR systems become hybrid. Moreover, the case adaptation process in CBR is often a challenging issue, as it is traditionally carried out manually by domain experts. In this paper, a new hybrid case-based reasoning approach for medical diagnosis systems is proposed to improve the accuracy of retrieval-only CBR systems. The approach integrates case-based reasoning and rule-based reasoning, and applies the adaptation process automatically by exploiting adaptation rules. Both adaptation rules and reasoning rules are generated from the case base. After a new case is solved, the case base is expanded and both the adaptation and reasoning rules are updated. To evaluate the proposed approach, a prototype was implemented and tested on the diagnosis of breast cancer and thyroid diseases. The final results show that the proposed approach increases the diagnostic accuracy of retrieval-only CBR systems and provides reliable accuracy compared to current breast cancer and thyroid diagnosis systems.
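A minimal sketch of the retrieve-then-adapt loop, with a toy case base and one hypothetical adaptation rule standing in for the rules the approach mines automatically from the case base:

```python
import numpy as np

# Toy case base: feature vectors (age, marker level) with a diagnosis label.
cases = [([45, 1.2], "benign"), ([62, 3.8], "malignant"), ([50, 2.1], "benign")]

# One hypothetical adaptation rule: (condition on query and retrieved solution,
# revised solution). Real rules would be generated from the case base itself.
adaptation_rules = [
    (lambda q, sol: q[0] > 60 and sol == "benign", "malignant"),
]

def diagnose(query):
    # Retrieve: nearest neighbour in a normalized feature space.
    feats = np.array([c[0] for c in cases], dtype=float)
    scale = feats.max(axis=0)
    d = np.linalg.norm(feats / scale - np.array(query) / scale, axis=1)
    solution = cases[int(d.argmin())][1]
    # Adapt: let rules revise the retrieved solution for the new case.
    for cond, revised in adaptation_rules:
        if cond(query, solution):
            solution = revised
    return solution

print(diagnose([61, 1.3]))   # retrieval returns "benign"; the rule revises it
```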
Saylor, Karen L.; Anver, Miriam R.; Salomon, David S.; Golubeva, Yelena G.
2016-01-01
Laser capture microdissection (LCM) of tissue is an established tool in medical research for the collection of distinct cell populations under direct microscopic visualization for molecular analysis. LCM samples have been successfully analyzed in a number of genomic and proteomic downstream molecular applications. However, the LCM sample collection and preparation procedure has to be adapted to each downstream analysis platform. In the present manuscript we describe in detail the adaptation of LCM methodology for the collection and preparation of fresh-frozen samples for NanoString analysis, based on a study of a mouse model of mammary gland carcinoma and its lung metastasis. Our adaptation of LCM sample preparation and workflow to the requirements of the NanoString platform allowed us to acquire samples with high RNA quality. The NanoString analysis of such samples provided sensitive detection of genes of interest and their associated molecular pathways. NanoString is a reliable gene expression analysis platform that can be effectively coupled with LCM. PMID:27077656
Recruitment of hard-to-reach population subgroups via adaptations of the snowball sampling strategy.
Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith
2010-09-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author's program of research are provided to demonstrate how adaptations of snowball sampling can be used effectively in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more-vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or for research studies when the recruitment of a population-based sample is not essential.
Procedures for Selecting Items for Computerized Adaptive Tests.
ERIC Educational Resources Information Center
Kingsbury, G. Gage; Zara, Anthony R.
1989-01-01
Several classical approaches and alternative approaches to item selection for computerized adaptive testing (CAT) are reviewed and compared. The study also describes procedures for constrained CAT that may be added to classical item selection approaches to allow them to be used for applied testing. (TJH)
Adaptive Flight Control Research at NASA
NASA Technical Reports Server (NTRS)
Motter, Mark A.
2008-01-01
A broad overview of current adaptive flight control research efforts at NASA is presented, as well as some more detailed discussion of selected specific approaches. The stated objective of the Integrated Resilient Aircraft Control Project, one of NASA's Aviation Safety programs, is to advance the state of the art of adaptive controls as a design option to provide enhanced stability and maneuverability margins for safe landing in the presence of adverse conditions such as actuator or sensor failures. Under this project, a number of adaptive control approaches are being pursued, including neural networks and multiple models. Validation of all the adaptive control approaches will use not only traditional methods such as simulation, wind tunnel testing and manned flight tests, but will be augmented with recently developed capabilities in unmanned flight testing.
McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S
2016-10-01
The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size is re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and of control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation performed during the study increased the planned sample size from 2500 to 3020, and we sought to determine the impact of this re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the risk of recurrent stroke for dual- versus monotherapy (n = 3020 HR (95% confidence interval): 0.92 (0.72, 1.2), p = 0.48; n = 2500 HR (95% confidence interval): 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020 HR (95% confidence interval): 0.81 (0.63, 1.0), p = 0.089; n = 2500 HR (95% confidence interval): 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to a better understanding of the balance between the benefits and the costs. © The Author(s) 2016.
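To illustrate the mechanics of nuisance-parameter-driven sample size re-estimation (using a two-proportion normal approximation rather than the trial's time-to-event design, and with invented rates):

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(p_ctrl, rel_reduction, alpha=0.05, power=0.8):
    """Two-proportion sample size per arm via the standard normal approximation."""
    p_trt = p_ctrl * (1.0 - rel_reduction)
    p_bar = 0.5 * (p_ctrl + p_trt)
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    num = (za * np.sqrt(2 * p_bar * (1 - p_bar))
           + zb * np.sqrt(p_ctrl * (1 - p_ctrl) + p_trt * (1 - p_trt))) ** 2
    return int(np.ceil(num / (p_ctrl - p_trt) ** 2))

# Planning assumed a control-arm event risk of 8%; a lower blinded interim
# estimate of this nuisance parameter drives the required sample size up.
print(n_per_arm(0.08, 0.25))    # design stage
print(n_per_arm(0.065, 0.25))   # after re-estimating the event rate at interim
```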
ERIC Educational Resources Information Center
Mao, Xiuzhen; Xin, Tao
2013-01-01
The Monte Carlo approach which has previously been implemented in traditional computerized adaptive testing (CAT) is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…
Döpfner, Manfred; Hautmann, Christopher; Dose, Christina; Banaschewski, Tobias; Becker, Katja; Brandeis, Daniel; Holtmann, Martin; Jans, Thomas; Jenkner, Carolin; Millenet, Sabina; Renner, Tobias; Romanos, Marcel; von Wirth, Elena
2017-07-24
The ESCAschool study addresses the treatment of school-age children with attention-deficit/hyperactivity disorder (ADHD) in a large multicentre trial. It aims to investigate three interrelated topics: (i) Clinical guidelines often recommend a stepped care approach, including different treatment strategies for children with mild to moderate and severe ADHD symptoms, respectively. However, this approach has not yet been empirically validated. (ii) Behavioural interventions and neurofeedback have been shown to be effective, but the superiority of combined treatment approaches such as medication plus behaviour therapy or medication plus neurofeedback compared to medication alone remains questionable. (iii) Growing evidence indicates that telephone-assisted self-help interventions are effective in the treatment of ADHD. However, larger randomised controlled trials (RCTs) are lacking. This report presents the ESCAschool trial protocol. In an adaptive treatment design, two RCTs and additional observational treatment arms are considered. The target sample size of ESCAschool is 521 children with ADHD. Based on their baseline ADHD symptom severity, the children will be assigned to one of two groups (mild to moderate symptom group and severe symptom group). The adaptive design includes two treatment phases (Step 1 and Step 2). According to clinical guidelines, different treatment protocols will be followed for the two severity groups. In the mild to moderate group, the efficacy of telephone-assisted self-help for parents and teachers will be tested against a waitlist control in Step 1 (RCT I). The severe group will receive pharmacotherapy combined with psychoeducation in Step 1. For both groups, treatment response will be determined after Step 1 treatment (no, partial or full response). For children in the severe group demonstrating a partial response to medication, the efficacy of (1) counselling, (2) behaviour therapy and (3) neurofeedback will be tested in Step 2 (RCT II). All other treatment arms in Step 2 (severe group: no or full response; moderate group: no, partial or full response) are observational. The ESCAschool trial will provide evidence-based answers to several important questions for clinical practice following a stepped care approach. The adaptive study design will also provide new insights into the effects of additional treatments in children with partial response. German Clinical Trials Register (DRKS) DRKS00008973. Registered 18 December 2015.
Taravat, Alireza; Oppelt, Natascha
2014-01-01
Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with those of a model combining non-adaptive WMM and pulse coupled neural networks. The presented approach removes the need to hand-tune the WMM filter parameters by developing an adaptive WMM model, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. While potentially more efficient computationally, the system statistics they generate suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
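To make the step-size idea concrete, here is a minimal sketch of adaptive tau-leaping for a toy birth-death network; the rate constants and the simple drift/fluctuation step-control rule are illustrative assumptions, not the adaptive criterion developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy birth-death network: 0 -> X at rate k1, X -> 0 at rate k2*x
k1, k2 = 10.0, 0.1
nu = np.array([+1, -1])                 # stoichiometry of the two channels

def propensities(x):
    return np.array([k1, k2 * x])

def adaptive_tau_leap(x0, t_end, eps=0.03):
    """Tau-leaping in which tau is re-chosen every step so the expected
    drift and fluctuation stay a small fraction of the state (a crude
    stand-in for propensity-based step control)."""
    t, x = 0.0, float(x0)
    while t < t_end:
        a = propensities(x)
        if a.sum() == 0.0:
            break
        drift = nu @ a                  # expected net change per unit time
        diff = (nu ** 2) @ a            # variance of change per unit time
        scale = max(eps * x, 1.0)
        tau = min(scale / abs(drift) if drift else np.inf,
                  scale ** 2 / diff,
                  t_end - t)
        k = rng.poisson(a * tau)        # firings of each channel in [t, t+tau)
        x = max(x + nu @ k, 0.0)
        t += tau
    return x

samples = [adaptive_tau_leap(0, 50.0) for _ in range(200)]
print("mean copy number ~", round(float(np.mean(samples)), 1))  # near k1/k2 = 100
```

The point of the adaptive rule is visible in the tau expression: when propensities are large, tau shrinks and the leap approaches the exact algorithm; when activity is low, tau grows and whole stretches of quiescence are crossed in a single step.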
Science-based Forest Management in an Era of Climate Change
NASA Astrophysics Data System (ADS)
Swanston, C.; Janowiak, M.; Brandt, L.; Butler, P.; Handler, S.; Shannon, D.
2014-12-01
Recognizing the need to provide climate adaptation information, training, and tools to forest managers, the Forest Service joined with partners in 2009 to launch a comprehensive effort called the Climate Change Response Framework (www.forestadaptation.org). The Framework provides a structured approach to help managers integrate climate considerations into forest management plans and then implement adaptation actions on the ground. A planning tool, the Adaptation Workbook, is used in conjunction with vulnerability assessments and a diverse "menu" of adaptation approaches to generate site-specific adaptation actions that meet explicit management objectives. Additionally, a training course, designed around the Adaptation Workbook, leads management organizations through this process of designing on-the-ground adaptation tactics for their management projects. The Framework is now being actively pursued in 20 states in the Northwoods, Central Hardwoods, Central Appalachians, Mid-Atlantic, and New England. The Framework community includes over 100 science and management groups, dozens of which have worked together to complete six ecoregional vulnerability assessments covering nearly 135 million acres. More than 75 forest and urban forest adaptation strategies and approaches were synthesized from peer-reviewed and gray literature, expert solicitation, and on-the-ground adaptation projects. These are being linked through the Adaptation Workbook process to on-the-ground adaptation tactics being planned and employed in more than 50 adaptation "demonstrations". This presentation will touch on the scientific and professional basis of the vulnerability assessments, and showcase efforts where adaptation actions are currently being implemented in forests.
Nika, Heinz; Nieves, Edward; Hawke, David H.; Angeletti, Ruth Hogue
2013-01-01
We previously adapted the β-elimination/Michael addition chemistry to solid-phase derivatization on reversed-phase supports, and demonstrated the utility of this reaction format to prepare phosphoseryl peptides in unfractionated protein digests for mass spectrometric identification and facile phosphorylation-site determination. Here, we have expanded the use of this technique to β-N-acetylglucosamine peptides, modified at serine/threonine, phosphothreonyl peptides, and phosphoseryl/phosphothreonyl peptides, followed in sequence by proline. The consecutive β-elimination with Michael addition was adapted to optimize the solid-phase reaction conditions for throughput and completeness of derivatization. The analyte remained intact during derivatization and was recovered efficiently from the silica-based, reversed-phase support with minimal sample loss. The general use of the solid-phase approach for enzymatic dephosphorylation was demonstrated with phosphoseryl and phosphothreonyl peptides and was used as an orthogonal method to confirm the identity of phosphopeptides in proteolytic mixtures. The solid-phase approach proved highly suitable to prepare substrates from low-level amounts of protein digests for phosphorylation-site determination by chemical-targeted proteolysis. The solid-phase protocol provides for a simple, robust, and efficient tool to prepare samples for phosphopeptide identification in MALDI mass maps of unfractionated protein digests, using standard equipment available in most biological laboratories. The use of a solid-phase analytical platform is expected to be readily expanded to prepare digests from O-glycosylated and O-sulfonated proteins for mass spectrometry-based structural characterization. PMID:23997661
Absorbance and fluorometric sensing with capillary wells microplates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Han Yen; Cheong, Brandon Huey-Ping; Neild, Adrian
2010-12-15
Detection and readout from small volume assays in microplates are a challenge. The capillary wells microplate approach [Ng et al., Appl. Phys. Lett. 93, 174105 (2008)] offers strong advantages in small liquid volume management. An adapted design is described and shown here to be able to detect, in a nonimaging manner, fluorescence and absorbance assays without the error often associated with the meniscus forming at the air-liquid interface. The presence of bubbles in liquid samples residing in microplate wells can cause inaccuracies. Pipetting errors, if not adequately managed, can result in misleading data and wrong interpretations of assay results, particularly in the context of high-throughput screening. We show that the adapted design is also able to detect bubbles and pipetting errors during actual assay runs to ensure accuracy in screening.
Multi-modal automatic montaging of adaptive optics retinal images
Chen, Min; Cooper, Robert F.; Han, Grace K.; Gee, James; Brainard, David H.; Morgan, Jessica I. W.
2016-01-01
We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download. PMID:28018714
New Gateway Installed onto Space Station on This Week @NASA – August 19, 2016
2016-08-19
Outside the International Space Station, Expedition 48 Commander Jeff Williams and Flight Engineer Kate Rubins of NASA installed the first of two International Docking Adapters onto the forward end of the station’s Harmony module, during a spacewalk on Aug. 19. The new docking port will be used by the Boeing CST-100 “Starliner” and SpaceX Crew Dragon commercial crew spacecraft being developed to transport U.S. astronauts to and from the station. The second International Docking Adapter – currently under construction – eventually will be placed on the space-facing side of the Harmony module. Also, Commercial Crew Access Arm Installed on Launchpad, Behind the Scenes of our Journey to Mars, Asteroid Redirect Mission Milestone, Asteroid Sample Return Mission Approaches, and Chasing Greenhouse Gases in the Midwest!
Model-Based Adaptive Event-Triggered Control of Strict-Feedback Nonlinear Systems.
Li, Yuan-Xin; Yang, Guang-Hong
2018-04-01
This paper is concerned with the adaptive event-triggered control problem of nonlinear continuous-time systems in strict-feedback form. By using the event-sampled neural network (NN) to approximate the unknown nonlinear function, an adaptive model and an associated event-triggered controller are designed by exploiting the backstepping method. In the proposed method, the feedback signals and the NN weights are aperiodically updated only when the event-triggered condition is violated. A positive lower bound on the minimum intersample time is guaranteed to avoid accumulation point. The closed-loop stability of the resulting nonlinear impulsive dynamical system is rigorously proved via Lyapunov analysis under an adaptive event sampling condition. In comparing with the traditional adaptive backstepping design with a fixed sample period, the event-triggered method samples the state and updates the NN weights only when it is necessary. Therefore, the number of transmissions can be significantly reduced. Finally, two simulation examples are presented to show the effectiveness of the proposed control method.
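As a minimal illustration of the event-triggered sampling idea (not the paper's NN backstepping design), the sketch below stabilizes a scalar linear plant while refreshing the controller's state sample only when a relative-error trigger fires; all gains and thresholds are illustrative assumptions.

```python
import numpy as np

# Scalar plant x' = a*x + u with event-triggered feedback u = -K*x_s, where
# x_s is the last sampled state. All constants are illustrative assumptions.
a, K, dt, T, sigma = 0.5, 2.0, 1e-3, 10.0, 0.05

x, x_s, updates = 1.0, 1.0, 0
for _ in range(int(T / dt)):
    # Event condition: refresh the sample only when the sampling error
    # exceeds a state-dependent threshold (the small constant keeps the
    # intersample time bounded away from zero).
    if abs(x - x_s) > sigma * abs(x) + 1e-4:
        x_s = x
        updates += 1
    u = -K * x_s
    x += dt * (a * x + u)               # forward-Euler plant step

print(f"final state {x:.5f}; {updates} controller updates "
      f"vs {int(T / dt)} with periodic sampling")
```

Running this shows the closed loop still converging while the controller refreshes its sample orders of magnitude less often than a periodic scheme would, which is the transmission saving the abstract describes.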
Concept Based Approach for Adaptive Personalized Course Learning System
ERIC Educational Resources Information Center
Salahli, Mehmet Ali; Özdemir, Muzaffer; Yasar, Cumali
2013-01-01
One of the most important factors for improving the personalization aspects of learning systems is to enable adaptive properties to them. The aim of the adaptive personalized learning system is to offer the most appropriate learning path and learning materials to learners by taking into account their profiles. In this paper, a new approach to…
Swarm Intelligence: New Techniques for Adaptive Systems to Provide Learning Support
ERIC Educational Resources Information Center
Wong, Lung-Hsiang; Looi, Chee-Kit
2012-01-01
The notion of a system adapting itself to provide support for learning has always been an important issue of research for technology-enabled learning. One approach to provide adaptivity is to use social navigation approaches and techniques which involve analysing data of what was previously selected by a cluster of users or what worked for…
Maria K. Janowiak; Christopher W. Swanston; Linda M. Nagel; Leslie A. Brandt; Patricia R. Butler; Stephen D. Handler; P. Danielle Shannon; Louis R. Iverson; Stephen N. Matthews; Anantha Prasad; Matthew P. Peters
2014-01-01
There is an ever-growing body of literature on forest management strategies for climate change adaptation; however, few frameworks have been presented for integrating these strategies with the real-world challenges of forest management. We have developed a structured approach for translating broad adaptation concepts into specific management actions and silvicultural...
Analytical approach to an integrate-and-fire model with spike-triggered adaptation
NASA Astrophysics Data System (ADS)
Schwalger, Tilo; Lindner, Benjamin
2015-12-01
The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
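A minimal simulation of the class of model analyzed here, an integrate-and-fire neuron with a spike-triggered adaptation current and weak noise, is sketched below; the parameter values are illustrative assumptions, and the paper's analytical densities are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Leaky integrate-and-fire neuron with spike-triggered adaptation:
#   v' = -v + mu - a + sqrt(2D) * xi(t),   a' = -a / tau_a,
# and a -> a + Delta at every spike. Parameters are illustrative.
mu, D, tau_a, Delta = 1.5, 0.01, 10.0, 0.3
v_th, v_reset, dt, T = 1.0, 0.0, 1e-3, 200.0

v, a, v_trace, spike_times = 0.0, 0.0, [], []
for i in range(int(T / dt)):
    v += dt * (-v + mu - a) + np.sqrt(2 * D * dt) * rng.standard_normal()
    a += dt * (-a / tau_a)
    if v >= v_th:                       # threshold crossing: spike
        spike_times.append(i * dt)
        v = v_reset
        a += Delta                      # spike-triggered adaptation kick
    v_trace.append(v)

rate = len(spike_times) / T
print(f"firing rate ~ {rate:.2f}, Var(V) ~ {np.var(v_trace):.4f}")
```

Histogramming v_trace for different (tau_a, Delta) settings is a quick empirical check of the predicted effects, such as the increased weight at hyperpolarized potentials under strong, fast adaptation.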
NASA Astrophysics Data System (ADS)
Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.
2015-06-01
Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest or insufficient sample density at important features, or both. A new adaptive sampling technique is presented directing sample collection in proportion to local information content, adequately capturing the short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined uncertainty level overall while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.
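A toy one-dimensional version of the curvature-plus-space-filling idea might look like the following; the test signal, the second-difference curvature proxy, and the scoring weight are all illustrative assumptions rather than the published criterion.

```python
import numpy as np

def f(x):                               # test signal with a localized feature
    return np.exp(-100 * (x - 0.3) ** 2) + 0.1 * np.sin(4 * x)

def adaptive_sample(n_init=8, n_total=40):
    xs = list(np.linspace(0.0, 1.0, n_init))
    while len(xs) < n_total:
        x = np.sort(np.array(xs))
        y = f(x)
        curv = np.abs(np.diff(y, 2))    # crude curvature proxy (2nd difference)
        gaps = np.diff(x)
        # Score each interval by its width, boosted where curvature is high,
        # so refinement balances space-filling against feature capture.
        score = gaps.copy()
        score[1:] += 5.0 * curv * gaps[1:]   # the weight 5.0 is an arbitrary choice
        i = int(np.argmax(score))
        xs.append(0.5 * (x[i] + x[i + 1]))   # bisect the winning interval
    return np.sort(xs)

pts = adaptive_sample()
print(np.round(pts, 3))                 # samples cluster around the bump at 0.3
```

The printed sample locations concentrate around the sharp feature at x = 0.3 while keeping a sparse but nonzero density elsewhere, which is the qualitative behaviour the abstract claims for the full method.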
Practical characteristics of adaptive design in phase 2 and 3 clinical trials.
Sato, A; Shimura, M; Gosho, M
2018-04-01
Adaptive design methods are expected to be ethical, reflect real medical practice, increase the likelihood of research and development success and reduce the allocation of patients into ineffective treatment groups by the early termination of clinical trials. However, the comprehensive details regarding which types of clinical trials will include adaptive designs remain unclear. We examined the practical characteristics of adaptive design used in clinical trials. We conducted a literature search of adaptive design clinical trials published from 2012 to 2015 using PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials, with common search terms related to adaptive design. We systematically assessed the types and characteristics of adaptive designs and disease areas employed in the adaptive design trials. Our survey identified 245 adaptive design clinical trials. The number of trials by the publication year increased from 2012 to 2013 and did not greatly change afterwards. The most frequently used adaptive design was group sequential design (n = 222, 90.6%), especially for neoplasm or cardiovascular disease trials. Among the other types of adaptive design, adaptive dose/treatment group selection (n = 21, 8.6%) and adaptive sample-size adjustment (n = 19, 7.8%) were frequently used. The adaptive randomization (n = 8, 3.3%) and adaptive seamless design (n = 6, 2.4%) were less frequent. Adaptive dose/treatment group selection and adaptive sample-size adjustment were frequently used (up to 23%) in "certain infectious and parasitic diseases," "diseases of nervous system," and "mental and behavioural disorders" in comparison with "neoplasms" (<6.6%). For "mental and behavioural disorders," adaptive randomization was used in two trials of eight trials in total (25%). Group sequential design and adaptive sample-size adjustment were used frequently in phase 3 trials or in trials where study phase was not specified, whereas the other types of adaptive designs were used more in phase 2 trials. Approximately 82% (202 of 245 trials) resulted in early termination at the interim analysis. Among the 202 trials, 132 (54% of 245 trials) had fewer randomized patients than initially planned. This result supports the motivation to use adaptive designs to shorten study durations and include fewer subjects. We found that adaptive designs have been applied to clinical trials in various therapeutic areas and interventions. The applications were frequently reported in neoplasm or cardiovascular clinical trials. The adaptive dose/treatment group selection and sample-size adjustment are increasingly common, and these adaptations generally follow the Food and Drug Administration's (FDA's) recommendations. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Halsnæs, Kirsten; Trærup, Sara
2009-05-01
The paper introduces the so-called climate change mainstreaming approach, where vulnerability and adaptation measures are assessed in the context of general development policy objectives. The approach is based on the application of a limited set of indicators. These indicators are selected as representatives of focal development policy objectives, and a stepwise approach for addressing climate change impacts, development linkages, and the economic, social and environmental dimensions related to vulnerability and adaptation is introduced. Within this context, three case studies illustrate how development policy indicators can be used in practice to assess climate change impacts and adaptation measures: a road project in flood-prone areas of Mozambique, rainwater harvesting in the agricultural sector in Tanzania and malaria protection in Tanzania. The conclusions of the paper confirm that climate risks can be reduced at relatively low costs, but uncertainty remains about some of the wider development impacts of implementing climate change adaptation measures.
Serang, Oliver
2012-01-01
Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741
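The core move, travelling along a randomly sampled ray until a bounding constraint becomes active, can be sketched as follows. This is a simplified hit-and-run-style step on a toy box-constrained LP, not the paper's full conic-sampling method, and all problem data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def ray_step(x, c, A, b):
    """One randomized ray move inside {x : A x <= b}: draw a direction,
    orient it so the objective c.x cannot decrease, and travel until a
    constraint blocks further progress."""
    d = rng.standard_normal(x.size)
    if c @ d < 0:
        d = -d                          # keep it an ascent direction
    slack = b - A @ x                   # >= 0 at a feasible point
    rate = A @ d
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(rate > 1e-12, slack / rate, np.inf).min()
    return x + t * d if np.isfinite(t) else x

# maximize c.x over the unit box [0, 1]^2 (toy problem)
c = np.array([1.0, 2.0])
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])

x = np.array([0.5, 0.5])
for _ in range(200):
    x = ray_step(x, c, A, b)
print(np.round(x, 3), "objective:", round(float(c @ x), 3))  # tends to (1, 1)
```

Because every accepted step has non-negative inner product with c and non-negative length, the objective is monotone, and the iterate drifts to the optimal vertex of the box.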
Harrisson, Katherine A; Amish, Stephen J; Pavlova, Alexandra; Narum, Shawn R; Telonis-Scott, Marina; Rourke, Meaghan L; Lyon, Jarod; Tonkin, Zeb; Gilligan, Dean M; Ingram, Brett A; Lintermans, Mark; Gan, Han Ming; Austin, Christopher M; Luikart, Gordon; Sunnucks, Paul
2017-11-01
Adaptive differences across species' ranges can have important implications for population persistence and conservation management decisions. Despite advances in genomic technologies, detecting adaptive variation in natural populations remains challenging. Key challenges in gene-environment association studies involve distinguishing the effects of drift from those of selection and identifying subtle signatures of polygenic adaptation. We used paired-end restriction site-associated DNA sequencing data (6,605 biallelic single nucleotide polymorphisms; SNPs) to examine population structure and test for signatures of adaptation across the geographic range of an iconic Australian endemic freshwater fish species, the Murray cod Maccullochella peelii. Two univariate gene-association methods identified 61 genomic regions associated with climate variation. We also tested for subtle signatures of polygenic adaptation using a multivariate method (redundancy analysis; RDA). The RDA analysis suggested that climate (temperature- and precipitation-related variables) and geography had similar magnitudes of effect in shaping the distribution of SNP genotypes across the sampled range of Murray cod. Although there was poor agreement among the candidate SNPs identified by the univariate methods, the top 5% of SNPs contributing to significant RDA axes included 67% of the SNPs identified by univariate methods. We discuss the potential implications of our findings for the management of Murray cod and other species generally, particularly in relation to informing conservation actions such as translocations to improve evolutionary resilience of natural populations. Our results highlight the value of using a combination of different approaches, including polygenic methods, when testing for signatures of adaptation in landscape genomic studies. © 2017 John Wiley & Sons Ltd.
Hogan, Anthony; Tanton, Robert; Lockie, Stewart; May, Sarah
2013-01-01
Objective: This study examined whether a wellbeing approach to resilience and adaptation would provide practical insights for prioritizing support to communities experiencing environmental and socio-economic stressors. Methods: A cross-sectional survey, based on a purposive sample of 2,196 stakeholders (landholders, hobby farmers, town residents and change agents) from three irrigation-dependent communities in Australia’s Murray-Darling Basin. Respondents’ adaptive capacity and wellbeing (individual and collective adaptive capacity, subjective wellbeing, social support, community connectivity, community leadership, in the context of known life stressors) were examined using chi-square, comparison of mean scores, hierarchical regression and factor-cluster analysis. Results: Statistically significant correlations (p < 0.05) were observed between individual (0.331) and collective (0.318) adaptive capacity and wellbeing. Taking into account respondents’ self-assessed health and socio-economic circumstances, perceptions of individual (15%) and collective adaptive capacity (10%) as well as community connectivity (13%) were associated with wellbeing (R2 = 0.36; F (9, 2099) = 132.9; p < 0.001). Cluster analysis found that 11% of respondents were particularly vulnerable, reporting below average scores on all indicators, with 56% of these reporting below threshold scores on subjective wellbeing. Conclusions: Addressing the capacity of individuals to work with others and to adapt to change serves as an important strategy in maintaining wellbeing in communities under stress. The human impacts of exogenous stressors appear to manifest themselves in poorer health outcomes; addressing primary stressors may in turn aid wellbeing. Longitudinal studies are indicated to verify these findings. Wellbeing may serve as a useful and parsimonious proxy measure for resilience and adaptive capacity. PMID:23924885
State-space self-tuner for on-line adaptive control
NASA Technical Reports Server (NTRS)
Shieh, L. S.
1994-01-01
Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments, constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approaches, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, the advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost and high performance microprocessors. Recently, we have developed a new hybrid state-space self tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.
NASA Astrophysics Data System (ADS)
Girard, Corentin; Rinaudo, Jean-Daniel; Pulido-Velazquez, Manuel
2016-04-01
Adaptation to the multiple facets of global change challenges the conventional means of sustainably planning and managing water resources at the river basin scale. Numerous demand or supply management options are available, from which adaptation measures need to be selected in a context of high uncertainty of future conditions. Given the interdependency of water users, agreements need to be found at the local level to implement the most effective adaptation measures. Therefore, this work develops an approach combining economics and water resources engineering to select a cost-effective programme of adaptation measures in the context of climate change uncertainty, and to define an equitable allocation of the cost of the adaptation plan between the stakeholders involved. A framework is developed to integrate inputs from the two main approaches commonly used to plan for adaptation. The first, referred to as "top-down", consists of a modelling chain going from global greenhouse gases emission scenarios to local hydrological models used to assess the impact of climate change on water resources. Conversely, the second approach, called "bottom-up", starts from assessing vulnerability at the local level to then identify adaptation measures used to face an uncertain future. The methodological framework presented in this contribution relies on a combination of these two approaches to support the selection of adaptation measures at the local level. Outcomes from these two approaches are integrated to select a cost-effective combination of adaptation measures through a least-cost optimization model developed at the river basin scale. The performances of a programme of measures are assessed under different climate projections to identify cost-effective and least-regret adaptation measures. The issue of allocating the cost of the adaptation plan is considered through two complementary perspectives. The outcome of a negotiation process between the stakeholders is modelled through the implementation of cooperative game theory to define cost allocation scenarios. These results are compared with cost allocation rules based on social justice principles to provide contrasted insights into a negotiation process. The interdisciplinary framework developed in this research combines economics and water resources engineering methods, establishing a promising means of bridging the gap between bottom-up and top-down approaches and supporting the creation of cost-effective and equitable adaptation plans at the local level. The approach has been applied to the Orb river basin in Southern France. Acknowledgements The study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) from the Spanish ministry MINECO (Ministerio de Economía y Competitividad) and European FEDER funds. Corentin Girard is supported by a grant from the University Lecturer Training Program (FPU12/03803) of the Ministry of Education, Culture and Sports of Spain.
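One standard cooperative-game device for the cost-allocation step is the Shapley value, which charges each stakeholder its average marginal cost over all orders in which the coalition could form. The sketch below is a minimal illustration with hypothetical costs for three water-user groups; it is not the negotiation model actually used in the study.

```python
from itertools import permutations
from math import factorial

def shapley(players, cost):
    """Shapley-value cost allocation: each player's share is its average
    marginal cost over all join orders of the grand coalition."""
    alloc = dict.fromkeys(players, 0.0)
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            alloc[p] += cost[coalition | {p}] - cost[coalition]
            coalition |= {p}
    n = factorial(len(players))
    return {p: round(v / n, 2) for p, v in alloc.items()}

# Hypothetical adaptation-plan costs: joint implementation is cheaper
# than the sum of the stand-alone plans (all numbers are made up).
cost = {frozenset(): 0.0,
        frozenset({"urban"}): 60.0, frozenset({"agri"}): 80.0,
        frozenset({"hydro"}): 40.0,
        frozenset({"urban", "agri"}): 110.0,
        frozenset({"urban", "hydro"}): 85.0,
        frozenset({"agri", "hydro"}): 100.0,
        frozenset({"urban", "agri", "hydro"}): 140.0}

# Shares sum to the grand-coalition cost of 140 (up to rounding)
print(shapley(["urban", "agri", "hydro"], cost))
```

Comparing such game-theoretic shares with allocations derived from social justice principles is exactly the kind of contrast the abstract describes.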
Mishra, Arabinda; Anderson, Adam W; Wu, Xi; Gore, John C; Ding, Zhaohua
2010-08-01
The purpose of this work is to design a neuronal fiber tracking algorithm, which will be more suitable for reconstruction of fibers associated with functionally important regions in the human brain. The functional activations in the brain normally occur in the gray matter regions. Hence the fibers bordering these regions are weakly myelinated, resulting in poor performance of conventional tractography methods to trace the fiber links between them. A lower fractional anisotropy in this region makes it even more difficult to track the fibers in the presence of noise. In this work, the authors focused on a stochastic approach to reconstruct these fiber pathways based on a Bayesian regularization framework. To estimate the true fiber direction (propagation vector), the a priori and conditional probability density functions are calculated in advance and are modeled as multivariate normal. The variance of the estimated tensor element vector is associated with the uncertainty due to noise and partial volume averaging (PVA). An adaptive and multiple sampling of the estimated tensor element vector, which is a function of the pre-estimated variance, overcomes the effect of noise and PVA in this work. The algorithm has been rigorously tested using a variety of synthetic data sets. The quantitative comparison of the results to standard algorithms motivated the authors to implement it for in vivo DTI data analysis. The algorithm has been implemented to delineate fibers in two major language pathways (Broca's to SMA and Broca's to Wernicke's) across 12 healthy subjects. Though the mean standard deviation was marginally larger than with the conventional (Euler's) approach [P. J. Basser et al., "In vivo fiber tractography using DT-MRI data," Magn. Reson. Med. 44(4), 625-632 (2000)], the number of extracted fibers in this approach was significantly higher. The authors also compared the performance of the proposed method to Lu's method [Y. Lu et al., "Improved fiber tractography with Bayesian tensor regularization," Neuroimage 31(3), 1061-1074 (2006)] and Friman's stochastic approach [O. Friman et al., "A Bayesian approach for stochastic white matter tractography," IEEE Trans. Med. Imaging 25(8), 965-978 (2006)]. Overall performance of the approach is found to be superior to the above two methods, particularly when the signal-to-noise ratio was low. The authors observed that an adaptive sampling of the tensor element vectors, estimated as a function of the variance in a Bayesian framework, can effectively delineate neuronal fibers to analyze the structure-function relationship in human brain. The simulated and in vivo results are in good agreement with the theoretical aspects of the algorithm.
Conflicts of thermal adaptation and fever--a cybernetic approach based on physiological experiments.
Werner, J; Beckmann, U
1998-01-01
Cold adaptation aims primarily at a better economy, i.e., preservation of energy often at the cost of a lower mean body temperature during cold stress, whereas heat adaptation whether achieved by exposure to a hot environment or by endogenous heat produced by muscle exercise, often brings about a higher efficiency of control, i.e., a lower mean body temperature during heat stress, at the cost of a higher water loss. While cold adaptation is beneficial in a cold environment, it may constitute a detrimental factor for exposure to a hot environment, mainly because of morphological adaptation. Heat adaptation may be maladaptive for cold exposure, mainly because of functional adaptation. Heat adaptation clearly is best suited to avoid higher body temperatures in fever, no matter which environmental conditions prevail. On the other hand, cold adaptation is detrimental for coping with fever in hot environment. Yet, in the cold, preceding cold adaptation may, because of reduced metabolic heat production, result in lower febrile increase of body temperature. Apparently controversial effects and results may be analyzed in the framework of a cybernetic approach to the main mechanisms of thermal adaptation and fever. Morphological adaptations alter the properties of the heat transfer characteristics of the body ("passive system"), whereas functional adaptation and fever concern the subsystems of control, namely sensors, integrative centers and effectors. In a closed control-loop the two types of adaptation have totally different consequences. It is shown that the experimental results are consistent with the predictions of such an approach.
Lagrangian methods of cosmic web classification
NASA Astrophysics Data System (ADS)
Fisher, J. D.; Faltenbacher, A.; Johnson, M. S. T.
2016-05-01
The cosmic web defines the large-scale distribution of matter we see in the Universe today. Classifying the cosmic web into voids, sheets, filaments and nodes allows one to explore structure formation and the role environmental factors have on halo and galaxy properties. While existing studies of cosmic web classification concentrate on grid-based methods, this work explores a Lagrangian approach where the V-web algorithm proposed by Hoffman et al. is implemented with techniques borrowed from smoothed particle hydrodynamics. The Lagrangian approach allows one to classify individual objects (e.g. particles or haloes) based on properties of their nearest neighbours in an adaptive manner. It can be applied directly to a halo sample which dramatically reduces computational cost and potentially allows an application of this classification scheme to observed galaxy samples. Finally, the Lagrangian nature admits a straightforward inclusion of the Hubble flow negating the necessity of a visually defined threshold value which is commonly employed by grid-based classification methods.
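The classification rule at the heart of the V-web scheme is compact enough to sketch: given the (symmetric) velocity shear tensor at a point, the number of eigenvalues above a threshold fixes the web class. Estimating the tensor from particles or haloes with SPH-style kernels is beyond this sketch, and the threshold value used here is an illustrative choice.

```python
import numpy as np

def vweb_class(shear, lam_th=0.44):
    """Classify a point of the cosmic web from its (symmetric) velocity
    shear tensor: the count of eigenvalues above lam_th maps to
    0 -> void, 1 -> sheet, 2 -> filament, 3 -> knot."""
    eigvals = np.linalg.eigvalsh(shear)
    return ["void", "sheet", "filament", "knot"][int((eigvals > lam_th).sum())]

# Toy shear tensor with two collapsing (eigenvalue > threshold) directions
Sigma = np.array([[0.9, 0.1, 0.0],
                  [0.1, 0.6, 0.0],
                  [0.0, 0.0, -0.2]])
print(vweb_class(Sigma))   # -> "filament"
```

In the Lagrangian variant described above, the tensor fed to this rule is built per object from its nearest neighbours rather than on a grid, which is what lets the method run directly on a halo or galaxy sample.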
Adaptive Cluster Sampling for Forest Inventories
Francis A. Roesch
1993-01-01
Adaptive cluster sampling is shown to be a viable alternative for sampling forests when there are rare characteristics of the forest trees which are of interest and occur on clustered trees. The ideas of recent work in Thompson (1990) have been extended to the case in which the initial sample is selected with unequal probabilities. An example is given in which the...
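A minimal, equal-probability version of the design reads as follows (the work cited above extends it to an initial sample drawn with unequal probabilities); the grid, the triggering condition, and the 4-neighbour rule are illustrative assumptions.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(3)

# Toy population: a 10x10 grid in which a rare attribute occurs in one cluster
grid = np.zeros((10, 10), dtype=int)
grid[2:4, 6:9] = rng.integers(1, 5, size=(2, 3))

def adaptive_cluster_sample(grid, n_init=5):
    """Equal-probability adaptive cluster sampling: draw a simple random
    sample of cells, then whenever a sampled cell meets the condition
    (count > 0) add its 4-neighbours, repeating until no cell triggers."""
    rows, cols = grid.shape
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    idx = rng.choice(len(cells), size=n_init, replace=False)
    sampled = {cells[i] for i in idx}
    queue = deque(sampled)
    while queue:
        r, c = queue.popleft()
        if grid[r, c] > 0:              # condition triggers neighbourhood search
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (r + dr, c + dc)
                if 0 <= nb[0] < rows and 0 <= nb[1] < cols and nb not in sampled:
                    sampled.add(nb)
                    queue.append(nb)
    return sampled

s = adaptive_cluster_sample(grid)
print(len(s), "cells sampled; total count found:",
      sum(grid[r, c] for r, c in s))
```

When an initial cell happens to land in the rare cluster, the neighbourhood expansion sweeps up the whole patch, which is why the design is efficient for clustered, rare attributes.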
Adaptation of a Weighted Regression Approach to Evaluate Water Quality Trends in an Estuary
To improve the description of long-term changes in water quality, we adapted a weighted regression approach to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach, originally developed to resolve pollutant transport trends in rivers...
Adaptation of a weighted regression approach to evaluate water quality trends in an estuary
To improve the description of long-term changes in water quality, a weighted regression approach developed to describe trends in pollutant transport in rivers was adapted to analyze a long-term water quality dataset from Tampa Bay, Florida. The weighted regression approach allows...
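Both entries describe adapting a weighted-regression trend method (in the spirit of WRTDS) to estuarine data. A toy version of the idea, fitting a local regression whose weights decay with both long-term and seasonal distance from the evaluation point, might look like this; the synthetic series, window widths, and regressors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic monthly record: slow decline + seasonal cycle + noise
t = np.arange(0, 20, 1 / 12)            # decimal years
conc = 5 - 0.08 * t + 0.6 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

def tricube(d, h):
    return np.clip(1 - (np.abs(d) / h) ** 3, 0, None) ** 3

def weighted_fit(t0, h_time=5.0, h_season=0.5):
    """Weighted least squares of conc on (1, t, sin, cos), with weights
    that decay with long-term and within-year distance from t0."""
    frac = np.abs((t - t0) % 1.0)
    season_d = np.minimum(frac, 1.0 - frac)          # distance within the year
    w = np.sqrt(tricube(t - t0, h_time) * tricube(season_d, h_season))
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    beta, *_ = np.linalg.lstsq(X * w[:, None], conc * w, rcond=None)
    return X[np.argmin(np.abs(t - t0))] @ beta       # fitted value near t0

print([round(float(weighted_fit(y)), 2) for y in (2.0, 10.0, 18.0)])  # declining
```

Because each evaluation point gets its own locally weighted fit, slow shifts in trend or seasonality show up naturally, which is what makes this family of methods attractive for long estuarine records.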
Jia, Peilin; Zhao, Zhongming
2014-01-01
A major challenge in interpreting the large volume of mutation data identified by next-generation sequencing (NGS) is to distinguish driver mutations from neutral passenger mutations to facilitate the identification of targetable genes and new drugs. Current approaches are primarily based on mutation frequencies of single-genes, which lack the power to detect infrequently mutated driver genes and ignore functional interconnection and regulation among cancer genes. We propose a novel mutation network method, VarWalker, to prioritize driver genes in large scale cancer mutation data. VarWalker fits generalized additive models for each sample based on sample-specific mutation profiles and builds on the joint frequency of both mutation genes and their close interactors. These interactors are selected and optimized using the Random Walk with Restart algorithm in a protein-protein interaction network. We applied the method in >300 tumor genomes in two large-scale NGS benchmark datasets: 183 lung adenocarcinoma samples and 121 melanoma samples. In each cancer, we derived a consensus mutation subnetwork containing significantly enriched consensus cancer genes and cancer-related functional pathways. These cancer-specific mutation networks were then validated using independent datasets for each cancer. Importantly, VarWalker prioritizes well-known, infrequently mutated genes, which are shown to interact with highly recurrently mutated genes yet have been ignored by conventional single-gene-based approaches. Utilizing VarWalker, we demonstrated that network-assisted approaches can be effectively adapted to facilitate the detection of cancer driver genes in NGS data. PMID:24516372
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while retaining an accuracy similar to that of the original method. More importantly the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
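As background to the method, the plain single-variable, non-adaptive NISP step can be sketched in a few lines: project a response onto probabilists' Hermite polynomials with Gauss-Hermite quadrature, then read the mean and variance off the coefficients. The toy response and truncation order below are illustrative; the paper's sparse-grid and basis-adaptive machinery is not reproduced.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def response(xi):                        # toy "code output", xi ~ N(0, 1)
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

order, n_quad = 6, 20
x, w = hermegauss(n_quad)                # nodes/weights for weight exp(-x^2/2)

# NISP: c_n = E[f(xi) He_n(xi)] / n!, with the expectation evaluated by
# Gauss-Hermite quadrature (normalized by sqrt(2*pi)).
coeffs = []
for n in range(order + 1):
    basis = np.zeros(n + 1); basis[n] = 1.0         # coefficients of He_n
    proj = np.sum(w * response(x) * hermeval(x, basis)) / sqrt(2 * pi)
    coeffs.append(proj / factorial(n))

mean = coeffs[0]
variance = sum(c ** 2 * factorial(n) for n, c in enumerate(coeffs) if n > 0)
print(f"PCE  mean {mean:.4f}, variance {variance:.4f}")

xi = np.random.default_rng(5).standard_normal(200_000)   # Monte Carlo check
print(f"MC   mean {response(xi).mean():.4f}, variance {response(xi).var():.4f}")
```

In higher dimensions the number of quadrature nodes and basis vectors explodes, which is precisely the cost that the sparse-grid and basis-adaptive refinements described in the abstract are designed to contain.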
Flight Approach to Adaptive Control Research
NASA Technical Reports Server (NTRS)
Pavlock, Kate Maureen; Less, James L.; Larson, David Nils
2011-01-01
The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The testbed served as a full-scale vehicle to test and validate adaptive flight control research addressing technical challenges involved with reducing risk to enable safe flight in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.
A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework
NASA Astrophysics Data System (ADS)
Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco
2017-11-01
We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree adaptation technique specifically designed to be effective for LES applications is proposed and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach achieves significant reductions in the computational cost of representative LES computations.
Self-Avoiding Walks Over Adaptive Triangular Grids
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1999-01-01
Space-filling curves are a popular approach based on a geometric embedding for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.
Observer-Based Adaptive Neural Network Control for Nonlinear Systems in Nonstrict-Feedback Form.
Chen, Bing; Zhang, Huaguang; Lin, Chong
2016-01-01
This paper focuses on the problem of adaptive neural network (NN) control for a class of nonlinear nonstrict-feedback systems via output feedback. A novel adaptive NN backstepping output-feedback control approach is first proposed for nonlinear nonstrict-feedback systems. The monotonicity of system bounding functions and the structure character of radial basis function (RBF) NNs are used to overcome the difficulties that arise from the nonstrict-feedback structure. A state observer is constructed to estimate the immeasurable state variables. By combining the adaptive backstepping technique with the approximation capability of radial basis function NNs, an output-feedback adaptive NN controller is designed through the backstepping approach. It is shown that the proposed controller guarantees semiglobal boundedness of all the signals in the closed-loop systems. Two examples are used to illustrate the effectiveness of the proposed approach.
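A tiny illustration of the RBF NN ingredient (not the observer or backstepping design) is sketched below: a radial-basis network whose weights are adapted online by a sigma-modified gradient law to approximate an unknown scalar nonlinearity. The gains, centers, and widths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Online RBF approximation of an unknown scalar nonlinearity f(x) using a
# sigma-modified gradient law for the weights (all settings are arbitrary).
centers = np.linspace(-3, 3, 15)
width = 0.5

def phi(x):                              # Gaussian radial basis vector
    return np.exp(-(x - centers) ** 2 / (2 * width ** 2))

def f_true(x):                           # the "unknown" function
    return np.sin(2 * x) + 0.3 * x

W = np.zeros_like(centers)
gamma, sigma_mod, dt = 5.0, 1e-3, 1e-2
for _ in range(20_000):
    x = rng.uniform(-3, 3)
    e = f_true(x) - W @ phi(x)           # pointwise approximation error
    W += dt * (gamma * e * phi(x) - sigma_mod * W)   # robust adaptation law

xs = np.linspace(-3, 3, 7)
print(np.round([float(f_true(v) - W @ phi(v)) for v in xs], 3))  # residuals shrink
```

In the full control design the same weight-update idea runs inside the closed loop, with the error signal supplied by the observer rather than by direct evaluation of the unknown function.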
Durand, Guillaume
2018-05-03
Although highly debated, the notion of the existence of an adaptive side to psychopathy is supported by some researchers. Currently, 2 instruments assessing psychopathic traits include an adaptive component, which might not cover the full spectrum of adaptive psychopathic traits. The Durand Adaptive Psychopathic Traits Questionnaire (DAPTQ; Durand, 2017) is a 41-item self-reported instrument assessing adaptive traits known to correlate with the psychopathic personality. In this study, I investigated in 2 samples (N = 263 and N = 262) the incremental validity of the DAPTQ over the Psychopathic Personality Inventory-Short Form (PPI-SF) and the Triarchic Psychopathy Measure (TriPM) using multiple criterion measures. Results showed that the DAPTQ significantly increased the predictive validity over the PPI-SF on 5 factors of the HEXACO. Additionally, the DAPTQ provided incremental validity over both the PPI-SF and the TriPM on measures of communication adaptability, perceived stress, and trait anxiety. Overall, these results support the validity of the DAPTQ in community samples. Directions for future studies to further validate the DAPTQ are discussed.
NASA Astrophysics Data System (ADS)
Thebo, A.
2016-12-01
Urban wastewater provides a reliable, nutrient rich source of irrigation water for downstream agricultural producers. However, globally, less than ten percent of collected wastewater receives any form of treatment, resulting in the widespread indirect reuse of untreated, diluted wastewater from surface water sources. This research explores the links between water scarcity, anthropogenic drivers of water quality, and the adaptation strategies farmers employ, through a case study in Dharwad, a mid-sized South Indian city. This study took an interdisciplinary approach, incorporating survey-based research with geospatial analysis, and molecular methods (for waterborne pathogen detection) to develop a systems level understanding of the drivers, health risks, and adaptation strategies associated with the indirect reuse of wastewater in irrigated agriculture. In Dharwad, farmers with better access to wastewater reported growing more water-intensive, but higher value vegetable crops, while farmers further downstream tended to grow more staple crops. This study evaluated levels of culturable E. coli and diarrheagenic E. coli pathotype gene targets to assess contamination in irrigation water, soil, and on produce from farms. Irrigation water source was a major factor affecting the concentrations of culturable E. coli detected in soil samples and on greens. However, even when irrigation water was not contaminated (all borewell water samples) some culturable E. coli were present at low concentrations in soil and on produce samples, suggesting additional sources of contamination on farms. Maximum temperatures within the previous week showed a significant positive association with concentrations of E. coli on wastewater irrigated produce. This presentation will focus on discussing the ways in which urban wastewater management, climate, irrigation practices and cultivation patterns all come together to define the risks and benefits posed via the indirect reuse of wastewater.
Tank, David C.
2016-01-01
Advances in high-throughput sequencing (HTS) have allowed researchers to obtain large amounts of biological sequence information at speeds and costs unimaginable only a decade ago. Phylogenetics, and the study of evolution in general, is quickly migrating towards using HTS to generate larger and more complex molecular datasets. In this paper, we present a method that utilizes microfluidic PCR and HTS to generate large amounts of sequence data suitable for phylogenetic analyses. The approach uses the Fluidigm Access Array System (Fluidigm, San Francisco, CA, USA) and two sets of PCR primers to simultaneously amplify 48 target regions across 48 samples, incorporating sample-specific barcodes and HTS adapters (2,304 unique amplicons per Access Array). The final product is a pooled set of amplicons ready to be sequenced, and thus, there is no need to construct separate, costly genomic libraries for each sample. Further, we present a bioinformatics pipeline to process the raw HTS reads to either generate consensus sequences (with or without ambiguities) for every locus in every sample or—more importantly—recover the separate alleles from heterozygous target regions in each sample. This is important because it adds allelic information that is well suited for coalescent-based phylogenetic analyses that are becoming very common in conservation and evolutionary biology. To test our approach and bioinformatics pipeline, we sequenced 576 samples across 96 target regions belonging to the South American clade of the genus Bartsia L. in the plant family Orobanchaceae. After sequencing cleanup and alignment, the experiment resulted in ~25,300bp across 486 samples for a set of 48 primer pairs targeting the plastome, and ~13,500bp for 363 samples for a set of primers targeting regions in the nuclear genome. Finally, we constructed a combined concatenated matrix from all 96 primer combinations, resulting in a combined aligned length of ~40,500bp for 349 samples. PMID:26828929
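As one concrete piece of such a pipeline, building a consensus sequence with IUPAC ambiguity codes at putative heterozygous sites can be sketched as below; the alignment is assumed given, and the frequency threshold is an illustrative choice rather than the pipeline's actual rule.

```python
from collections import Counter

IUPAC = {frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("AC"): "M",
         frozenset("GT"): "K", frozenset("AT"): "W", frozenset("CG"): "S"}

def consensus(aligned_reads, min_frac=0.3):
    """Column-wise consensus for one sample/locus. A single frequent base
    is kept; two frequent bases yield an IUPAC ambiguity code, standing in
    for a heterozygous site; anything messier becomes N."""
    out = []
    for column in zip(*aligned_reads):
        counts = Counter(b for b in column if b != "-")
        total = sum(counts.values()) or 1
        frequent = {b for b, n in counts.items() if n / total >= min_frac}
        if len(frequent) == 1:
            out.append(frequent.pop())
        elif len(frequent) == 2:
            out.append(IUPAC.get(frozenset(frequent), "N"))
        else:
            out.append("N")
    return "".join(out)

reads = ["ACGTACGT",
         "ACGTACGT",
         "ACGAACGT",      # A/T polymorphism at the fourth position
         "ACGAACGT"]
print(consensus(reads))   # -> "ACGWACGT"
```

Recovering the two separate alleles, as the pipeline described above does, goes a step further by phasing reads into haplotype groups rather than collapsing them into one ambiguous string.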
USDA-ARS?s Scientific Manuscript database
Although evolution is now recognized as improving the invasive success of populations, where and when key adaptation event(s) occur often remains unclear. Here we used a multidisciplinary approach to disentangle the eco-evolutionary scenario of invasion of a Mediterranean zone (i.e. Israel) by the t...
NASA Astrophysics Data System (ADS)
Bhave, Ajay; Dessai, Suraje; Conway, Declan; Stainforth, David
2016-04-01
Deep uncertainty in future climate change and socio-economic conditions necessitates the use of assess-risk-of-policy approaches over predict-then-act approaches for adaptation decision making. Robust Decision Making (RDM) approaches embody this principle and help evaluate the ability of adaptation options to satisfy stakeholder preferences under wide-ranging future conditions. This study involves the simultaneous application of two RDM approaches; qualitative and quantitative, in the Cauvery River Basin in Karnataka (population ~23 million), India. The study aims to (a) determine robust water resources adaptation options for the 2030s and 2050s and (b) compare the usefulness of a qualitative stakeholder-driven approach with a quantitative modelling approach. For developing a large set of future scenarios a combination of climate narratives and socio-economic narratives was used. Using structured expert elicitation with a group of climate experts in the Indian Summer Monsoon, climatic narratives were developed. Socio-economic narratives were developed to reflect potential future urban and agricultural water demand. In the qualitative RDM approach, a stakeholder workshop helped elicit key vulnerabilities, water resources adaptation options and performance criteria for evaluating options. During a second workshop, stakeholders discussed and evaluated adaptation options against the performance criteria for a large number of scenarios of climatic and socio-economic change in the basin. In the quantitative RDM approach, a Water Evaluation And Planning (WEAP) model was forced by precipitation and evapotranspiration data, coherent with the climatic narratives, together with water demand data based on socio-economic narratives. We find that compared to business-as-usual conditions options addressing urban water demand satisfy performance criteria across scenarios and provide co-benefits like energy savings and reduction in groundwater depletion, while options reducing agricultural water demand significantly affect downstream water availability. Water demand options demonstrate potential to improve environmental flow conditions and satisfy legal water supply requirements for downstream riparian states. On the other hand, currently planned large scale infrastructural projects demonstrate reduced value in certain scenarios, illustrating the impacts of lock-in effects of large scale infrastructure. From a methodological perspective, we find that while the stakeholder-driven approach revealed robust options in a resource-light manner and helped initiate much needed interaction amongst stakeholders, the modelling approach provides complementary quantitative information. The study reveals robust adaptation options for this important basin and provides a strong methodological basis for carrying out future studies that support adaptation decision making.
Simple robust control laws for robot manipulators. Part 2: Adaptive case
NASA Technical Reports Server (NTRS)
Bayard, D. S.; Wen, J. T.
1987-01-01
A new class of asymptotically stable adaptive control laws is introduced for application to the robotic manipulator. Unlike most applications of adaptive control theory to robotic manipulators, this analysis addresses the nonlinear dynamics directly without approximation, linearization, or ad hoc assumptions, and utilizes a parameterization based on physical (time-invariant) quantities. This approach is made possible by using energy-like Lyapunov functions which retain the nonlinear character and structure of the dynamics, rather than the simple quadratic forms which are ubiquitous in the adaptive control literature and which have bound the theory tightly to linear systems with unknown parameters. It is a unique feature of these results that the adaptive forms arise by straightforward certainty equivalence adaptation of their nonadaptive counterparts found in the companion to this paper (i.e., by replacing unknown quantities by their estimates) and that this simple approach leads to asymptotically stable closed-loop adaptive systems. Furthermore, it is emphasized that this approach does not require convergence of the parameter estimates (i.e., via persistent excitation), invertibility of the mass matrix estimate, or measurement of the joint accelerations.
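As a rough illustration of certainty-equivalence adaptation on a manipulator, the sketch below simulates a one-link arm under a generic Slotine-Li-style adaptive law. This is a simplified stand-in for the paper's own class of laws; the plant parameters, gains, and reference trajectory are all hypothetical, and note that, as in the paper, no joint-acceleration measurement is needed.

    import numpy as np

    # One-link arm: a*qdd + b*qd + c*sin(q) = u, with theta = [a, b, c] unknown.
    theta_true = np.array([0.5, 0.2, 3.0])        # plant parameters (hidden from controller)
    theta_hat = np.zeros(3)                       # parameter estimates
    lam, kd, dt = 5.0, 2.0, 1e-3
    gamma = np.diag([1.0, 1.0, 2.0])              # adaptation gains
    q, qd = 0.0, 0.0
    for k in range(20000):
        t = k * dt
        qdes, qddes, qdddes = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory
        e, ed = q - qdes, qd - qddes
        s = ed + lam * e                          # composite tracking error
        qr_d, qr_dd = qddes - lam * e, qdddes - lam * ed
        Y = np.array([qr_dd, qr_d, np.sin(q)])    # regressor; no acceleration measurement
        u = Y @ theta_hat - kd * s                # certainty-equivalence control law
        theta_hat -= dt * (gamma @ (Y * s))       # adaptation law (estimates replace unknowns)
        qdd = (u - theta_true[1] * qd - theta_true[2] * np.sin(q)) / theta_true[0]
        q, qd = q + dt * qd, qd + dt * qdd        # Euler step of the true plant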
QPSO-Based Adaptive DNA Computing Algorithm
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly applied to optimization and data analysis in recent years. However, the DNA computing algorithm has limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach runs the DNA computing algorithm with adaptive parameters, tuned towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The proposed QPSO-based adaptive DNA computing algorithm contributes the following: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously; (2) the adaptation is driven by QPSO for goal-directed progress, faster operation, and flexibility with respect to the data; and (3) the approach is realized numerically in a system identification setting. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with MATLAB and an FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy relative to the standard DNA computing algorithm. PMID:23935409
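For readers unfamiliar with QPSO itself, a minimal sketch of the quantum-behaved update rule follows, here tuning a two-parameter toy fitness as a hypothetical stand-in for the DNA computing parameters (e.g., crossover and mutation rates); the fitness function and bounds are assumptions, not the paper's setup.

    import numpy as np

    def qpso_minimize(fitness, bounds, n_particles=20, n_iter=100, seed=0):
        """Minimize `fitness` with quantum-behaved PSO; returns the best position."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        dim = len(bounds)
        x = rng.uniform(lo, hi, size=(n_particles, dim))    # particle positions
        pbest = x.copy()                                    # personal bests
        pbest_f = np.array([fitness(p) for p in pbest])
        for it in range(n_iter):
            gbest = pbest[np.argmin(pbest_f)]               # global best
            mbest = pbest.mean(axis=0)                      # mean best position
            beta = 1.0 - 0.5 * it / n_iter                  # contraction-expansion coeff.
            phi = rng.random((n_particles, dim))
            attractor = phi * pbest + (1 - phi) * gbest     # local attractors
            u = rng.random((n_particles, dim))
            sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
            x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
            x = np.clip(x, lo, hi)
            f = np.array([fitness(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
        return pbest[np.argmin(pbest_f)]

    # Hypothetical stand-ins for two DNA-computing parameters (crossover, mutation):
    best = qpso_minimize(lambda v: (v[0] - 0.7) ** 2 + (v[1] - 0.05) ** 2,
                         bounds=[(0.0, 1.0), (0.0, 0.2)])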
Evaluating Adaptive Governance Approaches to Sustainable Water Management in North-West Thailand
NASA Astrophysics Data System (ADS)
Clark, Julian R. A.; Semmahasak, Chutiwalanch
2013-04-01
Adaptive governance is advanced as a potent means of addressing institutional fit of natural resource systems with prevailing modes of political-administrative management. Its advocates also argue that it enhances participatory and learning opportunities for stakeholders over time. Yet an increasing number of studies demonstrate real difficulties in implementing adaptive governance `solutions'. This paper builds on these debates by examining the introduction of adaptive governance to water management in Chiang Mai province, north-west Thailand. The paper considers, first, the limitations of current water governance modes at the provincial scale, and the rationale for implementation of an adaptive approach. The new approach is then critically examined, with its initial performance and likely future success evaluated by (i) analysis of water stakeholders' opinions of its first year of operation; and (ii) comparison of its governance attributes against recent empirical accounts of implementation difficulty and failure of adaptive governance of natural resource management more generally. The analysis confirms the potentially significant role that the new approach can play in brokering and resolving the underlying differences in stakeholder representation and knowledge construction at the heart of the prevailing water governance modes in north-west Thailand.
NASA Astrophysics Data System (ADS)
Zhang, Guannan; Lu, Dan; Ye, Ming; Gunzburger, Max; Webster, Clayton
2013-10-01
Bayesian analysis has become vital to uncertainty quantification in groundwater modeling, but its application has been hindered by the computational cost associated with the numerous model executions required to explore the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, a new approach is developed to improve the computational efficiency of Bayesian inference by constructing a surrogate of the PPDF, using an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using a first-order hierarchical basis, this paper utilizes a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of required model executions. In addition, using the hierarchical surplus as an error indicator allows locally adaptive refinement of sparse grids in the parameter space, which further improves computational efficiency. To efficiently build the surrogate system for a PPDF with multiple significant modes, optimization techniques are used to identify the modes, for which high-probability regions are defined and components of the aSG-hSC approximation are constructed. After the surrogate is determined, the PPDF can be evaluated by sampling the surrogate system directly without model execution, resulting in improved efficiency of the surrogate-based MCMC compared with conventional MCMC. The developed method is evaluated using two synthetic groundwater reactive transport models. The first example involves coupled linear reactions and demonstrates the accuracy of our high-order hierarchical basis approach in approximating the high-dimensional posterior distribution. The second example is highly nonlinear because of the reactions of uranium surface complexation, and demonstrates how the iterative aSG-hSC method is able to capture the multimodal and non-Gaussian features of the PPDF caused by model nonlinearity. Both experiments show that aSG-hSC is an effective and efficient tool for Bayesian inference.
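The core idea, sampling a cheap surrogate of the (log-)PPDF instead of the model, can be sketched with a generic interpolant standing in for the aSG-hSC hierarchical basis. Everything below (the toy log-posterior, the RBF surrogate, design size, proposal scale) is assumed for illustration only.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Stand-in "expensive model": imagine each call runs a groundwater simulation.
    def log_posterior(theta):
        return -0.5 * np.sum((theta - 1.0) ** 2) / 0.25      # toy Gaussian log-PPDF

    rng = np.random.default_rng(1)

    # 1) Build a surrogate of the log-PPDF from a modest budget of model runs.
    design = rng.uniform(-2.0, 4.0, size=(200, 2))           # parameter design points
    values = np.array([log_posterior(t) for t in design])
    surrogate = RBFInterpolator(design, values)              # generic stand-in for aSG-hSC

    # 2) Metropolis sampling on the surrogate: no further model executions needed.
    theta = np.zeros(2)
    lp = surrogate(theta[None])[0]
    chain = []
    for _ in range(5000):
        prop = theta + 0.3 * rng.standard_normal(2)
        lp_prop = surrogate(prop[None])[0]
        if np.log(rng.random()) < lp_prop - lp:              # accept/reject on surrogate
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    chain = np.array(chain)                                  # surrogate-based MCMC samples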
Probabilistic co-adaptive brain-computer interfacing
NASA Astrophysics Data System (ADS)
Bryan, Matthew J.; Martin, Stefan A.; Cheung, Willy; Rao, Rajesh P. N.
2013-12-01
Objective. Brain-computer interfaces (BCIs) are confronted with two fundamental challenges: (a) the uncertainty associated with decoding noisy brain signals, and (b) the need for co-adaptation between the brain and the interface so as to cooperatively achieve a common goal in a task. We seek to mitigate these challenges. Approach. We introduce a new approach to brain-computer interfacing based on partially observable Markov decision processes (POMDPs). POMDPs provide a principled approach to handling uncertainty and achieving co-adaptation in the following manner: (1) Bayesian inference is used to compute posterior probability distributions (‘beliefs’) over brain and environment state, and (2) actions are selected based on entire belief distributions in order to maximize total expected reward; by employing methods from reinforcement learning, the POMDP’s reward function can be updated over time to allow for co-adaptive behaviour. Main results. We illustrate our approach using a simple non-invasive BCI which optimizes the speed-accuracy trade-off for individual subjects based on the signal-to-noise characteristics of their brain signals. We additionally demonstrate that the POMDP BCI can automatically detect changes in the user’s control strategy and can co-adaptively switch control strategies on-the-fly to maximize expected reward. Significance. Our results suggest that the framework of POMDPs offers a promising approach for designing BCIs that can handle uncertainty in neural signals and co-adapt with the user on an ongoing basis. The fact that the POMDP BCI maintains a probability distribution over the user’s brain state allows a much more powerful form of decision making than traditional BCI approaches, which have typically been based on the output of classifiers or regression techniques. Furthermore, the co-adaptation of the system allows the BCI to make online improvements to its behaviour, adjusting itself automatically to the user’s changing circumstances.
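A minimal numerical sketch of the two POMDP ingredients named above, Bayesian belief updating and action selection over the entire belief distribution, might look as follows; the two-state decoding problem and all probabilities and rewards are hypothetical.

    import numpy as np

    # Two hidden states (user intends LEFT or RIGHT); noisy decoder observations.
    P_obs = np.array([[0.8, 0.2],        # P(obs | state = LEFT)
                      [0.3, 0.7]])       # P(obs | state = RIGHT)
    R = np.array([[ 1.0, -1.0, -0.1],    # reward(state, action): left, right, wait
                  [-1.0,  1.0, -0.1]])

    def belief_update(belief, obs):
        """Bayesian posterior over brain state given a decoder observation."""
        posterior = belief * P_obs[:, obs]
        return posterior / posterior.sum()

    def select_action(belief):
        """Pick the action maximizing expected reward over the whole belief."""
        return int(np.argmax(belief @ R))

    belief = np.array([0.5, 0.5])
    for obs in [0, 0, 1, 0]:             # a short stream of decoded observations
        belief = belief_update(belief, obs)
        action = select_action(belief)   # 'wait' trades speed for accuracy

The small negative reward for waiting is what encodes the speed-accuracy trade-off: the policy waits for more observations only while the belief is too uncertain to justify committing.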
NASA Astrophysics Data System (ADS)
Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi
2016-10-01
Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, VAUS and regular adaptive umbrella sampling (AUS) methods are still computationally expensive. To decrease the computational burden further, improvements of VAUS for all-atom explicit solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation, parameterization of biasing forces by iterative polynomial fitting, and force scaling. When applied to study Ala-pentapeptide dimerization in explicit solvent, these improvements showed an advantage over regular AUS. The improved VAUS makes larger biological systems amenable to simulation.
Highly accurate adaptive TOF determination method for ultrasonic thickness measurement
NASA Astrophysics Data System (ADS)
Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing
2018-04-01
Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method is developed to improve the accuracy of TOF determination. An improved variable step size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The results demonstrate the performance advantage of the proposed TOF determination method over existing methods. Compared with the conventional fixed step size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, making it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates with various thicknesses. They indicated that the proposed TOF determination method is robust even under low SNR conditions and that ultrasonic thickness measurement accuracy can be significantly improved.
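A bare-bones version of variable-step-size adaptive time-delay estimation is sketched below; the Kwong-style step-size recursion is used as a stand-in for the paper's comprehensive step-size control function, and the cubic-spline sub-sample refinement is omitted.

    import numpy as np

    def vss_lms_delay(ref, rcv, taps=64, mu_min=1e-4, mu_max=0.05,
                      alpha=0.97, gamma=1e-3):
        """Estimate the TOF (in samples) as the dominant tap of an adaptive filter."""
        w = np.zeros(taps)
        mu = mu_max
        for n in range(taps - 1, len(ref)):
            x = ref[n - taps + 1:n + 1][::-1]      # reference regressor
            e = rcv[n] - w @ x                     # prediction error
            mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)  # variable step size
            w += 2 * mu * e * x / (x @ x + 1e-12)  # normalized LMS update
        return int(np.argmax(np.abs(w)))           # delay = index of largest tap

    rng = np.random.default_rng(0)
    ref = rng.standard_normal(4000)
    rcv = np.roll(ref, 25) + 0.3 * rng.standard_normal(4000)   # echo delayed 25 samples
    print(vss_lms_delay(ref, rcv))                 # ~25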
Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling
2016-05-01
Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image based on the local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns in a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships degrades the capture of discriminative features in complex samples, such as medical images obtained by microscope. To address this problem, we propose a novel method that improves local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations is performed on four medical datasets, showing that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
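To make the idea concrete, here is a hedged sketch of an LBP variant with a per-pixel radius; the variance-threshold rule for choosing the radius is a hypothetical stand-in, since the paper's own adaptive criterion and the spatial adjacent histogram step are not reproduced.

    import numpy as np

    def adaptive_lbp(img, radii=(1, 2, 3), var_thresholds=(25.0, 100.0)):
        """LBP codes with a per-pixel neighborhood radius (hypothetical rule)."""
        h, w = img.shape
        rmax = max(radii)
        angles = np.arange(8) * np.pi / 4
        codes = np.zeros((h, w), dtype=np.uint8)
        for y in range(rmax, h - rmax):
            for x in range(rmax, w - rmax):
                v = img[y - 1:y + 2, x - 1:x + 2].var()              # local contrast
                r = radii[int(np.searchsorted(var_thresholds, v))]   # pick a radius
                code = 0
                for k, a in enumerate(angles):                       # 8 points on the circle
                    ny = int(round(y + r * np.sin(a)))
                    nx = int(round(x + r * np.cos(a)))
                    code |= int(img[ny, nx] >= img[y, x]) << k
                codes[y, x] = code
        return codes

    img = np.random.default_rng(0).integers(0, 256, (64, 64))
    hist = np.bincount(adaptive_lbp(img).ravel(), minlength=256)     # histogram feature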
Landscape genetics, adaptive diversity and population structure in Phaseolus vulgaris.
Rodriguez, Monica; Rau, Domenico; Bitocchi, Elena; Bellucci, Elisa; Biagetti, Eleonora; Carboni, Andrea; Gepts, Paul; Nanni, Laura; Papa, Roberto; Attene, Giovanna
2016-03-01
Here we studied the organization of genetic variation of the common bean (Phaseolus vulgaris) in its centres of domestication. We used 131 single nucleotide polymorphisms to investigate 417 wild common bean accessions and a representative sample of 160 domesticated genotypes, including Mesoamerican and Andean genotypes, for a total of 577 accessions. By analysing the genetic spatial patterns of the wild common bean, we documented the existence of several genetic groups and the occurrence of variable degrees of diversity in Mesoamerica and the Andes. Moreover, using a landscape genetics approach, we demonstrated that both demographic processes and selection for adaptation were responsible for the observed genetic structure. We showed that the study of correlations between markers and ecological variables at a continental scale can help in identifying local adaptation genes. We also located putative areas of common bean domestication in Mesoamerica, in the Oaxaca Valley, and the Andes, in southern Bolivia-northern Argentina. These observations are of paramount importance for the conservation and exploitation of the genetic diversity preserved within this species and other plant genetic resources. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Spiritual activities as a resistance resource for women with human immunodeficiency virus.
Sowell, R; Moneyham, L; Hennessy, M; Guillory, J; Demi, A; Seals, B
2000-01-01
Few studies have investigated the role that spiritual activities play in the adaptational outcomes of women with human immunodeficiency virus (HIV) disease. To examine the role of spiritual activities as a resource that may reduce the negative effects of disease-related stressors on the adaptational outcomes in HIV-infected women. A theoretically based causal model was tested to examine the role of spiritual activities as a moderator of the impact of HIV-related stressors (functional impairment, work impairment, and HIV-related symptoms) on two stress-related adaptational outcomes (emotional distress and quality of life), using a clinic-based sample of 184 HIV-positive women. Findings indicated that as spiritual activities increased, emotional distress decreased even when adjustments were made for HIV-related stressors. A positive relationship between spiritual activities and quality of life was found, which approached significance. Findings showed that HIV-related stressors have a significant negative effect on both emotional distress and quality of life. The findings support the hypothesis that spiritual activities are an important psychological resource accounting for individual variability in adjustment to the stressors associated with HIV disease.
Wages, N A; Slingluff, C L; Petroni, G R
2017-04-01
In recent years, investigators have asserted that the 3 + 3 design lacks flexibility, making its use in modern early-phase trial settings, such as combinations and/or biological agents, inefficient. More innovative approaches are required to address contemporary research questions, such as those posed in trials involving immunotherapies. We describe the implementation of an adaptive design for identifying an optimal treatment regimen, defined by low toxicity and high immune response, in an early-phase trial of a melanoma helper peptide vaccine plus novel adjuvant combinations. Operating characteristics demonstrate the ability of the method to effectively recommend optimal regimens in a high percentage of trials with reasonable sample sizes. The proposed design is a practical, early-phase, adaptive method for use with combined immunotherapy regimens. This design can be applied more broadly to early-phase combination studies, as it was used in an ongoing study of two small molecule inhibitors in relapsed/refractory mantle cell lymphoma. © The Author 2016. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Loudig, Olivier; Liu, Christina; Rohan, Thomas; Ben-Dov, Iddo Z
2018-05-05
Archived, clinically classified formalin-fixed paraffin-embedded (FFPE) tissues can provide nucleic acids for retrospective molecular studies of cancer development. By using non-invasive or pre-malignant lesions from patients who later develop invasive disease, gene expression analyses may help identify early molecular alterations that predispose to cancer risk. It has been well described that nucleic acids recovered from FFPE tissues have undergone severe physical damage and chemical modifications, which make their analysis difficult and generally require adapted assays. MicroRNAs (miRNAs), however, which represent a small class of RNA molecules spanning only ~18-24 nucleotides, have been shown to withstand long-term storage and have been successfully analyzed in FFPE samples. Here we present a 3' barcoded complementary DNA (cDNA) library preparation protocol specifically optimized for the analysis of small RNAs extracted from archived tissues, which was recently demonstrated to be robust and highly reproducible when using archived clinical specimens stored for up to 35 years. This library preparation is well adapted to the multiplex analysis of compromised/degraded material, where RNA samples (up to 18) are ligated with individual 3' barcoded adapters and then pooled together for subsequent enzymatic and biochemical preparations prior to analysis. All purifications are performed by polyacrylamide gel electrophoresis (PAGE), which allows size-specific selection and enrichment of barcoded small RNA species. This cDNA library preparation is well adapted to minute RNA inputs, as a pilot polymerase chain reaction (PCR) allows determination of a specific amplification cycle to produce optimal amounts of material for next-generation sequencing (NGS). This approach was optimized for the use of degraded FFPE RNA from specimens archived for up to 35 years and provides highly reproducible NGS data.
Satlin, Andrew; Wang, Jinping; Logovinsky, Veronika; Berry, Scott; Swanson, Chad; Dhadda, Shobha; Berry, Donald A
2016-01-01
Recent failures in phase 3 clinical trials in Alzheimer's disease (AD) suggest that novel approaches to drug development are urgently needed. Phase 3 risk can be mitigated by ensuring that clinical efficacy is established before initiating confirmatory trials, but traditional phase 2 trials in AD can be lengthy and costly. We designed a Bayesian adaptive phase 2, proof-of-concept trial with a clinical endpoint to evaluate BAN2401, a monoclonal antibody targeting amyloid protofibrils. The study design used dose response and longitudinal modeling. Simulations were used to refine study design features to achieve optimal operating characteristics. The study design includes five active treatment arms plus placebo, a clinical 12-month primary endpoint, and a maximum sample size of 800. The average overall probability of success is ≥80% when at least one dose shows a treatment effect that would be considered clinically meaningful. Using frequent interim analyses, the randomization ratios are adapted based on the clinical endpoint, and the trial can be stopped for success or futility before full enrollment. Bayesian statistics can enhance the efficiency of analyzing the study data. The adaptive randomization generates more data on doses that appear to be more efficacious, which can improve dose selection for phase 3. The interim analyses permit stopping as soon as a predefined signal is detected, which can accelerate decision making. Both features can reduce the size and duration of the trial. This study design can mitigate some of the risks associated with advancing to phase 3 in the absence of data demonstrating clinical efficacy. Limitations to the approach are discussed.
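The adaptive-randomization mechanics can be sketched with a much simpler binary-outcome model; the Thompson-style allocation, the response rates, the cohort size (16 cohorts of 50, matching the 800-patient maximum), and the stopping threshold below are all illustrative assumptions, not the trial's actual dose-response and longitudinal model.

    import numpy as np

    rng = np.random.default_rng(42)
    true_resp = [0.15, 0.18, 0.25, 0.35, 0.40, 0.42]     # placebo + 5 doses (hidden)
    succ, fail = np.zeros(6), np.zeros(6)                # Beta(1, 1) priors per arm

    for interim in range(16):                            # 16 cohorts of 50 -> max 800
        draws = rng.beta(succ + 1, fail + 1)             # Thompson-style posterior draws
        weights = draws / draws.sum()
        weights[0] = max(weights[0], 0.15)               # keep a placebo floor
        weights /= weights.sum()
        arms = rng.choice(6, size=50, p=weights)         # adapted randomization ratios
        for a in arms:                                   # observe cohort outcomes
            if rng.random() < true_resp[a]:
                succ[a] += 1
            else:
                fail[a] += 1
        best = int(np.argmax(succ / np.maximum(succ + fail, 1)))
        post = (rng.beta(succ[best] + 1, fail[best] + 1, 10000) >
                rng.beta(succ[0] + 1, fail[0] + 1, 10000))
        if post.mean() > 0.95:                           # early stop for success
            break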
Embedding clinical interventions into observational studies.
Newman, Anne B; Avilés-Santa, M Larissa; Anderson, Garnet; Heiss, Gerardo; Howard, Wm James; Krucoff, Mitchell; Kuller, Lewis H; Lewis, Cora E; Robinson, Jennifer G; Taylor, Herman; Treviño, Roberto P; Weintraub, William
2016-01-01
Novel approaches to observational studies and clinical trials could improve the cost-effectiveness and speed of translation of research. Hybrid designs that combine elements of clinical trials with observational registries or cohort studies should be considered as part of a long-term strategy to transform clinical trials and epidemiology, adapting to the opportunities of big data and the challenges of constrained budgets. Important considerations include study aims, timing, breadth and depth of the existing infrastructure that can be leveraged, participant burden, likely participation rate and available sample size in the cohort, required sample size for the trial, and investigator expertise. Community engagement and stakeholder (including study participants) support are essential for these efforts to succeed. Copyright © 2015. Published by Elsevier Inc.
Understanding the leaky engineering pipeline: Motivation and job adaptability of female engineers
NASA Astrophysics Data System (ADS)
Saraswathiamma, Manjusha Thekkedathu
This dissertation is a mixed-method study conducted using qualitative grounded theory and quantitative survey and correlation approaches. The study explores the motivation and adaptability of females in the engineering profession and develops a theoretical framework for both motivation and adaptability issues. In doing so, it endeavors to design solutions for the low enrollment and attrition of female engineers in the engineering profession, often referred to as the "leaky female engineering pipeline." Profiles of 123 female engineers were studied for the qualitative approach, and 98 completed survey responses were analyzed for the quantitative approach. The qualitative, grounded-theory approach applied the constant comparison method; open, axial, and selective coding was used to classify the information into categories, sub-categories, and themes for both motivation and adaptability. The emergent themes for decisions motivating female enrollment include cognitive, emotional, and environmental factors. The themes identified for adaptability include seven job adaptability factors: job satisfaction, risk-taking attitude, career/skill development, family, gender stereotyping, interpersonal skills, and personal benefit, as well as a self-perceived job adaptability factor. Illeris' Three-dimensional Learning Theory was modified as a model for decisions motivating female enrollment, and the study suggests a firsthand conceptual parallelism with McClusky's Theory of Margin for the adaptability of female engineers in the profession. The study also designed a survey instrument to measure the job adaptability of female engineers and identifies two factors significantly related to job adaptability, interpersonal skills (p < 0.01) and family (p < 0.05), with gender stereotyping and personal benefit also significantly related (p < 0.1).
Survey of adaptive image coding techniques
NASA Technical Reports Server (NTRS)
Habibi, A.
1977-01-01
The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.
Samus, Quincy M; Amjad, Halima; Johnston, Deirdre; Black, Betty S; Bartels, Stephen J; Lyketsos, Constantine G
2015-07-01
To provide a critical review of a multipronged recruitment approach used to identify, recruit, and enroll a diverse community-based sample of persons with memory disorders into an 18-month randomized, controlled dementia care coordination trial. Descriptive analysis of a recruitment approach comprising five strategies: the community liaison ("gatekeepers") method, letters sent from trusted community organizations, display and distribution of study materials in the community, research registries, and general community outreach and engagement activities. Participants were 55 community organizations and 63 staff of community organizations in Baltimore, Maryland. Participant referral sources, eligibility, enrollment status, demographics, and loss to follow-up were tracked in a relational access database. In total, 1,275 referrals were received and 303 socioeconomically, cognitively, and racially diverse community-dwelling persons with cognitive disorders were enrolled. Most referrals came from letters sent from community organizations directly to clients on the study's behalf (39%) and referrals from community liaison organizations (29%). African American/black enrollees were most likely to come from community liaison organizations. A multipronged, adaptive approach led to the successful recruitment of diverse community-residing elders with memory impairment for an intervention trial. Key factors for success included using a range of evidence-supported outreach strategies, forming key strategic community partnerships, seeking regular stakeholder input through all research phases, and obtaining "buy-in" from community stakeholders by aligning study objectives with perceived unmet community needs. Copyright © 2015 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
Liu, Jj; Davidson, E; Bhopal, Rs; White, M; Johnson, Mrd; Netto, G; Deverill, M; Sheikh, A
2012-01-01
There is now a considerable body of evidence revealing that a number of ethnic minority groups in the UK and other economically developed countries experience disproportionate levels of morbidity and mortality compared with the majority white European-origin population. Across these countries, health-promoting approaches are increasingly viewed as the long-term strategies most likely to prove clinically effective and cost-effective for preventing disease and improving health outcomes in those with established disease. To identify, appraise and interpret research on the approaches employed to maximise the cross-cultural appropriateness and effectiveness of health promotion interventions for smoking cessation, increasing physical activity and improving healthy eating for African-, Chinese- and South Asian-origin populations. Two national conferences; seven databases of UK guidelines and international systematic reviews of health promotion interventions aimed at the general population, including the Clinical Evidence, National Institute for Health and Clinical Excellence and Scottish Intercollegiate Guidelines Network databases (1950-2009); 11 databases of research on adapted health promotion interventions for ethnic minority populations, including BIOSIS, EMBASE and MEDLINE (1950-2009); and in-depth qualitative interviews with a purposive sample of researchers and health promoters. Theoretically based, mixed-methods, phased programme of research that involved user engagement, systematic reviews and qualitative interviews, which were integrated through a realist synthesis. Following a launch conference, two reviewers independently identified and extracted data from guidelines and systematic reviews on the effectiveness of interventions for the general population and any guidance offered in relation to how to interpret this evidence for ethnic minority populations. Data were thematically analysed. Reviewers then independently identified and critically appraised studies of adapted interventions and summarised data to assess feasibility, acceptability, equity, clinical effectiveness and cost-effectiveness. Interviews were transcribed, coded and thematically analysed. The quantitative and qualitative data were then synthesised using a realist framework to understand better how adapted interventions work and to assess implementation considerations and prioritise future research. Our preliminary findings were refined through discussion and debate at an end-of-study national user engagement conference. Initial user engagement emphasised the importance of extending this work beyond individual-centred behavioural interventions to also include examination of community- and ecological-level interventions; however, individual-centred behavioural approaches dominated the 15 relevant guidelines and 111 systematic reviews we identified. The most consistent evidence of effectiveness was for pharmacological interventions for smoking cessation. This body of work, however, provided scant evidence on the effectiveness of these interventions for ethnic minority groups. We identified 173 reports of adapted health promotion interventions, the majority of which focused on US-based African Americans. This body of evidence was used to develop a 46-item Typology of Adaptation and a Programme Theory of Adapted Health Promotion Interventions. 
Only nine empirical studies directly compared the effectiveness of culturally adapted interventions with standard health promotion interventions, these failing to yield any consistent evidence; no studies reported on cost-effectiveness. The 26 qualitative interviews highlighted the need to extend thinking on ethnicity from conventional dimensions to more contextual considerations. The realist synthesis enabled the production of a decision-making tool (RESET) to support future research. The lack of robust evidence of effectiveness for physical activity and healthy-eating interventions in the general population identified at the outset limited the comparative synthesis work we could undertake in the latter phases. Furthermore, the majority of studies undertaking an adapted intervention were conducted within African American populations; this raises important questions about the generalisability of findings to, for example, a UK context and other ethnic minority groups. Lastly, given our focus on three health areas and three populations, we have inevitably excluded many studies of adapted interventions for other health topics and other ethnic minority populations. There is currently a lack of evidence on how best to deliver smoking cessation, physical activity and healthy eating-related health promotion interventions to ethnic minority populations. Although culturally adapting interventions can increase salience, acceptability and uptake, there is as yet insufficient evidence on the clinical effectiveness or cost-effectiveness of these adapted approaches. More head-to-head comparisons of adapted compared with standard interventions are warranted. The Typology of Adaptation, Programme Theory of Adapted Health Promotion Interventions and RESET tool should help researchers to develop more considered approaches to adapting interventions than has hitherto been the case. The National Institute for Health Research Health Technology Assessment programme.
Gao, Yi Qin
2008-04-07
Here, we introduce a simple self-adaptive computational method to enhance the sampling in energy, configuration, and trajectory spaces. The method makes use of two strategies. It first uses a non-Boltzmann distribution method to enhance the sampling in the phase space, in particular, in the configuration space. The application of this method leads to a broad energy distribution in a large energy range and a quickly converged sampling of molecular configurations. In the second stage of simulations, the configuration space of the system is divided into a number of small regions according to preselected collective coordinates. An enhanced sampling of reactive transition paths is then performed in a self-adaptive fashion to accelerate kinetics calculations.
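A generic adaptive-bias (non-Boltzmann) sampler on a one-dimensional double well conveys the flavor of the first strategy; the Gaussian-bias bookkeeping below is a simplified stand-in for the method's actual distribution updates, and all parameters are invented.

    import numpy as np

    U = lambda x: (x ** 2 - 1.0) ** 2               # double well with basins at +/-1
    rng = np.random.default_rng(0)
    centers = np.linspace(-2.0, 2.0, 81)
    bias_h = np.zeros(centers.size)                 # accumulated Gaussian bias heights

    def bias(x):
        return np.sum(bias_h * np.exp(-(x - centers) ** 2 / (2 * 0.1 ** 2)))

    x, beta, traj = -1.0, 5.0, []
    for step in range(100000):
        xp = x + 0.1 * rng.standard_normal()
        dE = (U(xp) + bias(xp)) - (U(x) + bias(x))  # Metropolis on the biased energy
        if np.log(rng.random()) < -beta * dE:
            x = xp
        if step % 10 == 0:                          # grow the bias where we have been
            bias_h[np.argmin(np.abs(centers - x))] += 0.01
        traj.append(x)
    # Unbiased averages follow by reweighting samples with exp(+beta * bias(x)).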
NASA Astrophysics Data System (ADS)
Martinez, Dominique; Clément, Maxime; Messaoudi, Belkacem; Gervasoni, Damien; Litaudon, Philippe; Buonviso, Nathalie
2018-04-01
Objective. Modern neuroscience research requires electrophysiological recording of local field potentials (LFPs) in moving animals. Wireless transmission has the advantage of removing the wires between the animal and the recording equipment but is hampered by the large volume of data to be sent at a relatively high rate. Approach. To reduce transmission bandwidth, we propose an encoder/decoder scheme based on adaptive non-uniform quantization. Our algorithm uses the current transmitted codeword to adapt the quantization intervals to changing statistics in LFP signals. It is thus backward adaptive and does not require the sending of side information. The computational complexity is low and similar at the encoder and decoder sides. These features allow for real-time signal recovery and facilitate hardware implementation with low-cost commercial microcontrollers. Main results. As proof-of-concept, we developed an open-source neural recording device called NeRD. The NeRD prototype digitally transmits eight channels encoded at 10 kHz with 2 bits per sample. It occupies a volume of 2 × 2 × 2 cm³ and weighs 8 g with a small battery allowing for 2 h 40 min of autonomy. The power dissipation is 59.4 mW for a communication range of 8 m and transmission losses below 0.1%. The small weight and low power consumption offer the possibility of mounting the entire device on the head of a rodent without resorting to a separate head-stage and battery backpack. The NeRD prototype is validated in recording LFPs in freely moving rats at 2 bits per sample while maintaining an acceptable signal-to-noise ratio (>30 dB) over a range of noisy channels. Significance. Adaptive quantization in neural implants allows for lower transmission bandwidths while retaining high signal fidelity and preserving fundamental frequencies in LFPs.
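A minimal backward-adaptive 2-bit quantizer in the spirit described, where encoder and decoder derive the step size from transmitted codewords alone, might look as follows; the Jayant-style multipliers and the first-order predictor are assumptions, not the NeRD algorithm.

    import numpy as np

    # Backward-adaptive 2-bit quantizer: both ends rescale the step size from the
    # previous codeword alone, so no side information needs to be transmitted.
    MULT = {0: 1.6, 1: 0.9, 2: 0.9, 3: 1.6}          # hypothetical step multipliers

    def encode(x, step0=0.1):
        step, prev, codes = step0, 0.0, []
        for s in x:
            d = s - prev                             # residual against prediction
            code = int(np.clip(np.floor(d / step) + 2, 0, 3))
            codes.append(code)
            prev += (code - 2 + 0.5) * step          # decoder-matched reconstruction
            step = float(np.clip(step * MULT[code], 1e-4, 1.0))
        return codes

    def decode(codes, step0=0.1):
        step, prev, out = step0, 0.0, []
        for code in codes:
            prev += (code - 2 + 0.5) * step
            out.append(prev)
            step = float(np.clip(step * MULT[code], 1e-4, 1.0))
        return np.array(out)

    t = np.linspace(0.0, 1.0, 10000)
    lfp = np.sin(2 * np.pi * 8 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
    rec = decode(encode(lfp))                        # tracks lfp at 2 bits per sample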
Chai, Xin; Wang, Qisong; Zhao, Yongping; Li, Yongqiang; Liu, Dan; Liu, Xin; Bai, Ou
2017-01-01
Electroencephalography (EEG)-based emotion recognition is an important element in psychiatric health diagnosis for patients. However, the underlying EEG sensor signals are always non-stationary if they are sampled from different experimental sessions or subjects. This results in the deterioration of the classification performance. Domain adaptation methods offer an effective way to reduce the discrepancy of marginal distribution. However, for EEG sensor signals, both marginal and conditional distributions may be mismatched. In addition, the existing domain adaptation strategies always require a high level of additional computation. To address this problem, a novel strategy named adaptive subspace feature matching (ASFM) is proposed in this paper in order to integrate both the marginal and conditional distributions within a unified framework (without any labeled samples from target subjects). Specifically, we develop a linear transformation function which matches the marginal distributions of the source and target subspaces without a regularization term. This significantly decreases the time complexity of our domain adaptation procedure. As a result, both marginal and conditional distribution discrepancies between the source domain and unlabeled target domain can be reduced, and logistic regression (LR) can be applied to the new source domain in order to train a classifier for use in the target domain, since the aligned source domain follows a distribution which is similar to that of the target domain. We compare our ASFM method with six typical approaches using a public EEG dataset with three affective states: positive, neutral, and negative. Both offline and online evaluations were performed. The subject-to-subject offline experimental results demonstrate that our approach achieves a mean accuracy and standard deviation of 80.46% and 6.84%, respectively, as compared with a state-of-the-art method, the subspace alignment auto-encoder (SAAE), which achieves values of 77.88% and 7.33% on average, respectively. For the online analysis, the average classification accuracy and standard deviation of ASFM in the subject-to-subject evaluation for all 15 subjects in the dataset were 75.11% and 7.65%, respectively, a significant performance improvement over the best baseline, LR, which achieves 56.38% and 7.48%, respectively. The experimental results confirm the effectiveness of the proposed method relative to state-of-the-art methods. Moreover, the computational efficiency of the proposed ASFM method is much better than that of standard domain adaptation; if the numbers of training and test samples are controlled within a certain range, it is suitable for real-time classification. It can be concluded that ASFM is a useful and effective tool for decreasing domain discrepancy and reducing performance degradation across subjects and sessions in the field of EEG-based emotion recognition. PMID:28467371
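The marginal-alignment step can be sketched with a plain subspace-alignment transform followed by logistic regression; note this omits ASFM's conditional-distribution matching, and the data shapes and subspace dimension below are made up.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    def align_and_train(Xs, ys, Xt, d=30):
        """Align source to target subspace (marginal step only), then fit LR."""
        Ps = PCA(n_components=d).fit(Xs).components_.T   # source basis, shape (D, d)
        Pt = PCA(n_components=d).fit(Xt).components_.T   # target basis, shape (D, d)
        M = Ps.T @ Pt                                    # subspace alignment transform
        Zs = Xs @ Ps @ M                                 # source mapped toward target
        Zt = Xt @ Pt
        clf = LogisticRegression(max_iter=1000).fit(Zs, ys)
        return clf, Zt

    rng = np.random.default_rng(0)
    Xs = rng.standard_normal((200, 310))                 # labeled source session features
    ys = rng.integers(0, 3, 200)                         # three affective states
    Xt = rng.standard_normal((180, 310)) + 0.5           # shifted, unlabeled target session
    clf, Zt = align_and_train(Xs, ys, Xt)
    pred = clf.predict(Zt)                               # predictions for target subject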
Public health adaptation to climate change in Canadian jurisdictions.
Austin, Stephanie E; Ford, James D; Berrang-Ford, Lea; Araos, Malcolm; Parker, Stephen; Fleury, Manon D
2015-01-12
Climate change poses numerous risks to the health of Canadians. Extreme weather events, poor air quality, and food insecurity in northern regions are likely to increase along with the increasing incidence and range of infectious diseases. In this study we identify and characterize Canadian federal, provincial, territorial and municipal adaptation to these health risks based on publicly available information. Federal health adaptation initiatives emphasize capacity building and gathering information to address general health, infectious disease and heat-related risks. Provincial and territorial adaptation is varied. Quebec is a leader in climate change adaptation, having a notably higher number of adaptation initiatives reported, addressing almost all risks posed by climate change in the province, and having implemented various adaptation types. Meanwhile, all other Canadian provinces and territories are in the early stages of health adaptation. Based on publicly available information, reported adaptation also varies greatly by municipality. The six sampled Canadian regional health authorities (or equivalent) are not reporting any adaptation initiatives. We also find little relationship between the number of initiatives reported in the six sampled municipalities and their provinces, suggesting that municipalities are adapting (or not adapting) autonomously.
Sequential causal inference: Application to randomized trials of adaptive treatment strategies
Dawson, Ree; Lavori, Philip W.
2009-01-01
Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714
A global sampling approach to designing and reengineering RNA secondary structures.
Levin, Alex; Lis, Mieszko; Ponty, Yann; O'Donnell, Charles W; Devadas, Srinivas; Berger, Bonnie; Waldispühl, Jérôme
2012-11-01
The development of algorithms for designing artificial RNA sequences that fold into specific secondary structures has many potential biomedical and synthetic biology applications. To date, this problem remains computationally difficult, and current strategies to address it resort to heuristics and stochastic search techniques. The most popular methods consist of two steps: First a random seed sequence is generated; next, this seed is progressively modified (i.e. mutated) to adopt the desired folding properties. Although computationally inexpensive, this approach raises several questions such as (i) the influence of the seed; and (ii) the efficiency of single-path directed searches that may be affected by energy barriers in the mutational landscape. In this article, we present RNA-ensign, a novel paradigm for RNA design. Instead of taking a progressive adaptive walk driven by local search criteria, we use an efficient global sampling algorithm to examine large regions of the mutational landscape under structural and thermodynamical constraints until a solution is found. When considering the influence of the seeds and the target secondary structures, our results show that, compared to single-path directed searches, our approach is more robust, succeeds more often and generates more thermodynamically stable sequences. An ensemble approach to RNA design is thus well worth pursuing as a complement to existing approaches. RNA-ensign is available at http://csb.cs.mcgill.ca/RNAensign.
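A toy contrast between a single-seed adaptive walk and global sampling of the mutational landscape is sketched below; the base-pair counting score is a hypothetical stand-in for a real thermodynamic model, and the sampling scheme is far simpler than RNA-ensign's weighted ensemble approach.

    import numpy as np

    PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

    def score(seq, structure_pairs):
        """Toy objective: number of target pairs that are pairable."""
        return sum((seq[i], seq[j]) in PAIRS for i, j in structure_pairs)

    def global_design(length, structure_pairs, n_samples=2000, seed=0):
        rng = np.random.default_rng(seed)
        seed_seq = rng.choice(list("ACGU"), size=length)
        best, best_s = seed_seq, score(seed_seq, structure_pairs)
        for k in range(1, length + 1):           # widen the mutation radius globally
            for _ in range(n_samples):
                cand = seed_seq.copy()
                pos = rng.choice(length, size=k, replace=False)
                cand[pos] = rng.choice(list("ACGU"), size=k)
                s = score(cand, structure_pairs)
                if s > best_s:
                    best, best_s = cand, s
            if best_s == len(structure_pairs):   # all target pairs satisfied
                break
        return "".join(best)

    hairpin = [(i, 19 - i) for i in range(8)]    # toy 20-nt hairpin target
    print(global_design(20, hairpin))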
Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items
ERIC Educational Resources Information Center
Penfield, Randall D.
2006-01-01
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
A Hybrid Approach for Supporting Adaptivity in E-Learning Environments
ERIC Educational Resources Information Center
Al-Omari, Mohammad; Carter, Jenny; Chiclana, Francisco
2016-01-01
Purpose: The purpose of this paper is to identify a framework to support adaptivity in e-learning environments. The framework reflects a novel hybrid approach incorporating the concept of the event-condition-action (ECA) model and intelligent agents. Moreover, a system prototype is developed reflecting the hybrid approach to supporting adaptivity…
Adapting to the Digital Age: A Narrative Approach
ERIC Educational Resources Information Center
Cousins, Sarah; Bissar, Dounia
2012-01-01
The article adopts a narrative inquiry approach to foreground informal learning and exposes a collection of stories from tutors about how they adapted comfortably to the digital age. We were concerned that despite substantial evidence that bringing about changes in pedagogic practices can be difficult, there is a gap in convincing approaches to…
Scribed transparency microplates mounted on a modified standard microplate.
Cheong, Brandon Huey-Ping; Chua, Wei Seong; Liew, Oi Wah; Ng, Tuck Wah
2014-08-01
The immense cost effectiveness of using transparencies as analyte handling implements in microplate instrumentation offers the possibility of application even in resource-limited laboratories. In this work, a standard microplate was adapted to serve as the permanent base for disposable scribed transparencies. The approach is shown to mitigate evaporation, which can affect assay accuracy when analytes need to be incubated for some time. It also offers assurance against fluorescence measurement errors due to cross-talk between samples in adjacent wells. Copyright © 2014 Elsevier Inc. All rights reserved.
Ion source design for industrial applications
NASA Technical Reports Server (NTRS)
Kaufman, H. R.; Robinson, R. S.
1981-01-01
The more frequently used design techniques for the components of broad-beam electron bombardment ion sources are discussed. The approach used emphasizes refractory metal cathodes and permanent-magnet multipole discharge chambers. Design procedures and sample calculations are given for the discharge chamber, ion optics, the cathodes, and the magnetic circuit. Hardware designs are included for the isolator, cathode supports, anode supports, pole-piece assembly, and ion-optics supports. A comparison is made between two-grid and three-grid optics. The designs presented are representative of current technology and are adaptable to a wide range of configurations.
Direct-push geochemical profiling for assessment of inorganic chemical heterogeneity in aquifers
Schulmeister, M.K.; Healey, J.M.; Butler, J.J.; McCall, G.W.
2004-01-01
Discrete-depth sampling of inorganic groundwater chemistry is essential for a variety of site characterization activities. Although the mobility and rapid sampling capabilities of direct-push techniques have led to their widespread use for evaluating the distribution of organic contaminants, complementary methods for the characterization of spatial variations in geochemical conditions have not been developed. In this study, a direct-push-based approach for high-resolution inorganic chemical profiling was developed at a site where sharp chemical contrasts and iron-reducing conditions had previously been observed. Existing multilevel samplers (MLSs) that span a fining-upward alluvial sequence were used for comparison with the direct-push profiling. Chemical profiles obtained with a conventional direct-push exposed-screen sampler differed from those obtained with an adjacent MLS because of sampler reactivity and mixing with water from previous sampling levels. The sampler was modified by replacing steel sampling components with stainless-steel and heat-treated parts, and adding an adapter that prevents mixing. Profiles obtained with the modified approach were in excellent agreement with those obtained from an adjacent MLS for all constituents and parameters monitored (Cl, NO3, Fe, Mn, DO, ORP, specific conductance and pH). Interpretations of site redox conditions based on field-measured parameters were supported by laboratory analysis of dissolved Fe. The discrete-depth capability of this approach allows inorganic chemical variations to be described at a level of detail that has rarely been possible. When combined with the mobility afforded by direct-push rigs and on-site methods of chemical analysis, the new approach is well suited for a variety of interactive site-characterization endeavors. © 2003 Elsevier B.V. All rights reserved.
Herrero-Hahn, Raquel; Rojas, Juan Guillermo; Ospina-Díaz, Juan Manuel; Montoya-Juárez, Rafael; Restrepo-Medrano, Juan Carlos; Hueso-Montoro, César
2017-03-01
The level of cultural self-efficacy indicates the degree of confidence nursing professionals possess in their ability to provide culturally competent care. A scale validation study was conducted in which the Cultural Self-Efficacy Scale was culturally adapted and validated for nursing professionals in Colombia, using a sample of 190 nurses recruited between September 2013 and April 2014 via systematic random sampling from a finite population. Cronbach's alpha for the revised scale was .978. Factor analysis revealed the existence of six factors grouped in three dimensions that explained 68% of the variance. The results demonstrate that the version of the Cultural Self-Efficacy Scale adapted to the Colombian context is a valid and reliable instrument for determining the level of cultural self-efficacy of nursing professionals.
Determination of pH Including Hemoglobin Correction
Maynard, John D.; Hendee, Shonn P.; Rohrscheib, Mark R.; Nunez, David; Alam, M. Kathleen; Franke, James E.; Kemeny, Gabor J.
2005-09-13
Methods and apparatuses for determining the pH of a sample. A method can comprise determining an infrared spectrum of the sample, and determining the hemoglobin concentration of the sample. The hemoglobin concentration and the infrared spectrum can then be used to determine the pH of the sample. In some embodiments, the hemoglobin concentration can be used to select a model relating infrared spectra to pH that is applicable at the determined hemoglobin concentration. In other embodiments, a model relating hemoglobin concentration and infrared spectra to pH can be used. An apparatus according to the present invention can comprise an illumination system, adapted to supply radiation to a sample; a collection system, adapted to collect radiation expressed from the sample responsive to the incident radiation; and an analysis system, adapted to relate information about the incident radiation, the expressed radiation, and the hemoglobin concentration of the sample to pH.
NASA Astrophysics Data System (ADS)
Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng
In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented to enhance the efficiency of computing the one-step probability transition matrix of the Generalized Cell Mapping (GCM) method. Integrations over one mapping step are replaced by sampling-adaptive interpolations of third order. An explicit formula for the interpolation error is derived, and a sampling-adaptive control uses it to switch integrations back on where needed to preserve the accuracy of computations with GCMSAI. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated with observations of boundary metamorphoses, including full to partial and partial to partial, as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires only one-thirtieth to one-fiftieth of the computational time of the previous GCM.
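For orientation, a plain GCM one-step transition matrix for a forced damped pendulum can be estimated by brute-force integration of sample points in each cell, as sketched below; GCMSAI's contribution is to replace most of these integrations with third-order sampling-adaptive interpolation, which is not reproduced here, and the pendulum parameters and partition are arbitrary.

    import numpy as np

    def flow_one_period(state, omega=1.0, zeta=0.1, F=1.2, steps=200):
        """Integrate the forced damped pendulum over one forcing period (RK4)."""
        dt = 2 * np.pi / omega / steps
        x, v = state
        f = lambda x_, v_, t_: (v_, -2 * zeta * v_ - np.sin(x_) + F * np.cos(omega * t_))
        for k in range(steps):
            t = k * dt
            k1 = f(x, v, t)
            k2 = f(x + dt / 2 * k1[0], v + dt / 2 * k1[1], t + dt / 2)
            k3 = f(x + dt / 2 * k2[0], v + dt / 2 * k2[1], t + dt / 2)
            k4 = f(x + dt * k3[0], v + dt * k3[1], t + dt)
            x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        return x, v

    nx = nv = 20                                       # 20 x 20 cell partition
    xs = np.linspace(-np.pi, np.pi, nx + 1)
    vs = np.linspace(-3.0, 3.0, nv + 1)
    P = np.zeros((nx * nv, nx * nv))                   # one-step transition matrix
    rng = np.random.default_rng(0)
    for i in range(nx):
        for j in range(nv):
            for _ in range(5):                         # sample points per cell
                x0 = rng.uniform(xs[i], xs[i + 1])
                v0 = rng.uniform(vs[j], vs[j + 1])
                x1, v1 = flow_one_period((x0, v0))
                x1 = (x1 + np.pi) % (2 * np.pi) - np.pi        # wrap the angle
                ii = int(np.clip(np.searchsorted(xs, x1) - 1, 0, nx - 1))
                jj = int(np.clip(np.searchsorted(vs, v1) - 1, 0, nv - 1))
                P[i * nv + j, ii * nv + jj] += 0.2     # 1/5 probability mass per point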
Zhou, Ping; Bai, Rongji
2014-01-01
Based on a new stability result for equilibrium points in nonlinear fractional-order systems with fractional order 1 < q < 2, an adaptive synchronization approach is established. The adaptive synchronization of the fractional-order Lorenz chaotic system with fractional order 1 < q < 2 is considered. Numerical simulations show the validity and feasibility of the proposed scheme. PMID:25247207
NASA Astrophysics Data System (ADS)
Kust, German; Andreeva, Olga
2015-04-01
A number of new concepts and paradigms have appeared during the last decades, such as sustainable land management (SLM), climate change (CC) adaptation, environmental services, and ecosystem health. These initiatives still lack a common scientific platform, although some agreement on terminology has been reached, schemes of links and feedback loops have been created, and some models have been developed. Nevertheless, despite these scientific achievements, land-related issues are still not in the focus of CC adaptation and mitigation, which has not grown much beyond the "greenhouse gases" (GHG) concept; this leaves land degradation as the "forgotten side of climate change". A possible way to integrate the climate and desertification/land degradation concepts is to treat the "GHG" approach as providing the global solution and the "land" approach as providing local solutions that also cover other locally manifesting issues of global importance (biodiversity conservation, food security, disasters and risks, etc.), with the latter serving as a central concept among those. SLM is a land-based approach that includes both the ecosystem-based approach (EbA) and the community-based approach (CbA). SLM can serve as an integral CC adaptation strategy, being based on the statement that the more healthy and resilient a system is, the less vulnerable and more adaptive it will be to any external changes and forces, including climate. The biggest scientific issue is how to evaluate SLM and the results of SLM investments. We suggest an approach based on understanding the balance or equilibrium of land and nature components as the major sign of a sustainable system. From this point of view it is easier to understand the state of ecosystem stress, the extent of ecosystem "health", the range of adaptive capacity, the drivers of degradation and the nature of SLM, as well as extended land use and the concept of environmental land management as an improved SLM approach. A number of case studies justify the schemes developed to explain this approach.
Development of a Voice Activity Controlled Noise Canceller
Abid Noor, Ali O.; Samad, Salina Abdul; Hussain, Aini
2012-01-01
In this paper, a variable threshold voice activity detector (VAD) is developed to control the operation of a two-sensor adaptive noise canceller (ANC). The VAD prevents the reference input of the ANC from containing appreciable speech energy during adaptation periods. The novelty of this approach resides in using the residual output from the noise canceller to control the decisions made by the VAD. Thresholds of full-band energy and zero-crossing features are adjusted according to the residual output of the adaptive filter. Performance of the proposed approach is evaluated in terms of signal-to-noise ratio improvement as well as mean square error (MSE) convergence of the ANC. The new approach showed improved noise cancellation performance when tested under several types of environmental noise. Furthermore, the computational load of the adaptive process is reduced, since the output of the adaptive filter is calculated only during non-speech periods. PMID:22778667
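A minimal sketch of the gating idea, assuming a standard LMS filter and simple full-band energy and zero-crossing features; frame layout, thresholds, and adaptation constants are illustrative, not the authors' design.

    import numpy as np

    def zero_crossings(frame):
        return np.sum(np.abs(np.diff(np.sign(frame)))) / 2

    def vad_anc(primary, reference, frame_len=256, mu=0.01, taps=32):
        """Two-sensor noise canceller whose LMS adaptation is enabled only in
        frames the VAD declares speech-free; thresholds track the residual."""
        w = np.zeros(taps)
        out = np.zeros_like(primary, dtype=float)
        e_thr, z_thr = np.inf, np.inf          # set from the first residual frame
        for s in range(0, len(primary) - frame_len, frame_len):
            frame_p = primary[s:s + frame_len]
            frame_r = reference[s:s + frame_len]
            speech = (np.mean(frame_p**2) > e_thr) and (zero_crossings(frame_p) < z_thr)
            for n in range(taps, frame_len):
                x = frame_r[n - taps:n][::-1]
                e = frame_p[n] - w @ x
                out[s + n] = e
                if not speech:                 # adapt only during non-speech
                    w += mu * e * x
            # adjust VAD thresholds from the residual output of the filter
            res = out[s + taps:s + frame_len]
            e_thr = 4.0 * np.mean(res**2)
            z_thr = 1.5 * zero_crossings(res)
        return out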
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space at each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
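A minimal sketch of weighted binary matrix sampling as described above: each variable carries an inclusion probability that is updated from the best-performing sub-models, and the shrunken space is accepted only if it outperforms the previous one. The cross-validated PLS scoring (via scikit-learn) and all parameter values are assumptions for illustration.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def rmsecv(X, y, mask):
        """Cross-validated RMSE of a PLS model on the selected variables."""
        if mask.sum() < 2:
            return np.inf
        model = PLSRegression(n_components=min(5, int(mask.sum())))
        scores = cross_val_score(model, X[:, mask], y,
                                 scoring='neg_root_mean_squared_error', cv=5)
        return -scores.mean()

    def vissa(X, y, n_submodels=200, top_frac=0.1, n_iter=30, seed=0):
        rng = np.random.default_rng(seed)
        p = X.shape[1]
        weights = np.full(p, 0.5)                 # inclusion probabilities
        best = rmsecv(X, y, np.ones(p, dtype=bool))
        for _ in range(n_iter):
            # weighted binary matrix sampling: draw sub-models from the weights
            M = rng.random((n_submodels, p)) < weights
            errs = np.array([rmsecv(X, y, m) for m in M])
            elite = M[np.argsort(errs)[:int(top_frac * n_submodels)]]
            if errs.min() >= best:                # rule 2: must outperform
                break
            best = errs.min()
            weights = elite.mean(axis=0)          # frequency among elite models
        return weights > 0.5, best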
Oliveira, Thaís D; Costa, Danielle de S; Albuquerque, Maicon R; Malloy-Diniz, Leandro F; Miranda, Débora M; de Paula, Jonas J
2018-06-11
The Parenting Styles and Dimensions Questionnaire (PSDQ) is used worldwide to assess three styles (authoritative, authoritarian, and permissive) and seven dimensions of parenting. In this study, we adapted the short version of the PSDQ for use in Brazil and investigated its validity and reliability. Participants were 451 mothers of children aged 3 to 18 years, though sample sizes varied across analyses. The translation and adaptation of the PSDQ followed a rigorous methodological approach. Then, we investigated the content, criterion, and construct validity of the adapted instrument. The scale content validity index (S-CVI) was considered adequate (0.97). There was evidence of internal validity, with the PSDQ dimensions showing strong correlations with their higher-order parenting styles. Confirmatory factor analysis endorsed the three-factor, second-order solution (i.e., three styles consisting of seven dimensions). The PSDQ showed convergent validity with the validated Brazilian version of the Parenting Styles Inventory (Inventário de Estilos Parentais - IEP), as well as external validity, as it was associated with several instruments measuring sociodemographic and behavioral/emotional-problem variables. The PSDQ is an effective and reliable psychometric instrument to assess childrearing strategies according to Baumrind's model of parenting styles.
Patt, Anthony G; Tadross, Mark; Nussbaumer, Patrick; Asante, Kwabena; Metzger, Marc; Rafael, Jose; Goujon, Anne; Brundrit, Geoff
2010-01-26
When will least developed countries be most vulnerable to climate change, given the influence of projected socio-economic development? The question is important, not least because current levels of international assistance to support adaptation lag more than an order of magnitude below what analysts estimate to be needed, and scaling up support could take many years. In this paper, we examine this question using an empirically derived model of human losses to climate-related extreme events, as an indicator of vulnerability and the need for adaptation assistance. We develop a set of 50-year scenarios for these losses in one country, Mozambique, using high-resolution climate projections, and then extend the results to a sample of 23 least-developed countries. Our approach takes into account both potential changes in countries' exposure to climatic extreme events, and socio-economic development trends that influence countries' own adaptive capacities. Our results suggest that the effects of socio-economic development trends may begin to offset rising climate exposure in the second quarter of the century, and that it is in the period between now and then that vulnerability will rise most quickly. This implies an urgency to the need for international assistance to finance adaptation.
Wang, Jiexin; Uchibe, Eiji; Doya, Kenji
2017-01-01
EM-based policy search methods estimate a lower bound of the expected return from the histories of episodes and iteratively update the policy parameters by maximizing this lower bound, which makes gradient calculation and learning-rate tuning unnecessary. Previous algorithms such as Policy learning by Weighting Exploration with the Returns, Fitness Expectation Maximization, and EM-based Policy Hyperparameter Exploration implemented mechanisms to discard useless low-return episodes either implicitly or by using a fixed baseline determined by the experimenter. In this paper, we propose an adaptive baseline method to discard worse samples from the reward history and examine different baselines, including the mean and multiples of standard deviations from the mean. Simulation results on benchmark tasks (pendulum swing-up, cart-pole balancing, and standing up and balancing of a two-wheeled smartphone robot) showed improved performance. We further implemented the adaptive baseline with the mean in our two-wheeled smartphone robot hardware to test its performance in the standing-up-and-balancing task and a view-based approaching task. Our results showed that with the adaptive baseline the method outperformed the previous algorithms and achieved faster and more precise behaviors at a higher success rate. PMID:28167910
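A minimal sketch of the adaptive-baseline idea, assuming a Gaussian search distribution over policy parameters and a reward-weighted (EM-style) update; the baseline form, mean plus a multiple of the standard deviation of the reward history, follows the abstract, while the task and update details are illustrative.

    import numpy as np

    def em_policy_search(rollout, dim, n_episodes=20, n_iters=100, k=0.0, seed=0):
        """EM-based policy search with an adaptive baseline: episodes whose
        return falls below mean + k*std of the reward history are discarded
        before the reward-weighted update of the policy parameters."""
        rng = np.random.default_rng(seed)
        mu, sigma = np.zeros(dim), np.ones(dim)
        history = []
        for _ in range(n_iters):
            thetas = mu + sigma * rng.standard_normal((n_episodes, dim))
            returns = np.array([rollout(th) for th in thetas])
            history.extend(returns)
            baseline = np.mean(history) + k * np.std(history)   # adaptive baseline
            keep = returns > baseline
            if not np.any(keep):
                continue
            w = returns[keep] - baseline                        # positive weights
            w = w / w.sum()
            mu = w @ thetas[keep]                               # reward-weighted mean
            sigma = np.sqrt(w @ (thetas[keep] - mu) ** 2) + 1e-8
        return mu

    # usage: maximize a toy quadratic return
    # best = em_policy_search(lambda th: -np.sum((th - 3.0)**2), dim=2)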
Landscape genetic approaches to guide native plant restoration in the Mojave Desert
Shryock, Daniel F.; Havrilla, Caroline A.; DeFalco, Lesley; Esque, Todd C.; Custer, Nathan; Wood, Troy E.
2016-01-01
Restoring dryland ecosystems is a global challenge due to synergistic drivers of disturbance coupled with unpredictable environmental conditions. Dryland plant species have evolved complex life-history strategies to cope with fluctuating resources and climatic extremes. Although rarely quantified, local adaptation is likely widespread among these species and potentially influences restoration outcomes. The common practice of reintroducing propagules to restore dryland ecosystems, often across large spatial scales, compels evaluation of adaptive divergence within these species. Such evaluations are critical to understanding the consequences of large-scale manipulation of gene flow and to predicting success of restoration efforts. However, genetic information for species of interest can be difficult and expensive to obtain through traditional common garden experiments. Recent advances in landscape genetics offer marker-based approaches for identifying environmental drivers of adaptive genetic variability in non-model species, but tools are still needed to link these approaches with practical aspects of ecological restoration. Here, we combine spatially-explicit landscape genetics models with flexible visualization tools to demonstrate how cost-effective evaluations of adaptive genetic divergence can facilitate implementation of different seed sourcing strategies in ecological restoration. We apply these methods to Amplified Fragment Length Polymorphism (AFLP) markers genotyped in two Mojave Desert shrub species of high restoration importance: the long-lived, wind-pollinated gymnosperm Ephedra nevadensis, and the short-lived, insect-pollinated angiosperm Sphaeralcea ambigua. Mean annual temperature was identified as an important driver of adaptive genetic divergence for both species. Ephedra showed stronger adaptive divergence with respect to precipitation variability, while temperature variability and precipitation averages explained a larger fraction of adaptive divergence in Sphaeralcea. We describe multivariate statistical approaches for interpolating spatial patterns of adaptive divergence while accounting for potential bias due to neutral genetic structure. Through a spatial bootstrapping procedure, we also visualize patterns in the magnitude of model uncertainty. Finally, we introduce an interactive, distance-based mapping approach that explicitly links marker-based models of adaptive divergence with local or admixture seed sourcing strategies, promoting effective native plant restoration.
Resolving occlusion and segmentation errors in multiple video object tracking
NASA Astrophysics Data System (ADS)
Cheng, Hsu-Yung; Hwang, Jenq-Neng
2009-02-01
In this work, we propose a method to integrate the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters updated by the Kalman filters. There is no need to spend time processing particles with very small weights. The adaptive appearance model for an occluded object consults the Kalman filter predictions to determine the region that should be updated, avoiding appearance updates based on inadequate information during occlusion. The experimental results show that a small number of particles is sufficient to achieve high positioning and scaling accuracy. Also, the adaptive appearance model substantially improves the positioning and scaling accuracy of the tracking results.
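A minimal sketch of Kalman-guided particle generation: particles are drawn around the Kalman prediction with a spread set by the predicted covariance, so sampling position and range adapt automatically. The constant-velocity model and the measurement-selection rule are illustrative assumptions.

    import numpy as np

    def kalman_guided_particles(x, P, F, Q, n_particles=50, seed=None):
        """Predict the state with a Kalman time update, then sample particles
        whose position and spread follow the predicted mean and covariance."""
        rng = np.random.default_rng(seed)
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # adaptive particle sampling: range follows the predicted uncertainty
        return x_pred, P_pred, rng.multivariate_normal(x_pred, P_pred, n_particles)

    def select_measurement(particles, likelihood):
        """Pick the particle with the highest appearance likelihood as the
        measurement fed back into the Kalman correction step."""
        scores = np.array([likelihood(p) for p in particles])
        return particles[np.argmax(scores)]

    # constant-velocity model in 2D: state = [px, py, vx, vy]
    F = np.block([[np.eye(2), np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
    Q = 0.05 * np.eye(4)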
The MPLEx Protocol for Multi-omic Analyses of Soil Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicora, Carrie D.; Burnum-Johnson, Kristin E.; Nakayasu, Ernesto S.
Mass spectrometry (MS)-based integrated metaproteomic, metabolomic and lipidomic (multi-omic) studies are transforming our ability to understand and characterize microbial communities in environmental and biological systems. These measurements are even enabling enhanced analyses of complex soil microbial communities, which are the most complex microbial systems known to date. Multi-omic analyses, however, do have sample preparation challenges since separate extractions are typically needed for each omic study, thereby greatly amplifying the preparation time and amount of sample required. To address this limitation, a 3-in-1 method for simultaneous metabolite, protein, and lipid extraction (MPLEx) from the exact same soil sample was created by adapting a solvent-based approach. This MPLEx protocol has proven to be simple yet robust for many sample types, even when utilized for limited quantities of complex soil samples. The MPLEx method also greatly enabled the rapid multi-omic measurements needed to gain a better understanding of the members of each microbial community, while evaluating the changes taking place upon biological and environmental perturbations.
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate that the method is robust against over-fitting. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition, we show several examples where localized feature selection produces better results than a global feature selection method.
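A minimal sketch of casting localized feature selection as a linear program: for the region around one sample, relaxed inclusion indicators in [0, 1] are chosen so that same-class neighbors are near and other-class neighbors are far along the selected features. The objective coefficients and cardinality constraint are simplifications for illustration, not the paper's exact formulation.

    import numpy as np
    from scipy.optimize import linprog

    def local_features(X, y, idx, n_keep=5, alpha=1.0):
        """Select a feature subset tailored to the region around sample idx
        by solving a small LP: minimize within-class spread minus
        alpha * between-class spread along the chosen features."""
        diffs = np.abs(X - X[idx])                 # per-feature distances to x0
        labels_eq = (y == y[idx])
        same = labels_eq.copy()
        same[idx] = False                          # exclude the anchor itself
        other = ~labels_eq
        # linear objective over the relaxed inclusion indicators
        c = diffs[same].mean(axis=0) - alpha * diffs[other].mean(axis=0)
        n_feat = X.shape[1]
        res = linprog(c,
                      A_eq=np.ones((1, n_feat)), b_eq=[n_keep],  # keep ~n_keep features
                      bounds=[(0, 1)] * n_feat, method="highs")
        return res.x > 0.5                         # round the LP relaxation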
Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Hansen, S. E.; Papadopoulos, G. A.
2017-12-01
The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution (~200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly-spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple, regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.
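A minimal sketch of sampling-density-driven adaptive parameterization: cells of a regular grid are split quadtree-style wherever ray-path hit counts are high, so densely sampled regions receive fine cells while poorly sampled regions stay coarse. Thresholds and depth limits are illustrative.

    import numpy as np

    def adapt_cells(ray_points, x0, y0, size, depth=0, max_depth=4, thresh=100):
        """Return a list of (x, y, size) cells, refined where sampling is
        dense. ray_points is an (N, 2) array of ray-path sample coordinates."""
        inside = ray_points[
            (ray_points[:, 0] >= x0) & (ray_points[:, 0] < x0 + size) &
            (ray_points[:, 1] >= y0) & (ray_points[:, 1] < y0 + size)]
        if len(inside) <= thresh or depth == max_depth:
            return [(x0, y0, size)]                 # keep this cell coarse
        half = size / 2.0
        cells = []
        for dx in (0.0, half):                      # split into four children
            for dy in (0.0, half):
                cells += adapt_cells(inside, x0 + dx, y0 + dy, half,
                                     depth + 1, max_depth, thresh)
        return cells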
The Genetic Architecture of Adaptations to High Altitude in Ethiopia
Alkorta-Aranburu, Gorka; Beall, Cynthia M.; Witonsky, David B.; Gebremedhin, Amha; Pritchard, Jonathan K.; Di Rienzo, Anna
2012-01-01
Although hypoxia is a major stress on physiological processes, several human populations have survived for millennia at high altitudes, suggesting that they have adapted to hypoxic conditions. This hypothesis was recently corroborated by studies of Tibetan highlanders, which showed that polymorphisms in candidate genes show signatures of natural selection as well as well-replicated association signals for variation in hemoglobin levels. We extended genomic analysis to two Ethiopian ethnic groups: Amhara and Oromo. For each ethnic group, we sampled low and high altitude residents, thus allowing genetic and phenotypic comparisons across altitudes and across ethnic groups. Genome-wide SNP genotype data were collected in these samples by using Illumina arrays. We find that variants associated with hemoglobin variation among Tibetans or other variants at the same loci do not influence the trait in Ethiopians. However, in the Amhara, SNP rs10803083 is associated with hemoglobin levels at genome-wide levels of significance. No significant genotype association was observed for oxygen saturation levels in either ethnic group. Approaches based on allele frequency divergence did not detect outliers in candidate hypoxia genes, but the most differentiated variants between high- and lowlanders have a clear role in pathogen defense. Interestingly, a significant excess of allele frequency divergence was consistently detected for genes involved in cell cycle control and DNA damage and repair, thus pointing to new pathways for high altitude adaptations. Finally, a comparison of CpG methylation levels between high- and lowlanders found several significant signals at individual genes in the Oromo. PMID:23236293
Hilgarth, M; Fuertes-Pèrez, S; Ehrmann, M; Vogel, R F
2018-04-01
The genus Photobacterium comprises species of marine bacteria, commonly found in open-ocean and deep-sea environments. Some species (e.g. Photobacterium phosphoreum) are associated with fish spoilage. Recently, culture-independent studies have drawn attention to the presence of photobacteria on meat. This study employed a comparative approach to the isolation of Photobacterium spp. and aimed to develop an adapted isolation procedure for recovery from food samples, as demonstrated for different meats: marine broth is used for resuspending and dilution of food samples, followed by aerobic cultivation on marine broth agar supplemented with meat extract and vancomycin at 15°C for 72 h. Identification of spoilage-associated microbiota was carried out via Matrix Assisted Laser Desorption/Ionization Time of Flight Mass Spectrometry using a database supplemented with additional mass spectrometry profiles of Photobacterium spp. This study provides evidence for the common abundance of multiple Photobacterium species in relevant quantities on various modified atmosphere packaged meats. Photobacterium carnosum was predominant on beef and chicken, while Photobacterium iliopiscarium was the major species on pork and Photobacterium phosphoreum on salmon. This study demonstrates highly frequent isolation of multiple photobacteria (Photobacterium carnosum, Photobacterium phosphoreum, and Photobacterium iliopiscarium) from different modified-atmosphere packaged spoiled and unspoiled meats using an adapted isolation procedure. The abundance of photobacteria in high numbers provides evidence for the hitherto neglected importance and relevance of Photobacterium spp. to meat spoilage. © 2018 The Society for Applied Microbiology.
Klemm, Matthias; Schweitzer, Dietrich; Peters, Sven; Sauer, Lydia; Hammer, Martin; Haueisen, Jens
2015-01-01
Fluorescence lifetime imaging ophthalmoscopy (FLIO) is a new technique for measuring the in vivo autofluorescence intensity decays generated by endogenous fluorophores in the ocular fundus. Here, we present a software package called FLIM eXplorer (FLIMX) for analyzing FLIO data. Specifically, we introduce a new adaptive binning approach as an optimal tradeoff between the spatial resolution and the number of photons required per pixel. We also expand existing decay models (multi-exponential, stretched exponential, spectral global analysis, incomplete decay) to account for the layered structure of the eye and present a method to correct for the influence of the crystalline lens fluorescence on the retina fluorescence. Subsequently, the Holm-Bonferroni method is applied to FLIO measurements to allow for group comparisons between patients and controls on the basis of fluorescence lifetime parameters. The performance of the new approaches was evaluated in five experiments. Specifically, we evaluated static and adaptive binning in a diabetes mellitus patient, compared the different decay models in a healthy volunteer, and performed a group comparison between diabetes patients and controls. An overview of the visualization capabilities and a comparison of static and adaptive binning are shown for a patient with a macular hole. FLIMX’s applicability to fluorescence lifetime imaging microscopy is shown in the ganglion cell layer of a porcine retina sample, obtained by a laser scanning microscope using two-photon excitation. PMID:26192624
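A minimal sketch of adaptive binning for photon-limited lifetime data: each pixel's decay histogram accumulates neighbors within a growing radius until a target photon count is reached, trading spatial resolution for photon statistics. The target count and radius limit are illustrative.

    import numpy as np

    def adaptive_bin(counts, min_photons=1000, max_radius=5):
        """counts: (H, W, T) photon histograms per pixel and time channel.
        Each output pixel accumulates the smallest square neighborhood
        that reaches min_photons total photons."""
        H, W, T = counts.shape
        totals = counts.sum(axis=2)
        out = np.empty_like(counts)
        for i in range(H):
            for j in range(W):
                for r in range(max_radius + 1):   # grow the bin radius
                    sl = (slice(max(i - r, 0), i + r + 1),
                          slice(max(j - r, 0), j + r + 1))
                    if totals[sl].sum() >= min_photons or r == max_radius:
                        out[i, j] = counts[sl].sum(axis=(0, 1))
                        break
        return out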
Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa
2013-01-01
Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9 % prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to un-sampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.
A data acquisition protocol for a reactive wireless sensor network monitoring application.
Aderohunmu, Femi A; Brunelli, Davide; Deng, Jeremiah D; Purvis, Martin K
2015-04-30
Limiting energy consumption is one of the primary aims for most real-world deployments of wireless sensor networks. Unfortunately, attempts to optimize energy efficiency are often in conflict with the demand for network reactiveness to transmit urgent messages. In this article, we propose SWIFTNET: a reactive data acquisition scheme. It is built on the synergies arising from a combination of data reduction methods and energy-efficient data compression schemes. In particular, it combines compressed sensing, data prediction and adaptive sampling strategies. We show how this approach dramatically reduces the amount of unnecessary data transmission in deployments for environmental monitoring and surveillance networks. SWIFTNET targets any monitoring application that requires high reactiveness with aggressive data collection and transmission. To test the performance of this method, we present a real-world testbed for wildfire monitoring as a use case. The results from our in-house deployment testbed of 15 nodes have proven favorable. On average, a communication reduction of over 50% compared with a default adaptive prediction method is achieved without any loss in accuracy. In addition, SWIFTNET is able to guarantee reactiveness by adjusting the sampling interval from 5 min down to 15 s in our application domain.
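A minimal sketch of the reactiveness mechanism: the sampling interval collapses to the fast rate on a prediction mismatch and relaxes toward the slow rate while the data prediction model tracks the readings. The 5 min and 15 s bounds follow the abstract; the mismatch test, back-off factor, and the sensor/model objects in the usage comment are placeholders.

    SLOW, FAST = 300.0, 15.0          # sampling intervals in seconds (5 min .. 15 s)

    def next_interval(interval, reading, predicted, tol=2.0):
        """Adaptive sampling: react quickly on prediction mismatch, back off
        gradually while the data prediction model tracks the readings."""
        if abs(reading - predicted) > tol:    # urgent: sample at the fast rate
            return FAST
        return min(SLOW, interval * 1.5)      # calm: relax toward the slow rate

    # usage: interval = next_interval(interval, sensor.read(), model.predict())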
Integrated Resilient Aircraft Control Project Full Scale Flight Validation
NASA Technical Reports Server (NTRS)
Bosworth, John T.
2009-01-01
Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides the ability to validate different adaptive flight control approaches and adds credence to NASA's research efforts. A sustained research effort is required to remove the roadblocks and provide adaptive control as a viable design solution for increased aircraft resilience.
Automated visual inspection for polished stone manufacture
NASA Astrophysics Data System (ADS)
Smith, Melvyn L.; Smith, Lyndon N.
2003-05-01
Increased globalisation of the ornamental stone market has led to increased competition and more rigorous product quality requirements. As such, there are strong motivators to introduce new, more effective inspection technologies that will help enable stone processors to reduce costs, improve quality and improve productivity. Natural stone surfaces may contain a mixture of complex two-dimensional (2D) patterns and three-dimensional (3D) features. The challenge in terms of automated inspection is to develop systems able to reliably identify 3D topographic defects, either naturally occurring or resulting from polishing, in the presence of concomitant complex 2D stochastic colour patterns. The resulting real-time analysis of the defects may be used in adaptive process control, in order to avoid the wasteful production of defective product. An innovative approach, using structured light and based upon an adaptation of the photometric stereo method, has been pioneered and developed at UWE to isolate and characterize mixed 2D and 3D surface features. The method is able to undertake tasks considered beyond the capabilities of existing surface inspection techniques. The approach has been successfully applied to real stone samples, and a selection of experimental results is presented.
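A minimal sketch of classical photometric stereo, the method adapted here: with three or more images under known light directions, per-pixel albedo and surface normals are recovered by least squares from the Lambertian model, so 3D topographic defects appear in the normal field while 2D colour patterns stay in the albedo. Light directions and array shapes are illustrative.

    import numpy as np

    def photometric_stereo(images, lights):
        """images: (K, H, W) grayscale images under K known light directions.
        lights: (K, 3) unit light vectors. Returns albedo (H, W) and unit
        normals (H, W, 3) from the Lambertian model I = albedo * (N . L)."""
        K, H, W = images.shape
        I = images.reshape(K, -1)                          # (K, H*W)
        G, *_ = np.linalg.lstsq(lights, I, rcond=None)     # G = albedo * N
        albedo = np.linalg.norm(G, axis=0)
        N = G / np.maximum(albedo, 1e-12)                  # safe normalization
        return albedo.reshape(H, W), N.T.reshape(H, W, 3)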
Tracking of multiple targets using online learning for reference model adaptation.
Pernkopf, Franz
2008-12-01
Recently, much work has been done on multiple object tracking on the one hand and on reference model adaptation for single-object trackers on the other. In this paper, we do both: tracking of multiple objects (faces of people) in a meeting scenario, and online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Unlike our approach, many methods assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our particle-filter-based tracker and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking; to this end, we report the average standard deviation of the trajectories over numerous tracking runs as a function of the learning rate.
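A minimal sketch of online reference-model adaptation with a learning rate: the stored appearance template is blended toward the current observation when the match is confident, so gradual appearance changes are absorbed without drifting onto occluders. The gating test and blending rule are illustrative assumptions.

    import numpy as np

    def update_reference(template, observation, lr=0.05, gate=0.6):
        """Blend the reference model toward the new observation when the
        match is confident; skip the update otherwise to avoid drift."""
        t, o = template.ravel(), observation.ravel()
        similarity = (t @ o) / (np.linalg.norm(t) * np.linalg.norm(o) + 1e-12)
        if similarity < gate:      # poor match: likely occlusion, do not adapt
            return template
        return (1.0 - lr) * template + lr * observation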
Active machine learning for rapid landslide inventory mapping with VHR satellite images (Invited)
NASA Astrophysics Data System (ADS)
Stumpf, A.; Lachiche, N.; Malet, J.; Kerle, N.; Puissant, A.
2013-12-01
VHR satellite images have become a primary source for landslide inventory mapping after major triggering events such as earthquakes and heavy rainfalls. Visual image interpretation is still the prevailing standard method for operational purposes, but it is time-consuming and not well suited to fully exploit the increasingly better supply of remote sensing data. Recent studies have addressed the development of more automated image analysis workflows for landslide inventory mapping. In particular, object-oriented approaches that account for spatial and textural image information have been demonstrated to be more adequate than pixel-based classification, but manually elaborated rule-based classifiers are difficult to adapt under changing scene characteristics. Machine learning algorithms can learn classification rules for complex image patterns from labelled examples and can be adapted straightforwardly with available training data. In order to reduce the amount of costly training data, active learning (AL) has evolved as a key concept to guide the sampling for many applications. The underlying idea of AL is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and data structure to iteratively select the most valuable samples to be labelled by the user. With relatively few queries and labelled samples, an AL strategy yields higher accuracies than an equivalent classifier trained with many randomly selected samples. This study addressed the development of an AL method for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. Our approach [1] is based on the Random Forest algorithm and considers the classifier uncertainty as well as the variance of potential sampling regions to guide the user towards the most valuable sampling areas. The algorithm explicitly searches for compact regions and thereby avoids the spatially disperse sampling pattern inherent to most other AL methods. The accuracy, the sampling time and the computational runtime of the algorithm were evaluated on multiple satellite images capturing recent large-scale landslide events. Sampling between 1% and 4% of the study areas, accuracies between 74% and 80% were achieved, whereas standard sampling schemes yielded only accuracies between 28% and 50% at equal sampling costs. Compared to commonly used point-wise AL algorithms, the proposed approach significantly reduces the number of iterations and hence the computational runtime. Since the user can focus on relatively few compact areas (rather than on hundreds of distributed points), the overall labeling time is reduced by more than 50% compared to point-wise queries. An experimental evaluation of multiple expert mappings demonstrated strong relationships between the uncertainties of the experts and of the machine learning model. It revealed that the achieved accuracies are within the range of the inter-expert disagreement, and that it will be indispensable to consider ground-truth uncertainties to achieve further enhancements in the future. The proposed method is generally applicable to a wide range of optical satellite images and landslide types. [1] A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, "Active learning in the spatial domain for remote sensing image classification," IEEE Transactions on Geoscience and Remote Sensing, 2013, DOI 10.1109/TGRS.2013.2262052.
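A minimal sketch of uncertainty-driven active learning with a Random Forest, the base classifier named above; compactness is approximated by querying the batch of unlabeled pixels nearest the most ambiguous one, a simple stand-in for the paper's region-based criterion. All names and parameters are illustrative.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def active_learning_step(X, labeled_idx, y_labeled, coords, batch=50):
        """Train on the current labels, score uncertainty on the pool, and
        return a compact batch of indices around the most uncertain sample.
        coords: (n, 2) pixel coordinates used for spatial compactness."""
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[labeled_idx], y_labeled)
        pool = np.setdiff1d(np.arange(len(X)), labeled_idx)
        proba = np.sort(clf.predict_proba(X[pool]), axis=1)
        margin = proba[:, -1] - proba[:, -2]          # small margin = ambiguous
        seed = pool[np.argmin(margin)]
        # compact region: the batch nearest to the seed in image coordinates
        d = np.linalg.norm(coords[pool] - coords[seed], axis=1)
        return pool[np.argsort(d)[:batch]]            # indices to label next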
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that truncate part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy but with fewer points actually calculated, greatly improving computational efficiency.
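A minimal sketch of the Grünwald-Letnikov sum with an adaptive memory: recent lags are used point by point, while older lags are sampled with a doubling stride, each sampled point carrying the summed weights of the neighbors it represents. The stride schedule and the size of the exact recent window are illustrative choices.

    import numpy as np

    def gl_weights(alpha, n):
        """Grünwald-Letnikov coefficients w_k = (-1)^k C(alpha, k)."""
        w = np.empty(n + 1)
        w[0] = 1.0
        for k in range(1, n + 1):
            w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
        return w

    def gl_derivative_adaptive(f_hist, alpha, h, recent=16):
        """Approximate the GL fractional derivative at the latest point of
        f_hist (f_hist[-1] is 'now'). The most recent `recent` lags use
        every point; older lags are sampled with doubling stride, each
        sample carrying the summed weights of the lags it represents."""
        n = len(f_hist) - 1
        w = gl_weights(alpha, n)
        total, k, stride = 0.0, 0, 1
        while k <= n:
            if k < recent:
                total += w[k] * f_hist[n - k]
                k += 1
            else:
                stride *= 2                           # progressively sparser sampling
                grp = w[k:min(k + stride, n + 1)]
                total += grp.sum() * f_hist[n - k]    # one point stands for the group
                k += len(grp)
        return total / h**alpha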
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
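A minimal 1D sketch of the metric-free idea: linear-interpolation error on each interval is estimated directly from the function, and nodes are moved so every interval carries an equal share of that error. The error estimate and iteration counts are illustrative.

    import numpy as np

    def equidistribute(f, a, b, n_nodes=41, n_iters=5, fine=2001):
        """Adapt a 1D grid to equidistribute linear-interpolation error,
        estimated as (h^2 / 8) * |f''| from a fine reference sampling."""
        xf = np.linspace(a, b, fine)
        f2 = np.abs(np.gradient(np.gradient(f(xf), xf), xf))   # |f''| estimate
        x = np.linspace(a, b, n_nodes)
        for _ in range(n_iters):
            h = np.diff(x)
            mid_f2 = np.interp(0.5 * (x[:-1] + x[1:]), xf, f2)
            err = h**2 / 8.0 * mid_f2 + 1e-12        # per-interval error estimate
            cum = np.concatenate([[0.0], np.cumsum(err)])
            # place nodes at equal quantiles of the cumulative error
            targets = np.linspace(0.0, cum[-1], n_nodes)
            x = np.interp(targets, cum, x)
        return x

    # usage: nodes cluster near the steep layer of a tanh profile
    # grid = equidistribute(lambda t: np.tanh(20 * (t - 0.5)), 0.0, 1.0)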
Interdisciplinary education approach to the human science
NASA Astrophysics Data System (ADS)
Szu, Harold; Zheng, Yufeng; Zhang, Nian
2012-06-01
Over the past decades we have introduced the human sciences as components and integrated them into an interdisciplinary endeavor. This year, we built a website to systematically maintain the educational research service. We captured the human sciences in various components in the SPIE proceedings over the last decades, which included: (i) ears and eyes, such as adaptive wavelets; (ii) brain-like unsupervised learning, namely independent component analysis (ICA); (iii) compressive sampling for spatiotemporal sparse information processing; (iv) nanoengineering approaches to sensing components; (v) systems biology measurements; and (vi) biomedical wellness applications. In order to serve the interdisciplinary community better, our systems approach has former recipients invite the next recipients to deliver review talks and panel discussions. Since only the former recipients of each component can lead the nomination committees and make the final selections, we also created a leadership award, for which any conference attendee may submit nominations, subject to approval by the conference organizing committee.
An interactive modular design for computerized photometry in spectrochemical analysis
NASA Technical Reports Server (NTRS)
Bair, V. L.
1980-01-01
A general functional description of totally automatic photometry of emission spectra is not available for an operating environment in which the sample compositions and analysis procedures are low-volume and non-routine. The advantages of using an interactive approach to computer control in such an operating environment are demonstrated. This approach includes modular subroutines selected at multiple-option, menu-style decision points. This style of programming is used for trace elemental determinations, including the automated reading of spectrographic plates produced by a 3.4 m Ebert mount spectrograph using a dc arc in an argon atmosphere. The simplified control logic and modular subroutine approach facilitate innovative research and program development, yet are easily adapted to routine tasks. Operator confidence and control are increased by built-in options including the degree of automation, the amount of intermediate data printed out, the amount of user prompting, and multidirectional decision points.
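A minimal sketch of the interactive modular style described above: subroutines are registered in a table and selected at a menu-style decision point, so non-routine analyses stay under operator control. All subroutine names are placeholders.

    def read_plate():      print("reading spectrographic plate...")
    def fit_background():  print("fitting background...")
    def report():          print("printing intermediate data...")

    MENU = {"1": ("Read plate", read_plate),
            "2": ("Fit background", fit_background),
            "3": ("Report results", report)}

    def main_loop():
        """Menu-style decision point: the operator picks modular subroutines
        in any order, keeping non-routine analyses under user control."""
        while True:
            for key, (name, _) in MENU.items():
                print(f"{key}) {name}")
            choice = input("select step (q to quit): ")
            if choice == "q":
                break
            if choice in MENU:
                MENU[choice][1]()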
Adaptive Role Playing Games: An Immersive Approach for Problem Based Learning
ERIC Educational Resources Information Center
Sancho, Pilar; Moreno-Ger, Pablo; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar
2009-01-01
In this paper we present a general framework, called NUCLEO, for the application of socio-constructive educational approaches in higher education. The underlying pedagogical approach relies on an adaptation model in order to improve group dynamics, as this has been identified as one of the key features in the success of collaborative learning…
Adaptive management of natural resources-framework and issues
Williams, B.K.
2011-01-01
Adaptive management, an approach for simultaneously managing and learning about natural resources, has been around for several decades. Interest in adaptive decision making has grown steadily over that time, and by now many in natural resources conservation claim that adaptive management is the approach they use in meeting their resource management responsibilities. Yet there remains considerable ambiguity about what adaptive management actually is and how it is to be implemented by practitioners. The objective of this paper is to present a framework and conditions for adaptive decision making, and to discuss some important challenges in its application. Adaptive management is described as a two-phase process, with a deliberative phase and an iterative phase implemented sequentially over the timeframe of an application. Key elements, processes, and issues in adaptive decision making are highlighted in terms of this framework. Special emphasis is given to the question of geographic scale, the difficulties presented by non-stationarity, and organizational challenges in implementing adaptive management. © 2010.
Wang, Ting; Chen, Zhencai; Zhao, Guang; Hitchman, Glenn; Liu, Congcong; Zhao, Xiaoyue; Liu, Yijun; Chen, Antao
2014-04-15
Conflict adaptation has been widely researched in normal and clinical populations. There are large individual differences in conflict adaptation, and it has been linked to the schizotypal trait. However, no study to date has examined how individual differences in spontaneous brain activity are related to behavioral conflict adaptation (performance). Resting-state functional magnetic resonance imaging (RS-fMRI) is a promising tool to investigate this issue. The present study evaluated the regional homogeneity (ReHo) of RS-fMRI signals in order to explore the neural basis of individual differences in conflict adaptation across two independent samples comprising a total of 67 normal subjects. A partial correlation analysis was carried out to examine the relationship between ReHo and behavioral conflict adaptation, while controlling for reaction time, standard deviation and flanker interference effects. This analysis was conducted on 39 subjects' data (sample 1); the results showed significant positive correlations in the left dorsolateral prefrontal cortex (DLPFC) and left ventrolateral prefrontal cortex. We then conducted a test-validation procedure on the remaining 28 subjects' data (sample 2) to examine the reliability of the results. Regions of interest were defined based on the correlation results. Regression analysis showed that variability in ReHo values in the DLPFC accounted for 48% of the individual differences in the conflict adaptation effect in sample 2. The present findings provide further support for the importance of the DLPFC in the conflict adaptation process. More importantly, we demonstrated that ReHo of RS-fMRI signals in the DLPFC can predict behavioral performance in conflict adaptation, which provides potential biomarkers for the early detection of cognitive control deterioration. Copyright © 2013 Elsevier Inc. All rights reserved.
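A minimal sketch of the partial-correlation step: both variables are residualized on the nuisance covariates (reaction time, its standard deviation, and the flanker interference effect), and the correlation of the residuals is the partial correlation. Variable names in the usage comment are placeholders.

    import numpy as np

    def partial_corr(x, y, covariates):
        """Correlation between x and y after regressing out covariates.
        covariates: (n, k) matrix; an intercept column is added here."""
        Z = np.column_stack([np.ones(len(x)), covariates])
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residualize x
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residualize y
        return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

    # usage: r = partial_corr(reho_dlpfc, conflict_adaptation,
    #                         np.column_stack([rt_mean, rt_sd, flanker_effect]))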
Poe, Steven; de Oca, Adrián Nieto-Montes; Torres-Carvajal, Omar; de Queiroz, Kevin; Velasco, Julián A; Truett, Brad; Gray, Levi N; Ryan, Mason J; Köhler, Gunther; Ayala-Varela, Fernando; Latella, Ian
2018-06-01
Adaptive radiation is a widely recognized pattern of evolution wherein substantial phenotypic change accompanies rapid speciation. Adaptive radiation may be triggered by environmental opportunities resulting from dispersal to new areas or via the evolution of traits, called key innovations, that allow for invasion of new niches. Species sampling is a known source of bias in many comparative analyses, yet classic adaptive radiations have not been studied comparatively with comprehensively sampled phylogenies. In this study, we use unprecedented comprehensive phylogenetic sampling of Anolis lizard species to examine comparative evolution in this well-studied adaptive radiation. We compare adaptive radiation models within Anolis and in the Anolis clade and a potential sister lineage, the Corytophanidae. We find evidence for island (i.e., opportunity) effects and no evidence for trait (i.e., key innovation) effects causing accelerated body size evolution within Anolis. However, island effects are scale dependent: when Anolis and Corytophanidae are analyzed together, no island effect is evident. We find no evidence for an island effect on speciation rate and tenuous evidence for greater speciation rate due to trait effects. These results suggest the need for precision in treatments of classic adaptive radiations such as Anolis and further refinement of the concept of adaptive radiation.
Traffic-Adaptive, Flow-Specific Medium Access for Wireless Networks
2009-09-01
Hybrid, contention and non-contention schemes are shown to be special cases. This work also compares the energy efficiency of centralized and distributed solutions and proposes an energy-efficient version of traffic-adaptive CWS-MAC that includes an adaptive sleep cycle coordinated through the use of preamble sampling. A preamble sampling probability parameter is introduced to manage the trade-off between energy efficiency and throughput and delay.
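A toy model, ours rather than the report's, of how a preamble sampling probability trades energy against delay: listening in more slots costs energy but shortens the expected wait before a preamble is detected.

    def preamble_tradeoff(p, e_listen=1.0, e_sleep=0.05, slot=0.1):
        """Per-slot energy and expected detection delay for a node that
        samples the channel with probability p each slot (geometric wait)."""
        energy = p * e_listen + (1.0 - p) * e_sleep   # average energy per slot
        delay = slot / p                              # expected wait until a sample
        return energy, delay

    # sweep p to expose the trade-off between energy efficiency and delay
    for p in (0.05, 0.1, 0.25, 0.5, 1.0):
        e, d = preamble_tradeoff(p)
        print(f"p={p:.2f}  energy/slot={e:.3f}  expected delay={d:.2f}s")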