DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig
2006-10-01
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
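The abstract does not spell out the propagation algorithm, so the following is only a minimal Python sketch of sampling-based evidence-theory propagation for a single uncertain input: each focal element (an interval with a basic probability assignment) is sampled to approximate the range of the model output, and belief and plausibility for a threshold event are accumulated from the masses. The function names, the toy model, and the focal structure are illustrative assumptions, not the strategy described in the presentation itself.

```python
import numpy as np

def belief_plausibility(focal_elements, masses, model, threshold, n_samples=1000, seed=0):
    """Sampling-based estimate of Bel/Pl that the model output does not exceed `threshold`.

    focal_elements : list of (low, high) intervals for one epistemically uncertain input
    masses         : basic probability assignment (BPA) for each focal element (sums to 1)
    model          : black-box function f(x) -> scalar output
    """
    rng = np.random.default_rng(seed)
    bel, pl = 0.0, 0.0
    for (lo, hi), m in zip(focal_elements, masses):
        # Sample the focal element to approximate the min/max of f over the interval.
        x = rng.uniform(lo, hi, n_samples)
        y = np.array([model(xi) for xi in x])
        if y.max() <= threshold:   # whole focal element maps below the threshold -> supports belief
            bel += m
        if y.min() <= threshold:   # part of the focal element maps below the threshold -> plausibility
            pl += m
    return bel, pl

# Hypothetical usage with a cheap stand-in model and three overlapping focal elements.
bel, pl = belief_plausibility(
    focal_elements=[(0.0, 1.0), (0.5, 2.0), (1.5, 3.0)],
    masses=[0.5, 0.3, 0.2],
    model=lambda x: x**2,
    threshold=2.0,
)
print(f"Bel={bel:.2f}, Pl={pl:.2f}")
```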
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
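The abstract does not give the exact form of the hybrid score, so the sketch below is only one plausible reading of a TEAD-style criterion: an exploration term (distance to the nearest evaluated sample) combined with an exploitation term (deviation of the surrogate from its first-order Taylor expansion about that neighbour). The weighting, the finite-difference gradient, and the function names are assumptions for illustration.

```python
import numpy as np

def taylor_hybrid_score(candidates, X_train, surrogate, w=0.5, eps=1e-4):
    """Score candidate points for adaptive design (larger = more informative).

    candidates : (m, d) array of candidate locations
    X_train    : (n, d) array of already-evaluated samples
    surrogate  : callable mapping a (d,) point to a scalar prediction
    """
    def grad(x):
        # Forward finite-difference gradient of the surrogate.
        g = np.zeros_like(x, dtype=float)
        f0 = surrogate(x)
        for j in range(x.size):
            xp = x.astype(float).copy()
            xp[j] += eps
            g[j] = (surrogate(xp) - f0) / eps
        return g

    scores = []
    for c in candidates:
        d = np.linalg.norm(X_train - c, axis=1)
        k = int(np.argmin(d))                  # nearest evaluated sample
        x_near = X_train[k]
        taylor = surrogate(x_near) + grad(x_near) @ (c - x_near)
        explore = d[k]                         # favours unexplored regions
        exploit = abs(surrogate(c) - taylor)   # favours strongly nonlinear regions
        scores.append(w * explore + (1 - w) * exploit)
    return np.array(scores)

# The next design point would be candidates[np.argmax(scores)].
```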
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
NASA Astrophysics Data System (ADS)
Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.
2016-12-01
Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions and refinement in interesting areas. We define an efficient and effective evaluation metric based on Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee the achievement of the desired accuracy. The numerical results of several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Natural Science Foundation of China grants No. 41030746 and 41172206.
"FearNot!": A Computer-Based Anti-Bullying-Programme Designed to Foster Peer Intervention
ERIC Educational Resources Information Center
Vannini, Natalie; Enz, Sibylle; Sapouna, Maria; Wolke, Dieter; Watson, Scott; Woods, Sarah; Dautenhahn, Kerstin; Hall, Lynne; Paiva, Ana; Andre, Elizabeth; Aylett, Ruth; Schneider, Wolfgang
2011-01-01
Bullying is widespread in European schools, despite multiple intervention strategies having been proposed over the years. The present study investigates the effects of a novel virtual learning strategy ("FearNot!") to tackle bullying in both UK and German samples. The approach is intended primarily for victims to increase their coping…
Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.
Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong
2017-01-01
Leaf based plant species recognition plays an important role in ecological protection; however, its application to large and modern leaf databases has long been hindered by computational cost and feasibility concerns. Recognizing such limitations, we propose a Jaccard distance based sparse representation (JDSR) method which adopts a two-stage, coarse-to-fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage applies a Jaccard distance based weighted sparse representation classification (WSRC), which aims to approximately represent the test sample in the training space, and classify it by the approximation residuals. Since the training model of our JDSR method involves far fewer but more informative representatives, this method is expected to overcome the limitation of high computational and memory costs in traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC based plant recognition methods in terms of both accuracy and computational speed.
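As a rough illustration of the coarse screening stage (not the authors' exact implementation), the sketch below computes Jaccard distances between a binarized test feature vector and the training vectors and keeps the classes whose nearest training sample is closest; the fine stage (WSRC) would then be run only on the retained candidate classes. All names and the candidate count are assumptions.

```python
import numpy as np

def jaccard_distance(a, b):
    """Jaccard distance between two binary feature vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - inter / union if union else 0.0

def coarse_candidate_classes(test_vec, train_vecs, train_labels, n_candidates=5):
    """Stage one of a coarse-to-fine scheme: keep the classes whose closest
    training sample (by Jaccard distance) is nearest to the test sample."""
    d = np.array([jaccard_distance(test_vec, t) for t in train_vecs])
    best_per_class = {}
    for dist, label in zip(d, train_labels):
        best_per_class[label] = min(dist, best_per_class.get(label, np.inf))
    ranked = sorted(best_per_class, key=best_per_class.get)
    return ranked[:n_candidates]
```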
LAWS simulation: Sampling strategies and wind computation algorithms
NASA Technical Reports Server (NTRS)
Emmitt, G. D. A.; Wood, S. A.; Houston, S. H.
1989-01-01
In general, work has continued on developing and evaluating algorithms designed to manage the Laser Atmospheric Wind Sounder (LAWS) lidar pulses and to compute the horizontal wind vectors from the line-of-sight (LOS) measurements. These efforts fall into three categories: Improvements to the shot management and multi-pair algorithms (SMA/MPA); observing system simulation experiments; and ground-based simulations of LAWS.
User-Driven Sampling Strategies in Image Exploitation
Harvey, Neal R.; Porter, Reid B.
2013-12-23
Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. We discovered that user-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. Furthermore, in preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.
User-driven sampling strategies in image exploitation
NASA Astrophysics Data System (ADS)
Harvey, Neal; Porter, Reid
2013-12-01
Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization is an effective approach for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is key to such studies. However, previous studies are generally based on a stand-alone surrogate model and rarely combine multiple methods to sufficiently improve the surrogate's approximation accuracy with respect to the simulation model. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals between the outputs of the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
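The set pair analysis weighting is specific to the paper; the sketch below shows only the generic assembling idea under a simpler assumption, namely weights inversely proportional to each surrogate's validation RMSE. Fitted model objects with a `predict` method (e.g., RBFANN, SVR, and Kriging surrogates) are assumed.

```python
import numpy as np

def ensemble_predict(surrogates, X, X_val, y_val):
    """Combine surrogate predictions with weights based on validation accuracy.

    surrogates : list of fitted models exposing .predict(X)
    X          : points at which the ensemble prediction is wanted
    X_val, y_val : held-out validation inputs/outputs used to score each surrogate
    """
    rmse = np.array([np.sqrt(np.mean((m.predict(X_val) - y_val) ** 2)) for m in surrogates])
    w = (1.0 / rmse) / np.sum(1.0 / rmse)    # more accurate surrogates receive larger weights
    preds = np.column_stack([m.predict(X) for m in surrogates])
    return preds @ w
```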
System reliability of randomly vibrating structures: Computational modeling and laboratory testing
NASA Astrophysics Data System (ADS)
Sundar, V. S.; Ammanagi, S.; Manohar, C. S.
2015-09-01
The problem of determination of system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to conduct laboratory testing to assess the system reliability of engineering structures with a reduced number of samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations on the road load response of an automotive system tested on a four-post test rig.
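A Girsanov change of measure for the governing stochastic differential equations is beyond a few lines, but the underlying variance-reduction idea can be illustrated with a static importance-sampling estimate of a small failure probability; the limit-state function, proposal shift, and sample size below are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.stats import norm

def failure_probability_is(g, shift, n=10_000, seed=0):
    """Estimate P(g(X) < 0) for X ~ N(0, 1) by sampling from the shifted
    proposal N(shift, 1) and re-weighting with the likelihood ratio dP/dQ."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=shift, scale=1.0, size=n)
    weights = norm.pdf(x) / norm.pdf(x, loc=shift)
    return np.mean((g(x) < 0) * weights)

# Example: a linear limit-state g(x) = 4 - x, i.e. failure when x > 4.
p_hat = failure_probability_is(lambda x: 4.0 - x, shift=4.0)
print(p_hat)   # close to 1 - Phi(4) ~ 3.2e-5, with far fewer samples than crude Monte Carlo
```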
Stow, Sarah M; Goodwin, Cody R; Kliman, Michal; Bachmann, Brian O; McLean, John A; Lybrand, Terry P
2014-12-04
Ion mobility-mass spectrometry (IM-MS) allows the separation of ionized molecules based on their charge-to-surface area (IM) and mass-to-charge ratio (MS), respectively. The IM drift time data that is obtained is used to calculate the ion-neutral collision cross section (CCS) of the ionized molecule with the neutral drift gas, which is directly related to the ion conformation and hence molecular size and shape. Studying the conformational landscape of these ionized molecules computationally provides interpretation to delineate the potential structures that these CCS values could represent, or conversely, structural motifs not consistent with the IM data. A challenge in the IM-MS community is the ability to rapidly compute conformations to interpret natural product data, a class of molecules exhibiting a broad range of biological activity. The diversity of biological activity is, in part, related to the unique structural characteristics often observed for natural products. Contemporary approaches to structurally interpret IM-MS data for peptides and proteins typically utilize molecular dynamics (MD) simulations to sample conformational space. However, MD calculations are computationally expensive, they require a force field that accurately describes the molecule of interest, and there is no simple metric that indicates when sufficient conformational sampling has been achieved. Distance geometry is a computationally inexpensive approach that creates conformations based on sampling different pairwise distances between the atoms within the molecule and therefore does not require a force field. Progressively larger distance bounds can be used in distance geometry calculations, providing in principle a strategy to assess when all plausible conformations have been sampled. Our results suggest that distance geometry is a computationally efficient and potentially superior strategy for conformational analysis of natural products to interpret gas-phase CCS data.
2015-01-01
Ion mobility-mass spectrometry (IM-MS) allows the separation of ionized molecules based on their charge-to-surface area (IM) and mass-to-charge ratio (MS), respectively. The IM drift time data that is obtained is used to calculate the ion-neutral collision cross section (CCS) of the ionized molecule with the neutral drift gas, which is directly related to the ion conformation and hence molecular size and shape. Studying the conformational landscape of these ionized molecules computationally provides interpretation to delineate the potential structures that these CCS values could represent, or conversely, structural motifs not consistent with the IM data. A challenge in the IM-MS community is the ability to rapidly compute conformations to interpret natural product data, a class of molecules exhibiting a broad range of biological activity. The diversity of biological activity is, in part, related to the unique structural characteristics often observed for natural products. Contemporary approaches to structurally interpret IM-MS data for peptides and proteins typically utilize molecular dynamics (MD) simulations to sample conformational space. However, MD calculations are computationally expensive, they require a force field that accurately describes the molecule of interest, and there is no simple metric that indicates when sufficient conformational sampling has been achieved. Distance geometry is a computationally inexpensive approach that creates conformations based on sampling different pairwise distances between the atoms within the molecule and therefore does not require a force field. Progressively larger distance bounds can be used in distance geometry calculations, providing in principle a strategy to assess when all plausible conformations have been sampled. Our results suggest that distance geometry is a computationally efficient and potentially superior strategy for conformational analysis of natural products to interpret gas-phase CCS data. PMID:25360896
A Comparative Evaluation of Computer Based and Non-Computer Based Instructional Strategies.
ERIC Educational Resources Information Center
Emerson, Ian
1988-01-01
Compares the computer assisted instruction (CAI) tutorial with its non-computerized pedagogical roots: the Socratic Dialog with Skinner's Programmed Instruction. Tests the effectiveness of a CAI tutorial on diffusion and osmosis against four other interactive and non-interactive instructional strategies. Notes computer based strategies were…
Stanczyk, Nicola Esther; Crutzen, Rik; Bolman, Catherine; Muris, Jean; de Vries, Hein
2013-02-06
Smoking tobacco is one of the most preventable causes of illness and death. Web-based tailored smoking cessation interventions have been shown to be effective. Although these interventions have the potential to reach a large number of smokers, they often face high attrition rates, especially among lower educated smokers. A possible reason for the high attrition rates in the latter group is that computer-tailored smoking cessation interventions may not be attractive enough as they are mainly text-based. Video-based messages might be more effective in attracting attention and stimulating comprehension in people with a lower educational level and could therefore reduce attrition rates. The objective of the present study was to investigate whether differences exist in message-processing mechanisms (attention, comprehension, self-reference, appreciation, processing) and future adherence (intention to visit/use the website again, recommend the website to others) to a Dutch computer-tailored smoking cessation program, according to delivery strategy (video- or text-based messages) and educational level. Smokers who were motivated to quit within the following 6 months and who were aged over 16 were included in the program. Participants were randomly assigned to one of two conditions (video/text CT). The sample was stratified into 2 categories: lower and higher educated participants. In total, 139 participants completed the first session of the web-based tailored intervention and were subsequently asked to fill out a questionnaire assessing message-processing mechanisms and future adherence. ANOVAs and regression analyses were conducted to investigate the differences in message-processing mechanisms and future adherence with regard to delivery strategy and education. No interaction effects were found between delivery strategy (video vs text) and educational level on message-processing mechanisms and future adherence. Delivery strategy had no effect on future adherence and processing mechanisms. However, in both groups results indicated that lower educated participants showed higher attention (F(1,138)=3.97; P=.05) and processing levels (F(1,138)=4.58; P=.04). Results also revealed that lower educated participants were more inclined to visit the computer-tailored intervention website again (F(1,138)=4.43; P=.04). Computer-tailored programs have the potential to positively influence lower educated groups as they might be more involved in the computer-tailored intervention than higher educated smokers. Longitudinal studies with a larger sample are needed to gain more insight into the role of delivery strategy in tailored information and to investigate whether the intention to visit the intervention website again results in the ultimate goal of behavior change. Netherlands Trial Register (NTR3102).
Crutzen, Rik; Bolman, Catherine; Muris, Jean; de Vries, Hein
2013-01-01
Background Smoking tobacco is one of the most preventable causes of illness and death. Web-based tailored smoking cessation interventions have been shown to be effective. Although these interventions have the potential to reach a large number of smokers, they often face high attrition rates, especially among lower educated smokers. A possible reason for the high attrition rates in the latter group is that computer-tailored smoking cessation interventions may not be attractive enough as they are mainly text-based. Video-based messages might be more effective in attracting attention and stimulating comprehension in people with a lower educational level and could therefore reduce attrition rates. Objective The objective of the present study was to investigate whether differences exist in message-processing mechanisms (attention, comprehension, self-reference, appreciation, processing) and future adherence (intention to visit/use the website again, recommend the website to others) to a Dutch computer-tailored smoking cessation program, according to delivery strategy (video- or text-based messages) and educational level. Methods Smokers who were motivated to quit within the following 6 months and who were aged over 16 were included in the program. Participants were randomly assigned to one of two conditions (video/text CT). The sample was stratified into 2 categories: lower and higher educated participants. In total, 139 participants completed the first session of the web-based tailored intervention and were subsequently asked to fill out a questionnaire assessing message-processing mechanisms and future adherence. ANOVAs and regression analyses were conducted to investigate the differences in message-processing mechanisms and future adherence with regard to delivery strategy and education. Results No interaction effects were found between delivery strategy (video vs text) and educational level on message-processing mechanisms and future adherence. Delivery strategy had no effect on future adherence and processing mechanisms. However, in both groups results indicated that lower educated participants showed higher attention (F(1,138)=3.97; P=.05) and processing levels (F(1,138)=4.58; P=.04). Results also revealed that lower educated participants were more inclined to visit the computer-tailored intervention website again (F(1,138)=4.43; P=.04). Conclusions Computer-tailored programs have the potential to positively influence lower educated groups as they might be more involved in the computer-tailored intervention than higher educated smokers. Longitudinal studies with a larger sample are needed to gain more insight into the role of delivery strategy in tailored information and to investigate whether the intention to visit the intervention website again results in the ultimate goal of behavior change. Trial Registration Netherlands Trial Register (NTR3102). PMID:23388554
NASA Astrophysics Data System (ADS)
Sugiharti, Gulmah
2018-03-01
This study aims to examine the improvement of student learning outcomes through independent learning using computer-based learning media in the STBM (Teaching and Learning Strategy) Chemistry course. The population in this research was all students of the class of 2014 taking the STBM Chemistry course, a total of 4 classes. The sample was taken purposively: 2 classes of 32 students each, serving as the control class and the experiment class. The instrument used was a learning outcomes test consisting of 20 multiple-choice questions that had been declared valid and reliable. Data were analyzed with a one-sided t test, and the improvement in learning outcomes was assessed using a normalized gain test. Based on the learning outcome data, the average normalized gain was 0.530 for the experimental class and 0.224 for the control class, i.e., improvements of 53% and 22.4%, respectively. Hypothesis testing gave t_count > t_table (9.02 > 1.6723) at a significance level of α = 0.05 with df = 58. This means Ha is accepted: the use of computer-based learning media (CAI) can improve student learning outcomes in the Teaching and Learning Strategy (STBM) Chemistry course in the 2017/2018 academic year.
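For readers unfamiliar with the statistics reported above, the sketch below computes Hake's normalized gain and a one-sided two-sample t-test on hypothetical pre/post scores (0-100 scale); the generated data and the SciPy call are illustrative and do not reproduce the study's values.

```python
import numpy as np
from scipy import stats

def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain <g> = (post - pre) / (max - pre), averaged over students."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return np.mean((post - pre) / (max_score - pre))

# Hypothetical pre/post test scores for two classes of 32 students each.
rng = np.random.default_rng(1)
pre_e, post_e = rng.uniform(30, 50, 32), rng.uniform(60, 85, 32)   # experiment class (CAI media)
pre_c, post_c = rng.uniform(30, 50, 32), rng.uniform(40, 60, 32)   # control class

g_exp = normalized_gain(pre_e, post_e)
g_ctl = normalized_gain(pre_c, post_c)

# One-sided t-test on per-student gains (SciPy >= 1.6 for the `alternative` keyword).
gains_e = (post_e - pre_e) / (100 - pre_e)
gains_c = (post_c - pre_c) / (100 - pre_c)
t, p = stats.ttest_ind(gains_e, gains_c, alternative="greater")
print(f"<g>_exp={g_exp:.3f}, <g>_ctl={g_ctl:.3f}, t={t:.2f}, p={p:.4f}")
```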
NASA Astrophysics Data System (ADS)
Shrivastava, Sajal; Sohn, Il-Yung; Son, Young-Min; Lee, Won-Il; Lee, Nae-Eung
2015-11-01
Although real-time label-free fluorescent aptasensors based on nanomaterials are increasingly recognized as a useful strategy for the detection of target biomolecules with high fidelity, the lack of an imaging-based quantitative measurement platform limits their implementation with biological samples. Here we introduce an ensemble strategy for a real-time label-free fluorescent graphene (Gr) aptasensor platform. This platform employs aptamer length-dependent tunability, thus enabling the reagentless quantitative detection of biomolecules through computational processing coupled with real-time fluorescence imaging data. We demonstrate that this strategy effectively delivers dose-dependent quantitative readouts of adenosine triphosphate (ATP) concentration on chemical vapor deposited (CVD) Gr and reduced graphene oxide (rGO) surfaces, thereby providing cytotoxicity assessment. Compared with conventional fluorescence spectrometry methods, our highly efficient, universally applicable, and rational approach will facilitate broader implementation of imaging-based biosensing platforms for the quantitative evaluation of a range of target molecules. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr05839b
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Hall, Joel; Karelse, Robert N.
2017-11-01
Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues related to real-world groundwater models such as computational expense, existing observation data, high-parameter dimension, etc. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion, and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. And, a heuristic methodology, based on the concept of the greedy algorithm, is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
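The full workflow described above (Null-Space Monte Carlo, minimax robustness, the greedy heuristic over posterior samples) is involved; the sketch below isolates just one ingredient, a greedy D-optimal selection of candidate observations for a linearized model. The Jacobian rows, the ridge term, and the function name are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def greedy_d_optimal(J, k, ridge=1e-8):
    """Greedily pick k candidate observations (rows of Jacobian J) that maximize
    log det(J_S^T J_S), a D-optimality criterion for a linearized model."""
    n_obs, n_par = J.shape
    selected, remaining = [], list(range(n_obs))
    M = ridge * np.eye(n_par)                       # regularized information matrix
    for _ in range(k):
        gains = []
        for i in remaining:
            Mi = M + np.outer(J[i], J[i])
            gains.append(np.linalg.slogdet(Mi)[1])  # log|det| after adding observation i
        best = remaining[int(np.argmax(gains))]
        selected.append(best)
        M += np.outer(J[best], J[best])
        remaining.remove(best)
    return selected
```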
D'Elia, M.; Edwards, H. C.; Hu, J.; ...
2018-01-18
Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162-C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen-Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.
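A minimal sketch of the grouping heuristic suggested by that result, under simplifying assumptions: compute a scalar anisotropy measure for each sample of the diffusion coefficient (here the eigenvalue ratio of a per-sample tensor), sort, and form ensembles from contiguous blocks so that samples with similar expected solver cost are propagated together. The anisotropy measure and ensemble size are illustrative, not the paper's metric.

```python
import numpy as np

def group_by_anisotropy(diffusion_tensors, ensemble_size):
    """Group samples into ensembles of similar anisotropy.

    diffusion_tensors : (n_samples, d, d) array, one SPD diffusion tensor per sample
    Returns a list of index arrays, one per ensemble.
    """
    eig = np.linalg.eigvalsh(diffusion_tensors)   # eigenvalues in ascending order
    anisotropy = eig[:, -1] / eig[:, 0]           # largest / smallest eigenvalue
    order = np.argsort(anisotropy)                # similar samples end up adjacent
    return [order[i:i + ensemble_size] for i in range(0, len(order), ensemble_size)]
```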
NASA Astrophysics Data System (ADS)
Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas
2016-09-01
Vegetation monitoring is becoming a major issue in the urban environment due to the services vegetation provides, and it necessitates an accurate and up-to-date mapping. Very High Resolution satellite images enable a detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation, but they require a large amount of training data. In this context, this study investigates the performance of different sampling strategies in order to reduce the number of examples needed. Two window-based active learning algorithms from the state of the art are compared to a classical stratified random sampling, and a third strategy combining active learning and stratified sampling is proposed. The efficiency of these strategies is evaluated on two medium-size French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
Ozçift, Akin
2011-05-01
Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling strategy based Random Forests (RF) ensemble classifier to improve the diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification performance point of view. In general, multiclass datasets having an unbalanced distribution of sample sizes are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset, with multiple classes of small sample size, and it is therefore well suited to testing our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmias, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling to evaluate the efficiency of the proposed training strategy. The resultant accuracy of the classifier is 90.0%, which is a quite high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
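The correlation-based feature selection step is omitted here; the sketch below illustrates only the simple random resampling idea (upsampling minority classes to the majority size) followed by a scikit-learn Random Forest, with hypothetical arrays X and y standing in for the arrhythmia data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

def oversample_minorities(X, y, seed=0):
    """Simple random resampling: upsample every class to the size of the largest one."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xb, yb = [], []
    for c in classes:
        Xc = X[y == c]
        Xr = resample(Xc, replace=True, n_samples=n_max, random_state=seed)
        Xb.append(Xr)
        yb.append(np.full(n_max, c))
    return np.vstack(Xb), np.concatenate(yb)

# Hypothetical usage on an imbalanced multiclass dataset (X, y):
# X_bal, y_bal = oversample_minorities(X, y)
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
```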
NASA Astrophysics Data System (ADS)
Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao
2015-03-01
Projection and back-projection are the most computationally consuming parts of Computed Tomography (CT) reconstruction. Parallelization strategies using GPU computing techniques have been introduced. In this paper we present a new parallelization scheme for both projection and back-projection. The proposed method is based on the CUDA technology developed by NVIDIA Corporation. Instead of building a complex model, we aimed at optimizing the existing algorithm and making it suitable for CUDA implementation so as to gain fast computation speed. Besides making use of the texture fetching operation, which helps gain faster interpolation speed, we fixed the number of sampling points in the computation of the projection to ensure the synchronization of blocks and threads, thus preventing the latency caused by inconsistent computational complexity. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.
Sánchez, José; Guarnes, Miguel Ángel; Dormido, Sebastián
2009-01-01
This paper is an experimental study of the utilization of different event-based strategies for the automatic control of a simple but very representative industrial process: the level control of a tank. In an event-based control approach it is the triggering of a specific event, and not the time, that instructs the sensor to send the current state of the process to the controller, and the controller to compute a new control action and send it to the actuator. In the document, five control strategies based on different event-based sampling techniques are described, compared, and contrasted with a classical time-based control approach and a hybrid one. The common denominator in the time, the hybrid, and the event-based control approaches is the controller: a proportional-integral algorithm with adaptations depending on the selected control approach. To compare and contrast each one of the hybrid and the pure event-based control algorithms with the time-based counterpart, the two tasks that a control strategy must achieve (set-point following and disturbance rejection) are independently analyzed. The experimental study provides new proof concerning the ability of event-based control strategies to minimize the data exchange among the control agents (sensors, controllers, actuators) when an error-free control of the process is not a hard requirement. PMID:22399975
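A minimal simulation sketch of one event-based variant (send-on-delta sampling with a PI controller) for a gravity-drained tank; the tank parameters, threshold, and gains are illustrative assumptions, not the settings of the experimental study, and the controller updates only when an event fires.

```python
import numpy as np

def simulate_send_on_delta(T=600.0, dt=0.1, delta=0.02, setpoint=0.5,
                           kp=2.0, ki=0.1, A=1.0, a=0.3):
    """Tank level control where the sensor transmits (and the PI controller updates)
    only when the level has changed by more than `delta` since the last transmission."""
    n = int(T / dt)
    h, u, integral = 0.2, 0.0, 0.0
    h_sent, t_last, events = h, 0.0, 0
    for k in range(n):
        t = k * dt
        if abs(h - h_sent) > delta:           # event: send the current level
            err = setpoint - h
            integral += err * (t - t_last)    # integrate over the inter-event interval
            u = float(np.clip(kp * err + ki * integral, 0.0, 1.0))   # inflow command
            h_sent, t_last = h, t
            events += 1
        # Tank dynamics: A * dh/dt = q_in - a * sqrt(h)
        h += dt * (u - a * np.sqrt(max(h, 0.0))) / A
    return h, events

level, n_events = simulate_send_on_delta()
print(f"final level = {level:.3f}, transmissions = {n_events}")
```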
The Impact of Typhoons on the Ocean in the Pacific (ITOP) Field and Data Management Support
2011-12-16
in October of 2009 to develop effective sampling strategies for 2010. EOL/Computing Data and Software Facility (CDS) supported the ITOP Dry Run...measurement strategies necessitated a dry run experiment in October of 2009 to develop effective sampling strategies for 2010. EOL/Computing Data and...contains products from 21 September through 32 October 2009. The catalog remains accessible at EOL at the above mentioned URL. The products listed by
Efficient identification of context dependent subgroups of risk from genome wide association studies
Dyson, Greg; Sing, Charles F.
2014-01-01
We have developed a modified Patient Rule-Induction Method (PRIM) as an alternative strategy for analyzing representative samples of non-experimental human data to estimate and test the role of genomic variations as predictors of disease risk in etiologically heterogeneous sub-samples. A computational limit of the proposed strategy is encountered when the number of genomic variations (predictor variables) under study is large (> 500) because permutations are used to generate a null distribution to test the significance of a term (defined by values of particular variables) that characterizes a sub-sample of individuals through the peeling and pasting processes. As an alternative, in this paper we introduce a theoretical strategy that facilitates the quick calculation of Type I and Type II errors in the evaluation of terms in the peeling and pasting processes carried out in the execution of a PRIM analysis that are underestimated and non-existent, respectively, when a permutation-based hypothesis test is employed. The resultant savings in computational time makes possible the consideration of larger numbers of genomic variations (an example genome wide association study is given) in the selection of statistically significant terms in the formulation of PRIM prediction models. PMID:24570412
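For orientation, the sketch below shows a generic one-variable peeling loop of the original PRIM (not the authors' modified, permutation-free version): at each step the alpha-fraction slice of one variable's range whose removal most increases the mean outcome inside the box is peeled off, until the box support falls below a minimum. The parameter names and defaults are illustrative.

```python
import numpy as np

def prim_peel(X, y, alpha=0.05, min_support=0.1):
    """Basic PRIM peeling: return box bounds (lo, hi per column) maximizing mean(y) inside."""
    n, d = X.shape
    lo, hi = X.min(axis=0).astype(float), X.max(axis=0).astype(float)
    inside = np.ones(n, dtype=bool)
    while inside.mean() > min_support:
        best_mean, best_cut = -np.inf, None
        for j in range(d):
            xj = X[inside, j]
            for side, q in (("lo", np.quantile(xj, alpha)), ("hi", np.quantile(xj, 1 - alpha))):
                keep = inside & ((X[:, j] >= q) if side == "lo" else (X[:, j] <= q))
                if keep.sum() == 0:
                    continue
                m = y[keep].mean()
                if m > best_mean:
                    best_mean, best_cut = m, (j, side, q)
        if best_cut is None or best_mean <= y[inside].mean():
            break                              # no peel improves the in-box mean
        j, side, q = best_cut
        if side == "lo":
            lo[j] = q
            inside &= X[:, j] >= q
        else:
            hi[j] = q
            inside &= X[:, j] <= q
    return lo, hi
```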
ERIC Educational Resources Information Center
Anderson-Inman, Lynne; Ditson, Mary
This final report describes activities and accomplishments of the four-year Computer-Based Study Strategies (CBSS) Outreach Project at the University of Oregon. This project disseminated information about using computer-based study strategies as an intervention for students with learning disabilities and provided teachers in participating outreach…
Self-similar slip distributions on irregular shaped faults
NASA Astrophysics Data System (ADS)
Herrero, A.; Murphy, S.
2018-06-01
We propose a strategy to place a self-similar slip distribution on a complex fault surface that is represented by an unstructured mesh. This is achieved by applying a strategy based on the composite source model, in which a hierarchical set of asperities is placed on the fault, each with its own slip function that depends on the distance from the asperity centre. Central to this technique is the efficient, accurate computation of the distance between two points on the fault surface. This is known as the geodetic distance problem. We propose a method to compute the distance across complex non-planar surfaces based on a corollary of Huygens' principle. The difference between this method and other sample-based algorithms that precede it is the use of a curved front at the local level to calculate the distance. This technique produces a highly accurate computation of the distance, as the curvature of the front is linked to the distance from the source. Our local scheme is based on a sequence of two trilaterations, producing a robust algorithm that is highly precise. We test the strategy on a planar surface in order to assess its ability to preserve the self-similarity properties of a slip distribution. We also present a synthetic self-similar slip distribution on a real slab topography for a M8.5 event. This method for computing distance may be extended to the estimation of first arrival times in both complex 3D surfaces and 3D volumes.
Lee, Richard; Ptolemy, Adam S; Niewczas, Liliana; Britz-McKibbin, Philip
2007-01-15
Characterization of unknown low-abundance metabolites in biological samples is one of the most significant challenges in metabolomic research. In this report, an integrative strategy based on capillary electrophoresis-electrospray ionization-ion trap mass spectrometry (CE-ESI-ITMS) with computer simulations is examined as a multiplexed approach for studying the selective nutrient uptake behavior of E. coli within a complex broth medium. On-line sample preconcentration with desalting by CE-ESI-ITMS was performed directly without off-line sample pretreatment in order to improve detector sensitivity over 50-fold for cationic metabolites, with nanomolar detection limits. The migration behavior of charged metabolites was also modeled in CE as a qualitative tool to support MS characterization based on two fundamental analyte physicochemical properties, namely, absolute mobility (muo) and acid dissociation constant (pKa). Computer simulations using Simul 5.0 were used to better understand the dynamics of analyte electromigration, as well as to aid de novo identification of unknown nutrients. There was excellent agreement between computer-simulated and experimental electropherograms for several classes of cationic metabolites, as reflected by their relative migration times with an average error of <2.0%. Our studies revealed differential uptake of specific amino acids and nucleoside nutrients associated with distinct stages of bacterial growth. Herein, we demonstrate that CE can serve as an effective preconcentrator, desalter, and separator prior to ESI-MS, while providing additional qualitative information for unambiguous identification among isobaric and isomeric metabolites. The proposed strategy is particularly relevant for characterizing unknown yet biologically relevant metabolites that are not readily synthesized or commercially available.
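As a small illustration of the kind of physicochemical relationship such simulations rest on, the sketch below evaluates the effective mobility of a weak monoprotic base as a function of buffer pH from its absolute mobility and pKa; the numerical values are illustrative, and a full Simul 5.0-style simulation additionally treats ionic strength, multiple equilibria, and electromigration dispersion.

```python
import numpy as np

def effective_mobility_base(mu0, pKa, pH):
    """Effective mobility of a weak monoprotic base, BH+ <-> B + H+.

    Only the protonated fraction migrates, so
    mu_eff = mu0 * [BH+] / ([BH+] + [B]) = mu0 / (1 + 10**(pH - pKa)).
    """
    return mu0 / (1.0 + 10.0 ** (np.asarray(pH, float) - pKa))

# Illustrative values for a cationic metabolite (mobility in 1e-9 m^2 V^-1 s^-1):
pH = np.linspace(2, 10, 5)
print(effective_mobility_base(mu0=30.0, pKa=9.0, pH=pH))
```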
ERIC Educational Resources Information Center
Zigic, Sasha; Lemckert, Charles J.
2007-01-01
The following paper presents a computer-based learning strategy to assist in introducing and teaching water quality modelling to undergraduate civil engineering students. As part of the learning strategy, an interactive computer-based instructional (CBI) aid was specifically developed to assist students to set up, run and analyse the output from a…
ERIC Educational Resources Information Center
Gambari, Isiaka Amosa; Yusuf, Mudasiru Olalere
2016-01-01
This study investigated the effects of computer-assisted Jigsaw II cooperative strategy on physics achievement and retention. The study also determined how moderating variables of achievement levels as it affects students' performance in physics when Jigsaw II cooperative learning is used as an instructional strategy. Purposive sampling technique…
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm
NASA Astrophysics Data System (ADS)
Mahdavi, Seyed Hossein; Razak, Hashim Abdul
2016-06-01
This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Later, a multi-species decimal GA coding system is modified to be suitable for an efficient search around the local optima. In this regard, a local mutation operation is introduced in addition to regeneration and reintroduction operators. It is concluded that different characteristics of the applied force influence the features of the structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, a reliable OSP strategy prior to time-domain identification is achieved by methods that minimize the distance between the simulated responses of the entire system and those of the condensed system while taking the force effects into account. The numerical and experimental verification of the effectiveness of the proposed strategy demonstrates its considerably high computational performance, in terms of computational cost and identification accuracy. It is deduced that the robustness of the proposed OSP algorithm lies in the precise and fast fitness evaluation at larger sampling rates, which results in the optimum evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.
English Language Learners' Strategies for Reading Computer-Based Texts at Home and in School
ERIC Educational Resources Information Center
Park, Ho-Ryong; Kim, Deoksoon
2016-01-01
This study investigated four elementary-level English language learners' (ELLs') use of strategies for reading computer-based texts at home and in school. The ELLs in this study were in the fourth and fifth grades in a public elementary school. We identify the ELLs' strategies for reading computer-based texts in home and school environments. We…
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
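A minimal, assumption-laden sketch of the variogram idea underlying VARS (not the STAR-VARS sampling scheme itself): for each factor, estimate gamma_i(h) = 0.5 E[(f(x + h e_i) - f(x))^2] by perturbing random base points along that factor; a larger value at a given lag h indicates a more influential factor at that scale. The base-point count and the toy function are illustrative.

```python
import numpy as np

def directional_variogram(f, d, h_values, n_base=200, seed=0):
    """gamma[i, k] ~ 0.5 * E[(f(x + h_k * e_i) - f(x))^2] on the unit hypercube."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_base, d))
    fX = np.array([f(x) for x in X])
    gamma = np.zeros((d, len(h_values)))
    for i in range(d):
        for k, h in enumerate(h_values):
            Xp = X.copy()
            Xp[:, i] = np.clip(Xp[:, i] + h, 0.0, 1.0)   # perturb factor i by lag h
            fXp = np.array([f(x) for x in Xp])
            gamma[i, k] = 0.5 * np.mean((fXp - fX) ** 2)
    return gamma

# Toy example: the first factor dominates at small scales because of its oscillatory term.
f = lambda x: np.sin(2 * np.pi * x[0]) + 0.5 * x[1] + 0.1 * x[2]
print(directional_variogram(f, d=3, h_values=[0.1, 0.3]))
```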
Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders
2017-10-01
Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support the detection of temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation is performed to evaluate three sampling strategies by estimating the sample size required to detect an annual change of 5% with a statistical power of 80% and 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above mentioned criteria compared with the other two approaches. A fixed sampling site requires the lowest sample size but may not be representative of the intended study object (e.g., a lake) and is also sensitive to changes at that particular sampling site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies, and could guide users with respect to the sample sizes required for different sampling designs in long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
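A simplified sketch of the simulation logic described above: concentrations decline 5% per year with lognormal noise, n samples are taken per year, a log-linear regression tests for the trend, and power is the fraction of simulated series in which the trend is detected at the 5% level. The noise level, series length, and baseline concentration are illustrative assumptions, not the study's values.

```python
import numpy as np
from scipy import stats

def trend_detection_power(n_per_year, years=10, annual_change=-0.05,
                          cv=0.5, n_sim=1000, alpha=0.05, seed=0):
    """Fraction of simulated series in which a log-linear regression detects the decline."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1 + cv**2))                     # lognormal sigma for a given CV
    t = np.repeat(np.arange(years), n_per_year)            # sampling year of each specimen
    detected = 0
    for _ in range(n_sim):
        mu = np.log(100.0) + np.log(1 + annual_change) * t  # 5% decline per year (log scale)
        conc = rng.lognormal(mean=mu, sigma=sigma)
        slope, _, _, p, _ = stats.linregress(t, np.log(conc))
        detected += int(p < alpha and slope < 0)
    return detected / n_sim

# Required sample size is the smallest n reaching the target power (e.g., 0.8).
for n in (2, 5, 10):
    print(n, trend_detection_power(n))
```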
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Preventing smoking relapse via Web-based computer-tailored feedback: a randomized controlled trial.
Elfeddali, Iman; Bolman, Catherine; Candel, Math J J M; Wiers, Reinout W; de Vries, Hein
2012-08-20
Web-based computer-tailored approaches have the potential to be successful in supporting smoking cessation. However, the potential effects of such approaches for relapse prevention and the value of incorporating action planning strategies to effectively prevent smoking relapse have not been fully explored. The Stay Quit for You (SQ4U) study compared two Web-based computer-tailored smoking relapse prevention programs with different types of planning strategies versus a control group. To assess the efficacy of two Web-based computer-tailored programs in preventing smoking relapse compared with a control group. The action planning (AP) program provided tailored feedback at baseline and invited respondents to do 6 preparatory and coping planning assignments (the first 3 assignments prior to quit date and the final 3 assignments after quit date). The action planning plus (AP+) program was an extended version of the AP program that also provided tailored feedback at 11 time points after the quit attempt. Respondents in the control group only filled out questionnaires. The study also assessed possible dose-response relationships between abstinence and adherence to the programs. The study was a randomized controlled trial with three conditions: the control group, the AP program, and the AP+ program. Respondents were daily smokers (N = 2031), aged 18 to 65 years, who were motivated and willing to quit smoking within 1 month. The primary outcome was self-reported continued abstinence 12 months after baseline. Logistic regression analyses were conducted using three samples: (1) all respondents as randomly assigned, (2) a modified sample that excluded respondents who did not make a quit attempt in conformance with the program protocol, and (3) a minimum dose sample that also excluded respondents who did not adhere to at least one of the intervention elements. Observed case analyses and conservative analyses were conducted. In the observed case analysis of the randomized sample, abstinence rates were 22% (45/202) in the control group versus 33% (63/190) in the AP program and 31% (53/174) in the AP+ program. The AP program (odds ratio 1.95, P = .005) and the AP+ program (odds ratio 1.61, P = .049) were significantly more effective than the control condition. Abstinence rates and effects differed per sample. Finally, the results suggest a dose-response relationship between abstinence and the number of program elements completed by the respondents. Despite the differences in results caused by the variation in our analysis approaches, we can conclude that Web-based computer-tailored programs combined with planning strategy assignments and feedback after the quit attempt can be effective in preventing relapse 12 months after baseline. However, adherence to the intervention seems critical for effectiveness. Finally, our results also suggest that more research is needed to assess the optimum intervention dose. Dutch Trial Register: NTR1892; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=1892 (Archived by WebCite at http://www.webcitation.org/693S6uuPM).
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Wei
2016-10-01
An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control is proposed in this paper for developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and specifies the dominant roles of invariant manifolds and low-thrust control in the different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, a mixed sampling strategy-based adaptive surrogate model has been proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with those obtained using direct multi-objective optimization methods, while the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of Pareto points with the adaptive surrogate-based multi-objective optimization is approximately 8 times that of direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.
Improvements to robotics-inspired conformational sampling in rosetta.
Stein, Amelie; Kortemme, Tanja
2013-01-01
To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.
Wofford, Marcia M; Spickard, Anderson W; Wofford, James L
2001-01-01
Advancing computer technology, cost-containment pressures, and the desire to make innovative improvements in medical education argue for moving learning resources to the computer. A reasonable target for such a strategy is the traditional clinical lecture. The purpose of the lecture, the advantages and disadvantages of “live” versus computer-based lectures, and the technical options in computerizing the lecture deserve attention in developing a cost-effective, complementary learning strategy that preserves the teacher-learner relationship. Based on a literature review of the traditional clinical lecture, we build on the strengths of the lecture format and discuss strategies for converting the lecture to a computer-based learning presentation. PMID:11520384
NASA Astrophysics Data System (ADS)
Akmam, A.; Anshari, R.; Amir, H.; Jalinus, N.; Amran, A.
2018-04-01
Misconception is one of the factors that prevent students from choosing a suitable method for problem solving. The Computational Physics course is a major subject in the Department of Physics, FMIPA UNP Padang. A recent problem in Computational Physics learning is that students have difficulties in constructing knowledge, indicated by learning outcomes that do not achieve mastery. The root of the problem is students' weak critical-thinking ability, which can be improved using cognitive conflict learning strategies. This research aims to determine the effect of a cognitive conflict learning strategy on student misconceptions in the Computational Physics course at the Department of Physics, Faculty of Mathematics and Science, Universitas Negeri Padang. The experimental research used a before-after design with a sample of 60 students selected by cluster random sampling. Data were analyzed using repeated-measures ANOVA. The cognitive conflict learning strategy has a significant effect on student misconceptions in the Computational Physics course.
Automated sample area definition for high-throughput microscopy.
Zeder, M; Ellrott, A; Amann, R
2011-04-01
High-throughput screening platforms based on epifluorescence microscopy are powerful tools in a variety of scientific fields. Although some applications are based on imaging geometrically defined samples such as microtiter plates, multiwell slides, or spotted gene arrays, others need to cope with inhomogeneously located samples on glass slides. The analysis of microbial communities in aquatic systems by sample filtration on membrane filters followed by multiple fluorescent staining, or the investigation of tissue sections are examples. Therefore, we developed a strategy for flexible and fast definition of sample locations by the acquisition of whole slide overview images and automated sample recognition by image analysis. Our approach was tested on different microscopes and the computer programs are freely available (http://www.technobiology.ch). Copyright © 2011 International Society for Advancement of Cytometry.
Bayesian Model Selection under Time Constraints
NASA Astrophysics Data System (ADS)
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a partial differential equations-based model with runtimes of up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and represents a trade-off between the bias of a model and its complexity. However, in practice, the runtime of the models is another factor relevant to the model weights, and hence to model selection. We therefore believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that, under time constraints, more expensive models can be sampled far less often than faster models (in inverse proportion to their runtime). The evidence computed in favor of a more expensive model is therefore statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
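The bootstrapping error estimate of model evidence mentioned above can be sketched in a few lines. The code below computes a Monte Carlo estimate of Bayesian model evidence from prior-sample likelihoods and its bootstrap standard error; the simple arithmetic-mean estimator and the synthetic likelihood values are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def bme_with_bootstrap_se(likelihoods, n_boot=1000, seed=0):
    """Monte Carlo estimate of Bayesian model evidence from prior samples,
    BME ~= mean of L(theta_k), plus a bootstrap standard error of that mean."""
    rng = np.random.default_rng(seed)
    L = np.asarray(likelihoods, dtype=float)
    bme = L.mean()
    boot = np.array([rng.choice(L, size=L.size, replace=True).mean()
                     for _ in range(n_boot)])
    return bme, boot.std(ddof=1)

# a fast model affords many samples under the budget, a slow one only a few
rng = np.random.default_rng(42)
fast = rng.lognormal(0.0, 1.0, size=5000)
slow = rng.lognormal(0.2, 1.0, size=50)        # nominally better, but rarely sampled
for name, L in (("fast", fast), ("slow", slow)):
    bme, se = bme_with_bootstrap_se(L)
    print(f"{name}: BME = {bme:.2f} +/- {se:.2f}")   # the slow model's evidence is less certain
```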
NASA Technical Reports Server (NTRS)
Seltzer, S. M.
1976-01-01
The problem discussed is to design a digital controller for a typical satellite. The controlled plant is considered to be a rigid body acting in a plane. The controller is assumed to be a digital computer which, when combined with the proposed control algorithm, can be represented as a sampled-data system. The objective is to present a design strategy and technique for selecting numerical values for the control gains (assuming position, integral, and derivative feedback) and the sample rate. The technique is based on the parameter plane method and requires that the system be amenable to z-transform analysis.
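As a rough illustration of the sampled-data position, integral, and derivative feedback described above, the following sketch simulates a discrete PID loop around a planar rigid body modeled as a double integrator. The gains and sample period are placeholder values; the paper selects them via the parameter plane method, which is not reproduced here.

```python
import numpy as np

def simulate_pid(kp=4.0, ki=0.5, kd=3.0, T=0.1, steps=300, theta_ref=1.0):
    """Sampled-data PID attitude control of a rigid body acting in a plane,
    modeled as a unit-inertia double integrator. Gains and sample period T
    are illustrative, not values designed with the parameter-plane method."""
    theta, omega = 0.0, 0.0          # attitude and rate
    integ, prev_err = 0.0, 0.0
    history = []
    for _ in range(steps):
        err = theta_ref - theta
        integ += err * T
        deriv = (err - prev_err) / T
        u = kp * err + ki * integ + kd * deriv      # control torque
        prev_err = err
        # zero-order hold: integrate the plant over one sample period
        omega += u * T
        theta += omega * T
        history.append(theta)
    return np.array(history)

print(simulate_pid()[-5:])   # settles near the 1.0 rad reference
```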
NASA Astrophysics Data System (ADS)
Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.
2015-03-01
We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose show average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
Pattern-based integer sample motion search strategies in the context of HEVC
NASA Astrophysics Data System (ADS)
Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas
2015-09-01
The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which however comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is a part of the inter-picture prediction process, typically consumes a large amount of computational resources while significantly increasing the coding efficiency. Although both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow motion information to be processed at a fractional sample level, motion search algorithms based on the integer sample level remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed, allowing a significant reduction in ME computation time to be traded off against a coding-efficiency penalty in terms of bit rate overhead. As a result, through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based and early termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm that is a native part of HM. It is observed that for high-resolution sequences, the integer sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in bit-rate overheads of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. In addition, a similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content while trading off a bit rate overhead of up to 5.2%.
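A small sketch of the pattern-based integer-sample search idea may be useful here. The code below implements a simple diamond-pattern block-matching search with an early-termination SAD threshold; it is an illustrative stand-in, not the HM encoder's native fast search, and the synthetic frame and thresholds are assumptions.

```python
import numpy as np

def sad(block, ref, x, y):
    """Sum of absolute differences between a block and the reference at (x, y)."""
    h, w = block.shape
    return float(np.abs(ref[y:y + h, x:x + w] - block).sum())

def diamond_search(block, ref, x0, y0, max_range=16, early_stop=1.0):
    """Greedy small-diamond integer-sample motion search with early termination.
    Illustrative only; it may stop in a local minimum on complex content."""
    h, w = block.shape
    best, best_cost = (x0, y0), sad(block, ref, x0, y0)
    pattern = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    improved = True
    while improved and best_cost > early_stop:
        improved = False
        cx, cy = best
        for dx, dy in pattern:
            x, y = cx + dx, cy + dy
            if (abs(x - x0) > max_range or abs(y - y0) > max_range or
                    x < 0 or y < 0 or x + w > ref.shape[1] or y + h > ref.shape[0]):
                continue
            cost = sad(block, ref, x, y)
            if cost < best_cost:
                best, best_cost, improved = (x, y), cost, True
    return best, best_cost

# smooth synthetic frame; the block's true position is (x, y) = (30, 20)
ii, jj = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
ref = 128.0 + 100.0 * np.sin(ii / 5.0) * np.cos(jj / 7.0)
block = ref[20:28, 30:38].copy()
print(diamond_search(block, ref, x0=26, y0=18))   # search starts a few pixels off
```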
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Shrivastava, Sajal; Sohn, Il-Yung; Son, Young-Min; Lee, Won-Il; Lee, Nae-Eung
2015-12-14
Although real-time label-free fluorescent aptasensors based on nanomaterials are increasingly recognized as a useful strategy for the detection of target biomolecules with high fidelity, the lack of an imaging-based quantitative measurement platform limits their implementation with biological samples. Here we introduce an ensemble strategy for a real-time label-free fluorescent graphene (Gr) aptasensor platform. This platform employs aptamer length-dependent tunability, thus enabling the reagentless quantitative detection of biomolecules through computational processing coupled with real-time fluorescence imaging data. We demonstrate that this strategy effectively delivers dose-dependent quantitative readouts of adenosine triphosphate (ATP) concentration on chemical vapor deposited (CVD) Gr and reduced graphene oxide (rGO) surfaces, thereby providing cytotoxicity assessment. Compared with conventional fluorescence spectrometry methods, our highly efficient, universally applicable, and rational approach will facilitate broader implementation of imaging-based biosensing platforms for the quantitative evaluation of a range of target molecules.
Donovan, Rory M.; Tapia, Jose-Juan; Sullivan, Devin P.; Faeder, James R.; Murphy, Robert F.; Dittrich, Markus; Zuckerman, Daniel M.
2016-01-01
The long-term goal of connecting scales in biological simulation can be facilitated by scale-agnostic methods. We demonstrate that the weighted ensemble (WE) strategy, initially developed for molecular simulations, applies effectively to spatially resolved cell-scale simulations. The WE approach runs an ensemble of parallel trajectories with assigned weights and uses a statistical resampling strategy of replicating and pruning trajectories to focus computational effort on difficult-to-sample regions. The method can also generate unbiased estimates of non-equilibrium and equilibrium observables, sometimes with significantly less aggregate computing time than would be possible using standard parallelization. Here, we use WE to orchestrate particle-based kinetic Monte Carlo simulations, which include spatial geometry (e.g., of organelles, plasma membrane) and biochemical interactions among mobile molecular species. We study a series of models exhibiting spatial, temporal and biochemical complexity and show that although WE has important limitations, it can achieve performance significantly exceeding standard parallel simulation—by orders of magnitude for some observables. PMID:26845334
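The replication-and-pruning step at the heart of the weighted ensemble strategy can be sketched for a single bin. In the code below, the target walker count, the split/merge rules, and the example weights are illustrative assumptions; the full method additionally handles binning of trajectories and the underlying dynamics, which are omitted.

```python
import numpy as np

def resample_bin(weights, target=4, seed=0):
    """Weighted-ensemble style split/merge within one bin (minimal sketch):
    adjust the number of walkers to `target` while conserving total weight."""
    rng = np.random.default_rng(seed)
    w = [float(x) for x in weights]
    while len(w) < target:                 # split the heaviest walker in two
        i = int(np.argmax(w))
        w[i] /= 2.0
        w.append(w[i])
    while len(w) > target:                 # merge the two lightest walkers
        order = np.argsort(w)
        i, j = int(order[0]), int(order[1])
        total = w[i] + w[j]
        keep = i if rng.random() < w[i] / total else j   # survivor chosen by weight
        drop = j if keep == i else i
        w[keep] = total
        w.pop(drop)
    return w

print(resample_bin([0.5, 0.3, 0.1], target=4))               # splitting
print(resample_bin([0.4, 0.3, 0.2, 0.05, 0.05], target=4))   # merging
```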
NASA Astrophysics Data System (ADS)
Krell, Mario Michael; Wilshusen, Nils; Seeland, Anett; Kim, Su Kyoung
2017-04-01
Objective. Classifier transfers usually come with dataset shifts. To overcome dataset shifts in practical applications, we consider in this paper the limitations in computational resources for the adaptation of batch learning algorithms such as the support vector machine (SVM). Approach. We focus on data selection strategies which limit the size of the stored training data by different inclusion, exclusion, and further dataset manipulation criteria, such as handling class imbalance, with two new approaches. We provide a comparison of the strategies with linear SVMs on several synthetic datasets with different data shifts as well as on different transfer settings with electroencephalographic (EEG) data. Main results. For the synthetic data, adding only misclassified samples performed astoundingly well. Here, balancing criteria were very important when the other criteria were not well chosen. For the transfer setups, the results show that the best strategy depends on the intensity of the drift during the transfer. Adding all and removing the oldest samples results in the best performance, whereas for smaller drifts, it can be sufficient to only add samples near the decision boundary of the SVM, which reduces processing resources. Significance. For brain-computer interfaces based on EEG data, models trained on data from a calibration session, a previous recording session, or even from a recording session with another subject are used. We show that, by using the right combination of data selection criteria, it is possible to adapt the SVM classifier to overcome the performance drop from the transfer.
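Two of the data-selection criteria compared above ("add only misclassified samples" and "add all, remove the oldest") can be sketched around a linear SVM as follows. The helper name, the memory budget, and the toy drift data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def adapt_training_set(X_train, y_train, X_new, y_new,
                       strategy="add_misclassified", max_size=200):
    """Illustrative implementation of two data-selection criteria: add only
    samples the current SVM misclassifies, or add everything and drop the
    oldest samples to respect a fixed memory budget."""
    clf = LinearSVC().fit(X_train, y_train)
    if strategy == "add_misclassified":
        wrong = clf.predict(X_new) != y_new
        X_train = np.vstack([X_train, X_new[wrong]])
        y_train = np.concatenate([y_train, y_new[wrong]])
    elif strategy == "add_all_remove_oldest":
        X_train = np.vstack([X_train, X_new])[-max_size:]
        y_train = np.concatenate([y_train, y_new])[-max_size:]
    return LinearSVC().fit(X_train, y_train), X_train, y_train

# toy drift: the class means shift between "sessions"
rng = np.random.default_rng(0)
X0 = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y0 = np.array([0] * 50 + [1] * 50)
X1, y1 = X0 + 0.5, y0                       # shifted data from a new session
clf, X_up, y_up = adapt_training_set(X0, y0, X1, y1, "add_misclassified")
print(len(y_up), "training samples after the update")
```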
Use of English Corpora as a Primary Resource to Teach English to the Bengali Learners
ERIC Educational Resources Information Center
Dash, Niladri Sekhar
2011-01-01
In this paper we argue in favour of teaching English as a second language to Bengali learners through the direct utilisation of English corpora. The proposed strategy is meant to be computer-assisted and is based on data, information, and examples retrieved from present-day English corpora developed with various text samples composed by…
Replica Exchange Improves Sampling in Low-Resolution Docking Stage of RosettaDock
Zhang, Zhe; Lange, Oliver F.
2013-01-01
Many protein-protein docking protocols are based on a shotgun approach, in which thousands of independent random-start trajectories minimize the rigid-body degrees of freedom. Another strategy is enumerative sampling as used in ZDOCK. Here, we introduce an alternative strategy, ReplicaDock, using a small number of long trajectories of temperature replica exchange. We compare replica exchange sampling as low-resolution stage of RosettaDock with RosettaDock's original shotgun sampling as well as with ZDOCK. A benchmark of 30 complexes starting from structures of the unbound binding partners shows improved performance for ReplicaDock and ZDOCK when compared to shotgun sampling at equal or less computational expense. ReplicaDock and ZDOCK consistently reach lower energies and generate significantly more near-native conformations than shotgun sampling. Accordingly, they both improve typical metrics of prediction quality of complex structures after refinement. Additionally, the refined ReplicaDock ensembles reach significantly lower interface energies and many previously hidden features of the docking energy landscape become visible when ReplicaDock is applied. PMID:24009670
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy through static optimization. Constraints can similarly be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
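A compact sketch of the parametric, simulation-based idea may clarify it: a one-parameter trading schedule is evaluated by Monte Carlo, and the expected cost and sample CVaR are compared across parameter values. The execution-cost model (exponential schedule, quadratic temporary impact, Gaussian price moves) is an illustrative assumption, not the paper's market model.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Sample Conditional Value-at-Risk: mean loss in the worst (1 - alpha) tail."""
    losses = np.sort(losses)
    tail = losses[int(np.ceil(alpha * losses.size)):]
    return tail.mean()

def execution_cost(kappa, n_periods=10, shares=1.0, impact=0.1,
                   sigma=0.02, n_paths=5000, seed=0):
    """Monte Carlo execution costs of a one-parameter trading schedule
    (illustrative stand-in for a parametric strategy): sell fractions
    proportional to exp(-kappa*t), with quadratic temporary impact and
    Gaussian price moves."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_periods)
    sched = np.exp(-kappa * t)
    sched = shares * sched / sched.sum()             # trade sizes per period
    price_moves = rng.normal(0.0, sigma, size=(n_paths, n_periods)).cumsum(axis=1)
    # cost = impact cost minus the price change captured by the schedule
    return (impact * sched ** 2).sum() - price_moves @ sched

# grid-search the single coefficient, trading off expected cost against tail risk
for kappa in (0.0, 0.3, 0.6, 1.0):
    c = execution_cost(kappa)
    print(f"kappa={kappa:.1f}  E[cost]={c.mean():.4f}  CVaR95={cvar(c):.4f}")
```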
Sampling of temporal networks: Methods and biases
NASA Astrophysics Data System (ADS)
Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter
2017-11-01
Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions on hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
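The uniform node-sampling strategy highlighted above is straightforward to sketch for a timestamped contact list: keep a random fraction of nodes and retain only contacts between surviving nodes. The event format and sampling fraction below are illustrative assumptions.

```python
import numpy as np

def sample_nodes_uniform(events, fraction=0.5, seed=0):
    """Uniform node sampling of a temporal network: keep a random fraction of
    nodes and retain only the timestamped contacts whose endpoints both survive."""
    rng = np.random.default_rng(seed)
    nodes = np.unique(events[:, :2])
    keep = set(rng.choice(nodes, size=int(fraction * nodes.size), replace=False))
    mask = np.array([(u in keep) and (v in keep) for u, v, _ in events])
    return events[mask]

# events as rows of (node_u, node_v, timestamp)
rng = np.random.default_rng(1)
events = np.column_stack([rng.integers(0, 100, 500),
                          rng.integers(0, 100, 500),
                          rng.integers(0, 1000, 500)])
sub = sample_nodes_uniform(events, fraction=0.5)
print(len(sub), "of", len(events), "contacts retained")   # roughly a quarter on average
```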
Listening Strategy Use and Influential Factors in Web-Based Computer Assisted Language Learning
ERIC Educational Resources Information Center
Chen, L.; Zhang, R.; Liu, C.
2014-01-01
This study investigates second and foreign language (L2) learners' listening strategy use and factors that influence their strategy use in a Web-based computer assisted language learning (CALL) system. A strategy inventory, a factor questionnaire and a standardized listening test were used to collect data from a group of 82 Chinese students…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Juliane
MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.
ERIC Educational Resources Information Center
Gambari, Amosa Isiaka; Yusuf, Mudasiru Olalere
2015-01-01
This study investigated the effectiveness of computer-assisted Students' Team Achievement Division (STAD) cooperative learning strategy on physics problem solving, students' achievement and retention. It also examined if the student performance would vary with gender. Purposive sampling technique was used to select two senior secondary schools…
Case-based fracture image retrieval.
Zhou, Xin; Stern, Richard; Müller, Henning
2012-05-01
Case-based fracture image retrieval can assist surgeons in decisions regarding new cases by supplying visually similar past cases. This tool may guide fracture fixation and management through comparison of long-term outcomes in similar cases. A fracture image database collected over 10 years at the orthopedic service of the University Hospitals of Geneva was used. This database contains 2,690 fracture cases associated with 43 classes (based on the AO/OTA classification). A case-based retrieval engine was developed and evaluated using retrieval precision as a performance metric. Only cases in the same class as the query case are considered as relevant. The scale-invariant feature transform (SIFT) is used for image analysis. Performance evaluation was computed in terms of mean average precision (MAP) and early precision (P10, P30). Retrieval results produced with the GNU image finding tool (GIFT) were used as a baseline. Two sampling strategies were evaluated. One used a dense 40 × 40 pixel grid sampling, and the second one used the standard SIFT features. Based on dense pixel grid sampling, three unsupervised feature selection strategies were introduced to further improve retrieval performance. With dense pixel grid sampling, the image is divided into 1,600 (40 × 40) square blocks. The goal is to emphasize the salient regions (blocks) and ignore irrelevant regions. Regions are considered as important when a high variance of the visual features is found. The first strategy is to calculate the variance of all descriptors on the global database. The second strategy is to calculate the variance of all descriptors for each case. A third strategy is to perform a thumbnail image clustering in a first step and then to calculate the variance for each cluster. Finally, a fusion between a SIFT-based system and GIFT is performed. A first comparison on the selection of sampling strategies using SIFT features shows that dense sampling using a pixel grid (MAP = 0.18) outperformed the SIFT detector-based sampling approach (MAP = 0.10). In a second step, three unsupervised feature selection strategies were evaluated. A grid parameter search is applied to optimize parameters for feature selection and clustering. Results show that using half of the regions (700 or 800) obtains the best performance for all three strategies. Increasing the number of clusters in clustering can also improve the retrieval performance. The SIFT descriptor variance in each case gave the best indication of saliency for the regions (MAP = 0.23), better than the other two strategies (MAP = 0.20 and 0.21). Combining GIFT (MAP = 0.23) and the best SIFT strategy (MAP = 0.23) produced significantly better results (MAP = 0.27) than each system alone. A case-based fracture retrieval engine was developed and is available for online demonstration. SIFT is used to extract local features, and three feature selection strategies were introduced and evaluated. A baseline using the GIFT system was used to evaluate the salient point-based approaches. Without supervised learning, SIFT-based systems with optimized parameters slightly outperformed the GIFT system. A fusion of the two approaches shows that the information contained in the two approaches is complementary. Supervised learning on the feature space is foreseen as the next step of this study.
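The per-case, variance-based selection of grid blocks described above can be sketched as follows; plain pixel variance stands in for the SIFT descriptor variance used in the paper, and the grid size and keep fraction are illustrative assumptions.

```python
import numpy as np

def select_salient_blocks(image, grid=(40, 40), keep_fraction=0.5):
    """Split the image into a regular grid of blocks, score each block by the
    variance of its local content, and keep the highest-variance half
    (unsupervised feature-selection sketch; pixel variance stands in for
    SIFT descriptor variance)."""
    H, W = image.shape
    gh, gw = grid
    bh, bw = H // gh, W // gw
    scores = np.empty(gh * gw)
    coords = []
    for r in range(gh):
        for c in range(gw):
            block = image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            scores[r * gw + c] = block.var()
            coords.append((r, c))
    n_keep = int(keep_fraction * len(coords))
    keep_idx = np.argsort(scores)[::-1][:n_keep]
    return [coords[i] for i in keep_idx]

rng = np.random.default_rng(0)
img = rng.integers(0, 255, (400, 400)).astype(float)
img[:200, :200] = 128.0                       # a flat, uninformative region
salient = select_salient_blocks(img)
print(len(salient), "blocks kept")            # 800 of 1600, all outside the flat quadrant
```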
Accelerating assimilation development for new observing systems using EFSO
NASA Astrophysics Data System (ADS)
Lien, Guo-Yuan; Hotta, Daisuke; Kalnay, Eugenia; Miyoshi, Takemasa; Chen, Tse-Chun
2018-03-01
To successfully assimilate data from a new observing system, it is necessary to develop appropriate data selection strategies, assimilating only the generally useful data. This development work is usually done by trial and error using observing system experiments (OSEs), which are very time and resource consuming. This study proposes a new, efficient methodology to accelerate the development using ensemble forecast sensitivity to observations (EFSO). First, non-cycled assimilation of the new observation data is conducted to compute EFSO diagnostics for each observation within a large sample. Second, the average EFSO conditionally sampled in terms of various factors is computed. Third, potential data selection criteria are designed based on the non-cycled EFSO statistics, and tested in cycled OSEs to verify the actual assimilation impact. The usefulness of this method is demonstrated with the assimilation of satellite precipitation data. It is shown that the EFSO-based method can efficiently suggest data selection criteria that significantly improve the assimilation results.
Adaptive sampling strategies with high-throughput molecular dynamics
NASA Astrophysics Data System (ADS)
Clementi, Cecilia
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts, rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
A Computer Security Course in the Undergraduate Computer Science Curriculum.
ERIC Educational Resources Information Center
Spillman, Richard
1992-01-01
Discusses the importance of computer security and considers criminal, national security, and personal privacy threats posed by security breakdown. Several examples are given, including incidents involving computer viruses. Objectives, content, instructional strategies, resources, and a sample examination for an experimental undergraduate computer…
An Ethnographic Study of the Computational Strategies of a Group of Young Street Vendors in Beirut.
ERIC Educational Resources Information Center
Jurdak, Murad; Shahin, Iman
1999-01-01
Examines the computational strategies of 10 young street vendors in Beirut by describing, comparing, and analyzing computational strategies used in solving three types of problems: (1) transactions in the workplace; (2) word problems; and (3) computation exercises in a school-like setting. Indicates that vendors' use of semantically-based mental…
Computational Fragment-Based Drug Design: Current Trends, Strategies, and Applications.
Bian, Yuemin; Xie, Xiang-Qun Sean
2018-04-09
Fragment-based drug design (FBDD) has been an effective methodology in drug development for decades. Successful applications of this strategy have brought both opportunities and challenges to the field of pharmaceutical science. Recent progress in computational fragment-based drug design provides an additional approach for future research in a time- and labor-efficient manner. By combining multiple in silico methodologies, computational FBDD offers flexibility in fragment library selection, protein model generation, and fragment/compound docking-mode prediction. These characteristics give computational FBDD an advantage in designing novel candidate compounds for a given target. The purpose of this review is to discuss the latest advances, ranging from commonly used strategies to novel concepts and technologies, in computational fragment-based drug design. In particular, specifications and advantages are compared between experimental and computational FBDD, and limitations and future prospects are discussed and emphasized.
Interfacing geographic information systems and remote sensing for rural land-use analysis
NASA Technical Reports Server (NTRS)
Nellis, M. Duane; Lulla, Kamlesh; Jensen, John
1990-01-01
Recent advances in computer-based geographic information systems (GISs) are briefly reviewed, with an emphasis on the incorporation of remote-sensing data in GISs for rural applications. Topics addressed include sampling procedures for rural land-use analyses; GIS-based mapping of agricultural land use and productivity; remote sensing of land use and agricultural, forest, rangeland, and water resources; monitoring the dynamics of irrigation agriculture; GIS methods for detecting changes in land use over time; and the development of land-use modeling strategies.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the ''global sensitivity'' of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe
2015-01-01
This paper focuses on suppressing spectral overlap for sub-band spectral estimation, with which we can greatly decrease the computational complexity of existing spectral estimation algorithms, such as nonlinear least squares spectral analysis and non-quadratic regularized sparse representation. Firstly, our study shows that the nominal ability of the high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in convolution operations. Next, an extrapolation-based filtering strategy is proposed to produce a series of estimates as the substitutions of the zeros and to recover the suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a linearly-optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy. PMID:25609038
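A minimal sketch of the extrapolation-based filtering strategy: a fixed-gain (steady-state) predictor of alpha-beta form is run over the observed samples and then propagated forward to supply extrapolated values in place of zero padding. The constant-velocity state model and the gain values are illustrative assumptions, not the paper's optimal steady-state Kalman gains.

```python
import numpy as np

def extrapolate_sequence(y, n_extra, alpha=0.85, beta=0.4, dt=1.0):
    """Run a fixed-gain alpha-beta predictor (a steady-state Kalman-type
    predictor on a constant-velocity model) over the observed samples, then
    propagate the state forward to produce n_extra extrapolated estimates
    that replace the meaningless zeros otherwise used in convolution."""
    x, v = float(y[0]), 0.0
    for yk in y[1:]:
        x_pred = x + v * dt                 # predict
        r = yk - x_pred                     # innovation
        x = x_pred + alpha * r              # correct level with fixed gain
        v = v + (beta / dt) * r             # correct trend with fixed gain
    # pure prediction beyond the end of the finite-length record
    return np.array([x + v * dt * (k + 1) for k in range(n_extra)])

t = np.arange(50)
y = np.sin(0.1 * t) + 0.05 * np.random.default_rng(0).normal(size=50)
ext = extrapolate_sequence(y, n_extra=8)
print(np.round(ext, 3))      # a smooth continuation instead of zero padding
```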
Strategy generalization across orientation tasks: testing a computational cognitive model.
Gunzelmann, Glenn
2008-07-08
Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human performance was measured on an orientation task requiring participants to identify the location of a target either on a map (find-on-map) or within an egocentric view of a space (find-in-scene). A general strategy instantiated in a computational cognitive model of the find-on-map task, based on the results from Gunzelmann and Anderson (2006), was adapted to perform both tasks and used to generate performance predictions for a new study. The qualitative fit of the model to the human data supports the view that participants were able to tailor a general strategy to the requirements of particular spatial tasks. The quantitative differences between the predictions of the model and the performance of human participants in the new experiment expose individual differences in sample populations. The model provides a means of accounting for those differences and a framework for understanding how human spatial abilities are applied to naturalistic spatial tasks that involve reasoning with maps. 2008 Cognitive Science Society, Inc.
An efficient sampling strategy for selection of biobank samples using risk scores.
Björk, Jonas; Malmqvist, Ebba; Rylander, Lars; Rignell-Hydbom, Anna
2017-07-01
The aim of this study was to suggest a new sample-selection strategy based on risk scores in case-control studies with biobank data. An ongoing Swedish case-control study on fetal exposure to endocrine disruptors and overweight in early childhood was used as the empirical example. Cases were defined as children with a body mass index (BMI) ⩾18 kg/m² (n=545) at four years of age, and controls as children with a BMI of ⩽17 kg/m² (n=4472 available). The risk of being overweight was modelled using logistic regression based on available covariates from the health examination and prior to selecting samples from the biobank. A risk score was estimated for each child and categorised as low (0-5%), medium (6-13%) or high (⩾14%) risk of being overweight. The final risk-score model, with smoking during pregnancy (p=0.001), birth weight (p<0.001), BMI of both parents (p<0.001 for both), type of residence (p=0.04) and economic situation (p=0.12), yielded an area under the receiver operating characteristic curve of 67% (n=3945 with complete data). The case group (n=416) had the following risk-score profile: low (12%), medium (46%) and high risk (43%). Twice as many controls were selected from each risk group, with further matching on sex. Computer simulations showed that the proposed selection strategy with stratification on risk scores yielded consistent improvements in statistical precision. Using risk scores based on available survey or register data as a basis for sample selection may improve possibilities to study heterogeneity of exposure effects in biobank-based studies.
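The risk-score-based control selection can be sketched as below: fit a logistic model on covariates available before biobank analysis, bin the predicted risk using the study's 0-5%/6-13%/⩾14% categories, and draw twice as many controls as cases from each stratum. The synthetic covariates are illustrative assumptions, and matching on sex is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_controls(X, y, is_case, ratio=2, bins=(0.05, 0.13), seed=0):
    """Fit a logistic risk model, stratify predicted risk into low/medium/high,
    and sample `ratio` controls per case within each risk stratum."""
    rng = np.random.default_rng(seed)
    risk = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    stratum = np.digitize(risk, bins)            # 0 = low, 1 = medium, 2 = high
    selected = []
    for s in np.unique(stratum):
        cases_s = np.where((stratum == s) & is_case)[0]
        pool = np.where((stratum == s) & ~is_case)[0]
        n = min(ratio * cases_s.size, pool.size)
        selected.append(rng.choice(pool, size=n, replace=False))
    return np.concatenate(selected)

# synthetic covariates standing in for parental BMI, birth weight, etc.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))
logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 2.5
y = (rng.random(5000) < 1 / (1 + np.exp(-logit)))   # overweight indicator
controls = select_controls(X, y.astype(int), is_case=y)
print(controls.size, "controls selected for", y.sum(), "cases")
```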
NASA Astrophysics Data System (ADS)
Saqib, Najam us; Faizan Mysorewala, Muhammad; Cheded, Lahouari
2017-12-01
In this paper, we propose a novel monitoring strategy for a wireless sensor network (WSN)-based water pipeline network. Our strategy reduces energy consumption through a multi-pronged approach based on two types of vibration sensors and pressure sensors, all having different energy levels, together with a hierarchical adaptive sampling mechanism that determines the sampling frequency. The sampling rate of the sensors is adjusted according to the bandwidth of the vibration signal being monitored by using a wavelet-based adaptive thresholding scheme that calculates the new sampling frequency for the following cycle. In this multimodal sensing scheme, a duty-cycling approach is used for all sensors to reduce the number of sampling instances, such that the high-energy, high-precision (HE-HP) vibration sensors have low duty cycles, and the low-energy, low-precision (LE-LP) vibration sensors have high duty cycles. The low duty-cycling (HE-HP) vibration sensor adjusts the sampling frequency of the high duty-cycling (LE-LP) vibration sensor. The simulated test bed considered here consists of a water pipeline network which uses pressure and vibration sensors, with the latter having different energy consumptions and precision levels, at various locations in the network. This is all the more useful for energy conservation during extended monitoring. It is shown that by using the novel features of our proposed scheme, a significant reduction in energy consumption is achieved and the leak is effectively detected by the sensor node that is closest to it. Finally, both the total energy consumed by monitoring and the time taken by a WSN node to detect the leak are computed, showing the superiority of our proposed hierarchical adaptive sampling algorithm over a non-adaptive sampling approach.
The Influence of Computer-Based Text Editors on the Revision Strategies of Inexperienced Writers.
ERIC Educational Resources Information Center
Collier, Richard M.
A study sought to determine the effect of computer-based text editing on the revision strategies of inexperienced writers. Four subjects, none of whom had experience with computers or word processors, were selected from an introductory college composition course and required to master the basic terminal functions that would be necessary for…
When Does Model-Based Control Pay Off?
Kool, Wouter; Cushman, Fiery A; Gershman, Samuel J
2016-08-01
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.
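The model-free/model-based distinction discussed above can be made concrete with a toy two-stage task: one strategy plans through a known transition model, the other learns a look-up table from sampled experience. The transition probabilities, rewards, and learning rate below are illustrative assumptions, not the task used in the paper.

```python
import numpy as np

# Two first-stage actions lead probabilistically to two second-stage states
# with different expected rewards (toy task, not the original paradigm).
T = np.array([[0.7, 0.3],          # P(state | action 0)
              [0.3, 0.7]])         # P(state | action 1)
R = np.array([1.0, 0.0])           # expected reward in each second-stage state

# model-based: plan through the known causal model (accurate, more demanding)
q_mb = T @ R

# model-free: incrementally update a look-up table from experience (cheap)
rng = np.random.default_rng(0)
q_mf = np.zeros(2)
alpha = 0.1
for _ in range(500):
    a = rng.integers(2)
    s = rng.choice(2, p=T[a])
    r = float(rng.random() < R[s])             # stochastic reward
    q_mf[a] += alpha * (r - q_mf[a])           # delta-rule / TD(0) update

print("model-based:", q_mb, " model-free:", np.round(q_mf, 2))
```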
ERIC Educational Resources Information Center
Proske, Antje; Roscoe, Rod D.; McNamara, Danielle S.
2014-01-01
Achieving sustained student engagement with practice in computer-based writing strategy training can be a challenge. One potential solution is to foster engagement by embedding practice in educational games; yet there is currently little research comparing the effectiveness of game-based practice versus more traditional forms of practice. In this…
Management of Computer-Based Instruction: Design of an Adaptive Control Strategy.
ERIC Educational Resources Information Center
Tennyson, Robert D.; Rothen, Wolfgang
1979-01-01
Theoretical and research literature on learner, program, and adaptive control as forms of instructional management are critiqued in reference to the design of computer-based instruction. An adaptive control strategy using an online, iterative algorithmic model is proposed. (RAO)
ERIC Educational Resources Information Center
Gambari, Amosa Isiaka; Yusuf, Mudasiru Olalere
2017-01-01
This study investigated the relative effectiveness of computer-supported cooperative learning strategies on the performance, attitudes, and retention of secondary school students in physics. A purposive sampling technique was used to select four senior secondary schools from Minna, Nigeria. The students were allocated to one of four groups:…
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributes fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8%, but also yielded a higher detectability index over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose or, equivalently, to provide a similar level of performance at reduced dose.
Wang, Qing; Xue, Tuo; Song, Chunnian; Wang, Yan; Chen, Guangju
2016-01-01
Free energy calculations of the potential of mean force (PMF) based on the combination of targeted molecular dynamics (TMD) simulations and umbrella samplings as a function of physical coordinates have been applied to explore the detailed pathways and the corresponding free energy profiles for the conformational transition processes of the butane molecule and the 35-residue villin headpiece subdomain (HP35). The accurate PMF profiles for describing the dihedral rotation of butane under both coordinates of dihedral rotation and root mean square deviation (RMSD) variation were obtained based on the different umbrella samplings from the same TMD simulations. The initial structures for the umbrella samplings can be conveniently selected from the TMD trajectories. For the application of this computational method in the unfolding process of the HP35 protein, the PMF calculation along with the coordinate of the radius of gyration (Rg) presents the gradual increase of free energies by about 1 kcal/mol with the energy fluctuations. The feature of conformational transition for the unfolding process of the HP35 protein shows that the spherical structure extends and the middle α-helix unfolds firstly, followed by the unfolding of other α-helices. The computational method for the PMF calculations based on the combination of TMD simulations and umbrella samplings provided a valuable strategy in investigating detailed conformational transition pathways for other allosteric processes. PMID:27171075
Nuclear Magnetic Resonance Spectroscopy-Based Identification of Yeast.
Himmelreich, Uwe; Sorrell, Tania C; Daniel, Heide-Marie
2017-01-01
Rapid and robust high-throughput identification of environmental, industrial, or clinical yeast isolates is important whenever relatively large numbers of samples need to be processed in a cost-efficient way. Nuclear magnetic resonance (NMR) spectroscopy generates complex data based on metabolite profiles, chemical composition and possibly on medium consumption, which can not only be used for the assessment of metabolic pathways but also for accurate identification of yeast down to the subspecies level. Initial results on NMR-based yeast identification were comparable with conventional and DNA-based identification. Potential advantages of NMR spectroscopy in mycological laboratories include not only accurate identification but also the potential of automated sample delivery, automated analysis using computer-based methods, rapid turnaround time, high throughput, and low running costs. We describe here the sample preparation, data acquisition and analysis for NMR-based yeast identification. In addition, a roadmap for the development of classification strategies is given that will result in the acquisition of a database and analysis algorithms for yeast identification in different environments.
Baker, Nancy A; Rubinstein, Elaine N; Rogers, Joan C
2012-09-01
Little is known about the problems experienced by and the accommodation strategies used by computer users with rheumatoid arthritis (RA) or fibromyalgia (FM). This study (1) describes specific problems and accommodation strategies used by people with RA and FM during computer use; and (2) examines if there were significant differences in the problems and accommodation strategies between the different equipment items for each diagnosis. Subjects were recruited from the Arthritis Network Disease Registry. Respondents completed a self-report survey, the Computer Problems Survey. Data were analyzed descriptively (percentages; 95% confidence intervals). Differences in the number of problems and accommodation strategies were calculated using nonparametric tests (Friedman's test and Wilcoxon Signed Rank Test). Eighty-four percent of respondents reported at least one problem with at least one equipment item (RA = 81.5%; FM = 88.9%), with most respondents reporting problems with their chair. Respondents most commonly used timing accommodation strategies to cope with mouse and keyboard problems, personal accommodation strategies to cope with chair problems and environmental accommodation strategies to cope with monitor problems. The number of problems during computer use was substantial in our sample, and our respondents with RA and FM may not implement the most effective strategies to deal with their chair, keyboard, or mouse problems. This study suggests that workers with RA and FM might potentially benefit from education and interventions to assist with the development of accommodation strategies to reduce problems related to computer use.
Real-time computational photon-counting LiDAR
NASA Astrophysics Data System (ADS)
Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles
2018-03-01
The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where the availability of focal plane arrays is expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter accuracy three-dimensional images in real time. The development of low-cost real time computational LiDAR systems could have importance for applications in security, defense, and autonomous vehicles.
NASA Astrophysics Data System (ADS)
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) mitigating computational time due to the introduction of the Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered in each duration on the basis of four criteria. Results indicated that the most desirable management strategy lay in action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management.
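The core idea above, drawing Monte Carlo realizations from interval-valued criteria and ranking alternatives under that uncertainty, can be sketched in a few lines. The example below is a hedged illustration only (not the MCITA code); the alternative names, criterion intervals, weights and the uniform transformation are all assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical criteria intervals [low, high] for three management alternatives:
# columns = (total cost, contaminant concentration, health risk); lower is better.
alternatives = {
    "action_A": np.array([[1.0, 1.4], [0.20, 0.35], [1e-6, 5e-6]]),
    "action_B": np.array([[0.8, 1.6], [0.25, 0.40], [2e-6, 4e-6]]),
    "action_C": np.array([[1.2, 1.3], [0.15, 0.30], [1e-6, 3e-6]]),
}
weights = np.array([0.4, 0.3, 0.3])  # assumed criterion weights
n_draws = 10_000

def score(intervals):
    # Sample each interval-valued criterion uniformly (one simple transformation
    # choice), normalize by the interval midpoint, and aggregate with the weights.
    samples = rng.uniform(intervals[:, 0], intervals[:, 1], size=(n_draws, 3))
    midpoints = intervals.mean(axis=1)
    return (samples / midpoints) @ weights  # lower aggregate score is better

scores = {name: score(iv) for name, iv in alternatives.items()}
# Rank alternatives by the probability of having the lowest weighted score.
stacked = np.column_stack([scores[k] for k in alternatives])
best_counts = np.bincount(stacked.argmin(axis=1), minlength=3) / n_draws
for name, p in zip(alternatives, best_counts):
    print(f"{name}: P(best) = {p:.3f}")
```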
A COTS-Based Replacement Strategy for Aging Avionics Computers
2001-12-01
[Extracted diagram text: communication control unit; COTS microprocessor; real-time operating system; new and legacy native code objects and threads; legacy functions hosted in a virtual component environment via context-switch thunks (add-in replacement).]
Cost-Benefit Analysis of Computer Resources for Machine Learning
Champion, Richard A.
2007-01-01
Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
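The stratified-sampling idea in the report above, selecting more calibration points where they reduce error and fewer elsewhere, is illustrated below with a generic sketch. This is not the report's code; the data set, strata, variance-based allocation rule and budget are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data set: spatial points with a response to be learned.
n = 5000
xy = rng.uniform(0.0, 1.0, size=(n, 2))
response = np.sin(6 * xy[:, 0]) + 0.1 * rng.normal(size=n)

# Stratify on x into bins and allocate more samples to high-variance strata,
# the intuition behind sampling where points reduce calibration error most.
n_bins, budget = 5, 300
bins = np.digitize(xy[:, 0], np.linspace(0, 1, n_bins + 1)[1:-1])
var_per_bin = np.array([response[bins == b].var() for b in range(n_bins)])
alloc = np.maximum(1, np.round(budget * var_per_bin / var_per_bin.sum())).astype(int)

train_idx = np.concatenate([
    rng.choice(np.flatnonzero(bins == b),
               size=min(alloc[b], int((bins == b).sum())), replace=False)
    for b in range(n_bins)
])
print("points per stratum:", alloc, "total selected:", len(train_idx))
```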
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for the interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on a subset selection in a large basis of climatic series, using an ad-hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to estimate accurately the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e. transposable to most models with climatic data inputs), and can be combined with most “off-the-shelf” optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk-aversion. Our approach achieves good performance even for limited computational budgets, significantly outperforming standard strategies. PMID:28542198
ERIC Educational Resources Information Center
Hatcher, Myron E.; Miller, William
1987-01-01
The study compared effects of two direct-mail educational software marketing strategies: (1) brochure only and (2) brochure and sample program provided by a journal. Those responding to Strategy 1 rated the programs higher because of less knowledge. However, those responding to Strategy 2 exhibited more respect for the journal. (CH)
ERIC Educational Resources Information Center
Kim, Jieun; Ryu, Hokyoung; Katuk, Norliza; Wang, Ruili; Choi, Gyunghyun
2014-01-01
The present study aims to show if a skill-challenge balancing (SCB) instruction strategy can assist learners to motivationally engage in computer-based learning. Csikszentmihalyi's flow theory (self-control, curiosity, focus of attention, and intrinsic interest) was applied to an account of the optimal learning experience in SCB-based learning…
New color-based tracking algorithm for joints of the upper extremities
NASA Astrophysics Data System (ADS)
Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang
2007-11-01
To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, a new tracking algorithm that combines a color-based particle filter with a novel strategy for handling occlusions is proposed in this paper. Objects are represented by their color histogram models, and a particle filter is introduced to track the objects within a probabilistic framework. A Kalman filter, acting as a local optimizer, is integrated into the sampling stage of the particle filter; it steers samples toward regions of high likelihood, so fewer samples are required. A color clustering method and anatomic constraints are used to deal with the occlusion problem. Compared with the basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples and hence the computational cost, and handles complete occlusion over a few frames more robustly.
Generalized Ensemble Sampling of Enzyme Reaction Free Energy Pathways
Wu, Dongsheng; Fajer, Mikolai I.; Cao, Liaoran; Cheng, Xiaolin; Yang, Wei
2016-01-01
Free energy path sampling plays an essential role in computational understanding of chemical reactions, particularly those occurring in enzymatic environments. Among a variety of molecular dynamics simulation approaches, the generalized ensemble sampling strategy is uniquely attractive for the fact that it not only can enhance the sampling of rare chemical events but also can naturally ensure consistent exploration of environmental degrees of freedom. In this review, we plan to provide a tutorial-like tour on an emerging topic: generalized ensemble sampling of enzyme reaction free energy path. The discussion is largely focused on our own studies, particularly ones based on the metadynamics free energy sampling method and the on-the-path random walk path sampling method. We hope that this mini presentation will provide interested practitioners some meaningful guidance for future algorithm formulation and application study. PMID:27498634
Individual Differences in Strategy Use on Division Problems: Mental versus Written Computation
ERIC Educational Resources Information Center
Hickendorff, Marian; van Putten, Cornelis M.; Verhelst, Norman D.; Heiser, Willem J.
2010-01-01
Individual differences in strategy use (choice and accuracy) were analyzed. A sample of 362 Grade 6 students solved complex division problems under 2 different conditions. In the choice condition students were allowed to use either a mental or a written strategy. In the subsequent no-choice condition, they were required to use a written strategy.…
NASA Astrophysics Data System (ADS)
Sheikholeslami, R.; Hosseini, N.; Razavi, S.
2016-12-01
Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
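A minimal sketch of the sliced, progressive sampling pattern described above is given below. It only illustrates the idea of appending Latin hypercube slices until an analysis stabilizes; each slice here is an ordinary LHS, so, unlike the full PLHS construction, the union of slices is not guaranteed to remain Latin hypercube. Dimensions, slice size and stopping rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def lhs(n, d):
    # Standard Latin hypercube on [0, 1]^d: one point per stratum in every dimension.
    cols = [(rng.permutation(n) + rng.uniform(size=n)) / n for _ in range(d)]
    return np.column_stack(cols)

# Progressive use: add slices until a convergence criterion is met.
d, slice_size = 3, 10
samples = np.empty((0, d))
for k in range(4):                      # e.g. stop when the analysis result stabilizes
    samples = np.vstack([samples, lhs(slice_size, d)])
    print(f"after slice {k + 1}: {len(samples)} samples")
```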
Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2018-02-01
The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI that is computationally efficient and accurate. This novel method relies on some additional simulation, so it can be expensive in models with a large computational cost.
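The central object above is the distribution of the preposterior mean of incremental net benefit. As a hedged toy example (not the authors' moment-matching estimator), the sketch below uses a conjugate normal model, for which that distribution has a closed form, so the EVSI of a future study of size n can be simulated directly. All parameter values and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy decision: adopt a new treatment if expected incremental net benefit (INB) > 0.
mu0, sigma0 = 500.0, 2000.0     # prior mean / sd of INB (assumed values)
s, n = 4000.0, 50               # sd of one future observation, future sample size

# Preposterior mean of INB is Normal(mu0, tau) with
# tau = sigma0^2 / sqrt(sigma0^2 + s^2 / n)  (uncertainty resolved by the future data).
tau = sigma0**2 / np.sqrt(sigma0**2 + s**2 / n)

draws = rng.normal(mu0, tau, size=1_000_000)   # simulate future posterior means
evsi = np.maximum(draws, 0.0).mean() - max(mu0, 0.0)
print(f"EVSI of a sample of size {n}: {evsi:.1f} (same monetary units as INB)")
```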
Quantifying and correcting motion artifacts in MRI
NASA Astrophysics Data System (ADS)
Bones, Philip J.; Maclaren, Julian R.; Millane, Rick P.; Watts, Richard
2006-08-01
Patient motion during magnetic resonance imaging (MRI) can produce significant artifacts in a reconstructed image. Since measurements are made in the spatial frequency domain ('k-space'), rigid-body translational motion results in phase errors in the data samples while rotation causes location errors. A method is presented to detect and correct these errors via a modified sampling strategy, thereby achieving more accurate image reconstruction. The strategy involves sampling vertical and horizontal strips alternately in k-space and employs phase correlation within the overlapping segments to estimate translational motion. An extension, also based on correlation, is employed to estimate rotational motion. Results from simulations with computer-generated phantoms suggest that the algorithm is robust up to realistic noise levels. The work is being extended to physical phantoms. Provided that a reference image is available and the object is of limited extent, it is shown that a measure related to the amount of energy outside the support can be used to objectively compare the severity of motion-induced artifacts.
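The relation exploited above is the Fourier shift theorem: a rigid in-plane translation of the object multiplies its k-space data by a linear phase ramp, so once the translation has been estimated (for example by phase correlation of overlapping k-space segments), it can be removed. A minimal numerical sketch of the correction step is given below; the phantom, shift values and array size are illustrative assumptions, and the shift is taken as known rather than estimated.

```python
import numpy as np

# Simple numerical phantom and its k-space data.
N = 128
img = np.zeros((N, N))
img[40:90, 50:80] = 1.0
k = np.fft.fftshift(np.fft.fft2(img))

# A translation (dx, dy) in image space is a linear phase ramp in k-space.
dx, dy = 3.0, -5.0                      # pixels; assumed known here
ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(N)),
                     np.fft.fftshift(np.fft.fftfreq(N)), indexing="ij")
k_moved = k * np.exp(-2j * np.pi * (kx * dx + ky * dy))

# Correction: multiply by the conjugate ramp once (dx, dy) has been estimated.
k_corrected = k_moved * np.exp(+2j * np.pi * (kx * dx + ky * dy))
img_corrected = np.fft.ifft2(np.fft.ifftshift(k_corrected)).real
print("max reconstruction error:", np.abs(img_corrected - img).max())
```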
NASA Astrophysics Data System (ADS)
Bhosale, Parag; Staring, Marius; Al-Ars, Zaid; Berendsen, Floris F.
2018-03-01
Currently, non-rigid image registration algorithms are too computationally intensive to use in time-critical applications. Existing implementations that focus on speed typically address this by either parallelization on GPU-hardware, or by introducing methodically novel techniques into CPU-oriented algorithms. Stochastic gradient descent (SGD) optimization and variations thereof have proven to drastically reduce the computational burden for CPU-based image registration, but have not been successfully applied in GPU hardware due to its stochastic nature. This paper proposes 1) NiftyRegSGD, a SGD optimization for the GPU-based image registration tool NiftyReg, 2) random chunk sampler, a new random sampling strategy that better utilizes the memory bandwidth of GPU hardware. Experiments have been performed on 3D lung CT data of 19 patients, which compared NiftyRegSGD (with and without random chunk sampler) with CPU-based elastix Fast Adaptive SGD (FASGD) and NiftyReg. The registration runtime was 21.5s, 4.4s and 2.8s for elastix-FASGD, NiftyRegSGD without, and NiftyRegSGD with random chunk sampling, respectively, while similar accuracy was obtained. Our method is publicly available at https://github.com/SuperElastix/NiftyRegSGD.
Marking Strategies in Metacognition-Evaluated Computer-Based Testing
ERIC Educational Resources Information Center
Chen, Li-Ju; Ho, Rong-Guey; Yen, Yung-Chin
2010-01-01
This study aimed to explore the effects of marking and metacognition-evaluated feedback (MEF) in computer-based testing (CBT) on student performance and review behavior. Marking is a strategy, in which students place a question mark next to a test item to indicate an uncertain answer. The MEF provided students with feedback on test results…
ERIC Educational Resources Information Center
McCredie, John W., Ed.
Ten case studies that describe the planning process and strategies employed by colleges that use computing and communication systems are presented, based on a 1981-1982 study conducted by EDUCOM. An introduction by John W. McCredie summarizes several current and future effects of the rapid spread and integration of computing and communication…
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL is also enabled with two novel features; the first one being a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second feature is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features in conjunction with bootstrapping enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
Konovalov, Arkady; Krajbich, Ian
2016-01-01
Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
Dols, W. Stuart; Persily, Andrew K.; Morrow, Jayne B.; Matzke, Brett D.; Sego, Landon H.; Nuffer, Lisa L.; Pulsipher, Brent A.
2010-01-01
In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones. PMID:27134782
Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A
2010-01-01
In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones.
Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations
NASA Astrophysics Data System (ADS)
Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.
2017-12-01
Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models, which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with an increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
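The two initial-design choices compared above differ only in where the point is placed within each stratum. The sketch below (illustrative, not the study's code) generates both variants: random LHS places the point uniformly within its stratum, while midpoint LHS places it at the stratum centre; either design would then be handed to an OLHS optimizer.

```python
import numpy as np

rng = np.random.default_rng(5)

def initial_lhs(n, d, midpoint=False):
    """Initial Latin hypercube design on [0, 1]^d (random or midpoint variant)."""
    design = np.empty((n, d))
    for j in range(d):
        strata = rng.permutation(n)                    # one stratum per point
        offset = 0.5 if midpoint else rng.uniform(size=n)
        design[:, j] = (strata + offset) / n
    return design

random_start = initial_lhs(10, 2, midpoint=False)
midpoint_start = initial_lhs(10, 2, midpoint=True)
print(midpoint_start)       # midpoint variant: coordinates are odd multiples of 1/(2n)
```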
NASA Astrophysics Data System (ADS)
Imada, Keita; Nakamura, Katsuhiko
This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called “bridging,” based on bottom-up parsing of positive samples and a search over rule sets. The sizes of the rule sets and the computation time depend on the search strategy. In addition to the global search for synthesizing minimal rule sets and the serial search, another method for synthesizing semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper shows several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation time.
ERIC Educational Resources Information Center
Kunsting, Josef; Wirth, Joachim; Paas, Fred
2011-01-01
Using a computer-based scientific discovery learning environment on buoyancy in fluids we investigated the "effects of goal specificity" (nonspecific goals vs. specific goals) for two goal types (problem solving goals vs. learning goals) on "strategy use" and "instructional efficiency". Our empirical findings close an important research gap,…
ERIC Educational Resources Information Center
Ponce, Hector R.; Lopez, Mario J.; Mayer, Richard E.
2012-01-01
This article examines the effectiveness of a computer-based instructional program (e-PELS) aimed at direct instruction in a collection of reading comprehension strategies. In e-PELS, students learn to highlight and outline expository passages based on various types of text structures (such as comparison or cause-and-effect) as well as to…
ERIC Educational Resources Information Center
Wang, Xiao-Ming; Hwang, Gwo-Jen
2017-01-01
Computer programming is a subject that requires problem-solving strategies and involves a great number of programming logic activities which pose challenges for learners. Therefore, providing learning support and guidance is important. Collaborative learning is widely believed to be an effective teaching approach; it can enhance learners' social…
Computational modeling of carbohydrate recognition in protein complex
NASA Astrophysics Data System (ADS)
Ishida, Toyokazu
2017-11-01
To understand the mechanistic principle of carbohydrate recognition in proteins, we propose a systematic computational modeling strategy that identifies complex carbohydrate chain conformations on a reduced 2D free energy surface (2D-FES), determined by MD sampling combined with QM/MM energy corrections. In this article, we first report a detailed atomistic simulation study of the norovirus capsid proteins with carbohydrate antigens based on ab initio QM/MM combined with MD-FEP simulations. The present result clearly shows that the binding geometries of the complex carbohydrate antigen are determined not by one single, rigid carbohydrate structure, but rather by the sum of averaged conformations mapped onto the minimum free energy region of the QM/MM 2D-FES.
A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.
Xie, Zhiqiang; Shao, Xia; Xin, Yu
2016-01-01
To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to solve the problem of constraint relations among task nodes. The strategy assigns different priority values to every task node based on the scheduling order of task node as affected by the constraint relations among task nodes, and the task node list is generated by the different priority value. To address the scheduling order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first as the completion time of task graph is indirectly influenced by the finishing time of task nodes in the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task Makespan in most cases and meet a high quality performance objective.
A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path
Xie, Zhiqiang; Shao, Xia; Xin, Yu
2016-01-01
To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to solve the problem of constraint relations among task nodes. The strategy assigns different priority values to every task node based on the scheduling order of task node as affected by the constraint relations among task nodes, and the task node list is generated by the different priority value. To address the scheduling order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first as the completion time of task graph is indirectly influenced by the finishing time of task nodes in the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task Makespan in most cases and meet a high quality performance objective. PMID:27490901
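As a rough illustration of the kind of priority bookkeeping described in the two records above, the sketch below implements a generic list scheduler that orders tasks by predecessor layer and breaks ties by the longest downstream path. It is a simplification for intuition only, not the DDEP algorithm, and the task names, computation costs and uniform communication cost are invented.

```python
# Hypothetical task DAG: task -> (computation cost, list of successors).
tasks = {
    "A": (2, ["B", "C"]),
    "B": (3, ["D"]),
    "C": (1, ["D"]),
    "D": (2, []),
}
comm_cost = 1   # uniform communication cost between dependent tasks (assumed)

# Build predecessor lists from the successor lists.
preds = {t: [] for t in tasks}
for t, (_, succs) in tasks.items():
    for s in succs:
        preds[s].append(t)

# Layer priority: a task's layer is the longest chain of predecessors before it.
layer = {}
def get_layer(t):
    if t not in layer:
        layer[t] = 0 if not preds[t] else 1 + max(get_layer(p) for p in preds[t])
    return layer[t]

# Tie-break within a layer by the longest downstream path ("essential path" flavour).
def downstream(t):
    cost, succs = tasks[t]
    return cost + (max(downstream(s) + comm_cost for s in succs) if succs else 0)

order = sorted(tasks, key=lambda t: (get_layer(t), -downstream(t)))
print("schedule order:", order)
```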
NASA Astrophysics Data System (ADS)
Ravishankar, Bharani
Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence, a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that, using adaptive sampling, the number of designs required to find the optimum was reduced drastically while improving the accuracy. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. The separable Monte Carlo method was employed, which allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, loading conditions of the panel, and error in finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability of failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timchalk, Charles; Weber, Thomas J.; Smith, Jordan N.
Advancements in Exposure Science involving the development and deployment of biomarkers of exposure and biological response are anticipated to significantly (and positively) influence health outcomes associated with occupational, environmental and clinical exposure to chemicals/drugs. To achieve this vision, innovative strategies are needed to develop multiplex sensor platforms capable of quantifying individual and mixed exposures (i.e. systemic dose) by measuring biomarkers of dose and biological response in readily obtainable (non-invasive) biofluids. Secondly, the use of saliva (alternative to blood) for biomonitoring coupled with the ability to rapidly analyze multiple samples in real-time offers an innovative opportunity to revolutionize biomonitoring assessments. In this regard, the timing and number of samples taken for biomonitoring will not be limited as is currently the case. In addition, real-time analysis will facilitate identification of work practices or conditions that are contributing to increased exposures and will make possible a more rapid and successful intervention strategy. The initial development and application of computational models for evaluation of saliva/blood analyte concentration at anticipated exposure levels represents an important opportunity to establish the limits of quantification and robustness of multiplex sensor systems by exploiting a unique computational modeling framework. The use of these pharmacokinetic models will also enable prediction of an exposure dose based on the saliva/blood measurement. This novel strategy will result in a more accurate prediction of exposures and, once validated, can be employed to assess dosimetry to a broad range of chemicals in support of biomonitoring and epidemiology studies.
A generalized least-squares framework for rare-variant analysis in family data.
Li, Dalin; Rotter, Jerome I; Guo, Xiuqing
2014-01-01
Rare variants may, in part, explain some of the heritability missing in current genome-wide association studies. Many gene-based rare-variant analysis approaches proposed in recent years are aimed at population-based samples, although analysis strategies for family-based samples are clearly warranted since the family-based design has the potential to enhance our ability to enrich for rare causal variants. We have recently developed the generalized least squares sequence kernel association test, or GLS-SKAT, approach for rare-variant analyses in family samples, in which the kinship matrix computed from the high-dimensional genetic data was used to decorrelate the family structure. We then applied the SKAT-O approach for gene-/region-based inference in the decorrelated data. In this study, we applied this GLS-SKAT method to the systolic blood pressure data in the simulated family sample distributed by the Genetic Analysis Workshop 18. We compared the GLS-SKAT approach to the rare-variant analysis approach implemented in family-based association test-v1 and demonstrated that the GLS-SKAT approach provides superior power and good control of the type I error rate.
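The decorrelation step referred to above can be sketched as follows (a hedged illustration, not the authors' GLS-SKAT code): under a polygenic model the phenotypic covariance among relatives is Sigma = 2*sigma_g^2*K + sigma_e^2*I, where K is the kinship matrix; multiplying phenotypes and genotypes by the inverse Cholesky factor of Sigma yields approximately independent observations that can then be passed to a kernel association test designed for unrelated samples. The kinship structure, variance components and genotype frequencies below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 200
# Hypothetical kinship matrix: 100 sib-pairs (self-kinship 0.5, within-pair kinship 0.25).
K = np.kron(np.eye(n // 2), np.array([[0.5, 0.25], [0.25, 0.5]]))
sigma_g2, sigma_e2 = 0.6, 0.4                 # assumed (e.g. from a null model fit)
Sigma = 2 * sigma_g2 * K + sigma_e2 * np.eye(n)

# Simulated phenotype and a rare-variant genotype matrix for one gene region.
y = rng.multivariate_normal(np.zeros(n), Sigma)
G = rng.binomial(2, 0.01, size=(n, 20)).astype(float)

# GLS decorrelation: whiten phenotype and genotypes with the Cholesky factor of Sigma.
L = np.linalg.cholesky(Sigma)
y_star = np.linalg.solve(L, y)
G_star = np.linalg.solve(L, G)
# y_star and G_star can now be analysed with a standard kernel test for unrelated samples.
print("whitened phenotype variance ≈", round(float(y_star.var()), 2))
```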
ERIC Educational Resources Information Center
Clearing: Nature and Learning in the Pacific Northwest, 1985
1985-01-01
Presents an activity in which students create a computer program capable of recording and projecting paper use at school. Includes instructional strategies and background information such as requirements for pounds of paper/tree, energy needs, water consumption, and paper value at the recycling center. A sample program is included. (DH)
Information Science Research: The Search for the Nature of Information.
ERIC Educational Resources Information Center
Kochen, Manfred
1984-01-01
High-level scientific research in the information sciences is illustrated by sampling of recent discoveries involving adaptive information processing strategies, computer and information systems, centroid scaling, economic growth of computer and communication industries, and information flow in biological systems. Relationship of information…
Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning
Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron
2014-01-01
We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474
Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning.
Ichnowski, Jeffrey; Prins, Jan F; Alterovitz, Ron
2014-05-01
We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU's cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot's configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot.
Tam, S F
2000-10-15
The aim of this controlled, quasi-experimental study was to evaluate the effects of both self-efficacy enhancement and social comparison training strategy on computer skills learning and self-concept outcome of trainees with physical disabilities. The self-efficacy enhancement group comprised 16 trainees, the tutorial training group comprised 15 trainees, and there were 25 subjects in the control group. Both the self-efficacy enhancement group and the tutorial training group received a 15-week computer skills training course, including generic Chinese computer operation, Chinese word processing and Chinese desktop publishing skills. The self-efficacy enhancement group received training with tutorial instructions that incorporated self-efficacy enhancement strategies and experienced self-enhancing social comparisons. The tutorial training group received behavioural learning-based tutorials only, and the control group did not receive any training. The following measurements were employed to evaluate the outcomes: the Self-Concept Questionnaire for the Physically Disabled Hong Kong Chinese (SCQPD), the computer self-efficacy rating scale and the computer performance rating scale. The self-efficacy enhancement group showed significantly better computer skills learning outcome, total self-concept, and social self-concept than the tutorial training group. The self-efficacy enhancement group did not show significant changes in their computer self-efficacy; however, the tutorial training group showed a significant lowering of their computer self-efficacy. The training strategy that incorporated self-efficacy enhancement and positive social comparison experiences maintained the computer self-efficacy of trainees with physical disabilities. This strategy was more effective in improving the learning outcome (p = 0.01) and self-concept (p = 0.05) of the trainees than the conventional tutorial-based training strategy.
Computerized Measurement of Negative Symptoms in Schizophrenia
Cohen, Alex S.; Alpert, Murray; Nienow, Tasha M.; Dinzeo, Thomas J.; Docherty, Nancy M.
2008-01-01
Accurate measurement of negative symptoms is crucial for understanding and treating schizophrenia. However, current measurement strategies are reliant on subjective symptom rating scales which often have psychometric and practical limitations. Computerized analysis of patients’ speech offers a sophisticated and objective means of evaluating negative symptoms. The present study examined the feasibility and validity of using widely-available acoustic and lexical-analytic software to measure flat affect, alogia and anhedonia (via positive emotion). These measures were examined in their relationships to clinically-rated negative symptoms and social functioning. Natural speech samples were collected and analyzed for 14 patients with clinically-rated flat affect, 46 patients without flat affect and 19 healthy controls. The computer-based inflection and speech rate measures significantly discriminated patients with flat affect from controls, and the computer-based measure of alogia and negative emotion significantly discriminated the flat and non-flat patients. Both the computer and clinical measures of positive emotion/anhedonia corresponded to functioning impairments. The computerized method of assessing negative symptoms offered a number of advantages over the symptom scale-based approach. PMID:17920078
NASA Astrophysics Data System (ADS)
Mora, A.; Han, F.; Lubineau, G.
2018-04-01
One strategy to ensure that nanofiller networks in a polymer composite percolate at low volume fractions is to promote segregation. In a segregated structure, the concentration of nanofillers is kept low in some regions of the sample. In turn, the concentration in the remaining regions is much higher than the average concentration of the sample. This selective placement of the nanofillers ensures percolation at low average concentration. One original strategy to promote segregation is by tuning the shape of the nanofillers. We use a computational approach to study the conductive networks formed by hybrid particles obtained by growing carbon nanotubes (CNTs) on graphene nanoplatelets (GNPs). The objective of this study is (1) to show that the higher electrical conductivity of these composites is due to the hybrid particles forming a segregated structure and (2) to understand which parameters defining the hybrid particles determine the efficiency of the segregation. We construct a microstructure to observe the conducting paths and determine whether a segregated structure has indeed been formed inside the composite. A measure of efficiency is presented based on the fraction of nanofillers that contribute to the conductive network. Then, the efficiency of the hybrid-particle networks is compared to those of three other networks of carbon-based nanofillers in which no hybrid particles are used: only CNTs, only GNPs, and a mix of CNTs and GNPs. Finally, some parameters of the hybrid particle are studied: the CNT density on the GNPs, and the CNT and GNP geometries. We also present recommendations for the further improvement of a composite’s conductivity based on these parameters.
Gibbs sampling on large lattice with GMRF
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Allard, Denis
2018-02-01
Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods such as multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and it does not reproduce the desired covariance exactly. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We empirically study the speed of convergence and the effects of the choice of boundary conditions, the correlation range and the GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it possible to realistically apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
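The coding-set idea above is easiest to see on the simplest GMRF, a first-order (four-neighbour) lattice model, where the familiar checkerboard pattern gives two coding sets: all "black" sites can be updated simultaneously because they share no neighbours, then all "white" sites. The sketch below shows only that pattern for the untruncated Gaussian case; the convolution-based update, truncation with acceptance/rejection and the boundary-condition study of the paper are omitted, and the precision parameters and Dirichlet boundary are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 128
x = np.zeros((n, n))
beta, kappa = 0.24, 1.0       # neighbour weight (< 0.25 for a valid model) and precision

# Checkerboard coding sets: sites in the same set share no first-order neighbours.
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
black = (ii + jj) % 2 == 0

def neighbour_sum(field):
    # Sum of the four nearest neighbours with zero (Dirichlet) boundaries.
    s = np.zeros_like(field)
    s[1:, :] += field[:-1, :]; s[:-1, :] += field[1:, :]
    s[:, 1:] += field[:, :-1]; s[:, :-1] += field[:, 1:]
    return s

for sweep in range(200):
    for mask in (black, ~black):
        cond_mean = beta * neighbour_sum(x)       # conditional means given the other set
        x[mask] = cond_mean[mask] + rng.normal(scale=1 / np.sqrt(kappa), size=mask.sum())

print("sample mean/sd:", x.mean(), x.std())
```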
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the improved VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of a high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the already required information on the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
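A hedged sketch of the general variable-fidelity pattern discussed above follows (a simplification, not the authors' improved hierarchical kriging): the low-fidelity model, passed through a polynomial correction, acts as the trend of the high-fidelity surrogate, and the residual is modelled with a Gaussian process; an adaptive step would then place the next high-fidelity sample where the predictive standard deviation is largest. The 1-D test functions, sample locations and use of scikit-learn are assumptions made for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_hi(x):  return (6 * x - 2) ** 2 * np.sin(12 * x - 4)          # "expensive" model
def f_lo(x):  return 0.5 * f_hi(x) + 10 * (x - 0.5) - 5             # "cheap" model

X_hi = np.array([0.0, 0.35, 0.6, 1.0]).reshape(-1, 1)               # few HF samples
y_hi = f_hi(X_hi).ravel()

# Polynomial (here linear) correction of the LF response acts as the trend.
A = np.column_stack([np.ones(len(X_hi)), f_lo(X_hi).ravel()])
coef, *_ = np.linalg.lstsq(A, y_hi, rcond=None)
trend = lambda X: coef[0] + coef[1] * f_lo(X).ravel()

# Gaussian process on the residual between HF data and the LF-based trend.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8, optimizer=None)
gp.fit(X_hi, y_hi - trend(X_hi))

X_test = np.linspace(0, 1, 201).reshape(-1, 1)
resid_mean, resid_std = gp.predict(X_test, return_std=True)
y_pred = trend(X_test) + resid_mean
x_next = X_test[np.argmax(resid_std)]     # candidate for the next adaptive HF sample
print("suggested next high-fidelity sample at x =", float(x_next[0]))
```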
Quality based approach for adaptive face recognition
NASA Astrophysics Data System (ADS)
Abboud, Ali J.; Sellahewa, Harin; Jassim, Sabah A.
2009-05-01
Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build systems that have low recognition errors and are less affected by variation in recording conditions. Recognition errors are often attributed to the use of low-quality biometric samples. Hence, there is a need to develop new intelligent techniques and strategies to automatically measure/quantify the quality of biometric image samples and, if necessary, restore image quality according to the needs of the intended application. In this paper, we present no-reference image quality measures in the spatial domain that impact face recognition. The first is called the symmetrical adaptive local quality index (SALQI) and the second is called the middle halve (MH). Also, an adaptive strategy has been developed to select the best way to restore the image quality, called symmetrical adaptive histogram equalization (SAHE). The main benefits of using quality measures for the adaptive strategy are: (1) avoidance of excessive unnecessary enhancement procedures that may cause undesired artifacts, and (2) reduced computational complexity, which is essential for real-time applications. We test the success of the proposed measures and adaptive approach for a wavelet-based face recognition system that uses the nearest neighborhood classifier. We demonstrate noticeable improvements in the performance of the adaptive face recognition system over the corresponding non-adaptive scheme.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a "bundle" of similar model parametrizations replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Large-scale quantum photonic circuits in silicon
NASA Astrophysics Data System (ADS)
Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk
2016-08-01
Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ~30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards large-scale source integration. Finally, we review monolithic integration strategies for single-photon detectors and their essential role in on-chip feedforward operations.
The impact of countermeasure propagation on the prevalence of computer viruses.
Chen, Li-Chiou; Carley, Kathleen M
2004-04-01
Countermeasures such as software patches or warnings can be effective in helping organizations avert virus infection problems. However, current strategies for disseminating such countermeasures have limited their effectiveness. We propose a new approach, called the Countermeasure Competing (CMC) strategy, and use computer simulation to formally compare its relative effectiveness with three antivirus strategies currently under consideration. CMC is based on the idea that computer viruses and countermeasures spread through two separate but interlinked complex networks-the virus-spreading network and the countermeasure-propagation network, in which a countermeasure acts as a competing species against the computer virus. Our results show that CMC is more effective than other strategies based on the empirical virus data. The proposed CMC reduces the size of virus infection significantly when the countermeasure-propagation network has properties that favor countermeasures over viruses, or when the countermeasure-propagation rate is higher than the virus-spreading rate. In addition, our work reveals that CMC can be flexibly adapted to different uncertainties in the real world, enabling it to be tuned to a greater variety of situations than other strategies.
Godinez, William J; Rohr, Karl
2015-02-01
Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
ERIC Educational Resources Information Center
Ruiz-Iniesta, Almudena; Jiménez-Díaz, Guillermo; Gómez-Albarrán, Mercedes
2014-01-01
This paper describes a knowledge-based strategy for recommending educational resources (worked problems, exercises, quiz questions, and lecture notes) to learners in the first two courses in the introductory sequence of a computer science major (CS1 and CS2). The goal of the recommendation strategy is to provide support for personalized access to…
An implementation of differential evolution algorithm for inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Balkaya, Çağlayan
2013-11-01
Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, including mutation, crossover and selection, similar to the genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, including DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to both noise-free and noisy synthetic data sets, two field data sets observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India) were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set example from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets under consideration. Of these, strategy 1 was found to be the most effective for parameter estimation, providing lower computational cost together with good accuracy. The solutions obtained by DE for the synthetic cases of SP were quite consistent with particle swarm optimization (PSO), which is a more widely used population-based optimization algorithm than DE in geophysics. Estimated parameters of the SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing (SA) without cooling to clarify uncertainties in the solutions. Comparison to the M-H algorithm shows that DE performs a fast approximate posterior sampling for the case of low-dimensional inverse geophysical problems.
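The three mutation strategies named in the abstract follow standard DE definitions; the sketch below (illustrative scale factor F and crossover rate CR, not the values used in the paper) shows how each mutant vector is formed and combined with the target vector by binomial crossover.

```python
import numpy as np

# DE mutation strategies from the abstract, followed by binomial crossover.
def mutate(pop, i, best, F=0.8, strategy=1, rng=np.random.default_rng()):
    r1, r2, r3 = rng.choice([k for k in range(len(pop)) if k != i], 3, replace=False)
    if strategy == 1:                                   # DE/best/1
        return pop[best] + F * (pop[r1] - pop[r2])
    if strategy == 2:                                   # DE/rand/1
        return pop[r1] + F * (pop[r2] - pop[r3])
    return pop[i] + F * (pop[best] - pop[i]) + F * (pop[r1] - pop[r2])  # DE/rand-to-best/1

def crossover(target, mutant, CR=0.9, rng=np.random.default_rng()):
    mask = rng.random(target.size) < CR
    mask[rng.integers(target.size)] = True              # keep at least one mutant gene
    return np.where(mask, mutant, target)
```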
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample that was created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy and a quantitation and identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than other previously used comparison methods based on transferring comparison or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
Technologies for imaging neural activity in large volumes
Ji, Na; Freeman, Jeremy; Smith, Spencer L.
2017-01-01
Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Because it collects data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behavior. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics and help elucidate how brain regions work in concert to support behavior. PMID:27571194
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is defined according to the model; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to ensure both global and local search ability. Performance tests were carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm can reduce task execution time and save user cost, achieving effective optimal scheduling of cloud computing tasks.
Thonusin, Chanisa; IglayReger, Heidi B; Soni, Tanu; Rothberg, Amy E; Burant, Charles F; Evans, Charles R
2017-11-10
In recent years, mass spectrometry-based metabolomics has increasingly been applied to large-scale epidemiological studies of human subjects. However, the successful use of metabolomics in this context is subject to the challenge of detecting biologically significant effects despite substantial intensity drift that often occurs when data are acquired over a long period or in multiple batches. Numerous computational strategies and software tools have been developed to aid in correcting for intensity drift in metabolomics data, but most of these techniques are implemented using command-line driven software and custom scripts which are not accessible to all end users of metabolomics data. Further, it has not yet become routine practice to assess the quantitative accuracy of drift correction against techniques which enable true absolute quantitation such as isotope dilution mass spectrometry. We developed an Excel-based tool, MetaboDrift, to visually evaluate and correct for intensity drift in a multi-batch liquid chromatography - mass spectrometry (LC-MS) metabolomics dataset. The tool enables drift correction based on either quality control (QC) samples analyzed throughout the batches or using QC-sample independent methods. We applied MetaboDrift to an original set of clinical metabolomics data from a mixed-meal tolerance test (MMTT). The performance of the method was evaluated for multiple classes of metabolites by comparison with normalization using isotope-labeled internal standards. QC sample-based intensity drift correction significantly improved correlation with IS-normalized data, and resulted in detection of additional metabolites with significant physiological response to the MMTT. The relative merits of different QC-sample curve fitting strategies are discussed in the context of batch size and drift pattern complexity. Our drift correction tool offers a practical, simplified approach to drift correction and batch combination in large metabolomics studies. Copyright © 2017 Elsevier B.V. All rights reserved.
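A minimal sketch of the QC-sample-based correction idea (not the MetaboDrift implementation; the polynomial degree and the normalization are illustrative assumptions): fit a smooth trend through the pooled-QC injections as a function of run order and divide every sample intensity by that trend.

```python
import numpy as np

# Correct intensity drift for one metabolite using the pooled-QC injections.
def qc_drift_correct(intensity, run_order, is_qc, degree=3):
    coeffs = np.polyfit(run_order[is_qc], intensity[is_qc], degree)
    trend = np.polyval(coeffs, run_order)
    trend /= np.median(intensity[is_qc])   # keep intensities on the original scale
    return intensity / trend
```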
Problems with sampling desert tortoises: A simulation analysis based on field data
Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.
2005-01-01
The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km2 (1-mi2) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from 2 1-mi2 plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi2 data using 1-km2 and 0.25-km2 plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi2 plots. We also tested distance sampling by saturating a 1-mi2 site with computer simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km2 plots did not differ significantly from the estimates derived at 1-mi2. The 0.25-km2 subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient of variation to density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that poor performance and bias of both sampling procedures was driven by insufficient sample size, suggesting that all efforts must be directed to increasing numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
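For reference, the Schnabel mark-recapture estimate mentioned in the abstract can be sketched as below (the +1 bias correction in the denominator is one common variant; the numbers in the example are illustrative, not field data).

```python
# Schnabel estimate: N_hat = sum(C_t * M_t) / (sum(R_t) + 1), where on each
# occasion t, C_t animals are caught, R_t of them are already marked, and M_t
# is the number of marked animals in the population before occasion t.
def schnabel(catches, recaptures):
    marked = 0
    numerator = 0
    total_recaptures = 0
    for c, r in zip(catches, recaptures):
        numerator += c * marked
        total_recaptures += r
        marked += c - r                    # newly marked animals released
    return numerator / (total_recaptures + 1)

# Illustrative example with five sampling occasions
print(schnabel([15, 18, 12, 20, 16], [0, 3, 4, 7, 8]))
```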
Random sphere packing model of heterogeneous propellants
NASA Astrophysics Data System (ADS)
Kochevets, Sergei Victorovich
It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology, a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
Wei, Qichao; Zhao, Weilong; Yang, Yang; Cui, Beiliang; Xu, Zhijun; Yang, Xiaoning
2018-03-19
Considerable interest in characterizing protein/peptide-surface interactions has prompted extensive computational studies on calculations of adsorption free energy. However, in many cases, each individual study has focused on the application of free energy calculations to a specific system; therefore, it is difficult to combine the results into a general picture for choosing an appropriate strategy for the system of interest. Herein, three well-established computational algorithms are systemically compared and evaluated to compute the adsorption free energy of small molecules on two representative surfaces. The results clearly demonstrate that the characteristics of studied interfacial systems have crucial effects on the accuracy and efficiency of the adsorption free energy calculations. For the hydrophobic surface, steered molecular dynamics exhibits the highest efficiency, which appears to be a favorable method of choice for enhanced sampling simulations. However, for the charged surface, only the umbrella sampling method has the ability to accurately explore the adsorption free energy surface. The affinity of the water layer to the surface significantly affects the performance of free energy calculation methods, especially at the region close to the surface. Therefore, a general principle of how to discriminate between methodological and sampling issues based on the interfacial characteristics of the system under investigation is proposed. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
A finite element approach for solution of the 3D Euler equations
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Ramakrishnan, R.; Dechaumphai, P.
1986-01-01
Prediction of thermal deformations and stresses has prime importance in the design of the next generation of high speed flight vehicles. Aerothermal load computations for complex three-dimensional shapes necessitate development of procedures to solve the full Navier-Stokes equations. This paper details the development of a three-dimensional inviscid flow approach which can be extended for three-dimensional viscous flows. A finite element formulation, based on a Taylor series expansion in time, is employed to solve the compressible Euler equations. Model generation and results display are done using a commercially available program, PATRAN, and vectorizing strategies are incorporated to ensure computational efficiency. Sample problems are presented to demonstrate the validity of the approach for analyzing high speed compressible flows.
Graveley, E; Fullerton, J T
1998-04-01
The use of electronic technology allows faculty to improve their course offerings. Four graduate courses in nursing administration were contemporized to incorporate fundamental computer-based skills that would be expected of graduates in the work setting. Principles of adult learning offered a philosophical foundation that guided course development and revision. Course delivery strategies included computer-assisted instructional modules, e-mail interactive discussion groups, and use of the electronic classroom. Classroom seminar discussions and two-way interactive video conferencing focused on group resolution of problems derived from employment settings and assigned readings. Using these electronic technologies, a variety of courses can be revised to accommodate the learners' needs.
Promoting Technology-Assisted Active Learning in Computer Science Education
ERIC Educational Resources Information Center
Gao, Jinzhu; Hargis, Jace
2010-01-01
This paper describes specific active learning strategies for teaching computer science, integrating both instructional technologies and non-technology-based strategies shown to be effective in the literature. The theoretical learning components addressed include an intentional method to help students build metacognitive abilities, as well as…
Qin, Chao; Sun, Yongqi; Dong, Yadong
2017-01-01
Essential proteins are the proteins that are indispensable to the survival and development of an organism. Deleting a single essential protein will cause lethality or infertility. Identifying and analysing essential proteins are key to understanding the molecular mechanisms of living cells. There are two types of methods for predicting essential proteins: experimental methods, which require considerable time and resources, and computational methods, which overcome the shortcomings of experimental methods. However, the prediction accuracy of computational methods for essential proteins requires further improvement. In this paper, we propose a new computational strategy named CoTB for identifying essential proteins based on a combination of topological properties, subcellular localization information and orthologous protein information. First, we introduce several topological properties of the protein-protein interaction (PPI) network. Second, we propose new methods for measuring orthologous information and subcellular localization and a new computational strategy that uses a random forest prediction model to obtain a probability score for the proteins being essential. Finally, we conduct experiments on four different Saccharomyces cerevisiae datasets. The experimental results demonstrate that our strategy for identifying essential proteins outperforms traditional computational methods and the most recently developed method, SON. In particular, our strategy improves the prediction accuracy to 89, 78, 79, and 85 percent on the YDIP, YMIPS, YMBD and YHQ datasets at the top 100 level, respectively.
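A minimal sketch of the scoring step (the feature matrix and its columns are placeholders, not the exact CoTB features): topological, subcellular-localization and orthology features are combined in a random forest, and proteins are ranked by the predicted probability of being essential.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Rank candidate proteins by their predicted probability of being essential.
def rank_essential(train_features, train_labels, candidate_features, top_k=100):
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(train_features, train_labels)                  # known essential / non-essential
    scores = rf.predict_proba(candidate_features)[:, 1]   # probability of "essential"
    return np.argsort(scores)[::-1][:top_k]               # indices of the top-k proteins
```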
ERIC Educational Resources Information Center
Taylor, William; And Others
The effects of the Attention Directing Strategy and Imagery Cue Strategy as program embedded learning strategies for microcomputer-based instruction (MCBI) were examined in this study. Eight learning conditions with identical instructional content on the parts and operation of the human heart were designed: either self-paced or externally-paced,…
Applying active learning to supervised word sense disambiguation in MEDLINE.
Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua
2013-01-01
This study was to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples, while keeping or improving the quality of disambiguation models. We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation. Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. This study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models.
Applying active learning to supervised word sense disambiguation in MEDLINE
Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua
2013-01-01
Objectives This study was to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples, while keeping or improving the quality of disambiguation models. Methods We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation. Results Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. Conclusions This study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models. PMID:23364851
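A minimal sketch of one uncertainty-sampling loop with an SVM (a least-confidence criterion is shown; the exact criteria, kernel, and annotation budget in the study may differ): at each round the learner queries the unlabelled sample about which the current classifier is least certain, adds its annotation, and refits.

```python
import numpy as np
from sklearn.svm import SVC

def uncertainty_sampling(X_pool, y_pool, X_labeled, y_labeled, rounds=24):
    clf = SVC(kernel='linear', probability=True)
    pool_idx = list(range(len(X_pool)))
    for _ in range(rounds):
        clf.fit(X_labeled, y_labeled)
        probs = clf.predict_proba(X_pool[pool_idx])
        least_confident = int(np.argmin(probs.max(axis=1)))   # lowest top-class probability
        chosen = pool_idx.pop(least_confident)
        X_labeled = np.vstack([X_labeled, X_pool[chosen]])
        y_labeled = np.append(y_labeled, y_pool[chosen])      # simulated annotation
    return clf.fit(X_labeled, y_labeled)
```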
Assessment of Adaptive PBL's Impact on HOT Development of Computer Science Students
ERIC Educational Resources Information Center
Raiyn, Jamal; Tilchin, Oleg
2015-01-01
Meaningful learning based on PBL is a new learning strategy. Compared to the traditional learning strategy, the meaningful learning strategy puts the student at the center of the learning process, and the student's role is correspondingly increased. The Problem-based Learning (PBL) model is considered the most productive way to…
Strategic Planning for Computer-Based Educational Technology.
ERIC Educational Resources Information Center
Bozeman, William C.
1984-01-01
Offers educational practitioners direction for the development of a master plan for the implementation and application of computer-based educational technology by briefly examining computers in education, discussing organizational change from a theoretical perspective, and presenting an overview of the planning strategy known as the planning and…
A Software Laboratory Environment for Computer-Based Problem Solving.
ERIC Educational Resources Information Center
Kurtz, Barry L.; O'Neal, Micheal B.
This paper describes a National Science Foundation-sponsored project at Louisiana Technological University to develop computer-based laboratories for "hands-on" introductions to major topics of computer science. The underlying strategy is to develop structured laboratory environments that present abstract concepts through the use of…
Effect of Computer-Based Video Games on Children: An Experimental Study
ERIC Educational Resources Information Center
Chuang, Tsung-Yen; Chen, Wei-Fan
2009-01-01
This experimental study investigated whether computer-based video games facilitate children's cognitive learning. In comparison to traditional computer-assisted instruction (CAI), this study explored the impact of the varied types of instructional delivery strategies on children's learning achievement. One major research null hypothesis was…
Cloud Based Applications and Platforms (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brodt-Giles, D.
2014-05-15
Presentation to the Cloud Computing East 2014 Conference, where we are highlighting our cloud computing strategy, describing the platforms on the cloud (including Smartgrid.gov), and defining our process for implementing cloud based applications.
Cruz-Roa, Angel; Gilmore, Hannah; Basavanhally, Ajay; Feldman, Michael; Ganesan, Shridar; Shih, Natalie; Tomaszewski, John; Madabhushi, Anant; González, Fabio
2018-01-01
Precise detection of invasive cancer on whole-slide images (WSI) is a critical first step in digital pathology tasks of diagnosis and grading. Convolutional neural networks (CNNs) are the most popular representation learning method for computer vision tasks and have been successfully applied in digital pathology, including tumor and mitosis detection. However, CNNs are typically only tenable with relatively small image sizes (200 × 200 pixels). Only recently have fully convolutional networks (FCNs) become able to deal with larger image sizes (500 × 500 pixels) for semantic segmentation. Hence, the direct application of CNNs to WSI is not computationally feasible because, for a WSI, a CNN would require billions or trillions of parameters. To alleviate this issue, this paper presents a novel method, High-throughput Adaptive Sampling for whole-slide Histopathology Image analysis (HASHI), which involves: i) a new efficient adaptive sampling method based on probability gradient and quasi-Monte Carlo sampling, and ii) a powerful representation learning classifier based on CNNs. We applied HASHI to automated detection of invasive breast cancer on WSI. HASHI was trained and validated using three different data cohorts involving nearly 500 cases and then independently tested on 195 studies from The Cancer Genome Atlas. The results show that (1) the adaptive sampling method is an effective strategy to deal with WSI without compromising prediction accuracy, obtaining results comparable to dense sampling (∼6 million samples in 24 hours) with far fewer samples (∼2,000 samples in 1 minute), and (2) on an independent test dataset, HASHI is effective and robust to data from multiple sites, scanners, and platforms, achieving an average Dice coefficient of 76%.
Gilmore, Hannah; Basavanhally, Ajay; Feldman, Michael; Ganesan, Shridar; Shih, Natalie; Tomaszewski, John; Madabhushi, Anant; González, Fabio
2018-01-01
Precise detection of invasive cancer on whole-slide images (WSI) is a critical first step in digital pathology tasks of diagnosis and grading. Convolutional neural networks (CNNs) are the most popular representation learning method for computer vision tasks and have been successfully applied in digital pathology, including tumor and mitosis detection. However, CNNs are typically only tenable with relatively small image sizes (200 × 200 pixels). Only recently have fully convolutional networks (FCNs) become able to deal with larger image sizes (500 × 500 pixels) for semantic segmentation. Hence, the direct application of CNNs to WSI is not computationally feasible because, for a WSI, a CNN would require billions or trillions of parameters. To alleviate this issue, this paper presents a novel method, High-throughput Adaptive Sampling for whole-slide Histopathology Image analysis (HASHI), which involves: i) a new efficient adaptive sampling method based on probability gradient and quasi-Monte Carlo sampling, and ii) a powerful representation learning classifier based on CNNs. We applied HASHI to automated detection of invasive breast cancer on WSI. HASHI was trained and validated using three different data cohorts involving nearly 500 cases and then independently tested on 195 studies from The Cancer Genome Atlas. The results show that (1) the adaptive sampling method is an effective strategy to deal with WSI without compromising prediction accuracy, obtaining results comparable to dense sampling (∼6 million samples in 24 hours) with far fewer samples (∼2,000 samples in 1 minute), and (2) on an independent test dataset, HASHI is effective and robust to data from multiple sites, scanners, and platforms, achieving an average Dice coefficient of 76%. PMID:29795581
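A minimal sketch of the gradient-guided quasi-Monte Carlo sampling idea (not the HASHI code; the Sobol candidate count, the weighting, and the small constant are illustrative assumptions): candidate patch locations are drawn from a Sobol sequence and kept preferentially where the gradient of the current probability map is large, i.e. near predicted tumor boundaries.

```python
import numpy as np
from scipy.stats import qmc

def adaptive_locations(prob_map, n_candidates=4096, keep=512, seed=0):
    h, w = prob_map.shape
    gy, gx = np.gradient(prob_map)
    grad_mag = np.hypot(gx, gy)                      # probability-gradient magnitude
    pts = qmc.Sobol(d=2, scramble=True, seed=seed).random(n_candidates)
    rows = (pts[:, 0] * (h - 1)).astype(int)
    cols = (pts[:, 1] * (w - 1)).astype(int)
    weights = grad_mag[rows, cols] + 1e-6            # avoid zero weights
    rng = np.random.default_rng(seed)
    keep_idx = rng.choice(n_candidates, size=keep, replace=False,
                          p=weights / weights.sum())
    return rows[keep_idx], cols[keep_idx]            # pixel coordinates to sample next
```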
NASA Astrophysics Data System (ADS)
Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc
2016-03-01
One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.
A prioritization and analysis strategy for environmental surveillance results.
Shyr, L J; Herrera, H; Haaker, R
1997-11-01
DOE facilities are required to conduct environmental surveillance to verify that facility operations remain within the approved risk envelope and have not caused undue risk to the public and the environment. Given a reduced budget, a strategy for analyzing environmental surveillance data was developed to set priorities for sampling needs. The radiological and metal data collected at Sandia National Laboratories, New Mexico, were used to demonstrate the analysis strategy. Sampling locations were prioritized for further investigation and for routine sampling needs. The process of data management, analysis, prioritization, and presentation has been automated through a custom-designed computer tool. Data collected over years can be analyzed and summarized in a short table format for prioritization and decision making.
Material identification based upon energy-dependent attenuation of neutrons
Marleau, Peter
2015-10-06
Various technologies pertaining to identifying a material in a sample and imaging the sample are described herein. The material is identified by computing energy-dependent attenuation of neutrons that is caused by presence of the sample in travel paths of the neutrons. A mono-energetic neutron generator emits the neutron, which is downscattered in energy by a first detector unit. The neutron exits the first detector unit and is detected by a second detector unit subsequent to passing through the sample. Energy-dependent attenuation of neutrons passing through the sample is computed based upon a computed energy of the neutron, wherein such energy can be computed based upon 1) known positions of the neutron generator, the first detector unit, and the second detector unit; or 2) computed time of flight of neutrons between the first detector unit and the second detector unit.
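The energy computation from time of flight reduces to non-relativistic kinematics, E = 1/2 m (d/t)^2; a small worked example with illustrative numbers (not values from the patent) is sketched below.

```python
# Neutron kinetic energy from time of flight between the two detector units.
NEUTRON_MASS = 1.675e-27   # kg
J_PER_MEV = 1.602e-13      # joules per MeV

def neutron_energy_mev(path_length_m, time_of_flight_s):
    v = path_length_m / time_of_flight_s      # neutron speed
    return 0.5 * NEUTRON_MASS * v**2 / J_PER_MEV

# A neutron covering 1.0 m in 72 ns carries roughly 1 MeV.
print(neutron_energy_mev(1.0, 72e-9))
```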
Modeling Cognitive Strategies during Complex Task Performing Process
ERIC Educational Resources Information Center
Mazman, Sacide Guzin; Altun, Arif
2012-01-01
The purpose of this study is to examine individuals' computer-based complex task performing processes and strategies in order to determine the reasons for failure, using cognitive task analysis and cued retrospective think-aloud with eye movement data. The study group was five senior students from Computer Education and Instructional Technologies…
Remediating Physics Misconceptions Using an Analogy-Based Computer Tutor. Draft.
ERIC Educational Resources Information Center
Murray, Tom; And Others
Described is a computer tutor designed to help students gain a qualitative understanding of important physics concepts. The tutor simulates a teaching strategy called "bridging analogies" that previous research has demonstrated to be successful in one-on-one tutoring and written explanation studies. The strategy is designed to remedy…
Simulating and assessing boson sampling experiments with phase-space representations
NASA Astrophysics Data System (ADS)
Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.
2018-04-01
The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.
I Use the Computer to ADVANCE Advances in Comprehension-Strategy Research.
ERIC Educational Resources Information Center
Blohm, Paul J.
Merging the instructional implications drawn from theory and research in the interactive reading model, schemata, and metacognition with computer based instruction seems a natural approach for actively involving students' participation in reading and learning from text. Computer based graphic organizers guide students' preview or review of lengthy…
Strategies, Challenges and Prospects for Active Learning in the Computer-Based Classroom
ERIC Educational Resources Information Center
Holbert, K. E.; Karady, G. G.
2009-01-01
The introduction of computer-equipped classrooms into engineering education has brought with it a host of opportunities and issues. Herein, some of the challenges and successes for creating an environment for active learning within computer-based classrooms are described. The particular teaching approach developed for undergraduate electrical…
Automated storm water sampling on small watersheds
Harmel, R.D.; King, K.W.; Slade, R.M.
2003-01-01
Few guidelines are currently available to assist in designing appropriate automated storm water sampling strategies for small watersheds. Therefore, guidance is needed to develop strategies that achieve an appropriate balance between accurate characterization of storm water quality and loads and limitations of budget, equipment, and personnel. In this article, we explore the important sampling strategy components (minimum flow threshold, sampling interval, and discrete versus composite sampling) and project-specific considerations (sampling goal, sampling and analysis resources, and watershed characteristics) based on personal experiences and pertinent field and analytical studies. These components and considerations are important in achieving the balance between sampling goals and limitations because they determine how and when samples are taken and the potential sampling error. Several general recommendations are made, including: setting low minimum flow thresholds, using flow-interval or variable time-interval sampling, and using composite sampling to limit the number of samples collected. Guidelines are presented to aid in selection of an appropriate sampling strategy based on user's project-specific considerations. Our experiences suggest these recommendations should allow implementation of a successful sampling strategy for most small watershed sampling projects with common sampling goals.
Overview of Risk Mitigation for Safety-Critical Computer-Based Systems
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report presents a high-level overview of a general strategy to mitigate the risks from threats to safety-critical computer-based systems. In this context, a safety threat is a process or phenomenon that can cause operational safety hazards in the form of computational system failures. This report is intended to provide insight into the safety-risk mitigation problem and the characteristics of potential solutions. The limitations of the general risk mitigation strategy are discussed and some options to overcome these limitations are provided. This work is part of an ongoing effort to enable well-founded assurance of safety-related properties of complex safety-critical computer-based aircraft systems by developing an effective capability to model and reason about the safety implications of system requirements and design.
New computing systems and their impact on structural analysis and design
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
A review is given of the recent advances in computer technology that are likely to impact structural analysis and design. The computational needs for future structures technology are described. The characteristics of new and projected computing systems are summarized. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism. The strategy is designed for computers with a shared memory and a small number of powerful processors (or a small number of clusters of medium-range processors). It is based on approximating the response of the structure by a combination of symmetric and antisymmetric response vectors, each obtained using a fraction of the degrees of freedom of the original finite element model. The strategy was implemented on the CRAY X-MP/4 and the Alliant FX/8 computers. For nonlinear dynamic problems on the CRAY X-MP with four CPUs, it resulted in an order of magnitude reduction in total analysis time, compared with the direct analysis on a single-CPU CRAY X-MP machine.
Derivative Trade Optimizing Model Utilizing GP Based on Behavioral Finance Theory
NASA Astrophysics Data System (ADS)
Matsumura, Koki; Kawamoto, Masaru
This paper proposes a new technique that constructs strategy trees for derivative (option) trading investment decisions based on behavioral finance theory and optimizes them using evolutionary computation, in order to achieve high profitability. The strategy tree uses technical analysis based on statistical, experience-based techniques for the investment decision. The trading model is represented by various technical indexes, and the strategy tree is optimized by genetic programming (GP), one of the evolutionary computation methods. Moreover, this paper proposes a method that uses prospect theory from behavioral finance to set a psychological bias for profit and loss, and attempts to select the appropriate option strike price for higher investment efficiency. As a result, this technique produced good results and demonstrated the effectiveness of the trading model through the optimized dealing strategy.
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons
Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang
2016-01-01
Unlike processors in our current generation of computer hardware, networks of neurons in the brain apply an event-based processing strategy, in which short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
A Novel Approach for Creating Activity-Aware Applications in a Hospital Environment
NASA Astrophysics Data System (ADS)
Bardram, Jakob E.
Context-aware and activity-aware computing has been proposed as a way to adapt the computer to the user’s ongoing activity. However, deductively moving from physical context - like location - to establishing human activity has proved difficult. This paper proposes a novel approach to activity-aware computing. Instead of inferring activities, this approach enables the user to explicitly model their activity, and then use sensor-based events to create, manage, and use these computational activities adjusted to a specific context. This approach was crafted through a user-centered design process in collaboration with a hospital department. We propose three strategies for activity-awareness: context-based activity matching, context-based activity creation, and context-based activity adaptation. We present the implementation of these strategies and present an experimental evaluation of them. The experiments demonstrate that rather than considering context as information, context can be a relational property that links ’real-world activities’ with their ’computational activities’.
ERIC Educational Resources Information Center
Dahlgren, Madeleine Abrandt
2000-01-01
Compares the role of course objectives in relation to students' study strategies in problem-based learning (PBL). Results comprise data from three PBL programs at Linkopings University (Sweden), in physiotherapy, psychology, and computer engineering. Faculty provided course objectives to function as supportive structures and guides for students'…
An Analogy-Based Computer Tutor for Remediating Physics Misconceptions. Draft.
ERIC Educational Resources Information Center
Murray, Tom; And Others
This paper evaluates the strengths and limitations of a computer tutor designed to help students understand physics concepts. The tutor uses a teaching strategy called "bridging analogies" that previous research has demonstrated to be successful in one-to-one tutoring. The strategy is designed to remedy misconceptions by appealing to existing…
Technology and Current Reading/Literacy Assessment Strategies
ERIC Educational Resources Information Center
Balajthy, Ernest
2007-01-01
Computer-based technologies offer promise as a means to assess students and provide teachers with better understandings of their students' achievement. This article describes recent developments in computer-based and web-based reading and literacy assessment, focusing on assessment administration, information management, and report creation. In…
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
ERIC Educational Resources Information Center
Shadiev, Rustam; Hwang, Wu-Yuin; Yeh, Shih-Ching; Yang, Stephen J. H.; Wang, Jing-Liang; Han, Lin; Hsu, Guo-Liang
2014-01-01
This study aimed to investigate the effectiveness of unidirectional and reciprocal teaching strategies on programming learning supported by a web-based learning system (VPen); in particular, how differently effective these two teaching strategies would be. In this study, novice programmers were exposed to three different conditions: 1) applying no…
NASA Astrophysics Data System (ADS)
Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter
2017-01-01
The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.
Zhang, Yeqing; Wang, Meiling; Li, Yafeng
2018-01-01
For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301
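A hedged illustration of the acquisition metric described above, not the paper's code: FFT-based circular correlation of the carrier-wiped signal over a grid of Doppler bins, with the detection statistic taken as the ratio of the highest to the second-highest cell in the search space. The inputs and bin grid are assumptions.

```python
# Hedged sketch: parallel code-phase search acquisition with a peak-ratio metric.
# `received` and `code` must have the same length and sampling rate `fs`;
# resampling the signal to a lower fs shrinks n and hence the FFT cost.
import numpy as np

def acquire(received, code, doppler_bins, fs):
    n = len(received)
    t = np.arange(n) / fs
    code_fft = np.conj(np.fft.fft(code))
    corr = np.empty((len(doppler_bins), n))
    for i, fd in enumerate(doppler_bins):
        wiped = received * np.exp(-2j * np.pi * fd * t)      # carrier wipe-off
        corr[i] = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft)) ** 2
    flat = np.sort(corr, axis=None)
    ratio = flat[-1] / flat[-2]                              # highest / second-highest cell
    fd_idx, tau_idx = np.unravel_index(np.argmax(corr), corr.shape)
    return ratio, doppler_bins[fd_idx], tau_idx              # compare ratio to a threshold
```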
ERIC Educational Resources Information Center
Worrell, Jamie; Duffy, Mary Lou; Brady, Michael P.; Dukes, Charles; Gonzalez-DeHass, Alyssa
2016-01-01
Many schools use computer-based testing to measure students' progress for end-of-the-year and statewide assessments. There is little research to support whether computer-based testing accurately reflects student progress, particularly among students with learning, performance, and generalization difficulties. This article summarizes an…
ERIC Educational Resources Information Center
Hannafin, Robert D.; Foshay, Wellesley R.
2008-01-01
Patriot High School (PHS) adopted a remediation strategy to help its 10th-grade students at risk of failing the Math portion of MCAS, the state's end of year competency exam. The centerpiece of that strategy was a computer-based instructional (CBI) course. PHS used a commercially available CBI product to align the course content with the…
Visual Displays and Contextual Presentations in Computer-Based Instruction.
ERIC Educational Resources Information Center
Park, Ok-choon
1998-01-01
Investigates the effects of two instructional strategies, visual display (animation, and static graphics with and without motion cues) and contextual presentation, in the acquisition of electronic troubleshooting skills using computer-based instruction. Study concludes that use of visual displays and contextual presentation be based on the…
A hybrid computational strategy to address WGS variant analysis in >5000 samples.
Huang, Zhuoyi; Rustagi, Navin; Veeraraghavan, Narayanan; Carroll, Andrew; Gibbs, Richard; Boerwinkle, Eric; Venkata, Manjunath Gorentla; Yu, Fuli
2016-09-10
The decreasing costs of sequencing are driving the need for cost-effective and real-time variant calling of whole genome sequencing data. The scale of these projects is far beyond the capacity of the typical computing resources available to most research labs. Other infrastructures, such as the AWS cloud environment and supercomputers, also have limitations that make large-scale joint variant calling infeasible: infrastructure-specific variant calling strategies either fail to scale up to large datasets or abandon joint calling altogether. We present a high throughput framework including multiple variant callers for single nucleotide variant (SNV) calling, which leverages a hybrid computing infrastructure consisting of the AWS cloud, supercomputers and local high performance computing infrastructures. We present a novel binning approach for large scale joint variant calling and imputation which can scale up to over 10,000 samples while producing SNV callsets with high sensitivity and specificity. As a proof of principle, we present results of analysis on the Cohorts for Heart And Aging Research in Genomic Epidemiology (CHARGE) WGS freeze 3 dataset, in which joint calling, imputation and phasing of over 5300 whole genome samples was completed in under 6 weeks using four state-of-the-art callers. The callers used were SNPTools, GATK-HaplotypeCaller, GATK-UnifiedGenotyper and GotCloud. We used Amazon AWS, a 4000-core in-house cluster at Baylor College of Medicine, IBM power PC Blue BioU at Rice and Rhea at Oak Ridge National Laboratory (ORNL) for the computation. AWS was used for joint calling of 180 TB of BAM files, and the ORNL and Rice supercomputers were used for the imputation and phasing step. All other steps were carried out on the local compute cluster. The entire operation used 5.2 million core hours and transferred only a total of 6 TB of data across the platforms. Even with increasing sizes of whole genome datasets, ensemble joint calling of SNVs for low coverage data can be accomplished in a scalable, cost-effective and fast manner by using heterogeneous computing platforms without compromising the quality of variants.
Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling
2016-05-01
Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image based on the local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns in a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships causes poor performance when capturing discriminative features for complex samples, such as medical images obtained by microscope. To address this problem, in this paper we propose a novel method to improve local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations performed on four medical datasets shows that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
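An illustrative sketch of a standard 8-neighbour LBP code computed at a per-pixel radius; the paper's rule for adapting each pixel's radius is not reproduced here, so radius_map is a placeholder for whatever that rule yields.

```python
# Hedged sketch: 8-neighbour LBP codes with a per-pixel radius supplied in
# `radius_map` (nearest-neighbour sampling is used instead of interpolation).
import numpy as np

def lbp_codes(img, radius_map):
    h, w = img.shape
    angles = 2 * np.pi * np.arange(8) / 8
    codes = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            r = radius_map[y, x]
            code = 0
            for k, a in enumerate(angles):
                ny = int(round(y + r * np.sin(a)))
                nx = int(round(x + r * np.cos(a)))
                if 0 <= ny < h and 0 <= nx < w and img[ny, nx] >= img[y, x]:
                    code |= 1 << k
            codes[y, x] = code
    return codes

# Tiny demo: a spatial-adjacent-histogram style representation could then
# concatenate per-block histograms of these codes, e.g. np.histogram(block, bins=256).
img = np.arange(64, dtype=float).reshape(8, 8)
codes = lbp_codes(img, np.full((8, 8), 1.0))
print(codes.shape)
```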
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01) with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
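A hedged sketch of the kind of two-compartment IV-infusion model described above; the clearance, volume, dose, and schedule values below are illustrative placeholders, not the published population estimates.

```python
# Hedged sketch: two-compartment IV-infusion kinetics and an AUC estimate.
# Parameter values are hypothetical, for illustration only.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

CL, V1, Q, V2 = 25.0, 15.0, 30.0, 20.0     # L/h, L (hypothetical)
dose, t_inf = 280.0, 0.5                   # mg, h (hypothetical)

def rhs(t, y):
    a1, a2 = y                              # drug amounts in central/peripheral compartments
    rate_in = dose / t_inf if t <= t_inf else 0.0
    da1 = rate_in - (CL / V1) * a1 - (Q / V1) * a1 + (Q / V2) * a2
    da2 = (Q / V1) * a1 - (Q / V2) * a2
    return [da1, da2]

t_eval = np.linspace(0, 6, 601)
sol = solve_ivp(rhs, [0, 6], [0.0, 0.0], t_eval=t_eval, max_step=0.01)
conc = sol.y[0] / V1                        # central concentration, mg/L
auc = trapezoid(conc, t_eval)               # mg*h/L over 0-6 h
print(f"AUC 0-6 h ~ {auc:.1f} mg*h/L")
```

In a sparse-sampling workflow, concentrations at the few optimal times would be combined with the population model by Bayesian estimation to recover an individual AUC.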
St. Onge, K. R.; Palmé, A. E.; Wright, S. I.; Lascoux, M.
2012-01-01
Most species have at least some level of genetic structure. Recent simulation studies have shown that it is important to consider population structure when sampling individuals to infer past population history. The relevance of the results of these computer simulations for empirical studies, however, remains unclear. In the present study, we use DNA sequence datasets collected from two closely related species with very different histories, the selfing species Capsella rubella and its outcrossing relative C. grandiflora, to assess the impact of different sampling strategies on summary statistics and the inference of historical demography. Sampling strategy did not strongly influence the mean values of Tajima’s D in either species, but it had some impact on the variance. The general conclusions about demographic history were comparable across sampling schemes even when resampled data were analyzed with approximate Bayesian computation (ABC). We used simulations to explore the effects of sampling scheme under different demographic models. We conclude that when sequences from modest numbers of loci (<60) are analyzed, the sampling strategy is generally of limited importance. The same is true under intermediate or high levels of gene flow (4Nm > 2–10) in models in which global expansion is combined with either local expansion or hierarchical population structure. Although we observe a less severe effect of sampling than predicted under some earlier simulation models, our results should not be seen as an encouragement to neglect this issue. In general, a good coverage of the natural range, both within and between populations, will be needed to obtain a reliable reconstruction of a species’s demographic history, and in fact, the effect of sampling scheme on polymorphism patterns may itself provide important information about demographic history. PMID:22870403
Cisco Networking Academy Program for high school students: Formative & summative evaluation
NASA Astrophysics Data System (ADS)
Cranford-Wesley, Deanne
This study examined the effectiveness of the Cisco Network Technology Program in enhancing students' technology skills as measured by classroom strategies, student motivation, student attitude, and student learning. Qualitative and quantitative methods were utilized to determine the effectiveness of this program. The study focused on two 11th grade classrooms at Hamtramck High School. Hamtramck, an inner-city community located in Detroit, is racially and ethnically diverse. The majority of students speak English as a second language; more than 20 languages are represented in the school district. More than 70% of the students are considered to be economically at risk. Few students have computers at home, and their access to the few computers at school is limited. Purposive sampling was conducted for this study. The sample consisted of 40 students, all of whom were trained in Cisco Networking Technologies. The researcher examined viable learning strategies in teaching a Cisco Networking class that focused on a web-based approach. Findings revealed that the Cisco Networking Academy Program was an excellent vehicle for teaching networking skills and, therefore, helping to enhance computer skills for the participating students. However, only a limited number of students were able to participate in the program, due to limited computer labs and lack of qualified teaching personnel. In addition, the cumbersome technical language posed an obstacle to students' success in networking. Laboratory assignments were preferred by 90% of the students over lecture and PowerPoint presentations. Practical applications, lab projects, interactive assignments, PowerPoint presentations, lectures, discussions, readings, research, and assessment all helped to increase student learning and proficiency and to enrich the classroom experience. Classroom strategies are crucial to student success in the networking program. Equipment must be updated and utilized to ensure that students are applying practical skills to networking concepts. The results also suggested a high level of motivation and retention in student participants. Students in both classes scored 80% proficiency on the Achievement Motivation Profile Assessment. The identified standard proficiency score was 70%, and both classes exceeded the standard.
A vectorized Lanczos eigensolver for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1990-01-01
The computational strategies used to implement a Lanczos-based-method eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes and the tools and strategies that best exploit the computer resources are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
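A compact, hedged sketch of the Lanczos recurrence that such an eigensolver vectorizes; the matrix, problem size, and iteration count below are illustrative, and the operator is touched only through matrix-vector products, which is where most of the compute time goes.

```python
# Hedged sketch: Lanczos iteration with full reorthogonalization; Ritz values
# of the small tridiagonal matrix approximate extreme eigenvalues of A.
import numpy as np

def lanczos(matvec, n, m, rng=np.random.default_rng(0)):
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = matvec(Q[:, j])
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)                    # Ritz values (ascending)

A = np.diag(np.arange(1.0, 501.0))                  # toy symmetric matrix
print(lanczos(lambda v: A @ v, 500, 40)[:5])        # approximations to the smallest eigenvalues
```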
NASA Astrophysics Data System (ADS)
Wang, Tao; He, Bin
2004-03-01
The recognition of mental states during motor imagery tasks is crucial for EEG-based brain computer interface research. We have developed a new algorithm by means of frequency decomposition and a weighting synthesis strategy for recognizing imagined right- and left-hand movements. A frequency range from 5 to 25 Hz was divided into 20 band bins for each trial, and the corresponding envelopes of filtered EEG signals for each trial were extracted as a measure of instantaneous power at each frequency band. The dimensionality of the feature space was reduced from 200 (corresponding to 2 s) to 3 by down-sampling of the envelopes of the feature signals and subsequently applying principal component analysis. The linear discriminant analysis algorithm was then used to classify the features, due to its generalization capability. Each frequency band bin was weighted by a function determined according to the classification accuracy during the training process. The present classification algorithm was applied to a dataset of nine human subjects, and achieved a success rate of classification of 90% in training and 77% in testing. The present promising results suggest that the present classification algorithm can be used in initiating a general-purpose mental state recognition based on motor imagery tasks.
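A hedged sketch of the processing chain described above: narrow band-pass filtering, envelope extraction via the Hilbert transform, down-sampling, PCA to a low-dimensional feature vector, and LDA classification. The filter settings, data shapes, and random toy data are assumptions; the per-band accuracy weighting is not reproduced.

```python
# Hedged sketch: band envelopes -> down-sample -> PCA(3) -> LDA, on toy data.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

fs = 100                                   # Hz (assumed sampling rate)
bands = [(f, f + 1) for f in range(5, 25)] # twenty 1-Hz bins covering 5-25 Hz

def band_envelopes(trial):                 # trial: (n_samples,) single channel
    feats = []
    for lo, hi in bands:
        sos = butter(2, [lo, hi], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, trial)))
        feats.append(env[::10])            # down-sample the envelope
    return np.concatenate(feats)

rng = np.random.default_rng(1)
X_raw = rng.standard_normal((60, 200))     # 60 toy trials of 2 s at 100 Hz
y = rng.integers(0, 2, 60)                 # imagined left vs right hand (toy labels)
X = np.array([band_envelopes(t) for t in X_raw])

clf = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```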
ERIC Educational Resources Information Center
Bianco, Manuela; Collis, Betty
2004-01-01
This study reports the results of the formative evaluations of two computer-supported tools and the associated strategies for their use. Tools and strategies embedded in web-based courses can increase a supervisor's involvement in helping employees transfer learning to the workplace. Issues relating to characteristics of the tools and strategies…
ERIC Educational Resources Information Center
Khalil, Mohammed K.; Paas, Fred; Johnson, Tristan E.; Su, Yung K.; Payer, Andrew F.
2008-01-01
This research is an effort to best utilize the interactive anatomical images for instructional purposes based on cognitive load theory. Three studies explored the differential effects of three computer-based instructional strategies that use anatomical cross-sections to enhance the interpretation of radiological images. These strategies include:…
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
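The positioning of the fine-physics segments rests on standard free-energy perturbation (FEP) and linear response approximation (LRA) expressions; the generic form below uses potentials E_coarse and E_fine and inverse temperature β as assumed notation, not symbols taken from the paper.

```latex
% FEP estimate of the coarse->fine free-energy shift and its LRA average.
\Delta G_{\mathrm{coarse}\to\mathrm{fine}}
  = -\frac{1}{\beta}\,
    \ln\Big\langle e^{-\beta\,\left(E_{\mathrm{fine}}-E_{\mathrm{coarse}}\right)}\Big\rangle_{\mathrm{coarse}}
  \;\approx\;
  \frac{1}{2}\Big(
     \big\langle E_{\mathrm{fine}}-E_{\mathrm{coarse}}\big\rangle_{\mathrm{coarse}}
   + \big\langle E_{\mathrm{fine}}-E_{\mathrm{coarse}}\big\rangle_{\mathrm{fine}}\Big)
```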
ERIC Educational Resources Information Center
Yang, Tzu-Chi; Hwang, Gwo-Jen; Yang, Stephen J. H.; Hwang, Gwo-Haur
2015-01-01
Computer programming is an important skill for engineering and computer science students. However, teaching and learning programming concepts and skills has been recognized as a great challenge to both teachers and students. Therefore, the development of effective learning strategies and environments for programming courses has become an important…
The Effect of Teacher Involvement on Student Performance in a Computer-Based Science Simulation.
ERIC Educational Resources Information Center
Waugh, Michael L.
Designed to investigate whether or not science teachers can positively influence student achievement in, and attitude toward, science, this study focused on a specific teaching strategy and utilization of a computer-based simulation. The software package used in the study was the simulation, Volcanoes, by Earthware Computer Services. The sample…
Jodice, Patrick G.R.; Garman, S.L.; Collopy, Michael W.
2001-01-01
Marbled Murrelets (Brachyramphus marmoratus) are threatened seabirds that nest in coastal old-growth coniferous forests throughout much of their breeding range. Currently, observer-based audio-visual surveys are conducted at inland forest sites during the breeding season primarily to determine nesting distribution and breeding status and are being used to estimate temporal or spatial trends in murrelet detections. Our goal was to assess the feasibility of using audio-visual survey data for such monitoring. We used an intensive field-based survey effort to record daily murrelet detections at seven survey stations in the Oregon Coast Range. We then used computer-aided resampling techniques to assess the effectiveness of twelve survey strategies with varying scheduling and a sampling intensity of 4-14 surveys per breeding season to estimate known means and SDs of murrelet detections. Most survey strategies we tested failed to provide estimates of detection means and SDs that were within ±20% of actual means and SDs. Estimates of daily detections were, however, frequently estimated to within ±50% of field data with sampling efforts of 14 days/breeding season. Additional resampling analyses with statistically generated detection data indicated that the temporal variability in detection data had a great effect on the reliability of the mean and SD estimates calculated from the twelve survey strategies, while the value of the mean had little effect. Effectiveness at estimating multi-year trends in detection data was similarly poor, indicating that audio-visual surveys might be reliably used to estimate annual declines in murrelet detections of the order of 50% per year.
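A hedged sketch of the kind of computer-aided resampling used above: draw a limited number of survey days per season from a full daily-detection record and ask how often the resulting mean lands within ±20% (or ±50%) of the season mean. The daily counts below are synthetic, not field data.

```python
# Hedged sketch: resample k survey days per season and estimate how often the
# sample mean falls within a tolerance of the full-season mean.
import numpy as np

rng = np.random.default_rng(2)
daily_detections = rng.negative_binomial(n=2, p=0.15, size=120)   # one synthetic season

def coverage(daily, n_surveys, tol, n_boot=5000):
    true_mean = daily.mean()
    hits = 0
    for _ in range(n_boot):
        sample = rng.choice(daily, size=n_surveys, replace=False)
        hits += abs(sample.mean() - true_mean) <= tol * true_mean
    return hits / n_boot

for k in (4, 8, 14):
    print(k, "surveys:",
          f"within 20%: {coverage(daily_detections, k, 0.20):.2f},",
          f"within 50%: {coverage(daily_detections, k, 0.50):.2f}")
```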
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request particularly requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of a day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
Conway, J; Sharkey, R
2002-10-01
The Faculty of Nursing, University of Newcastle, Australia, has been keen to initiate strategies that enhance student learning and nursing practice. Two such strategies are problem based learning (PBL) and clinical practice. The Faculty has maintained a comparatively high proportion of the undergraduate hours in the clinical setting at a time when financial constraints suggest that simulations and on-campus laboratory experiences may be less expensive. Increasingly, computer-based technologies are becoming sufficiently refined to support the exploration of nursing practice in a non-traditional lecture/tutorial environment. In 1998, a group of faculty members proposed that computer mediated instruction would provide an opportunity for partnership between students, academics and clinicians that would promote more positive outcomes for all and maintain the integrity of the PBL approach. This paper discusses the similarities between problem based and practice based learning and presents the findings of an evaluative study of the implementation of a practice based learning model that uses computer mediated communication to promote integration of practice experiences with the broader goals of the undergraduate curriculum.
Sample allocation balancing overall representativeness and stratum precision.
Diaz-Quijano, Fredi Alexander
2018-05-07
In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata. Researchers must decide between prioritizing overall representativeness and the precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to the stratum population; an equal sample for all strata; and allocation proportional to the natural logarithm, cubic root, and square root of the stratum population. This study considered the fact that, for a preset sample size, the dispersion index of stratum sampling fractions is correlated with the population estimator error, while the dispersion index of stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. The balance and efficiency of the strategies changed depending on the overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in turn, an equal sample for each stratum; allocation proportional to the logarithm, the cubic root, and the square root of the stratum population; and allocation proportional to the stratum population. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to keep both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
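A hedged sketch of the allocation rules compared above: distribute a preset total sample among strata in proportion to the stratum population, its square root, cubic root, or logarithm, or equally. The stratum populations are illustrative, and the coefficient of variation of the sampling fractions is used here as a simple stand-in for the paper's dispersion index.

```python
# Hedged sketch: compare allocation rules for a fixed total sample size.
import numpy as np

N = np.array([120_000, 45_000, 9_000, 2_500, 800])   # illustrative stratum populations
n_total = 2_000

rules = {
    "proportional": N.astype(float),
    "equal":        np.ones_like(N, dtype=float),
    "sqrt":         np.sqrt(N),
    "cbrt":         np.cbrt(N),
    "log":          np.log(N),
}

for name, w in rules.items():
    n_h = np.round(n_total * w / w.sum()).astype(int)   # stratum sample sizes
    f_h = n_h / N                                        # stratum sampling fractions
    print(f"{name:12s} n_h={n_h}  CV of f_h = {f_h.std() / f_h.mean():.2f}")
```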
An Analogy-Based Computer Tutor for Remediating Physics Misconceptions.
ERIC Educational Resources Information Center
Murray, Tom; And Others
1990-01-01
Describes an intelligent tutoring system designed to help students remedy misconceptions of physics concepts based on a teaching strategy called bridging analogies. Highlights include tutoring strategies; misconceptions in science education; the example situation network; confidence checking; formative evaluation with college students, including…
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.
2018-05-01
We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, such as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude within [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.
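A minimal sketch, not the paper's PDD/mCov-SI implementation: fit a polynomial surrogate by least-squares regression on random samples and read off main-effect variance shares for independent uniform inputs, exploiting the orthogonality of Legendre polynomials. The toy model, degree, and sample size are assumptions.

```python
# Hedged sketch: least-squares Legendre surrogate and main-effect variance shares.
import numpy as np
from numpy.polynomial import legendre

def model(x):                                  # toy model on [-1, 1]^2
    return x[:, 0] + 0.5 * x[:, 0] ** 2 + 2.0 * x[:, 1] + 0.1 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(400, 2))
y = model(X)

deg = 3
cols, labels = [np.ones(len(X))], [("const", 0)]
for j in range(X.shape[1]):
    for d in range(1, deg + 1):
        cols.append(legendre.legval(X[:, j], np.eye(deg + 1)[d]))  # P_d(x_j)
        labels.append((f"x{j}", d))
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Legendre polynomials are orthogonal under the uniform measure on [-1, 1]
# with Var[P_d] = 1 / (2 d + 1), so main-effect variances add term by term.
var_terms = {}
for (name, d), c in zip(labels, coef):
    if d > 0:
        var_terms[name] = var_terms.get(name, 0.0) + c ** 2 / (2 * d + 1)
total = sum(var_terms.values())
for name, v in var_terms.items():
    print(name, "main-effect share ~", round(v / total, 3))
```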
Computer-Based Career Interventions.
ERIC Educational Resources Information Center
Mau, Wei-Cheng
The possible utilities and limitations of computer-assisted career guidance systems (CACG) have been widely discussed although the effectiveness of CACG has not been systematically considered. This paper investigates the effectiveness of a theory-based CACG program, integrating Sequential Elimination and Expected Utility strategies. Three types of…
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare event probabilities in biochemical systems. Obtaining such a probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
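A hedged sketch of subset simulation with a component-wise (modified) Metropolis kernel in standard-normal space; the mapping through the stochastic simulation algorithm is replaced here by a simple analytic limit-state function, so this is a toy illustration of the estimator structure, not the paper's biochemical application.

```python
# Hedged sketch: subset simulation estimate of P(g(X) > b) for a toy g.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
dim, b, p0, n = 10, 12.0, 0.1, 2000
g = lambda x: x.sum()                       # toy limit-state function

def modified_metropolis(seed, level, n_steps=10):
    x = seed.copy()
    for _ in range(n_steps):
        cand = x.copy()
        for i in range(dim):                # component-wise proposal and acceptance
            xi = x[i] + rng.normal(0.0, 1.0)
            if rng.random() < np.exp(-0.5 * (xi**2 - x[i]**2)):
                cand[i] = xi
        if g(cand) > level:                 # reject moves that leave the subset
            x = cand
    return x

X = rng.standard_normal((n, dim))
G = np.array([g(x) for x in X])
p_est = 1.0
while True:
    level = np.quantile(G, 1 - p0)          # intermediate threshold (p0-quantile)
    if level >= b:
        p_est *= np.mean(G > b)
        break
    p_est *= p0
    seeds = X[G > level]
    X = np.array([modified_metropolis(seeds[i % len(seeds)], level) for i in range(n)])
    G = np.array([g(x) for x in X])

print("subset-simulation estimate:", p_est, " exact:", norm.sf(b / np.sqrt(dim)))
```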
Iterative Importance Sampling Algorithms for Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicability of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.
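A hedged sketch of the iteration described above for a Gaussian proposal: draw samples, compute self-normalized importance weights against an unnormalized posterior, and refit the proposal mean and covariance from the weighted samples. The banana-shaped target below is a toy stand-in, not one of the paper's test problems.

```python
# Hedged sketch: adaptive Gaussian importance sampling on a toy target.
import numpy as np

rng = np.random.default_rng(5)

def log_post(x):                       # toy unnormalized log-posterior (banana shape)
    return -0.5 * (x[:, 0] ** 2 + 5.0 * (x[:, 1] - x[:, 0] ** 2) ** 2)

mean, cov, n = np.zeros(2), 4.0 * np.eye(2), 4000
for it in range(8):
    X = rng.multivariate_normal(mean, cov, size=n)
    diff = X - mean
    log_q = (-0.5 * np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
             - 0.5 * np.log(np.linalg.det(cov)))        # proposal log-density (up to a constant)
    log_w = log_post(X) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                         # self-normalized weights
    mean = w @ X                                         # weighted mean
    cov = (X - mean).T @ ((X - mean) * w[:, None]) + 1e-9 * np.eye(2)
    ess = 1.0 / np.sum(w ** 2)
    print(f"iteration {it}: effective sample size = {ess:.0f}")
```

Because every sample in an iteration is independent, each batch can be evaluated in parallel across cores, which is the scaling advantage the abstract points to.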
A computer program for sample size computations for banding studies
Wilson, K.R.; Nichols, J.D.; Hines, J.E.
1989-01-01
Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
Gradient-free MCMC methods for dynamic causal modelling
Sengupta, Biswa; Friston, Karl J.; Penny, Will D.
2015-03-14
Here, we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density - albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler).
ERIC Educational Resources Information Center
Hung, Yu-Wan; Higgins, Steve
2016-01-01
This study investigates the different learning opportunities enabled by text-based and video-based synchronous computer-mediated communication (SCMC) from an interactionist perspective. Six Chinese-speaking learners of English and six English-speaking learners of Chinese were paired up as tandem (reciprocal) learning dyads. Each dyad participated…
Why CBI? An Examination of the Case for Computer-Based Instruction.
ERIC Educational Resources Information Center
Dean, Peter M.
1977-01-01
Discussion of the use of computers in instruction includes the relationship of theory to practice, the interactive nature of computer instruction, an overview of the Keller Plan, cost considerations, strategy for use of computers in instruction and training, and a look at examination procedure. (RAO)
Provable classically intractable sampling with measurement-based computation in constant time
NASA Astrophysics Data System (ADS)
Sanders, Stephen; Miller, Jacob; Miyake, Akimasa
We present a constant-time measurement-based quantum computation (MQC) protocol to perform a classically intractable sampling problem. We sample from the output probability distribution of a subclass of the instantaneous quantum polynomial time circuits introduced by Bremner, Montanaro and Shepherd. In contrast with the usual circuit model, our MQC implementation includes additional randomness due to byproduct operators associated with the computation. Despite this additional randomness we show that our sampling task cannot be efficiently simulated by a classical computer. We extend previous results to verify the quantum supremacy of our sampling protocol efficiently using only single-qubit Pauli measurements. Center for Quantum Information and Control, Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131, USA.
A Computer Assisted Method to Track Listening Strategies in Second Language Learning
ERIC Educational Resources Information Center
Roussel, Stephanie
2011-01-01
Many studies about listening strategies are based on what learners report while listening to an oral message in the second language (Vandergrift, 2003; Graham, 2006). By recording a video of the computer screen while L2 learners (L1 French) were listening to an MP3-track in German, this study uses a novel approach and recent developments in…
ERIC Educational Resources Information Center
Yue, Siwei; Wang, Xuefei
2014-01-01
Based on a corpus of 296 authentic business emails produced in computer-mediated business communication from 7 Chinese international trade enterprises, this paper addresses the language strategy applied in CMC (Computer-mediated Communication) by examining the use of hedges. With the emergence of internet, a wider range of hedges are applied…
Intelligent Computer-Aided Instruction for Medical Diagnosis
Clancey, William J.; Shortliffe, Edward H.; Buchanan, Bruce G.
1979-01-01
An intelligent computer-aided instruction (ICAI) program, named GUIDON, has been developed for teaching infectious disease diagnosis.* ICAI programs use artificial intelligence techniques for representing both subject material and teaching strategies. This paper briefly outlines the difference between traditional instructional programs and ICAI. We then illustrate how GUIDON makes contributions in areas important to medical CAI: interacting with the student in a mixed-initiative dialogue (including the problems of feedback and realism), teaching problem-solving strategies, and assembling a computer-based curriculum.
SIGI: A Computer-Based System of Interactive Guidance and Information.
ERIC Educational Resources Information Center
Educational Testing Service, Princeton, NJ.
This pamphlet describes SIGI, a computer-based System of Interactive Guidance and Information designed to help students in community and junior colleges make career decisions. SIGI is based on a humanistic philosophy, a theory of guidance that emphasizes individual values, a vast store of occupational data, and a strategy for processing…
ERIC Educational Resources Information Center
Culp, G. H.; And Others
Over 100 interactive computer programs for use in general and organic chemistry at the University of Texas at Austin have been prepared. The rationale for the programs is based upon the belief that computer-assisted instruction (CAI) can improve education by, among other things, freeing teachers from routine tasks, measuring entry skills,…
Farreny, Aida; Aguado, Jaume; Corbera, Silvia; Ochoa, Susana; Huerta-Ramos, Elena; Usall, Judith
2016-08-01
Our aim was to examine predictive variables associated with the improvement in cognitive, clinical, and functional outcomes after outpatient participation in REPYFLEC strategy-based Cognitive Remediation (CR) group training. In addition, we investigated which factors might be associated with some long-lasting effects at 6 months' follow-up. Predictors of improvement after CR were studied in a sample of 29 outpatients with schizophrenia. Partial correlations were computed between targeted variables and outcomes of response to explore significant associations. Subsequently, we built linear regression models for each outcome variable and predictors of improvement. The improvement in negative symptoms at posttreatment was linked to faster performance in the Trail Making Test B. Disorganization and cognitive symptoms were related to changes in executive function at follow-up. Lower levels of positive symptoms were related to durable improvements in life skills. Levels of symptoms and cognition were associated with improvements following CR, but the pattern of resulting associations was nonspecific.
Research on Rigid Body Motion Tracing in Space based on NX MCD
NASA Astrophysics Data System (ADS)
Wang, Junjie; Dai, Chunxiang; Shi, Karen; Qin, Rongkang
2018-03-01
In MCD (Mechatronics Concept Designer), a module of the Siemens industrial design software UG (Unigraphics NX), users can define rigid bodies and kinematic joints to make objects move according to an existing plan in simulation. At this stage, users may wish to see, intuitively, the path of selected points on the moving object. In response to this requirement, this paper computes the pose from the transformation matrix available from the solver engine and then fits these sampling points with a B-spline curve. Meanwhile, combined with the actual constraints of rigid bodies, the traditional equal-interval sampling strategy was optimized. The results show that this method satisfies the requirement and compensates for the deficiencies of the traditional sampling method. Users can still edit and model on the resulting 3D curve. The expected result has been achieved.
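A hedged sketch of the curve-fitting step described above: fit a cubic B-spline through 3D positions of a tracked point sampled from successive rigid-body poses. A synthetic helix stands in for the positions that would come from the MCD solver engine.

```python
# Hedged sketch: B-spline fit through sampled 3D positions of a traced point.
import numpy as np
from scipy.interpolate import splprep, splev

t = np.linspace(0, 4 * np.pi, 40)                 # 40 sampled poses (synthetic helix)
pts = np.stack([np.cos(t), np.sin(t), 0.1 * t])   # x, y, z of the traced point

tck, u = splprep(pts, s=0.0, k=3)                 # interpolating cubic B-spline
dense = splev(np.linspace(0, 1, 400), tck)        # evaluate a smooth 3D curve
print(np.shape(dense))                            # (3, 400) points on the trace
```

A non-uniform sampling strategy would simply change which poses populate `pts`; the fitting step stays the same.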
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
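A minimal, hedged sketch of the stochastic EnKF analysis step that such a design method is coupled with; the linear toy forward model, shapes, and noise levels are illustrative, and the SD/DFS/RE design metrics themselves are not reproduced here.

```python
# Hedged sketch: one stochastic EnKF parameter update on a toy linear problem.
import numpy as np

rng = np.random.default_rng(6)

def enkf_update(ens, obs, obs_err_std, forward):
    # ens: (n_ens, n_param); forward maps parameters to predicted observations.
    pred = np.array([forward(m) for m in ens])        # (n_ens, n_obs)
    A = ens - ens.mean(axis=0)
    Y = pred - pred.mean(axis=0)
    n_ens = len(ens)
    C_my = A.T @ Y / (n_ens - 1)                      # parameter-observation covariance
    C_yy = Y.T @ Y / (n_ens - 1) + obs_err_std**2 * np.eye(len(obs))
    K = C_my @ np.linalg.inv(C_yy)                    # Kalman gain
    perturbed = obs + obs_err_std * rng.standard_normal((n_ens, len(obs)))
    return ens + (perturbed - pred) @ K.T

# Toy example: recover two parameters from three noisy linear observations.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
truth = np.array([2.0, -1.0])
obs = H @ truth + 0.05 * rng.standard_normal(3)
ens = rng.normal(0, 2, size=(200, 2))
ens_post = enkf_update(ens, obs, 0.05, lambda m: H @ m)
print("prior mean:", ens.mean(axis=0), " posterior mean:", ens_post.mean(axis=0))
```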
A Computer-Based Simulation of an Acid-Base Titration
ERIC Educational Resources Information Center
Boblick, John M.
1971-01-01
Reviews the advantages of computer simulated environments for experiments, referring in particular to acid-base titrations. Includes pre-lab instructions and a sample computer printout of a student's use of an acid-base simulation. Ten references. (PR)
NASA Astrophysics Data System (ADS)
Zacharias, Marios; Giustino, Feliciano
Electron-phonon interactions are of fundamental importance in the study of the optical properties of solids at finite temperatures. Here we present a new first-principles computational technique based on the Williams-Lax theory for performing predictive calculations of the optical spectra, including quantum zero-point renormalization and indirect absorption. The calculation of the Williams-Lax optical spectra is computationally challenging, as it involves the sampling over all possible nuclear quantum states. We develop an efficient computational strategy for performing "one-shot" finite-temperature calculations. These require only a single optimal configuration of the atomic positions. We demonstrate our methodology for the case of Si, C, and GaAs, yielding absorption coefficients in good agreement with experiment. This work opens the way for systematic calculations of optical spectra at finite temperature. This work was supported by the UK EPSRC (EP/J009857/1 and EP/M020517/) and the Leverhulme Trust (RL-2012-001), and the Graphene Flagship (EU-FP7-604391).
AdaBoost-based algorithm for network intrusion detection.
Hu, Weiming; Hu, Wei; Maybank, Steve
2008-04-01
Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
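A hedged sketch of the basic construction above, AdaBoost with decision stumps as weak classifiers, using scikit-learn, whose default weak learner is a depth-1 decision tree; the synthetic dataset stands in for network records, and the paper's categorical-feature handling and adaptable initial weights are not reproduced.

```python
# Hedged sketch: AdaBoost with decision-stump weak learners on synthetic data.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)              # stand-in for intrusion records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Default base estimator is a depth-1 decision tree (a decision stump).
clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```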
ERIC Educational Resources Information Center
Dalgarno, Barney; Kennedy, Gregor; Bennett, Sue
2014-01-01
Discovery-based learning designs incorporating active exploration are common within instructional software. However, researchers have highlighted empirical evidence showing that "pure" discovery learning is of limited value and strategies which reduce complexity and provide guidance to learners are important if potential learning…
Practical considerations in experimental computational sensing
NASA Astrophysics Data System (ADS)
Poon, Phillip K.
Computational sensing has demonstrated the ability to ameliorate or eliminate many trade-offs in traditional sensors. Rather than attempting to form a perfect image, sampling at the Nyquist rate, and reconstructing the signal of interest prior to post-processing, the computational sensor attempts to utilize a priori knowledge and active or passive coding of the signal of interest, combined with a variety of algorithms, to overcome those trade-offs or to improve various task-specific metrics. While it is a powerful approach to radically new sensor architectures, published research tends to focus on architecture concepts and positive results. Little attention is given to the practical issues faced when implementing computational sensing prototypes. I will discuss the various practical challenges that I encountered while developing three separate applications of computational sensors. The first is a compressive-sensing-based object tracking camera, the SCOUT, which exploits the sparsity of motion between consecutive frames while using no moving parts to create a pseudo-random shift-variant point-spread function. The second is a spectral imaging camera, the AFSSI-C, which uses a modified version of Principal Component Analysis with a Bayesian strategy to adaptively design spectral filters for direct spectral classification using a digital micro-mirror device (DMD) based architecture. The third demonstrates two separate architectures for spectral unmixing, one using an adaptive algorithm and one using a hybrid technique combining Maximum Noise Fraction with random filter selection, on a liquid-crystal-on-silicon based computational spectral imager, the LCSI. All of these applications demonstrate a variety of challenges that have been addressed or continue to challenge the computational sensing community. One issue is calibration, since many computational sensors require an inversion step and, in the case of compressive sensing, lack redundancy in the measurement data. Another issue is over-multiplexing: as more light is collected per sample, the finite dynamic range and quantization resolution can begin to degrade the recovery of the relevant information. A priori knowledge of the sparsity and/or other statistics of the signal or noise is often used by computational sensors to outperform their isomorphic counterparts. This is demonstrated in all three of the sensors I have developed. These challenges and others will be discussed using a case-study approach through these three applications.
Gradient-free MCMC methods for dynamic causal modelling.
Sengupta, Biswa; Friston, Karl J; Penny, Will D
2015-05-15
In this technical note we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density - albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler). Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
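A hedged sketch of the comparison metric used above, independent (effective) samples per unit compute time, for a simple random-walk Metropolis sampler on a toy 1D target; the neural mass model inversion and the other three samplers are not reproduced, and the autocorrelation-based ESS estimate is a common simplification.

```python
# Hedged sketch: random-walk Metropolis on a toy target, scored by ESS per second.
import time
import numpy as np

rng = np.random.default_rng(7)
log_p = lambda x: -0.5 * x ** 2                     # toy target (standard normal)

def rw_metropolis(n, step):
    x, chain = 0.0, np.empty(n)
    for i in range(n):
        prop = x + step * rng.standard_normal()
        if np.log(rng.random()) < log_p(prop) - log_p(x):
            x = prop
        chain[i] = x
    return chain

def ess(chain, max_lag=200):
    c = chain - chain.mean()
    var = c @ c / len(c)
    rho_sum = 0.0
    for lag in range(1, max_lag):
        rho = (c[:-lag] @ c[lag:]) / (len(c) * var)
        if rho < 0.05:                               # truncate at small autocorrelation
            break
        rho_sum += rho
    return len(chain) / (1 + 2 * rho_sum)

t0 = time.perf_counter()
chain = rw_metropolis(20_000, step=2.4)
dt = time.perf_counter() - t0
n_eff = ess(chain)
print(f"ESS = {n_eff:.0f}, ESS per second = {n_eff / dt:.0f}")
```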
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be expressed mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the usual solution is to convert the 2D problem into 1D via a Kronecker product, which sharply increases the size of the dictionary and the computational cost. In this paper, we introduce the 2D-SL0 algorithm into the image reconstruction. It is shown that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, we present the results of simulation experiments and demonstrate the effectiveness and feasibility of our method.
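A small sketch of the point about Kronecker products: a 2D measurement Y = A X Bᵀ can be applied with two modest matrix products, whereas the vectorized 1D formulation vec(Y) = (B ⊗ A) vec(X) materializes a far larger dictionary. Sizes are illustrative, and the 2D-SL0 reconstruction itself is not reproduced.

```python
# Hedged sketch: 2D measurement model versus the Kronecker (1D) formulation.
import numpy as np

rng = np.random.default_rng(8)
n_r, n_a = 256, 256                    # range / azimuth samples of the scene
m_r, m_a = 64, 64                      # randomly sub-sampled measurements
A = rng.standard_normal((m_r, n_r))    # range-dimension sensing matrix
B = rng.standard_normal((m_a, n_a))    # azimuth-dimension sensing matrix
X = np.zeros((n_r, n_a))
X[rng.integers(0, n_r, 20), rng.integers(0, n_a, 20)] = 1.0   # sparse toy scene

Y = A @ X @ B.T                        # 2D model: two small products
print("entries stored for A and B:", A.size + B.size)               # ~3.3e4
print("entries in the Kronecker dictionary:", (m_r * m_a) * (n_r * n_a))  # ~2.7e8
# np.allclose(np.kron(B, A) @ X.T.ravel(), Y.T.ravel()) would confirm the
# equivalence, but forming np.kron(B, A) at these sizes is exactly the cost
# that the 2D algorithm avoids.
```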
Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems
Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...
2017-11-07
We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features; it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.
Early detection of Alzheimer disease: methods, markers, and misgivings.
Green, R C; Clarke, V C; Thompson, N J; Woodard, J L; Letz, R
1997-01-01
There is at present no reliable predictive test for most forms of Alzheimer disease (AD). Although some information about future risk for disease is available in theory through ApoE genotyping, it is of limited accuracy and utility. Once neuroprotective treatments are available for AD, reliable early detection will become a key component of the treatment strategy. We recently conducted a pilot survey eliciting attitudes and beliefs toward an unspecified and hypothetical predictive test for AD. The survey was completed by a convenience sample of 176 individuals, aged 22-77, which was 75% female, 30% African-American, and of which 33% had a family member with AD. The survey revealed that 69% of this sample would elect to obtain predictive testing for AD if the test were 100% accurate. Individuals were more likely to desire predictive testing if they had an a priori belief that they would develop AD (p = 0.0001), had a lower educational level (p = 0.003), were worried that they would develop AD (p = 0.02), had a self-defined history of depression (p = 0.04), and had a family member with AD (p = 0.04). However, the desire for predictive testing was not significantly associated with age, gender, ethnicity, or income. The desire to obtain predictive testing for AD decreased as the assumed accuracy of the hypothetical test decreased. A better short-term strategy for early detection of AD may be computer-based neuropsychological screening of at-risk (older aged) individuals to identify very early cognitive impairment. Individuals identified in this manner could be referred for diagnostic evaluation and early cases of AD could be identified and treated. A new self-administered, touch-screen, computer-based, neuropsychological screening instrument called Neurobehavioral Evaluation System-3 is described, which may facilitate this type of screening.
Primary School Students' Attitudes towards Computer Based Testing and Assessment in Turkey
ERIC Educational Resources Information Center
Yurdabakan, Irfan; Uzunkavak, Cicek
2012-01-01
This study investigated the attitudes of primary school students towards computer based testing and assessment in terms of different variables. The sample for this research is primary school students attending a computer based testing and assessment application via CITO-OIS. The "Scale on Attitudes towards Computer Based Testing and…
Learning Strategies in Web-Supported Collaborative Project
ERIC Educational Resources Information Center
ChanLin, Lih-Juan
2012-01-01
Web-based learning promotes computer-mediated interaction and student-centred learning in most higher education institutions. To fulfil their academic requirements, students develop appropriate strategies to support learning. Purposes of this study were to: (1) examine the relationship between students' study strategies (assessed by Learning and…
The multimedia computer for office-based patient education: a systematic review.
Wofford, James L; Smith, Edward D; Miller, David P
2005-11-01
Use of the multimedia computer for education is widespread in schools and businesses, and yet computer-assisted patient education is rare. In order to explore the potential use of computer-assisted patient education in the office setting, we performed a systematic review of randomized controlled trials (search date April 2004 using MEDLINE and Cochrane databases). Of the 26 trials identified, outcome measures included clinical indicators (12/26, 46.1%), knowledge retention (12/26, 46.1%), health attitudes (15/26, 57.7%), level of shared decision-making (5/26, 19.2%), health services utilization (4/26, 17.6%), and costs (5/26, 19.2%). Four trials targeted patients with breast cancer, but the clinical issues were otherwise diverse. Reporting of the testing of randomization (76.9%) and appropriate analysis of main effect variables (70.6%) were more common than reporting of a reliable randomization process (35.3%), blinding of outcomes assessment (17.6%), or sample size definition (29.4%). We concluded that the potential for improving the efficiency of the office through computer-assisted patient education has been demonstrated, but better proof of the impact on clinical outcomes is warranted before this strategy is accepted in the office setting.
An improved multi-domain convolution tracking algorithm
NASA Astrophysics Data System (ADS)
Sun, Xin; Wang, Haiying; Zeng, Yingsen
2018-04-01
With the wide application of deep learning in computer vision, deep learning has become a mainstream direction in object tracking. The tracking algorithm in this paper is based on an improved multi-domain convolutional neural network, pre-trained on the VOT video set with a multi-domain training strategy. During online tracking, the network scores candidate targets sampled with a Gaussian distribution around the target predicted in the previous frame, and the candidate with the highest score is taken as the prediction for the current frame. A bounding box regression model is introduced to bring the prediction closer to the ground-truth target box of the test set. A grouping-update strategy extracts and selects useful update samples in each frame, which effectively prevents overfitting and adapts to changes in both the target and the environment. To improve speed while maintaining performance, the number of candidate targets is adjusted dynamically with a self-adaptive parameter strategy. Finally, the algorithm is tested on the OTB set and compared with other high-performance tracking algorithms; the resulting success-rate and accuracy plots illustrate the outstanding performance of the proposed tracker.
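The Gaussian candidate sampling step can be sketched generically as follows; the box parameterization, noise scales, and the scoring function are hypothetical stand-ins for the network described in the abstract.

    import numpy as np

    def sample_candidates(prev_box, n=256, sigma_xy=10.0, sigma_s=0.05, seed=0):
        # prev_box = (cx, cy, w, h); perturb the centre and scale with Gaussian noise.
        rng = np.random.default_rng(seed)
        cx, cy, w, h = prev_box
        dx = rng.normal(0.0, sigma_xy, n)
        dy = rng.normal(0.0, sigma_xy, n)
        ds = np.exp(rng.normal(0.0, sigma_s, n))      # multiplicative scale jitter
        return np.stack([cx + dx, cy + dy, w * ds, h * ds], axis=1)

    def track_step(prev_box, score_fn):
        # score_fn stands in for the CNN's evaluation of a candidate box.
        candidates = sample_candidates(prev_box)
        scores = np.array([score_fn(c) for c in candidates])
        return candidates[np.argmax(scores)]

    prev = np.array([100.0, 80.0, 40.0, 60.0])
    toy_score = lambda box: -np.sum((box - prev) ** 2)   # placeholder scoring function
    best = track_step(prev, toy_score)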
NASA Astrophysics Data System (ADS)
Resdiansyah; O. K Rahmat, R. A.; Ismail, A.
2018-03-01
Green transportation refers to sustainable transport that has the least social and environmental impact while remaining globally sustainable in its energy supply; it includes the deployment of non-motorized transport strategies to promote healthy lifestyles, also known as a Mobility Management Scheme (MMS). As building road infrastructure alone cannot solve the problem of congestion, past research has shown that MMS is an effective measure for mitigating congestion and achieving green transportation. MMS consists of different strategies and policies, subdivided into categories according to how they influence travel behaviour. Appropriate selection of mobility strategies ensures their effectiveness in mitigating congestion problems; nevertheless, determining appropriate strategies requires a human expert and depends on a number of success factors. This research developed a computer system, called E-MMS, that clones the reasoning of such a human expert. The knowledge acquired for MMS strategies and the subsequent strategy-selection process were encoded in a knowledge-based system built with an expert system shell. The newly developed system was verified, validated and evaluated (VV&E) by comparing its output with recommendations from a real transportation expert.
Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which is often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across test cases studied in this paper, we find ADMM to have demonstrated empirical advantages through consistent lower errors and faster computational times.
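The cross-validated choice of the regularization constant can be illustrated with a small generic sketch; a plain iterative soft-thresholding (ISTA) solver stands in for the solvers named above, and all sizes and candidate values are placeholders.

    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def lasso_ista(A, b, lam, n_iter=500):
        # Iterative soft-thresholding for 0.5*||Ax - b||^2 + lam*||x||_1.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
        return x

    def cv_select_lambda(A, b, lambdas, k=5, seed=0):
        # k-fold cross-validation: keep the regularization constant with the smallest
        # average held-out residual, which guards against overfitting the expansion.
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(b)), k)
        errs = []
        for lam in lambdas:
            fold_err = []
            for f in folds:
                train = np.setdiff1d(np.arange(len(b)), f)
                x = lasso_ista(A[train], b[train], lam)
                fold_err.append(np.mean((A[f] @ x - b[f]) ** 2))
            errs.append(np.mean(fold_err))
        return lambdas[int(np.argmin(errs))]

    A = np.random.default_rng(0).standard_normal((80, 200))   # basis evaluated at samples
    x_true = np.zeros(200)
    x_true[:5] = 3.0                                          # sparse coefficient vector
    b = A @ x_true + 0.01 * np.random.default_rng(1).standard_normal(80)
    best_lam = cv_select_lambda(A, b, lambdas=[0.01, 0.1, 1.0, 10.0])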
The role of fragment-based and computational methods in polypharmacology.
Bottegoni, Giovanni; Favia, Angelo D; Recanatini, Maurizio; Cavalli, Andrea
2012-01-01
Polypharmacology-based strategies are gaining increased attention as a novel approach to obtaining potentially innovative medicines for multifactorial diseases. However, some within the pharmaceutical community have resisted these strategies because they can be resource-hungry in the early stages of the drug discovery process. Here, we report on fragment-based and computational methods that might accelerate and optimize the discovery of multitarget drugs. In particular, we illustrate that fragment-based approaches can be particularly suited for polypharmacology, owing to the inherent promiscuous nature of fragments. In parallel, we explain how computer-assisted protocols can provide invaluable insights into how to unveil compounds theoretically able to bind to more than one protein. Furthermore, several pragmatic aspects related to the use of these approaches are covered, thus offering the reader practical insights on multitarget-oriented drug discovery projects. Copyright © 2011 Elsevier Ltd. All rights reserved.
Cognitive Load Theory vs. Constructivist Approaches: Which Best Leads to Efficient, Deep Learning?
ERIC Educational Resources Information Center
Vogel-Walcutt, J. J.; Gebrim, J. B.; Bowers, C.; Carper, T. M.; Nicholson, D.
2011-01-01
Computer-assisted learning, in the form of simulation-based training, is heavily focused upon by the military. Because computer-based learning offers highly portable, reusable, and cost-efficient training options, the military has dedicated significant resources to the investigation of instructional strategies that improve learning efficiency…
Enhancing Learning Outcomes in Computer-Based Training via Self-Generated Elaboration
ERIC Educational Resources Information Center
Cuevas, Haydee M.; Fiore, Stephen M.
2014-01-01
The present study investigated the utility of an instructional strategy known as the "query method" for enhancing learning outcomes in computer-based training. The query method involves an embedded guided, sentence generation task requiring elaboration of key concepts in the training material that encourages learners to "stop and…
Advanced Navigation Strategies for an Asteroid Sample Return Mission
NASA Technical Reports Server (NTRS)
Bauman, J.; Getzandanner, K.; Williams, B.; Williams, K.
2011-01-01
The proximity operations phases of a sample return mission to an asteroid have been analyzed using advanced navigation techniques derived from experience gained in planetary exploration. These techniques rely on tracking types such as Earth-based radio metric Doppler and ranging, spacecraft-based ranging, and optical navigation using images of landmarks on the asteroid surface. Navigation strategies for the orbital phases leading up to sample collection, the touch down for collecting the sample, and the post sample collection phase at the asteroid are included. Options for successfully executing the phases are studied using covariance analysis and Monte Carlo simulations of an example mission to the near Earth asteroid 4660 Nereus. Two landing options were studied, including trajectories with either one or two burns from orbit to the surface. Additionally, a comparison of post-sample collection strategies is presented. These strategies include remaining in orbit about the asteroid or standing off a given distance until departure to Earth.
Nuclear Ensemble Approach with Importance Sampling.
Kossoski, Fábris; Barbatti, Mario
2018-06-12
We show that the importance sampling technique can effectively augment the range of problems where the nuclear ensemble approach can be applied. A sampling probability distribution function initially determines the collection of initial conditions for which calculations are performed, as usual. Then, results for a distinct target distribution are computed by introducing compensating importance sampling weights for each sampled point. This mapping between the two probability distributions can be performed whenever they are both explicitly constructed. Perhaps most notably, this procedure allows for the computation of temperature dependent observables. As a test case, we investigated the UV absorption spectra of phenol, which has been shown to have a marked temperature dependence. Application of the proposed technique to a range that covers 500 K provides results that converge to those obtained with conventional sampling. We further show that an overall improved rate of convergence is obtained when sampling is performed at intermediate temperatures. The comparison between calculated and the available measured cross sections is very satisfactory, as the main features of the spectra are correctly reproduced. As a second test case, one of Tully's classical models was revisited, and we show that the computation of dynamical observables also profits from the importance sampling technique. In summary, the strategy developed here can be employed to assess the role of temperature for any property calculated within the nuclear ensemble method, with the same computational cost as doing so for a single temperature.
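The reweighting step can be illustrated with a minimal, self-contained sketch (generic densities, not the nuclear-ensemble code): observables sampled from one distribution are re-averaged under a different target distribution with self-normalized importance weights.

    import numpy as np

    def reweighted_average(observable, samples, log_p_sampling, log_p_target):
        # w_i proportional to p_target(x_i) / p_sampling(x_i); the log form avoids overflow.
        log_w = log_p_target(samples) - log_p_sampling(samples)
        w = np.exp(log_w - np.max(log_w))
        w /= w.sum()
        return np.sum(w * observable(samples))

    # Example: samples drawn from a broad Gaussian, observable averaged under a narrower one.
    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 2.0, 10000)
    log_p_samp = lambda v: -0.5 * (v / 2.0) ** 2 - np.log(2.0 * np.sqrt(2 * np.pi))
    log_p_targ = lambda v: -0.5 * v ** 2 - np.log(np.sqrt(2 * np.pi))
    mean_x2 = reweighted_average(lambda v: v ** 2, x, log_p_samp, log_p_targ)   # close to 1.0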
Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.
Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P
2010-12-22
Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource, Roundup, using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
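A minimal sketch of the kind of runtime-aware job ordering described above; the runtime model, its coefficient, and the genome sizes are hypothetical placeholders, not the authors' fitted model.

    import numpy as np

    def estimate_runtime_hours(genome_size_a, genome_size_b, coef=1.0e-7):
        # Hypothetical model: runtime grows with the product of the two genome sizes.
        return coef * genome_size_a * genome_size_b

    def order_jobs(genome_pairs):
        # One plausible policy: submit the longest-running comparisons first so that
        # cluster nodes stay busy and paid instance hours are not wasted near the end.
        est = [estimate_runtime_hours(a, b) for a, b in genome_pairs]
        return [genome_pairs[i] for i in np.argsort(est)[::-1]]

    jobs = [(4.6e6, 3.2e6), (1.2e7, 4.6e6), (2.9e9, 1.2e7)]
    print(order_jobs(jobs))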
Optimal reconfiguration strategy for a degradable multimodule computing system
NASA Technical Reports Server (NTRS)
Lee, Yann-Hang; Shin, Kang G.
1987-01-01
The present quantitative approach to the problem of reconfiguring a degradable multimodule system assigns some modules to computation and arranges the others for reliability. Using expected total reward as the optimality criterion, an active reconfiguration strategy emerges that is based not only on the occurrence of failures but also on the progression of the given mission. This reconfiguration strategy requires specification of the times at which the system should undergo reconfiguration and of the configurations to which the system should change. The optimal reconfiguration problem is converted to integer nonlinear knapsack and fractional programming problems.
Concept Learning through Image Processing.
ERIC Educational Resources Information Center
Cifuentes, Lauren; Yi-Chuan, Jane Hsieh
This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…
IUWare and Computing Tools: Indiana University's Approach to Low-Cost Software.
ERIC Educational Resources Information Center
Sheehan, Mark C.; Williams, James G.
1987-01-01
Describes strategies for providing low-cost microcomputer-based software for classroom use on college campuses. Highlights include descriptions of the software (IUWare and Computing Tools); computing center support; license policies; documentation; promotion; distribution; staff, faculty, and user training; problems; and future plans. (LRW)
Computerized LCC/ORLA methodology. [Life cycle cost/optimum repair level analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, J.T.
1979-01-01
The effort by Sandia Laboratories in developing CDC6600 computer programs for Optimum Repair Level Analysis (ORLA) and Life Cycle Cost (LCC) analysis is described. Investigation of the three repair-level strategies referenced in AFLCM/AFSCM 800-4 (base discard of subassemblies, base repair of subassemblies, and depot repair of subassemblies) was expanded to include an additional three repair-level strategies (base discard of complete assemblies and, upon shipment of complete assemblies to the depot, depot repair of assemblies by subassembly repair, and depot repair of assemblies by subassembly discard). The expanded ORLA was used directly in an LCC model that was procedurally altered to accommodate the ORLA input data. Available from the LCC computer run was an LCC value corresponding to the strategy chosen from the ORLA. 2 figures.
Radiation protective effects of baclofen predicted by a computational drug repurposing strategy.
Ren, Lei; Xie, Dafei; Li, Peng; Qu, Xinyan; Zhang, Xiujuan; Xing, Yaling; Zhou, Pingkun; Bo, Xiaochen; Zhou, Zhe; Wang, Shengqi
2016-11-01
Exposure to ionizing radiation causes damage to living tissues; however, only a small number of agents have been approved for use in radiation injuries. Radioprotectors are the primary countermeasure against radiation injury, yet no ideal radioprotector has reached the drug development stage. Repurposing the long list of approved, non-radioprotective drugs is an attractive strategy for finding new radioprotective agents. Here, we applied a computational approach to discover new radioprotectors in silico by comparing publicly available gene expression data of ionizing radiation-treated samples from the Gene Expression Omnibus (GEO) database with gene expression signatures of more than 1309 small-molecule compounds from the Connectivity Map (cmap) dataset. Among the compounds best predicted to be therapeutic for ionizing radiation damage by this approach were some previously reported radioprotectors and baclofen (P<0.01), a chemical not previously used as a radioprotector. Validation using a cell-based model and a rodent in vivo model demonstrated that treatment with baclofen reduced radiation-induced cytotoxicity in vitro (P<0.01), attenuated bone marrow damage and increased survival in vivo (P<0.05). These findings suggest that baclofen might serve as a radioprotector. The drug repurposing strategy of connecting GEO data and cmap can be used to identify known drugs as potential radioprotective agents. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mental Time Travel, Memory and the Social Learning Strategies Tournament
ERIC Educational Resources Information Center
Fogarty, L.; Rendell, L.; Laland, K. N.
2012-01-01
The social learning strategies tournament was an open computer-based tournament investigating the best way to learn in a changing environment. Here we present an analysis of the impact of memory on the ability of strategies entered into the social learning strategies tournament (Rendell, Boyd, et al., 2010) to modify their own behavior to suit a…
NASA Astrophysics Data System (ADS)
Köbler, Jonathan; Schneider, Matti; Ospald, Felix; Andrä, Heiko; Müller, Ralf
2018-06-01
For short fiber reinforced plastic parts the local fiber orientation has a strong influence on the mechanical properties. To enable multiscale computations using surrogate models we advocate a two-step identification strategy. First, for a number of sample orientations an effective model is derived by numerical methods available in the literature. Second, to cover a general orientation state, these effective models are interpolated. In this article we develop a novel and effective strategy to carry out this interpolation. Taking into account symmetry arguments, we reduce the fiber orientation phase space to a triangle in R^2. For an associated triangulation of this triangle we furnish each node with a surrogate model. Then, we use linear interpolation on the fiber orientation triangle to equip each fiber orientation state with an effective stress. The proposed approach is quite general, and works for any physically nonlinear constitutive law on the micro-scale, as long as surrogate models for single fiber orientation states can be extracted. To demonstrate the capabilities of our scheme we study the viscoelastic creep behavior of short glass fiber reinforced PA66, and use Schapery's collocation method together with FFT-based computational homogenization to derive single orientation state effective models. We discuss the efficient implementation of our method, and present results of a component scale computation on a benchmark component using ABAQUS®.
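The linear interpolation step on the orientation triangle amounts to barycentric weighting of the per-node surrogate responses; in the following generic sketch the triangle vertices, the nodal surrogate outputs, and the query point are illustrative placeholders.

    import numpy as np

    def barycentric_weights(p, tri):
        # tri: 3x2 array of triangle vertices; p: query point inside the triangle.
        a, b, c = tri
        m = np.column_stack([b - a, c - a])
        l1, l2 = np.linalg.solve(m, p - a)
        return np.array([1.0 - l1 - l2, l1, l2])

    # Effective stress predicted by the surrogate stored at each triangle node
    # (hypothetical 3-component responses for a single strain state).
    node_stress = np.array([[1.0, 0.2, 0.0],
                            [0.8, 0.3, 0.1],
                            [0.6, 0.5, 0.2]])
    tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5]])
    w = barycentric_weights(np.array([0.4, 0.2]), tri)
    effective_stress = w @ node_stress    # linear blend of the nodal surrogate outputs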
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. Mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
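Proper orthogonal decomposition, one of the two sampling strategies mentioned, can be sketched generically as a truncated SVD of a snapshot matrix; the sizes and the energy tolerance below are illustrative, not the paper's settings.

    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        # snapshots: columns are solution samples at selected parameter values.
        u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(cumulative, energy)) + 1
        return u[:, :r]                      # reduced basis spanning the dominant modes

    snaps = np.random.default_rng(0).random((2000, 60))   # 60 samples of a 2000-dof field
    basis = pod_basis(snaps)
    coeffs = basis.T @ snaps[:, 0]                        # reduced coordinates of one sample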
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
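The r-power can also be estimated by simple Monte Carlo, as in the following generic sketch using a single-step Bonferroni rule and illustrative effect sizes; this is an illustration of the concept, not the package's formulas.

    import numpy as np
    from scipy import stats

    def r_power_montecarlo(effects, n_per_group, r, alpha=0.05, n_sim=20000, seed=0):
        # effects: standardized mean differences for m endpoints (all false nulls here).
        rng = np.random.default_rng(seed)
        m = len(effects)
        threshold = stats.norm.ppf(1.0 - alpha / (2 * m))         # two-sided Bonferroni cut-off
        shift = np.asarray(effects) * np.sqrt(n_per_group / 2.0)  # mean of each z-statistic
        z = rng.normal(loc=shift, scale=1.0, size=(n_sim, m))
        rejections = (np.abs(z) > threshold).sum(axis=1)
        return np.mean(rejections >= r)      # estimated P(reject at least r false nulls)

    print(r_power_montecarlo(effects=[0.4, 0.35, 0.3], n_per_group=150, r=2))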
Shegog, Ross; Bartholomew, L Kay; Gold, Robert S; Pierrel, Elaine; Parcel, Guy S; Sockrider, Marianna M; Czyzewski, Danita I; Fernandez, Maria E; Berlin, Nina J; Abramson, Stuart
2006-01-01
The importance of translating behavioral theories, models, and strategies to guide the development and structure of computer-based health applications is well recognized, although doing so remains a continuing challenge for program developers. A stepped approach to translating behavioral theory into the design of simulations that teach chronic disease management to children is described. It comprises three translation steps: 1) define target behaviors and their determinants, 2) identify theoretical methods to optimize behavioral change, and 3) choose educational strategies that effectively apply these methods, then combine these into a cohesive computer-based simulation for health education. Asthma is used to exemplify a chronic health management problem, and a computer-based asthma management simulation (Watch, Discover, Think and Act) that has been evaluated and shown to effect asthma self-management in children is used to exemplify the application of theory to practice. Impact and outcome evaluation studies have indicated the effectiveness of these steps in providing increased rigor and accountability, suggesting their utility for educators and developers seeking to apply simulations to enhance self-management behaviors in patients.
1998-08-07
cognitive flexibility theory and generative learning theory, which focus primarily on the individual student's cognitive development... collaborative... develop "Handling Transfusion Hazards," a computer program based upon cognitive flexibility theory principles... The computer program was developed according to cognitive flexibility theory principles. A generative version was then developed by embedding...
ERIC Educational Resources Information Center
Psycharis, Sarantos
2016-01-01
In this study, an instructional design model, based on the computational experiment approach, was employed in order to explore the effects of the formative assessment strategies and scientific abilities rubrics on students' engagement in the development of inquiry-based pedagogical scenario. In the following study, rubrics were used during the…
Esche, Carol Ann; Warren, Joan I; Woods, Anne B; Jesada, Elizabeth C; Iliuta, Ruth
2015-01-01
The goal of the Nurse Professional Development specialist is to utilize the most effective educational strategies when educating staff nurses about pressure ulcer prevention. More information is needed about the effect of computer-based learning and traditional classroom learning on pressure ulcer education for the staff nurse. This study compares computer-based learning and traditional classroom learning on immediate and long-term knowledge while evaluating the impact of education on pressure ulcer risk assessment, staging, and documentation.
How Do Students Experience Testing on the University Computer?
ERIC Educational Resources Information Center
Whittington, Dale; And Others
1995-01-01
Reports a study of the administration mode, scores, and testing experiences of students taking the PreProfessional Skills Test (PPST) under differing conditions (computer based and paper and pencil). PPST scores and surveys of the students revealed varied test-taking strategies and computer-related alterations in test difficulty, construct,…
ERIC Educational Resources Information Center
Sung, Han-Yu; Hwang, Gwo-Jen
2018-01-01
Researchers have recognized the potential of educational computer games in improving students' learning engagement and outcomes; however, facilitating effective learning behaviors during the gaming process remains an important and challenging issue. In this paper, a collaborative knowledge construction strategy was incorporated into an educational…
ERIC Educational Resources Information Center
Zhang, Jianwei; Chen, Qi; Sun, Yanquing; Reid, David J.
2004-01-01
Learning support studies involving simulation-based scientific discovery learning have tended to adopt an ad hoc strategies-oriented approach in which the support strategies are typically pre-specified according to learners' difficulties in particular activities. This article proposes a more integrated approach, a triple scheme for learning…
Arslan, Burcu; Taatgen, Niels A; Verbrugge, Rineke
2017-01-01
The focus of studies on second-order false belief reasoning generally was on investigating the roles of executive functions and language with correlational studies. Different from those studies, we focus on the question of how 5-year-olds select and revise reasoning strategies in second-order false belief tasks by constructing two computational cognitive models of this process: an instance-based learning model and a reinforcement learning model. Unlike the reinforcement learning model, the instance-based learning model predicted that children who fail second-order false belief tasks would give answers based on first-order theory of mind (ToM) reasoning as opposed to zero-order reasoning. This prediction was confirmed with an empirical study that we conducted with 72 5- to 6-year-old children. The results showed that 17% of the answers were correct and 83% of the answers were wrong. In line with our prediction, 65% of the wrong answers were based on a first-order ToM strategy, while only 29% of them were based on a zero-order strategy (the remaining 6% of subjects did not provide any answer). Based on our instance-based learning model, we propose that when children get feedback "Wrong," they explicitly revise their strategy to a higher level instead of implicitly selecting one of the available ToM strategies. Moreover, we predict that children's failures are due to lack of experience and that with exposure to second-order false belief reasoning, children can revise their wrong first-order reasoning strategy to a correct second-order reasoning strategy.
Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry.
Planatscher, Hannes; Supper, Jochen; Poetz, Oliver; Stoll, Dieter; Joos, Thomas; Templin, Markus F; Zell, Andreas
2010-06-25
Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation and an efficient fractionation step to reduce the complexity of the biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests by using immunoaffinity based methods have shown to lead to a substantial increase in throughput and sensitivity in the proteomic mass spectrometry approach. The limitation of using such immunoaffinity-based approaches is the availability of the appropriate peptide specific capture antibodies. Recent developments in these approaches, where subsets of peptides with short identical terminal sequences can be enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. We show that the minimal set of terminal epitopes for the coverage of a target protein list can be found by the formulation as a set cover problem, preceded by a filtering pipeline for the exclusion of peptides and target epitopes with undesirable properties. For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes require the use of heuristics.
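When an exact set cover solution is out of reach, the standard greedy approximation gives a practical fallback; the following sketch uses toy epitope and protein identifiers, not data from the study.

    def greedy_set_cover(proteins, epitope_coverage):
        # epitope_coverage maps a candidate terminal epitope to the set of target
        # proteins whose tryptic peptides carry that epitope.
        uncovered = set(proteins)
        chosen = []
        while uncovered:
            best = max(epitope_coverage, key=lambda e: len(epitope_coverage[e] & uncovered))
            gained = epitope_coverage[best] & uncovered
            if not gained:
                break                         # remaining proteins cannot be covered
            chosen.append(best)
            uncovered -= gained
        return chosen, uncovered

    coverage = {"SPR": {"P1", "P2"}, "GLK": {"P2", "P3", "P4"}, "AVD": {"P4"}}
    epitopes, missed = greedy_set_cover({"P1", "P2", "P3", "P4"}, coverage)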
Drug-target interaction prediction from PSSM based evolutionary information.
Mousavian, Zaynab; Khakabimamaghani, Sahand; Kavousi, Kaveh; Masoudi-Nejad, Ali
2016-01-01
The labor-intensive and expensive experimental process of determining drug-target interactions has motivated many researchers to focus on in silico prediction, which yields helpful information to support the experimental interaction data. Several computational approaches for discovering new drug-target interactions have therefore been proposed; the learning-based methods among them fall into two main groups: similarity-based and feature-based. In this paper, we first use bi-gram features extracted from the Position Specific Scoring Matrix (PSSM) of proteins to predict drug-target interactions. Our results demonstrate the high-confidence prediction ability of the Bigram-PSSM model in terms of several performance indicators, specifically for enzymes and ion channels. Moreover, we investigate the impact of the negative selection strategy on prediction performance, which is not widely taken into account in other relevant studies. This is important, as the number of non-interacting drug-target pairs is usually extremely large compared with the number of interacting ones in existing drug-target interaction data. An interesting observation is that different levels of performance reduction are obtained for the four datasets when the sampling method is changed from random sampling to balanced sampling. Copyright © 2015 Elsevier Inc. All rights reserved.
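A common way to form bi-gram PSSM features is shown in the generic sketch below; the random matrix is a stand-in for a real PSSM profile, and the paper's exact normalization may differ.

    import numpy as np

    def bigram_pssm_features(pssm):
        # pssm: L x 20 position-specific scoring matrix for a protein of length L.
        # Feature (i, j) sums, over consecutive positions k, the product of the score
        # for amino acid i at position k and amino acid j at position k+1.
        return (pssm[:-1].T @ pssm[1:]).ravel()   # flattened 20x20 = 400-d descriptor

    pssm = np.random.default_rng(0).random((120, 20))   # placeholder profile
    features = bigram_pssm_features(pssm)
    print(features.shape)                               # (400,)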
Use of Humorous Visuals To Enhance Computer-Based-Instruction.
ERIC Educational Resources Information Center
Snetsinger, Wendy; Grabowski, Barbara
It was hypothesized that a visual strategy that incorporates a humorous theme and cartoons with humorous comments relevant to the content helps motivate students to focus on and retain computer-based instructional material. An experiment to assess this hypothesis was undertaken with 43 college students who received a humorous presentation on…
Emphasizing Planning for Essay Writing with a Computer-Based Graphic Organizer
ERIC Educational Resources Information Center
Evmenova, Anya S.; Regan, Kelley; Boykin, Andrea; Good, Kevin; Hughes, Melissa; MacVittie, Nichole; Sacco, Donna; Ahn, Soo Y.; Chirinos, David
2016-01-01
The authors conducted a multiple-baseline study to investigate the effects of a computer-based graphic organizer (CBGO) with embedded self-regulated learning strategies on the quantity and quality of persuasive essay writing by students with high-incidence disabilities. Ten seventh- and eighth-grade students with learning disabilities, emotional…
Improving Learning in Computer-Based Instruction through Questioning and Grouping Strategies
ERIC Educational Resources Information Center
Niemczyk, Mary; Savenye, Wilhelmina
2010-01-01
This study investigated the comparative effects of adjunct questions, student self-generated questions, and note taking on learning from a multimedia database. High school students worked individually or in cooperative dyads on a computer-based multimedia unit using a study guide to answer either adjunct questions, generate self-questions, or take…
Aerobraking strategies for the sample of comet coma earth return mission
NASA Technical Reports Server (NTRS)
Abe, Takashi; Kawaguchi, Jun'ichiro; Uesugi, Kuninori; Yen, Chen-Wan L.
1990-01-01
The results of a study to validate the applicability of the aerobraking concept to the SOCCER (sample of comet coma earth return) mission using a six-DOF computer simulation of the aerobraking process are presented. The SOCCER spacecraft and the aerobraking scenario and power supply problem are briefly described. Results are presented for the spin effect, payload exposure problem, and sun angle effect.
Bertoldi, Eduardo G; Stella, Steffan F; Rohde, Luis E; Polanczyk, Carisi A
2016-05-01
Several tests exist for diagnosing coronary artery disease, with varying accuracy and cost. We sought to provide cost-effectiveness information to aid physicians and decision-makers in selecting the most appropriate testing strategy. We used a state-transition (Markov) model from the perspective of the Brazilian public health system with a lifetime horizon. Diagnostic strategies were based on exercise electrocardiography (Ex-ECG), stress echocardiography (ECHO), single-photon emission computed tomography (SPECT), computed tomography coronary angiography (CTA), or stress cardiac magnetic resonance imaging (C-MRI) as the initial test. Systematic review provided input data for test accuracy and long-term prognosis. Cost data were derived from the Brazilian public health system. Diagnostic test strategy had a small but measurable impact on quality-adjusted life-years gained. Switching from Ex-ECG to CTA-based strategies improved outcomes at an incremental cost-effectiveness ratio of 3100 international dollars per quality-adjusted life-year. ECHO-based strategies resulted in cost and effectiveness almost identical to CTA, and SPECT-based strategies were dominated because of their much higher cost. Strategies based on stress C-MRI were most effective, but the incremental cost-effectiveness ratio vs CTA was higher than the proposed willingness-to-pay threshold. Invasive strategies were dominant in the high pretest probability setting. Sensitivity analysis showed that results were sensitive to costs of CTA, ECHO, and C-MRI. Coronary CT is cost-effective for the diagnosis of coronary artery disease and should be included in the Brazilian public health system. Stress ECHO has a similar performance and is an acceptable alternative for most patients, but invasive strategies should be reserved for patients at high risk. © 2016 Wiley Periodicals, Inc.
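The incremental cost-effectiveness logic behind such comparisons can be sketched with a small generic calculation; the costs and QALY values below are illustrative placeholders, not the study's estimates, and extended dominance is ignored for brevity.

    def icers(strategies):
        # strategies: list of (name, cost, qalys). Returns the ICER of each strategy
        # versus the next cheaper non-dominated one; a strategy is (strongly) dominated
        # if it costs more and yields no additional QALYs.
        ranked = sorted(strategies, key=lambda s: s[1])
        frontier, results = [ranked[0]], {ranked[0][0]: None}
        for name, cost, qalys in ranked[1:]:
            prev_name, prev_cost, prev_qalys = frontier[-1]
            if qalys <= prev_qalys:
                results[name] = "dominated"
                continue
            results[name] = (cost - prev_cost) / (qalys - prev_qalys)
            frontier.append((name, cost, qalys))
        return results

    print(icers([("Ex-ECG", 1000, 10.00), ("CTA", 1300, 10.10),
                 ("SPECT", 2500, 10.09), ("C-MRI", 3000, 10.12)]))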
NASA Technical Reports Server (NTRS)
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. After accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategies (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
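The random-error estimate from two independent monthly means can be sketched as follows: if the morning and afternoon retrievals are unbiased estimates of the same monthly rain rate with equal error variance, the variance of their difference is twice the single-estimate variance. The sketch below is a generic illustration with placeholder values, not the paper's exact procedure.

    import numpy as np

    def random_error_from_pair(est_am, est_pm):
        # est_am, est_pm: monthly means per grid box from the two independent overpasses.
        # Var(am - pm) = 2*sigma^2 for independent, equal-error estimates,
        # so sigma = rms(am - pm) / sqrt(2).
        diff = np.asarray(est_am) - np.asarray(est_pm)
        sigma = np.sqrt(np.mean(diff ** 2) / 2.0)
        mean_rate = 0.5 * (np.mean(est_am) + np.mean(est_pm))
        return sigma, 100.0 * sigma / mean_rate     # absolute and percent random error

    am = np.array([3.1, 2.4, 5.0, 1.2])             # placeholder rain rates
    pm = np.array([2.6, 3.0, 4.1, 1.6])
    print(random_error_from_pair(am, pm))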
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.
Description of Student’s Metacognitive Ability in Understanding and Solving Mathematics Problem
NASA Astrophysics Data System (ADS)
Ahmad, Herlina; Febryanti, Fatimah; Febryanti, Fatimah; Muthmainnah
2018-01-01
This qualitative research aimed to describe students' metacognitive ability in understanding and solving mathematics problems. The subjects were first-year students of the computer and networking department at SMK Mega Link Majene, selected by purposive sampling. Data were collected with a test of student achievement and an interview guide, and observation was used to confirm that the model applied by the teacher was a teaching model for developing metacognition. Data were analyzed through data reduction, presentation, and conclusion drawing. The overall findings show that students' metacognitive ability generally does not develop optimally, owing to the limited scope of the materials and to a cognitive teaching strategy delivered through verbal presentation and continuous drill on cognitive tasks such as understanding and solving problems.
Cotes-Ruiz, Iván Tomás; Prado, Rocío P.; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás
2017-01-01
Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique. PMID:28085932
Comparability of Computer-Based and Paper-Based Science Assessments
ERIC Educational Resources Information Center
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E.
2018-01-01
We compared students' performance on a paper-based test (PBT) and three computer-based tests (CBTs). The three computer-based tests used different test navigation and answer selection features, allowing us to examine how these features affect student performance. The study sample consisted of 9,698 fourth through twelfth grade students from across…
Evaluation of Deep Learning Based Stereo Matching Methods: from Ground to Aerial Images
NASA Astrophysics Data System (ADS)
Liu, J.; Ji, S.; Zhang, C.; Qin, Z.
2018-05-01
Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate deep learning based stereo methods, which emerged around 2016 and spread rapidly, on aerial stereo pairs rather than the ground images commonly used in the computer vision community. Two popular methods are evaluated: one learns the matching cost with a convolutional neural network (known as MC-CNN); the other produces a disparity map in an end-to-end manner by utilizing both geometry and context (known as GC-Net). First, we evaluate the performance of the deep learning based methods on aerial stereo images through direct model reuse: models pre-trained separately on the KITTI 2012, KITTI 2015 and Driving datasets are applied directly to three aerial datasets. We also report results of direct training on the target aerial datasets. Second, the deep learning based methods are compared with the classic stereo matching method Semi-Global Matching (SGM) and the photogrammetric software SURE on the same aerial datasets. Third, a transfer learning strategy is introduced for aerial image matching under the assumption that a few target samples are available for model fine-tuning. The experiments show that the conventional methods and the deep learning based methods perform similarly, with the latter having greater potential to be explored.
Ohkuma, Takahiro; Kremer, Kurt; Daoulas, Kostas
2018-05-02
Understanding properties of polymer alloys with computer simulations frequently requires equilibration of samples comprised of microscopically described long molecules. We present the extension of an efficient hierarchical backmapping strategy, initially developed for homopolymer melts, to equilibrate high-molecular-weight binary blends. These mixtures present significant interest for practical applications and fundamental polymer physics. In our approach, the blend is coarse-grained into models representing polymers as chains of soft blobs. Each blob stands for a subchain with N_b microscopic monomers. A hierarchy of blob-based models with different resolution is obtained by varying N_b. First the model with the largest N_b is used to obtain an equilibrated blend. This configuration is sequentially fine-grained, reinserting at each step the degrees of freedom of the next in the hierarchy blob-based model. Once the blob-based description is sufficiently detailed, the microscopic monomers are reinserted. The hard excluded volume is recovered through a push-off procedure and the sample is re-equilibrated with molecular dynamics (MD), requiring relaxation on the order of the entanglement time. For the initial method development we focus on miscible blends described on microscopic level through a generic bead-spring model, which reproduces hard excluded volume, strong covalent bonds, and realistic liquid density. The blended homopolymers are symmetric with respect to molecular architecture and liquid structure. To parameterize the blob-based models and validate equilibration of backmapped samples, we obtain reference data from independent hybrid simulations combining MD and identity exchange Monte Carlo moves, taking advantage of the symmetry of the blends. The potential of the backmapping strategy is demonstrated by equilibrating blend samples with different degree of miscibility, containing 500 chains with 1000 monomers each. Equilibration is verified by comparing chain conformations and liquid structure in backmapped blends with the reference data. Possible directions for further methodological developments are discussed.
NASA Astrophysics Data System (ADS)
Ohkuma, Takahiro; Kremer, Kurt; Daoulas, Kostas
2018-05-01
Understanding properties of polymer alloys with computer simulations frequently requires equilibration of samples comprised of microscopically described long molecules. We present the extension of an efficient hierarchical backmapping strategy, initially developed for homopolymer melts, to equilibrate high-molecular-weight binary blends. These mixtures present significant interest for practical applications and fundamental polymer physics. In our approach, the blend is coarse-grained into models representing polymers as chains of soft blobs. Each blob stands for a subchain with N_b microscopic monomers. A hierarchy of blob-based models with different resolution is obtained by varying N_b. First the model with the largest N_b is used to obtain an equilibrated blend. This configuration is sequentially fine-grained, reinserting at each step the degrees of freedom of the next blob-based model in the hierarchy. Once the blob-based description is sufficiently detailed, the microscopic monomers are reinserted. The hard excluded volume is recovered through a push-off procedure and the sample is re-equilibrated with molecular dynamics (MD), requiring relaxation on the order of the entanglement time. For the initial method development we focus on miscible blends described on the microscopic level through a generic bead-spring model, which reproduces hard excluded volume, strong covalent bonds, and realistic liquid density. The blended homopolymers are symmetric with respect to molecular architecture and liquid structure. To parameterize the blob-based models and validate equilibration of backmapped samples, we obtain reference data from independent hybrid simulations combining MD and identity exchange Monte Carlo moves, taking advantage of the symmetry of the blends. The potential of the backmapping strategy is demonstrated by equilibrating blend samples with different degrees of miscibility, containing 500 chains with 1000 monomers each. Equilibration is verified by comparing chain conformations and liquid structure in backmapped blends with the reference data. Possible directions for further methodological developments are discussed.
Bayesian inference based on stationary Fokker-Planck sampling.
Berrones, Arturo
2010-06-01
A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm to arbitrary and unknown conditional densities. Through the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which converges to the full joint posterior. From the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies for the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need for careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
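Since SFP is described as a generalization of the Gibbs sampler to unknown conditionals, a plain Gibbs sampler with analytically known conditionals is a useful point of reference. The bivariate normal target below is an illustrative assumption, not an example taken from the paper.

    import numpy as np

    def gibbs_bivariate_normal(rho, n_samples=5000, burn_in=500, seed=0):
        # Gibbs sampling for a zero-mean bivariate normal with correlation rho:
        # each coordinate is drawn from its exact conditional given the other.
        rng = np.random.default_rng(seed)
        x, y = 0.0, 0.0
        out = []
        for t in range(n_samples + burn_in):
            # conditional of x given y is N(rho*y, 1 - rho^2); symmetrically for y given x
            x = rng.normal(rho * y, np.sqrt(1 - rho**2))
            y = rng.normal(rho * x, np.sqrt(1 - rho**2))
            if t >= burn_in:
                out.append((x, y))
        return np.array(out)

SFP replaces the exact conditionals used here with approximate analytical expressions constructed on the fly, which is what removes the requirement of knowing the conditional densities in closed form.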
NASA Astrophysics Data System (ADS)
Yao, W.; Poleswki, P.; Krzystek, P.
2016-06-01
The recent success of deep convolutional neural networks (CNN) in a large number of applications can be attributed to large amounts of available training data and increasing computing power. In this paper, a semantic pixel labelling scheme for urban areas using multi-resolution CNN and hand-crafted spatial-spectral features of airborne remotely sensed data is presented. Both CNN and hand-crafted features are applied to image/DSM patches to produce per-pixel class probabilities with an L1-norm regularized logistic regression classifier. Evidence theory then infers a degree of belief for pixel labelling from the different sources, smoothing regions by handling the conflicts present in both classifiers while reducing uncertainty. The aerial data used in this study were provided by ISPRS as benchmark datasets for 2D semantic labelling tasks in urban areas and consist of two data sources: LiDAR and a color infrared camera. The test sites are parts of a city in Germany assumed to contain typical object classes, including impervious surfaces, trees, buildings, low vegetation, vehicles and clutter. The evaluation is based on the computation of pixel-based confusion matrices by random sampling. The performance of the strategy with respect to scene characteristics and method combination strategies is analyzed and discussed. The competitive classification accuracy can be explained not only by the nature of the input data sources (e.g. the above-ground height of the nDSM highlights the vertical dimension of houses, trees and even cars, and the near-infrared spectrum indicates vegetation), but also by the decision-level fusion of the CNN's texture-based approach with multichannel spatial-spectral hand-crafted features based on the evidence combination theory.
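The decision-level fusion described above rests on an evidence combination rule; a generic sketch of Dempster's rule for combining two per-pixel mass functions is shown below. The class names and mass values are made-up placeholders, not output from the ISPRS benchmark experiments.

    from itertools import product

    def dempster_combine(m1, m2):
        # Dempster's rule of combination for two mass functions.
        # Masses are dicts mapping frozenset hypotheses -> belief mass.
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to contradictory hypotheses
        if conflict >= 1.0:
            raise ValueError("total conflict: sources cannot be combined")
        return {h: v / (1.0 - conflict) for h, v in combined.items()}

    # e.g. fusing per-pixel evidence from the CNN and the hand-crafted-feature classifier
    cnn = {frozenset({"building"}): 0.6, frozenset({"building", "tree"}): 0.4}
    hand = {frozenset({"building"}): 0.5, frozenset({"tree"}): 0.2,
            frozenset({"building", "tree"}): 0.3}
    print(dempster_combine(cnn, hand))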
Fenimore, Paul W.; Foley, Brian T.; Bakken, Russell R.; Thurmond, James R.; Yusim, Karina; Yoon, Hyejin; Parker, Michael; Hart, Mary Kate; Dye, John M.; Korber, Bette; Kuiken, Carla
2012-01-01
We report the rational design and in vivo testing of mosaic proteins for a polyvalent pan-filoviral vaccine using a computational strategy designed for the Human Immunodeficiency Virus type 1 (HIV-1) but also appropriate for Hepatitis C virus (HCV) and potentially other diverse viruses. Mosaics are sets of artificial recombinant proteins that are based on natural proteins. The recombinants are computationally selected using a genetic algorithm to optimize the coverage of potential cytotoxic T lymphocyte (CTL) epitopes. Because evolutionary history differs markedly between HIV-1 and filoviruses, we devised an adapted computational technique that is effective for sparsely sampled taxa; our first significant result is that the mosaic technique is effective in creating high-quality mosaic filovirus proteins. The resulting coverage of potential epitopes across filovirus species is superior to coverage by any natural variants, including current vaccine strains with demonstrated cross-reactivity. The mosaic cocktails are also robust: mosaics substantially outperformed natural strains when computationally tested against poorly sampled species and more variable genes. Furthermore, in a computational comparison of cross-reactive potential a design constructed prior to the Bundibugyo outbreak performed nearly as well against all species as an updated design that included Bundibugyo. These points suggest that the mosaic designs would be more resilient than natural-variant vaccines against future Ebola outbreaks dominated by novel viral variants. We demonstrate in vivo immunogenicity and protection against a heterologous challenge in a mouse model. This design work delineates the likely requirements and limitations on broadly-protective filoviral CTL vaccines. PMID:23056184
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
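The second level of the algorithm, a master-slave scheme with dynamic scheduling over macroscopic elements, can be illustrated with a shared-memory analog. In the sketch below a process pool hands out one element at a time, and micro_solve is a hypothetical stand-in for the expensive micro-scale solve attached to each macroscopic element; this is not the FETI/mortar level nor the authors' MPI implementation.

    from multiprocessing import Pool
    import math

    def micro_solve(macro_element_id):
        # Stand-in workload for the micro-scale problem of one macroscopic element.
        s = sum(math.sin(i) for i in range(200000 + macro_element_id))
        return macro_element_id, s

    if __name__ == "__main__":
        element_ids = range(64)            # macroscopic elements of one subdomain
        with Pool(processes=4) as pool:    # worker (slave) processes
            # imap_unordered with chunksize=1 hands out one element at a time: dynamic
            # scheduling, so fast workers are not held up by strongly non-linear elements.
            for eid, result in pool.imap_unordered(micro_solve, element_ids, chunksize=1):
                pass  # the master assembles the macroscopic system from the returned data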
NASA Astrophysics Data System (ADS)
Chartin, Caroline; Stevens, Antoine; Kruger, Inken; Goidts, Esther; Carnol, Monique; van Wesemael, Bas
2016-04-01
As many other countries, Belgium complies with Annex I of the United Nations Framework Convention on Climate Change (UNFCCC). Belgium thus reports its annual greenhouse gas emissions in its national inventory report (NIR), with a distinction between emissions/sequestration in cropland and grassland (EU decision 529/2013). The CO2 fluxes are then based on changes in SOC stocks computed for each of these two types of land use. These stocks are specified for each of the agricultural regions, which correspond to areas with similar agricultural practices (rotations and/or livestock) and yield potentials. For Southern Belgium (Wallonia), consisting of ten agricultural regions, the Soil Monitoring Network (SMN) 'CARBOSOL' has been developed over the last decade to survey the state of agricultural soils by quantifying SOC stocks and their evolution at a number of locations compatible with the time and funds allocated. Unfortunately, the 592 points of the CARBOSOL network do not allow a representative and sound estimation of SOC stocks and their uncertainties for the 20 possible combinations of land use and agricultural region. Moreover, the SMN CARBIOSOL is based on a legacy database following a convenience sampling scheme rather than a statistical scheme defined by design-based or model-based strategies. Here, we aim to both quantify SOC budgets (i.e., how much?) and spatialize SOC stocks (i.e., where?) at the regional scale (Southern Belgium) based on data from the SMN described above. To this end, we developed a computation procedure based on Digital Soil Mapping techniques and stochastic (Monte Carlo) simulations allowing the estimation of multiple (10,000) independent spatialized datasets. This procedure accounts for the uncertainties associated with the estimation of both (i) the SOC stock at the pixel scale and (ii) the parameters of the models. Based on these 10,000 individual realizations of the spatial model, mean SOC stocks and confidence intervals can then be computed at the pixel scale, for selected sub-areas (i.e., the 20 land use/agricultural region combinations) and for the entire study area.
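A minimal sketch of this kind of Monte Carlo propagation is given below; predict_stock and draw_params are hypothetical stand-ins for the fitted spatial model and its parameter uncertainty, and the 10,000 draws with a 95% interval simply mirror the numbers quoted in the abstract.

    import numpy as np

    def soc_stock_ci(predict_stock, draw_params, pixels, n_sim=10000, seed=1):
        # predict_stock(params, pixels) -> per-pixel SOC stocks for one parameter draw;
        # draw_params(rng) -> one random parameter vector (both hypothetical stand-ins).
        rng = np.random.default_rng(seed)
        totals = np.empty(n_sim)
        for k in range(n_sim):
            stocks = predict_stock(draw_params(rng), pixels)  # one spatialized realization
            totals[k] = stocks.sum()                          # regional (or sub-area) budget
        lo, hi = np.percentile(totals, [2.5, 97.5])           # 95% confidence interval
        return totals.mean(), (lo, hi)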
Computing border bases using mutant strategies
NASA Astrophysics Data System (ADS)
Ullah, E.; Abbas Khan, S.
2014-01-01
Border bases, a generalization of Gröbner bases, have been actively studied in recent years due to their applicability to industrial problems. In cryptography and coding theory a useful application of border bases is to solve zero-dimensional systems of polynomial equations over finite fields, which motivates us to develop optimizations of the algorithms that compute border bases. In 2006, Kehrein and Kreuzer formulated the Border Basis Algorithm (BBA), an algorithm which allows the computation of border bases that relate to a degree-compatible term ordering. In 2007, J. Ding et al. introduced mutant strategies based on finding special lower-degree polynomials in the ideal. The mutant strategies aim to distinguish special lower-degree polynomials (mutants) from the other polynomials and give them priority in the process of generating new polynomials in the ideal. In this paper we develop hybrid algorithms that use the ideas of J. Ding et al. involving the concept of mutants to optimize the Border Basis Algorithm for solving systems of polynomial equations over finite fields. In particular, we recall a version of the Border Basis Algorithm known as the Improved Border Basis Algorithm and propose two hybrid algorithms, called MBBA and IMBBA. The new mutant variants provide space efficiency as well as time efficiency. The efficiency of these newly developed hybrid algorithms is discussed using standard cryptographic examples.
NASA Astrophysics Data System (ADS)
Sherley, Patrick L.; Pujol, Alfonso, Jr.; Meadow, John S.
1990-07-01
To provide a means of rendering complex computer architectures, languages, and input/output modalities transparent to experienced and inexperienced users, research is being conducted to develop a voice-driven/voice-response computer graphics imaging system. The system will be used for reconstructing and displaying computed tomography and magnetic resonance imaging scan data. In conjunction with this study, an artificial intelligence (AI) control strategy was developed to interface the voice components and support software to the computer graphics functions implemented on the Sun Microsystems 4/280 color graphics workstation. Based on generated text and converted renditions of verbal utterances by the user, the AI control strategy determines the user's intent and develops and validates a plan. The program type and parameters within the plan are used as input to the graphics system for reconstructing and displaying medical image data corresponding to that perceived intent. If the plan is not valid, the control strategy queries the user for additional information. The control strategy operates in a conversation mode and vocally provides system status reports. A detailed examination of the various AI techniques is presented, with major emphasis placed on their specific roles within the total control strategy structure.
H. T. Schreuder; M. S. Williams; C. Aguirre-Bravo; P. L. Patterson
2003-01-01
The sampling strategy is presented for the initial phase of the natural resources pilot project in the Mexican States of Jalisco and Colima. The sampling design used is ground-based cluster sampling with poststratification based on Landsat Thematic Mapper imagery. The data collected will serve as a basis for additional data collection, mapping, and spatial modeling...
United States Air Force Training Management 2010. Volume 2. A Strategy for Superiority
1989-03-01
Olson, Mark A; Feig, Michael; Brooks, Charles L
2008-04-15
This article examines ab initio methods for the prediction of protein loops by a computational strategy of multiscale conformational sampling and physical energy scoring functions. Our approach consists of initial sampling of loop conformations from lattice-based low-resolution models followed by refinement using all-atom simulations. To allow enhanced conformational sampling, the replica exchange method was implemented. Physical energy functions based on CHARMM19 and CHARMM22 parameterizations with generalized Born (GB) solvent models were applied in scoring loop conformations extracted from the lattice simulations and, in the case of all-atom simulations, the ensemble of conformations was generated and scored with these models. Predictions are reported for 25 loop segments, each eight residues long and taken from a diverse set of 22 protein structures. We find that the simulations generally sampled conformations with low global root-mean-square deviation (RMSD) of the loop backbone coordinates from the known structures, whereas clustering conformations in RMSD space and scoring detected less favorable loop structures. Specifically, the lattice simulations sampled basins that exhibited an average global RMSD of 2.21 ± 1.42 Å, whereas clustering and scoring the loop conformations determined an RMSD of 3.72 ± 1.91 Å. Using CHARMM19/GB to refine the lattice conformations improved the sampling RMSD to 1.57 ± 0.98 Å and detection to 2.58 ± 1.48 Å. We found that further improvement could be gained by extending the upper temperature in the all-atom refinement from 400 to 800 K, which typically yields a reduction of approximately 1 Å or greater in the RMSD of the detected loop. Overall, CHARMM19 with a simple pairwise GB solvent model is more efficient at sampling low-RMSD loop basins than CHARMM22 with a higher-resolution modified analytical GB model; however, the latter simulation method provides a more accurate description of the all-atom energy surface, yet demands a much greater computational cost. (c) 2007 Wiley Periodicals, Inc.
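The replica exchange step mentioned above hinges on a simple Metropolis swap criterion between neighbouring temperatures; a minimal sketch follows. The temperatures, units and Boltzmann constant value are generic assumptions, not the paper's simulation settings.

    import math, random

    def attempt_swap(E_i, E_j, T_i, T_j, kB=0.0019872):  # kB in kcal/(mol K)
        # Metropolis criterion for exchanging configurations between replicas i and j:
        # accept with probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)]).
        beta_i, beta_j = 1.0 / (kB * T_i), 1.0 / (kB * T_j)
        delta = (beta_i - beta_j) * (E_i - E_j)
        return delta >= 0 or random.random() < math.exp(delta)

    # e.g. a swap attempt between replicas at 400 K and 800 K
    print(attempt_swap(E_i=-120.0, E_j=-95.0, T_i=400.0, T_j=800.0))

Raising the upper temperature (here 800 K, as discussed in the abstract) widens the energy ranges visited by the hottest replicas, which is what improves barrier crossing during refinement.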
Optimal social-networking strategy is a function of socioeconomic conditions.
Oishi, Shigehiro; Kesebir, Selin
2012-12-01
In the two studies reported here, we examined the relation among residential mobility, economic conditions, and optimal social-networking strategy. In study 1, a computer simulation showed that regardless of economic conditions, having a broad social network with weak friendship ties is advantageous when friends are likely to move away. By contrast, having a small social network with deep friendship ties is advantageous when the economy is unstable but friends are not likely to move away. In study 2, we examined the validity of the computer simulation using a sample of American adults. Results were consistent with the simulation: American adults living in a zip code where people are residentially stable but economically challenged were happier if they had a narrow but deep social network, whereas in other socioeconomic conditions, people were generally happier if they had a broad but shallow networking strategy. Together, our studies demonstrate that the optimal social-networking strategy varies as a function of socioeconomic conditions.
Pain Assessment and Management in Nursing Education Using Computer-based Simulations.
Romero-Hall, Enilda
2015-08-01
It is very important for nurses to have a clear understanding of the patient's pain experience and of management strategies. However, a review of the nursing literature shows that one of the main barriers to proper pain management practice is lack of knowledge. Nursing schools are in a unique position to address the gap in pain management knowledge by facilitating the acquisition and use of knowledge by the next generation of nurses. The purpose of this article is to discuss the role of computer-based simulations as a reliable educational technology strategy that can enhance the learning experience of nursing students acquiring pain management knowledge and practice. Computer-based simulations provide a significant number of learning affordances that can help change nursing students' attitudes and behaviors toward and practice of pain assessment and management. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
Efficient Strategies for Estimating the Spatial Coherence of Backscatter
Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.
2017-01-01
The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
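The ensemble correlation coefficient contrasted above with the conventional average correlation can be sketched as follows for paired aperture signals, where each row of x and y holds one kernel of depth samples. The single spatial lag and the synthetic data layout are simplifying assumptions, not the authors' beamforming code.

    import numpy as np

    def average_correlation(x, y):
        # Mean of per-sample normalized correlations between two aperture signals.
        num = np.sum(x * y, axis=1)
        den = np.sqrt(np.sum(x**2, axis=1) * np.sum(y**2, axis=1))
        return np.mean(num / den)

    def ensemble_correlation(x, y):
        # Ensemble estimator: pool the numerator and the energies over all samples before
        # normalizing; the study reports this has lower variance than averaging per sample.
        num = np.sum(x * y)
        den = np.sqrt(np.sum(x**2) * np.sum(y**2))
        return num / den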
The purpose of this manuscript is to describe the practical strategies developed for the implementation of the Minnesota Children's Pesticide Exposure Study (MNCPES), which is one of the first probability-based samples of multi-pathway and multi-pesticide exposures in children....
NASA Astrophysics Data System (ADS)
Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Santos-Martins, Diogo; Olson, Arthur J.
2014-04-01
To rigorously assess the tools and protocols that can be used to understand and predict macromolecular recognition, and to gain more structural insight into three newly discovered allosteric binding sites on a critical drug target involved in the treatment of HIV infections, the Olson and Levy labs collaborated on the SAMPL4 challenge. This computational blind challenge involved predicting protein-ligand binding against the three allosteric sites of HIV integrase (IN), a viral enzyme for which two drugs (that target the active site) have been approved by the FDA. Positive control cross-docking experiments were utilized to select 13 receptor models out of an initial ensemble of 41 different crystal structures of HIV IN. These 13 models of the targets were selected using our new "Rank Difference Ratio" metric. The first stage of SAMPL4 involved using virtual screens to identify 62 active, allosteric IN inhibitors out of a set of 321 compounds. The second stage involved predicting the binding site(s) and crystallographic binding mode(s) for 57 of these inhibitors. Our team submitted four entries for the first stage that utilized: (1) AutoDock Vina (AD Vina) plus visual inspection; (2) a new common pharmacophore engine; (3) BEDAM replica exchange free energy simulations, and a Consensus approach that combined the predictions of all three strategies. Even with the SAMPL4's very challenging compound library that displayed a significantly lower amount of structural diversity than most libraries that are conventionally employed in prospective virtual screens, these approaches produced hit rates of 24, 25, 34, and 27 %, respectively, on a set with 19 % declared binders. Our only entry for the second stage challenge was based on the results of AD Vina plus visual inspection, and it ranked third place overall according to several different metrics provided by the SAMPL4 organizers. The successful results displayed by these approaches highlight the utility of the computational structure-based drug discovery tools and strategies that are being developed to advance the goals of the newly created, multi-institution, NIH-funded center called the "HIV Interaction and Viral Evolution Center".
Perryman, Alexander L; Santiago, Daniel N; Forli, Stefano; Martins, Diogo Santos; Olson, Arthur J
2014-04-01
To rigorously assess the tools and protocols that can be used to understand and predict macromolecular recognition, and to gain more structural insight into three newly discovered allosteric binding sites on a critical drug target involved in the treatment of HIV infections, the Olson and Levy labs collaborated on the SAMPL4 challenge. This computational blind challenge involved predicting protein-ligand binding against the three allosteric sites of HIV integrase (IN), a viral enzyme for which two drugs (that target the active site) have been approved by the FDA. Positive control cross-docking experiments were utilized to select 13 receptor models out of an initial ensemble of 41 different crystal structures of HIV IN. These 13 models of the targets were selected using our new "Rank Difference Ratio" metric. The first stage of SAMPL4 involved using virtual screens to identify 62 active, allosteric IN inhibitors out of a set of 321 compounds. The second stage involved predicting the binding site(s) and crystallographic binding mode(s) for 57 of these inhibitors. Our team submitted four entries for the first stage that utilized: (1) AutoDock Vina (AD Vina) plus visual inspection; (2) a new common pharmacophore engine; (3) BEDAM replica exchange free energy simulations, and a Consensus approach that combined the predictions of all three strategies. Even with the SAMPL4's very challenging compound library that displayed a significantly lower amount of structural diversity than most libraries that are conventionally employed in prospective virtual screens, these approaches produced hit rates of 24, 25, 34, and 27 %, respectively, on a set with 19 % declared binders. Our only entry for the second stage challenge was based on the results of AD Vina plus visual inspection, and it ranked third place overall according to several different metrics provided by the SAMPL4 organizers. The successful results displayed by these approaches highlight the utility of the computational structure-based drug discovery tools and strategies that are being developed to advance the goals of the newly created, multi-institution, NIH-funded center called the "HIV Interaction and Viral Evolution Center".
Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem
NASA Astrophysics Data System (ADS)
Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa
A branch-and-bound algorithm (BB for short) is the most general technique for dealing with various combinatorial optimization problems. Even when it is used, computation time is likely to grow exponentially, so we consider parallelization to reduce it. It has been reported that the computation time of a parallel BB depends heavily upon the node-variable selection strategy. In a parallel BB it is also necessary to prevent communication time from increasing, so it is important to pay attention to how many and what kind of nodes are to be transferred (the sending-node selection strategy). In this paper, for the graph coloring problem, we propose several sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.
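For orientation, a serial best-first branch-and-bound for graph coloring is sketched below. The bound (number of colours already used) and the fixed vertex order are generic choices for illustration; the parallel version described in the abstract would additionally decide which of the pending heap nodes to send to other MPI processes.

    import heapq

    def chromatic_number_bb(adj):
        # adj[v] is the set of neighbours of vertex v; vertices are coloured in index order.
        n = len(adj)
        best = n                                # trivial upper bound: one colour per vertex
        heap = [(0, [])]                        # node = (colours used so far, partial assignment)
        while heap:
            used, assign = heapq.heappop(heap)  # node-selection strategy: smallest bound first
            if used >= best:
                continue                        # prune: cannot improve the incumbent
            v = len(assign)
            if v == n:
                best = used                     # complete colouring found
                continue
            # branch: reuse an existing colour or open exactly one new colour (breaks symmetry)
            for c in range(min(used + 1, best - 1)):
                if all(assign[u] != c for u in adj[v] if u < v):
                    heapq.heappush(heap, (max(used, c + 1), assign + [c]))
        return best

    # example: a 4-cycle needs 2 colours
    print(chromatic_number_bb([{1, 3}, {0, 2}, {1, 3}, {0, 2}]))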
Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith
2011-01-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population based sample is not essential. PMID:20727089
Recruitment of hard-to-reach population subgroups via adaptations of the snowball sampling strategy.
Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith
2010-09-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author's program of research are provided to demonstrate how adaptations of snowball sampling can be used effectively in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more-vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or for research studies when the recruitment of a population-based sample is not essential.
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to large reductions in memory, communications, and computational effort in a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features and benefits are discussed in this paper. Following Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
Sampling strategies exploiting multi-pumping flow systems.
Prior, João A V; Santos, João L M; Lima, José L F C
2003-04-01
In this work new strategies were exploited to implement multi-pumping flow systems relying on the utilisation of multiple devices that act simultaneously as sample-insertion, reagent-introduction, and solution-propelling units. The solenoid micro-pumps that were initially used as the only active elements of multi-pumping systems, and which were able to produce pulses of 3 to 25 microL, were replaced by syringe pumps with the aim of producing pulses between 1 and 4 microL. The performance of the developed flow system was assessed by using distinct sample-insertion strategies like single sample volume, merging zones, and binary sampling in the spectrophotometric determination of isoniazid in pharmaceutical formulations upon reaction with 1,2-naphthoquinone-4-sulfonate, in alkaline medium. The results obtained showed that enhanced sample/reagent mixing could be obtained with binary sampling and by using a 1 microL per step pump, even in limited dispersion conditions. Moreover, syringe pumps produce very reproducible flowing streams and are easily manipulated and controlled by a computer program, which is greatly simplified since they are the only active manifold component. Linear calibration plots up to 18.0 microg mL(-1), with a relative standard deviation of less than 1.48% (n=10) and a throughput of about 20 samples per hour, were obtained.
NASA Astrophysics Data System (ADS)
Lu, Xinguo; Chen, Dan
2017-08-01
Traditional supervised classifiers neglect the large amount of data that lacks sufficient follow-up information and work only with labeled data. Consequently, the small sample size limits the design of an appropriate classifier. In this paper, a transductive learning method is presented that combines a filtering strategy within the transductive framework with a progressive labeling strategy. The progressive labeling strategy does not need to consider the distribution of the labeled samples in order to evaluate the distribution of the unlabeled samples, and can effectively solve the problem of estimating the proportion of positive and negative samples in the working set. Our experimental results demonstrate that the proposed technique has great potential for cancer prediction based on gene expression.
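A progressive labeling loop of this kind can be sketched as generic self-training: train on the labeled set, pseudo-label the most confidently predicted unlabeled samples, and repeat. The classifier choice, number of rounds and confidence rule below are illustrative assumptions, not the authors' method or filtering strategy.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def progressive_labeling(X_lab, y_lab, X_unlab, rounds=5, top_k=10):
        # Self-training sketch: repeatedly promote the most confident unlabeled samples.
        X_lab, y_lab, X_unlab = map(np.asarray, (X_lab, y_lab, X_unlab))
        for _ in range(rounds):
            if len(X_unlab) == 0:
                break
            clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
            proba = clf.predict_proba(X_unlab)
            conf = proba.max(axis=1)
            idx = np.argsort(conf)[-top_k:]                    # most confident unlabeled samples
            pseudo = clf.classes_[proba[idx].argmax(axis=1)]   # their predicted (pseudo) labels
            X_lab = np.vstack([X_lab, X_unlab[idx]])
            y_lab = np.concatenate([y_lab, pseudo])
            X_unlab = np.delete(X_unlab, idx, axis=0)
        return LogisticRegression(max_iter=1000).fit(X_lab, y_lab)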
ERIC Educational Resources Information Center
Gao, Li
2013-01-01
With the advent of computer-assisted autonomous learning, English listening has become more challenging for Chinese college English learners. Metacognitive strategies, often adopted in a process-based approach, place more emphasis on the listening process. This paper discusses the feasibility of metacognitive strategies in English listening instruction in…
ERIC Educational Resources Information Center
Lin, Huifen
2011-01-01
The purpose of this study was to investigate the relative effectiveness of different types of visuals (static and animated) and instructional strategies (no strategy, questions, and questions plus feedback) used to complement visualized materials on students' learning of different educational objectives in a computer-based instructional (CBI)…
Orbit determination strategy and results for the Pioneer 10 Jupiter mission
NASA Technical Reports Server (NTRS)
Wong, S. K.; Lubeley, A. J.
1974-01-01
Pioneer 10 is the first earth-based vehicle to encounter Jupiter and occult its moon Io. In contributing to the success of the mission, the Orbit Determination Group evaluated the effects of the dominant error sources on the spacecraft's computed orbit and devised an encounter strategy minimizing the effects of these error sources. The encounter results indicated that: (1) errors in the satellite model played a very important role in the accuracy of the computed orbit, (2) the encounter strategy was sound, (3) all mission objectives were met, and (4) the Jupiter-Saturn mission for Pioneer 11 is within the navigation capability.
n-body simulations using message passing parallel computers.
NASA Astrophysics Data System (ADS)
Grama, A. Y.; Kumar, V.; Sameh, A.
The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.
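The serial kernel whose tree traversal those parallel formulations partition is the Barnes-Hut opening criterion; a minimal sketch is given below. The Cell container, the theta value and the monopole-only far-field approximation are illustrative assumptions, not the nCUBE2 implementation.

    from dataclasses import dataclass, field
    from typing import List
    import numpy as np

    @dataclass
    class Cell:
        com: np.ndarray            # centre of mass of the cell
        mass: float
        size: float                # side length of the cell
        children: List["Cell"] = field(default_factory=list)

    def accel_on(p, cell, theta=0.5, G=1.0, eps=1e-3):
        # Barnes-Hut opening criterion: a distant cell is treated as one pseudo-particle
        # when size/distance < theta; otherwise its children are visited recursively.
        d = float(np.linalg.norm(cell.com - p)) + eps
        if not cell.children or cell.size / d < theta:
            return G * cell.mass * (cell.com - p) / d**3   # monopole (far-field) approximation
        return sum(accel_on(p, c, theta, G, eps) for c in cell.children)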
The Role of Context-Related Parameters in Adults' Mental Computational Acts
ERIC Educational Resources Information Center
Naresh, Nirmala; Presmeg, Norma
2012-01-01
Researchers who have carried out studies pertaining to mental computation and everyday mathematics point out that adults and children reason intuitively based upon experiences within specific contexts; they use invented strategies of their own to solve real-life problems. We draw upon research areas of mental computation and everyday mathematics…
NASA Technical Reports Server (NTRS)
Chu, Y. Y.
1978-01-01
A unified formulation of computer-aided, multi-task decision making is presented. A strategy for the allocation of decision-making responsibility between human and computer is developed. The plans of a flight management system are studied. A model based on queueing theory was implemented.
Baranes, Adrien F; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline
2014-01-01
Devising efficient strategies for exploration in large open-ended spaces is one of the most difficult computational problems of intelligent organisms. Because the available rewards are ambiguous or unknown during the exploratory phase, subjects must act in an intrinsically motivated fashion. However, a vast majority of behavioral and neural studies to date have focused on decision making in reward-based tasks, and the rules guiding intrinsically motivated exploration remain largely unknown. To examine this question we developed a paradigm for systematically testing the choices of human observers in a free play context. Adult subjects played a series of short computer games of variable difficulty, and freely chose which game they wished to sample without external guidance or physical rewards. Subjects performed the task in three distinct conditions in which they sampled from a small or a large choice set (7 vs. 64 possible levels of difficulty), and in which they did or did not have the possibility to sample new games at a constant level of difficulty. We show that despite the absence of external constraints, the subjects spontaneously adopted a structured exploration strategy whereby they (1) started with easier games and progressed to more difficult games, (2) sampled the entire choice set, including extremely difficult games that could not be learnt, (3) repeated moderately and highly difficult games much more frequently than was predicted by chance, and (4) had higher repetition rates and chose higher speeds if they could generate new sequences at a constant level of difficulty. The results suggest that intrinsically motivated exploration is shaped by several factors, including task difficulty, novelty and the size of the choice set, and these come into play to serve two internal goals: to maximize the subjects' knowledge of the available tasks (exploring the limits of the task set) and to maximize their competence (performance and skills) across the task set.
Expert Systems: Tutors, Tools, and Tutees.
ERIC Educational Resources Information Center
Lippert, Renate C.
1989-01-01
Discusses the current status, research, and practical implications of artificial intelligence and expert systems in education. Topics discussed include computer-assisted instruction; intelligent computer-assisted instruction; intelligent tutoring systems; instructional strategies involving the creation of knowledge bases; decision aids;…
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-08
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation time, making simulation a comprehensively data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of the huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computation and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation in raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward, covering the programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4x speedup over the baseline serial approach in an 8-node cloud environment, and that each optimization strategy contributes an improvement of about 20%. This work shows that the proposed cloud algorithm is capable of addressing the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration.
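To make the map/reduce decomposition concrete, the toy sketch below maps each ground scatterer to complex echoes keyed by (pulse index, range bin) and reduces by coherent accumulation. The flat geometry, single-channel signal model and in-memory reducer are simplifying assumptions with no relation to the paper's Hadoop implementation.

    from collections import defaultdict
    import cmath

    def map_scatterer(scatterer, pulses):
        # Map step: one ground scatterer emits (key, value) pairs, where the key is the
        # (pulse index, range bin) it contributes to and the value is its complex echo.
        x, y, sigma = scatterer
        for p, (px, py, wavelength, bin_size) in enumerate(pulses):
            r = ((x - px)**2 + (y - py)**2) ** 0.5
            phase = -4 * cmath.pi * r / wavelength
            yield (p, int(r / bin_size)), sigma * cmath.exp(1j * phase)

    def reduce_accumulate(pairs):
        # Reduce step: coherently accumulate all echoes sharing the same key, i.e. the
        # irregular accumulation that the abstract maps onto the MapReduce model.
        raw = defaultdict(complex)
        for key, echo in pairs:
            raw[key] += echo
        return raw

    # raw = reduce_accumulate(kv for s in scene for kv in map_scatterer(s, pulses))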
Strategies for Balanced Rural-Urban Growth. Agricultural Information Bulletin No. 392.
ERIC Educational Resources Information Center
Edwards, Clark
Summarizing an Economic Research Service (ERS) publication, this guide to balanced rural-urban growth describes the results of a computer-based ERS model that examined seven strategies to improve rural economic development. Based on 1960-70 trends, the model is described as asking how much would be required of each of the following strategies…
ERIC Educational Resources Information Center
Sedig, Kamran
2008-01-01
Many children do not like learning mathematics. They do not find mathematics fun, motivating, and engaging, and they think it is difficult to learn. Computer-based games have the potential and possibility of addressing this problem. This paper proposes a strategy for designing game-based learning environments that takes advantage of the…
Prykhozhij, Sergey V; Rajan, Vinothkumar; Berman, Jason N
2016-02-01
The development of clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 technology for mainstream biotechnological use based on its discovery as an adaptive immune mechanism in bacteria has dramatically improved the ability of molecular biologists to modify genomes of model organisms. The zebrafish is highly amenable to applications of CRISPR/Cas9 for mutation generation and a variety of DNA insertions. Cas9 protein in complex with a guide RNA molecule recognizes where to cut the homologous DNA based on a short stretch of DNA termed the protospacer-adjacent motif (PAM). Rapid and efficient identification of target sites immediately preceding PAM sites, quantification of genomic occurrences of similar (off target) sites and predictions of cutting efficiency are some of the features where computational tools play critical roles in CRISPR/Cas9 applications. Given the rapid advent and development of this technology, it can be a challenge for researchers to remain up to date with all of the important technological developments in this field. We have contributed to the armamentarium of CRISPR/Cas9 bioinformatics tools and trained other researchers in the use of appropriate computational programs to develop suitable experimental strategies. Here we provide an in-depth guide on how to use CRISPR/Cas9 and other relevant computational tools at each step of a host of genome editing experimental strategies. We also provide detailed conceptual outlines of the steps involved in the design and execution of CRISPR/Cas9-based experimental strategies, such as generation of frameshift mutations, larger chromosomal deletions and inversions, homology-independent insertion of gene cassettes and homology-based knock-in of defined point mutations and larger gene constructs.
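One of the basic computational steps described, locating target sites immediately preceding a PAM, can be sketched as a simple scan for 20-nt protospacers followed by NGG on the forward strand. This generic illustration is not one of the authors' recommended tools; real designs would also handle the reverse strand and score off-target occurrences and cutting efficiency.

    import re

    def find_cas9_targets(seq, protospacer_len=20):
        # Scan a DNA sequence for SpCas9-style sites: a 20-nt protospacer immediately
        # followed by an NGG PAM (forward strand only, overlapping sites allowed).
        seq = seq.upper()
        pattern = re.compile(r"(?=([ACGT]{%d})[ACGT]GG)" % protospacer_len)
        return [(m.start(), m.group(1)) for m in pattern.finditer(seq)]

    # hypothetical example sequence, not from the zebrafish genome
    print(find_cas9_targets("TTGACCTGAATGGAAGCCTACGTAGCTAGGATCCGGTTACG"))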
More efficient evolutionary strategies for model calibration with watershed model for demonstration
NASA Astrophysics Data System (ADS)
Baggett, J. S.; Skahill, B. E.
2008-12-01
Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which in combination have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability, and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but which, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their combined effect is greater than that of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems.
References:
Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañga, I. Inza and E. Bengoetxea (Eds.), Towards a New Evolutionary Computation: Advances in Estimation of Distribution Algorithms, pp. 75-102, Springer.
Kern, S., N. Hansen and P. Koumoutsakos (2006). Local Meta-Models for Optimization Using Evolution Strategies. In Ninth International Conference on Parallel Problem Solving from Nature (PPSN IX), Proceedings, pp. 939-948, Berlin: Springer.
Tahk, M., Woo, H., and Park, M. (2007). A hybrid optimization of evolutionary and gradient search. Engineering Optimization, 39, 87-104.
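The surrogate idea (running the expensive model on only a fraction of each generation) can be sketched as follows. The fixed fraction, the minimization convention and the function names are illustrative assumptions rather than the adaptive meta-model scheme of Kern et al. cited above.

    import numpy as np

    def rank_with_surrogate(offspring, surrogate, expensive_model, frac=0.3):
        # Rank all offspring with a cheap surrogate and run the expensive model
        # only on the most promising fraction; surrogate values stand in for the rest.
        approx = np.array([surrogate(x) for x in offspring])
        order = np.argsort(approx)                       # minimization: best first
        n_exact = max(1, int(frac * len(offspring)))
        exact = {int(i): expensive_model(offspring[i]) for i in order[:n_exact]}
        fitness = [exact.get(i, approx[i]) for i in range(len(offspring))]
        return np.array(fitness)                         # fed back to the ES selection step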
ERIC Educational Resources Information Center
Adesope, Olusola O.; Cavagnetto, Andy; Hunsu, Nathaniel J.; Anguiano, Carlos; Lloyd, Joshua
2017-01-01
This study used a between-subjects experimental design to examine the effects of three different computer-based instructional strategies (concept map, refutation text, and expository scientific text) on science learning. Concept maps are node-link diagrams that show concepts as nodes and relationships among the concepts as labeled links.…
Using a Computer-based Messaging System at a High School To Increase School/Home Communication.
ERIC Educational Resources Information Center
Burden, Mitzi K.
Minimal communication between school and home was found to contribute to low performance by students at McDuffie High School (South Carolina). This report describes the experience of establishing a computer-based telephone messaging system in the high school and involving parents, teachers, and students in its use. Additional strategies employed…
ERIC Educational Resources Information Center
Lahey, George F.; Coady, James D.
This study was conducted to determine whether learner control of lesson strategy is superior to programmed control in computer-based instruction (CBI), and, if so, whether learner control is more effective when guidance is provided. Subjects were 164 trainees assigned to the Basic Electricity/Electronics School in San Diego. They were randomly…
ERIC Educational Resources Information Center
Bouck, Emily C.; Meyer, Nancy K.; Satsangi, Rajiv; Savage, Melissa N.; Hunley, Megan
2015-01-01
Written expression is a neglected but critical component of education; yet, the writing process--from prewriting, to writing, and postwriting--is often an area of struggle for students with disabilities. One strategy to assist students with disabilities struggling with the writing process is the use of computer-based technology. This article…
NASA Astrophysics Data System (ADS)
Qin, Cheng-Zhi; Zhan, Lijun
2012-06-01
As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
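For reference, the per-cell operation that an MFD algorithm parallelizes is the partition of outflow among downslope neighbours. The sketch below uses a generic slope-exponent rule with an assumed exponent; the MFD-md algorithm implemented in the paper additionally adapts the exponent to local terrain, and the GPU version evaluates such cells in parallel.

    import numpy as np

    def mfd_fractions(dem, row, col, p=1.1, cellsize=1.0):
        # Fraction of flow a cell passes to each of its 8 neighbours, proportional to slope**p.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        weights = np.zeros(len(offsets))
        for k, (dr, dc) in enumerate(offsets):
            r, c = row + dr, col + dc
            if 0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]:
                dist = cellsize * (2 ** 0.5 if dr and dc else 1.0)
                slope = (dem[row, col] - dem[r, c]) / dist
                if slope > 0:                     # only downslope neighbours receive flow
                    weights[k] = slope ** p
        total = weights.sum()
        return weights / total if total > 0 else weights  # zero vector in a pit or flat cell

The iterative DEM preprocessing step discussed in the abstract exists precisely to remove the pits and flats for which this partition returns no downslope neighbour.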
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Casey W.; Green, Garrett; Noticewala, Sonal S.
Purpose: Validated models are needed to justify strategies to define planning target volumes (PTVs) for intact cervical cancer used in clinical practice. Our objective was to independently validate a previously published shape model, using data collected prospectively from clinical trials. Methods and Materials: We analyzed 42 patients with intact cervical cancer treated with daily fractionated pelvic intensity modulated radiation therapy and concurrent chemotherapy in one of 2 prospective clinical trials. We collected online cone beam computed tomography (CBCT) scans before each fraction. Clinical target volume (CTV) structures from the planning computed tomography scan were cast onto each CBCT scan after rigid registration and manually redrawn to account for organ motion and deformation. We applied the 95% isodose cloud from the planning computed tomography scan to each CBCT scan and computed any CTV outside the 95% isodose cloud. The primary aim was to determine the proportion of CTVs that were encompassed within the 95% isodose volume. A 1-sample t test was used to test the hypothesis that the probability of complete coverage was different from 95%. We used mixed-effects logistic regression to assess effects of time and patient variability. Results: The 95% isodose line completely encompassed 92.3% of all CTVs (95% confidence interval, 88.3%-96.4%), not significantly different from the 95% probability anticipated a priori (P=.19). The overall proportion of missed CTVs was small: the grand mean of covered CTVs was 99.9%, and 95.2% of misses were located in the anterior body of the uterus. Time did not affect coverage probability (P=.71). Conclusions: With the clinical implementation of a previously proposed PTV definition strategy based on a shape model for intact cervical cancer, the probability of CTV coverage was high and the volume of CTV missed was low. This PTV expansion strategy is acceptable for clinical trials and practice; however, we recommend daily image guidance to avoid systematic large misses in select patients.
Multi-view information fusion for automatic BI-RADS description of mammographic masses
NASA Astrophysics Data System (ADS)
Narvaez, Fabián; Díaz, Gloria; Romero, Eduardo
2011-03-01
Most CBIR-based CAD systems (content-based image retrieval systems for computer-aided diagnosis) identify lesions that are potentially relevant, but base their analysis upon a single independent view. This article presents a CBIR framework which automatically describes mammographic masses with the BI-RADS lexicon, fusing information from the two mammographic views. After an expert selects a region of interest (RoI) in each of the two views, a CBIR strategy searches for similar masses in the database by automatically computing the Mahalanobis distance between the shape and texture feature vectors of the mammograms. The strategy was assessed on a set of 400 cases, for which the suggested descriptions were compared with the ground truth provided by the database. Two information fusion strategies were evaluated, allowing a retrieval precision rate of 89.6% in the best scheme. Likewise, the best performance obtained for shape, margin and pathology description, using a ROC methodology, was reported as AUC = 0.86, AUC = 0.72 and AUC = 0.85, respectively.
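A minimal sketch of the Mahalanobis-distance retrieval step is given below; estimating the covariance from the database feature vectors and using a pseudo-inverse as a safeguard against singularity are generic choices, not details taken from the paper.

    import numpy as np

    def mahalanobis_retrieval(query_vec, db_vecs, top_k=10):
        # Rank database masses by Mahalanobis distance between their shape/texture
        # feature vectors and the query RoI's feature vector.
        db_vecs = np.asarray(db_vecs, dtype=float)
        cov = np.cov(db_vecs, rowvar=False)
        cov_inv = np.linalg.pinv(cov)                 # pseudo-inverse guards against singularity
        diff = db_vecs - np.asarray(query_vec, dtype=float)
        d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared Mahalanobis distances
        return np.argsort(d2)[:top_k]                 # indices of the most similar masses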
Clarke, Diana E; Narrow, William E; Regier, Darrel A; Kuramoto, S Janet; Kupfer, David J; Kuhl, Emily A; Greiner, Lisa; Kraemer, Helena C
2013-01-01
This article discusses the design, sampling strategy, implementation, and data analytic processes of the DSM-5 Field Trials. The DSM-5 Field Trials were conducted by using a test-retest reliability design with a stratified sampling approach across six adult and four pediatric sites in the United States and one adult site in Canada. A stratified random sampling approach was used to enhance precision in the estimation of the reliability coefficients. A web-based research electronic data capture system was used for simultaneous data collection from patients and clinicians across sites and for centralized data management. Weighted descriptive analyses, intraclass kappa and intraclass correlation coefficients for stratified samples, and receiver operating curves were computed. The DSM-5 Field Trials capitalized on advances since DSM-III and DSM-IV in statistical measures of reliability (i.e., intraclass kappa for stratified samples) and other recently developed measures to determine confidence intervals around kappa estimates. Diagnostic interviews using DSM-5 criteria were conducted by 279 clinicians of varied disciplines who received training comparable to what would be available to any clinician after publication of DSM-5. Overall, 2,246 patients with various diagnoses and levels of comorbidity were enrolled, of which over 86% were seen for two diagnostic interviews. A range of reliability coefficients were observed for the categorical diagnoses and dimensional measures. Multisite field trials and training comparable to what would be available to any clinician after publication of DSM-5 provided “real-world” testing of DSM-5 proposed diagnoses.
Dollé, Laurent; Chavarriaga, Ricardo
2018-01-01
We present a computational model of spatial navigation comprising different learning mechanisms in mammals, i.e., associative, cognitive mapping and parallel systems. This model is able to reproduce a large number of experimental results in different variants of the Morris water maze task, including standard associative phenomena (spatial generalization gradient and blocking), as well as navigation based on cognitive mapping. Furthermore, we show that competitive and cooperative patterns between different navigation strategies in the model allow us to explain previous, apparently contradictory results supporting either associative or cognitive mechanisms for spatial learning. The key computational mechanism for reconciling experimental results showing different influences of distal and proximal cues on behavior, different learning times, and different abilities of individuals to alternately perform spatial and response strategies lies in the dynamic coordination of navigation strategies, whose performance is evaluated online with a common currency through a modular approach. We provide a set of concrete experimental predictions to further test the computational model. Overall, this computational work sheds new light on inter-individual differences in navigation learning, and provides a formal and mechanistic approach to test various theories of spatial cognition in mammals. PMID:29630600
A fully automatic microcalcification detection approach based on deep convolution neural network
NASA Astrophysics Data System (ADS)
Cai, Guanxiong; Guo, Yanhui; Zhang, Yaqin; Qin, Genggeng; Zhou, Yuanpin; Lu, Yao
2018-02-01
Breast cancer is one of the most common cancers and has high morbidity and mortality worldwide, posing a serious threat to human health. The emergence of microcalcifications (MCs) is an important signal of early breast cancer. However, it is still challenging and time consuming for radiologists to identify tiny and subtle individual MCs in mammograms. This study proposed a novel computer-aided MC detection algorithm for full-field digital mammograms (FFDMs) using a deep convolution neural network (DCNN). First, an MC candidate detection system was used to obtain potential MC candidates. Then a DCNN was trained using a novel adaptive learning strategy, the neutrosophic reinforcement sample learning (NRSL) strategy, to speed up the learning process. The trained DCNN served to recognize true MCs. After the candidates were classified by the DCNN, a density-based regional clustering method was applied to form MC clusters. The accuracy of the DCNN trained with the proposed NRSL strategy converged faster and reached higher values than that of a traditionally trained DCNN at the same epochs, attaining an accuracy of 99.87% on the training set, 95.12% on the validation set, and 93.68% on the testing set at epoch 40. For cluster-based MC cluster detection evaluation, a sensitivity of 90% was achieved at 0.13 false positives (FPs) per image. The obtained results demonstrate that the designed DCNN plays a significant role in MC detection after prior training.
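The following is a minimal, hypothetical sketch (in PyTorch) of a patch-level CNN classifier of the kind that could separate true MCs from false candidates; the layer sizes, patch size, and two-class output are illustrative assumptions and do not reproduce the paper's DCNN architecture or its NRSL training strategy.

import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)  # true MC vs. false positive

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = PatchCNN()
patches = torch.randn(8, 1, 32, 32)                 # a batch of candidate patches (synthetic)
logits = model(patches)
print(logits.shape)                                 # torch.Size([8, 2])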
Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian
2016-06-18
In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of high-quality recovery from sparsely sampled data. Recently, algorithms based on DL (dictionary learning) were developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which causes reconstruction quality to deteriorate as the sampling rate declines further. It is therefore essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method can alleviate the over-smoothing effect of the L2 minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem with a weighting strategy, solving the resulting weighted L2-minimization problem by IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. The results reveal that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is reduced further or the noise is increased. The proposed L1-DL algorithm can utilize more prior information on image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with an L1-norm one and solving the L1-minimization problem with an IRLS strategy, L1-DL can reconstruct the image more exactly.
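The core IRLS idea, replacing an L1 penalty by a sequence of weighted L2 problems, can be sketched on a toy sparse-recovery problem as follows; this is not the full dictionary-learning CT reconstruction, and the problem sizes, regularization weight, and stabilizing epsilon are assumptions for the example.

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(80, 40))
x_true = np.zeros(40); x_true[[3, 17, 25]] = [2.0, -1.5, 1.0]   # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=80)

lam, eps = 0.1, 1e-6
x = np.zeros(40)
for _ in range(50):
    w = 1.0 / (np.abs(x) + eps)             # IRLS weights approximating the L1 term
    # Weighted L2 subproblem: (A^T A + lam * diag(w)) x = A^T b
    x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)

print(np.round(x[[3, 17, 25]], 2))          # recovered sparse coefficients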
Qian, Yu; Wei, Chungwen; Lee, F. Eun-Hyung; Campbell, John; Halliley, Jessica; Lee, Jamie A.; Cai, Jennifer; Kong, Megan; Sadat, Eva; Thomson, Elizabeth; Dunn, Patrick; Seegmiller, Adam C.; Karandikar, Nitin J.; Tipton, Chris; Mosmann, Tim; Sanz, Iñaki; Scheuermann, Richard H.
2011-01-01
Background Advances in multi-parameter flow cytometry (FCM) now allow for the independent detection of larger numbers of fluorochromes on individual cells, generating data with increasingly higher dimensionality. The increased complexity of these data has made it difficult to identify cell populations from high-dimensional FCM data using traditional manual gating strategies based on single-color or two-color displays. Methods To address this challenge, we developed a novel program, FLOCK (FLOw Clustering without K), that uses a density-based clustering approach to algorithmically identify biologically relevant cell populations from multiple samples in an unbiased fashion, thereby eliminating operator-dependent variability. Results FLOCK was used to objectively identify seventeen distinct B cell subsets in a human peripheral blood sample and to identify and quantify novel plasmablast subsets responding transiently to tetanus and other vaccinations in peripheral blood. FLOCK has been implemented in the publicly available Immunology Database and Analysis Portal – ImmPort (http://www.immport.org) for open use by the immunology research community. Conclusions FLOCK is able to identify cell subsets in experiments that use multi-parameter flow cytometry through an objective, automated computational approach. The use of algorithms like FLOCK for FCM data analysis obviates the need for subjective and labor intensive manual gating to identify and quantify cell subsets. Novel populations identified by these computational approaches can serve as hypotheses for further experimental study. PMID:20839340
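A generic illustration of density-based clustering of flow-cytometry-like events is sketched below; FLOCK's own algorithm is not reproduced, and scikit-learn's DBSCAN is used only as a stand-in density-based method on synthetic 4-dimensional data.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
pop_a = rng.normal(loc=[2, 2, 0, 0], scale=0.3, size=(500, 4))
pop_b = rng.normal(loc=[0, 4, 3, 1], scale=0.3, size=(300, 4))
events = np.vstack([pop_a, pop_b])            # two synthetic cell populations

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(events)
for lab in sorted(set(labels)):
    name = "noise" if lab == -1 else f"population {lab}"
    print(name, (labels == lab).sum(), "events")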
PREDICTORS OF COMPUTER USE IN COMMUNITY-DWELLING ETHNICALLY DIVERSE OLDER ADULTS
Werner, Julie M.; Carlson, Mike; Jordan-Marsh, Maryalice; Clark, Florence
2011-01-01
Objective In this study we analyzed self-reported computer use, demographic variables, psychosocial variables, and health and well-being variables collected from 460 ethnically diverse, community-dwelling elders in order to investigate the relationship computer use has with demographics, well-being and other key psychosocial variables in older adults. Background Although younger elders with more education, those who employ active coping strategies, or those who are low in anxiety levels are thought to use computers at higher rates than others, previous research has produced mixed or inconclusive results regarding ethnic, gender, and psychological factors, or has concentrated on computer-specific psychological factors only (e.g., computer anxiety). Few such studies have employed large sample sizes or have focused on ethnically diverse populations of community-dwelling elders. Method With a large number of overlapping predictors, zero-order analysis alone is poorly equipped to identify variables that are independently associated with computer use. Accordingly, both zero-order and stepwise logistic regression analyses were conducted to determine the correlates of two types of computer use: email and general computer use. Results Results indicate that younger age, greater level of education, non-Hispanic ethnicity, behaviorally active coping style, general physical health, and role-related emotional health each independently predicted computer usage. Conclusion Study findings highlight differences in computer usage, especially in regard to Hispanic ethnicity and specific health and well-being factors. Application Potential applications of this research include future intervention studies, individualized computer-based activity programming, or customizable software and user interface design for older adults responsive to a variety of personal characteristics and capabilities. PMID:22046718
Predictors of computer use in community-dwelling, ethnically diverse older adults.
Werner, Julie M; Carlson, Mike; Jordan-Marsh, Maryalice; Clark, Florence
2011-10-01
In this study, we analyzed self-reported computer use, demographic variables, psychosocial variables, and health and well-being variables collected from 460 ethnically diverse, community-dwelling elders to investigate the relationship computer use has with demographics, well-being, and other key psychosocial variables in older adults. Although younger elders with more education, those who employ active coping strategies, or those who are low in anxiety levels are thought to use computers at higher rates than do others, previous research has produced mixed or inconclusive results regarding ethnic, gender, and psychological factors or has concentrated on computer-specific psychological factors only (e.g., computer anxiety). Few such studies have employed large sample sizes or have focused on ethnically diverse populations of community-dwelling elders. With a large number of overlapping predictors, zero-order analysis alone is poorly equipped to identify variables that are independently associated with computer use. Accordingly, both zero-order and stepwise logistic regression analyses were conducted to determine the correlates of two types of computer use: e-mail and general computer use. Results indicate that younger age, greater level of education, non-Hispanic ethnicity, behaviorally active coping style, general physical health, and role-related emotional health each independently predicted computer usage. Study findings highlight differences in computer usage, especially in regard to Hispanic ethnicity and specific health and well-being factors. Potential applications of this research include future intervention studies, individualized computer-based activity programming, or customizable software and user interface design for older adults responsive to a variety of personal characteristics and capabilities.
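A sketch of the kind of multivariable logistic regression used to relate computer use to candidate predictors is shown below; the variable names, effect sizes, and data are synthetic assumptions, not the study's dataset or its exact zero-order and stepwise procedures.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 460
df = pd.DataFrame({
    "age": rng.normal(75, 7, n),
    "education_years": rng.normal(12, 3, n),
    "active_coping": rng.normal(0, 1, n),
})
# Synthetic outcome: computer use more likely for younger, more educated, active copers
logit_p = -0.08 * (df["age"] - 75) + 0.2 * (df["education_years"] - 12) + 0.3 * df["active_coping"]
df["uses_computer"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["age", "education_years", "active_coping"]])
model = sm.Logit(df["uses_computer"], X).fit(disp=0)
print(model.params.round(3))
print(model.pvalues.round(3))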
NASA Astrophysics Data System (ADS)
Yondo, Raul; Andrés, Esther; Valero, Eusebio
2018-01-01
Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models, and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restricted computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic the full-order models at different values of the design variables, recent work has introduced surrogate-based approaches into real-time and many-query analyses as rapid and cheaper-to-simulate models. In this paper, a comprehensive and state-of-the-art survey of common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions of applying these methods are described. Furthermore, the paper familiarizes readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field, together with discussions of advanced sampling methodologies, are presented to give a glance at the various efficient possibilities for sampling the parameter space a priori. Closing remarks address future perspectives, challenges and shortcomings associated with the use of surrogate models by industrial aircraft aerodynamicists, despite their increased interest among the research communities.
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.
Basavanhally, Ajay; Viswanath, Satish; Madabhushi, Anant
2015-01-01
Clinical trials increasingly employ medical imaging data in conjunction with supervised classifiers, where the latter require large amounts of training data to accurately model the system. Yet, a classifier selected at the start of the trial based on smaller and more accessible datasets may yield inaccurate and unstable classification performance. In this paper, we aim to address two common concerns in classifier selection for clinical trials: (1) predicting expected classifier performance for large datasets based on error rates calculated from smaller datasets and (2) the selection of appropriate classifiers based on expected performance for larger datasets. We present a framework for comparative evaluation of classifiers using only limited amounts of training data by using random repeated sampling (RRS) in conjunction with a cross-validation sampling strategy. Extrapolated error rates are subsequently validated via comparison with leave-one-out cross-validation performed on a larger dataset. The ability to predict error rates as dataset size increases is demonstrated on both synthetic data and three different computational imaging tasks: detecting cancerous image regions in prostate histopathology, differentiating high and low grade cancer in breast histopathology, and detecting cancerous metavoxels in prostate magnetic resonance spectroscopy. For each task, the relationships between 3 distinct classifiers (k-nearest neighbor, naive Bayes, Support Vector Machine) are explored. Further quantitative evaluation in terms of interquartile range (IQR) suggests that our approach consistently yields error rates with lower variability (mean IQRs of 0.0070, 0.0127, and 0.0140) than a traditional RRS approach (mean IQRs of 0.0297, 0.0779, and 0.305) that does not employ cross-validation sampling for all three datasets. PMID:25993029
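The resampling idea, repeated random subsampling combined with cross-validation to track how error behaves as training size grows, can be sketched as follows; the classifier, dataset, and subsample sizes are assumptions for illustration, and the paper's extrapolation model is not reproduced.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
rng = np.random.default_rng(0)

for n_train in (50, 100, 200, 400):
    errors = []
    for _ in range(20):                                # repeated random subsamples
        idx = rng.choice(len(y), size=n_train, replace=False)
        acc = cross_val_score(KNeighborsClassifier(), X[idx], y[idx], cv=5)
        errors.append(1.0 - acc.mean())                # cross-validated error on the subsample
    iqr = np.subtract(*np.percentile(errors, [75, 25]))
    print(f"n={n_train}: mean error {np.mean(errors):.3f} (IQR {iqr:.3f})")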
Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C
2007-01-01
More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS).
Evaluation strategies for isotope ratio measurements of single particles by LA-MC-ICPMS.
Kappel, S; Boulyga, S F; Dorta, L; Günther, D; Hattendorf, B; Koffler, D; Laaha, G; Leisch, F; Prohaska, T
2013-03-01
Data evaluation is a crucial step when it comes to the determination of accurate and precise isotope ratios computed from transient signals measured by multi-collector-inductively coupled plasma mass spectrometry (MC-ICPMS) coupled to, for example, laser ablation (LA). In the present study, the applicability of different data evaluation strategies (i.e. 'point-by-point', 'integration' and 'linear regression slope' method) for the computation of (235)U/(238)U isotope ratios measured in single particles by LA-MC-ICPMS was investigated. The analyzed uranium oxide particles (i.e. 9073-01-B, CRM U010 and NUSIMEP-7 test samples), having sizes down to the sub-micrometre range, are certified with respect to their (235)U/(238)U isotopic signature, which enabled evaluation of the applied strategies with respect to precision and accuracy. The different strategies were also compared with respect to their expanded uncertainties. Even though the 'point-by-point' method proved to be superior, the other methods are advantageous, as they take weighted signal intensities into account. For the first time, the use of a 'finite mixture model' is presented for the determination of an unknown number of different U isotopic compositions of single particles present on the same planchet. The model uses an algorithm that determines the number of isotopic signatures by attributing individual data points to computed clusters. The (235)U/(238)U isotope ratios are then determined by means of the slopes of linear regressions estimated for each cluster. The model was successfully applied for the accurate determination of different (235)U/(238)U isotope ratios of particles deposited on the NUSIMEP-7 test samples.
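A toy comparison of two of the evaluation strategies, a point-by-point ratio average versus a linear regression slope of the 235U signal against the 238U signal, is sketched below on simulated transient data; the intensities and noise level are assumptions, not LA-MC-ICPMS measurements.

import numpy as np

rng = np.random.default_rng(4)
true_ratio = 0.00725                         # natural-like 235U/238U ratio, assumed for illustration
i238 = np.abs(rng.normal(1e6, 3e5, 200))     # transient 238U intensities
i235 = true_ratio * i238 + rng.normal(0, 50, 200)   # noisy 235U channel

point_by_point = np.mean(i235 / i238)        # 'point-by-point' estimate
slope = np.polyfit(i238, i235, 1)[0]         # 'linear regression slope' estimate

print(f"point-by-point:   {point_by_point:.5f}")
print(f"regression slope: {slope:.5f}")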
Simple Example of Backtest Overfitting (SEBO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e. on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
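The overfitting effect can be reproduced in a few lines: tune a simple moving-average crossover strategy exhaustively on one random walk and then apply the "optimal" parameters to a second, independent random walk. The strategy family and parameter ranges below are assumptions that mirror the idea of the SEBO tool without being its implementation.

import numpy as np

def crossover_pnl(prices, fast, slow):
    """Cumulative return of a long/flat moving-average crossover strategy."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    n = min(len(fast_ma), len(slow_ma))
    position = (fast_ma[-n:] > slow_ma[-n:]).astype(float)[:-1]   # long when fast MA > slow MA
    returns = np.diff(prices[-n:])
    return float(np.sum(position * returns))

rng = np.random.default_rng(5)
in_sample = np.cumsum(rng.normal(size=2000))        # unpredictable random walk
out_sample = np.cumsum(rng.normal(size=2000))

best = max(((f, s) for f in range(2, 20) for s in range(21, 60)),
           key=lambda p: crossover_pnl(in_sample, *p))
print("best in-sample params:", best)
print("in-sample P&L:    ", round(crossover_pnl(in_sample, *best), 1))
print("out-of-sample P&L:", round(crossover_pnl(out_sample, *best), 1))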
2015-01-01
Many commonly used coarse-grained models for proteins are based on simplified interaction sites and consequently may suffer from significant limitations, such as the inability to properly model protein secondary structure without the addition of restraints. Recent work on a benzene fluid (Lettieri, S.; Zuckerman, D. M. J. Comput. Chem. 2012, 33, 268−275) suggested an alternative strategy of tabulating and smoothing fully atomistic orientation-dependent interactions among rigid molecules or fragments. Here we report our initial efforts to apply this approach to the polar and covalent interactions intrinsic to polypeptides. We divide proteins into nearly rigid fragments, construct distance and orientation-dependent tables of the atomistic interaction energies between those fragments, and apply potential energy smoothing techniques to those tables. The amount of smoothing can be adjusted to give coarse-grained models that range from the underlying atomistic force field all the way to a bead-like coarse-grained model. For a moderate amount of smoothing, the method is able to preserve about 70–90% of the α-helical structure while providing a factor of 3–10 improvement in sampling per unit computation time (depending on how sampling is measured). For a greater amount of smoothing, multiple folding–unfolding transitions of the peptide were observed, along with a factor of 10–100 improvement in sampling per unit computation time, although the time spent in the unfolded state was increased compared with less smoothed simulations. For a β hairpin, secondary structure is also preserved, albeit for a narrower range of the smoothing parameter and, consequently, for a more modest improvement in sampling. We have also applied the new method in a “resolution exchange” setting, in which each replica runs a Monte Carlo simulation with a different degree of smoothing. We obtain exchange rates that compare favorably to our previous efforts at resolution exchange (Lyman, E.; Zuckerman, D. M. J. Chem. Theory Comput. 2006, 2, 656−666). PMID:25400525
The Role of Corticostriatal Systems in Speech Category Learning
Yi, Han-Gyol; Maddox, W. Todd; Mumford, Jeanette A.; Chandrasekaran, Bharath
2016-01-01
One of the most difficult category learning problems for humans is learning nonnative speech categories. While feedback-based category training can enhance speech learning, the mechanisms underlying these benefits are unclear. In this functional magnetic resonance imaging study, we investigated neural and computational mechanisms underlying feedback-dependent speech category learning in adults. Positive feedback activated a large corticostriatal network including the dorsolateral prefrontal cortex, inferior parietal lobule, middle temporal gyrus, caudate, putamen, and the ventral striatum. Successful learning was contingent upon the activity of domain-general category learning systems: the fast-learning reflective system, involving the dorsolateral prefrontal cortex that develops and tests explicit rules based on the feedback content, and the slow-learning reflexive system, involving the putamen in which the stimuli are implicitly associated with category responses based on the reward value in feedback. Computational modeling of response strategies revealed significant use of reflective strategies early in training and greater use of reflexive strategies later in training. Reflexive strategy use was associated with increased activation in the putamen. Our results demonstrate a critical role for the reflexive corticostriatal learning system as a function of response strategy and proficiency during speech category learning. Keywords: category learning, fMRI, corticostriatal systems, speech, putamen PMID:25331600
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.
Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly-selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
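The sampled estimation described above can be illustrated with NetworkX, whose betweenness_centrality function accepts a number k of source vertices; the random sample drawn via k stands in for the seed-selection strategies under study, and the graph model, sizes, and top-10 comparison are assumptions for the example.

import networkx as nx

G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)    # scale-free test graph

exact = nx.betweenness_centrality(G)                  # O(|V||E|) for unweighted graphs
approx = nx.betweenness_centrality(G, k=100, seed=0)  # ~O(|S||E|) with |S| = 100 sampled sources

top_exact = sorted(exact, key=exact.get, reverse=True)[:10]
top_approx = sorted(approx, key=approx.get, reverse=True)[:10]
print("overlap in top-10 vertices:", len(set(top_exact) & set(top_approx)))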
Efficiency improvement of technological preparation of power equipment manufacturing
NASA Astrophysics Data System (ADS)
Milukov, I. A.; Rogalev, A. N.; Sokolov, V. P.; Shevchenko, I. V.
2017-11-01
Competitiveness of power equipment primarily depends on speeding up the development and mastering of new equipment samples and technologies and on enhancing the organisation and management of design, manufacturing and operation. Current political, technological and economic conditions create an acute need to change the strategy and tactics of process planning. At the same time, the issues of equipment maintenance, together with improvement of its efficiency and compatibility with domestically produced components, must be considered. In order to solve these problems, using computer-aided process planning systems for process design at all stages of the power equipment life cycle is economically viable. Computer-aided process planning is developed for the purpose of improving process planning by using mathematical methods and optimisation of design and management processes on the basis of CALS technologies, which allows for simultaneous process design, process planning organisation and management based on mathematical and physical modelling of interrelated design objects and the production system. An integration of computer-aided systems providing the interaction of informational and material processes at all stages of the product life cycle is proposed as an effective solution to the challenges in new equipment design and process planning.
Golden-ratio rotated stack-of-stars acquisition for improved volumetric MRI.
Zhou, Ziwu; Han, Fei; Yan, Lirong; Wang, Danny J J; Hu, Peng
2017-12-01
To develop and evaluate an improved stack-of-stars radial sampling strategy for reducing streaking artifacts. The conventional stack-of-stars sampling strategy collects the same radial angle for every partition (slice) encoding. In an undersampled acquisition, such an aligned acquisition generates coherent aliasing patterns and introduces strong streaking artifacts. We show that by rotating the radial spokes in a golden-angle manner along the partition-encoding direction, the aliasing pattern is modified, resulting in improved image quality for gridding and more advanced reconstruction methods. Computer simulations were performed and phantom as well as in vivo images for three different applications were acquired. Simulation, phantom, and in vivo experiments confirmed that the proposed method was able to generate images with less streaking artifact and sharper structures based on undersampled acquisitions in comparison with the conventional aligned approach at the same acceleration factors. By combining parallel imaging and compressed sensing in the reconstruction, streaking artifacts were mostly removed with improved delineation of fine structures using the proposed strategy. We present a simple method to reduce streaking artifacts and improve image quality in 3D stack-of-stars acquisitions by re-arranging the radial spoke angles in the 3D partition direction, which can be used for rapid volumetric imaging. Magn Reson Med 78:2290-2298, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
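A sketch of the proposed spoke-angle arrangement is given below: spokes are uniformly spaced within each partition, and the whole set is rotated from partition to partition by a golden-angle increment. The specific increment (the ~111.25 degree golden angle commonly used in radial MRI) and the spoke/partition counts are illustrative assumptions; the paper's exact rotation scheme may differ.

import numpy as np

n_spokes, n_partitions = 16, 8
golden_angle = np.deg2rad(180.0 / ((1 + np.sqrt(5)) / 2))   # ~111.25 degrees, assumed increment

base = np.arange(n_spokes) * np.pi / n_spokes               # uniform spokes over 180 degrees

aligned = np.tile(base, (n_partitions, 1))                                   # conventional scheme
rotated = (base + golden_angle * np.arange(n_partitions)[:, None]) % np.pi   # rotated scheme

print(np.degrees(aligned[:3, :4]).round(1))   # same angles in every partition
print(np.degrees(rotated[:3, :4]).round(1))   # angles shifted per partition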
Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin
2014-01-01
Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency while the data set is under-sampled and angularly limited for which high quality image reconstruction is challenging. In this work, an edge guided total variation minimization reconstruction (EGTVM) algorithm is developed in dealing with this problem. The proposed method is modeled on the combination of total variation (TV) regularization and iterative edge detection strategy. In the proposed method, the edge weights of intermediate reconstructions are incorporated into the TV objective function. The optimization is efficiently solved by applying alternating direction method of multipliers. A prudential and conservative edge detection strategy proposed in this paper can obtain the true edges while restricting the errors within an acceptable degree. Based on the comparison on both simulation studies and real CT data set reconstructions, EGTVM provides comparable or even better quality compared to the non-edge guided reconstruction and adaptive steepest descent-projection onto convex sets method. With the utilization of weighted alternating direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high quality image when applied in linear scan CT with under-sampled data set.
Gaussian process based intelligent sampling for measuring nano-structure surfaces
NASA Astrophysics Data System (ADS)
Sun, L. J.; Ren, M. J.; Yin, Y. H.
2016-09-01
Nanotechnology is the science and engineering of manipulating matter at the nanoscale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers and have very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as the mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected as the candidate position most likely to lie outside the required tolerance zone and is inserted to update the model iteratively. Simulations on both the nominal surface and the manufactured surface have been conducted on nano-structured surfaces to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency when measuring large-area structured surfaces.
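The adaptive selection loop can be sketched with scikit-learn's Gaussian process regressor on a one-dimensional test "surface"; here the next point is simply the candidate with the largest predictive standard deviation, a simplification of the paper's tolerance-zone criterion, and the surface, kernel, and iteration count are assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def surface(x):                              # stand-in for the measured surface geometry
    return np.sin(3 * x) + 0.3 * np.sin(17 * x)

candidates = np.linspace(0, 2, 400).reshape(-1, 1)
X = np.array([[0.0], [1.0], [2.0]])          # initial sparse measurements
y = surface(X).ravel()

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=0.2), alpha=1e-6)
for _ in range(20):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]      # measure where the model is most uncertain
    X = np.vstack([X, x_next])
    y = np.append(y, surface(x_next)[0])

gp.fit(X, y)
print("points sampled:", len(X),
      "| max predictive std:", float(gp.predict(candidates, return_std=True)[1].max()))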
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety together with about one-third of the convergence speed and the computational cost of the existing method. This paper provides a scientific and effective design approach for the design of metallic structures of cranes.
Large biases in regression-based constituent flux estimates: causes and diagnostic tools
Hirsch, Robert M.
2014-01-01
It has been documented in the literature that, in some cases, widely used regression-based models can produce severely biased estimates of long-term mean river fluxes of various constituents. These models, estimated using sample values of concentration, discharge, and date, are used to compute estimated fluxes for a multiyear period at a daily time step. This study compares results of the LOADEST seven-parameter model, LOADEST five-parameter model, and the Weighted Regressions on Time, Discharge, and Season (WRTDS) model using subsampling of six very large datasets to better understand this bias problem. This analysis considers sample datasets for dissolved nitrate and total phosphorus. The results show that LOADEST-7 and LOADEST-5, although they often produce very nearly unbiased results, can produce highly biased results. This study identifies three conditions that can give rise to these severe biases: (1) lack of fit of the log of concentration vs. log discharge relationship, (2) substantial differences in the shape of this relationship across seasons, and (3) severely heteroscedastic residuals. The WRTDS model is more resistant to the bias problem than the LOADEST models but is not immune to it. Understanding the causes of the bias problem is crucial to selecting an appropriate method for flux computations. Diagnostic tools for identifying the potential for bias problems are introduced, and strategies for resolving bias problems are described.
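The first of these conditions, lack of fit of the log concentration versus log discharge relationship, can be illustrated with a toy simulation: a curved true relationship is fitted with a straight line, and the fitted model is then used to estimate the long-term mean flux. The record length, sampling interval, and noise level are assumptions, and neither LOADEST nor WRTDS is reproduced.

import numpy as np

rng = np.random.default_rng(6)
log_q = rng.normal(2.0, 0.7, 3650)                             # ten years of daily log discharge
true_log_c = -1.0 + 0.2 * log_q + 0.4 * (log_q - 2.0) ** 2     # curved C-Q relationship (assumed)
q = np.exp(log_q)
true_flux = np.mean(np.exp(true_log_c) * q)                    # 'true' long-term mean flux

# Roughly monthly samples with measurement scatter, fitted with a straight line in log space
idx = np.arange(0, 3650, 30)
obs_log_c = true_log_c[idx] + rng.normal(0, 0.3, idx.size)
b, a = np.polyfit(log_q[idx], obs_log_c, 1)                    # misspecified linear fit
sigma2 = np.var(obs_log_c - (a + b * log_q[idx]))
est_flux = np.mean(np.exp(a + b * log_q + sigma2 / 2) * q)     # model-based daily flux estimate

print(f"true mean flux {true_flux:.1f}, regression estimate {est_flux:.1f}, "
      f"bias {100 * (est_flux / true_flux - 1):+.0f}%")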
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup
Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.
2010-01-01
Background Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
Influence of various water quality sampling strategies on load estimates for small streams
Robertson, Dale M.; Roerish, Eric D.
1999-01-01
Extensive streamflow and water quality data from eight small streams were systematically subsampled to represent various water‐quality sampling strategies. The subsampled data were then used to determine the accuracy and precision of annual load estimates generated by means of a regression approach (typically used for big rivers) and to determine the most effective sampling strategy for small streams. Estimation of annual loads by regression was imprecise regardless of the sampling strategy used; for the most effective strategy, median absolute errors were ∼30% based on the load estimated with an integration method and all available data, if a regression approach is used with daily average streamflow. The most effective sampling strategy depends on the length of the study. For 1‐year studies, fixed‐period monthly sampling supplemented by storm chasing was the most effective strategy. For studies of 2 or more years, fixed‐period semimonthly sampling resulted in not only the least biased but also the most precise loads. Additional high‐flow samples, typically collected to help define the relation between high streamflow and high loads, result in imprecise, overestimated annual loads if these samples are consistently collected early in high‐flow events.
On-Line Mathematics Assessment: The Impact of Mode on Performance and Question Answering Strategies
ERIC Educational Resources Information Center
Johnson, Martin; Green, Sylvia
2006-01-01
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question answering strategies. In this project 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Miller, J G; Wolf, F M
1996-01-01
Strategies for implementing instructional technology are based on recent experiences at the University of Michigan Medical Center. The issues covered include 1) addressing facilities, hardware, and staffing needs, 2) determining learners' skill requirements and appropriate training activities, and 3) selecting and customizing educational software. Many examples are provided, and nine key points for success are emphasized. PMID:8653447
ERIC Educational Resources Information Center
Haag, Brenda Bannan; Grabowski, Barbara L.
The purpose of this exploratory study was to examine the effectiveness of learner manipulation of visuals with and without organizing cues in computer-based instruction on adults' factual, conceptual, and problem-solving learning. An instructional unit involving the physiology and the anatomy of the heart was used. A post-test only control group…
ERIC Educational Resources Information Center
DeVane, Ben; Steward, Cody; Tran, Kelly M.
2016-01-01
This article reports on a project that used a game-creation tool to introduce middle-school students ages 10 to 13 to problem-solving strategies similar to those in computer science through the lens of studio-based design arts. Drawing on historic paradigms in design pedagogy and contemporary educational approaches in the digital arts to teach…
Competent Reasoning with Rational Numbers.
ERIC Educational Resources Information Center
Smith, John P. III
1995-01-01
Analyzed students' reasoning with fractions. Found that skilled students applied strategies specifically tailored to restricted classes of fractions and produced reliable solutions with a minimum of computation effort. Results suggest that competent reasoning depends on a knowledge base that includes numerically specific and invented strategies,…
Solving satisfiability problems using a novel microarray-based DNA computer.
Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning
2007-01-01
An algorithm based on a modified sticker model, together with an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering both correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause in each step, and eventually solve the entire Boolean formula step by step. No time-consuming sample preparation procedures or delicate sample-applying equipment are required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions during the computing process, so the proposed method should be useful in dealing with large-scale problems.
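A purely in-silico analogue of the "build solutions in parts" idea is sketched below: partial assignments are extended clause by clause, keeping only branches consistent with the clause just processed. The tiny CNF formula is an assumption for illustration; the DNA/microarray procedure itself is of course not reproduced in software.

# CNF formula over variables 1..3; a literal is (variable, is_positive)
clauses = [
    [(1, True), (2, False)],              # (x1 OR NOT x2)
    [(2, True), (3, True)],               # (x2 OR x3)
    [(1, False), (3, False)],             # (NOT x1 OR NOT x3)
]

partial = [dict()]                         # start from a single empty partial assignment
for clause in clauses:
    next_partial = []
    for assign in partial:
        for var, positive in clause:       # try to satisfy this clause with one literal
            if assign.get(var, positive) == positive:    # consistent with earlier choices
                extended = dict(assign)
                extended[var] = positive
                next_partial.append(extended)
    partial = next_partial                 # keep only branches that satisfy the clause

print(len(partial), "satisfying branches; example:", partial[0] if partial else None)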
de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M
2018-04-01
Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
Nesterets, Yakov I; Gureyev, Timur E; Mayo, Sheridan C; Stevenson, Andrew W; Thompson, Darren; Brown, Jeremy M C; Kitchen, Marcus J; Pavlov, Konstantin M; Lockie, Darren; Brun, Francesco; Tromba, Giuliana
2015-11-01
Results are presented of a recent experiment at the Imaging and Medical beamline of the Australian Synchrotron intended to contribute to the implementation of low-dose high-sensitivity three-dimensional mammographic phase-contrast imaging, initially at synchrotrons and subsequently in hospitals and medical imaging clinics. The effect of such imaging parameters as X-ray energy, source size, detector resolution, sample-to-detector distance, scanning and data processing strategies in the case of propagation-based phase-contrast computed tomography (CT) have been tested, quantified, evaluated and optimized using a plastic phantom simulating relevant breast-tissue characteristics. Analysis of the data collected using a Hamamatsu CMOS Flat Panel Sensor, with a pixel size of 100 µm, revealed the presence of propagation-based phase contrast and demonstrated significant improvement of the quality of phase-contrast CT imaging compared with conventional (absorption-based) CT, at medically acceptable radiation doses.
ERIC Educational Resources Information Center
Larbi-Apau, Josephine A.; Moseley, James L.
2012-01-01
This study examined the validity of Selwyn's computer attitude scale (CAS) and its implications for the technology-based performance of randomly sampled (n = 167) multidisciplinary teaching faculty in higher education in Ghana. Computer attitude is considered a critical factor in potential technology-based performance. Composed of four…
Holmes, Thomas D; Guilmette, Raymond A; Cheng, Yung Sung; Parkhurst, Mary Ann; Hoover, Mark D
2009-03-01
The Capstone Depleted Uranium (DU) Aerosol Study was undertaken to obtain aerosol samples resulting from a large-caliber DU penetrator striking an Abrams or Bradley test vehicle. The sampling strategy was designed to (1) optimize the performance of the samplers and maintain their integrity in the extreme environment created during perforation of an armored vehicle by a DU penetrator, (2) collect aerosols as a function of time post perforation, and (3) obtain size-classified samples for analysis of chemical composition, particle morphology, and solubility in lung fluid. This paper describes the experimental setup and sampling methodologies used to achieve these objectives. Custom-designed arrays of sampling heads were secured to the inside of the target in locations approximating the breathing zones of the crew locations in the test vehicles. Each array was designed to support nine filter cassettes and nine cascade impactors mounted with quick-disconnect fittings. Shielding and sampler placement strategies were used to minimize sampler loss caused by the penetrator impact and the resulting fragments of eroded penetrator and perforated armor. A cyclone train was used to collect larger quantities of DU aerosol for measurement of chemical composition and solubility. A moving filter sample was used to obtain semicontinuous samples for DU concentration determination. Control for the air samplers was provided by five remotely located valve control and pressure monitoring units located inside and around the test vehicle. These units were connected to a computer interface chassis and controlled using a customized LabVIEW engineering computer control program. The aerosol sampling arrays and control systems for the Capstone study provided the needed aerosol samples for physicochemical analysis, and the resultant data were used for risk assessment of exposure to DU aerosol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmes, Thomas D.; Guilmette, Raymond A.; Cheng, Yung-Sung
2009-03-01
The Capstone Depleted Uranium Aerosol Study was undertaken to obtain aerosol samples resulting from a kinetic-energy cartridge with a large-caliber depleted uranium (DU) penetrator striking an Abrams or Bradley test vehicle. The sampling strategy was designed to (1) optimize the performance of the samplers and maintain their integrity in the extreme environment created during perforation of an armored vehicle by a DU penetrator, (2) collect aerosols as a function of time post-impact, and (3) obtain size-classified samples for analysis of chemical composition, particle morphology, and solubility in lung fluid. This paper describes the experimental setup and sampling methodologies used to achieve these objectives. Custom-designed arrays of sampling heads were secured to the inside of the target in locations approximating the breathing zones of the vehicle commander, loader, gunner, and driver. Each array was designed to support nine filter cassettes and nine cascade impactors mounted with quick-disconnect fittings. Shielding and sampler placement strategies were used to minimize sampler loss caused by the penetrator impact and the resulting fragments of eroded penetrator and perforated armor. A cyclone train was used to collect larger quantities of DU aerosol for chemical composition and solubility. A moving filter sample was used to obtain semicontinuous samples for depleted uranium concentration determination. Control for the air samplers was provided by five remotely located valve control and pressure monitoring units located inside and around the test vehicle. These units were connected to a computer interface chassis and controlled using a customized LabVIEW engineering computer control program. The aerosol sampling arrays and control systems for the Capstone study provided the needed aerosol samples for physicochemical analysis, and the resultant data were used for risk assessment of exposure to DU aerosol.
ERIC Educational Resources Information Center
Kern, Richard
1985-01-01
A computer-based interactive system for diagnosing academic and school behavior problems is described. Elements include criterion-referenced testing, an instructional management system, and a behavior evaluation tool developed by the author. (JW)
Computational prediction of formulation strategies for beyond-rule-of-5 compounds.
Bergström, Christel A S; Charman, William N; Porter, Christopher J H
2016-06-01
The physicochemical properties of some contemporary drug candidates are moving towards higher molecular weight, and coincidentally also higher lipophilicity in the quest for biological selectivity and specificity. These physicochemical properties move the compounds towards beyond rule-of-5 (B-r-o-5) chemical space and often result in lower water solubility. For such B-r-o-5 compounds non-traditional delivery strategies (i.e. those other than conventional tablet and capsule formulations) typically are required to achieve adequate exposure after oral administration. In this review, we present the current status of computational tools for prediction of intestinal drug absorption, models for prediction of the most suitable formulation strategies for B-r-o-5 compounds and models to obtain an enhanced understanding of the interplay between drug, formulation and physiological environment. In silico models are able to identify the likely molecular basis for low solubility in physiologically relevant fluids such as gastric and intestinal fluids. With this baseline information, a formulation scientist can, at an early stage, evaluate different orally administered, enabling formulation strategies. Recent computational models have emerged that predict glass-forming ability and crystallisation tendency and therefore the potential utility of amorphous solid dispersion formulations. Further, computational models of loading capacity in lipids, and therefore the potential for formulation as a lipid-based formulation, are now available. Whilst such tools are useful for rapid identification of suitable formulation strategies, they do not reveal drug localisation and molecular interaction patterns between drug and excipients. For the latter, Molecular Dynamics simulations provide an insight into the interplay between drug, formulation and intestinal fluid. These different computational approaches are reviewed. Additionally, we analyse the molecular requirements of different targets, since these can provide an early signal that enabling formulation strategies will be required. Based on the analysis we conclude that computational biopharmaceutical profiling can be used to identify where non-conventional gateways, such as prediction of 'formulate-ability' during lead optimisation and early development stages, are important and may ultimately increase the number of orally tractable contemporary targets. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Wojtas-Niziurski, Wojciech; Meng, Yilin; Roux, Benoit; Bernèche, Simon
2013-01-01
The potential of mean force describing conformational changes of biomolecules is a central quantity that determines the function of biomolecular systems. Calculating an energy landscape of a process that depends on three or more reaction coordinates might require a lot of computational power, making some multidimensional calculations practically impossible. Here, we present an efficient automatized umbrella sampling strategy for calculating multidimensional potentials of mean force. The method progressively learns by itself, through a feedback mechanism, which regions of a multidimensional space are worth exploring and automatically generates a set of umbrella sampling windows that is adapted to the system. The self-learning adaptive umbrella sampling method is first explained with illustrative examples based on simplified reduced model systems, and then applied to two non-trivial situations: the conformational equilibrium of the pentapeptide Met-enkephalin in solution and ion permeation in the KcsA potassium channel. With this method, it is demonstrated that a significantly smaller number of umbrella windows needs to be employed to characterize the free energy landscape over the most relevant regions without any loss in accuracy. PMID:23814508
Paper-based tuberculosis diagnostic devices with colorimetric gold nanoparticles
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Ting; Shen, Shu-Wei; Cheng, Chao-Min; Chen, Chien-Fu
2013-08-01
A colorimetric sensing strategy employing gold nanoparticles and a paper assay platform has been developed for tuberculosis diagnosis. Unmodified gold nanoparticles and single-stranded detection oligonucleotides are used to achieve rapid diagnosis without complicated and time-consuming thiolated or other surface-modified probe preparation processes. To eliminate the use of sophisticated equipment for data analysis, the color variance for multiple detection results was simultaneously collected and concentrated on cellulose paper with the data readout transmitted for cloud computing via a smartphone. The results show that the 2.6 nM tuberculosis mycobacterium target sequences extracted from patients can easily be detected, and the turnaround time after the human DNA is extracted from clinical samples was approximately 1 h.
Aranda-Escolástico, Ernesto; Guinaldo, María; Gordillo, Francisco; Dormido, Sebastián
2016-11-01
In this paper, periodic event-triggered controllers are proposed for the rotary inverted pendulum. The control strategy is divided into two steps: swing-up and stabilization. In both cases, the system is sampled periodically, but the control actions are only computed at certain instants of time (based on events), which are a subset of the sampling times. For the stabilization control, asymptotic stability is guaranteed by applying the Lyapunov-Razumikhin theorem for systems with delays. This result is applicable to general linear systems and not only to the inverted pendulum. For the swing-up control, a trigger function is derived from the derivative of the Lyapunov function associated with the swing-up control law. Experimental results show a significant improvement over periodic control in terms of the number of control actions. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Gold rush - A swarm dynamics in games
NASA Astrophysics Data System (ADS)
Zelinka, Ivan; Bukacek, Michal
2017-07-01
This paper focuses on swarm intelligence techniques and their practical use in computer games. The aim is to show how swarm dynamics can be generated by a multiplayer game and then recorded, analyzed and eventually controlled. We also discuss the possibility of using swarm intelligence in place of human players. Based on our previous experiments, two games that use swarm algorithms are mentioned briefly here: the strategy game StarCraft: Brood War, and TicTacToe, in which the SOMA algorithm has also taken the role of a player against a human opponent. The open research reported here has shown the potential benefit of swarm computation in strategy games and of player strategies based on recording and analyzing swarm behavior. We propose a new game, called Gold Rush, as an experimental environment for human or artificial swarm behavior and its subsequent analysis.
Besmer, Michael D.; Hammes, Frederik; Sigrist, Jürg A.; Ort, Christoph
2017-01-01
Monitoring of microbial drinking water quality is a key component for ensuring safety and understanding risk, but conventional monitoring strategies are typically based on low sampling frequencies (e.g., quarterly or monthly). This is of concern because many drinking water sources, such as karstic springs, are often subject to changes in bacterial concentrations on much shorter time scales (e.g., hours to days), for example after precipitation events. Microbial contamination events are crucial from a risk assessment perspective and should therefore be targeted by monitoring strategies to establish both the frequency of their occurrence and the magnitude of bacterial peak concentrations. In this study we used monitoring data from two specific karstic springs. We assessed the performance of conventional monitoring based on historical records and tested a number of alternative strategies based on a high-resolution data set of bacterial concentrations in spring water collected with online flow cytometry (FCM). We quantified the effect of increasing sampling frequency and found that for the specific case studied, at least bi-weekly sampling would be needed to detect precipitation events with a probability of >90%. We then proposed an optimized monitoring strategy with three targeted samples per event, triggered by precipitation measurements. This approach is more effective and efficient than simply increasing overall sampling frequency. It would enable the water utility to (1) analyze any relevant event and (2) limit median underestimation of peak concentrations to approximately 10%. We conclude with a generalized perspective on sampling optimization and argue that the assessment of short-term dynamics causing microbial peak loads initially requires increased sampling/analysis efforts, but can be optimized subsequently to account for limited resources. This offers water utilities and public health authorities systematic ways to evaluate and optimize their current monitoring strategies. PMID:29213255
Besmer, Michael D; Hammes, Frederik; Sigrist, Jürg A; Ort, Christoph
2017-01-01
Monitoring of microbial drinking water quality is a key component for ensuring safety and understanding risk, but conventional monitoring strategies are typically based on low sampling frequencies (e.g., quarterly or monthly). This is of concern because many drinking water sources, such as karstic springs, are often subject to changes in bacterial concentrations on much shorter time scales (e.g., hours to days), for example after precipitation events. Microbial contamination events are crucial from a risk assessment perspective and should therefore be targeted by monitoring strategies to establish both the frequency of their occurrence and the magnitude of bacterial peak concentrations. In this study we used monitoring data from two specific karstic springs. We assessed the performance of conventional monitoring based on historical records and tested a number of alternative strategies based on a high-resolution data set of bacterial concentrations in spring water collected with online flow cytometry (FCM). We quantified the effect of increasing sampling frequency and found that for the specific case studied, at least bi-weekly sampling would be needed to detect precipitation events with a probability of >90%. We then proposed an optimized monitoring strategy with three targeted samples per event, triggered by precipitation measurements. This approach is more effective and efficient than simply increasing overall sampling frequency. It would enable the water utility to (1) analyze any relevant event and (2) limit median underestimation of peak concentrations to approximately 10%. We conclude with a generalized perspective on sampling optimization and argue that the assessment of short-term dynamics causing microbial peak loads initially requires increased sampling/analysis efforts, but can be optimized subsequently to account for limited resources. This offers water utilities and public health authorities systematic ways to evaluate and optimize their current monitoring strategies.
Jethwa, Pinakin R; Punia, Vineet; Patel, Tapan D; Duffis, E Jesus; Gandhi, Chirag D; Prestigiacomo, Charles J
2013-04-01
Recent studies have documented the high sensitivity of computed tomography angiography (CTA) in detecting a ruptured aneurysm in the presence of acute subarachnoid hemorrhage (SAH). The practice of digital subtraction angiography (DSA) when CTA does not reveal an aneurysm has thus been called into question. We examined this dilemma from a cost-effectiveness perspective by using current decision analysis techniques. A decision tree was created with the use of TreeAge Pro Suite 2012; in one arm, a CTA-negative SAH was followed up with DSA; in the other arm, patients were observed without further imaging. Based on a literature review, costs and utilities were assigned to each potential outcome. Base-case and sensitivity analyses were performed to determine the cost-effectiveness of each strategy. A Monte Carlo simulation was then conducted by sampling each variable over a plausible distribution to evaluate the robustness of the model. With the use of a negative predictive value of 95.7% for CTA, observation was found to be the most cost-effective strategy ($6737/Quality Adjusted Life Year [QALY] vs $8460/QALY) in the base-case analysis. One-way sensitivity analysis demonstrated that DSA became the more cost-effective option if the negative predictive value of CTA fell below 93.72%. The Monte Carlo simulation produced an incremental cost-effectiveness ratio of $83,083/QALY. At the conventional willingness-to-pay threshold of $50,000/QALY, observation was the more cost-effective strategy in 83.6% of simulations. The decision to perform a DSA in CTA-negative SAH depends strongly on the sensitivity of CTA, and therefore must be evaluated at each center treating these types of patients. Given the high sensitivity of CTA reported in the current literature, performing DSA on all patients with CTA-negative SAH may not be cost-effective at every institution.
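The probabilistic sensitivity analysis described above can be approximated with a short Monte Carlo sketch: each iteration samples the negative predictive value of CTA (and other inputs) from plausible distributions, computes the expected cost and QALYs for the two arms, and tallies how often each arm is preferred at a willingness-to-pay threshold. All numerical parameters below are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
WTP = 50_000          # willingness-to-pay threshold ($/QALY), as in the abstract
N = 10_000            # Monte Carlo iterations

# Hypothetical distributions/parameters for illustration only.
npv_cta   = rng.beta(96, 4, N)             # negative predictive value of CTA
cost_dsa  = rng.normal(2000, 300, N)       # cost of a DSA procedure
cost_miss = rng.normal(60_000, 10_000, N)  # downstream cost of a missed aneurysm
qaly_loss_miss = 2.0                       # QALYs lost if an aneurysm is missed
baseline_qaly = 20.0                       # hypothetical remaining QALYs

p_miss = 1.0 - npv_cta                     # probability CTA missed an aneurysm

# Arm 1: observation only -> missed aneurysms go untreated.
cost_obs = p_miss * cost_miss
qaly_obs = baseline_qaly - p_miss * qaly_loss_miss

# Arm 2: follow-up DSA for everyone -> pay for DSA, but catch the misses.
cost_followup = cost_dsa
qaly_followup = np.full(N, baseline_qaly)

net_benefit_obs = WTP * qaly_obs - cost_obs
net_benefit_dsa = WTP * qaly_followup - cost_followup
frac_obs_preferred = np.mean(net_benefit_obs > net_benefit_dsa)
print(f"Observation preferred in {frac_obs_preferred:.1%} of simulations")
```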
A Perspective on Computational Human Performance Models as Design Tools
NASA Technical Reports Server (NTRS)
Jones, Patricia M.
2010-01-01
The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.
Update and review of accuracy assessment techniques for remotely sensed data
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Heinen, J. T.; Oderwald, R. G.
1983-01-01
Research performed in the accuracy assessment of remotely sensed data is updated and reviewed. The use of discrete multivariate analysis techniques for the assessment of error matrices, the use of computer simulation for assessing various sampling strategies, and an investigation of spatial autocorrelation techniques are examined.
Consumer Math 4, Mathematics: 5285.24.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
The last of four guidebooks for the General Math student covers installment purchases and small loans, investments, insurance, and cost of housing. Goals and strategies for the course are given; performance objectives for computational skills and for each unit are specified. A course outline, teaching suggestions for each unit, and sample pretests…
Efficient Strategies for Predictive Cell-Level Control of Lithium-Ion Batteries
NASA Astrophysics Data System (ADS)
Xavier, Marcelo A.
This dissertation introduces a set of state-space-based model predictive control (MPC) algorithms tailored to models with a non-zero feedthrough term, which accounts for the ohmic resistance inherent to battery dynamics. MPC is herein applied to the problem of regulating cell-level measures of performance for lithium-ion batteries; the control methodologies are used first to compute a fast-charging profile that respects input, output, and state constraints, i.e., input current, terminal voltage, and state of charge for an equivalent circuit model of the battery cell, and extended later to a linearized physics-based reduced-order model. The novelty of this work can be summarized as follows: (1) the MPC variants are applied to a physics-based reduced-order model in order to make use of the available set of internal electrochemical variables and to mitigate internal mechanisms of cell degradation (e.g., lithium plating); (2) we developed a dual-mode MPC closed-loop paradigm that suits the battery control problem, with the objective of reducing computational effort by solving simpler optimization routines while guaranteeing stability; and finally (3) we developed a completely new use of predictive control in which MPC is employed as a "smart sensor" for power estimation. Results are presented that show the comparative performance of the MPC algorithms for both the equivalent circuit model and the physics-based reduced-order model. These results highlight that dual-mode MPC can deliver optimal input current profiles by using a shorter horizon while still guaranteeing stability. Additionally, rigorous mathematical derivations are presented for the MPC algorithms. The use of MPC as a "smart sensor" presents itself as an appealing method for power estimation, since MPC permits a fully dynamic input profile that is able to achieve performance right at the proper constraint boundaries. Therefore, MPC is expected to produce accurate power limits for each computed sample time when compared to the Bisection method [1], which assumes constant input values over the prediction interval.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-09-21
To exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but differences in the computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose the Dynamic Finite Fault Tolerance (DFFT). Based on the DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids situations in which model training is disturbed by tasks unrelated to the sensors.
van Maanen, Leendert; de Jong, Ritske; van Rijn, Hedderik
2014-01-01
When multiple strategies can be used to solve a type of problem, the observed response time distributions are often mixtures of multiple underlying base distributions, each representing one of these strategies. For the case of two possible strategies, the observed response time distributions obey the fixed-point property. That is, there exists one reaction time that has the same probability of being observed irrespective of the actual mixture proportion of each strategy. In this paper we discuss how to compute this fixed point, and how to statistically assess the probability that the observed response times are indeed generated by two competing strategies. Accompanying this paper is a free R package that can be used to compute and test the presence or absence of the fixed-point property in response time data, allowing for easy-to-use tests of strategic behavior. PMID:25170893
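The fixed-point property follows directly from writing the observed distribution as a binary mixture; the brief derivation below is standard mixture algebra rather than a quotation from the paper, but it makes the claim concrete.

```latex
% Binary mixture of two strategy-specific response time densities f_1 and f_2
% with mixture proportion p \in [0,1]:
\[
  f_p(t) \;=\; p\,f_1(t) + (1-p)\,f_2(t) \;=\; f_2(t) + p\,\bigl[f_1(t) - f_2(t)\bigr].
\]
% The density is independent of p exactly where the bracketed term vanishes:
\[
  \frac{\partial f_p(t)}{\partial p} \;=\; f_1(t) - f_2(t) \;=\; 0
  \quad\Longleftrightarrow\quad f_1(t^\ast) = f_2(t^\ast),
\]
% so every mixture, whatever its proportion p, passes through the common point
% (t^\ast, f_1(t^\ast)) -- the fixed point that the accompanying R package tests for.
```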
Pusic, Martin V.; LeBlanc, Vicki; Patel, Vimla L.
2001-01-01
Traditional task analysis for instructional design has emphasized the importance of precisely defining behavioral educational objectives and working back to select objective-appropriate instructional strategies. However, this approach may miss effective strategies. Cognitive task analysis, on the other hand, breaks a process down into its component knowledge representations. Selection of instructional strategies based on all such representations in a domain is likely to lead to optimal instructional design. In this demonstration, using the interpretation of cervical spine x-rays as an educational example, we show how a detailed cognitive task analysis can guide the development of computer-aided instruction.
A computer simulation approach to measurement of human control strategy
NASA Technical Reports Server (NTRS)
Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III
1982-01-01
Human control strategy is measured through use of a psychologically-based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for a stick controlling the cursor to be moved along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.
Identification of missing variants by combining multiple analytic pipelines.
Ren, Yingxue; Reddy, Joseph S; Pottier, Cyril; Sarangi, Vivekananda; Tian, Shulan; Sinnwell, Jason P; McDonnell, Shannon K; Biernacka, Joanna M; Carrasquillo, Minerva M; Ross, Owen A; Ertekin-Taner, Nilüfer; Rademakers, Rosa; Hudson, Matthew; Mainzer, Liudmila Sergeevna; Asmann, Yan W
2018-04-16
After decades of identifying risk factors using array-based genome-wide association studies (GWAS), genetic research of complex diseases has shifted to sequencing-based rare variant discovery. This requires large sample sizes for statistical power and has brought up questions about whether the current variant calling practices are adequate for large cohorts. It is well known that there are discrepancies between variants called by different pipelines, and that using a single pipeline always misses true variants exclusively identifiable by other pipelines. Nonetheless, it is common practice today to call variants with one pipeline due to computational cost and to assume that false negative calls are a small percentage of the total. We analyzed 10,000 exomes from the Alzheimer's Disease Sequencing Project (ADSP) using multiple analytic pipelines consisting of different read aligners and variant calling strategies. We compared variants identified by using two aligners in 50, 100, 200, 500, 1000, and 1952 samples; and compared variants identified by adding single-sample genotyping to the default multi-sample joint genotyping in 50, 100, 500, 2000, 5000 and 10,000 samples. We found that using a single pipeline missed increasing numbers of high-quality variants correlated with sample sizes. By combining two read aligners and two variant calling strategies, we rescued 30% of pass-QC variants at a sample size of 2000, and 56% at 10,000 samples. The rescued variants had higher proportions of low-frequency (minor allele frequency [MAF] 1-5%) and rare (MAF < 1%) variants, which are the very type of variants of interest. In 660 Alzheimer's disease cases with earlier onset ages of ≤65, 4 out of 13 (31%) previously published rare pathogenic and protective mutations in the APP, PSEN1, and PSEN2 genes were undetected by the default one-pipeline approach but recovered by the multi-pipeline approach. Identification of the complete variant set from sequencing data is the prerequisite of genetic association analyses. The current analytic practice of calling genetic variants from sequencing data using a single bioinformatics pipeline is no longer adequate for increasingly large projects. The number and percentage of quality variants that pass quality filters but are missed by the one-pipeline approach rapidly increases with sample size.
Effects of training on short- and long-term skill retention in a complex multiple-task environment.
Sauer, J; Hockey, G R; Wastell, D G
2000-12-01
The paper reports the results of an experiment on the performance and retention of a complex task. This was a computer-based simulation of the essential elements of a spacecraft's life support system. It allowed the authors to take a range of measures, including primary and secondary task performance, system intervention and information sampling strategies, mental model structure, and subjective operator state. The study compared the effectiveness of two methods of training, based on low level (procedure-based) and high level (system-based) understanding. Twenty-five participants were trained extensively on the task, then given a 1-h testing session. A second testing session was carried out 8 months after the first (with no intervening practice) with 17 of the original participants. While training had little effect on control performance, there were considerable effects on system management strategies, as well as in structure of operator's mental model. In the second testing session, the anticipated general performance decrement did not occur, though for complex faults there was an increase in selectivity towards the primary control task. The relevance of the findings for training and skill retention in real work environments is discussed in the context of a model of compensatory control.
Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards.
Bornstein, Marc H; Jager, Justin; Putnick, Diane L
2013-12-01
Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study's target population, whether they yield representative and generalizable estimates of subsamples within a study's target population, the recruitment efforts and costs they entail, whether they yield sufficient power to detect subsample differences, and whether they introduce "noise" related to variation in subsamples and whether that "noise" can be accounted for statistically. We use sample composition of gender, ethnicity, and socioeconomic status to illustrate and assess the four sampling strategies. Finally, we tally the use of the four sampling strategies in five prominent developmental science journals and make recommendations about best practices for sample selection and reporting.
Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards
Bornstein, Marc H.; Jager, Justin; Putnick, Diane L.
2014-01-01
Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study’s target population, whether they yield representative and generalizable estimates of subsamples within a study’s target population, the recruitment efforts and costs they entail, whether they yield sufficient power to detect subsample differences, and whether they introduce “noise” related to variation in subsamples and whether that “noise” can be accounted for statistically. We use sample composition of gender, ethnicity, and socioeconomic status to illustrate and assess the four sampling strategies. Finally, we tally the use of the four sampling strategies in five prominent developmental science journals and make recommendations about best practices for sample selection and reporting. PMID:25580049
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-01
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, making simulation both a data-intensive and a computing-intensive problem. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed to handle the irregular parallel accumulation of raw data simulation, an operation that greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration. PMID:28075343
Moore, Jason H; Gilbert, Joshua C; Tsai, Chia-Ti; Chiang, Fu-Tien; Holden, Todd; Barney, Nate; White, Bill C
2006-07-21
Detecting, characterizing, and interpreting gene-gene interactions or epistasis in studies of human disease susceptibility is both a mathematical and a computational challenge. To address this problem, we have previously developed a multifactor dimensionality reduction (MDR) method for collapsing high-dimensional genetic data into a single dimension (i.e. constructive induction) thus permitting interactions to be detected in relatively small sample sizes. In this paper, we describe a comprehensive and flexible framework for detecting and interpreting gene-gene interactions that utilizes advances in information theory for selecting interesting single-nucleotide polymorphisms (SNPs), MDR for constructive induction, machine learning methods for classification, and finally graphical models for interpretation. We illustrate the usefulness of this strategy using artificial datasets simulated from several different two-locus and three-locus epistasis models. We show that the accuracy, sensitivity, specificity, and precision of a naïve Bayes classifier are significantly improved when SNPs are selected based on their information gain (i.e. class entropy removed) and reduced to a single attribute using MDR. We then apply this strategy to detecting, characterizing, and interpreting epistatic models in a genetic study (n = 500) of atrial fibrillation and show that both classification and model interpretation are significantly improved.
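The SNP-filtering step described above, ranking single-nucleotide polymorphisms by information gain (class entropy removed), can be sketched in a few lines; the genotype coding (0/1/2) and the toy data are illustrative assumptions, and the actual MDR constructive-induction step is not shown.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(snp, status):
    """Class entropy removed by conditioning on one SNP's genotypes (0/1/2)."""
    h_class = entropy(status)
    h_cond = 0.0
    for g in np.unique(snp):
        mask = snp == g
        h_cond += mask.mean() * entropy(status[mask])
    return h_class - h_cond

# Toy example: 500 subjects, 10 SNPs coded 0/1/2, binary case/control status.
rng = np.random.default_rng(1)
genotypes = rng.integers(0, 3, size=(500, 10))
status = rng.integers(0, 2, size=500)

gains = [information_gain(genotypes[:, j], status) for j in range(genotypes.shape[1])]
ranked = np.argsort(gains)[::-1]
print("SNPs ranked by information gain:", ranked)
```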
Owens, Otis L; Friedman, Daniela B; Brandt, Heather M; Bernhardt, Jay M; Hébert, James R
2016-05-01
African American (AA) men are significantly more likely to die of prostate cancer (PrCA) than other racial groups, and there is a critical need to identify strategies for providing information about PrCA screening and the importance of informed decision making (IDM). To assess whether a computer-based IDM intervention for PrCA screening would be appropriate for AA men, this formative evaluation study examined their (1) PrCA risk and screening knowledge; (2) decision-making processes for PrCA screening; (3) usage of, attitudes toward, and access to interactive communication technologies (ICTs); and (4) perceptions regarding a future, novel, computer-based PrCA education intervention. A purposive convenience sample of 39 AA men aged 37 to 66 years in the Southeastern United States was recruited through faith-based organizations to participate in one of six 90-minute focus groups and complete a 45-item descriptive survey. Participants were generally knowledgeable about PrCA. However, few engaged in IDM with their doctor and few were informed about the associated risks and uncertainties of PrCA screening. Most participants used ICTs on a daily basis for various purposes including health information seeking. Most participants were open to a novel, computer-based intervention if the system was easy to use and its animated avatars were culturally appropriate. Because study participants had low exposure to IDM for PrCA, but frequently used ICTs, IDM interventions using ICTs (e.g., computers) hold promise for AA men and should be explored for feasibility and effectiveness. These interventions should aim to increase PrCA screening knowledge and stress the importance of participating in IDM with doctors. © The Author(s) 2015.
The Graphical User Interface: Crisis, Danger, and Opportunity.
ERIC Educational Resources Information Center
Boyd, L. H.; And Others
1990-01-01
This article describes differences between the graphical user interface and traditional character-based interface systems, identifies potential problems posed by graphic computing environments for blind computer users, and describes some programs and strategies that are being developed to provide access to those environments. (Author/JDD)
Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I
2004-10-01
To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity. Meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.
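The sample-weighted mean effect size reported throughout the meta-analysis is the standard weighted average of the per-sample correlations; the compact statement below is the textbook form rather than a quotation from the paper, and the reading of "fully corrected" as an artifact adjustment (e.g., for measurement unreliability) is an assumption about the authors' usage.

```latex
% Sample-weighted mean effect size over k independent samples, where r_i is the
% Pearson correlation and n_i the sample size of sample i:
\[
  \bar{r} \;=\; \frac{\sum_{i=1}^{k} n_i\, r_i}{\sum_{i=1}^{k} n_i},
\]
% e.g. \bar{r} \approx 0.066 for TV viewing vs. body fatness in the abstract above
% (k = 52 samples, \sum_i n_i = 44{,}707).
% "Fully corrected" values additionally adjust each r_i for statistical artifacts
% before averaging.
```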
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
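A minimal sketch of adaptive importance sampling for a failure probability is given below: samples are drawn from an importance density that is recentred, at each iteration, on the failure points found so far, and the estimate is the usual likelihood-ratio-weighted indicator mean. The limit-state function, densities, and iteration counts are illustrative assumptions, not the AIS formulation of the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def g(x):
    # Illustrative limit-state function: failure when g(x) < 0
    # (true failure probability here is roughly 2e-4 for standard normal inputs).
    return 5.0 - x[:, 0] - x[:, 1]

dim, n_per_iter, n_iter = 2, 2000, 4
f = multivariate_normal(mean=np.zeros(dim))      # true input density (standard normal)
center = np.zeros(dim)                           # initial sampling center
rng = np.random.default_rng(2)

for it in range(n_iter):
    h = multivariate_normal(mean=center)         # current importance density
    x = rng.multivariate_normal(center, np.eye(dim), size=n_per_iter)
    fail = g(x) < 0.0
    weights = f.pdf(x) / h.pdf(x)                # likelihood ratios
    p_f = np.mean(fail * weights)                # importance-sampling estimate
    if fail.any():
        center = x[fail].mean(axis=0)            # adapt: recenter on the failure region
    print(f"iteration {it}: p_f ≈ {p_f:.3e}")
```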
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
Strategies to address participant misrepresentation for eligibility in Web-based research.
Kramer, Jessica; Rubin, Amy; Coster, Wendy; Helmuth, Eric; Hermos, John; Rosenbloom, David; Moed, Rich; Dooley, Meghan; Kao, Ying-Chia; Liljenquist, Kendra; Brief, Deborah; Enggasser, Justin; Keane, Terence; Roy, Monica; Lachowicz, Mark
2014-03-01
Emerging methodological research suggests that the World Wide Web ("Web") is an appropriate venue for survey data collection, and a promising area for delivering behavioral intervention. However, the use of the Web for research raises concerns regarding sample validity, particularly when the Web is used for recruitment and enrollment. The purpose of this paper is to describe the challenges experienced in two different Web-based studies in which participant misrepresentation threatened sample validity: a survey study and an online intervention study. The lessons learned from these experiences generated three types of strategies researchers can use to reduce the likelihood of participant misrepresentation for eligibility in Web-based research. Examples of procedural/design strategies, technical/software strategies and data analytic strategies are provided along with the methodological strengths and limitations of specific strategies. The discussion includes a series of considerations to guide researchers in the selection of strategies that may be most appropriate given the aims, resources and target population of their studies. Copyright © 2014 John Wiley & Sons, Ltd.
A novel 3D Cartesian random sampling strategy for Compressive Sensing Magnetic Resonance Imaging.
Valvano, Giuseppe; Martini, Nicola; Santarelli, Maria Filomena; Chiappino, Dante; Landini, Luigi
2015-01-01
In this work we propose a novel acquisition strategy for accelerated 3D Compressive Sensing Magnetic Resonance Imaging (CS-MRI). The strategy is based on 3D Cartesian sampling with random switching of the frequency encoding direction with other k-space directions. Two 3D sampling strategies are presented. In the first strategy, the frequency encoding direction is randomly switched with one of the two phase encoding directions. In the second strategy, the frequency encoding direction is randomly chosen among all the directions of k-space. These strategies lower the coherence of the acquisition, reducing aliasing artifacts and achieving better image quality after Compressive Sensing (CS) reconstruction. Furthermore, the proposed strategies can reduce the typical smoothing of CS due to the limited sampling of high-frequency locations. We demonstrate by means of simulations that the proposed acquisition strategies outperform the standard Compressive Sensing acquisition, resulting in better quality of the reconstructed images and in greater achievable acceleration.
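The second strategy can be illustrated by generating an undersampling mask in which each acquired readout line is laid down along a randomly chosen k-space axis at a random position in the orthogonal plane. This is a simplified geometric sketch of the sampling pattern only; the matrix size, number of lines, and absence of variable-density weighting are assumptions.

```python
import numpy as np

def random_axis_mask(n=64, n_lines=400, seed=0):
    """3D Cartesian k-space mask: each line is fully sampled along a random axis."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n, n), dtype=bool)
    for _ in range(n_lines):
        axis = rng.integers(0, 3)             # readout direction for this line
        i, j = rng.integers(0, n, size=2)     # position in the orthogonal plane
        idx = [i, j]
        idx.insert(axis, slice(None))         # full line along the chosen axis
        mask[tuple(idx)] = True
    return mask

mask = random_axis_mask()
print(f"sampled fraction of k-space: {mask.mean():.3f}")
```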
ERIC Educational Resources Information Center
Van Campen, Joseph A.
Computer software for programed language instruction, developed in the second quarter of 1970 at Stanford's Institute for Mathematical Studies in the Social Sciences is described in this report. The software includes: (1) a PDP-10 computer assembly language for generating drill sentences; (2) a coding system allowing a large number of sentences to…
A Micro-Processor Based System as a Teaching Tool.
ERIC Educational Resources Information Center
Spero, Samuel W.
1979-01-01
Two instructional strategies incorporating a microprocessor-based computer system are described. These are the use of the system to drive a television monitor, and the system's use in generating problem sets. (MP)
Nonprobability and probability-based sampling strategies in sexual science.
Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah
2015-01-01
With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that have relevance to sexual science that advocates for nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing limitations in applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.
A study of active learning methods for named entity recognition in clinical text.
Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua
2015-12-01
Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large numbers of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge that contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three different categories: uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with passive learning, which uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on the number of sentences or words in the training set) were generated to evaluate the different active learning methods and the passive learning method, and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best method based on uncertainty sampling could save 66% of annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve 0.80 in F-measure, in comparison to random sampling, the best uncertainty-based method saved 42% of annotations in words, whereas the best diversity-based method reduced annotation effort by only 7%. In the simulated setting, AL methods, particularly uncertainty-sampling-based approaches, appeared to significantly reduce annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
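A generic least-confidence active-learning loop conveys the idea behind the uncertainty-sampling strategies; the sketch below uses a simple classifier on synthetic features rather than the paper's CRF-based NER tagger, so the model, features, seed size, and batch size are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 20))                    # stand-in sentence features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # stand-in labels

labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]
batch = 20

for rnd in range(10):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    confidence = proba.max(axis=1)                 # least-confidence criterion
    query = np.argsort(confidence)[:batch]         # most uncertain pool items
    picked = [pool[i] for i in query]
    labeled.extend(picked)                         # "annotate" them
    pool = [i for i in pool if i not in set(picked)]
    print(f"round {rnd}: {len(labeled)} labeled, accuracy {model.score(X, y):.3f}")
```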
NASA Astrophysics Data System (ADS)
Lozano-Vega, Gildardo; Benezeth, Yannick; Marzani, Franck; Boochs, Frank
2014-09-01
Accurate recognition of airborne pollen taxa is crucial for understanding and treating allergic diseases which affect an important proportion of the world population. Modern computer vision techniques enable the detection of discriminant characteristics. Apertures are among the important characteristics which have not been adequately explored until now. A flexible method of detection, localization, and counting of apertures of different pollen taxa with varying appearances is proposed. Aperture description is based on primitive images following the bag-of-words strategy. A confidence map is estimated based on the classification of sampled regions. The method is designed to be extended modularly to new aperture types employing the same algorithm by building individual classifiers. The method was evaluated on the top five allergenic pollen taxa in Germany, and its robustness to unseen particles was verified.
Suboptimal LQR-based spacecraft full motion control: Theory and experimentation
NASA Astrophysics Data System (ADS)
Guarnaccia, Leone; Bevilacqua, Riccardo; Pastorelli, Stefano P.
2016-05-01
This work introduces a real time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach.
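The sub-optimal scheme amounts to re-solving an LQR problem at every sample around the freshly linearized dynamics. A minimal discrete-time sketch is shown below; the toy system matrices and weights are assumptions, and the re-linearization of the spacecraft equations of motion is represented by a placeholder function.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def linearize(x):
    """Placeholder for linearizing the equations of motion at the current state x."""
    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator-like dynamics
    B = np.array([[0.005], [0.1]])
    return A, B

Q = np.diag([10.0, 1.0])                      # state weighting
R = np.array([[0.1]])                         # control weighting
x = np.array([1.0, 0.0])                      # initial state error

for k in range(50):
    A, B = linearize(x)                       # real-time linearization
    P = solve_discrete_are(A, B, Q, R)        # Riccati solution at this sample
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # LQR gain, recomputed each step
    u = -K @ x
    x = A @ x + B @ u
print("final state error:", x)
```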
Using Predictability for Lexical Segmentation.
Çöltekin, Çağrı
2017-09-01
This study investigates a strategy based on the predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units, using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of the predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability in segmentation. We show that predictability is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
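The predictability cue can be illustrated with a toy segmenter that estimates transitional probabilities between consecutive syllables from an unsegmented stream and posits a word boundary at local minima of that probability. This is a simplified batch illustration of the cue only; the paper's model is incremental, and the syllable stream below is invented.

```python
from collections import Counter

stream = ("pa bi ku ti bu do pa bi ku go la tu ti bu do go la tu pa bi ku").split()

# Estimate transitional probabilities P(next | current) from the stream itself.
bigrams = Counter(zip(stream, stream[1:]))
unigrams = Counter(stream[:-1])
tp = {(a, b): c / unigrams[a] for (a, b), c in bigrams.items()}

# Posit a boundary where the transitional probability is a local minimum.
probs = [tp[(a, b)] for a, b in zip(stream, stream[1:])]
segments, current = [], [stream[0]]
for i in range(1, len(stream)):
    left = probs[i - 2] if i >= 2 else float("inf")
    right = probs[i] if i < len(probs) else float("inf")
    if probs[i - 1] < left and probs[i - 1] < right:   # dip in predictability
        segments.append(" ".join(current))
        current = []
    current.append(stream[i])
segments.append(" ".join(current))
print(segments)
```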
Awad, Mohamed Moustafa; Alqahtani, H; Al-Mudahi, A; Murayshed, M S; Alrahlah, A; Bhandi, Shilpa H
2017-07-01
To review adhesive bonding to different computer-aided design/computer-aided manufacturing (CAD/CAM) esthetic restorative materials. The use of CAD/CAM esthetic restorative materials has gained popularity in recent years, and several CAD/CAM esthetic restorative materials are commercially available. Adhesive bonding is a major determinant of the success of CAD/CAM restorations. Review results: The currently available bonding strategies and their rationale for the various CAD/CAM materials are discussed. Different surface treatment methods as well as adhesion promoters can be used to achieve reliable bonding of CAD/CAM restorative materials. Selection of the bonding strategy for such a material is determined by its composition. Further evidence is required to evaluate the effect of new surface treatment methods, such as nonthermal atmospheric plasma and self-etching ceramic primers, on bonding to different dental ceramics. An understanding of the currently available bonding strategies for CAD/CAM materials can help the clinician select the most indicated system for each category of materials.
Combined multi-spectrum and orthogonal Laplacianfaces for fast CB-XLCT imaging with single-view data
NASA Astrophysics Data System (ADS)
Zhang, Haibo; Geng, Guohua; Chen, Yanrong; Qu, Xuan; Zhao, Fengjun; Hou, Yuqing; Yi, Huangjian; He, Xiaowei
2017-12-01
Cone-beam X-ray luminescence computed tomography (CB-XLCT) is an attractive hybrid imaging modality with the potential to monitor the metabolic processes of nanophosphor-based drugs in vivo. Single-view data reconstruction, a key issue in CB-XLCT imaging, enables the effective study of dynamic XLCT imaging; however, it suffers from serious ill-posedness in the inverse problem. In this paper, a multi-spectrum strategy, based on the third-order simplified spherical harmonics approximation model, is adopted to relieve the ill-posedness of the reconstruction. Then, an orthogonal Laplacianfaces-based method is proposed to reduce the large computational burden without degrading the imaging quality. Both simulated data and in vivo experimental data were used to evaluate the efficiency and robustness of the proposed method. The results are satisfactory in terms of both localization and quantitative recovery, with good computational efficiency, indicating that the proposed method is practical and promising for single-view CB-XLCT imaging.
Chessa, Manuela; Bianchi, Valentina; Zampetti, Massimo; Sabatini, Silvio P; Solari, Fabio
2012-01-01
The intrinsic parallelism of visual neural architectures based on distributed hierarchical layers is well suited to implementation on the multi-core architectures of modern graphics cards. Design strategies are proposed that allow such parallelism to be exploited optimally, efficiently mapping the hierarchy of layers and the canonical neural computations onto the GPU. Specifically, the advantages of a cortical map-like representation of the data are exploited. Moreover, a GPU implementation of a novel neural architecture for the computation of binocular disparity from stereo image pairs, based on populations of binocular energy neurons, is presented. The implemented neural model achieves good performance in terms of reliability of the disparity estimates and near real-time execution speed, thus demonstrating the effectiveness of the devised design strategies. The proposed approach is valid in general, since the neural building blocks we implemented are a common basis for the modeling of visual neural functionalities.
The Strategic Nature of Changing Your Mind
ERIC Educational Resources Information Center
Walsh, Matthew M.; Anderson, John R.
2009-01-01
In two experiments, we studied how people's strategy choices emerge through an initial and then a more considered evaluation of available strategies. The experiments employed a computer-based paradigm where participants solved multiplication problems using mental and calculator solutions. In addition to recording responses and solution times, we…
Team Culture and Business Strategy Simulation Performance
ERIC Educational Resources Information Center
Ritchie, William J.; Fornaciari, Charles J.; Drew, Stephen A. W.; Marlin, Dan
2013-01-01
Many capstone strategic management courses use computer-based simulations as core pedagogical tools. Simulations are touted as assisting students in developing much-valued skills in strategy formation, implementation, and team management in the pursuit of superior strategic performance. However, despite their rich nature, little is known regarding…
Cloud computing task scheduling strategy based on differential evolution and ant colony optimization
NASA Astrophysics Data System (ADS)
Ge, Junwei; Cai, Yu; Fang, Yiqiu
2018-05-01
This paper proposes DEACO, a task scheduling strategy based on the combination of Differential Evolution (DE) and Ant Colony Optimization (ACO). To address the limitation of a single optimization objective in cloud computing task scheduling, the strategy jointly considers the shortest task completion time, cost, and load balancing. DEACO uses the solution found by DE to initialize the pheromone of ACO, which reduces the time ACO spends accumulating pheromone in its early iterations, and improves the pheromone updating rule through a load factor. The proposed algorithm is simulated on CloudSim and compared with min-min and ACO. The experimental results show that DEACO is superior in terms of time, cost, and load.
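The hand-off from DE to ACO can be sketched as follows: a DE search over a relaxed (continuous) task-to-VM assignment finds a good schedule, and that schedule is then used to bias the initial pheromone matrix before the ACO phase starts. The makespan-only objective, the rounding trick, and the boost constant are simplifications of the combined time/cost/load objective described in the abstract.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
n_tasks, n_vms = 20, 4
task_len = rng.uniform(10, 100, n_tasks)      # task lengths (arbitrary units)
vm_speed = rng.uniform(1, 4, n_vms)           # VM processing speeds

def makespan(x):
    """Makespan of the schedule obtained by rounding the continuous DE vector."""
    assign = np.clip(x.astype(int), 0, n_vms - 1)
    finish = np.zeros(n_vms)
    for t, vm in enumerate(assign):
        finish[vm] += task_len[t] / vm_speed[vm]
    return finish.max()

# DE phase: search the relaxed assignment space.
result = differential_evolution(makespan, bounds=[(0, n_vms)] * n_tasks,
                                maxiter=50, seed=4)
best_assign = np.clip(result.x.astype(int), 0, n_vms - 1)

# Seed ACO's pheromone matrix with the DE solution before the ACO phase runs.
tau = np.ones((n_tasks, n_vms))
tau[np.arange(n_tasks), best_assign] += 5.0   # boost edges used by the DE schedule
print("DE makespan:", round(result.fun, 2))
```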
Why Adolescents Use a Computer-Based Health Information System.
ERIC Educational Resources Information Center
Hawkins, Robert P.; And Others
The Body Awareness Resource Network (BARN) is a system of interactive computer programs designed to provide adolescents with confidential, nonjudgmental health information, behavior change strategies, and sources of referral. These programs cover five adolescent health areas: alcohol and other drugs, human sexuality, smoking prevention and…
On the Bayesian Treed Multivariate Gaussian Process with Linear Model of Coregionalization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Lin, Guang
2015-02-01
The Bayesian treed Gaussian process (BTGP) has gained popularity in recent years because it provides a straightforward mechanism for modeling non-stationary data and can alleviate computational demands by fitting models to less data. The extension of BTGP to the multivariate setting requires us to model the cross-covariance and to propose efficient algorithms that can deal with trans-dimensional MCMC moves. In this paper we extend the cross-covariance of the Bayesian treed multivariate Gaussian process (BTMGP) to that of the linear model of coregionalization (LMC) cross-covariances. Different strategies have been developed to improve the MCMC mixing and to invert smaller matrices in the Bayesian inference. Moreover, we compare the proposed BTMGP with multiple existing BTGP and BTMGP models in test cases and in a multiphase flow computer experiment in a full-scale regenerator of a carbon capture unit. The use of the BTMGP with the LMC cross-covariance helped to predict the computer experiments better than existing competitors. The proposed model has a wide variety of applications, such as computer experiments and environmental data. In the case of computer experiments we also develop an adaptive sampling strategy for the BTMGP with the LMC cross-covariance function.
Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; Hilgart, Mark C.; Stepanov, Sergey; Sanishvili, Ruslan; Becker, Michael; Winter, Graeme; Sauter, Nicholas K.; Smith, Janet L.; Fischetti, Robert F.
2014-01-01
The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce. PMID:25484844
Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; ...
2014-11-18
The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce.
The Role of Corticostriatal Systems in Speech Category Learning.
Yi, Han-Gyol; Maddox, W Todd; Mumford, Jeanette A; Chandrasekaran, Bharath
2016-04-01
One of the most difficult category learning problems for humans is learning nonnative speech categories. While feedback-based category training can enhance speech learning, the mechanisms underlying these benefits are unclear. In this functional magnetic resonance imaging study, we investigated neural and computational mechanisms underlying feedback-dependent speech category learning in adults. Positive feedback activated a large corticostriatal network including the dorsolateral prefrontal cortex, inferior parietal lobule, middle temporal gyrus, caudate, putamen, and the ventral striatum. Successful learning was contingent upon the activity of domain-general category learning systems: the fast-learning reflective system, involving the dorsolateral prefrontal cortex that develops and tests explicit rules based on the feedback content, and the slow-learning reflexive system, involving the putamen in which the stimuli are implicitly associated with category responses based on the reward value in feedback. Computational modeling of response strategies revealed significant use of reflective strategies early in training and greater use of reflexive strategies later in training. Reflexive strategy use was associated with increased activation in the putamen. Our results demonstrate a critical role for the reflexive corticostriatal learning system as a function of response strategy and proficiency during speech category learning. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Figueroa-Torres, Gonzalo M; Pittman, Jon K; Theodoropoulos, Constantinos
2017-10-01
Microalgal starch and lipids, carbon-based storage molecules, are useful as potential biofuel feedstocks. In this work, cultivation strategies maximising starch and lipid formation were established by developing a multi-parameter kinetic model describing microalgal growth as well as starch and lipid formation, in conjunction with laboratory-scale experiments. Growth dynamics are driven by nitrogen-limited mixotrophic conditions, known to increase cellular starch and lipid contents whilst enhancing biomass growth. Model parameters were computed by fitting model outputs to a range of experimental datasets from batch cultures of Chlamydomonas reinhardtii. Predictive capabilities of the model were established against different experimental data. The model was subsequently used to compute optimal nutrient-based cultivation strategies in terms of initial nitrogen and carbon concentrations. Model-based optimal strategies yielded a significant increase of 261% for starch (0.065 g C L-1) and 66% for lipid (0.08 g C L-1) production compared to base-case conditions (0.018 g C L-1 starch, 0.048 g C L-1 lipids). Copyright © 2017 Elsevier Ltd. All rights reserved.
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which uses plant abundance as an auxiliary variable, was explored in an experimental study on a 50 m×50 m plot in a marshland of the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; secondly, the required number of optimal sampling points for each layer was calculated with the Hammond-McCullagh equation; thirdly, every sample point was located according to the Multiple Directional Interpolation (MDI) placement scheme; and finally, the outcomes of spatial random sampling, traditional systematic sampling, spatial stratified sampling, Sandwich spatial sampling and inference, and SOPA were compared. Results SOPA had the smallest absolute error, 0.2138; the traditional systematic sampling method gave the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study achieves higher estimation accuracy than the other four methods.
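As a rough illustration of the stratified-sampling idea behind SOPA, the sketch below stratifies a simulated plot by plant abundance and allocates a fixed sampling budget across strata. The Hammond-McCullagh allocation used in the paper is not reproduced; Neyman allocation is substituted as a stand-in, and all inputs (grid size, densities) are hypothetical.

```python
import numpy as np

# Hypothetical push-broom snail counts on a 50 m x 50 m grid of 1 m cells; snail density
# is made to correlate with plant abundance (illustration only).
rng = np.random.default_rng(0)
plant_abundance = rng.gamma(shape=2.0, scale=1.0, size=2500)
snail_density = rng.poisson(lam=0.5 + 0.8 * plant_abundance)

# 1) Stratify the plot into 5 layers by plant abundance.
edges = np.quantile(plant_abundance, [0, .2, .4, .6, .8, 1.0])
layer = np.clip(np.digitize(plant_abundance, edges[1:-1]), 0, 4)

# 2) Allocate a fixed budget of sample points to layers (Neyman allocation as a stand-in:
#    n_h proportional to N_h * S_h, favouring large, variable strata).
n_total = 100
N_h = np.array([np.sum(layer == h) for h in range(5)])
S_h = np.array([snail_density[layer == h].std(ddof=1) for h in range(5)])
n_h = np.maximum(1, np.round(n_total * N_h * S_h / np.sum(N_h * S_h)).astype(int))

# 3) Sample within each layer and form the stratified estimate of mean density.
est = 0.0
for h in range(5):
    idx = rng.choice(np.flatnonzero(layer == h), size=n_h[h], replace=False)
    est += (N_h[h] / N_h.sum()) * snail_density[idx].mean()

true_mean = snail_density.mean()
print(f"stratified estimate {est:.3f}, true mean {true_mean:.3f}, abs. error {abs(est - true_mean):.3f}")
```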
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
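The pre-sampling-and-simulation idea can be sketched as follows: given a census-like pre-sample of gall counts, repeatedly simulate a candidate sampling plan and examine how the estimation error shrinks with sample size. The negative binomial parameters and stand size below are hypothetical, and only simple random sampling is simulated; the belt-transect spatial structure is not modelled.

```python
import numpy as np

# Hypothetical "pre-sampling" data: gall counts per tree on a clumped (negative binomial)
# stand of 400 trees. This illustrates the simulation idea, not the authors' software.
rng = np.random.default_rng(1)
k, mean = 0.8, 3.0                       # clumping parameter and mean density
p = k / (k + mean)
galls = rng.negative_binomial(k, p, size=400)
true_mean = galls.mean()

def simulate_plan(sample_size, n_reps=2000):
    """Distribution of the absolute error of the sample mean under simple random sampling."""
    errs = np.empty(n_reps)
    for r in range(n_reps):
        draw = rng.choice(galls, size=sample_size, replace=False)
        errs[r] = abs(draw.mean() - true_mean)
    return errs

for n in (10, 25, 40, 100):
    errs = simulate_plan(n)
    print(f"n={n:3d}: median |error| = {np.median(errs):.2f}, 90th pct = {np.percentile(errs, 90):.2f}")
```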
Andersson, Neil; Arostegui, Jorge; Nava-Aguilera, Elizabeth; Harris, Eva; Ledogar, Robert J
2017-05-30
Since the Aedes aegypti mosquitoes that transmit dengue virus can breed in clean water, WHO-endorsed vector control strategies place sachets of organophosphate pesticide, temephos (Abate), in household water storage containers. These and other pesticide-dependent approaches have failed to curb the spread of dengue and multiple dengue virus serotypes continue to spread throughout tropical and subtropical regions worldwide. A feasibility study in Managua, Nicaragua, generated instruments, intervention protocols, training schedules and impact assessment tools for a cluster randomised controlled trial of community-based approaches to vector control comprising an alternative strategy for dengue prevention and control in Nicaragua and Mexico. The Camino Verde (Green Way) is a pragmatic parallel group trial of pesticide-free dengue vector control, adding effectiveness to the standard government dengue control. A random sample from the most recent census in three coastal regions of Guerrero state in Mexico will generate 90 study clusters and the equivalent sampling frame in Managua, Nicaragua will generate 60 clusters, making a total of 150 clusters each of 137-140 households. After a baseline study, computer-driven randomisation will allocate to intervention one half of the sites, stratified by country, evidence of recent dengue virus infection in children aged 3-9 years and, in Nicaragua, level of community organisation. Following a common evidence-based education protocol, each cluster will develop and implement its own collective interventions including house-to-house visits, school-based programmes and inter-community visits. After 18 months, a follow-up study will compare dengue history, serological evidence of recent dengue virus infection (via measurement of anti-dengue virus antibodies in saliva samples) and entomological indices between intervention and control sites. Our hypothesis is that informed community mobilisation adds effectiveness in controlling dengue. ISRCTN27581154 .
A model-based 'varimax' sampling strategy for a heterogeneous population.
Akram, Nuzhat A; Farooqi, Shakeel R
2014-01-01
Sampling strategies are planned to enhance the homogeneity of a sample and hence to minimize confounding errors. A sampling strategy was developed to minimize the variation within population groups. Karachi, the largest urban agglomeration in Pakistan, was used as a model population. Blood groups ABO and Rh factor were determined for 3000 unrelated individuals selected through simple random sampling. Among them, five population groups based on paternal ethnicity, namely Balochi, Muhajir, Pathan, Punjabi and Sindhi, were identified. An index was designed to measure the proportion of admixture at the parental and grandparental levels. Population models based on index score were proposed. For validation, 175 individuals selected through stratified random sampling were genotyped for the three STR loci CSF1PO, TPOX and TH01. ANOVA showed significant differences across the population groups for blood group and STR locus distributions. Gene diversity was higher across the sub-population model than in the agglomerated population. At the parental level, gene diversities were significantly higher across no-admixture models than admixture models; at the grandparental level, the difference was not significant. A sub-population model with no admixture at the parental level was justified for sampling the heterogeneous population of Karachi.
Similarities and differences in dream content at the cross-cultural, gender, and individual levels.
William Domhoff, G; Schneider, Adam
2008-12-01
The similarities and differences in dream content at the cross-cultural, gender, and individual levels provide one starting point for carrying out studies that attempt to discover correspondences between dream content and various types of waking cognition. Hobson and Kahn's (Hobson, J. A., & Kahn, D. (2007). Dream content: Individual and generic aspects. Consciousness and Cognition, 16, 850-858.) conclusion that dream content may be more generic than most researchers realize, and that individual differences are less salient than usually thought, provides the occasion for a review of findings based on the Hall and Van de Castle (Hall, C., & Van de Castle, R. (1966). The content analysis of dreams. New York: Appleton-Century-Crofts.) coding system for the study of dream content. Then new findings based on a computationally intensive randomization strategy are presented to show the minimum sample sizes needed to detect gender and individual differences in dream content. Generally speaking, sample sizes of 100-125 dream reports are needed because most dream elements appear in less than 50% of dream reports and the magnitude of the differences usually is not large.
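A computationally intensive randomization strategy of this kind can be sketched as a simulation of permutation tests on the presence or absence of a dream element across two sets of reports, estimating how often a difference of a given size is detected at each sample size. The element frequencies used below (30% vs. 45%) are hypothetical placeholders, not Hall and Van de Castle norms.

```python
import numpy as np

rng = np.random.default_rng(2)

def detect_rate(p_a, p_b, n, n_sims=300, n_perm=300, alpha=0.05):
    """Fraction of simulated two-sample studies (n reports per sample) in which a
    permutation test on the difference in element proportions is significant."""
    hits = 0
    for _ in range(n_sims):
        a = rng.random(n) < p_a            # element present/absent in each report
        b = rng.random(n) < p_b
        obs = abs(a.mean() - b.mean())
        pooled = np.concatenate([a, b])
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            count += abs(pooled[:n].mean() - pooled[n:].mean()) >= obs
        if (count + 1) / (n_perm + 1) < alpha:
            hits += 1
    return hits / n_sims

# Hypothetical element frequencies: present in 30% of one set of reports vs 45% of the other.
for n in (50, 100, 125):
    print(f"sample size {n}: detection rate ≈ {detect_rate(0.30, 0.45, n):.2f}")
```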
GLS-Finder: A Platform for Fast Profiling of Glucosinolates in Brassica Vegetables.
Sun, Jianghao; Zhang, Mengliang; Chen, Pei
2016-06-01
Mass spectrometry combined with related tandem techniques has become the most popular method for plant secondary metabolite characterization. We introduce a new strategy based on in-database searching, mass fragmentation behavior study, and formula prediction for fast profiling of glucosinolates, a class of important compounds in brassica vegetables. A MATLAB script-based expert system computer program, "GLS-Finder", was developed. It is capable of qualitative and semi-quantitative analyses of glucosinolates in samples using data generated by ultrahigh-performance liquid chromatography-high-resolution accurate mass with multi-stage mass fragmentation (UHPLC-HRAM/MS(n)). A suite of bioinformatic tools was integrated into the "GLS-Finder" to perform raw data deconvolution, peak alignment, putative glucosinolate assignments, semi-quantitation, and unsupervised principal component analysis (PCA). GLS-Finder was successfully applied to identify intact glucosinolates in 49 commonly consumed Brassica vegetable samples in the United States. It is believed that this work introduces a new way of fast data processing and interpretation for qualitative and quantitative analyses of glucosinolates, greatly improving efficiency in comparison to manual identification.
Keil, Jan; Michel, Andrea; Sticca, Fabio; Leipold, Kristina; Klein, Annette M; Sierau, Susan; von Klitzing, Kai; White, Lars O
2017-08-01
Social dilemmas are characterized by conflicts between immediate self-interest and long-term collective goals. Although such conflicts lie at the heart of various challenging social interactions, we know little about how cooperation in these situations develops. To extend work on social dilemmas to child and adolescent samples, we developed an age-appropriate computer task (the Pizzagame) with the structural features of a public goods game (PGG). We administered the Pizzagame to a sample of 191 children 9 to 16 years of age. Subjects were led to believe they were playing the game over the Internet with three sets of two same-aged, same-sex co-players. In fact, the co-players were computer-generated and programmed to expose children to three consecutive conditions: (1) a cooperative strategy, (2) a selfish strategy, and (3) divergent cooperative-selfish strategies. Supporting the validity of the Pizzagame, our results revealed that children and adolescents displayed conditional cooperation, such that their contributions rose with the increasing cooperativeness of their co-players. Age and gender did not influence children and adolescents' cooperative behavior within each condition. However, older children adapted their behavior more flexibly between conditions to parallel the strategies of their co-players. These results support the utility of the Pizzagame as a feasible, reliable, and valid instrument for assessing and quantifying child and adolescent cooperative behavior. Moreover, these findings extend previous work showing that age influences cooperative behavior in the PGG.
Incorporation of the TIP4P water model into a continuum solvent for computing solvation free energy
NASA Astrophysics Data System (ADS)
Yang, Pei-Kun
2014-10-01
The continuum solvent model is one of the commonly used strategies to compute solvation free energy, especially for large-scale conformational transitions such as protein folding or to calculate the binding affinity of protein-protein/ligand interactions. However, the dielectric polarization for computing solvation free energy from the continuum solvent is different from that obtained from molecular dynamics simulations. To mimic the dielectric polarization surrounding a solute in molecular dynamics simulations, the first-shell water molecules were modeled using a charge distribution of TIP4P in a hard sphere; the time-averaged charge distribution from the first-shell water molecules was estimated based on the coordination number of the solute, and the orientation distributions of the first-shell waters and the intermediate water molecules were treated as those of a bulk solvent. Based on this strategy, an equation describing the solvation free energy of ions was derived.
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.
Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A
2014-10-01
Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
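For reference, the 2%/2 mm gamma criterion used in this comparison can be illustrated on a one-dimensional dose profile. The sketch below is a brute-force global gamma computation on synthetic Gaussian profiles; it is not the GPU implementation described in the paper, and all profile parameters are hypothetical.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dose_tol=0.02, dta_mm=2.0):
    """Global gamma index on a 1D dose profile: for each reference point, search the evaluated
    profile for the minimum combined dose-difference / distance-to-agreement metric."""
    x = np.arange(len(dose_ref)) * spacing_mm
    d_norm = dose_tol * dose_ref.max()              # global dose criterion (2% of max dose)
    gammas = np.empty(len(dose_ref))
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2
        dd2 = ((dose_eval - di) / d_norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dd2))
    return np.mean(gammas <= 1.0)

# Hypothetical profiles: a Gaussian beam profile and a slightly shifted, rescaled copy.
x = np.linspace(-50, 50, 201)                       # 0.5 mm spacing
ref = 100 * np.exp(-x**2 / (2 * 15**2))
ev = 100.5 * np.exp(-(x - 0.6)**2 / (2 * 15**2))
print(f"gamma pass rate: {100 * gamma_pass_rate(ref, ev, spacing_mm=0.5):.1f}%")
```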
Boevé, Anja J; Meijer, Rob R; Albers, Casper J; Beetsma, Yta; Bosker, Roel J
2015-01-01
The introduction of computer-based testing in high-stakes examinations in higher education is developing rather slowly due to institutional barriers (the need for extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams will yield results similar to paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and get familiar with this new mode of test administration.
Sample injection and electrophoretic separation on a simple laminated paper based analytical device.
Xu, Chunxiu; Zhong, Minghua; Cai, Longfei; Zheng, Qingyu; Zhang, Xiaojun
2016-02-01
We described a strategy to perform multistep operations on a simple laminated paper-based separation device by using electrokinetic flow to manipulate the fluids. A laminated crossed-channel paper-based separation device was fabricated by cutting a filter paper sheet followed by lamination. Multiple function units including sample loading, sample injection, and electrophoretic separation were integrated on a single paper based analytical device for the first time, by applying potential at different reservoirs for sample, sample waste, buffer, and buffer waste. As a proof-of-concept demonstration, mixed sample solution containing carmine and sunset yellow were loaded in the sampling channel, and then injected into separation channel followed by electrophoretic separation, by adjusting the potentials applied at the four terminals of sampling and separation channel. The effects of buffer pH, buffer concentration, channel width, and separation time on resolution of electrophoretic separation were studied. This strategy may be used to perform multistep operations such as reagent dilution, sample injection, mixing, reaction, and separation on a single microfluidic paper based analytical device, which is very attractive for building micro total analysis systems on microfluidic paper based analytical devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
User's Guide to the Stand Prognosis Model
William R. Wykoff; Nicholas L. Crookston; Albert R. Stage
1982-01-01
The Stand Prognosis Model is a computer program that projects the development of forest stands in the Northern Rocky Mountains. Thinning options allow for simulation of a variety of management strategies. Input consists of a stand inventory, including sample tree records, and a set of option selection instructions. Output includes data normally found in stand, stock,...
Wong, Yung-Hao; Chiu, Chia-Chiun; Lin, Chih-Lung; Chen, Ting-Shou; Jheng, Bo-Ren; Lee, Yu-Ching; Chen, Jeremy; Chen, Bor-Sen
In recent years, many systems biology approaches have been used with various cancers. The materials described here can be used to build bases to discover novel cancer therapy targets in connection with computer-aided drug design (CADD). A deeper understanding of the mechanisms of cancer will provide more choices and correct strategies in the development of multiple target drug therapies, which is quite different from the traditional cancer single target therapy. Targeted therapy is one of the most powerful strategies against cancer and can also be applied to other diseases. Due to the large amount of progress in computer hardware and the theories of computational chemistry and physics, CADD has been the main strategy for developing novel drugs for cancer therapy. In contrast to traditional single target therapies, in this review we will emphasize the future direction of the field, i.e., multiple target therapies. Structure-based and ligand-based drug designs are the two main topics of CADD. The former needs both 3D protein structures and ligand structures, while the latter only needs ligand structures. Ordinarily it is estimated to take more than 14 years and 800 million dollars to develop a new drug. Many new CADD software programs and techniques have been developed in recent decades. We conclude with an example where we combined and applied systems biology and CADD to the core networks of four cancers and successfully developed a novel cocktail for drug therapy that treats multiple targets.
Innovative Strategies for Teaching Anatomy and Physiology.
ERIC Educational Resources Information Center
Ritt, Laura; Stewart, Barbara
1996-01-01
Describes the development of new teaching strategies in an anatomy and physiology laboratory at Burlington County College (New Jersey) based on laser disc technology, computers with multimedia capabilities, and appropriate software. Lab activities are described and results of a survey of former students are reported, including a comparison of lab…
Physical Webbing: Collaborative Kinesthetic Three-Dimensional Mind Maps[R]
ERIC Educational Resources Information Center
Williams, Marian H.
2012-01-01
Mind Mapping has predominantly been used by individuals or collaboratively in groups as a paper-based or computer-generated learning strategy. In an effort to make Mind Mapping kinesthetic, collaborative, and three-dimensional, an innovative pedagogical strategy, termed Physical Webbing, was devised. In the Physical Web activity, groups…
Using machine learning to accelerate sampling-based inversion
NASA Astrophysics Data System (ADS)
Valentine, A. P.; Sambridge, M.
2017-12-01
In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
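A minimal sketch of this idea, assuming a toy one-parameter forward operator and a Gaussian likelihood: a Gaussian Process surrogate (scikit-learn) stands in for the expensive forward solve inside a Metropolis sampler and is refined occasionally with exact evaluations. The forward function, noise level and refinement schedule are all illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def forward(m):                        # stand-in for an expensive forward operator
    return np.sin(3 * m) + 0.5 * m**2

d_obs, sigma = forward(0.7), 0.05       # synthetic datum from a "true" model m = 0.7

# Train an initial GP surrogate on a few expensive evaluations.
M = np.linspace(-2, 2, 15).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(M, forward(M[:, 0]))

def log_post(m):
    pred = gp.predict(np.array([[m]]))[0]                       # cheap surrogate prediction
    return -0.5 * ((pred - d_obs) / sigma) ** 2 - 0.5 * m**2    # Gaussian likelihood + prior

# Metropolis sampling against the surrogate, refining it at occasional accepted moves
# with exact forward solves.
m, lp = 0.0, log_post(0.0)
samples = []
for it in range(5000):
    m_new = m + 0.2 * rng.standard_normal()
    lp_new = log_post(m_new)
    if np.log(rng.random()) < lp_new - lp:
        m, lp = m_new, lp_new
        if it % 250 == 0:
            M = np.vstack([M, [[m]]])
            gp.fit(M, forward(M[:, 0]))
    samples.append(m)

print(f"posterior mean of m ≈ {np.mean(samples[1000:]):.3f} (data generated with m = 0.7)")
```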
NASA Astrophysics Data System (ADS)
Henderson, Jean Foster
The purpose of this study was to assess the effect of classroom restructuring involving computer laboratories on student achievement and student attitudes toward computers and computer courses. The effects of the targeted student attributes of gender, previous programming experience, math background, and learning style were also examined. The open lab-based class structure consisted of a traditional lecture class with a separate, unscheduled lab component in which lab assignments were completed outside of class; the closed lab-based class structure integrated a lab component within the lecture class so that half the class was reserved for lecture and half the class was reserved for students to complete lab assignments by working cooperatively with each other and under the supervision and guidance of the instructor. The sample consisted of 71 students enrolled in four intact classes of Computer Science I during the fall and spring semesters of the 2006--2007 school year at two southern universities: two classes were held in the fall (one at each university) and two classes were held in the spring (one at each university). A counterbalanced repeated measures design was used in which all students experienced both class structures for half of each semester. The order of control and treatment was rotated among the four classes. All students received the same amount of class and instructor time. A multivariate analysis of variance (MANOVA) via a multiple regression strategy was used to test the study's hypotheses. Although the overall MANOVA model was statistically significant, independent follow-up univariate analyses relative to each dependent measure found that the only significant research factor was math background: Students whose mathematics background was at the level of Calculus I or higher had significantly higher student achievement than students whose mathematics background was less than Calculus I. The results suggest that classroom structures that incorporate an open laboratory setting are just as effective on student achievement and attitudes as classroom structures that incorporate a closed laboratory setting. The results also suggest that math background is a strong predictor of student achievement in CS 1.
Reading Strategies: Issues in the Computerization of Machiavelli's "Il demonio che prese moglie".
ERIC Educational Resources Information Center
Morgan, Leslie Zarker
1994-01-01
The ideal computer-based foreign language reading program must include cognitive background, a learning taxonomy, sound computer design, and knowledge of what is needed for the specific language. Machiavelli's "Il demonio che prese moglie" is chosen for study due to its historical interest. (63 references) (CK)
Computer-Based Instruction for Purchasing Skills
ERIC Educational Resources Information Center
Ayres, Kevin M.; Langone, John; Boon, Richard T.; Norman, Audrey
2006-01-01
The purpose of this study was to investigate use of computers and video technologies to teach students to correctly make purchases in a community grocery store using the dollar plus purchasing strategy. Four middle school students diagnosed with intellectual disabilities participated in this study. A multiple probe across participants research…
Using Microcomputers for Assessment and Error Analysis. Monograph #23.
ERIC Educational Resources Information Center
Hasselbring, Ted S.; And Others
This monograph provides an overview of computer-based assessment and error analysis in the instruction of elementary students with complex medical, learning, and/or behavioral problems. Information on generating and scoring tests using the microcomputer is offered, as are ideas for using computers in the analysis of mathematical strategies and…
Discrete Mathematics Course Supported by CAS MATHEMATICA
ERIC Educational Resources Information Center
Ivanov, O. A.; Ivanova, V. V.; Saltan, A. A.
2017-01-01
In this paper, we discuss examples of assignments for a course in discrete mathematics for undergraduate students majoring in business informatics. We consider several problems with computer-based solutions and discuss general strategies for using computers in teaching mathematics and its applications. In order to evaluate the effectiveness of our…
Multimedia Instructional Tools and Student Learning in a Computer Applications Course
ERIC Educational Resources Information Center
Chapman, Debra L.; Wang, Shuyan
2015-01-01
Advances in technology and changes in educational strategies have resulted in the integration of technology in the classroom. Multimedia instructional tools (MMIT) provide student-centered active-learning instructional activities. MMITs are common in introductory computer applications courses based on the premise that MMITs should increase student…
Multimedia Instructional Tools and Student Learning in Computer Applications Courses
ERIC Educational Resources Information Center
Chapman, Debra Laier
2013-01-01
Advances in technology and changes in educational strategies have resulted in the integration of technology into the classroom. Multimedia instructional tools (MMIT) have been identified as a way to provide student-centered active-learning instructional material to students. MMITs are common in introductory computer applications courses based on…
Practical Problem-Based Learning in Computing Education
ERIC Educational Resources Information Center
O'Grady, Michael J.
2012-01-01
Computer Science (CS) is a relatively new discipline, and how best to introduce it to new students remains an open question. Likewise, the identification of appropriate instructional strategies for the diverse topics that constitute the average curriculum remains open to debate. One approach considered by a number of practitioners in CS education…
Geuna, S
2000-11-20
Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterize the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.
1988-10-01
sample these ducts. This judgement was based on the following factors: 1. The ducts were open to the atmosphere. 2. RMA records of building area samples...selected based on several factors including piping arrangements, volume to be sampled, sampling equipment flow rates, and the flow rate necessary for...effective sampling. Therefore, each sampling point strategy and procedure was customized based on these factors. The individual specific sampling
Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J
2005-01-01
We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.
Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.
Newberg, Lee A
2008-08-15
A backtrace through a dynamic programming algorithm's intermediate results, whether in search of an optimal path, to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g. cache), existing approaches store selected stages of the computation and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for optimal backtrace is available in the Supplementary Materials. Supplementary data are available at Bioinformatics online.
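The checkpoint-and-recompute idea (though not the authors' optimal schedule) can be sketched for a global alignment backtrace: the forward pass stores only every c-th row of the dynamic programming matrix, and the backtrace recomputes each block of rows from its nearest stored checkpoint. The scoring scheme below (match +1, mismatch and gap -1) and the block size are arbitrary choices for illustration.

```python
import numpy as np

def score(x, y):
    return 1 if x == y else -1          # illustrative match/mismatch scores

def nw_rows(a, b, row_start, i_from, i_to, gap=-1):
    """Compute Needleman-Wunsch score rows i_from..i_to, given row i_from - 1."""
    rows = {i_from - 1: row_start}
    prev = row_start
    for i in range(i_from, i_to + 1):
        cur = np.empty(len(b) + 1, dtype=int)
        cur[0] = i * gap
        for j in range(1, len(b) + 1):
            cur[j] = max(prev[j - 1] + score(a[i - 1], b[j - 1]),
                         prev[j] + gap, cur[j - 1] + gap)
        rows[i] = cur
        prev = cur
    return rows

def checkpointed_backtrace(a, b, c=32, gap=-1):
    """Backtrace while storing only every c-th DP row (simple uniform checkpointing)."""
    n, m = len(a), len(b)
    checkpoints = {0: np.arange(m + 1) * gap}
    for i0 in range(0, n, c):                       # forward pass: keep checkpoint rows only
        i1 = min(i0 + c, n)
        checkpoints[i1] = nw_rows(a, b, checkpoints[i0], i0 + 1, i1, gap)[i1]
    aln_a, aln_b = [], []
    i, j = n, m
    while i > 0:
        i0 = ((i - 1) // c) * c                     # nearest stored checkpoint below row i
        rows = nw_rows(a, b, checkpoints[i0], i0 + 1, i, gap)
        while i > i0:                               # trace within the recomputed block
            if j > 0 and rows[i][j] == rows[i - 1][j - 1] + score(a[i - 1], b[j - 1]):
                aln_a.append(a[i - 1]); aln_b.append(b[j - 1]); i -= 1; j -= 1
            elif rows[i][j] == rows[i - 1][j] + gap:
                aln_a.append(a[i - 1]); aln_b.append('-'); i -= 1
            else:
                aln_a.append('-'); aln_b.append(b[j - 1]); j -= 1
    while j > 0:                                    # any remaining leading gaps
        aln_a.append('-'); aln_b.append(b[j - 1]); j -= 1
    return ''.join(reversed(aln_a)), ''.join(reversed(aln_b))

top, bottom = checkpointed_backtrace("GATTACAGATTACA", "GCATGCTTACA", c=4)
print(top)
print(bottom)
```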
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and which communicates with a common global data memory. A new graph theoretic model called ATAMM which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.
NASA Astrophysics Data System (ADS)
Hativa, Nira
1993-12-01
This study sought to identify how high achievers learn and understand new concepts in arithmetic from computer-based practice which provides full solutions to examples but without verbal explanations. Four high-achieving second graders were observed in their natural school settings throughout all their computer-based practice sessions which involved the concept of rounding whole numbers, a concept which was totally new to them. Immediate post-session interviews inquired into students' strategies for solutions, errors, and their understanding of the underlying mathematical rules. The article describes the process through which the students construct their knowledge of the rounding concepts and the errors and misconceptions encountered in this process. The article identifies the cognitive abilities that promote student self-learning of the rounding concepts, their number concepts and "number sense." Differences in the ability to generalise, "mathematical memory," mindfulness of work and use of cognitive strategies are shown to account for the differences in patterns of, and gains in, learning and in maintaining knowledge among the students involved. Implications for the teaching of estimation concepts and of promoting students' "number sense," as well as for classroom use of computer-based practice are discussed.
Constraint-Based Scheduling System
NASA Technical Reports Server (NTRS)
Zweben, Monte; Eskey, Megan; Stock, Todd; Taylor, Will; Kanefsky, Bob; Drascher, Ellen; Deale, Michael; Daun, Brian; Davis, Gene
1995-01-01
Report describes continuing development of software for constraint-based scheduling system implemented eventually on massively parallel computer. Based on machine learning as means of improving scheduling. Designed to learn when to change search strategy by analyzing search progress and learning general conditions under which resource bottleneck occurs.
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
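A minimal sketch of the testing pipeline, assuming a two-group comparison of evenly sampled curves and a permutation-calibrated p-value in place of the asymptotic null distribution: each curve is Fourier-transformed, the leading coefficients are compared with two-sample z-statistics, and Fan's adaptive Neyman statistic summarizes them. Only the real parts of the FFT coefficients are kept, a simplification for illustration; the curves themselves are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

def adaptive_neyman_stat(z):
    """Fan's adaptive Neyman statistic: max over m of normalized partial sums of z_j^2 - 1."""
    partial = np.cumsum(z**2 - 1)
    m = np.arange(1, len(z) + 1)
    return np.max(partial / np.sqrt(2 * m))

def fourier_group_test(curves_a, curves_b, n_coef=20, n_perm=2000):
    """Real-FFT each curve, form per-coefficient two-sample z-statistics, then test with a
    permutation-calibrated adaptive Neyman statistic."""
    def coefs(curves):
        return np.fft.rfft(curves, axis=1).real[:, :n_coef]
    A, B = coefs(curves_a), coefs(curves_b)
    def stat(A, B):
        se = np.sqrt(A.var(axis=0, ddof=1) / len(A) + B.var(axis=0, ddof=1) / len(B))
        return adaptive_neyman_stat((A.mean(axis=0) - B.mean(axis=0)) / se)
    obs = stat(A, B)
    pooled = np.vstack([A, B])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        count += stat(pooled[perm[:len(A)]], pooled[perm[len(A):]]) >= obs
    return obs, (count + 1) / (n_perm + 1)

# Hypothetical repeated measures: 30 subjects per group, 64 time points, group B drifts upward.
t = np.linspace(0, 1, 64)
grp_a = np.sin(2 * np.pi * t) + rng.standard_normal((30, 64))
grp_b = np.sin(2 * np.pi * t) + 0.4 * t + rng.standard_normal((30, 64))
stat_val, p = fourier_group_test(grp_a, grp_b)
print(f"adaptive Neyman statistic = {stat_val:.2f}, permutation p = {p:.3f}")
```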
NASA Astrophysics Data System (ADS)
Chiu, Bernard; Li, Bing; Chow, Tommy W. S.
2013-09-01
With the advent of new therapies and management strategies for carotid atherosclerosis, there is a parallel need for measurement tools or biomarkers to evaluate the efficacy of these new strategies. 3D ultrasound has been shown to provide reproducible measurements of plaque area/volume and vessel wall volume. However, since carotid atherosclerosis is a focal disease that predominantly occurs at bifurcations, biomarkers based on local plaque change may be more sensitive than global volumetric measurements in demonstrating efficacy of new therapies. The ultimate goal of this paper is to develop a biomarker that is based on the local distribution of vessel-wall-plus-plaque thickness change (VWT-Change) that has occurred during the course of a clinical study. To allow comparison between different treatment groups, the VWT-Change distribution of each subject must first be mapped to a standardized domain. In this study, we developed a technique to map the 3D VWT-Change distribution to a 2D standardized template. We then applied a feature selection technique to identify regions on the 2D standardized map on which subjects in different treatment groups exhibit greater difference in VWT-Change. The proposed algorithm was applied to analyse the VWT-Change of 20 subjects in a placebo-controlled study of the effect of atorvastatin (Lipitor). The average VWT-Change for each subject was computed (i) over all points in the 2D map and (ii) over feature points only. For the average computed over all points, 97 subjects per group would be required to detect an effect size of 25% that of atorvastatin in a six-month study. The sample size is reduced to 25 subjects if the average were computed over feature points only. The introduction of this sensitive quantification technique for carotid atherosclerosis progression/regression would allow many proof-of-principle studies to be performed before a more costly and longer study involving a larger population is held to confirm the treatment efficacy.
NASA Astrophysics Data System (ADS)
Chartin, Caroline; Krüger, Inken; Goidts, Esther; Carnol, Monique; van Wesemael, Bas
2017-04-01
The quantification and spatialisation of reliable SOC stock (Mg C ha-1) and total stock (Tg C) baselines and their associated uncertainties are fundamental to detecting gains or losses in SOC and to locating sensitive areas with low SOC levels. Here, we aim to both quantify and spatialise SOC stocks at the regional scale (southern Belgium) based on data from a single sampling scheme that was neither design-based nor model-based. To this end, we developed a computation procedure based on Digital Soil Mapping techniques and stochastic (Monte Carlo) simulations allowing the estimation of multiple (here, 10,000) independent spatialised datasets. The computation of the prediction uncertainty accounts for the errors associated with both i) the estimation of SOC stock at the pixel scale and ii) the parameters of the spatial model. Based on these 10,000 realisations, median SOC stocks and 90% prediction intervals were computed for each pixel, as well as total SOC stocks and their 90% prediction intervals for selected sub-areas and for the entire study area. A Generalised Additive Model (GAM) explaining 69.3% of the SOC stock variance was calibrated and then validated (R2 = 0.64). The model overestimated low SOC stocks (below 50 Mg C ha-1) and underestimated high SOC stocks (especially those above 100 Mg C ha-1). A positive gradient of SOC stock occurred from the northwest to the centre of Wallonia, with a slight decrease in the southernmost part, correlated with the gradients of precipitation and temperature (along with elevation) and with dominant land use. At the catchment scale, higher SOC stocks were predicted on valley bottoms, especially for poorly drained soils under grassland. Mean predicted SOC stocks for cropland and grassland in Wallonia were 26.58 Tg C (SD 1.52) and 43.30 Tg C (SD 2.93), respectively. The procedure developed here made it possible to predict realistic spatial patterns of SOC stocks over the agricultural lands of southern Belgium and to produce reliable statistics of total SOC stocks for each of the 20 combinations of land use and agricultural region in Wallonia. This procedure appears useful for producing soil maps as policy tools for sustainable management at regional and national scales, and for computing statistics that comply with the specific requirements of reporting activities.
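The Monte Carlo aggregation step can be sketched as follows, assuming hypothetical pixel-level predictions and standard errors: each realisation draws one stock value per pixel, totals are aggregated to Tg C, and percentiles of the simulated totals give the prediction interval. Spatial correlation of errors and spatial-model parameter error, both handled in the paper's procedure, are omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical inputs: GAM-type SOC predictions (Mg C ha-1) and pixel-level prediction
# standard errors for 10,000 agricultural pixels of 6.25 ha each.
n_pix, n_sims = 10_000, 2_000
pred = rng.normal(70, 15, n_pix).clip(10)
pred_se = rng.uniform(5, 12, n_pix)
area_ha = np.full(n_pix, 6.25)

totals = np.empty(n_sims)
for s in range(n_sims):
    stocks = rng.normal(pred, pred_se)            # one realisation of every pixel (Mg C ha-1)
    totals[s] = np.sum(stocks * area_ha) * 1e-6   # aggregate and convert Mg C to Tg C

lo, med, hi = np.percentile(totals, [5, 50, 95])
print(f"total SOC stock: {med:.2f} Tg C (90% prediction interval {lo:.2f}-{hi:.2f})")
```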
Raymond-Flesch, Marissa; Siemons, Rachel; Brindis, Claire D
2016-01-01
Limited research has focused on undocumented immigrants' health and access to care. This paper describes participant engagement strategies used to investigate the health needs of immigrants eligible for Deferred Action for Childhood Arrivals (DACA). Community-based strategies engaged advocates and undocumented Californians in study design and recruitment. Outreach in diverse settings, social media, and participant-driven sampling recruited 61 DACA-eligible focus group participants. Social media, community-based organizations (CBOs), family members, advocacy groups, and participant-driven sampling were the most successful recruitment strategies. Participants felt engaging in research was instrumental for sharing their concerns with health care providers and policymakers, noteworthy in light of their previously identified fears and mistrust of government officials. Using multiple culturally responsive strategies including participant-driven sampling, engagement with CBOs, and use of social media, those eligible for DACA eagerly engage as research participants. Educating researchers and institutional review boards (IRBs) about legal and safety concerns can improve research engagement.
Sampling design for spatially distributed hydrogeologic and environmental processes
Christakos, G.; Olea, R.A.
1992-01-01
A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions.
Galbraith, Lauren; Jacobs, Casey; Hemmelgarn, Brenda R; Donald, Maoliosa; Manns, Braden J; Jun, Min
2018-01-01
Primary care providers manage the majority of patients with chronic kidney disease (CKD), although the most effective chronic disease management (CDM) strategies for these patients are unknown. We assessed the efficacy of CDM interventions used by primary care providers managing patients with CKD. The Medline, Embase and Cochrane Central databases were systematically searched (inception to November 2014) for randomized controlled trials (RCTs) assessing education-based and computer-assisted CDM interventions targeting primary care providers managing patients with CKD in the community. The efficacy of CDM interventions was assessed using quality indicators [use of angiotensin-converting enzyme inhibitor (ACEI) or angiotensin receptor blocker (ARB), proteinuria measurement and achievement of blood pressure (BP) targets] and clinical outcomes (change in BP and glomerular filtration rate). Two independent reviewers evaluated studies for inclusion, quality and extracted data. Random effects models were used to estimate pooled odds ratios (ORs) and weighted mean differences for outcomes of interest. Five studies (188 clinics; 494 physicians; 42 852 patients with CKD) were included. Two studies compared computer-assisted intervention strategies with usual care, two studies compared education-based intervention strategies with computer-assisted intervention strategies and one study compared both these intervention strategies with usual care. Compared with usual care, computer-assisted CDM interventions did not increase the likelihood of ACEI/ARB use among patients with CKD {pooled OR 1.00 [95% confidence interval (CI) 0.83-1.21]; I2 = 0.0%}. Similarly, education-related CDM interventions did not increase the likelihood of ACEI/ARB use compared with computer-assisted CDM interventions [pooled OR 1.12 (95% CI 0.77-1.64); I2 = 0.0%]. Inconsistencies in reporting methods limited further pooling of data. To date, there have been very few randomized trials testing CDM interventions targeting primary care providers with the goal of improving care of people with CKD. Those conducted to date have shown minimal impact, suggesting that other strategies, or multifaceted interventions, may be required to enhance care for patients with CKD in the community. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
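For context, the random-effects pooling reported here (pooled ORs with 95% CIs and I²) can be reproduced in outline with a DerSimonian-Laird estimator. The study-level odds ratios below are placeholders for illustration, not the values extracted in the review.

```python
import numpy as np

def pool_or_dersimonian_laird(or_values, ci_lower, ci_upper):
    """Pool odds ratios with a DerSimonian-Laird random-effects model, recovering each
    study's standard error from its 95% CI on the log scale."""
    y = np.log(or_values)
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
    w = 1 / se**2                                   # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_re = 1 / (se**2 + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se_mu = 1 / np.sqrt(np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu), i2

# Hypothetical study-level ORs for ACEI/ARB use (illustration only, not the review's data).
or_hat, lo, hi, i2 = pool_or_dersimonian_laird(
    or_values=[0.95, 1.10, 1.02],
    ci_lower=[0.75, 0.85, 0.80],
    ci_upper=[1.20, 1.42, 1.30])
print(f"pooled OR {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```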
Denizard-Thompson, Nancy R; Singh, Sonal; Stevens, Sheila R; Miller, David P; Wofford, James L
2012-01-01
To determine whether an educational strategy using a handheld, multimedia computer (iPod™) is practical and sustainable for routine office-based patient educational tasks. With the limited amount of time allotted to the office encounter and the growing number of patient educational tasks, new strategies are needed to improve the efficiency of patient education. Education of patients anticoagulated with warfarin is considered critical to preventing complications. Despite the dangers associated with the use of warfarin, educational practices are variable and often haphazard. During a four-month period, we examined the implementation of a three-part series of iPod™-based patient educational modules delivered to anticoagulated patients at the time of routine INR (International Normalized Ratio) blood tests for outpatients on the anticoagulation registry at an urban community health center. A total of 141 computer module presentations were delivered to 91 patients during the four-month period. In all, 44 patients on the registry had no INR checkups, and thus no opportunity to view the modules, and 32 patients had at least three INR checkups but no modules were documented. Of the 130 patients with at least one INR performed during the study period, 22 (16.9%) patients completed all three modules, 91 (70.0%) patients received at least one module, and nine (7.6%) patients refused to view at least one module. Neither of the two handheld computers was lost or stolen, and no physician time was used in this routine educational activity. Patients reported that the audio and visual quality was very good, (9.0/10); the educational experience of the patient was helpful (7.4/10) compared with the patient's previous warfarin education (6.3/10), and the computer strategy extended the INR visit duration by 1-5 min at most. The computer-assisted patient educational strategy was well received by patients, and uptake of the intervention by the clinic was successful and durable. The iPod™ strategy standardized the educational message, improved clinic efficiency, and helped this busy clinic meet its educational goals for patient education.
Managing geometric information with a data base management system
NASA Technical Reports Server (NTRS)
Dube, R. P.
1984-01-01
The strategies for managing computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. The research on integrated programs for aerospace-vehicle design (IPAD) focuses on the use of data base management system (DBMS) technology to manage engineering/manufacturing data. The objective of IPAD is to develop a computer-based engineering complex which automates the storage, management, protection, and retrieval of engineering data. In particular, this facility must manage geometry information as well as associated data. The approach taken on the IPAD project to achieve this objective is discussed. Geometry management in current systems and the approach taken in the early IPAD prototypes are examined.
ERIC Educational Resources Information Center
Erdogan, Yavuz; Dede, Dinçer
2015-01-01
The purpose of this study is to compare the effects of computer assisted project-based instruction on learners' achievement in a science and technology course, in a computer course and in portfolio development. With this aim in mind, a quasi-experimental design was used and a sample of 70 seventh grade secondary school students from Org. Esref…
Annealed importance sampling with constant cooling rate
NASA Astrophysics Data System (ADS)
Giovannelli, Edoardo; Cardini, Gianni; Gellini, Cristina; Pietraperzia, Giangaetano; Chelli, Riccardo
2015-02-01
Annealed importance sampling is a simulation method devised by Neal [Stat. Comput. 11, 125 (2001)] to assign weights to configurations generated by simulated annealing trajectories. In particular, the equilibrium average of a generic physical quantity can be computed by a weighted average exploiting weights and estimates of this quantity associated to the final configurations of the annealed trajectories. Here, we review annealed importance sampling from the perspective of nonequilibrium path-ensemble averages [G. E. Crooks, Phys. Rev. E 61, 2361 (2000)]. The equivalence of Neal's and Crooks' treatments highlights the generality of the method, which goes beyond the mere thermal-based protocols. Furthermore, we show that a temperature schedule based on a constant cooling rate outperforms stepwise cooling schedules and that, for a given elapsed computer time, performances of annealed importance sampling are, in general, improved by increasing the number of intermediate temperatures.
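A minimal sketch of annealed importance sampling for a one-dimensional double-well target, assuming a geometric path between a broad Gaussian starting distribution and the target, a linearly spaced inverse-temperature schedule as a stand-in for the constant-cooling-rate protocol, and a few Metropolis updates per temperature. All numerical settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def energy(x):                             # double-well energy with minima at x = +/- 1
    return (x**2 - 1.0)**2 / 0.1

def log_f_target(x):                       # unnormalized target density exp(-E(x))
    return -energy(x)

def log_f_prior(x):                        # broad Gaussian starting distribution, N(0, 2^2)
    return -0.5 * (x / 2.0)**2

def ais(n_chains=2000, n_temps=200, n_mcmc=2):
    betas = np.linspace(0.0, 1.0, n_temps + 1)     # evenly spaced schedule (stand-in for a
                                                   # constant cooling rate in temperature)
    x = 2.0 * rng.standard_normal(n_chains)        # independent draws from the prior
    log_w = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Accumulate log-weights at the current configurations (geometric bridge ratio).
        log_w += (b - b_prev) * (log_f_target(x) - log_f_prior(x))
        # A few Metropolis steps leaving the intermediate distribution invariant.
        def log_f(x_):
            return (1 - b) * log_f_prior(x_) + b * log_f_target(x_)
        for _ in range(n_mcmc):
            prop = x + 0.3 * rng.standard_normal(n_chains)
            accept = np.log(rng.random(n_chains)) < log_f(prop) - log_f(x)
            x = np.where(accept, prop, x)
    return x, log_w

x, log_w = ais()
w = np.exp(log_w - log_w.max())                    # numerically stabilized weights
print(f"weighted estimate of <x^2> under the target ≈ {np.sum(w * x**2) / np.sum(w):.3f}")
```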
Yang, Shuai; Zhang, Xinlei; Diao, Lihong; Guo, Feifei; Wang, Dan; Liu, Zhongyang; Li, Honglei; Zheng, Junjie; Pan, Jingshan; Nice, Edouard C; Li, Dong; He, Fuchu
2015-09-04
The Chromosome-centric Human Proteome Project (C-HPP) aims to catalog genome-encoded proteins using a chromosome-by-chromosome strategy. As the C-HPP proceeds, the increasing requirement for data-intensive analysis of the MS/MS data poses a challenge to the proteomic community, especially small laboratories lacking computational infrastructure. To address this challenge, we have updated the previous CAPER browser to a new version, CAPER 3.0, which is a scalable cloud-based system for data-intensive analysis of C-HPP data sets. CAPER 3.0 uses cloud computing technology to facilitate MS/MS-based peptide identification. In particular, it can use both public and private clouds, facilitating the analysis of C-HPP data sets. CAPER 3.0 provides a graphical user interface (GUI) to help users transfer data, configure jobs, track progress, and visualize the results comprehensively. These features enable users without programming expertise to easily conduct data-intensive analysis using CAPER 3.0. Here, we illustrate the usage of CAPER 3.0 with four specific mass spectral data-intensive problems: detecting novel peptides, identifying single amino acid variants (SAVs) derived from known missense mutations, identifying sample-specific SAVs, and identifying exon-skipping events. CAPER 3.0 is available at http://prodigy.bprc.ac.cn/caper3.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper rely on computational algorithms or procedures implemented in Matlab to simulate agent-based models, and they are run on computing clusters that provide high-performance parallel execution. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Li, Jiayao; Zheng, Changxi; Liu, Boyin; Chou, Tsengming; Kim, Yeonuk; Qiu, Shi; Li, Jian; Yan, Wenyi; Fu, Jing
2018-06-11
High-resolution imaging of single cells in their native or near-native state has received considerable interest for decades. In this research, we present an innovative approach that can be employed to study both morphological and nano-mechanical properties of hydrated single bacterial cells. The proposed strategy is to encapsulate wet cells with monolayer graphene with a newly developed water membrane approach, followed by imaging with both electron microscopy (EM) and atomic force microscopy (AFM). A computational framework was developed to provide additional insights, with the detailed nanoindentation process on graphene modeled using the finite element method. The model was first validated by calibration with polymer materials of known properties, and the contribution of graphene was then studied and corrected to determine the actual moduli of the encapsulated hydrated sample. Application of the proposed approach was performed on hydrated bacterial cells (Klebsiella pneumoniae) to correlate the structural and mechanical information. EM and EDS (energy-dispersive X-ray spectroscopy) imaging confirmed that the cells in their near-native state can be studied inside the miniaturized environment enabled with graphene encapsulation. The actual moduli of the encapsulated hydrated cells were determined based on the developed computational model in parallel, with results comparable with those acquired with Wet-AFM. It is expected that the successful establishment of controlled graphene encapsulation offers a new route for probing liquid/live cells with scanning probe microscopy, as well as correlative imaging of hydrated samples for both biological and material sciences. © 2018 IOP Publishing Ltd.
Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai
2016-02-19
In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver's EEG signal. Second, wavelet de-noising and down-sampling algorithms are utilized to enhance the quality of EEG data, and Fast Fourier Transformation (FFT) is adopted to extract the EEG power spectrum density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is introduced for the first time on the PSD to estimate the driver's vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model.
Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai
2016-01-01
In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver’s EEG signal. Second, wavelet de-noising and down-sampling algorithms are utilized to enhance the quality of EEG data, and Fast Fourier Transformation (FFT) is adopted to extract the EEG power spectrum density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is introduced for the first time on the PSD to estimate the driver’s vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model. PMID:26907278
ERIC Educational Resources Information Center
Owhoso, Vincent; Malgwi, Charles A.; Akpomi, Margaret
2014-01-01
The authors examine whether students who completed a computer-based intervention program, designed to help them develop abilities and skills in introductory accounting, later declared accounting as a major. A sample of 1,341 students participated in the study, of which 74 completed the intervention program (computer-based assisted learning [CBAL])…
Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques
Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.
2011-01-01
The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering. PMID:21949695
Large-scale bi-level strain design approaches and mixed-integer programming solution techniques.
Kim, Joonhoon; Reed, Jennifer L; Maravelias, Christos T
2011-01-01
The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering.
X-ray coherent scattering tomography of textured material (Conference Presentation)
NASA Astrophysics Data System (ADS)
Zhu, Zheyuan; Pang, Shuo
2017-05-01
Small-angle X-ray scattering (SAXS) measures the signature of angular-dependent coherently scattered X-rays, which contains richer information on material composition and structure than conventional absorption-based computed tomography. The SAXS image reconstruction method for a 2- or 3-dimensional object based on computed tomography, termed coherent scattering computed tomography (CSCT), enables the detection of spatially resolved, material-specific isotropic scattering signatures inside an extended object, and provides improved contrast for medical diagnosis, security screening, and material characterization applications. However, traditional CSCT methods assume that materials are fine powders or amorphous and possess isotropic scattering profiles, which is not generally true for all materials. Anisotropic scatterers cannot be captured using the conventional CSCT method and result in reconstruction errors. To obtain correct information from the sample, we designed a new imaging strategy that incorporates an extra degree of detector motion into X-ray scattering tomography for the detection of anisotropically scattered photons from a series of two-dimensional intensity measurements. Using a table-top, narrow-band X-ray source and a panel detector, we demonstrate the anisotropic scattering profile captured from an extended object and the reconstruction of a three-dimensional object. For materials possessing a well-organized crystalline structure with certain symmetry, the scatter texture is more predictable. We will also discuss the compressive schemes and implementation of data acquisition to improve the collection efficiency and accelerate the imaging process.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-01-01
In order to utilize the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose the Dynamic Finite Fault Tolerance (DFFT). Based on the DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named Dynamic Synchronous Parallel Strategy (DSP), which uses the performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors. PMID:28934163
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
EOS Laser Atmosphere Wind Sounder (LAWS) investigation
NASA Technical Reports Server (NTRS)
Emmitt, George D.
1991-01-01
The related activities of the contract are outlined for the first year. These include: (1) attend team member meetings; (2) support the EOS Project with science-related activities; (3) prepare an Execution Phase plan; and (4) support LAWS- and EOSDIS-related work. Attached to the report is an appendix, 'LAWS Algorithm Development and Evaluation Laboratory (LADEL)'. Also attached is a copy of a proposal to the NASA EOS for 'LAWS Sampling Strategies and Wind Computation Algorithms -- Storm-Top Divergence Studies. Volume I: Investigation and Technical Plan, Data Plan, Computer Facilities Plan, Management Plan.'
NASA Astrophysics Data System (ADS)
Miller, Jacob; Sanders, Stephen; Miyake, Akimasa
2017-12-01
While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.
Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A
2018-06-01
Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels, if the B1+ phases are included in the dictionary. We describe a general method that allows the transmit phases to be omitted. We show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in-vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 min. Our method reduces the dimensions of the dictionary atoms and enables any fingerprint compression strategy to be implemented in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
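The following minimal Python sketch illustrates the two steps described above, unwinding a per-voxel transceiver phase and then truncating the temporal dimension with an SVD basis, on synthetic low-rank data; the array sizes, the assumed-known phase map, and the choice of six components are illustrative only and do not reproduce the paper's reconstruction pipeline.

import numpy as np

rng = np.random.default_rng(1)
n_frames, n_voxels, rank = 400, 2000, 8

# synthetic fingerprint time frames with a low-rank temporal structure
temporal_basis = rng.standard_normal((n_frames, rank))
coeffs = rng.standard_normal((rank, n_voxels))
frames = (temporal_basis @ coeffs).astype(complex)

# simulate a per-voxel transceiver phase picked up during acquisition
transceive_phase = rng.uniform(-np.pi, np.pi, n_voxels)
frames_measured = frames * np.exp(1j * transceive_phase)

# 1) unwind the (here assumed known) transceiver phase in every time frame
frames_unwound = frames_measured * np.exp(-1j * transceive_phase)

# 2) SVD of the unwound frame stack; keep a handful of temporal components
U, s, Vh = np.linalg.svd(frames_unwound, full_matrices=False)
k = 6
compressed = U[:, :k].conj().T @ frames_unwound        # k x n_voxels compressed frames

print("compressed shape:", compressed.shape,
      "energy kept:", float((s[:k] ** 2).sum() / (s ** 2).sum()))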
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
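A minimal Python sketch of the reliability idea described above follows: a sample is classified as reliable when its surrogate prediction plus or minus the error estimate lies entirely on one side of the event threshold, unreliable samples are re-evaluated with the expensive model, and the reliable/unreliable split also yields upper and lower bounds on the event probability without any extra model runs. The toy model, surrogate, and error bound are illustrative stand-ins, not the adjoint-based estimates used in the study.

import numpy as np

rng = np.random.default_rng(2)

def high_fidelity(x):            # expensive model (toy stand-in)
    return np.sin(3 * x) + 0.1 * x

def surrogate(x):                # cheap approximation
    return np.sin(3 * x)

def err_estimate(x):             # per-sample error bound |q - q_s| <= e
    return np.abs(0.1 * x)

q0 = 0.5
x = rng.uniform(-2, 2, 10_000)
qs, e = surrogate(x), err_estimate(x)

reliable_in  = qs - e > q0                   # certainly inside the event
reliable_out = qs + e < q0                   # certainly outside
unreliable   = ~(reliable_in | reliable_out)

lower = reliable_in.mean()                   # bounds without extra model runs
upper = (reliable_in | unreliable).mean()

# Resolving only the unreliable samples with the high-fidelity model
# recovers the exact Monte Carlo estimate at reduced cost.
exact_event = reliable_in.copy()
exact_event[unreliable] = high_fidelity(x[unreliable]) > q0
print(lower, exact_event.mean(), upper, "hi-fi calls:", int(unreliable.sum()))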
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, Troy; Wildey, Timothy
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
Spatial adaptive sampling in multiscale simulation
NASA Astrophysics Data System (ADS)
Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.
2014-07-01
In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
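The following Python sketch conveys the flavor of spatial adaptive sampling on a one-dimensional macro grid: fine-scale calls are issued only where a cheap interpolation-error proxy exceeds a tolerance, and interpolation over already-sampled locations is used elsewhere. The stand-in micro-model, error proxy, and tolerance are illustrative and much simpler than the HMM setting of the paper.

import numpy as np

def fine_scale(x):
    # illustrative stand-in for an expensive micro-model response
    return np.tanh(8.0 * (x - 0.5))

def adaptive_constitutive_field(grid, tol=0.02):
    xs = [grid[0], grid[-1]]
    ys = [fine_scale(grid[0]), fine_scale(grid[-1])]
    n_calls = 2
    for x in grid[1:-1]:
        order = np.argsort(xs)
        xs_a, ys_a = np.asarray(xs)[order], np.asarray(ys)[order]
        y_hat = np.interp(x, xs_a, ys_a)
        # error proxy: gap to the nearest sample scaled by the local slope
        j = np.searchsorted(xs_a, x)
        slope = abs(ys_a[j] - ys_a[j - 1]) / (xs_a[j] - xs_a[j - 1] + 1e-12)
        if slope * min(x - xs_a[j - 1], xs_a[j] - x) > tol:
            y_hat = fine_scale(x)            # fall back to the fine-scale model
            xs.append(x); ys.append(y_hat); n_calls += 1
    return n_calls

grid = np.linspace(0.0, 1.0, 1000)
print("fine-scale calls:", adaptive_constitutive_field(grid), "of", grid.size)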
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bondar, M.L., E-mail: m.bondar@erasmusmc.nl; Hoogeman, M.S.; Mens, J.W.
2012-08-01
Purpose: To design and evaluate individualized nonadaptive and online-adaptive strategies based on a pretreatment established motion model for the highly deformable target volume in cervical cancer patients. Methods and Materials: For 14 patients, nine to ten variable bladder filling computed tomography (CT) scans were acquired at pretreatment and after 40 Gy. Individualized model-based internal target volumes (mbITVs) accounting for the cervix and uterus motion due to bladder volume changes were generated by using a motion-model constructed from two pretreatment CT scans (full and empty bladder). Two individualized strategies were designed: a nonadaptive strategy, using an mbITV accounting for the full range of bladder volume changes throughout the treatment; and an online-adaptive strategy, using mbITVs of bladder volume subranges to construct a library of plans. The latter adapts the treatment online by selecting the plan-of-the-day from the library based on the measured bladder volume. The individualized strategies were evaluated by the seven to eight CT scans not used for mbITVs construction, and compared with a population-based approach. Geometric uniform margins around planning cervix-uterus and mbITVs were determined to ensure adequate coverage. For each strategy, the percentage of the cervix-uterus, bladder, and rectum volumes inside the planning target volume (PTV), and the clinical target volume (CTV)-to-PTV volume (volume difference between PTV and CTV) were calculated. Results: The margin for the population-based approach was 38 mm and for the individualized strategies was 7 to 10 mm. Compared with the population-based approach, the individualized nonadaptive strategy decreased the CTV-to-PTV volume by 48% ± 6% and the percentage of bladder and rectum inside the PTV by 5% to 45% and 26% to 74% (p < 0.001), respectively. Replacing the individualized nonadaptive strategy by an online-adaptive, two-plan library further decreased the percentage of bladder and rectum inside the PTV (0% to 10% and -1% to 9%; p < 0.004) and the CTV-to-PTV volume (4-96 ml). Conclusions: Compared with population-based margins, an individualized PTV results in better organ-at-risk sparing. Online-adaptive radiotherapy further improves organ-at-risk sparing.
Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing
NASA Astrophysics Data System (ADS)
Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.
2018-04-01
We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
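A toy Python sketch of the adaptive biasing idea is shown below: the log-weights of the sets defined by a discrete collective variable are learned on the fly and only a fraction of the learned bias is applied to the target, in the spirit of partial biasing. The double-well potential, update rate, and biasing fraction are illustrative choices, and the sketch omits the stochastic-approximation refinements analyzed in the paper.

import numpy as np

rng = np.random.default_rng(3)

def potential(x):
    return (x ** 2 - 1.0) ** 2 / 0.05          # deep double well

def cv(x):                                      # collective variable: which well
    return 0 if x < 0 else 1

def biased_run(n_steps=200_000, gamma=1e-3, fraction=0.5, step=0.1):
    x, log_w = -1.0, np.zeros(2)                # log-weights of the two sets
    visits = np.zeros(2)
    for _ in range(n_steps):
        prop = x + step * rng.normal()
        dE = potential(prop) - potential(x)
        dB = fraction * (log_w[cv(prop)] - log_w[cv(x)])   # partial bias
        if np.log(rng.uniform()) < -(dE + dB):
            x = prop
        log_w[cv(x)] += gamma                   # penalize the currently visited set
        visits[cv(x)] += 1
    return visits, log_w

visits, log_w = biased_run()
print("visits per well:", visits, "learned bias:", log_w - log_w.min())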
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
USDA-ARS's Scientific Manuscript database
Cumulative nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. This study used an agroecosystems simulation model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2...
Chapter 2: Sampling strategies in forest hydrology and biogeochemistry
Roger C. Bales; Martha H. Conklin; Branko Kerkez; Steven Glaser; Jan W. Hopmans; Carolyn T. Hunsaker; Matt Meadows; Peter C. Hartsough
2011-01-01
Many aspects of forest hydrology have been based on accurate but not necessarily spatially representative measurements, reflecting the measurement capabilities that were traditionally available. Two developments are bringing about fundamental changes in sampling strategies in forest hydrology and biogeochemistry: (a) technical advances in measurement capability, as is...
Balasubramanian, Akhila; Kulasingam, Shalini L.; Baer, Atar; Hughes, James P.; Myers, Evan R.; Mao, Constance; Kiviat, Nancy B.; Koutsky, Laura A.
2010-01-01
Objective Estimate the accuracy and cost-effectiveness of cervical cancer screening strategies based on high-risk HPV DNA testing of self-collected vaginal samples. Materials and Methods A subset of 1,665 women (18-50 years of age) participating in a cervical cancer screening study were screened by liquid-based cytology and by high-risk HPV DNA testing of both self-collected vaginal swab samples and clinician-collected cervical samples. Women with positive/abnormal screening test results and a subset of women with negative screening test results were triaged to colposcopy. Based on individual and combined test results, five screening strategies were defined. Estimates of sensitivity and specificity for cervical intraepithelial neoplasia grade 2 or worse were calculated and a Markov model was used to estimate the incremental cost-effectiveness ratios (ICERs) for each strategy. Results Compared to cytology-based screening, high-risk HPV DNA testing of self-collected vaginal samples was more sensitive (68%, 95%CI=58%-78% versus 85%, 95%CI=76%-94%) but less specific (89%, 95%CI=86%-91% versus 73%, 95%CI=67%-79%). A strategy of high-risk HPV DNA testing of self-collected vaginal samples followed by cytology triage of HPV positive women, was comparably sensitive (75%, 95%CI=64%-86%) and specific (88%, 95%CI=85%-92%) to cytology-based screening. In-home self-collection for high-risk HPV DNA detection followed by in-clinic cytology triage had a slightly lower lifetime cost and a slightly higher quality-adjusted life expectancy than did cytology-based screening (ICER of triennial screening compared to no screening was $9,871/QALY and $12,878/QALY, respectively). Conclusions Triennial screening by high-risk HPV DNA testing of in-home, self-collected vaginal samples followed by in-clinic cytology triage was cost-effective. PMID:20592553
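For readers unfamiliar with the cost-effectiveness machinery referenced above, the following Python sketch shows how an incremental cost-effectiveness ratio is obtained from a simple discounted Markov cohort model comparing a screening strategy with no screening; every transition probability, cost, and utility in the sketch is a made-up placeholder rather than a parameter from this study.

import numpy as np

def run_strategy(p_progress, screen_cost, cycles=30, discount=0.03):
    # states: 0 = well, 1 = cancer, 2 = dead; one-year cycles
    P = np.array([
        [1.0 - p_progress - 0.01, p_progress, 0.01],
        [0.0,                     0.85,       0.15],
        [0.0,                     0.0,        1.0],
    ])
    utility = np.array([1.0, 0.6, 0.0])            # QALY weight per state per year
    cost_per_cycle = np.array([screen_cost, 25_000.0, 0.0])
    dist = np.array([1.0, 0.0, 0.0])               # whole cohort starts well
    qaly = cost = 0.0
    for t in range(cycles):
        disc = (1.0 + discount) ** -t
        qaly += disc * (dist @ utility)
        cost += disc * (dist @ cost_per_cycle)
        dist = dist @ P
    return cost, qaly

# "no screening" versus "self-collected HPV testing with cytology triage" (toy values)
cost_none, qaly_none = run_strategy(p_progress=0.005, screen_cost=0.0)
cost_hpv,  qaly_hpv  = run_strategy(p_progress=0.003, screen_cost=55.0)

d_cost, d_qaly = cost_hpv - cost_none, qaly_hpv - qaly_none
print(f"incremental cost {d_cost:,.0f}, incremental QALYs {d_qaly:.3f}, "
      f"ICER {d_cost / d_qaly:,.0f} per QALY (negative: screening dominates)")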
A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.
Grando, M Adela; Glasspool, David; Fox, John
2012-01-01
To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN-based pattern P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results from a comparable, previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Franck, I. M.; Koutsourelakis, P. S.
2017-01-01
This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporarily varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography where the identification of the mechanical properties of biological materials can inform non-invasive, medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
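The bias-correction step mentioned above can be illustrated with a short self-normalized importance sampling sketch in Python, where an approximate Gaussian-mixture posterior serves as the proposal for a toy bimodal target; the densities and sample sizes are illustrative and unrelated to the elastography application.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def log_post_unnorm(theta):                      # unnormalized bimodal "exact" posterior
    return np.logaddexp(stats.norm.logpdf(theta, -2.0, 0.5),
                        stats.norm.logpdf(theta,  2.5, 0.7))

# approximate posterior: a two-component Gaussian mixture (e.g., from an adaptive fit)
w, mu, sd = np.array([0.5, 0.5]), np.array([-1.8, 2.3]), np.array([0.6, 0.8])

def sample_approx(n):
    comp = rng.choice(2, size=n, p=w)
    return rng.normal(mu[comp], sd[comp])

def log_approx(theta):
    comps = stats.norm.logpdf(theta[:, None], mu, sd) + np.log(w)
    return np.logaddexp.reduce(comps, axis=1)

theta = sample_approx(50_000)
log_iw = log_post_unnorm(theta) - log_approx(theta)
iw = np.exp(log_iw - log_iw.max())
iw /= iw.sum()

print("approx mean :", theta.mean())
print("IS-corrected:", float(np.sum(iw * theta)))
print("effective sample size:", float(1.0 / np.sum(iw ** 2)))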
Monitoring and Depth of Strategy Use in Computer-Based Learning Environments for Science and History
ERIC Educational Resources Information Center
Deekens, Victor M.; Greene, Jeffrey A.; Lobczowski, Nikki G.
2018-01-01
Background: Self-regulated learning (SRL) models position metacognitive monitoring as central to SRL processing and predictive of student learning outcomes (Winne & Hadwin, 2008; Zimmerman, 2000). A body of research evidence also indicates that depth of strategy use, ranging from surface to deep processing, is predictive of learning…
NASA Astrophysics Data System (ADS)
Melendez, Jaime; Sánchez, Clara I.; Philipsen, Rick H. H. M.; Maduskar, Pragnya; Dawson, Rodney; Theron, Grant; Dheda, Keertan; van Ginneken, Bram
2016-04-01
Lack of human resources and radiological interpretation expertise impair tuberculosis (TB) screening programmes in TB-endemic countries. Computer-aided detection (CAD) constitutes a viable alternative for chest radiograph (CXR) reading. However, no automated techniques that exploit the additional clinical information typically available during screening exist. To address this issue and optimally exploit this information, a machine learning-based combination framework is introduced. We have evaluated this framework on a database containing 392 patient records from suspected TB subjects prospectively recruited in Cape Town, South Africa. Each record comprised a CAD score, automatically computed from a CXR, and 12 clinical features. Comparisons with strategies relying on either CAD scores or clinical information alone were performed. Our results indicate that the combination framework outperforms the individual strategies in terms of the area under the receiver operating characteristic curve (0.84 versus 0.78 and 0.72), specificity at 95% sensitivity (49% versus 24% and 31%) and negative predictive value (98% versus 95% and 96%). Thus, it is believed that combining CAD and clinical information to estimate the risk of active disease is a promising tool for TB screening.
A Risk-Analysis Approach to Implementing Web-Based Assessment
ERIC Educational Resources Information Center
Ricketts, Chris; Zakrzewski, Stan
2005-01-01
Computer-Based Assessment is a risky business. This paper proposes the use of a model for web-based assessment systems that identifies pedagogic, operational, technical (non web-based), web-based and financial risks. The strategies and procedures for risk elimination or reduction arise from risk analysis and management and are the means by which…
Wells, I G; Cartwright, R Y; Farnan, L P
1993-12-15
The computing strategy in our laboratories evolved from research in Artificial Intelligence, and is based on powerful software tools running on high performance desktop computers with a graphical user interface. This allows most tasks to be regarded as design problems rather than implementation projects, and both rapid prototyping and an object-oriented approach to be employed during the in-house development and enhancement of the laboratory information systems. The practical application of this strategy is discussed, with particular reference to the system designer, the laboratory user and the laboratory customer. Routine operation covers five departments, and the systems are stable, flexible and well accepted by the users. Client-server computing, currently undergoing final trials, is seen as the key to further development, and this approach to Pathology computing has considerable potential for the future.
Sample-space-based feature extraction and class preserving projection for gene expression data.
Wang, Wenjun
2013-01-01
In order to overcome the problems of high computational complexity and serious matrix singularity for feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) in high-dimensional data, sample-space-based feature extraction is presented, which transforms the computation procedure of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used in the implementation of PCA, LDA, and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and the experimental results on gene expression data demonstrate the effectiveness of the method.
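The following Python sketch illustrates the sample-space trick for PCA: when the number of genes far exceeds the number of samples, each principal direction is written as a weighted sum of the centered samples, so only an n-by-n Gram matrix needs to be diagonalized instead of the p-by-p covariance matrix. The data are random placeholders, and the same device extends to LDA and CPP as described in the abstract.

import numpy as np

rng = np.random.default_rng(5)
n, p = 60, 20_000                               # samples x genes
X = rng.standard_normal((n, p))

Xc = X - X.mean(axis=0)
G = Xc @ Xc.T                                   # n x n Gram matrix (cheap)
evals, alphas = np.linalg.eigh(G)               # ascending eigenvalue order
order = np.argsort(evals)[::-1]
evals, alphas = evals[order], alphas[:, order]

k = 5
# map sample-space eigenvectors back to gene space and normalize
components = Xc.T @ alphas[:, :k]               # p x k
components /= np.linalg.norm(components, axis=0)
scores = Xc @ components                        # n x k projections

# sanity check against a direct SVD of the centered data (up to sign)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
print(np.allclose(np.abs(scores), np.abs(Xc @ Vt[:k].T), atol=1e-6))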
Boevé, Anja J.; Meijer, Rob R.; Albers, Casper J.; Beetsma, Yta; Bosker, Roel J.
2015-01-01
The introduction of computer-based testing in high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need for extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams will yield similar results to paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance and change in acceptance of computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and get familiar with this new mode of test administration. PMID:26641632
NASA Astrophysics Data System (ADS)
Johansen, T. H.; Feder, J.; Jøssang, T.
1986-06-01
A fully automated apparatus has been designed for measurements of dilatation in solid samples under well-defined thermal conditions. The oven can be thermally stabilized to better than 0.1 mK over a temperature range of -60 to 150 °C using a two-stage control strategy. Coarse control is obtained by heat exchange with a circulating thermal fluid, whereas the fine regulation is based on a solid-state heat pump (a Peltier element) acting as heating and cooling source. The bidirectional action of the Peltier element permits the sample block to be controlled at the average temperature of the surroundings, thus making an essentially adiabatic system with a minimum of thermal gradients in the sample block. The dilatometer cell integrated in the oven assembly is of the parallel plate air capacitor type, and the apparatus has been successfully used with a sensitivity of 0.07 Å. Our system is well suited for measurements near structural phase transitions with a relative resolution of Δt = (T-Tc)/Tc = 2×10^-7 in temperature and ΔL/L = 1×10^-9 in strain.
ERIC Educational Resources Information Center
Sanger, Michael J.; Greenbowe, Thomas J.
2000-01-01
Investigates the effects of both computer animations of microscopic chemical processes occurring in a galvanic cell and conceptual-change instruction based on chemical demonstrations on students' conceptions of current flow in electrolyte solutions. Finds that conceptual change instruction was effective at dispelling student misconceptions but…
A Puzzle-Based Seminar for Computer Engineering Freshmen
ERIC Educational Resources Information Center
Parhami, Behrooz
2008-01-01
We observe that recruitment efforts aimed at alleviating the shortage of skilled workforce in computer engineering must be augmented with strategies for retaining and motivating the students after they have enrolled in our educational programmes. At the University of California, Santa Barbara, we have taken a first step in this direction by…
ERIC Educational Resources Information Center
Shin, Mikyung; Bryant, Diane P.
2017-01-01
Students with mathematics learning disabilities (MLD) have a weak understanding of fraction concepts and skills, which are foundations of algebra. Such students might benefit from computer-assisted instruction that utilizes evidence-based instructional components (cognitive strategies, feedback, virtual manipulatives). As a pilot study using a…
CyberStrategies: How To Build an Internet-Based Information System.
ERIC Educational Resources Information Center
Carroll, Michael L.; Downs, W. Scott
Many organizations grapple with a glut of electronic information spawned by stockpiles of incompatible computers. This book offers solutions in information sharing and computer interaction. Rather than being about the Internet per se, it is about approaches that are characteristic of the Internet and the managerial and technical aspects of…
ERIC Educational Resources Information Center
Darabi, Aubteen; Nelson, David W.; Meeker, Richard; Liang, Xinya; Boulware, Wilma
2010-01-01
In a diagnostic problem solving operation of a computer-simulated chemical plant, chemical engineering students were randomly assigned to two groups: one studying product-oriented worked examples, the other practicing conventional problem solving. Effects of these instructional strategies on the progression of learners' mental models were examined…
ERIC Educational Resources Information Center
Talib, Othman; Matthews, Robert; Secombe, Margaret
2005-01-01
This paper discusses the potential of applying computer-animated instruction (CAnI) as an effective conceptual change strategy in teaching electrochemistry in comparison to conventional lecture-based instruction (CLI). The core assumption in this study is that conceptual change in learners is an active, constructive process that is enhanced by the…
Machine learning from computer simulations with applications in rail vehicle dynamics
NASA Astrophysics Data System (ADS)
Taheri, Mehdi; Ahmadian, Mehdi
2016-05-01
The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (the suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or for models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. The fact that the training data are acquired prior to the development of the stochastic model precludes conventional sampling plan strategies such as Latin Hypercube sampling, where simulations are performed using the inputs dictated by the sampling plan. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed in which the most space-filling subset of the acquired data, with a given number of sample points, that best describes the dynamic behaviour of the system under study is selected as the training data.
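A greedy maximin selection of a space-filling training subset from pre-existing simulation data, in the spirit described above, can be sketched in a few lines of Python; the two-dimensional random inputs stand in for the recorded suspension displacement/velocity pairs, and the subset size is arbitrary.

import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((5_000, 2))             # acquired (displacement, velocity) pairs

def maximin_subset(X, k):
    # start from the point farthest from the centroid, then repeatedly add the
    # point whose distance to the already-chosen set is largest
    chosen = [int(np.argmax(np.linalg.norm(X - X.mean(0), axis=1)))]
    d_min = np.linalg.norm(X - X[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d_min))
        chosen.append(nxt)
        d_min = np.minimum(d_min, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)

idx = maximin_subset(X, k=200)
min_pair = min(np.linalg.norm(X[i] - X[j]) for i in idx for j in idx if i != j)
print("selected", idx.size, "training points; min pairwise distance:", round(min_pair, 3))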
Pressure ulcers: implementation of evidence-based nursing practice.
Clarke, Heather F; Bradley, Chris; Whytock, Sandra; Handfield, Shannon; van der Wal, Rena; Gundry, Sharon
2005-03-01
A 2-year project was carried out to evaluate the use of multi-component, computer-assisted strategies for implementing clinical practice guidelines. This paper describes the implementation of the project and lessons learned. The evaluation and outcomes of implementing clinical practice guidelines to prevent and treat pressure ulcers will be reported in a separate paper. The prevalence and incidence rates of pressure ulcers, coupled with the cost of treatment, constitute a substantial burden for our health care system. It is estimated that treating a pressure ulcer can increase nursing time up to 50%, and that treatment costs per ulcer can range from US$10,000 to $86,000, with median costs of $27,000. Although evidence-based guidelines for prevention and optimum treatment of pressure ulcers have been developed, there is little empirical evidence about the effectiveness of implementation strategies. The study was conducted across the continuum of care (primary, secondary and tertiary) in a Canadian urban Health Region involving seven health care organizations (acute, home and extended care). Trained surveyors (Registered Nurses) determined the prevalence and incidence of pressure ulcers among patients in these organizations. The use of a computerized decision-support system assisted staff to select optimal, evidence-based care strategies, record information and analyse individual and aggregate data. Evaluation indicated an increase in knowledge relating to pressure ulcer prevention, treatment strategies, resources required, and the role of the interdisciplinary team. Lack of visible senior nurse leadership; time required to acquire computer skills and to implement new guidelines; and difficulties with the computer system were identified as barriers. There is a need for a comprehensive, supported and sustained approach to implementation of evidence-based practice for pressure ulcer prevention and treatment, greater understanding of organization-specific barriers, and mechanisms for addressing the barriers.
Rapid Sampling of Hydrogen Bond Networks for Computational Protein Design.
Maguire, Jack B; Boyken, Scott E; Baker, David; Kuhlman, Brian
2018-05-08
Hydrogen bond networks play a critical role in determining the stability and specificity of biomolecular complexes, and the ability to design such networks is important for engineering novel structures, interactions, and enzymes. One key feature of hydrogen bond networks that makes them difficult to rationally engineer is that they are highly cooperative and are not energetically favorable until the hydrogen bonding potential has been satisfied for all buried polar groups in the network. Existing computational methods for protein design are ill-equipped for creating these highly cooperative networks because they rely on energy functions and sampling strategies that are focused on pairwise interactions. To enable the design of complex hydrogen bond networks, we have developed a new sampling protocol in the molecular modeling program Rosetta that explicitly searches for sets of amino acid mutations that can form self-contained hydrogen bond networks. For a given set of designable residues, the protocol often identifies many alternative sets of mutations/networks, and we show that it can readily be applied to large sets of residues at protein-protein interfaces or in the interior of proteins. The protocol builds on a recently developed method in Rosetta for designing hydrogen bond networks that has been experimentally validated for small symmetric systems but was not extensible to many larger protein structures and complexes. The sampling protocol we describe here not only recapitulates previously validated designs with performance improvements but also yields viable hydrogen bond networks for cases where the previous method fails, such as the design of large, asymmetric interfaces relevant to engineering protein-based therapeutics.
A cloud-based workflow to quantify transcript-expression levels in public cancer compendia
Tatlow, PJ; Piccolo, Stephen R.
2016-01-01
Public compendia of sequencing data are now measured in petabytes. Accordingly, it is infeasible for researchers to transfer these data to local computers. Recently, the National Cancer Institute began exploring opportunities to work with molecular data in cloud-computing environments. With this approach, it becomes possible for scientists to take their tools to the data and thereby avoid large data transfers. It also becomes feasible to scale computing resources to the needs of a given analysis. We quantified transcript-expression levels for 12,307 RNA-Sequencing samples from the Cancer Cell Line Encyclopedia and The Cancer Genome Atlas. We used two cloud-based configurations and examined the performance and cost profiles of each configuration. Using preemptible virtual machines, we processed the samples for as little as $0.09 (USD) per sample. As the samples were processed, we collected performance metrics, which helped us track the duration of each processing step and quantified computational resources used at different stages of sample processing. Although the computational demands of reference alignment and expression quantification have decreased considerably, there remains a critical need for researchers to optimize preprocessing steps. We have stored the software, scripts, and processed data in a publicly accessible repository (https://osf.io/gqrz9). PMID:27982081
NASA Astrophysics Data System (ADS)
Kanta, L.; Berglund, E. Z.
2015-12-01
Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
A security mechanism based on evolutionary game in fog computing.
Sun, Yan; Lin, Fuhong; Zhang, Nan
2018-02-01
Fog computing is a distributed computing paradigm at the edge of the network and requires the cooperation of users and sharing of resources. When users in fog computing open their resources, their devices are easily intercepted and attacked because they are accessed through wireless networks and are widely distributed geographically. In this study, a credible third party was introduced to supervise the behavior of users and protect the security of user cooperation. A fog computing security mechanism based on the human nervous system is proposed, and the strategy for a stable system evolution is calculated. The MATLAB simulation results show that the proposed mechanism can effectively reduce the number of attack behaviors and positively stimulate users to cooperate in application tasks.
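As a generic illustration of how a stable strategy can be computed for an evolutionary game of this kind, the following Python sketch integrates replicator dynamics for a two-strategy (cooperate versus attack) user population in which a supervising third party imposes a penalty on attacks; the payoff matrix and penalty values are invented for illustration and are not the paper's nervous-system-based model.

import numpy as np

def evolve(penalty, steps=2_000, dt=0.01, x0=0.5):
    # payoff matrix rows: my strategy (cooperate, attack); columns: opponent strategy
    A = np.array([[3.0,            1.0],
                  [4.0 - penalty,  0.0 - penalty]])
    x = x0                                        # fraction of cooperators
    for _ in range(steps):
        p = np.array([x, 1.0 - x])
        fitness = A @ p
        avg = p @ fitness
        x += dt * x * (fitness[0] - avg)          # replicator equation for cooperators
        x = min(max(x, 0.0), 1.0)
    return x

for penalty in (0.0, 0.5, 2.0):
    print(f"penalty {penalty:.1f} -> cooperator share at equilibrium: {evolve(penalty):.2f}")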
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
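The bootstrap procedure for the efficiency-gain confidence interval can be sketched in Python as follows: per-history scores for the conventional and correlated estimators are resampled with replacement, the gain is recomputed for each resample, and percentile bounds are read off. The synthetic heavy-tailed scores mimic the few large statistical weights discussed above, and the relative timing constants are arbitrary.

import numpy as np

rng = np.random.default_rng(7)
n = 20_000
conv_scores = rng.normal(1.0, 0.30, n)                       # conventional MC per-history scores
corr_scores = rng.normal(0.0, 0.02, n)                       # correlated-sampling dose differences
corr_scores[rng.choice(n, 25, replace=False)] += rng.exponential(1.0, 25)  # rare large weights
t_conv, t_corr = 1.0, 0.6                                    # relative CPU time per history

def gain(conv, corr):
    # efficiency gain ~ (variance x time) ratio of the two estimators
    return (conv.var(ddof=1) * t_conv) / (corr.var(ddof=1) * t_corr)

boot = np.array([
    gain(rng.choice(conv_scores, n, replace=True),
         rng.choice(corr_scores, n, replace=True))
    for _ in range(1_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"efficiency gain {gain(conv_scores, corr_scores):.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")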
Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O
2004-07-30
The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled or if the sample size is large relative to population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real data sets and one simulated data set, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
Economic evaluation of a computed tomography directed referral strategy for chronic rhinosinusitis.
Kilty, S J; Leung, R; Rudmik, L
2016-12-01
Chronic rhinosinusitis (CRS) is a prevalent chronic inflammatory disease. A clinical diagnosis of CRS by primary care physicians (PCPs) is based upon the recognition of a symptom constellation that manifests with the disease. However, because the symptomatology of CRS may overlap with that of other diagnoses, the referral of a patient to the most appropriate specialist may not always occur, leading to further delays in evaluation and treatment. Given the emphasis on improving the value of health care in Canada, a decision tree model was designed to evaluate whether an upfront computed tomography (CT) scan of the paranasal sinuses ordered by the PCP for a suspected case of CRS would be more cost-effective than the symptom-based specialist referral practice. The CT-based strategy resulted in the patient arriving at the most appropriate specialist 95% (±5%) of the time, while the symptom-based referral strategy resulted in the patient arriving at the correct specialist 77% (±18%) of the time. The incremental cost-effectiveness ratio (ICER) for the CT-based strategy was $1522 per patient arriving at the correct specialist. These results suggest that PCPs can improve the effectiveness of their referrals for CRS by utilising an upfront CT referral strategy; however, it would create an additional cost of approximately $1500 per patient referred. Given these findings, the potential clinical benefits of using an upfront CT scan in the Canadian primary care setting should be further studied to determine the value of the additional money spent to improve the effectiveness of CRS referral. © 2016 John Wiley & Sons Ltd.
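The reported figure follows the standard ICER definition, combining the incremental cost with the effectiveness values quoted above; a sketch of the calculation, with the incremental cost left symbolic since only the ratio is reported, is

\[
  \mathrm{ICER}
  = \frac{\Delta C}{\Delta E}
  = \frac{C_{\mathrm{CT}} - C_{\mathrm{symptom}}}{E_{\mathrm{CT}} - E_{\mathrm{symptom}}},
  \qquad
  \Delta E = 0.95 - 0.77 = 0.18 ,
\]

where effectiveness is measured as the proportion of patients arriving at the correct specialist.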
Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying
2013-12-01
Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphic-processing-unit parallel frame named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card using the improved method.
Robin M. Reich; Hans T. Schreuder
2006-01-01
The sampling strategy involving both statistical and in-place inventory information is presented for the natural resources project of the Green Belt area (Centuron Verde) in the Mexican state of Jalisco. The sampling designs used were a grid-based ground sample of a 90 x 90 m plot and a two-stage stratified sample of 30 x 30 m plots. The data collected were used to...
Carlfjord, S; Andersson, A; Nilsen, P; Bendtsen, P; Lindberg, M
2010-12-01
The transmission of research findings into routine care is a slow and unpredictable process. Important factors predicting receptivity to innovations within organizations have been identified, but there is a need for further research in this area. The aim of this study was to describe contextual factors and to evaluate whether organizational climate and implementation strategy influenced outcome when a computer-based concept for lifestyle intervention was introduced in primary health care (PHC). The study was conducted using a prospective intervention design. The computer-based concept was implemented at six PHC units. Contextual factors in terms of size, leadership, organizational climate and political environment at the units included in the study were assessed before implementation. Organizational climate was measured using the Creative Climate Questionnaire (CCQ). Two different implementation strategies were used: one explicit strategy, based on Rogers' theories about the innovation-decision process, and one implicit strategy. After 6 months, the implementation outcome, in terms of the proportion of patients who had been referred to the test, was measured. The CCQ response rates among staff ranged from 67% to 91% at the six units. Organizational climate differed substantially between the units. Managers scored higher on the CCQ than staff at the same unit. A combination of high CCQ scores and the explicit implementation strategy was associated with a positive implementation outcome. Organizational climate varies substantially between different PHC units. High CCQ scores in combination with an explicit implementation strategy predict a positive implementation outcome when a new working tool is introduced in PHC. © 2010 Blackwell Publishing Ltd.
State-based versus reward-based motivation in younger and older adults.
Worthy, Darrell A; Cooper, Jessica A; Byrne, Kaileigh A; Gorlick, Marissa A; Maddox, W Todd
2014-12-01
Recent decision-making work has focused on a distinction between a habitual, model-free neural system that is motivated toward actions that lead directly to reward and a more computationally demanding, goal-directed, model-based system that is motivated toward actions that improve one's future state. In this article, we examine how aging affects motivation toward reward-based versus state-based decision making. Participants performed tasks in which one type of option provided larger immediate rewards but the alternative type of option led to larger rewards on future trials, or improvements in state. We predicted that older adults would show a reduced preference for choices that led to improvements in state and a greater preference for choices that maximized immediate reward. We also predicted that fits from a hybrid reinforcement-learning model would indicate greater model-based strategy use in younger than in older adults. In line with these predictions, older adults selected the options that maximized reward more often than did younger adults in three of the four tasks, and modeling results suggested reduced model-based strategy use. In the task where older adults showed behavior similar to that of younger adults, our model-fitting results suggested that this was due to the use of a win-stay-lose-shift heuristic rather than a more complex model-based strategy. Additionally, within older adults, we found that model-based strategy use was positively correlated with memory measures from our neuropsychological test battery. We suggest that this shift from state-based to reward-based motivation may be due to age-related declines in the neural structures needed for more computationally demanding model-based decision making.
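The two candidate strategies contrasted above can be made concrete with a brief sketch; the weighting parameter, softmax temperature, and option values below are illustrative assumptions, not the authors' fitted model:

# A minimal sketch of the two strategies: a hybrid weighting of model-based
# and model-free values, and a win-stay-lose-shift (WSLS) heuristic.
import numpy as np

def hybrid_choice(q_mf, q_mb, w, beta, rng):
    """Softmax choice over Q = w*Q_model_based + (1-w)*Q_model_free."""
    q = w * q_mb + (1 - w) * q_mf
    p = np.exp(beta * q) / np.exp(beta * q).sum()
    return rng.choice(len(q), p=p)

def wsls_choice(last_choice, last_reward, n_options, rng):
    """Repeat the last choice after a 'win', otherwise switch at random."""
    if last_reward > 0:
        return last_choice
    others = [a for a in range(n_options) if a != last_choice]
    return rng.choice(others)

rng = np.random.default_rng(0)
print(hybrid_choice(np.array([0.4, 0.6]), np.array([0.7, 0.2]), w=0.8, beta=3.0, rng=rng))
print(wsls_choice(last_choice=0, last_reward=0, n_options=2, rng=rng))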
Bennie, Jason A.; Kolbe-Alexander, Tracy; Meester, Femke De
2018-01-01
Prolonged sitting has been linked to adverse health outcomes; therefore, we developed and examined a web-based, computer-tailored workplace sitting intervention. As we had previously shown good effectiveness, the next stage was to conduct a dissemination study. This study reports on the dissemination efforts of a health promotion organisation, associated costs, reach achieved, and attributes of the website users. The organisation systematically registered all the time and resources invested to promote the intervention. Website usage statistics (reach) and descriptive statistics (website users’ attributes) were also assessed. Online strategies (promotion on their homepage; sending e-mails, newsletters, Twitter, Facebook and LinkedIn posts to professional partners) were the main dissemination methods. The total time investment was 25.6 h, which cost approximately 845 EUR in salaries. After sixteen months, 1599 adults had visited the website and 1500 (93.8%) completed the survey to receive personalized sitting advice. This sample was 38.3 ± 11.0 years, mainly female (76.9%), college/university educated (89.0%), highly sedentary (88.5% sat >8 h/day) and intending to change (93.0%) their sitting. Given the small time and money investment, these outcomes are positive and indicate the potential for wide-scale dissemination. However, more efforts are needed to reach men, non-college/university educated employees, and those not intending behavioural change. PMID:29789491
Cocker, Katrien De; Cardon, Greet; Bennie, Jason A; Kolbe-Alexander, Tracy; Meester, Femke De; Vandelanotte, Corneel
2018-05-22
Prolonged sitting has been linked to adverse health outcomes; therefore, we developed and examined a web-based, computer-tailored workplace sitting intervention. As we had previously shown good effectiveness, the next stage was to conduct a dissemination study. This study reports on the dissemination efforts of a health promotion organisation, associated costs, reach achieved, and attributes of the website users. The organisation systematically registered all the time and resources invested to promote the intervention. Website usage statistics (reach) and descriptive statistics (website users' attributes) were also assessed. Online strategies (promotion on their homepage; sending e-mails, newsletters, Twitter, Facebook and LinkedIn posts to professional partners) were the main dissemination methods. The total time investment was 25.6 h, which cost approximately 845 EUR in salaries. After sixteen months, 1599 adults had visited the website and 1500 (93.8%) completed the survey to receive personalized sitting advice. This sample was 38.3 ± 11.0 years, mainly female (76.9%), college/university educated (89.0%), highly sedentary (88.5% sat >8 h/day) and intending to change (93.0%) their sitting. Given the small time and money investment, these outcomes are positive and indicate the potential for wide-scale dissemination. However, more efforts are needed to reach men, non-college/university educated employees, and those not intending behavioural change.
NASA Astrophysics Data System (ADS)
Hirakawa, E. T.; Ezzedine, S. M.; Petersson, A.; Sjogreen, B.; Vorobiev, O.; Pitarka, A.; Antoun, T.; Walter, W. R.
2016-12-01
Motions from underground explosions are governed by the non-linear hydrodynamic response of materials. However, numerical calculation of this non-linear constitutive behavior is computationally intensive compared with elastic and acoustic linear wave propagation solvers. Here, we develop a hybrid modeling approach with one-way hydrodynamic-to-elastic coupling in three dimensions in order to propagate explosion-generated ground motions from the non-linear near-source region to the far field. Near-source motions are computed using GEODYN-L, a Lagrangian hydrodynamics code for high-energy loading of earth materials. Motions on a dense grid of points sampled on two nested shells located beyond the non-linear damaged zone are saved and then passed to SW4, an anelastic, anisotropic, fourth-order finite difference code for seismic wave modeling. Our coupling strategy is based on the decomposition and uniqueness theorems, whereby motions are introduced into SW4 as a boundary source and continue to propagate as elastic waves at a much lower computational cost than if GEODYN-L were used to cover the entire near- and far-field domain. The accuracy of the numerical calculations and of the coupling strategy is demonstrated for cases with a purely elastic medium as well as a non-linear medium. Our hybrid modeling approach is applied to SPE-4' and SPE-5, the most recent underground chemical explosions conducted at the Nevada National Security Site (NNSS), where the Source Physics Experiments (SPE) are performed. Our strategy is, by design, capable of incorporating complex non-linear effects near the source as well as volumetric and topographic material heterogeneity along the propagation path to the receiver, and it provides new prospects for modeling and understanding explosion-generated seismic waveforms. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-698608.
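The one-way coupling idea (record motions outside the damaged zone, then replay them as a boundary source driving a linear solver) can be illustrated with a toy one-dimensional sketch; this is not GEODYN-L or SW4 code, and the grid, wavelet and material values are assumptions:

# Toy 1-D illustration of one-way coupling: a "recorded" near-source motion is
# injected as a Dirichlet boundary source into a linear elastic wave solver.
import numpy as np

nx, nt = 400, 1200
dx, dt, c = 10.0, 0.001, 3000.0          # grid spacing [m], time step [s], wave speed [m/s]
r = (c * dt / dx) ** 2                    # squared Courant number (stable: < 1)

# Stand-in for the hydrodynamic-code output recorded on the coupling surface.
t = np.arange(nt) * dt
f0 = 10.0
src = (1 - 2 * (np.pi * f0 * (t - 0.1)) ** 2) * np.exp(-(np.pi * f0 * (t - 0.1)) ** 2)

u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
for n in range(nt):
    u_next = 2 * u_curr - u_prev + r * (np.roll(u_curr, -1) - 2 * u_curr + np.roll(u_curr, 1))
    u_next[0] = src[n]        # inject the recorded motion as a boundary source
    u_next[-1] = 0.0          # crude fixed far boundary
    u_prev, u_curr = u_curr, u_next

print("peak far-field displacement:", np.abs(u_curr).max())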
NASA Astrophysics Data System (ADS)
Abedini, M. J.; Nasseri, M.; Burn, D. H.
2012-04-01
In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls which nearby samples are included in the location-specific estimation procedure. Almost all geostatistical software on the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates defined for the entire area under study, and the user has no guidance on how to choose them. The main thesis of the current study is that the selection of search strategy parameters has to be driven by the data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with a moving neighborhood technique is proposed. The search capability of a genetic algorithm (GA) is exploited to search the feature space for appropriate, either local or global, search strategy parameters. The radius of a circle/sphere and/or the radii of a standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by the GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of the numerical results showed that defining the search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics compared with definitions based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization via the GA of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by the anisotropy axes—was able to capture the dynamics of the piezometric head in west Texas/New Mexico in an efficient way.
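A data-driven search radius of the kind advocated above can be tuned with a very small genetic algorithm whose fitness is a leave-one-out cross-validation error; the sketch below uses synthetic data, an assumed exponential covariance model, and a circular neighborhood only, so it illustrates the idea rather than the paper's full elliptical/rotated parameterization:

# Minimal sketch: tune an ordinary-kriging moving-neighbourhood radius with a
# tiny genetic algorithm, scoring candidates by leave-one-out RMSE.
import numpy as np

rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(60, 2))                    # sample locations (synthetic)
z = np.sin(xy[:, 0] / 15.0) + 0.1 * rng.normal(size=60)   # sample values (synthetic)

def cov(h, sill=1.0, corr_len=30.0):
    """Exponential covariance model (assumed)."""
    return sill * np.exp(-h / corr_len)

def ok_predict(x0, xs, zs):
    """Ordinary kriging at x0 from neighbours xs with values zs."""
    n = len(xs)
    d = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d); A[n, n] = 0.0
    b = np.ones(n + 1); b[:n] = cov(np.linalg.norm(xs - x0, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ zs

def loo_rmse(radius):
    errs = []
    for i in range(len(z)):
        d = np.linalg.norm(xy - xy[i], axis=1)
        nb = (d < radius) & (d > 0)
        if nb.sum() < 3:
            return 1e9                        # penalise near-empty neighbourhoods
        errs.append(ok_predict(xy[i], xy[nb], z[nb]) - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# Tiny GA over the radius: truncation selection plus Gaussian mutation.
pop = rng.uniform(5, 80, size=12)
for gen in range(15):
    fitness = np.array([loo_rmse(rad) for rad in pop])
    parents = pop[np.argsort(fitness)[:4]]                 # keep the best four
    children = np.clip(rng.choice(parents, 8) + rng.normal(0, 5, 8), 5, 80)
    pop = np.concatenate([parents, children])

best = pop[np.argmin([loo_rmse(rad) for rad in pop])]
print(f"optimised search radius ~ {best:.1f} m")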
NASA Astrophysics Data System (ADS)
Sauer, Roger A.
2013-08-01
Recently, an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum, as reported by Sauer (Int J Numer Meth Eng, 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both strategies, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities such as the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure inspired by mortar formulations. The properties of the enrichment strategies and of the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.
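A mortar-inspired averaging of the kind mentioned above typically projects the raw contact pressure onto the surface shape functions; a sketch of such a scheme (the notation is assumed here and need not match the paper's) is

\[
  \bar{p}_A
  = \frac{\displaystyle \int_{\Gamma_c} N_A \, p_h \, \mathrm{d}a}
         {\displaystyle \int_{\Gamma_c} N_A \, \mathrm{d}a},
  \qquad
  p_h^{\mathrm{post}}(\boldsymbol{\xi}) = \sum_A N_A(\boldsymbol{\xi}) \, \bar{p}_A ,
\]

where the \(N_A\) are the surface shape functions, \(p_h\) is the raw (element-wise) contact pressure, and \(\Gamma_c\) is the contact surface.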
Nonvolatile Memory Materials for Neuromorphic Intelligent Machines.
Jeong, Doo Seok; Hwang, Cheol Seong
2018-04-18
Recent progress in deep learning extends the capability of artificial intelligence to various practical tasks, making the deep neural network (DNN) an extremely versatile hypothesis. While such DNNs are at present built virtually on contemporary data centers of the von Neumann architecture, a physical (in part) DNN of non-von Neumann architecture, also known as neuromorphic computing, can remarkably improve learning and inference efficiency. In particular, resistance-based nonvolatile random access memory (NVRAM) lends itself to a handy and efficient implementation of the multiply-accumulate (MAC) operation in an analog manner. Here, an overview is given of the available types of resistance-based NVRAM and their technological maturity from the material and device points of view. Examples of this strategy are subsequently addressed in comparison with their benchmarks (virtual DNNs in deep learning). The spiking neural network (SNN) is another type of neural network that is more biologically plausible than the DNN. The successful incorporation of resistance-based NVRAM in SNN-based neuromorphic computing offers an efficient solution to the MAC operation and to spike timing-based learning in nature. This strategy is exemplified from a material perspective. Intelligent machines are categorized according to their architecture and learning type, and the functionality and usefulness of NVRAM-based neuromorphic computing are addressed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
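The analog MAC operation referred to above follows from Ohm's and Kirchhoff's laws in a resistive crossbar: with read voltages V applied to the rows and cell conductances G, each column accumulates a current I = G^T V. A minimal digital emulation, with illustrative array sizes and value ranges, is:

# Digital emulation of a resistive-crossbar MAC: column currents are the
# conductance-weighted sum of the row voltages, i.e. one dense mat-vec.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(128, 64))   # conductances [S], 128 rows x 64 columns
V = rng.uniform(0.0, 0.2, size=128)           # read voltages [V] applied to the rows

I = G.T @ V                                   # column currents [A]: one MAC per column
print(I.shape, I[:3])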
NASA Astrophysics Data System (ADS)
Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.
2013-10-01
In recent years, the rapid development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architectures tend to provide centralized solutions to end users, with all the required resources offered by large enterprises or special agencies; from the perspective of resource utilization, this is a closed framework. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can package and deploy their models into the cloud conveniently, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies, namely a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services, are discussed in detail, and related experiments are conducted for further verification.
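The unified interface idea sketched above can be illustrated with a small, hypothetical wrapper through which heterogeneous models are packaged, registered, discovered and invoked; the class names, method signatures and the toy runoff model are all assumptions, not the platform's actual API:

# Hypothetical sketch of a unified model-sharing interface: package a model,
# deploy it to a registry, search the catalogue, and invoke it remotely-style.
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class GeoModel(ABC):
    """Contract every packaged geographic analysis model must satisfy."""
    name: str
    keywords: List[str]

    @abstractmethod
    def run(self, inputs: Dict[str, Any]) -> Dict[str, Any]: ...

class ModelRegistry:
    """In-memory stand-in for the sharing platform's catalogue."""
    def __init__(self) -> None:
        self._models: Dict[str, GeoModel] = {}

    def deploy(self, model: GeoModel) -> None:
        self._models[model.name] = model

    def search(self, keyword: str) -> List[str]:
        return [m.name for m in self._models.values() if keyword in m.keywords]

    def invoke(self, name: str, inputs: Dict[str, Any]) -> Dict[str, Any]:
        return self._models[name].run(inputs)

class RunoffModel(GeoModel):
    name, keywords = "simple-runoff", ["hydrology", "runoff"]
    def run(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # Toy rational-method style estimate Q = c * i * A (units left abstract).
        return {"Q": inputs["c"] * inputs["i"] * inputs["A"]}

registry = ModelRegistry()
registry.deploy(RunoffModel())
print(registry.search("hydrology"))
print(registry.invoke("simple-runoff", {"c": 0.6, "i": 20.0, "A": 3.5}))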