Sample records for computer implemented empirical

  1. Mechanistic-empirical Pavement Design Guide Implementation

    DOT National Transportation Integrated Search

    2010-06-01

    The recently introduced Mechanistic-Empirical Pavement Design Guide (MEPDG) and its associated computer software provide a state-of-practice mechanistic-empirical highway pavement design methodology. The MEPDG methodology is based on pavement responses ...

  2. Computer implemented empirical mode decomposition method, apparatus, and article of manufacture for two-dimensional signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2001-01-01

    A computer implemented method of processing two-dimensional physical signals includes five basic components and the associated presentation techniques of the results. The first component decomposes the two-dimensional signal into one-dimensional profiles. The second component is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF's) from each profile based on local extrema and/or curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the profiles. In the third component, the IMF's of each profile are then subjected to a Hilbert Transform. The fourth component collates the Hilbert transformed IMF's of the profiles to form a two-dimensional Hilbert Spectrum. A fifth component manipulates the IMF's by, for example, filtering the two-dimensional signal by reconstructing the two-dimensional signal from selected IMF(s).
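
    A minimal sketch of the sifting loop at the heart of such an Empirical Mode Decomposition, applied to one extracted 1-D profile; the stopping rule, iteration limits, and function names are simplified assumptions, not the patented procedure:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    def sift_one_imf(x, max_iter=50, tol=1e-3):
        """Extract one Intrinsic Mode Function by repeated envelope-mean removal."""
        h = x.copy()
        t = np.arange(len(x))
        for _ in range(max_iter):
            maxima = argrelextrema(h, np.greater)[0]
            minima = argrelextrema(h, np.less)[0]
            if len(maxima) < 4 or len(minima) < 4:
                break  # too few extrema to build spline envelopes: treat h as a residual trend
            upper = CubicSpline(maxima, h[maxima])(t)   # upper envelope through the maxima
            lower = CubicSpline(minima, h[minima])(t)   # lower envelope through the minima
            mean = 0.5 * (upper + lower)
            if np.mean(mean**2) < tol * np.mean(h**2):
                break  # envelope mean is negligible: accept h as an IMF
            h = h - mean
        return h

    def emd(x, n_imfs=5):
        """Decompose a 1-D profile into IMFs plus a residue."""
        imfs, residue = [], np.asarray(x, float)
        for _ in range(n_imfs):
            imf = sift_one_imf(residue)
            imfs.append(imf)
            residue = residue - imf
        return imfs, residue
    ```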

  3. System and methods for determining masking signals for applying empirical mode decomposition (EMD) and for demodulating intrinsic mode functions obtained from application of EMD

    DOEpatents

    Senroy, Nilanjan [New Delhi, IN]; Suryanarayanan, Siddharth [Littleton, CO]

    2011-03-15

    A computer-implemented method of signal processing is provided. The method includes generating one or more masking signals based upon a computed Fourier transform of a received signal. The method further includes determining one or more intrinsic mode functions (IMFs) of the received signal by performing a masking-signal-based empirical mode decomposition (EMD) using the at least one masking signal.
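
    The masking-signal idea can be illustrated with a short sketch that adds and subtracts a Fourier-derived masking sinusoid before decomposing; the amplitude and frequency choices below are illustrative assumptions, not the patented rules:

    ```python
    import numpy as np

    def masking_emd_first_imf(x, fs, emd_func):
        """Masking-signal EMD: separate close frequency components that plain EMD can mix.

        emd_func(signal) must return (list_of_imfs, residue), e.g. any standard EMD routine.
        """
        # Build a masking sinusoid from the dominant component of the Fourier spectrum.
        spectrum = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        f_dom = freqs[np.argmax(np.abs(spectrum[1:])) + 1]      # skip the DC bin
        mask = 1.6 * np.std(x) * np.sin(2 * np.pi * 1.5 * f_dom * np.arange(len(x)) / fs)

        # Decompose the signal with the mask added and subtracted, then average the first IMFs
        # so that the mask's own contribution approximately cancels.
        imf_plus, _ = emd_func(x + mask)
        imf_minus, _ = emd_func(x - mask)
        return 0.5 * (imf_plus[0] + imf_minus[0])
    ```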

  4. Testing the Relation between Fidelity of Implementation and Student Outcomes in Math

    ERIC Educational Resources Information Center

    Crawford, Lindy; Carpenter, Dick M., II; Wilson, Mary T.; Schmeister, Megan; McDonald, Marilee

    2012-01-01

    The relation between fidelity of implementation and student outcomes in a computer-based middle school mathematics curriculum was measured empirically. Participants included 485 students and 23 teachers from 11 public middle schools across seven states. Implementation fidelity was defined using two constructs: fidelity to structure and fidelity to…

  5. Computer implemented empirical mode decomposition method, apparatus and article of manufacture

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    1999-01-01

    A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
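
    A sketch of the second step, computing instantaneous frequency from each IMF via the analytic signal and accumulating local energy into a discretized Hilbert Spectrum; the binning and presentation details are simplifications of the patent's description:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def hilbert_spectrum(imfs, fs, n_freq_bins=64):
        """Crude Hilbert Spectrum: energy binned over time and instantaneous frequency."""
        n = len(imfs[0])
        spectrum = np.zeros((n_freq_bins, n))
        f_max = fs / 2.0
        for imf in imfs:
            analytic = hilbert(imf)                        # analytic signal z = imf + i*H[imf]
            amplitude = np.abs(analytic)
            phase = np.unwrap(np.angle(analytic))
            inst_freq = np.gradient(phase) * fs / (2 * np.pi)   # instantaneous frequency in Hz
            bins = np.clip((inst_freq / f_max * n_freq_bins).astype(int), 0, n_freq_bins - 1)
            spectrum[bins, np.arange(n)] += amplitude**2   # accumulate local energy per (freq, time)
        return spectrum
    ```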

  6. Facilitating Preschoolers' Scientific Knowledge Construction via Computer Games Regarding Light and Shadow: The Effect of the Prediction-Observation-Explanation (POE) Strategy

    ERIC Educational Resources Information Center

    Hsu, Chung-Yuan; Tsai, Chin-Chung; Liang, Jyh-Chong

    2011-01-01

    Educational researchers have suggested that computer games have a profound influence on students' motivation, knowledge construction, and learning performance, but little empirical research has targeted preschoolers. Thus, the purpose of the present study was to investigate the effects of implementing a computer game that integrates the…

  7. Assessing Computer Literacy: A Validated Instrument and Empirical Results.

    ERIC Educational Resources Information Center

    Gabriel, Roy M.

    1985-01-01

    Describes development of a comprehensive computer literacy assessment battery for K-12 curriculum based on objectives of a curriculum implemented in the Worldwide Department of Defense Dependents Schools system. Test development and field test data are discussed and a correlational analysis which assists in interpretation of test results is…

  8. Predicting Cloud Computing Technology Adoption by Organizations: An Empirical Integration of Technology Acceptance Model and Theory of Planned Behavior

    ERIC Educational Resources Information Center

    Ekufu, ThankGod K.

    2012-01-01

    Organizations are finding it difficult in today's economy to implement the vast information technology infrastructure required to effectively conduct their business operations. Despite the fact that some of these organizations are leveraging on the computational powers and the cost-saving benefits of computing on the Internet cloud, others…

  9. Adaptive neural coding: from biological to behavioral decision-making

    PubMed Central

    Louie, Kenway; Glimcher, Paul W.; Webb, Ryan

    2015-01-01

    Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior. PMID:26722666
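
    The divisive normalization computation referred to here has a standard textbook form; a sketch with generic (not fitted) parameters:

    ```python
    import numpy as np

    def divisive_normalization(values, sigma=1.0, r_max=1.0, weights=None):
        """Relative value code: v_i = r_max * x_i / (sigma + sum_j w_j * x_j)."""
        values = np.asarray(values, dtype=float)
        weights = np.ones_like(values) if weights is None else np.asarray(weights, float)
        return r_max * values / (sigma + np.dot(weights, values))

    # Context dependence: the same two options are coded differently when a third is added.
    print(divisive_normalization([10.0, 8.0]))          # two-option choice set
    print(divisive_normalization([10.0, 8.0, 6.0]))     # adding a distractor rescales both values
    ```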

  10. Empirical performance of the multivariate normal universal portfolio

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Pang, Sook Theng

    2013-09-01

    Universal portfolios generated by the multivariate normal distribution are studied with emphasis on the case where variables are dependent, namely, the covariance matrix is not diagonal. The moving-order multivariate normal universal portfolio requires a very long running time and large computer memory in its implementation. With the objective of reducing memory and running time, the finite-order universal portfolio is introduced. Some stock-price data sets are selected from the local stock exchange and the finite-order universal portfolio is run on the data sets, for small finite order. Empirically, it is shown that the portfolio can outperform the moving-order Dirichlet universal portfolio of Cover and Ordentlich [2] for certain parameters in the selected data sets.
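
    For orientation, a toy Cover-style universal portfolio on two assets with a uniform prior over constant-rebalanced portfolios; the multivariate-normal-generated, moving-order and finite-order versions studied in the paper differ, but the wealth-weighted averaging that drives memory and running-time costs is the same:

    ```python
    import numpy as np

    def universal_portfolio_2assets(price_relatives, n_grid=101):
        """Wealth-weighted average of constant-rebalanced portfolios (CRPs), Cover-style."""
        b_grid = np.linspace(0.0, 1.0, n_grid)             # fraction held in asset 1
        wealth = np.ones(n_grid)                            # running wealth of each candidate CRP
        total_wealth = 1.0
        for x1, x2 in price_relatives:                      # daily price relatives of the two assets
            weights = wealth / wealth.sum()
            b_t = np.dot(weights, b_grid)                   # today's wealth-weighted portfolio
            total_wealth *= b_t * x1 + (1.0 - b_t) * x2
            wealth *= b_grid * x1 + (1.0 - b_grid) * x2     # update every candidate CRP
        return total_wealth
    ```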

  11. A four stage approach for ontology-based health information system design.

    PubMed

    Kuziemsky, Craig E; Lau, Francis

    2010-11-01

    To describe and illustrate a four stage methodological approach to capture user knowledge in a biomedical domain area, use that knowledge to design an ontology, and then implement and evaluate the ontology as a health information system (HIS). A hybrid participatory design-grounded theory (GT-PD) method was used to obtain data and code them for ontology development. Prototyping was used to implement the ontology as a computer-based tool. Usability testing evaluated the computer-based tool. An empirically derived domain ontology and set of three problem-solving approaches were developed as a formalized model of the concepts and categories from the GT coding. The ontology and problem-solving approaches were used to design and implement a HIS that tested favorably in usability testing. The four stage approach illustrated in this paper is useful for designing and implementing an ontology as the basis for a HIS. The approach extends existing ontology development methodologies by providing an empirical basis for theory incorporated into ontology design. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Computer implemented empirical mode decomposition method apparatus, and article of manufacture utilizing curvature extrema

    NASA Technical Reports Server (NTRS)

    Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)

    2003-01-01

    A computer implemented physical signal analysis method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
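
    A sketch of how sifting points might be taken from both ordinary local extrema and curvature extrema, the feature that distinguishes this patent from item 5; the curvature formula and selection rules are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.signal import argrelextrema

    def sifting_points(x):
        """Candidate sifting points: local extrema of the signal plus curvature extrema.

        Curvature is approximated by kappa = x'' / (1 + x'^2)^(3/2).
        """
        dx = np.gradient(x)
        d2x = np.gradient(dx)
        curvature = d2x / (1.0 + dx**2) ** 1.5
        extrema = np.union1d(argrelextrema(x, np.greater)[0], argrelextrema(x, np.less)[0])
        curv_extrema = np.union1d(argrelextrema(curvature, np.greater)[0],
                                  argrelextrema(curvature, np.less)[0])
        return np.union1d(extrema, curv_extrema)    # indices used to anchor the envelopes
    ```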

  13. Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy

    PubMed Central

    Schroll, Henning; Hamker, Fred H.

    2013-01-01

    Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become complex to keep track of their various, sometimes just marginally different assumptions on pathway functions. Moreover, it has become a challenge to oversee to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002

  14. Dynamic data filtering system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-04-29

    A computer-implemented dynamic data filtering system and method for selectively choosing operating data of a monitored asset that modifies or expands a learned scope of an empirical model of normal operation of the monitored asset while simultaneously rejecting operating data of the monitored asset that is indicative of excessive degradation or impending failure of the monitored asset, and utilizing the selectively chosen data for adaptively recalibrating the empirical model to more accurately monitor asset aging changes or operating condition changes of the monitored asset.

  15. Mixed Methods Evaluation of Statewide Implementation of Mathematics Education Technology for K-12 Students

    ERIC Educational Resources Information Center

    Brasiel, Sarah; Martin, Taylor; Jeong, Soojeong; Yuan, Min

    2016-01-01

    An extensive body of research has demonstrated that the use in a K-12 classroom of technology, such as the Internet, computers, and software programs, enhances the learning of mathematics (Cheung & Slavin, 2013; Cohen & Hollebrands, 2011). In particular, growing empirical evidence supports that certain types of technology, such as…

  16. Role of Statistical Random-Effects Linear Models in Personalized Medicine.

    PubMed

    Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose

    2012-03-01

    Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; measure of the clinical importance of clinical, demographic, environmental or genetic covariates; study of drug-drug interactions in clinical settings; the implementation of computational tools for web-site-based evidence farming; design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.
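
    As an illustration of the empirical Bayesian logic described here, a minimal shrinkage estimator for an individual patient's mean response under a simple random-effects model; this is a generic textbook form, not the authors' published dosage formulas:

    ```python
    import numpy as np

    def empirical_bayes_individual_mean(y_i, pop_mean, sigma2_between, sigma2_within):
        """Shrink a patient's observed mean response toward the population mean (BLUP-style)."""
        n_i = len(y_i)
        shrinkage = sigma2_between / (sigma2_between + sigma2_within / n_i)
        return pop_mean + shrinkage * (np.mean(y_i) - pop_mean)

    # With few blood samples the estimate leans on the population; more samples trust the patient.
    print(empirical_bayes_individual_mean([12.0, 14.0], pop_mean=10.0,
                                          sigma2_between=4.0, sigma2_within=9.0))
    ```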

  17. BlueSNP: R package for highly scalable genome-wide association studies using Hadoop clusters.

    PubMed

    Huang, Hailiang; Tata, Sandeep; Prill, Robert J

    2013-01-01

    Computational workloads for genome-wide association studies (GWAS) are growing in scale and complexity outpacing the capabilities of single-threaded software designed for personal computers. The BlueSNP R package implements GWAS statistical tests in the R programming language and executes the calculations across computer clusters configured with Apache Hadoop, a de facto standard framework for distributed data processing using the MapReduce formalism. BlueSNP makes computationally intensive analyses, such as estimating empirical p-values via data permutation, and searching for expression quantitative trait loci over thousands of genes, feasible for large genotype-phenotype datasets. http://github.com/ibm-bioinformatics/bluesnp
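
    The permutation-based empirical p-value that BlueSNP distributes across the cluster can be sketched on a single machine as follows; this is not the BlueSNP R API, and the test statistic is an illustrative choice:

    ```python
    import numpy as np

    def empirical_p_value(genotype, phenotype, n_perm=10_000, seed=0):
        """Empirical p-value for a genotype-phenotype association via permutation.

        The statistic here is the absolute Pearson correlation; a distributed implementation
        would parallelize this loop across SNPs and permutations.
        """
        rng = np.random.default_rng(seed)
        observed = abs(np.corrcoef(genotype, phenotype)[0, 1])
        hits = 0
        for _ in range(n_perm):
            shuffled = rng.permutation(phenotype)              # break any true association
            hits += abs(np.corrcoef(genotype, shuffled)[0, 1]) >= observed
        return (hits + 1) / (n_perm + 1)                       # add-one correction avoids p = 0
    ```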

  18. Neural correlates and neural computations in posterior parietal cortex during perceptual decision-making

    PubMed Central

    Huk, Alexander C.; Meister, Miriam L. R.

    2012-01-01

    A recent line of work has found remarkable success in relating perceptual decision-making and the spiking activity in the macaque lateral intraparietal area (LIP). In this review, we focus on questions about the neural computations in LIP that are not answered by demonstrations of neural correlates of psychological processes. We highlight three areas of limitations in our current understanding of the precise neural computations that might underlie neural correlates of decisions: (1) empirical questions not yet answered by existing data; (2) implementation issues related to how neural circuits could actually implement the mechanisms suggested by both extracellular neurophysiology and psychophysics; and (3) ecological constraints related to the use of well-controlled laboratory tasks and whether they provide an accurate window on sensorimotor computation. These issues motivate the adoption of a more general “encoding-decoding framework” that will be fruitful for more detailed contemplation of how neural computations in LIP relate to the formation of perceptual decisions. PMID:23087623

  19. An Architecture for Cross-Cloud System Management

    NASA Astrophysics Data System (ADS)

    Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad

    The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.

  20. Empirical mode decomposition for analyzing acoustical signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    The present invention discloses a computer implemented signal analysis method through the Hilbert-Huang Transformation (HHT) for analyzing acoustical signals, which are assumed to be nonlinear and nonstationary. The Empirical Mode Decomposition (EMD) and the Hilbert Spectral Analysis (HSA) are used to obtain the HHT. Essentially, the acoustical signal will be decomposed into the Intrinsic Mode Function Components (IMFs). Once the invention decomposes the acoustic signal into its constituent components, all operations such as analyzing, identifying, and removing unwanted signals can be performed on these components. Upon transforming the IMFs into the Hilbert spectrum, the acoustical signal may be compared with other acoustical signals.

  1. Reproducible Computing: a new Technology for Statistics Education and Educational Research

    NASA Astrophysics Data System (ADS)

    Wessa, Patrick

    2009-05-01

    This paper explains how the R Framework (http://www.wessa.net) and a newly developed Compendium Platform (http://www.freestatistics.org) allow us to create, use, and maintain documents that contain empirical research results which can be recomputed and reused in derived work. It is illustrated that this technological innovation can be used to create educational applications that can be shown to support effective learning of statistics and associated analytical skills. It is explained how a Compendium can be created by anyone, without the need to understand the technicalities of scientific word processing (LaTeX) or statistical computing (R code). The proposed Reproducible Computing system allows educational researchers to objectively measure key aspects of the actual learning process based on individual and constructivist activities such as: peer review, collaboration in research, computational experimentation, etc. The system was implemented and tested in three statistics courses in which Compendia were used to create an interactive e-learning environment that simulated the real-world process of empirical scientific research.

  2. Research-Based Implementation of Peer Instruction: A Literature Review

    PubMed Central

    Vickrey, Trisha; Rosploch, Kaitlyn; Rahmanian, Reihaneh; Pilarz, Matthew; Stains, Marilyne

    2015-01-01

    Current instructional reforms in undergraduate science, technology, engineering, and mathematics (STEM) courses have focused on enhancing adoption of evidence-based instructional practices among STEM faculty members. These practices have been empirically demonstrated to enhance student learning and attitudes. However, research indicates that instructors often adapt rather than adopt practices, unknowingly compromising their effectiveness. Thus, there is a need to raise awareness of the research-based implementation of these practices, develop fidelity of implementation protocols to understand adaptations being made, and ultimately characterize the true impact of reform efforts based on these practices. Peer instruction (PI) is an example of an evidence-based instructional practice that consists of asking students conceptual questions during class time and collecting their answers via clickers or response cards. Extensive research has been conducted by physics and biology education researchers to evaluate the effectiveness of this practice and to better understand the intricacies of its implementation. PI has also been investigated in other disciplines, such as chemistry and computer science. This article reviews and summarizes these various bodies of research and provides instructors and researchers with a research-based model for the effective implementation of PI. Limitations of current studies and recommendations for future empirical inquiries are also provided. PMID:25713095

  3. Poisson-process generalization for the trading waiting-time distribution in a double-auction mechanism

    NASA Astrophysics Data System (ADS)

    Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico

    2005-05-01

    In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model this empirical evidence. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and the quality of fits is evaluated by suitable statistical tests, i.e., comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data and implementation in the artificial stock market can reproduce the trading activity in a realistic way.
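
    A sketch of the renewal-process ingredient described here, sampling waiting times from a mixture of exponentials (a single rate recovers the Poisson benchmark); rates and mixture weights are illustrative, not the fitted values from the paper:

    ```python
    import numpy as np

    def sample_waiting_times(n, rates=(0.5, 5.0), mix=(0.3, 0.7), seed=0):
        """Draw inter-order waiting times from a mixture of exponential distributions."""
        rng = np.random.default_rng(seed)
        component = rng.choice(len(rates), size=n, p=mix)    # which exponential regime fires
        return rng.exponential(scale=1.0 / np.asarray(rates)[component])

    waits = sample_waiting_times(100_000)
    # For a pure Poisson process mean == std; the mixture's heavier tail breaks that equality.
    print(waits.mean(), waits.std())
    ```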

  4. Role of Statistical Random-Effects Linear Models in Personalized Medicine

    PubMed Central

    Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose

    2012-01-01

    Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; measure of the clinical importance of clinical, demographic, environmental or genetic covariates; study of drug-drug interactions in clinical settings; the implementation of computational tools for web-site-based evidence farming; design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization. PMID:23467392

  5. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  6. Using Grid Cells for Navigation

    PubMed Central

    Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil

    2015-01-01

    Summary Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this “vector navigation” relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. PMID:26247860
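
    The Fourier-shift-theorem solution can be illustrated with a phase-correlation sketch that recovers the displacement between two periodic firing-rate maps; this is the algorithmic idea only, not the paper's neural-network implementations:

    ```python
    import numpy as np

    def displacement_from_rate_maps(rate_start, rate_goal):
        """Estimate the 2-D shift between two periodic rate maps via the Fourier shift theorem.

        A spatial translation appears as a phase ramp in the Fourier domain, so the peak of the
        phase correlation gives the start-to-goal displacement (up to the map's period).
        """
        f_start = np.fft.fft2(rate_start)
        f_goal = np.fft.fft2(rate_goal)
        cross_power = f_goal * np.conj(f_start)
        cross_power /= np.abs(cross_power) + 1e-12        # keep phase information only
        correlation = np.fft.ifft2(cross_power).real
        dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
        ny, nx = correlation.shape
        dy = dy - ny if dy > ny // 2 else dy              # wrap into a signed range
        dx = dx - nx if dx > nx // 2 else dx
        return dx, dy
    ```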

  7. Implementation Issues of Virtual Desktop Infrastructure and Its Case Study for a Physician's Round at Seoul National University Bundang Hospital.

    PubMed

    Yoo, Sooyoung; Kim, Seok; Kim, Taegi; Kim, Jon Soo; Baek, Rong-Min; Suh, Chang Suk; Chung, Chin Youb; Hwang, Hee

    2012-12-01

    The cloud computing-based virtual desktop infrastructure (VDI) allows access to computing environments with no limitations in terms of time or place such that it can permit the rapid establishment of a mobile hospital environment. The objective of this study was to investigate the empirical issues to be considered when establishing a virtual mobile environment using VDI technology in a hospital setting and to examine the utility of the technology with an Apple iPad during a physician's rounds as a case study. Empirical implementation issues were derived from a 910-bed tertiary national university hospital that recently launched a VDI system. During the physicians' rounds, we surveyed patient satisfaction levels with the VDI-based mobile consultation service with the iPad and the relationship between these levels of satisfaction and hospital revisits, hospital recommendations, and the hospital brand image. Thirty-five inpatients (including their next-of-kin) and seven physicians participated in the survey. Implementation issues pertaining to the VDI system arose with regard to the high-availability system architecture, wireless network infrastructure, and screen resolution of the system. Other issues were related to privacy and security, mobile device management, and user education. When the system was used in rounds, patients and their next-of-kin expressed high satisfaction levels, and a positive relationship was noted as regards patients' decisions to revisit the hospital and whether the use of the VDI system improved the brand image of the hospital. Mobile hospital environments have the potential to benefit both physicians and patients. The issues related to the implementation of VDI system discussed here should be examined in advance for its successful adoption and implementation.

  8. Implementation Issues of Virtual Desktop Infrastructure and Its Case Study for a Physician's Round at Seoul National University Bundang Hospital

    PubMed Central

    Yoo, Sooyoung; Kim, Seok; Kim, Taegi; Kim, Jon Soo; Baek, Rong-Min; Suh, Chang Suk; Chung, Chin Youb

    2012-01-01

    Objectives The cloud computing-based virtual desktop infrastructure (VDI) allows access to computing environments with no limitations in terms of time or place such that it can permit the rapid establishment of a mobile hospital environment. The objective of this study was to investigate the empirical issues to be considered when establishing a virtual mobile environment using VDI technology in a hospital setting and to examine the utility of the technology with an Apple iPad during a physician's rounds as a case study. Methods Empirical implementation issues were derived from a 910-bed tertiary national university hospital that recently launched a VDI system. During the physicians' rounds, we surveyed patient satisfaction levels with the VDI-based mobile consultation service with the iPad and the relationship between these levels of satisfaction and hospital revisits, hospital recommendations, and the hospital brand image. Thirty-five inpatients (including their next-of-kin) and seven physicians participated in the survey. Results Implementation issues pertaining to the VDI system arose with regard to the high-availability system architecture, wireless network infrastructure, and screen resolution of the system. Other issues were related to privacy and security, mobile device management, and user education. When the system was used in rounds, patients and their next-of-kin expressed high satisfaction levels, and a positive relationship was noted as regards patients' decisions to revisit the hospital and whether the use of the VDI system improved the brand image of the hospital. Conclusions Mobile hospital environments have the potential to benefit both physicians and patients. The issues related to the implementation of VDI system discussed here should be examined in advance for its successful adoption and implementation. PMID:23346476

  9. Computational Grounded Cognition: a new alliance between grounded cognition and computational modeling

    PubMed Central

    Pezzulo, Giovanni; Barsalou, Lawrence W.; Cangelosi, Angelo; Fischer, Martin H.; McRae, Ken; Spivey, Michael J.

    2013-01-01

    Grounded theories assume that there is no central module for cognition. According to this view, all cognitive phenomena, including those considered the province of amodal cognition such as reasoning, numeric, and language processing, are ultimately grounded in (and emerge from) a variety of bodily, affective, perceptual, and motor processes. The development and expression of cognition is constrained by the embodiment of cognitive agents and various contextual factors (physical and social) in which they are immersed. The grounded framework has received numerous empirical confirmations. Still, there are very few explicit computational models that implement grounding in sensory, motor and affective processes as intrinsic to cognition, and demonstrate that grounded theories can mechanistically implement higher cognitive abilities. We propose a new alliance between grounded cognition and computational modeling toward a novel multidisciplinary enterprise: Computational Grounded Cognition. We clarify the defining features of this novel approach and emphasize the importance of using the methodology of Cognitive Robotics, which permits simultaneous consideration of multiple aspects of grounding, embodiment, and situatedness, showing how they constrain the development and expression of cognition. PMID:23346065

  10. Where to look? Automating attending behaviors of virtual human characters

    NASA Technical Reports Server (NTRS)

    Chopra Khullar, S.; Badler, N. I.

    2001-01-01

    This research proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. Such behaviors directly control eye and head motions, and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations known from psychology, human factors and computer vision. Deliberate behaviors, the analogs of scanpaths in visual psychology, compete with involuntary attention capture and lapses into idling or free viewing. Insights provided by implementing this framework are: a defined set of parameters that impact the observable effects of attention, a defined vocabulary of looking behaviors for certain motor and cognitive activity, a defined hierarchy of three levels of eye behavior (endogenous, exogenous and idling) and a proposed method of how these types interact.

  11. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.

  12. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  13. The Role of Context Free Collaboration Design Patterns in Learning Design within LAMS: Lessons Learned from an Empirical Study

    ERIC Educational Resources Information Center

    Kordaki, Maria

    2011-01-01

    This study presents an experiment aimed at the design of short learning courses in the context of LAMS, using a number of specific context-free collaboration design patterns implemented within LAMS. In fact, 25 Prospective Computer Engineers (PCEs) participated in this experiment. The analysis of the data shows that PCEs fully used these context…

  14. Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50 ).

    PubMed

    Bag, Arijit; Ghorai, Pradip Kr

    2016-05-01

    To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on different Quantitative Structure Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this will be computationally very expensive as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time we report an ab initio method to compute IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. By using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of hydrophobicity, electric dipole moment (μ) and reactivity descriptor (ω) of an inhibitor. We implement this theory to compute IC50 of 15 HIV-1 capsid inhibitors and compare them with experimental results and other available QSAR-based empirical results. Values calculated using our method are in very good agreement with the experimental values compared to those calculated using other methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
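
    For reference, the Cheng-Prusoff relation mentioned above, which is what makes the direct route expensive (it needs the binding constant, and hence the binding free energy); the descriptor-based expression derived in the paper is not reproduced here:

    ```python
    def ic50_cheng_prusoff(k_i, substrate_conc, k_m):
        """Cheng-Prusoff relation for a competitive inhibitor: IC50 = Ki * (1 + [S]/Km)."""
        return k_i * (1.0 + substrate_conc / k_m)

    # Example: Ki = 50 nM assayed at [S] = Km gives IC50 = 100 nM.
    print(ic50_cheng_prusoff(k_i=50e-9, substrate_conc=1e-6, k_m=1e-6))
    ```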

  15. Automating approximate Bayesian computation by local linear regression.

    PubMed

    Thornton, Kevin R

    2009-07-07

    In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC based on using a linear regression to approximate the posterior distribution of the parameters, conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone, and fully documented. 2. The program will automatically process multiple data sets, and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results from any simulation. 6. The code is open-source, and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local-linear regression.
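
    The local linear-regression adjustment that ABCreg automates can be sketched in a few lines (Beaumont-style ABC); this illustrates the method, not ABCreg's command-line interface or transformation options:

    ```python
    import numpy as np

    def abc_local_linear(params, summaries, observed, accept_frac=0.1):
        """Accept the simulations whose summaries are closest to the observed data,
        then regression-adjust the accepted parameters.

        params: (n_sims, n_params) simulated parameters; summaries: (n_sims, n_stats);
        observed: (n_stats,) summary statistics of the real data.
        """
        params = np.asarray(params, float)
        summaries = np.asarray(summaries, float)
        scaled = (summaries - observed) / summaries.std(axis=0)       # standardized distances
        dist = np.linalg.norm(scaled, axis=1)
        keep = np.argsort(dist)[: max(1, int(accept_frac * len(dist)))]
        X = np.column_stack([np.ones(len(keep)), summaries[keep] - observed])
        beta, *_ = np.linalg.lstsq(X, params[keep], rcond=None)       # local linear regression
        adjusted = params[keep] - (summaries[keep] - observed) @ beta[1:]
        return adjusted                                                # approximate posterior sample
    ```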

  16. One Small Step for Manuals: Computer-Assisted Training in Twelve-Step Facilitation*

    PubMed Central

    Sholomskas, Diane E.; Carroll, Kathleen M.

    2008-01-01

    Objective The burgeoning number of empirically validated therapies has not been met with systematic evaluation of practical, inexpensive means of teaching large numbers of clinicians to use these treatments effectively. An interactive, computer-assisted training program that sought to impart skills associated with the Project MATCH (Matching Alcoholism Treatments to Client Heterogeneity) Twelve-Step Facilitation (TSF) manual was developed to address this need. Method Twenty-five community-based substance use-treatment clinicians were randomized to one of two training conditions: (1) access to the computer-assisted training program plus the TSF manual or (2) access to the manual only. The primary outcome measure was change from pre- to posttraining in the clinicians' ability to demonstrate key TSF skills. Results The data suggested that the clinicians' ability to implement TSF, as assessed by independent ratings of adherence and skill for the key TSF interventions, was significantly higher after training for those who had access to the computerized training condition than those who were assigned to the manual-only condition. Those assigned to the computer-assisted training condition also demonstrated greater gains in a knowledge test assessing familiarity with concepts presented in the TSF manual. Conclusions Computer-based training may be a feasible and effective means of training larger numbers of clinicians in empirically supported, manual-guided therapies. PMID:17061013

  17. One small step for manuals: Computer-assisted training in twelve-step facilitation.

    PubMed

    Sholomskas, Diane E; Carroll, Kathleen M

    2006-11-01

    The burgeoning number of empirically validated therapies has not been met with systematic evaluation of practical, inexpensive means of teaching large numbers of clinicians to use these treatments effectively. An interactive, computer-assisted training program that sought to impart skills associated with the Project MATCH (Matching Alcoholism Treatments to Client Heterogeneity) Twelve-Step Facilitation (TSF) manual was developed to address this need. Twenty-five community-based substance use-treatment clinicians were randomized to one of two training conditions: (1) access to the computer-assisted training program plus the TSF manual or (2) access to the manual only. The primary outcome measure was change from pre- to posttraining in the clinicians' ability to demonstrate key TSF skills. The data suggested that the clinicians' ability to implement TSF, as assessed by independent ratings of adherence and skill for the key TSF interventions, was significantly higher after training for those who had access to the computerized training condition than those who were assigned to the manual-only condition. Those assigned to the computer-assisted training condition also demonstrated greater gains in a knowledge test assessing familiarity with concepts presented in the TSF manual. Computer-based training may be a feasible and effective means of training larger numbers of clinicians in empirically supported, manual-guided therapies.

  18. Tests of Fit for Asymmetric Laplace Distributions with Applications on Financial Data

    NASA Astrophysics Data System (ADS)

    Fragiadakis, Kostas; Meintanis, Simos G.

    2008-11-01

    New goodness-of-fit tests for the family of asymmetric Laplace distributions are constructed. The proposed tests are based on a weighted integral incorporating the empirical characteristic function of suitably standardized data, and can be written in a closed form appropriate for computer implementation. Monte Carlo results show that the new procedures are competitive with classical goodness-of-fit methods. Applications with financial data are also included.
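
    The kind of statistic described, a weighted integral of the squared distance between the empirical characteristic function of (already standardized) data and the model characteristic function, discretized on a grid; the weight function and grid below are illustrative choices, not the paper's:

    ```python
    import numpy as np

    def ecf_gof_statistic(x, cf_model, t_grid=None, weight=lambda t: np.exp(-t**2)):
        """T_n = n * integral |phi_n(t) - phi_0(t)|^2 w(t) dt, discretized on a uniform grid.

        cf_model(t) must return the fitted model's characteristic function at the points t.
        """
        x = np.asarray(x, float)
        t = np.linspace(-10, 10, 401) if t_grid is None else t_grid
        ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)     # empirical characteristic function
        integrand = np.abs(ecf - cf_model(t)) ** 2 * weight(t)
        return len(x) * np.sum(integrand) * (t[1] - t[0])  # Riemann approximation of the integral
    ```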

  19. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2004-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
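
    A sketch of the third and fourth steps, rebuilding the signal from a chosen subset of IMFs and then fitting a curve to the filtered result; the polynomial fit and the choice of retained IMFs are illustrative:

    ```python
    import numpy as np

    def filter_and_fit(imfs, keep, degree=3):
        """Steps three and four: partial reconstruction from selected IMFs, then curve fitting.

        imfs: list of equal-length IMF arrays; keep: indices of the IMFs to retain
        (e.g., drop the first, noisiest modes).
        """
        filtered = np.sum([imfs[i] for i in keep], axis=0)     # filter by partial reconstruction
        t = np.arange(len(filtered))
        coeffs = np.polyfit(t, filtered, degree)               # fit a curve to the filtered signal
        fitted = np.polyval(coeffs, t)
        return filtered, fitted
    ```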

  20. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2002-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.

  1. Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment

    NASA Technical Reports Server (NTRS)

    Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    A genetic algorithm procedure is developed and implemented for fitting parameters for many-body inter-atomic force field functions for simulating nanotechnology atomistic applications using portable Java on cycle-scavenged heterogeneous workstations. Given a physics-based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces as compared to even the published S-W potential.
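
    A generic sketch of the evolutionary loop described, evolving force-field parameter sets against a user-supplied fitness function; this is not the JavaGenes code, and the Stillinger-Weber functional form and cycle-scavenging machinery are omitted:

    ```python
    import numpy as np

    def fit_parameters_ga(fitness, n_params, pop_size=100, generations=200,
                          bounds=(-5.0, 5.0), mutation=0.1, seed=0):
        """Genetic algorithm for parameter fitting.

        fitness(params) should return, e.g., the negative RMS error between candidate and
        reference (quantum mechanical or published) energies and forces.
        """
        rng = np.random.default_rng(seed)
        pop = rng.uniform(*bounds, size=(pop_size, n_params))
        for _ in range(generations):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]          # keep the fitter half
            cut = rng.integers(1, n_params, size=pop_size // 2)         # one-point crossover positions
            mates = parents[rng.permutation(len(parents))]
            children = np.where(np.arange(n_params) < cut[:, None], parents, mates)
            children += rng.normal(0.0, mutation, children.shape)       # Gaussian mutation
            pop = np.vstack([parents, children])
        return pop[np.argmax([fitness(p) for p in pop])]                # best parameter set found
    ```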

  2. Neighbour lists for smoothed particle hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Rezavand, Massoud; Rauch, Wolfgang

    2018-04-01

    The efficient iteration of neighbouring particles is a performance-critical aspect of any high performance smoothed particle hydrodynamics (SPH) solver. SPH solvers that implement a constant smoothing length generally divide the simulation domain into a uniform grid to reduce the computational complexity of the neighbour search. Based on this method, particle neighbours are either stored per grid cell or for each individual particle, denoted as a Verlet list. While the latter approach has significantly higher memory requirements, it has the potential for a significant computational speedup. A theoretical comparison is performed to estimate the potential improvements of the method based on unknown hardware dependent factors. Subsequently, the computational performance of both approaches is empirically evaluated on graphics processing units. It is shown that the speedup differs significantly for different hardware, dimensionality and floating point precision. The Verlet list algorithm is implemented as an alternative to the cell linked list approach in the open-source SPH solver DualSPHysics and provided as a standalone software package.
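
    A CPU sketch of building per-particle (Verlet) neighbour lists on top of a uniform cell grid, the two storage strategies compared in the paper; the GPU kernels in DualSPHysics are not reproduced here:

    ```python
    import numpy as np
    from collections import defaultdict

    def build_verlet_list(positions, h):
        """Per-particle (Verlet) neighbour lists built via a uniform cell grid of cell size h.

        positions: (n, 2) particle coordinates; neighbours are particles closer than h.
        """
        cells = defaultdict(list)
        keys = np.floor(positions / h).astype(int)
        for i, key in enumerate(map(tuple, keys)):
            cells[key].append(i)                                # cell-linked-list stage
        neighbours = [[] for _ in range(len(positions))]
        for i, (cx, cy) in enumerate(map(tuple, keys)):
            for dx in (-1, 0, 1):                               # scan the 3x3 cell stencil
                for dy in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy), []):
                        if j != i and np.sum((positions[i] - positions[j])**2) < h * h:
                            neighbours[i].append(j)             # store per-particle (Verlet) list
        return neighbours
    ```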

  3. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.

  4. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine

    PubMed Central

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2016-01-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473

  5. Evaluation of an eye-pointer interaction device for human-computer interaction.

    PubMed

    Cáceres, Enrique; Carrasco, Miguel; Ríos, Sebastián

    2018-03-01

    Advances in eye-tracking technology have led to better human-computer interaction, and involve controlling a computer without any kind of physical contact. This research describes the transformation of a commercial eye-tracker for use as an alternative peripheral device in human-computer interactions, implementing a pointer that only needs the eye movements of a user facing a computer screen, thus replacing the need to control the software by hand movements. The experiment was performed with 30 test individuals who used the prototype with a set of educational videogames. The results show that, although most of the test subjects would prefer a mouse to control the pointer, the prototype tested has an empirical precision similar to that of the mouse, either when trying to control its movements or when attempting to click on a point of the screen.

  6. The Mechanics of Embodiment: A Dialog on Embodiment and Computational Modeling

    PubMed Central

    Pezzulo, Giovanni; Barsalou, Lawrence W.; Cangelosi, Angelo; Fischer, Martin H.; McRae, Ken; Spivey, Michael J.

    2011-01-01

    Embodied theories are increasingly challenging traditional views of cognition by arguing that conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamoring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensorimotor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialog between two fictional characters: Ernest, the “experimenter,” and Mary, the “computational modeler.” The dialog consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modeling. PMID:21713184

  7. Using Grid Cells for Navigation.

    PubMed

    Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil

    2015-08-05

    Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this "vector navigation" relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
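
    The algorithmic idea can be summarized with the textbook Fourier shift theorem, shown here as a one-dimensional illustration rather than the authors' exact derivation: a displacement of a spatially periodic firing pattern appears as a phase shift of each Fourier component, so comparing the phases of the population codes at the start and goal locations, scale by scale, recovers the displacement vector.

      \[
      \mathcal{F}\{f(x-\Delta)\}(k) \;=\; e^{-ik\Delta}\,\hat f(k)
      \qquad\Longrightarrow\qquad
      \Delta \;=\; -\frac{1}{k}\,\arg\!\left(\frac{\mathcal{F}\{f(x-\Delta)\}(k)}{\hat f(k)}\right)
      \]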

  8. Effects of people-centred factors on enterprise resource planning implementation project success: empirical evidence from Sri Lanka

    NASA Astrophysics Data System (ADS)

    Wickramasinghe, Vathsala; Gunawardena, Vathsala

    2010-08-01

    Extant literature suggests people-centred factors as one of the major areas influencing enterprise resource planning (ERP) implementation project success. Yet, to date, few empirical studies have attempted to validate the link between people-centred factors and ERP implementation project success. The purpose of this study is to empirically identify people-centred factors that are critical to ERP implementation projects in Sri Lanka. The study develops and empirically validates a framework for people-centred factors that influence the success of ERP implementation projects. A survey research methodology was used to collect data from 74 ERP implementation projects in Sri Lanka. The people-centred factors of 'project team competence', 'rewards' and 'communication and change' were found to significantly predict ERP implementation project success.

  9. Implementation of model predictive control for resistive wall mode stabilization on EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2015-10-01

    A model predictive control (MPC) method for stabilization of the resistive wall mode (RWM) in the EXTRAP T2R reversed-field pinch is presented. The system identification technique is used to obtain a linearized empirical model of EXTRAP T2R. MPC employs the model for prediction and computes optimal control inputs that satisfy performance criterion. The use of a linearized form of the model allows for compact formulation of MPC, implemented on a millisecond timescale, that can be used for real-time control. The design allows the user to arbitrarily suppress any selected Fourier mode. The experimental results from EXTRAP T2R show that the designed and implemented MPC successfully stabilizes the RWM.
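
    As a generic illustration of the control structure described (not the actual EXTRAP T2R controller, whose identified model, constraints and Fourier-mode selection are specific to the device), an unconstrained linear MPC over a horizon N can be written in closed form by stacking the model predictions and applying only the first optimal input at each step:

      import numpy as np

      def mpc_gain(A, B, Q, R, N):
          # Stack the N-step prediction x = F x0 + G u and minimise
          # sum_k (x_k' Q x_k + u_k' R u_k); the closed-form optimum is u = -K x0.
          n, m = B.shape
          F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
          G = np.zeros((N * n, N * m))
          for i in range(N):
              for j in range(i + 1):
                  G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
          Qb = np.kron(np.eye(N), Q)
          Rb = np.kron(np.eye(N), R)
          K = np.linalg.solve(G.T @ Qb @ G + Rb, G.T @ Qb @ F)
          return K[:m, :]                      # receding horizon: keep only the first input

      # toy unstable 2-state plant standing in for an identified mode model (illustrative numbers)
      A = np.array([[1.05, 0.1], [0.0, 0.98]])
      B = np.array([[0.0], [0.1]])
      K0 = mpc_gain(A, B, Q=np.eye(2), R=0.01 * np.eye(1), N=20)
      x = np.array([1.0, 0.0])
      for _ in range(50):
          u = -K0 @ x                          # feedback recomputed every sample
          x = A @ x + B @ u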

  10. Heterogeneity and Self-Organization of Complex Systems Through an Application to Financial Market with Multiagent Systems

    NASA Astrophysics Data System (ADS)

    Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille

    2017-12-01

    Multiagent systems (MAS) provide a useful tool for exploring the complex dynamics and behavior of financial markets and now MAS approach has been widely implemented and documented in the empirical literature. This paper introduces the implementation of an innovative multi-scale mathematical model for a computational agent-based financial market. The paper develops a method to quantify the degree of self-organization which emerges in the system and shows that the capacity of self-organization is maximized when the agent behaviors are heterogeneous. Numerical results are presented and analyzed, showing how the global market behavior emerges from specific individual behavior interactions.

  11. Lost in translation? Moving contingency management and cognitive behavioral therapy into clinical practice.

    PubMed

    Carroll, Kathleen M

    2014-10-01

    In the treatment of addictions, the gap between the availability of evidence-based therapies and their limited implementation in practice has not yet been bridged. Two empirically validated behavioral therapies, contingency management (CM) and cognitive behavioral therapy (CBT), exemplify this challenge. Both have a relatively strong level of empirical support but each has weak and uneven adoption in clinical practice. This review highlights examples of how barriers to their implementation in practice have been addressed systematically, using the Stage Model of Behavioral Therapies Development as an organizing framework. For CM, barriers such as cost and ideology have been addressed through the development of lower-cost and other adaptations to make it more community friendly. For CBT, barriers such as relative complexity, lack of trained providers, and need for supervision have been addressed via conversion to standardized computer-assisted versions that can serve as clinician extenders. Although these and other modifications have rendered both interventions more disseminable, diffusion of innovation remains a complex, often unpredictable process. The existing specialty addiction-treatment system may require significant reforms to fully implement CBT and CM, particularly greater focus on definable treatment goals and performance-based outcomes. © 2014 New York Academy of Sciences.

  12. From empirical data to time-inhomogeneous continuous Markov processes.

    PubMed

    Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G

    2016-03-01

    We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion bridging the rigorous mathematical results on the existence of generators and their computational implementation is also presented. Our detection algorithm proves effective for more than 60% of tested matrices, typically 80% to 90%, and for those an estimate of the (non-homogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, a discussion of possible applications of our framework to problems in different fields is briefly addressed.
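
    The classical time-homogeneous version of the test, which the paper generalizes, can be sketched in a few lines: take the principal matrix logarithm of the empirical transition matrix and check the generator conditions (non-negative off-diagonal entries, zero row sums). The names below are illustrative, not the authors' code.

      import numpy as np
      from scipy.linalg import logm, expm

      def candidate_generator(P, tol=1e-8):
          # Test whether a stochastic matrix P could arise as P = expm(Q) for a valid
          # generator Q (off-diagonal entries >= 0, rows summing to zero).
          Q = logm(P).real                      # principal matrix logarithm
          off = Q - np.diag(np.diag(Q))
          valid = bool(np.all(off >= -tol)) and np.allclose(Q.sum(axis=1), 0.0, atol=1e-6)
          return (Q if valid else None), valid

      P = np.array([[0.90, 0.08, 0.02],
                    [0.05, 0.90, 0.05],
                    [0.02, 0.08, 0.90]])
      Q, ok = candidate_generator(P)
      if ok:
          print("generator found; reconstruction error:", np.abs(expm(Q) - P).max())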

  13. Empirical modeling of dynamic behaviors of pneumatic artificial muscle actuators.

    PubMed

    Wickramatunge, Kanchana Crishan; Leephakpreeda, Thananchai

    2013-11-01

    Pneumatic Artificial Muscle (PAM) actuators yield muscle-like mechanical actuation with a high force-to-weight ratio, a soft and flexible structure, and adaptable compliance for rehabilitation and prosthetic appliances for the disabled as well as for humanoid robots or machines. The present study develops empirical models of PAM actuators, that is, a PAM coupled with pneumatic control valves, in order to describe their dynamic behaviors for practical control design and usage. Empirical modeling is an efficient approach to computer-based modeling based on observations of real behaviors. Different characteristics of the dynamic behaviors of each PAM actuator are due not only to the structures of the PAM actuators themselves, but also to the variations of their material properties in manufacturing processes. To overcome these difficulties, the proposed empirical models are experimentally derived from the real physical behaviors of the PAM actuators as implemented. In case studies, the simulated results, which are in good agreement with experimental results, show that the proposed methodology can be applied to describe the dynamic behaviors of real PAM actuators. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  14. The impact of fillers on lineup performance.

    PubMed

    Wetmore, Stacy A; McAdoo, Ryan M; Gronlund, Scott D; Neuschatz, Jeffrey S

    2017-01-01

    Filler siphoning theory posits that the presence of fillers (known innocents) in a lineup protects an innocent suspect from being chosen by siphoning choices away from that innocent suspect. This mechanism has been proposed as an explanation for why simultaneous lineups (viewing all lineup members at once) induce better performance than showups (one-person identification procedures). We implemented filler siphoning in a computational model (WITNESS, Clark, Applied Cognitive Psychology 17:629-654, 2003), and explored the impact of the number of fillers (lineup size) and filler quality on simultaneous and sequential lineups (viewing lineup members in sequence), and compared both to showups. In limited situations, we found that filler siphoning can produce a simultaneous lineup performance advantage, but one that is insufficient in magnitude to explain empirical data. However, the magnitude of the empirical simultaneous lineup advantage can be approximated once criterial variability is added to the model. But this modification works by negatively impacting showups rather than promoting more filler siphoning. In sequential lineups, fillers were found to harm performance. Filler siphoning fails to clarify the relationship between simultaneous lineups and sequential lineups or showups. By incorporating constructs like filler siphoning and criterial variability into a computational model, and trying to approximate empirical data, we can sort through explanations of eyewitness decision-making, a prerequisite for policy recommendations.

  15. GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith

    2014-08-25

    Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to poor compute to communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.

  16. Efficient implementation of the many-body Reactive Bond Order (REBO) potential on GPU

    NASA Astrophysics Data System (ADS)

    Trędak, Przemysław; Rudnicki, Witold R.; Majewski, Jacek A.

    2016-09-01

    The second generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of the multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems. It is compared to a highly optimized CPU implementation (both single core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  17. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays.

    PubMed

    McLachlan, G J; Bean, R W; Jones, L Ben-Tovim

    2006-07-01

    An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or with more specific assumptions are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
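
    A minimal sketch of the proposed idea (one simple variant, with the null component fixed at the theoretical N(0,1) and hypothetical function names) fits the two-component normal mixture to gene-wise z-scores by EM and returns each gene's posterior probability of being null:

      import numpy as np
      from scipy.stats import norm

      def fit_null_mixture(z, iters=200):
          # EM for f(z) = pi0*N(0,1) + (1-pi0)*N(mu1, s1^2); returns per-gene posterior
          # null probabilities tau0_i = pi0*phi(z_i) / f(z_i).
          pi0, mu1, s1 = 0.9, np.mean(z), np.std(z) + 1.0
          for _ in range(iters):
              f0 = pi0 * norm.pdf(z, 0.0, 1.0)
              f1 = (1.0 - pi0) * norm.pdf(z, mu1, s1)
              tau0 = f0 / (f0 + f1)                 # E-step: posterior null probabilities
              pi0 = tau0.mean()                     # M-step
              w = 1.0 - tau0
              mu1 = np.sum(w * z) / np.sum(w)
              s1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w))
          return tau0, pi0

      rng = np.random.default_rng(1)
      z = np.concatenate([rng.normal(0, 1, 900), rng.normal(2.5, 1, 100)])  # 10% non-null
      tau0, pi0_hat = fit_null_mixture(z)
      n_flagged = np.sum(tau0 < 0.1)    # genes flagged as likely differentially expressed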

  18. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    PubMed

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  19. Performance management of high performance computing for medical image processing in Amazon Web Services

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  20. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services

    PubMed Central

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-01-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trędak, Przemysław, E-mail: przemyslaw.tredak@fuw.edu.pl; Rudnicki, Witold R.; Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, ul. Pawińskiego 5a, 02-106 Warsaw

    The second generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of the multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems. It is compared to a highly optimized CPU implementation (both single core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  2. Pressure ulcers: implementation of evidence-based nursing practice.

    PubMed

    Clarke, Heather F; Bradley, Chris; Whytock, Sandra; Handfield, Shannon; van der Wal, Rena; Gundry, Sharon

    2005-03-01

    A 2-year project was carried out to evaluate the use of multi-component, computer-assisted strategies for implementing clinical practice guidelines. This paper describes the implementation of the project and lessons learned. The evaluation and outcomes of implementing clinical practice guidelines to prevent and treat pressure ulcers will be reported in a separate paper. The prevalence and incidence rates of pressure ulcers, coupled with the cost of treatment, constitute a substantial burden for our health care system. It is estimated that treating a pressure ulcer can increase nursing time up to 50%, and that treatment costs per ulcer can range from US$10,000 to $86,000, with median costs of $27,000. Although evidence-based guidelines for prevention and optimum treatment of pressure ulcers have been developed, there is little empirical evidence about the effectiveness of implementation strategies. The study was conducted across the continuum of care (primary, secondary and tertiary) in a Canadian urban Health Region involving seven health care organizations (acute, home and extended care). Trained surveyors (Registered Nurses) determined the prevalence and incidence of pressure ulcers among patients in these organizations. The use of a computerized decision-support system assisted staff to select optimal, evidence-based care strategies, record information and analyse individual and aggregate data. Evaluation indicated an increase in knowledge relating to pressure ulcer prevention, treatment strategies, resources required, and the role of the interdisciplinary team. Lack of visible senior nurse leadership; time required to acquire computer skills and to implement new guidelines; and difficulties with the computer system were identified as barriers. There is a need for a comprehensive, supported and sustained approach to implementation of evidence-based practice for pressure ulcer prevention and treatment, greater understanding of organization-specific barriers, and mechanisms for addressing the barriers.

  3. Integrating Empirical-Modeling Approaches to Improve Understanding of Terrestrial Ecology Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarthy, Heather; Luo, Yiqi; Wullschleger, Stan D

    Recent decades have seen tremendous increases in the quantity of empirical ecological data collected by individual investigators, as well as through research networks such as FLUXNET (Baldocchi et al., 2001). At the same time, advances in computer technology have facilitated the development and implementation of large and complex land surface and ecological process models. Separately, each of these information streams provides useful, but imperfect information about ecosystems. To develop the best scientific understanding of ecological processes, and most accurately predict how ecosystems may cope with global change, integration of empirical and modeling approaches is necessary. However, true integration - in which models inform empirical research, which in turn informs models (Fig. 1) - is not yet common in ecological research (Luo et al., 2011). The goal of this workshop, sponsored by the Department of Energy, Office of Science, Biological and Environmental Research (BER) program, was to bring together members of the empirical and modeling communities to exchange ideas and discuss scientific practices for increasing empirical - model integration, and to explore infrastructure and/or virtual network needs for institutionalizing empirical - model integration (Yiqi Luo, University of Oklahoma, Norman, OK, USA). The workshop included presentations and small group discussions that covered topics ranging from model-assisted experimental design to data driven modeling (e.g. benchmarking and data assimilation) to infrastructure needs for empirical - model integration. Ultimately, three central questions emerged. How can models be used to inform experiments and observations? How can experimental and observational results be used to inform models? What are effective strategies to promote empirical - model integration?

  4. Structure Shapes Dynamics and Directionality in Diverse Brain Networks: Mathematical Principles and Empirical Confirmation in Three Species

    NASA Astrophysics Data System (ADS)

    Moon, Joon-Young; Kim, Junhyeok; Ko, Tae-Wook; Kim, Minkyung; Iturria-Medina, Yasser; Choi, Jee-Hyun; Lee, Joseph; Mashour, George A.; Lee, Uncheol

    2017-04-01

    Identifying how spatially distributed information becomes integrated in the brain is essential to understanding higher cognitive functions. Previous computational and empirical studies suggest a significant influence of brain network structure on brain network function. However, there have been few analytical approaches to explain the role of network structure in shaping regional activities and directionality patterns. In this study, analytical methods are applied to a coupled oscillator model implemented in inhomogeneous networks. We first derive a mathematical principle that explains the emergence of directionality from the underlying brain network structure. We then apply the analytical methods to the anatomical brain networks of human, macaque, and mouse, successfully predicting simulation and empirical electroencephalographic data. The results demonstrate that the global directionality patterns in resting state brain networks can be predicted solely by their unique network structures. This study forms a foundation for a more comprehensive understanding of how neural information is directed and integrated in complex brain networks.
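
    The model class in question can be illustrated with a bare-bones Kuramoto-type simulation of phase oscillators coupled through a weighted anatomical network; this is a generic sketch of such coupled-oscillator models, not the authors' parameterization:

      import numpy as np

      def simulate_kuramoto(W, omega, K=1.0, dt=0.01, steps=5000, seed=0):
          # Euler integration of dtheta_i/dt = omega_i + K * sum_j W_ij * sin(theta_j - theta_i)
          rng = np.random.default_rng(seed)
          theta = rng.uniform(0, 2 * np.pi, len(omega))
          for _ in range(steps):
              diff = theta[None, :] - theta[:, None]          # element [i, j] = theta_j - theta_i
              theta = theta + dt * (omega + K * np.sum(W * np.sin(diff), axis=1))
          return theta

      # hub-and-spoke toy network: node 0 is highly connected, the periphery is sparse
      n = 20
      W = np.zeros((n, n))
      W[0, 1:] = W[1:, 0] = 1.0
      omega = np.random.default_rng(1).normal(10.0, 1.0, n)   # natural frequencies (rad/s)
      theta = simulate_kuramoto(W, omega, K=0.5)
      r_sync = np.abs(np.mean(np.exp(1j * theta)))            # Kuramoto order parameter of the final state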

  5. Analytical techniques for the study of some parameters of multispectral scanner systems for remote sensing

    NASA Technical Reports Server (NTRS)

    Wiswell, E. R.; Cooper, G. R. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The concept of average mutual information in the received spectral random process about the spectral scene was developed. Techniques amenable to implementation on a digital computer were also developed to make the required average mutual information calculations. These techniques required identification of models for the spectral response process of scenes. Stochastic modeling techniques were adapted for use. These techniques were demonstrated on empirical data from wheat and vegetation scenes.

  6. Implementation of the RS232 communication trainer using computers and the ATMEGA microcontroller for interface engineering Courses

    NASA Astrophysics Data System (ADS)

    Amelia, Afritha; Julham; Viyata Sundawa, Bakti; Pardede, Morlan; Sutrisno, Wiwinta; Rusdi, Muhammad

    2017-09-01

    RS232 serial communication is the communication system between the computer and the microcontroller. This communication is studied in the Department of Electrical Engineering and the Department of Computer Engineering and Informatics at Politeknik Negeri Medan. Recently, a simulation application installed on the computer was used for the teaching and learning process. The drawback of this system is that it is not useful as a communication method between learner and trainer. Therefore, this study created a ten-step method, divided into seven stages and three major phases, namely the analysis of potential problems and data collection, trainer design, and empirical testing and revision. The trainer and module were then tested in order to get feedback from the learners. The questionnaire results showed a feedback score of 70.10% from the learners.

  7. Investigation of computational aeroacoustic tools for noise predictions of wind turbine aerofoils

    NASA Astrophysics Data System (ADS)

    Humpf, A.; Ferrer, E.; Munduate, X.

    2007-07-01

    In this work trailing edge noise levels of a research aerofoil have been computed and compared to aeroacoustic measurements using two approaches of different computational cost, yielding quantitative and qualitative results. On the one hand, the semi-empirical noise prediction tool NAFNoise [Moriarty P 2005 NAFNoise User's Guide. Golden, Colorado, July. http://wind.nrel.gov/designcodes/simulators/NAFNoise] was used to directly predict trailing edge noise, taking into consideration the nature of the experiments. On the other hand, aerodynamic and aeroacoustic calculations were performed with the full Navier-Stokes CFD code Fluent [Fluent Inc 2005 Fluent 6.2 Users Guide, Lebanon, NH, USA] on the basis of a steady RANS simulation. Aerodynamic characteristics were computed with the aid of various turbulence models, and the combined use of the implemented broadband noise source models was employed to isolate and determine the trailing edge noise level.

  8. Optimal nonlinear information processing capacity in delay-based reservoir computers

    NASA Astrophysics Data System (ADS)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2015-09-01

    Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay based reservoir computers that have been physically implemented using optical and electronic systems and have shown unprecedented data processing rates. Reservoir computing is well-known for the ease of the associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.
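
    A simplified discrete surrogate of such a time-delay reservoir (virtual nodes sampled along the delay line, a time-multiplexed input mask, and a ridge-regression readout) conveys the structure; it stands in for, and is not equivalent to, the optical and electronic implementations discussed:

      import numpy as np

      def delay_reservoir(u, n_virtual=50, eta=0.5, gamma=0.05, seed=0):
          # The delay line is sampled into n_virtual "virtual nodes" coupled in a ring,
          # each driven by the masked scalar input of the current time step.
          rng = np.random.default_rng(seed)
          mask = rng.uniform(-1, 1, n_virtual)
          states = np.zeros((len(u), n_virtual))
          x = np.zeros(n_virtual)
          for k, uk in enumerate(u):
              prev = np.roll(x, 1)                      # ring coupling along the delay line
              x = np.tanh(eta * prev + gamma * mask * uk)
              states[k] = x
          return states

      # linear readout trained by ridge regression on a toy one-step-ahead prediction task
      rng = np.random.default_rng(2)
      u = np.sin(0.2 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
      X = delay_reservoir(u[:-1])
      y = u[1:]
      lam = 1e-6
      W_out = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
      pred = X @ W_out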

  9. Optimal nonlinear information processing capacity in delay-based reservoir computers.

    PubMed

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2015-09-11

    Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay based reservoir computers that have been physically implemented using optical and electronic systems and have shown unprecedented data processing rates. Reservoir computing is well-known for the ease of the associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.

  10. Optimal nonlinear information processing capacity in delay-based reservoir computers

    PubMed Central

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2015-01-01

    Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay based reservoir computers that have been physically implemented using optical and electronic systems and have shown unprecedented data processing rates. Reservoir computing is well-known for the ease of the associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature. PMID:26358528

  11. Particle transport in the human respiratory tract: formulation of a nodal inverse distance weighted Eulerian-Lagrangian transport and implementation of the Wind-Kessel algorithm for an oral delivery.

    PubMed

    Kannan, Ravishekar; Guo, Peng; Przekwas, Andrzej

    2016-06-01

    This paper is the first in a series wherein efficient computational methods are developed and implemented to accurately quantify the transport, deposition, and clearance of microsized particles (range of interest: 2 to 10 µm) in the human respiratory tract. In particular, this paper (part I) deals with (i) development of a detailed 3D computational finite volume mesh comprising the NOPL (nasal, oral, pharyngeal and larynx), trachea and several airway generations; (ii) use of CFD Research Corporation's finite volume Computational Biology (CoBi) flow solver to obtain the flow physics for an oral inhalation simulation; (iii) implementation of a novel and accurate nodal inverse distance weighted Eulerian-Lagrangian formulation to obtain the deposition; and (iv) development of a Wind-Kessel boundary condition algorithm. This new Wind-Kessel boundary condition algorithm allows the 'escaped' particles to reenter the airway through the outlets, thereby to an extent accounting for the drawbacks of having a finite number of lung generations in the computational mesh. The deposition rates in the NOPL, trachea, and the first and second bifurcations were computed, and they were in reasonable accord with the Typical Path Length model. The quantitatively validated results indicate that these developments will be useful for (i) obtaining depositions in diseased lungs (because of asthma and COPD), for which there are no empirical models, and (ii) obtaining the secondary clearance (mucociliary clearance) of the deposited particles. Copyright © 2015 John Wiley & Sons, Ltd.
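
    The deposition step can be illustrated with a generic inverse-distance-weighted transfer of Lagrangian particle mass onto Eulerian mesh nodes; this is a stand-in sketch with hypothetical names, not the paper's exact nodal formulation:

      import numpy as np

      def deposit_idw(particle_pos, particle_mass, node_pos, radius, p=2.0):
          # Spread each Lagrangian particle's mass onto nearby Eulerian nodes using
          # inverse-distance weights w_j = 1/d_j^p, normalised over nodes within `radius`.
          deposition = np.zeros(len(node_pos))
          for xp, mp in zip(particle_pos, particle_mass):
              d = np.linalg.norm(node_pos - xp, axis=1)
              near = d < radius
              if not np.any(near):
                  continue
              w = 1.0 / np.maximum(d[near], 1e-12) ** p
              deposition[near] += mp * w / w.sum()
          return deposition

      rng = np.random.default_rng(0)
      nodes = rng.random((500, 3))          # surrogate mesh node coordinates
      parts = rng.random((200, 3))          # deposited particle positions
      mass = np.full(200, 1.0e-9)           # illustrative particle mass
      dep = deposit_idw(parts, mass, nodes, radius=0.1)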

  12. EON: software for long time simulations of atomic scale systems

    NASA Astrophysics Data System (ADS)

    Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme

    2014-07-01

    The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.

  13. Empirical Determination of Competence Areas to Computer Science Education

    ERIC Educational Resources Information Center

    Zendler, Andreas; Klaudt, Dieter; Seitz, Cornelia

    2014-01-01

    The authors discuss empirically determined competence areas to K-12 computer science education, emphasizing the cognitive level of competence. The results of a questionnaire with 120 professors of computer science serve as a database. By using multi-dimensional scaling and cluster analysis, four competence areas to computer science education…

  14. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    NASA Astrophysics Data System (ADS)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the most suited parallel programming model for a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
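
    The shared-memory flavour of the domain decomposition can be sketched with Python's multiprocessing module, splitting the output grid into row blocks and interpolating each block independently; inverse-distance weighting stands in here for the kriging system to keep the example short:

      import numpy as np
      from multiprocessing import Pool
      from functools import partial

      def interpolate_rows(row_block, pts, vals, xs, ys, p=2.0):
          # IDW value for every grid node in a block of rows (stand-in for kriging).
          out = np.empty((len(row_block), len(xs)))
          for r, iy in enumerate(row_block):
              for ix, x in enumerate(xs):
                  d2 = (pts[:, 0] - x) ** 2 + (pts[:, 1] - ys[iy]) ** 2
                  w = 1.0 / np.maximum(d2, 1e-12) ** (p / 2.0)
                  out[r, ix] = np.sum(w * vals) / np.sum(w)
          return out

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          pts = rng.random((5000, 2)) * 100.0                               # scattered ground points (x, y)
          vals = np.sin(pts[:, 0] / 10.0) + rng.normal(0, 0.05, len(pts))   # elevations
          xs = np.linspace(0, 100, 100)
          ys = np.linspace(0, 100, 100)
          blocks = np.array_split(np.arange(len(ys)), 8)                    # decompose the DEM into row blocks
          work = partial(interpolate_rows, pts=pts, vals=vals, xs=xs, ys=ys)
          with Pool(8) as pool:
              dem = np.vstack(pool.map(work, blocks))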

  15. Parallelizing serial code for a distributed processing environment with an application to high frequency electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Work, Paul R.

    1991-12-01

    This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.

  16. Unstructured grids on SIMD torus machines

    NASA Technical Reports Server (NTRS)

    Bjorstad, Petter E.; Schreiber, Robert

    1994-01-01

    Unstructured grids lead to unstructured communication on distributed memory parallel computers, a problem that has been considered difficult. Here, we consider adaptive, offline communication routing for a SIMD processor grid. Our approach is empirical. We use large data sets drawn from supercomputing applications instead of an analytic model of communication load. The chief contribution of this paper is an experimental demonstration of the effectiveness of certain routing heuristics. Our routing algorithm is adaptive, nonminimal, and is generally designed to exploit locality. We have a parallel implementation of the router, and we report on its performance.

  17. Implementing Evidence-Based Practice: A Review of the Empirical Research Literature

    ERIC Educational Resources Information Center

    Gray, Mel; Joy, Elyssa; Plath, Debbie; Webb, Stephen A.

    2013-01-01

    The article reports on the findings of a review of empirical studies examining the implementation of evidence-based practice (EBP) in the human services. Eleven studies were located that defined EBP as a research-informed, clinical decision-making process and identified barriers and facilitators to EBP implementation. A thematic analysis of the…

  18. GPR random noise reduction using BPD and EMD

    NASA Astrophysics Data System (ADS)

    Ostoori, Roya; Goudarzi, Alireza; Oskooi, Behrooz

    2018-04-01

    Ground-penetrating radar (GPR) exploration is a new high-frequency technology that explores near-surface objects and structures accurately. The high-frequency antenna of the GPR system makes it a high-resolution method compared to other geophysical methods. The frequency range of recorded GPR data is so wide that recording random noise during acquisition is inevitable. This kind of noise comes from unknown sources and its correlation to adjacent traces is nearly zero. This characteristic of random noise, along with the higher accuracy of the GPR system, makes denoising very important for interpretable results. The main objective of this paper is to reduce GPR random noise based on basis pursuit denoising using empirical mode decomposition. Our results showed that empirical mode decomposition in combination with basis pursuit denoising (BPD) provides satisfactory outputs, thanks to the sifting process, compared to the time-domain implementation of the BPD method on both synthetic and real examples. Our results demonstrate that, because of the high computational costs, the BPD-empirical mode decomposition technique should only be used for heavily noisy signals.
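
    A minimal sketch of the decomposition-then-threshold idea is shown below, assuming the third-party PyEMD package for the sifting step and using simple soft thresholding of the highest-frequency IMFs in place of the full basis pursuit solve:

      import numpy as np
      from PyEMD import EMD          # assumes the third-party EMD-signal (PyEMD) package

      def emd_threshold_denoise(trace, noisy_imfs=2):
          # Decompose one GPR trace into IMFs, soft-threshold the first (highest-frequency)
          # IMFs where random noise concentrates, then reconstruct by summing the components.
          imfs = EMD().emd(trace)
          out = np.zeros_like(trace)
          for k, imf in enumerate(imfs):
              if k < noisy_imfs:
                  sigma = np.median(np.abs(imf)) / 0.6745                       # robust noise estimate
                  thr = sigma * np.sqrt(2.0 * np.log(len(trace)))
                  imf = np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)       # soft threshold
              out += imf
          return out

      t = np.linspace(0, 1, 1024)
      clean = np.exp(-200 * (t - 0.3) ** 2) * np.sin(2 * np.pi * 80 * t)        # synthetic reflection
      noisy = clean + 0.2 * np.random.default_rng(0).normal(size=t.size)
      denoised = emd_threshold_denoise(noisy)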

  19. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
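
    The space-time tradeoff being exploited is essentially the Feynman path sum: a single output amplitude of a circuit can be computed by summing over intermediate basis states, at exponential time but with memory linear in the number of qubits. A small illustrative sketch (not the authors' simulator) for a Bell-state circuit:

      import numpy as np

      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
      CNOT = np.eye(4)[[0, 3, 2, 1]]        # control = qubit 0 (least significant), target = qubit 1

      def amplitude(gates, x_in, y_out):
          # Path sum for <y|U_L ... U_1|x>: memory stays O(n_qubits) instead of O(2^n_qubits).
          def bits_to_int(bits, qs):
              return sum(bits[q] << i for i, q in enumerate(qs))

          def amp(layer, state):
              if layer < 0:
                  return 1.0 + 0j if state == tuple(x_in) else 0.0
              U, qs = gates[layer]
              row = bits_to_int(state, qs)
              total = 0.0 + 0j
              for col in range(U.shape[1]):            # enumerate predecessor settings on qs
                  if U[row, col] == 0:
                      continue
                  prev = list(state)
                  for i, q in enumerate(qs):
                      prev[q] = (col >> i) & 1
                  total += U[row, col] * amp(layer - 1, tuple(prev))
              return total

          return amp(len(gates) - 1, tuple(y_out))

      # H on qubit 0, then CNOT; the amplitudes of |00> and |11> come out ~0.7071
      gates = [(H, (0,)), (CNOT, (0, 1))]
      for y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
          print(y, amplitude(gates, (0, 0), y))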

  20. Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudheer, C. D.; Krishnan, S.; Srinivasan, A.

    Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that requires load balancing.
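
    The sampling construction the algorithm is named after - the classical alias table (Vose's variant), which allows O(1) draws after O(n) setup - can be sketched as follows; the paper's load-balancing scheme builds on this idea and additionally bounds each process to at most one received message:

      import random

      def build_alias(weights):
          # Vose's alias method: O(n) table construction, O(1) sampling afterwards.
          n = len(weights)
          total = float(sum(weights))
          prob = [w * n / total for w in weights]
          alias = [0] * n
          small = [i for i, p in enumerate(prob) if p < 1.0]
          large = [i for i, p in enumerate(prob) if p >= 1.0]
          while small and large:
              s, l = small.pop(), large.pop()
              alias[s] = l                          # cell s is topped up from column l
              prob[l] -= 1.0 - prob[s]
              (small if prob[l] < 1.0 else large).append(l)
          for i in small + large:                   # numerical leftovers
              prob[i] = 1.0
          return prob, alias

      def sample(prob, alias, rng=random):
          i = rng.randrange(len(prob))
          return i if rng.random() < prob[i] else alias[i]

      # e.g. pick target processes in proportion to their spare walker capacity (toy weights)
      prob, alias = build_alias([5, 1, 3, 7, 2])
      counts = [0] * 5
      for _ in range(100_000):
          counts[sample(prob, alias)] += 1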

  1. Quantifying uncertainty in climate change science through empirical information theory.

    PubMed

    Majda, Andrew J; Gershgorin, Boris

    2010-08-24

    Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors as well as covariance ratios in a transformation invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science including the prototype behavior of tracer gases such as CO(2). Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
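
    For Gaussian statistics, an information metric of the kind described reduces to the textbook relative entropy between the true climate (mean \bar u, covariance R) and the model climate (\bar u_M, R_M), splitting into a mean-error ("signal") term and a covariance-ratio ("dispersion") term; this standard form is shown here as an illustration, not the paper's exact metric:

      \[
      \mathcal{P} \;=\; \underbrace{\tfrac{1}{2}\,(\bar u - \bar u_M)^{T} R_M^{-1} (\bar u - \bar u_M)}_{\text{signal}}
      \;+\; \underbrace{\tfrac{1}{2}\!\left[\operatorname{tr}\!\left(R\,R_M^{-1}\right) - N - \ln\det\!\left(R\,R_M^{-1}\right)\right]}_{\text{dispersion}}
      \]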

  2. Against the empirical viability of the Deutsch-Wallace-Everett approach to quantum mechanics

    NASA Astrophysics Data System (ADS)

    Dawid, Richard; Thébault, Karim P. Y.

    2014-08-01

    The subjective Everettian approach to quantum mechanics presented by Deutsch and Wallace fails to constitute an empirically viable theory of quantum phenomena. The decision theoretic implementation of the Born rule realized in this approach provides no basis for rejecting Everettian quantum mechanics in the face of empirical data that contradicts the Born rule. The approach of Greaves and Myrvold, which provides a subjective implementation of the Born rule as well but derives it from empirical data rather than decision theoretic arguments, avoids the problem faced by Deutsch and Wallace and is empirically viable. However, there is good reason to cast doubts on its scientific value.

  3. Design of a cooperative problem-solving system for enroute flight planning: An empirical study of its use by airline dispatchers

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.; Mccoy, C. Elaine; Layton, Charles; Orasanu, Judith; Chappel, Sherry; Palmer, EV; Corker, Kevin

    1993-01-01

    In a previous report, an empirical study of 30 pilots using the Flight Planning Testbed was reported. An identical experiment using the Flight Planning Testbed (FPT), except that 27 airline dispatchers were studied, is described. Five general questions were addressed in this study: (1) under what circumstances do the introduction of computer-generated suggestions (flight plans) influence the planning behavior of dispatchers (either in a beneficial or adverse manner); (2) what is the nature of such influences (i.e., how are the person's cognitive processes changed); (3) how beneficial are the general design concepts underlying FPT (use of a graphical interface, embedding graphics in a spreadsheet, etc.); (4) how effective are the specific implementation decisions made in realizing these general design concepts; and (5) how effectively do dispatchers evaluate situations requiring replanning, and how effectively do they identify appropriate solutions to these situations.

  4. Empirical modeling of environment-enhanced fatigue crack propagation in structural alloys for component life prediction

    NASA Technical Reports Server (NTRS)

    Richey, Edward, III

    1995-01-01

    This research aims to develop the methods and understanding needed to incorporate time- and loading-variable-dependent environmental effects on fatigue crack propagation (FCP) into computerized fatigue life prediction codes such as NASA FLAGRO (NASGRO). In particular, the effect of loading frequency on FCP rates in alpha + beta titanium alloys exposed to an aqueous chloride solution is investigated. The approach couples empirical modeling of environmental FCP with corrosion fatigue experiments. Three different computer models have been developed and incorporated in the DOS executable program UVAFAS. A multiple power law model is available, and can fit a set of fatigue data to a multiple power law equation. A model has also been developed which implements the Wei and Landes linear superposition model, as well as an interpolative model which can be utilized to interpolate trends in fatigue behavior based on changes in loading characteristics (stress ratio, frequency, and hold times).

  5. Development of a new model for short period ocean tidal variations of Earth rotation

    NASA Astrophysics Data System (ADS)

    Schuh, Harald

    2015-08-01

    Within project SPOT (Short Period Ocean Tidal variations in Earth rotation) we develop a new high frequency Earth rotation model based on empirical ocean tide models. The main purpose of the SPOT model is its application to space geodetic observations such as GNSS and VLBI. We consider an empirical ocean tide model, which does not require hydrodynamic ocean modeling to determine ocean tidal angular momentum. We use here the EOT11a model of Savcenko & Bosch (2012), which is extended for some additional minor tides (e.g. M1, J1, T2). As empirical tidal models do not provide ocean tidal currents, which are required for the computation of oceanic relative angular momentum, we implement an approach first published by Ray (2001) to estimate ocean tidal current velocities for all tides considered in the extended EOT11a model. The approach itself is tested by application to tidal heights from hydrodynamic ocean tide models, which also provide tidal current velocities. Based on the tidal heights and the associated current velocities the oceanic tidal angular momentum (OTAM) is calculated. For the computation of the related short period variation of Earth rotation, we have re-examined the Euler-Liouville equation for an elastic Earth model with a liquid core. The focus here is on the consistent calculation of the elastic Love numbers and associated Earth model parameters, which are considered in the Euler-Liouville equation for diurnal and sub-diurnal periods in the frequency domain.

  6. Attending unintended transformations of health care infrastructure

    PubMed Central

    Wentzer, Helle; Bygholm, Ann

    2007-01-01

    Introduction: Western health care is under pressure from growing demands on quality and efficiency. The development and implementation of information technology (IT) is a key means for health care authorities to improve health care infrastructure. Theory and methods: Against a background of theories on human-computer interaction and IT-mediated communication, different empirical studies of IT implementation in health care are analyzed. The outcome is an analytical discernment between different relations of communication and levels of interaction with IT in health care infrastructure. These relations and levels are synthesized into a framework for identifying tensions and potential problems in the mediation of health care with the IT system. These problems are also known as unexpected adverse consequences (UACs) of IT implementation in clinical health care practices. Results: This paper develops a conceptual framework for addressing transformations of communication and workflow in health care as a result of implementing IT. Conclusion and discussion: The purpose of the conceptual framework is to support attention to, and continuous screening for, errors and unintended consequences of IT implementation in health care practices and outcomes. PMID:18043725

  7. Imaging simulation of active EO-camera

    NASA Astrophysics Data System (ADS)

    Pérez, José; Repasi, Endre

    2018-04-01

    A modeling scheme for active imaging through atmospheric turbulence is presented. The model consists of two parts: In the first part, the illumination laser beam is propagated to a target that is described by its reflectance properties, using the well-known split-step Fourier method for wave propagation. In the second part, the reflected intensity distribution imaged on a camera is computed using an empirical model developed for passive imaging through atmospheric turbulence. The split-step Fourier method requires carefully chosen simulation parameters. These simulation requirements together with the need to produce dynamic scenes with a large number of frames led us to implement the model on GPU. Validation of this implementation is shown for two different metrics. This model is well suited for Gated-Viewing applications. Examples of imaging simulation results are presented here.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mou, J.I.; King, C.

    The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. A sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.

  9. Computational techniques in tribology and material science at the atomic level

    NASA Technical Reports Server (NTRS)

    Ferrante, J.; Bozzolo, G. H.

    1992-01-01

    Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical, and their limitations, are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, the application of the methods to the calculation of adhesion and friction is presented.

  10. An empirical evaluation of computerized tools to aid in enroute flight planning

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.; Mccoy, C. Elaine; Layton, Charles

    1993-01-01

    The paper describes an experiment using the Flight Planning Testbed (FPT) in which 27 airline dispatchers were studied. Five general questions were addressed in the study: under what circumstances does the introduction of computer-generated suggestions (flight plans) influence the planning behavior of dispatchers; what is the nature of such influences; how beneficial are the general design concepts underlying FPT; how effective are the specific implementation decisions made in realizing these general design concepts; and how effectively do dispatchers evaluate situations requiring replanning and identify appropriate solutions to these situations. The study leaves little doubt that the introduction of computer-generated suggestions for solving a flight planning problem can have a marked impact on the cognitive processes of the user and on the ultimate plan selected.

  11. Facilitating Preschoolers' Scientific Knowledge Construction via Computer Games Regarding Light and Shadow: The Effect of the Prediction-Observation-Explanation (POE) Strategy

    NASA Astrophysics Data System (ADS)

    Hsu, Chung-Yuan; Tsai, Chin-Chung; Liang, Jyh-Chong

    2011-10-01

    Educational researchers have suggested that computer games have a profound influence on students' motivation, knowledge construction, and learning performance, but little empirical research has targeted preschoolers. Thus, the purpose of the present study was to investigate the effects of implementing a computer game that integrates the prediction-observation-explanation (POE) strategy (White and Gunstone in Probing understanding. Routledge, New York, 1992) on facilitating preschoolers' acquisition of scientific concepts regarding light and shadow. The children's alternative conceptions were explored as well. Fifty participants were randomly assigned into either an experimental group that played a computer game integrating the POE model or a control group that played a non-POE computer game. By assessing the students' conceptual understanding through interviews, this study revealed that the students in the experimental group significantly outperformed their counterparts in the concepts regarding "shadow formation in daylight" and "shadow orientation." However, children in both groups, after playing the games, still expressed some alternative conceptions such as "Shadows always appear behind a person" and "Shadows should be on the same side as the sun."

  12. Laboratory study of concrete properties to support implementation of the new AASHTO mechanistic-empirical pavement design guide.

    DOT National Transportation Integrated Search

    2012-09-01

    Properties of concrete embodying materials typically used in Wisconsin paving projects were evaluated in support of future : implementation of the AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG). The primary concrete : properties studied w...

  13. Laboratory study of concrete properties to support implementation of the new AASHTO mechanistic empirical pavement design guide.

    DOT National Transportation Integrated Search

    2012-09-01

    Properties of concrete embodying materials typically used in Wisconsin paving projects were evaluated in support of future : implementation of the AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG). The primary concrete : properties studied w...

  14. An empirical identification and categorisation of training best practices for ERP implementation projects

    NASA Astrophysics Data System (ADS)

    Esteves, Jose Manuel

    2014-11-01

    Although training is one of the most cited critical success factors in Enterprise Resource Planning (ERP) systems implementations, few empirical studies have attempted to examine the characteristics of management of the training process within ERP implementation projects. Based on the data gathered from a sample of 158 respondents across four stakeholder groups involved in ERP implementation projects, and using a mixed method design, we have assembled a derived set of training best practices. Results suggest that the categorised list of ERP training best practices can be used to better understand training activities in ERP implementation projects. Furthermore, the results reveal that the company size and location have an impact on the relevance of training best practices. This empirical study also highlights the need to investigate the role of informal workplace trainers in ERP training activities.

  15. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
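
    The serial algorithm these parallel versions build on computes, for each memory reference, its LRU stack distance: the number of distinct addresses touched since the previous reference to the same address (a reference hits in an LRU cache of capacity C exactly when this distance is at most C). The following minimal serial sketch is hypothetical, included only to illustrate the quantity being computed; it is not the paper's MasPar or Paragon code.

      # Minimal serial LRU stack-distance sketch (hypothetical; not the
      # paper's MasPar/Paragon implementation).
      def lru_stack_distances(trace):
          stack = []                        # most-recently-used address first
          distances = []
          for addr in trace:
              if addr in stack:
                  d = stack.index(addr) + 1     # 1-based stack distance
                  stack.remove(addr)
              else:
                  d = float("inf")              # cold miss: infinite distance
              stack.insert(0, addr)
              distances.append(d)
          return distances

      if __name__ == "__main__":
          trace = ["a", "b", "a", "c", "b", "a"]
          print(lru_stack_distances(trace))     # [inf, inf, 2, inf, 3, 3]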

  16. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de; Aradi, B.

    2015-11-14

    Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data.

  17. A Systematic Literature Review of Empirical Evidence on Computer Games and Serious Games

    ERIC Educational Resources Information Center

    Connolly, Thomas M.; Boyle, Elizabeth A.; MacArthur, Ewan; Hainey, Thomas; Boyle, James M.

    2012-01-01

    This paper examines the literature on computer games and serious games in regard to the potential positive impacts of gaming on users aged 14 years or above, especially with respect to learning, skill enhancement and engagement. Search terms identified 129 papers reporting empirical evidence about the impacts and outcomes of computer games and…

  18. Educational Outcomes and Research from 1:1 Computing Settings

    ERIC Educational Resources Information Center

    Bebell, Damian; O'Dwyer, Laura M.

    2010-01-01

    Despite the growing interest in 1:1 computing initiatives, relatively little empirical research has focused on the outcomes of these investments. The current special edition of the Journal of Technology and Assessment presents four empirical studies of K-12 1:1 computing programs and one review of key themes in the conversation about 1:1 computing…

  19. Towards an accurate representation of electrostatics in classical force fields: Efficient implementation of multipolar interactions in biomolecular simulations

    NASA Astrophysics Data System (ADS)

    Sagui, Celeste; Pedersen, Lee G.; Darden, Thomas A.

    2004-01-01

    The accurate simulation of biologically active macromolecules faces serious limitations that originate in the treatment of electrostatics in the empirical force fields. The current use of "partial charges" is a significant source of errors, since these vary widely with different conformations. By contrast, the molecular electrostatic potential (MEP) obtained through the use of a distributed multipole moment description, has been shown to converge to the quantum MEP outside the van der Waals surface, when higher order multipoles are used. However, in spite of the considerable improvement to the representation of the electronic cloud, higher order multipoles are not part of current classical biomolecular force fields due to the excessive computational cost. In this paper we present an efficient formalism for the treatment of higher order multipoles in Cartesian tensor formalism. The Ewald "direct sum" is evaluated through a McMurchie-Davidson formalism [L. McMurchie and E. Davidson, J. Comput. Phys. 26, 218 (1978)]. The "reciprocal sum" has been implemented in three different ways: using an Ewald scheme, a particle mesh Ewald (PME) method, and a multigrid-based approach. We find that even though the use of the McMurchie-Davidson formalism considerably reduces the cost of the calculation with respect to the standard matrix implementation of multipole interactions, the calculation in direct space remains expensive. When most of the calculation is moved to reciprocal space via the PME method, the cost of a calculation where all multipolar interactions (up to hexadecapole-hexadecapole) are included is only about 8.5 times more expensive than a regular AMBER 7 [D. A. Pearlman et al., Comput. Phys. Commun. 91, 1 (1995)] implementation with only charge-charge interactions. The multigrid implementation is slower but shows very promising results for parallelization. It provides a natural way to interface with continuous, Gaussian-based electrostatics in the future. It is hoped that this new formalism will facilitate the systematic implementation of higher order multipoles in classical biomolecular force fields.

  20. Probabilistic analysis of tsunami hazards

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2006-01-01

    Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
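
    As a toy illustration of the computational side of PTHA, the sketch below builds a runup exceedance (hazard) curve by Monte Carlo simulation of a synthetic event catalog. The event rate and the magnitude-to-runup scaling are invented placeholders; a real analysis would replace them with numerical tsunami propagation results and a calibrated source model.

      # Toy Monte Carlo sketch of a tsunami hazard (runup exceedance) curve.
      # The event rate and magnitude-to-runup scaling are placeholders; a real
      # PTHA would use numerical propagation models and calibrated sources.
      import numpy as np

      rng = np.random.default_rng(0)
      years = 10_000                        # length of the synthetic catalog
      annual_rate = 0.05                    # assumed rate of tsunamigenic events
      n_events = rng.poisson(annual_rate * years)

      magnitudes = rng.uniform(7.0, 9.0, n_events)        # hypothetical sources
      runup_m = 0.5 * np.exp(magnitudes - 7.0) * rng.lognormal(0.0, 0.5, n_events)

      thresholds_m = np.linspace(0.5, 10.0, 20)           # runup levels of interest
      for h in thresholds_m[::5]:
          rate = (runup_m > h).sum() / years              # annual exceedance rate
          print(f"runup > {h:4.1f} m : annual rate ~ {rate:.4f}")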

  1. Traffic load spectra for implementing and using the mechanistic-empirical pavement design guide in Georgia.

    DOT National Transportation Integrated Search

    2014-02-01

    The GDOT is preparing for implementation of the Mechanistic-Empirical Pavement Design : Guide (MEPDG). As part of this preparation, a statewide traffic load spectra program is being : developed for gathering truck axle loading data. This final report...

  2. Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan

    2012-05-15

    Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features, are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2 over the version that used default optimizations, but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.

  3. Using ERP and WfM Systems for Implementing Business Processes: An Empirical Study

    NASA Astrophysics Data System (ADS)

    Aversano, Lerina; Tortorella, Maria

    Software systems typically considered by enterprises for automating business processes belong to two categories: Workflow Management Systems (WfMS) and Enterprise Resource Planning (ERP) systems. The wider diffusion of ERP systems tends to favour this solution, but most ERP systems have several limitations when used to automate business processes. This paper reports an empirical study comparing the ability of ERP systems and WfMSs to implement business processes. Two different case studies were considered in the empirical study, which evaluates and analyses the correctness and completeness of the process models implemented by using ERP and WfM systems.

  4. Short Stories via Computers in EFL Classrooms: An Empirical Study for Reading and Writing Skills

    ERIC Educational Resources Information Center

    Yilmaz, Adnan

    2015-01-01

    The present empirical study scrutinizes the use of short stories via computer technologies in teaching and learning English language. The objective of the study is two-fold: to examine how short stories could be used through computer programs in teaching and learning English and to collect data about students' perceptions of this technique via…

  5. GIS Teacher Training: Empirically-Based Indicators of Effectiveness

    ERIC Educational Resources Information Center

    Höhnle, Steffen; Fögele, Janis; Mehren, Rainer; Schubert, Jan Christoph

    2016-01-01

    In spite of various actions, the implementation of GIS (geographic information systems) in German schools is still very low. In the presented research, teaching experts as well as teaching novices were presented with empirically based constraints for implementation stemming from an earlier survey. In the process of various group discussions, the…

  6. A Systematic Review of Strategies for Implementing Empirically Supported Mental Health Interventions

    ERIC Educational Resources Information Center

    Powell, Byron J.; Proctor, Enola K.; Glass, Joseph E.

    2014-01-01

    Objective: This systematic review examines experimental studies that test the effectiveness of strategies intended to integrate empirically supported mental health interventions into routine care settings. Our goal was to characterize the state of the literature and to provide direction for future implementation studies. Method: A literature…

  7. Preparation for implementation of the mechanistic-empirical pavement design guide in Michigan : part 1 - HMA mixture characterization.

    DOT National Transportation Integrated Search

    2013-03-01

    This is the final report of the Part 1 (HMA Mixture Characterization) of the Preparation for Implementation of the Mechanistic-Empirical Pavement Design Guide in Michigan project. The main objectives of the Part 1 were (i) to conduct a literatu...

  8. Reacting Chemistry Based Burn Model for Explosive Hydrocodes

    NASA Astrophysics Data System (ADS)

    Schwaab, Matthew; Greendyke, Robert; Steward, Bryan

    2017-06-01

    Currently, in hydrocodes designed to simulate explosive material undergoing shock-induced ignition, the state of the art is to use one of numerous reaction burn rate models. These burn models are designed to estimate the bulk chemical reaction rate. Unfortunately, these models are largely based on empirical data and must be recalibrated for every new material being simulated. We propose that the use of an equilibrium Arrhenius-rate reacting chemistry model in place of these empirically derived burn models will improve the accuracy of these computational codes. Such models have been successfully used in codes simulating the flow physics around hypersonic vehicles. A reacting chemistry model of this form was developed for the cyclic nitramine RDX by the Naval Research Laboratory (NRL). Initial implementation of this chemistry-based burn model has been conducted on the Air Force Research Laboratory's MPEXS multi-phase continuum hydrocode. In its present form, the burn rate is based on the destruction rate of RDX from NRL's chemistry model. Early results using the chemistry-based burn model show promise in capturing deflagration-to-detonation features more accurately in continuum hydrocodes than previously achieved using empirically derived burn models.
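
    For reference, a bulk reaction rate in an Arrhenius-based chemistry model takes the generic modified-Arrhenius form k(T) = A * T^n * exp(-Ea / (R * T)). The sketch below simply evaluates that expression; the coefficients are placeholders for illustration only, not the NRL RDX mechanism or the MPEXS implementation.

      # Generic modified-Arrhenius rate evaluation of the form used by
      # chemistry-based burn models: k = A * T**n * exp(-Ea / (R * T)).
      # The coefficients below are placeholders, not the NRL RDX mechanism.
      import math

      R_UNIVERSAL = 8.314462618             # J / (mol K)

      def arrhenius_rate(T, A, n, Ea):
          """Rate coefficient at temperature T (K)."""
          return A * T**n * math.exp(-Ea / (R_UNIVERSAL * T))

      if __name__ == "__main__":
          # Placeholder coefficients for illustration only.
          A, n, Ea = 1.0e13, 0.0, 1.8e5     # 1/s, dimensionless, J/mol
          for T in (800.0, 1200.0, 2000.0):
              print(f"T = {T:6.1f} K  ->  k = {arrhenius_rate(T, A, n, Ea):.3e} 1/s")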

  9. A model of therapist competencies for the empirically supported interpersonal psychotherapy for adolescent depression.

    PubMed

    Sburlati, Elizabeth S; Lyneham, Heidi J; Mufson, Laura H; Schniering, Carolyn A

    2012-06-01

    In order to treat adolescent depression, a number of empirically supported treatments (ESTs) have been developed from both the cognitive behavioral therapy (CBT) and interpersonal psychotherapy (IPT-A) frameworks. Research has shown that in order for these treatments to be implemented in routine clinical practice (RCP), effective therapist training must be generated and provided. However, before such training can be developed, a good understanding of the therapist competencies needed to implement these ESTs is required. Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011) developed a model of therapist competencies for implementing CBT using the well-established Delphi technique. Given that IPT-A differs considerably from CBT, the current study aims to develop a model of therapist competencies for the implementation of IPT-A using a similar procedure to that applied in Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011). This method involved: (1) identifying and reviewing an empirically supported IPT-A approach, (2) extracting therapist competencies required for the implementation of IPT-A, (3) consulting with a panel of IPT-A experts to generate an overall model of therapist competencies, and (4) validating the overall model with the IPT-A manual author. The resultant model offers an empirically derived set of competencies necessary for effectively treating adolescent depression using IPT-A and has wide implications for the development of therapist training, competence assessment measures, and evidence-based practice guidelines. This model, therefore, provides an empirical framework for the development of dissemination and implementation programs aimed at ensuring that adolescents with depression receive effective care in RCP settings. Key similarities and differences between CBT and IPT-A, and the therapist competencies required for implementing these treatments, are also highlighted throughout this article.

  10. GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Dac

    2017-03-01

    The Tersoff potential is one of the empirical many-body potentials that has been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require many more arithmetic operations and introduce greater data dependency. In this contribution, we have implemented a GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and hardware configurations. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, its ability to give deterministic results, and its support for both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.

  11. Use of qualitative environmental and phenotypic variables in the context of allele distribution models: detecting signatures of selection in the genome of Lake Victoria cichlids.

    PubMed

    Joost, Stéphane; Kalbermatten, Michael; Bezault, Etienne; Seehausen, Ole

    2012-01-01

    When searching for loci possibly under selection in the genome, an alternative to population genetics theoretical models is to establish allele distribution models (ADM) for each locus, directly correlating allelic frequencies with environmental variables such as precipitation, temperature, or solar radiation. Such an approach, running multiple logistic regression models in parallel, was implemented in a computing program named MATSAM. Recently, this application was improved in order to support qualitative environmental predictors as well as to permit the identification of associations between genomic variation and individual phenotypes, allowing the detection of loci involved in the genetic architecture of polymorphic characters. Here, we present the corresponding methodological developments and compare the results produced by software implementing population genetics theoretical models (DFDIST and BAYESCAN) and ADM (MATSAM) in an empirical context to detect signatures of genomic divergence associated with speciation in Lake Victoria cichlid fishes.
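
    Conceptually, an allele distribution model is a per-locus logistic regression of allele presence or absence on one or more environmental predictors. The sketch below fits such a model to synthetic data; it only illustrates the regression step, not MATSAM's significance testing or its handling of qualitative predictors.

      # Minimal sketch of an allele distribution model (ADM): a per-locus
      # logistic regression of allele presence/absence on an environmental
      # variable. All data below are synthetic, for illustration only.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 200
      temperature = rng.normal(20.0, 5.0, n)          # environmental predictor
      logit = -4.0 + 0.25 * temperature               # assumed true relation
      allele_present = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

      model = LogisticRegression().fit(temperature.reshape(-1, 1), allele_present)
      print("slope:", model.coef_[0][0], "intercept:", model.intercept_[0])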

  12. Sensitivity Analysis and Accuracy of a CFD-TFM Approach to Bubbling Bed Using Pressure Drop Fluctuations

    PubMed Central

    Tricomi, Leonardo; Melchiori, Tommaso; Chiaramonti, David; Boulet, Micaël; Lavoie, Jean Michel

    2017-01-01

    Based upon the two fluid model (TFM) theory, a CFD model was implemented to investigate a cold multiphase fluidized bubbling bed reactor. The key variable used to characterize the fluid dynamics of the experimental system, and compare it to model predictions, was the time series of the pressure drop induced by the bubble motion across the bed. This time signal was then processed to obtain the power spectral density (PSD) distribution of pressure fluctuations. As an important aspect of this work, the effect of the sampling time scale on the empirical power spectral density (PSD) was investigated. A time scale of 40 s was found to be a good compromise, ensuring both simulation performance and numerical validation consistency. The CFD model was first numerically verified by a mesh refinement process, after which it was used to investigate the sensitivity with regard to the minimum fluidization velocity (as a calibration point for the drag law), the restitution coefficient, and the solid pressure term, while assessing its accuracy in matching the empirical PSD. The 2D model provided a fair match with the empirical time-averaged pressure drop, the related fluctuation amplitude, and the signal's energy computed as the integral of the PSD. A 3D version of the TFM was also used and it improved the match with the empirical PSD in the very first part of the frequency spectrum. PMID:28695119
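
    The signal-processing step described above can be illustrated with a short sketch: estimate the power spectral density of a pressure-drop time series with Welch's method and integrate it to obtain a signal-energy measure. The signal here is synthetic (a dominant "bubble" frequency plus noise) and the sampling rate is assumed; it is not the experimental probe data.

      # Sketch of the PSD analysis step: Welch power spectral density of a
      # pressure-drop time series over a 40 s window. The signal and sampling
      # rate are synthetic/assumed, not the experimental probe data.
      import numpy as np
      from scipy.signal import welch

      fs = 100.0                                     # assumed sampling rate (Hz)
      t = np.arange(0.0, 40.0, 1.0 / fs)             # 40 s window, as in the study
      rng = np.random.default_rng(0)
      signal = np.sin(2 * np.pi * 2.5 * t) + 0.3 * rng.normal(size=t.size)

      freqs, psd = welch(signal, fs=fs, nperseg=1024)
      energy = np.sum(psd) * (freqs[1] - freqs[0])   # integral of the PSD
      print("dominant frequency (Hz):", freqs[np.argmax(psd)])
      print("integrated PSD (signal energy proxy):", energy)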

  13. Sensitivity Analysis and Accuracy of a CFD-TFM Approach to Bubbling Bed Using Pressure Drop Fluctuations.

    PubMed

    Tricomi, Leonardo; Melchiori, Tommaso; Chiaramonti, David; Boulet, Micaël; Lavoie, Jean Michel

    2017-01-01

    Based upon the two fluid model (TFM) theory, a CFD model was implemented to investigate a cold multiphase fluidized bubbling bed reactor. The key variable used to characterize the fluid dynamics of the experimental system, and compare it to model predictions, was the time series of the pressure drop induced by the bubble motion across the bed. This time signal was then processed to obtain the power spectral density (PSD) distribution of pressure fluctuations. As an important aspect of this work, the effect of the sampling time scale on the empirical power spectral density (PSD) was investigated. A time scale of 40 s was found to be a good compromise, ensuring both simulation performance and numerical validation consistency. The CFD model was first numerically verified by a mesh refinement process, after which it was used to investigate the sensitivity with regard to the minimum fluidization velocity (as a calibration point for the drag law), the restitution coefficient, and the solid pressure term, while assessing its accuracy in matching the empirical PSD. The 2D model provided a fair match with the empirical time-averaged pressure drop, the related fluctuation amplitude, and the signal's energy computed as the integral of the PSD. A 3D version of the TFM was also used and it improved the match with the empirical PSD in the very first part of the frequency spectrum.

  14. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  15. A Spreadsheet for the Mixing of a Row of Jets with a Confined Crossflow

    NASA Technical Reports Server (NTRS)

    Holderman, J. D.; Smith, T. D.; Clisset, J. R.; Lear, W. E.

    2005-01-01

    An interactive computer code, written with a readily available software program, Microsoft Excel (Microsoft Corporation, Redmond, WA), is presented which displays 3-D oblique plots of a conserved scalar distribution downstream of jets mixing with a confined crossflow, for a single row, double rows, or opposed rows of jets with or without flow area convergence and/or a non-uniform crossflow scalar distribution. This project used a previously developed empirical model of jets mixing in a confined crossflow to create a Microsoft Excel spreadsheet that can output the profiles of a conserved scalar for jets injected into a confined crossflow given several input variables. The program uses multiple spreadsheets in a single Microsoft Excel notebook to carry out the modeling. The first sheet contains the main program, controls for the type of problem to be solved, and convergence criteria. The first sheet also provides for input of the specific geometry and flow conditions. The second sheet presents the results calculated with this routine to show the effects on the mixing of varying flow and geometric parameters. Comparisons are also made between results from the version of the empirical correlations implemented in the spreadsheet and the versions originally written in Applesoft BASIC (Apple Computer, Cupertino, CA) in the 1980s.

  16. A Spreadsheet for the Mixing of a Row of Jets with a Confined Crossflow. Supplement

    NASA Technical Reports Server (NTRS)

    Holderman, J. D.; Smith, T. D.; Clisset, J. R.; Lear, W. E.

    2005-01-01

    An interactive computer code, written with a readily available software program, Microsoft Excel (Microsoft Corporation, Redmond, WA), is presented which displays 3-D oblique plots of a conserved scalar distribution downstream of jets mixing with a confined crossflow, for a single row, double rows, or opposed rows of jets with or without flow area convergence and/or a non-uniform crossflow scalar distribution. This project used a previously developed empirical model of jets mixing in a confined crossflow to create a Microsoft Excel spreadsheet that can output the profiles of a conserved scalar for jets injected into a confined crossflow given several input variables. The program uses multiple spreadsheets in a single Microsoft Excel notebook to carry out the modeling. The first sheet contains the main program, controls for the type of problem to be solved, and convergence criteria. The first sheet also provides for input of the specific geometry and flow conditions. The second sheet presents the results calculated with this routine to show the effects on the mixing of varying flow and geometric parameters. Comparisons are also made between results from the version of the empirical correlations implemented in the spreadsheet and the versions originally written in Applesoft BASIC (Apple Computer, Cupertino, CA) in the 1980s.

  17. Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments

    PubMed Central

    Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter

    2015-01-01

    Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227

  18. The effect of Fisher information matrix approximation methods in population optimal design calculations.

    PubMed

    Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C

    2016-12-01

    With the increasing popularity of optimal design in drug development, it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulations/estimations was investigated by computing the bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
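
    As a minimal illustration of the D-criterion itself (not the FO/FOCE population-FIM machinery of the paper), the sketch below compares two candidate sampling schedules for a simple quadratic regression model, where the information matrix is proportional to X'X and the design with the larger determinant is D-preferred. All numbers are invented for illustration.

      # Sketch of comparing two candidate sampling designs by the D-criterion
      # (determinant of the Fisher information matrix). For a linear model
      # y = b0 + b1*t + b2*t**2 with i.i.d. noise, the FIM is proportional to X'X.
      import numpy as np

      def d_criterion(times):
          X = np.column_stack([np.ones_like(times), times, times**2])
          fim = X.T @ X
          return np.linalg.det(fim)

      design_a = np.array([0.0, 1.0, 2.0, 3.0])       # evenly spread samples
      design_b = np.array([0.0, 0.1, 2.9, 3.0])       # clustered at the extremes

      print("D-criterion, design A:", d_criterion(design_a))
      print("D-criterion, design B:", d_criterion(design_b))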

  19. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
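
    The general recipe, exposing a model's free parameters to a numerical optimizer and searching for values that maximize a performance or fit criterion, can be sketched as follows. The objective here is a smooth stand-in surrogate, not the ACT-R instance-based learning model of the Sugar Factory task, and the parameter names are hypothetical.

      # Sketch of parameter optimization for a cognitive model: the objective
      # below is a stand-in surrogate response surface, not the ACT-R model.
      import numpy as np
      from scipy.optimize import minimize

      def surrogate_performance(params):
          noise, decay = params
          # Hypothetical smooth surface peaking at noise=0.25, decay=0.5.
          return np.exp(-((noise - 0.25) ** 2 + (decay - 0.5) ** 2) / 0.05)

      result = minimize(lambda p: -surrogate_performance(p),   # maximize via minimize
                        x0=np.array([0.5, 0.8]),
                        bounds=[(0.0, 1.0), (0.0, 1.0)],
                        method="L-BFGS-B")
      print("optimal parameters:", result.x, "score:", -result.fun)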

  20. Improved inland water levels from SAR altimetry using novel empirical and physical retrackers

    NASA Astrophysics Data System (ADS)

    Villadsen, Heidi; Deng, Xiaoli; Andersen, Ole B.; Stenseng, Lars; Nielsen, Karina; Knudsen, Per

    2016-06-01

    Satellite altimetry has proven a valuable source of information on river and lake levels where in situ data are sparse or non-existent. In this study several new methods for obtaining stable inland water levels from CryoSat-2 Synthetic Aperture Radar (SAR) altimetry are presented and evaluated. In addition, the possible benefits from combining physical and empirical retrackers are investigated. The retracking methods evaluated in this paper include the physical SAR Altimetry MOde Studies and Applications (SAMOSA3) model, a traditional subwaveform threshold retracker, the proposed Multiple Waveform Persistent Peak (MWaPP) retracker, and a method combining the physical and empirical retrackers. Using a physical SAR waveform retracker over inland water has not been attempted before but shows great promise in this study. The evaluation is performed for two medium-sized lakes (Lake Vänern in Sweden and Lake Okeechobee in Florida), and in the Amazon River in Brazil. Comparing with in situ data shows that using the SAMOSA3 retracker generally provides the lowest root-mean-squared errors (RMSE), closely followed by the MWaPP retracker. For the empirical retrackers, the RMSE values obtained when comparing with in situ data in Lake Vänern and Lake Okeechobee are on the order of 2-5 cm for well-behaved waveforms. Combining the physical and empirical retrackers did not offer significantly improved mean track standard deviations or RMSEs. Based on these studies, it is suggested that future SAR-derived water levels be obtained using the SAMOSA3 retracker whenever information about physical properties other than range is desired. Otherwise we suggest using the empirical MWaPP retracker described in this paper, which is easy to implement, computationally efficient, and gives a height estimate for even the most contaminated waveforms.
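
    For orientation, a basic threshold retracker picks the range gate at which the waveform first crosses a fixed fraction of its peak power, interpolating between gates. The sketch below shows that idea on a synthetic echo; it is not the subwaveform, MWaPP, or SAMOSA3 implementation used in the paper.

      # Minimal sketch of a threshold retracker: the retracked gate is where
      # the waveform first crosses a fixed fraction of its peak power, with
      # linear interpolation between gates. Illustration only; not the
      # subwaveform, MWaPP, or SAMOSA3 retrackers of the paper.
      import numpy as np

      def threshold_retrack(waveform, threshold=0.5):
          w = np.asarray(waveform, dtype=float)
          level = threshold * w.max()
          i = np.nonzero(w >= level)[0][0]      # first gate at or above the level
          if i == 0:
              return 0.0
          # Linear interpolation between gate i-1 and gate i.
          return (i - 1) + (level - w[i - 1]) / (w[i] - w[i - 1])

      if __name__ == "__main__":
          gates = np.arange(64)
          waveform = np.exp(-0.5 * ((gates - 30.0) / 3.0) ** 2)   # synthetic echo
          print("retracked gate:", threshold_retrack(waveform, 0.5))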

  1. Schema therapy for borderline personality disorder: a comprehensive review of its empirical foundations, effectiveness and implementation possibilities.

    PubMed

    Sempértegui, Gabriela A; Karreman, Annemiek; Arntz, Arnoud; Bekker, Marrie H J

    2013-04-01

    Borderline personality disorder is a serious psychiatric disorder for which the effectiveness of the current pharmacotherapeutic and psychotherapeutic approaches has been shown to be limited. In the last decades, schema therapy has increased in popularity as a treatment of borderline personality disorder; however, systematic evaluation of both the effectiveness and the empirical evidence for the theoretical background of the therapy is limited. This literature review comprehensively evaluates the current empirical status of schema therapy for borderline personality disorder. We first described the theoretical framework and reviewed its empirical foundations. Next, we examined the evidence regarding effectiveness and implementability. We found evidence for a considerable number of elements of Young's schema model; however, the strength of the results varies, and there are also mixed results and some empirical gaps in the theory. The number of studies on effectiveness is small, but the reviewed findings suggest that schema therapy is a promising treatment. In Western-European societies, the therapy could be readily implemented as a cost-effective strategy with positive economic consequences. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. A data colocation grid framework for big data medical image processing: backend design

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-03-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. The results of three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model is empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.

  3. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.

    PubMed

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A

    2018-03-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. The results of three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model is empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.

  4. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design

    PubMed Central

    Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-01-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. The results of three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model is empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668

  5. GRAM-86 - FOUR DIMENSIONAL GLOBAL REFERENCE ATMOSPHERE MODEL

    NASA Technical Reports Server (NTRS)

    Johnson, D.

    1994-01-01

    The Four-D Global Reference Atmosphere program was developed from an empirical atmospheric model which generates values for pressure, density, temperature, and winds from surface level to orbital altitudes. This program can be used to generate altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere. The program was developed for design applications in the Space Shuttle program, such as the simulation of external tank re-entry trajectories. Other potential applications would be global circulation and diffusion studies, and generating profiles for comparison with other atmospheric measurement techniques, such as satellite measured temperature profiles and infrasonic measurement of wind profiles. The program is an amalgamation of two empirical atmospheric models for the low (25km) and the high (90km) atmosphere, with a newly developed latitude-longitude dependent model for the middle atmosphere. The high atmospheric region above 115km is simulated entirely by the Jacchia (1970) model. The Jacchia program sections are in separate subroutines so that other thermospheric-exospheric models could easily be adapted if required for special applications. The atmospheric region between 30km and 90km is simulated by a latitude-longitude dependent empirical model modification of the latitude dependent empirical model of Groves (1971). Between 90km and 115km a smooth transition between the modified Groves values and the Jacchia values is accomplished by a fairing technique. Below 25km the atmospheric parameters are computed by the 4-D worldwide atmospheric model of Spiegler and Fowler (1972). This data set is not included. Between 25km and 30km an interpolation scheme is used between the 4-D results and the modified Groves values. The output parameters consist of components for: (1) latitude, longitude, and altitude dependent monthly and annual means, (2) quasi-biennial oscillations (QBO), and (3) random perturbations to partially simulate the variability due to synoptic, diurnal, planetary wave, and gravity wave variations. Quasi-biennial and random variation perturbations are computed from parameters determined by various empirical studies and are added to the monthly mean values. The UNIVAC version of GRAM is written in UNIVAC FORTRAN and has been implemented on a UNIVAC 1110 under control of EXEC 8 with a central memory requirement of approximately 30K of 36 bit words. The GRAM program was developed in 1976 and GRAM-86 was released in 1986. The monthly data files were last updated in 1986. The DEC VAX version of GRAM is written in FORTRAN 77 and has been implemented on a DEC VAX 11/780 under control of VMS 4.X with a central memory requirement of approximately 100K of 8 bit bytes. The GRAM program was originally developed in 1976 and later converted to the VAX in 1986 (GRAM-86). The monthly data files were last updated in 1986.
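
    The fairing between atmospheric regions can be pictured with a toy sketch: blend two model profiles smoothly across a transition band (here 90-115 km) using a cosine taper. The model functions and numbers below are placeholders, not GRAM's Groves or Jacchia sections.

      # Toy sketch of the altitude "fairing" idea: blend two placeholder model
      # temperature profiles smoothly across a 90-115 km transition band.
      import math

      def groves_like(z_km):                 # placeholder middle-atmosphere model
          return 270.0 - 1.0 * (z_km - 30.0)

      def jacchia_like(z_km):                # placeholder thermosphere model
          return 180.0 + 8.0 * (z_km - 90.0)

      def faired_temperature(z_km, z_lo=90.0, z_hi=115.0):
          if z_km <= z_lo:
              return groves_like(z_km)
          if z_km >= z_hi:
              return jacchia_like(z_km)
          # Cosine taper: weight goes from 0 at z_lo to 1 at z_hi, so the
          # blended profile matches each model at its end of the band.
          w = 0.5 * (1.0 - math.cos(math.pi * (z_km - z_lo) / (z_hi - z_lo)))
          return (1.0 - w) * groves_like(z_km) + w * jacchia_like(z_km)

      for z in (80.0, 95.0, 105.0, 120.0):
          print(f"{z:5.1f} km -> {faired_temperature(z):6.1f} K")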

  6. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.
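
    The kernel that benefits from a kd-tree is the nearest-cluster-centre assignment inside each iteration. The sketch below shows only that basic step with a SciPy kd-tree and one centre update on synthetic data; the paper's filtering algorithm instead stores the points themselves in a kd-tree and prunes whole subtrees, and full ISODATA also splits and merges clusters.

      # Sketch of the kd-tree-accelerated step in ISODATA-style clustering:
      # assign each point to its nearest cluster centre, then update centres.
      # Synthetic data; not the authors' filtering algorithm.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      points = rng.random((10_000, 3))       # e.g. pixels in three spectral bands
      centres = rng.random((16, 3))          # current cluster centres

      dist, label = cKDTree(centres).query(points)   # nearest centre per point

      # One update pass: each centre becomes the mean of its assigned points
      # (empty clusters are left unchanged here; ISODATA would discard/split them).
      new_centres = np.array([points[label == k].mean(axis=0) if np.any(label == k)
                              else centres[k] for k in range(len(centres))])
      print("mean centre shift:", np.linalg.norm(new_centres - centres, axis=1).mean())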

  7. A Fast Implementation of the Isodata Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Le Moigne, Jacqueline; Mount, David M.; Netanyahu, Nathan S.

    2007-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.

  8. Modelling nanoscale objects in order to conduct an empirical research into their properties as part of an engineering system designed

    NASA Astrophysics Data System (ADS)

    Makarov, M.; Shchanikov, S.; Trantina, N.

    2017-01-01

    We have studied the properties of nanoscale objects that matter most for their future application, modelling these objects both as free-standing physical elements outside the structure of the engineering system designed to integrate them and as part of a system operating under the influence of the external environment. For the empirical research proposed in this work, we chose a memristor, a nanoscale electronic element intended for use in designing information processing systems with parallel architectures. The objective of the research was to maximize the fault-tolerance index of a memristor-based system under the full range of internal destabilizing factors and external environmental influences. The results have enabled us to identify and classify the factors that determine the fault tolerance of a hardware computing system built on this nanoscale element base.

  9. The ecological module of BOATS-1.0: a bioenergetically-constrained model of marine upper trophic levels suitable for studies of fisheries and ocean biogeochemistry

    NASA Astrophysics Data System (ADS)

    Carozza, D. A.; Bianchi, D.; Galbraith, E. D.

    2015-12-01

    Environmental change and the exploitation of marine resources have had profound impacts on marine communities, with potential implications for ocean biogeochemistry and food security. In order to study such global-scale problems, it is helpful to have computationally efficient numerical models that predict the first-order features of fish biomass production as a function of the environment, based on empirical and mechanistic understandings of marine ecosystems. Here we describe the ecological module of the BiOeconomic mArine Trophic Size-spectrum (BOATS) model, which takes an Earth-system approach to modeling fish biomass at the global scale. The ecological model is designed to be used on an Earth System model grid, and determines size spectra of fish biomass by explicitly resolving life history as a function of local temperature and net primary production. Biomass production is limited by the availability of photosynthetic energy to upper trophic levels, following empirical trophic efficiency scalings, and by well-established empirical temperature-dependent growth rates. Natural mortality is calculated using an empirical size-based relationship, while reproduction and recruitment depend on both the food availability to larvae from net primary production and the production of eggs by mature adult fish. We describe predicted biomass spectra and compare them to observations, and conduct a sensitivity study to determine how they change as a function of net primary production and temperature. The model relies on a limited number of parameters compared to similar modeling efforts, while retaining realistic representations of biological and ecological processes, and is computationally efficient, allowing extensive parameter-space analyses even when implemented globally. As such, it enables the exploration of the linkages between ocean biogeochemistry, climate, and upper trophic levels at the global scale, as well as a representation of fish biomass for idealized studies of fisheries.

  10. The ecological module of BOATS-1.0: a bioenergetically constrained model of marine upper trophic levels suitable for studies of fisheries and ocean biogeochemistry

    NASA Astrophysics Data System (ADS)

    Carozza, David Anthony; Bianchi, Daniele; Galbraith, Eric Douglas

    2016-04-01

    Environmental change and the exploitation of marine resources have had profound impacts on marine communities, with potential implications for ocean biogeochemistry and food security. In order to study such global-scale problems, it is helpful to have computationally efficient numerical models that predict the first-order features of fish biomass production as a function of the environment, based on empirical and mechanistic understandings of marine ecosystems. Here we describe the ecological module of the BiOeconomic mArine Trophic Size-spectrum (BOATS) model, which takes an Earth-system approach to modelling fish biomass at the global scale. The ecological model is designed to be used on an Earth-system model grid, and determines size spectra of fish biomass by explicitly resolving life history as a function of local temperature and net primary production. Biomass production is limited by the availability of photosynthetic energy to upper trophic levels, following empirical trophic efficiency scalings, and by well-established empirical temperature-dependent growth rates. Natural mortality is calculated using an empirical size-based relationship, while reproduction and recruitment depend on both the food availability to larvae from net primary production and the production of eggs by mature adult fish. We describe predicted biomass spectra and compare them to observations, and conduct a sensitivity study to determine how they change as a function of net primary production and temperature. The model relies on a limited number of parameters compared to similar modelling efforts, while retaining reasonably realistic representations of biological and ecological processes, and is computationally efficient, allowing extensive parameter-space analyses even when implemented globally. As such, it enables the exploration of the linkages between ocean biogeochemistry, climate, and upper trophic levels at the global scale, as well as a representation of fish biomass for idealized studies of fisheries.

  11. Efficient computation of the joint sample frequency spectra for multiple populations.

    PubMed

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
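
    Computing the expected joint SFS under a demographic model is what momi addresses and is not reproduced here; as a hedged illustration of the statistic itself, the sketch below tabulates an observed two-population joint SFS from per-site derived-allele counts (all names and data are hypothetical):

```python
import numpy as np

def joint_sfs(derived_counts_pop1, derived_counts_pop2, n1, n2):
    """Tabulate the observed joint site frequency spectrum for two populations.

    Entry [i, j] counts the polymorphic sites at which i of the n1 sampled
    alleles in population 1 and j of the n2 alleles in population 2 carry the
    derived variant. This is the observed statistic only; computing the
    expected joint SFS under a demographic model is what momi provides.
    """
    sfs = np.zeros((n1 + 1, n2 + 1), dtype=np.int64)
    np.add.at(sfs, (derived_counts_pop1, derived_counts_pop2), 1)
    return sfs

# Toy example: 5 sites, samples of 4 and 6 alleles.
c1 = np.array([1, 0, 2, 4, 1])
c2 = np.array([0, 3, 2, 6, 1])
print(joint_sfs(c1, c2, n1=4, n2=6))
```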

  12. Efficient computation of the joint sample frequency spectra for multiple populations

    PubMed Central

    Kamm, John A.; Terhorst, Jonathan; Song, Yun S.

    2016-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248

  13. pLARmEB: integration of least angle regression with empirical Bayes for multilocus genome-wide association studies.

    PubMed

    Zhang, J; Feng, J-Y; Ni, Y-L; Wen, Y-J; Niu, Y; Tamba, C L; Yue, C; Song, Q; Zhang, Y-M

    2017-06-01

    Multilocus genome-wide association studies (GWAS) have become the state-of-the-art procedure to identify quantitative trait nucleotides (QTNs) associated with complex traits. However, implementing multilocus models in GWAS remains difficult. In this study, we integrated least angle regression with empirical Bayes to perform multilocus GWAS under polygenic background control. We used a model transformation algorithm that whitens the covariance structure contributed by the polygenic kinship matrix K and environmental noise. Markers on one chromosome were included simultaneously in a multilocus model and least angle regression was used to select the most potentially associated single-nucleotide polymorphisms (SNPs), whereas the markers on the other chromosomes were used to calculate the kinship matrix as polygenic background control. The SNPs selected in the multilocus model were then tested for association with the trait by empirical Bayes and a likelihood ratio test. We refer to this method as pLARmEB (polygenic-background-control-based least angle regression plus empirical Bayes). Results from simulation studies showed that pLARmEB was more powerful in QTN detection and more accurate in QTN effect estimation, had a lower false positive rate and required less computing time than the Bayesian hierarchical generalized linear model, efficient mixed model association (EMMA) and least angle regression plus empirical Bayes. pLARmEB, the multilocus random-SNP-effect mixed linear model and fast multilocus random-SNP-effect EMMA methods had almost equal power of QTN detection in simulation experiments. However, only pLARmEB identified 48 previously reported genes for 7 flowering time-related traits in Arabidopsis thaliana.
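
    As a minimal sketch of the least angle regression selection step only (assuming scikit-learn's Lars; the kinship-based whitening, empirical Bayes shrinkage, and likelihood ratio tests of pLARmEB are not reproduced):

```python
import numpy as np
from sklearn.linear_model import Lars

def select_candidate_snps(genotypes, phenotype, max_snps=20):
    """Pre-select potentially associated SNPs on one chromosome with LARS.

    `genotypes` is an (individuals x SNPs) matrix coded 0/1/2 and `phenotype`
    a vector of trait values. This is only the variable-selection step;
    pLARmEB additionally whitens the model with the kinship-based covariance
    and re-tests the selected SNPs with empirical Bayes and a likelihood
    ratio test.
    """
    model = Lars(n_nonzero_coefs=max_snps)
    model.fit(genotypes, phenotype)
    return np.flatnonzero(model.coef_)   # indices of SNPs retained by LARS

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 500)).astype(float)
y = X[:, 10] * 0.8 + rng.normal(size=200)       # one causal SNP plus noise
print(select_candidate_snps(X, y, max_snps=10))
```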

  14. Pedagogising the University: On Higher Education Policy Implementation and Its Effects on Social Relations

    ERIC Educational Resources Information Center

    Stavrou, Sophia

    2016-01-01

    This paper aims at providing a theoretical and empirical discussion on the concept of pedagogisation which derives from the hypothesis of a new era of "totally pedagogised society" in Basil Bernstein's work. The article is based on empirical research on higher education policy, with a focus on the implementation of curriculum change…

  15. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a sufficiently large ensemble to reduce the residue noise, which makes them computationally expensive. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD can decompose the signal efficiently at lower computational cost, and the IMF evaluation index can select the meaningful IMFs automatically.
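
    The complementary-ensemble idea itself can be sketched compactly: decompose signal-plus-noise and signal-minus-noise pairs and average the resulting IMFs. The sketch below assumes a user-supplied emd_func callable (for example, PyEMD's EMD().emd could be passed in); the paper's modified CEEMD and IMF evaluation index are not reproduced.

```python
import numpy as np

def ceemd(signal, emd_func, ensemble_size=50, noise_std=0.2, seed=0):
    """Complementary ensemble EMD sketch.

    `emd_func(x)` must return an array of IMFs (one per row) for a 1-D
    signal; supplying such a callable is an assumption of this sketch.
    Pairs of positive/negative white noise are added so their contributions
    largely cancel in the ensemble average, which is what reduces residue
    noise relative to plain EEMD.
    """
    rng = np.random.default_rng(seed)
    stacks = []
    for _ in range(ensemble_size):
        noise = noise_std * rng.standard_normal(len(signal))
        for s in (signal + noise, signal - noise):     # complementary pair
            stacks.append(np.asarray(emd_func(s)))
    # Different realizations may yield different IMF counts; trim to the
    # common minimum before averaging (a simplification).
    n_imfs = min(imfs.shape[0] for imfs in stacks)
    trimmed = np.stack([imfs[:n_imfs] for imfs in stacks])
    return trimmed.mean(axis=0)                        # averaged IMFs
```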

  16. A Poisson process approximation for generalized K-S confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
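
    For orientation only, the sketch below builds a standard (unweighted) one-sided lower confidence band from the empirical CDF using the classical one-sided Kolmogorov-Smirnov critical value from SciPy; the paper's generalized, tail-narrowing regions and their Poisson-process approximation are not implemented here.

```python
import numpy as np
from scipy.stats import ksone

def one_sided_lower_band(sample, alpha=0.05):
    """Lower confidence band for a continuous CDF from the empirical CDF.

    Uses the classical one-sided Kolmogorov-Smirnov critical value
    d = ksone.ppf(1 - alpha, n), so that with confidence 1 - alpha,
    F(x) >= F_n(x) - d for all x. The generalized (tail-weighted) statistic
    of the paper is not implemented here.
    """
    x = np.sort(sample)
    n = len(x)
    ecdf = np.arange(1, n + 1) / n
    d = ksone.ppf(1 - alpha, n)
    return x, np.clip(ecdf - d, 0.0, 1.0)

xs, lower = one_sided_lower_band(np.random.default_rng(2).exponential(size=100))
```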

  17. S-matrix analysis of the baryon electric charge correlation

    NASA Astrophysics Data System (ADS)

    Lo, Pok Man; Friman, Bengt; Redlich, Krzysztof; Sasaki, Chihiro

    2018-03-01

    We compute the correlation of the net baryon number with the electric charge (χBQ) for an interacting hadron gas using the S-matrix formulation of statistical mechanics. The observable χBQ is particularly sensitive to the details of the pion-nucleon interaction, which are consistently incorporated in the current scheme via the empirical scattering phase shifts. Comparing to the recent lattice QCD studies in the (2 + 1)-flavor system, we find that the natural implementation of interactions and the proper treatment of resonances in the S-matrix approach lead to an improved description of the lattice data over that obtained in the hadron resonance gas model.
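
    For orientation, the conventional lattice-QCD definition of the mixed baryon-charge susceptibility (stated here for reference, not quoted from the paper) is

```latex
\chi_{BQ} \;=\; \left.\frac{\partial^{2}\,(P/T^{4})}{\partial\hat{\mu}_{B}\,\partial\hat{\mu}_{Q}}\right|_{\vec{\mu}=0},
\qquad \hat{\mu}_{X}\equiv\frac{\mu_{X}}{T},
```

    so that χBQ measures how the net baryon density responds to an electric-charge chemical potential (and vice versa).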

  18. Modeling for Battery Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick

    2017-01-01

    For any battery-powered vehicle (be it an unmanned aerial vehicle, a small passenger aircraft, or an asset in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health and performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as understanding of the underlying electrochemical mechanisms has advanced. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles; electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model type has its advantages and disadvantages. The former has the advantage of being computationally efficient, but has limited accuracy and robustness due to the approximations used, and as a result of those approximations cannot represent aging well. The latter has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited for online prognostic applications. In addition, both multi-physics and atomistic models are computationally expensive and hence even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures the effects of aging, is computationally efficient, and is of suitable accuracy for reliable EOD prediction in a variety of operational profiles. The model can be considered an electrochemical engineering model, but unlike most such models found in the literature, certain approximations are made that retain computational efficiency for online implementation. Although the focus here is on Li-ion batteries, the model is quite general and can be applied to different chemistries through a change of model parameter values. Progress on model development is presented, including model validation and EOD prediction results.
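
    As an illustration of the "electrical circuit equivalent" (empirical) model type contrasted in the abstract, the sketch below steps a first-order Thevenin circuit with forward Euler; all parameter values are placeholders, and this is not the NASA electrochemistry-based model described above.

```python
def thevenin_step(soc, v_rc, current, dt,
                  capacity_As=3600.0 * 2.2,   # 2.2 Ah cell (illustrative)
                  r0=0.05, r1=0.02, c1=2000.0):
    """One Euler step of a first-order Thevenin equivalent-circuit model.

    Generic 'electrical circuit equivalent' sketch, not NASA's
    electrochemistry-based model. `soc` is state of charge in [0, 1], `v_rc`
    the RC-branch voltage, `current` the discharge current in amperes
    (positive = discharging).
    """
    ocv = 3.0 + 1.2 * soc                     # crude linear OCV(soc) stand-in
    soc_next = soc - current * dt / capacity_As
    v_rc_next = v_rc + dt * (current / c1 - v_rc / (r1 * c1))
    v_terminal = ocv - v_rc - r0 * current
    return soc_next, v_rc_next, v_terminal

# Discharge at 1 A for one minute in 1 s steps.
soc, v_rc = 1.0, 0.0
for _ in range(60):
    soc, v_rc, v = thevenin_step(soc, v_rc, current=1.0, dt=1.0)
print(round(soc, 4), round(v, 3))
```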

  19. Extended polarization in 3rd order SCC-DFTB from chemical potential equalization

    PubMed Central

    Kaminski, Steve; Giese, Timothy J.; Gaus, Michael; York, Darrin M.; Elstner, Marcus

    2012-01-01

    In this work we augment the approximate density functional method SCC-DFTB (DFTB3) with the chemical potential equalization (CPE) approach in order to improve its performance for molecular electronic polarizabilities. The CPE method, originally implemented for NDDO-type methods by Giese and York, has been shown to significantly improve the response properties of minimal basis methods, and has recently been applied to SCC-DFTB. CPE overcomes this inherent limitation of minimal basis methods by supplying an additional response density. The systematic underestimation is thereby corrected quantitatively without the need to extend the atomic orbital basis, i.e. without significantly increasing the overall computational cost. In particular, the dependence of the polarizability on the molecular charge state is significantly improved by the CPE extension of DFTB3. The empirical parameters introduced by the CPE approach were optimized for 172 organic molecules in order to match results from density functional theory (DFT) calculations using large basis sets. However, the first-order derivatives of molecular polarizabilities, as required, e.g., to compute Raman activities, are not improved by the current CPE implementation, i.e. Raman spectra are not improved. PMID:22894819

  20. Levels and loops: the future of artificial intelligence and neuroscience.

    PubMed Central

    Bell, A J

    1999-01-01

    In discussing artificial intelligence and neuroscience, I will focus on two themes. The first is the universality of cycles (or loops): sets of variables that affect each other in such a way that any feed-forward account of causality and control, while informative, is misleading. The second theme is based around the observation that a computer is an intrinsically dualistic entity, with its physical set-up designed so as not to interfere with its logical set-up, which executes the computation. The brain is different. When analysed empirically at several different levels (cellular, molecular), it appears that there is no satisfactory way to separate a physical brain model (or algorithm, or representation) from a physical implementational substrate. When program and implementation are inseparable and thus interfere with each other, a dualistic point of view is impossible. Forced by empiricism into a monistic perspective, the brain-mind appears as neither embodied by nor embedded in physical reality, but rather as identical to physical reality. This perspective has implications for the future of science and society. I will approach these from a negative point of view, by critiquing some of our millennial culture's popular projected futures. PMID:10670021

  1. Development of a Learning-Oriented Computer Assisted Instruction Designed to Improve Skills in the Clinical Assessment of the Nutritional Status: A Pilot Evaluation

    PubMed Central

    García de Diego, Laura; Cuervo, Marta; Martínez, J. Alfredo

    2015-01-01

    Computer assisted instruction (CAI) is an effective tool for evaluating and training students and professionals. In this article we present a learning-oriented CAI, which has been developed for students and health professionals to acquire and retain new knowledge through practice. A two-phase pilot evaluation was conducted, involving 8 nutrition experts and 30 postgraduate students, respectively. In each training session, the developed software guides users in the integral evaluation of a patient's nutritional status and helps them to implement actions. The program incorporates clinical tools that can be used to recognize a patient's possible needs, improve clinical reasoning, and develop professional skills. Among them are assessment questionnaires and evaluation criteria, cardiovascular risk charts, clinical guidelines and photographs of various diseases. This CAI is a complete, easy-to-use and versatile software package, aimed at clinical specialists, medical staff, scientists, educators and clinical students, which can be used as a learning tool. This application constitutes an advanced method for students and health professionals to accomplish nutritional assessments combining theoretical and empirical issues, which can be implemented in their academic curriculum. PMID:25978456

  2. Development of a learning-oriented computer assisted instruction designed to improve skills in the clinical assessment of the nutritional status: a pilot evaluation.

    PubMed

    García de Diego, Laura; Cuervo, Marta; Martínez, J Alfredo

    2015-01-01

    Computer assisted instruction (CAI) is an effective tool for evaluating and training students and professionals. In this article we present a learning-oriented CAI, which has been developed for students and health professionals to acquire and retain new knowledge through practice. A two-phase pilot evaluation was conducted, involving 8 nutrition experts and 30 postgraduate students, respectively. In each training session, the developed software guides users in the integral evaluation of a patient's nutritional status and helps them to implement actions. The program incorporates clinical tools that can be used to recognize a patient's possible needs, improve clinical reasoning, and develop professional skills. Among them are assessment questionnaires and evaluation criteria, cardiovascular risk charts, clinical guidelines and photographs of various diseases. This CAI is a complete, easy-to-use and versatile software package, aimed at clinical specialists, medical staff, scientists, educators and clinical students, which can be used as a learning tool. This application constitutes an advanced method for students and health professionals to accomplish nutritional assessments combining theoretical and empirical issues, which can be implemented in their academic curriculum.

  3. North Dakota implementation of mechanistic-empirical pavement design guide (MEPDG).

    DOT National Transportation Integrated Search

    2014-12-01

    North Dakota currently designs roads based on the AASHTO Design Guide procedure, which is based on : the empirical findings of the AASHTO Road Test of the late 1950s. However, limitations of the current : empirical approach have prompted AASHTO to mo...

  4. Accelerating atomistic calculations of quantum energy eigenstates on graphic cards

    NASA Astrophysics Data System (ADS)

    Rodrigues, Walter; Pecchia, A.; Lopez, M.; Auf der Maur, M.; Di Carlo, A.

    2014-10-01

    Electronic properties of nanoscale materials require the calculation of eigenvalues and eigenvectors of large matrices. This bottleneck can be overcome by parallel computing techniques or the introduction of faster algorithms. In this paper we report a custom implementation of the Lanczos algorithm with simple restart, optimized for graphical processing units (GPUs). The whole algorithm has been developed using CUDA and runs entirely on the GPU, with a specialized implementation that saves memory and keeps machine-to-device data transfers to a minimum. Furthermore, parallel distribution over several GPUs has been attained using the standard message passing interface (MPI). Benchmark calculations performed on a GaN/AlGaN wurtzite quantum dot with up to 600,000 atoms are presented. The empirical tight-binding (ETB) model with an sp3d5s∗+spin-orbit parametrization has been used to build the system Hamiltonian (H).
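
    For reference, a minimal CPU/NumPy sketch of plain Lanczos tridiagonalization is shown below (with full reorthogonalization for numerical robustness on small dense matrices); the paper's contribution is the CUDA implementation with simple restarts and MPI distribution, which is not reproduced here.

```python
import numpy as np

def lanczos_ritz_values(A, k, seed=0):
    """Approximate extremal eigenvalues of a symmetric matrix by Lanczos.

    Builds a k-step Krylov basis Q and the tridiagonal projection T, then
    returns the eigenvalues of T (Ritz values). Full reorthogonalization is
    used here for simplicity; large-scale codes avoid it.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j + 1 < k:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

H = np.random.default_rng(1).standard_normal((300, 300))
H = (H + H.T) / 2                                   # symmetric test matrix
print(lanczos_ritz_values(H, k=60)[:3], np.linalg.eigvalsh(H)[:3])
```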

  5. Ontological approach for safe and effective polypharmacy prescription

    PubMed Central

    Grando, Adela; Farrish, Susan; Boyd, Cynthia; Boxwala, Aziz

    2012-01-01

    The intake of multiple medications by patients with various medical conditions challenges the delivery of medical care. Initial empirical studies and pilot implementations seem to indicate that generic safe and effective multi-drug prescription principles could be defined and reused to reduce adverse drug events and to support compliance with medical guidelines and drug formularies. Given that ontologies are known to provide well-principled, sharable, setting-independent and machine-interpretable declarative specification frameworks for modeling and reasoning on biomedical problems, we explore here their use in the context of multi-drug prescription. We propose an ontology for modeling drug-related knowledge and a repository of safe and effective generic prescription principles. To test the usability and the level of granularity of the developed ontology-based specification models and heuristics, we implemented a tool that computes the complexity of multi-drug treatments, and a decision aid to check the safety and effectiveness of prescribed multi-drug treatments. PMID:23304299

  6. Treatment staff turnover in organizations implementing evidence-based practices: Turnover rates and their association with client outcomes

    PubMed Central

    Garner, Bryan R.; Hunter, Brooke D.; Modisette, Kathryn C.; Ihnes, Pamela C.; Godley, Susan H.

    2011-01-01

    High staff turnover has been described as a problem for the substance use disorder treatment field. This assertion is based primarily on the assumption that staff turnover adversely impacts treatment delivery and effectiveness. This assumption, however, has not been empirically tested. In this study, we computed annualized rates of turnover for treatment staff (n=249) participating in an evidence-based practice implementation initiative and examined the association between organizational-level rates of staff turnover and client-level outcomes. Annualized rates of staff turnover were 31% for clinicians and 19% for clinical supervisors. Additionally, multilevel analyses did not reveal the expected relationship between staff turnover and poorer client-level outcomes. Rather, organizational-level rates of staff turnover were found to have a significant positive association with two measures of treatment effectiveness: less involvement in illegal activity and lower social risk. Possible explanations for these findings are discussed. PMID:22154040

  7. Coping with the Stigma of Mental Illness: Empirically-Grounded Hypotheses from Computer Simulations

    ERIC Educational Resources Information Center

    Kroska, Amy; Har, Sarah K.

    2011-01-01

    This research demonstrates how affect control theory and its computer program, "Interact", can be used to develop empirically-grounded hypotheses regarding the connection between cultural labels and behaviors. Our demonstration focuses on propositions in the modified labeling theory of mental illness. According to the MLT, negative societal…

  8. Gathering Empirical Evidence Concerning Links between Computer Aided Design (CAD) and Creativity

    ERIC Educational Resources Information Center

    Musta'amal, Aede Hatib; Norman, Eddie; Hodgson, Tony

    2009-01-01

    Discussion is often reported concerning potential links between computer-aided designing and creativity, but there is a lack of systematic enquiry to gather empirical evidence concerning such links. This paper reports an indication of findings from other research studies carried out in contexts beyond general education that have sought evidence…

  9. Water exchanges versus water works: Insights from a computable general equilibrium model for the Balearic Islands

    NASA Astrophysics Data System (ADS)

    Gómez, Carlos M.; Tirado, Dolores; Rey-Maquieira, Javier

    2004-10-01

    We present a computable general equilibrium model (CGE) for the Balearic Islands, specifically performed to analyze the welfare gains associated with an improvement in the allocation of water rights through voluntary water exchanges (mainly between the agriculture and urban sectors). For the implementation of the empirical model we built the social accounting matrix (SAM) from the last available input-output table of the islands (for the year 1997). Water exchanges provide an important alternative to make the allocation of water flexible enough to cope with the cyclical droughts that characterize the natural water regime on the islands. The main conclusion is that the increased efficiency provided by "water markets" makes this option more advantageous than the popular alternative of building new desalinization plants. Contrary to common opinion, a "water market" can also have positive and significant impacts on the agricultural income.

  10. Mining protein-protein interaction networks: denoising effects

    NASA Astrophysics Data System (ADS)

    Marras, Elisabetta; Capobianco, Enrico

    2009-01-01

    A typical instrument to pursue analysis in complex network studies is the analysis of the statistical distributions. They are usually computed for measures which characterize network topology, and are aimed at capturing both structural and dynamics aspects. Protein-protein interaction networks (PPIN) have also been studied through several measures. It is in general observed that a power law is expected to characterize scale-free networks. However, mixing the original noise cover with outlying information and other system-dependent fluctuations makes the empirical detection of the power law a difficult task. As a result the uncertainty level increases when looking at the observed sample; in particular, one may wonder whether the computed features may be sufficient to explain the interactome. We then address noise problems by implementing both decomposition and denoising techniques that reduce the impact of factors known to affect the accuracy of power law detection.

  11. Evaluating the Theoretic Adequacy and Applied Potential of Computational Models of the Spacing Effect.

    PubMed

    Walsh, Matthew M; Gluck, Kevin A; Gunzelmann, Glenn; Jastrzembski, Tiffany; Krusmark, Michael

    2018-06-01

    The spacing effect is among the most widely replicated empirical phenomena in the learning sciences, and its relevance to education and training is readily apparent. Yet successful applications of spacing effect research to education and training are rare. Computational modeling can provide the crucial link between a century of accumulated experimental data on the spacing effect and the emerging interest in using that research to enable adaptive instruction. In this paper, we review relevant literature and identify 10 criteria for rigorously evaluating computational models of the spacing effect. Five relate to evaluating the theoretic adequacy of a model, and five relate to evaluating its application potential. We use these criteria to evaluate a novel computational model of the spacing effect called the Predictive Performance Equation (PPE). PPE combines elements of earlier models of learning and memory, including the General Performance Equation, Adaptive Control of Thought-Rational, and the New Theory of Disuse, giving rise to a novel computational account of the spacing effect that performs favorably across the complete sets of theoretic and applied criteria. We implemented two other previously published computational models of the spacing effect and compared them to PPE using the theoretic and applied criteria as guides. Copyright © 2018 Cognitive Science Society, Inc.

  12. Empirically Understanding Can Make Problems Go Away: The Case of the Chinese Room

    ERIC Educational Resources Information Center

    Overskeid, Geir

    2005-01-01

    The many authors debating whether computers can understand often fail to clarify what understanding is, and no agreement exists on this important issue. In his Chinese room argument, Searle (1980) claims that computers running formal programs can never understand. I discuss Searle's claim based on a definition of understanding that is empirical,…

  13. An Empirical Evaluation of Puzzle-Based Learning as an Interest Approach for Teaching Introductory Computer Science

    ERIC Educational Resources Information Center

    Merrick, K. E.

    2010-01-01

    This correspondence describes an adaptation of puzzle-based learning to teaching an introductory computer programming course. Students from two offerings of the course--with and without the puzzle-based learning--were surveyed over a two-year period. Empirical results show that the synthesis of puzzle-based learning concepts with existing course…

  14. GPU-based simulation of optical propagation through turbulence for active and passive imaging

    NASA Astrophysics Data System (ADS)

    Monnier, Goulven; Duval, François-Régis; Amram, Solène

    2014-10-01

    IMOTEP is a GPU-based (Graphical Processing Units) software relying on a fast parallel implementation of Fresnel diffraction through successive phase screens. Its applications include active imaging, laser telemetry and passive imaging through turbulence with anisoplanatic spatial and temporal fluctuations. Thanks to parallel implementation on GPU, speedups ranging from 40X to 70X are achieved. The present paper gives a brief overview of IMOTEP models, algorithms, implementation and user interface. It then focuses on major improvements recently brought to the anisoplanatic imaging simulation method. Previously, we took advantage of the computational power offered by the GPU to develop a simulation method based on large series of deterministic realisations of the PSF distorted by turbulence. The phase screen propagation algorithm, by reproducing higher moments of the incident wavefront distortion, provides realistic PSFs. However, we initially used a coarse Gaussian model to fit the numerical PSFs and characterise their spatial statistics through only three parameters (two-dimensional displacement of the centroid and width). This approach was unable to reproduce effects related to the details of the PSF structure, especially the "speckles" leading to prominent high-frequency content in short-exposure images. To overcome this limitation, we recently implemented a new empirical model of the PSF, based on Principal Components Analysis (PCA), intended to capture most of the PSF complexity. The GPU implementation allows estimating and handling efficiently the numerous (up to several hundred) principal components typically required under the strong turbulence regime. A first, demanding computational step involves PCA, phase screen propagation and covariance estimates. In a second step, realistic instantaneous images, fully accounting for anisoplanatic effects, are quickly generated. Preliminary results are presented.
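
    The PCA-based empirical PSF model can be illustrated with a small NumPy/scikit-learn sketch: stack many PSF realizations, keep a few principal components, and reconstruct. The synthetic PSFs below are placeholders for the phase-screen outputs described in the paper, and the component count is arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Stand-in for turbulence-distorted PSF realizations: 500 noisy, randomly
# shifted Gaussian spots on a 32x32 grid (real inputs would come from the
# phase-screen propagation described in the paper).
y, x = np.mgrid[:32, :32]
psfs = np.stack([
    np.exp(-(((x - 16 + rng.normal(0, 2)) ** 2 +
              (y - 16 + rng.normal(0, 2)) ** 2) / (2 * 3.0 ** 2)))
    + 0.01 * rng.standard_normal((32, 32))
    for _ in range(500)
])

# Fit a low-dimensional empirical PSF model and reconstruct one realization.
pca = PCA(n_components=20).fit(psfs.reshape(500, -1))
coeffs = pca.transform(psfs[0].reshape(1, -1))
psf_model = pca.inverse_transform(coeffs).reshape(32, 32)
print(pca.explained_variance_ratio_[:5].round(3))
```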

  15. Conceptual Design Optimization of an Augmented Stability Aircraft Incorporating Dynamic Response and Actuator Constraints

    NASA Technical Reports Server (NTRS)

    Welstead, Jason; Crouse, Gilbert L., Jr.

    2014-01-01

    Empirical sizing guidelines such as tail volume coefficients have long been used in the early aircraft design phases for sizing stabilizers, resulting in conservatively stable aircraft. While successful, this results in increased empty weight, reduced performance, and greater procurement and operational cost relative to an aircraft with optimally sized surfaces. Including flight dynamics in the conceptual design process allows the design to move away from empirical methods while implementing modern control techniques. A challenge of flight dynamics and control is the numerous design variables, which change fluidly throughout the conceptual design process, required to evaluate the system response to some disturbance. This research focuses on addressing that challenge not by implementing higher order tools, such as computational fluid dynamics, but instead by linking the lower order tools typically used within the conceptual design process so each discipline feeds into the others. In this research, flight dynamics and control was incorporated into the conceptual design process along with the traditional disciplines of vehicle sizing, weight estimation, aerodynamics, and performance. For the controller, a linear quadratic regulator structure with constant gains has been specified to reduce the user input. Coupling all the disciplines in the conceptual design phase allows the aircraft designer to explore larger design spaces where stabilizers are sized according to dynamic response constraints rather than historical static margin and volume coefficient guidelines.
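
    The constant-gain LQR computation referred to above is standard and can be sketched with SciPy: solve the continuous algebraic Riccati equation, then K = R^{-1} B^T P. The two-state model below is an illustrative placeholder, not the paper's sized aircraft.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Constant LQR gain K minimizing the integral of x'Qx + u'Ru.

    Textbook computation only; the aircraft model below is a two-state
    placeholder with made-up numbers, not the paper's coupled sizing loop.
    """
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

A = np.array([[-0.7, 1.0],
              [-2.0, -1.2]])        # toy short-period-like dynamics
B = np.array([[0.0],
              [-4.0]])              # control effectiveness (illustrative)
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print(K, np.linalg.eigvals(A - B @ K))   # closed-loop poles should be stable
```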

  16. High-resolution subgrid models: background, grid generation, and implementation

    NASA Astrophysics Data System (ADS)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows a detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals including few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires a particular treatment. Based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with the comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on a standard PC-like hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
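
    The core subgrid bookkeeping can be sketched briefly: a coarse cell's wet volume and wet fraction follow from the high-resolution bed elevations inside it, so the cell can be wet, partially wet, or dry without a threshold. The sketch below only illustrates that idea; it is not the Casulli/Stelling scheme, which also handles fluxes and the friction correction mentioned above.

```python
import numpy as np

def cell_wet_volume(eta, subgrid_bed_elevations, subgrid_cell_area):
    """Wet volume and wet fraction of one coarse cell from subgrid bathymetry.

    `eta` is the free-surface elevation in the coarse cell and
    `subgrid_bed_elevations` the bed elevations of the high-resolution
    subcells inside it (same vertical datum, each of area
    `subgrid_cell_area`). Illustrative only.
    """
    depths = np.maximum(eta - subgrid_bed_elevations, 0.0)
    volume = subgrid_cell_area * depths.sum()
    wet_fraction = np.count_nonzero(depths) / depths.size
    return volume, wet_fraction

bed = np.array([-2.0, -1.5, -0.4, 0.3, 0.8, 1.5])   # illustrative subcell beds (m)
print(cell_wet_volume(eta=0.5, subgrid_bed_elevations=bed, subgrid_cell_area=25.0))
```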

  17. Silicon photonics for high-performance interconnection networks

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr

    2011-12-01

    We assert in the course of this work that silicon photonics has the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems, and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. This work showcases that chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, enable unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of this work, we demonstrate such feasibility of waveguides, modulators, switches, and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. Furthermore, we leverage the unique properties of available silicon photonic materials to create novel silicon photonic devices, subsystems, network topologies, and architectures to enable unprecedented performance of these photonic interconnection networks and computing systems. We show that the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. Furthermore, we explore the immense potential of all-optical functionalities implemented using parametric processing in the silicon platform, demonstrating unique methods that have the ability to revolutionize computation and communication. Silicon photonics enables new sets of opportunities that we can leverage for performance gains, as well as new sets of challenges that we must solve. Leveraging its inherent compatibility with standard fabrication techniques of the semiconductor industry, combined with its capability of dense integration with advanced microelectronics, silicon photonics also offers a clear path toward commercialization through low-cost mass-volume production. Combining empirical validations of feasibility, demonstrations of massive performance gains in large-scale systems, and the potential for commercial penetration of silicon photonics, the impact of this work will become evident in the many decades that follow.

  18. A Perspective on Computational Human Performance Models as Design Tools

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  19. Full-band quantum simulation of electron devices with the pseudopotential method: Theory, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Pala, M. G.; Esseni, D.

    2018-03-01

    This paper presents the theory, implementation, and application of a quantum transport modeling approach based on the nonequilibrium Green's function formalism and a full-band empirical pseudopotential Hamiltonian. We here propose to employ a hybrid real-space/plane-wave basis that results in a significant reduction of the computational complexity compared to a full plane-wave basis. To this purpose, we provide a theoretical formulation in the hybrid basis of the quantum confinement, the self-energies of the leads, and the coupling between the device and the leads. After discussing the theory and the implementation of the new simulation methodology, we report results for complete, self-consistent simulations of different electron devices, including a silicon Esaki diode, a thin-body silicon field effect transistor (FET), and a germanium tunnel FET. The simulated transistors have technologically relevant geometrical features with a semiconductor film thickness of about 4 nm and a channel length ranging from 10 to 17 nm. We believe that the newly proposed formalism may find applications also in transport models based on ab initio Hamiltonians, as those employed in density functional theory methods.

  20. Implementation of extended Lagrangian dynamics in GROMACS for polarizable simulations using the classical Drude oscillator model.

    PubMed

    Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D

    2015-07-15

    Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.

  1. 4P: fast computing of population genetics statistics from large DNA polymorphism panels

    PubMed Central

    Benazzo, Andrea; Panziera, Alex; Bertorelle, Giorgio

    2015-01-01

    Massive DNA sequencing has significantly increased the amount of data available for population genetics and molecular ecology studies. However, the parallel computation of simple statistics within and between populations from large panels of polymorphic sites is not yet available, making the exploratory analyses of a set or subset of data a very laborious task. Here, we present 4P (parallel processing of polymorphism panels), a stand-alone software program for the rapid computation of genetic variation statistics (including the joint frequency spectrum) from millions of DNA variants in multiple individuals and multiple populations. It handles a standard input file format commonly used to store DNA variation from empirical or simulation experiments. The computational performance of 4P was evaluated using large SNP (single nucleotide polymorphism) datasets from human genomes or obtained by simulations. 4P was faster or much faster than other comparable programs, and the impact of parallel computing using multicore computers or servers was evident. 4P is a useful tool for biologists who need a simple and rapid computer program to run exploratory population genetics analyses in large panels of genomic data. It is also particularly suitable to analyze multiple data sets produced in simulation studies. Unix, Windows, and MacOs versions are provided, as well as the source code for easier pipeline implementations. PMID:25628874

  2. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for improved performance. Generally, in optical flow computation, filtering is applied to the original input images at the initial level and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different filtering methods applied to the iterative refined Lucas-Kanade algorithm, we identify the best filtering practice. With the Gaussian smoothing filter selected, an empirical approach to estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
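
    A hedged OpenCV sketch of the pipeline discussed (Gaussian pre-filtering followed by pyramidal Lucas-Kanade) is given below; the paper's intensity-dependent rule for choosing the Gaussian standard deviation is not reproduced, so the fixed sigma is a placeholder.

```python
import numpy as np
import cv2

def lk_flow(prev_gray, next_gray, sigma=1.5):
    """Pyramidal Lucas-Kanade flow after Gaussian pre-filtering.

    `sigma` is a placeholder; the paper derives the Gaussian standard
    deviation empirically from image intensity statistics.
    """
    prev_f = cv2.GaussianBlur(prev_gray, (0, 0), sigma)   # pre-filtering step
    next_f = cv2.GaussianBlur(next_gray, (0, 0), sigma)
    pts = cv2.goodFeaturesToTrack(prev_f, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_f, next_f, pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return pts[good].reshape(-1, 2), new_pts[good].reshape(-1, 2)

# Toy pair: a bright square shifted by 2 pixels between frames.
a = np.zeros((64, 64), np.uint8); a[20:40, 20:40] = 255
b = np.zeros((64, 64), np.uint8); b[22:42, 22:42] = 255
p0, p1 = lk_flow(a, b)
print((p1 - p0).mean(axis=0))   # roughly (2, 2)
```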

  3. Intelligent Command and Control Systems for Satellite Ground Operations

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1999-01-01

    This grant, Intelligent Command and Control Systems for Satellite Ground Operations, funded by NASA Goddard Space Flight Center, has spanned almost a decade. During this time, it has supported a broad range of research addressing the changing needs of NASA operations. It is important to note that many of NASA's evolving needs, for example, use of automation to drastically reduce (e.g., 70%) operations costs, are similar to requirements in both the government and private sectors. Initially the research addressed the appropriate use of emerging and inexpensive computational technologies, such as X Windows, graphics, and color, together with COTS (commercial-off-the-shelf) hardware and software such as standard Unix workstations to re-engineer satellite operations centers. The first phase of research supported by this grant explored the development of principled design methodologies to make effective use of emerging and inexpensive technologies. The ultimate performance measures for new designs were whether or not they increased system effectiveness while decreasing costs. GT-MOCA (The Georgia Tech Mission Operations Cooperative Associate) and GT-VITA (Georgia Tech Visual and Inspectable Tutor and Assistant), whose latter stages were supported by this research, explored model-based design of collaborative operations teams and the design of intelligent tutoring systems, respectively. Implemented in proof-of-concept form for satellite operations, empirical evaluations of both, using satellite operators for the former and personnel involved in satellite control operations for the latter, demonstrated unequivocally the feasibility and effectiveness of the proposed modeling and design strategy underlying both research efforts. The proof-of-concept implementation of GT-MOCA showed that the methodology could specify software requirements that enabled a human-computer operations team to perform without any significant performance differences from the standard two-person satellite operations team. GT-VITA, using the same underlying methodology, the operator function model (OFM), and its computational implementation, OFMspert, successfully taught satellite control knowledge required by flight operations team members. The tutor structured knowledge in three ways: declarative knowledge (e.g., What is this? What does it do?), procedural knowledge, and operational skill. Operational skill is essential in real-time operations. It combines the two former knowledge types, assisting a student to use them effectively in a dynamic, multi-tasking, real-time operations environment. A high-fidelity simulator of the operator interface to the ground control system, including an almost full replication of both the human-computer interface and human interaction with the dynamic system, was used in the GT-MOCA and GT-VITA evaluations. The GT-VITA empirical evaluation, conducted with a range of 'novices' that included GSFC operations management, GSFC operations software developers, and new flight operations team members, demonstrated that GT-VITA effectively taught a wide range of knowledge in a succinct and engaging manner.

  4. Creativity, information, and consciousness: The information dynamics of thinking.

    PubMed

    Wiggins, Geraint A

    2018-05-07

    This paper presents a theory of the basic operation of mind, Information Dynamics of Thinking, which is intended for computational implementation and thence empirical testing. It is based on the information theory of Shannon, and treats the mind/brain as an information processing organ that aims to be information-efficient, in that it predicts its world, so as to use information efficiently, and regularly re-represents it, so as to store information efficiently. The theory is presented in context of a background review of various research areas that impinge upon its development. Consequences of the theory and testable hypotheses arising from it are discussed. Copyright © 2018. Published by Elsevier B.V.

  5. The benchmark aeroelastic models program: Description and highlights of initial results

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.; Eckstrom, Clinton V.; Rivera, Jose A., Jr.; Dansberry, Bryan E.; Farmer, Moses G.; Durham, Michael H.

    1991-01-01

    An experimental effort was implemented in aeroelasticity called the Benchmark Models Program. The primary purpose of this program is to provide the necessary data to evaluate computational fluid dynamic codes for aeroelastic analysis. It also focuses on increasing the understanding of the physics of unsteady flows and providing data for empirical design. An overview is given of this program and some results obtained in the initial tests are highlighted. The tests that were completed include measurement of unsteady pressures during flutter of rigid wing with a NACA 0012 airfoil section and dynamic response measurements of a flexible rectangular wing with a thick circular arc airfoil undergoing shock boundary layer oscillations.

  6. Intent inferencing by an intelligent operator's associate - A validation study

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    1988-01-01

    In the supervisory control of a complex, dynamic system, one potential form of aiding for the human operator is a computer-based operator's associate. The design philosophy of the operator's associate is that of 'amplifying' rather than automating human skills. In particular, the associate possesses understanding and control properties. Understanding allows it to infer operator intentions and thus form the basis for context-dependent advice and reminders; control properties allow the human operator to dynamically delegate individual tasks or subfunctions to the associate. This paper focuses on the design, implementation, and validation of the intent inferencing function. Two validation studies are described which empirically demonstrate the viability of the proposed approach to intent inferencing.

  7. Empirical Data Collection and Analysis Using Camtasia and Transana

    ERIC Educational Resources Information Center

    Thorsteinsson, Gisli; Page, Tom

    2009-01-01

    One of the possible techniques for collecting empirical data is video recording of a computer screen with specific screen capture software. This method for collecting empirical data shows how students use the BSCWII (Be Smart Cooperate Worldwide--a web-based collaboration/groupware environment) to coordinate their work and collaborate in…

  8. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.

  9. Increasing Chemical Space Coverage by Combining Empirical and Computational Fragment Screens

    PubMed Central

    2015-01-01

    Most libraries for fragment-based drug discovery are restricted to 1,000–10,000 compounds, but over 500,000 fragments are commercially available and potentially accessible by virtual screening. Whether this larger set would increase chemotype coverage, and whether a computational screen can pragmatically prioritize them, is debated. To investigate this question, a 1281-fragment library was screened by nuclear magnetic resonance (NMR) against AmpC β-lactamase, and hits were confirmed by surface plasmon resonance (SPR). Nine hits with novel chemotypes were confirmed biochemically with KI values from 0.2 to low mM. We also computationally docked 290,000 purchasable fragments with chemotypes unrepresented in the empirical library, finding 10 that had KI values from 0.03 to low mM. Though less novel than those discovered by NMR, the docking-derived fragments filled chemotype holes from the empirical library. Crystal structures of nine of the fragments in complex with AmpC β-lactamase revealed new binding sites and explained the relatively high affinity of the docking-derived fragments. The existence of chemotype holes is likely a general feature of fragment libraries, as calculation suggests that to represent the fragment substructures of even known biogenic molecules would demand a library of minimally over 32,000 fragments. Combining computational and empirical fragment screens enables the discovery of unexpected chemotypes, here by the NMR screen, while capturing chemotypes missing from the empirical library and tailored to the target, with little extra cost in resources. PMID:24807704

  10. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
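
    A minimal NumPy sketch of the general idea (not the authors' implementation): build 1-D correlation matrices for each coordinate direction, truncate their eigendecompositions (the empirical orthogonal functions), and assemble a low-rank factor of the full 3-D correlation operator as a Kronecker product of the 1-D factors. The Gaussian correlation shape, grid sizes, and truncation ranks below are illustrative assumptions.

      import numpy as np

      def corr_1d(n, length_scale):
          # 1-D Gaussian correlation matrix on a unit grid (illustrative choice)
          x = np.linspace(0.0, 1.0, n)
          d = x[:, None] - x[None, :]
          return np.exp(-0.5 * (d / length_scale) ** 2)

      def truncated_eof(c, rank):
          # Leading eigenpairs (EOFs) of a 1-D correlation matrix, scaled so F @ F.T ~ c
          w, v = np.linalg.eigh(c)
          idx = np.argsort(w)[::-1][:rank]
          return v[:, idx] * np.sqrt(w[idx])

      # 1-D factors in the three coordinate directions
      fx = truncated_eof(corr_1d(40, 0.2), rank=8)
      fy = truncated_eof(corr_1d(40, 0.2), rank=8)
      fz = truncated_eof(corr_1d(20, 0.3), rank=5)

      # The 3-D factor is the Kronecker product of the 1-D factors, so C ~ F @ F.T.
      # In practice F would not be formed explicitly; it is applied one direction at a time.
      F = np.kron(np.kron(fx, fy), fz)
      print(F.shape)  # (32000, 320): a low-rank factor of a 40*40*20 correlation matrix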

  11. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
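
    The NASA code itself is written in Fortran 2008; the sketch below only illustrates, in Python, the default 3-stage/3rd-order strong-stability-preserving Runge-Kutta step (the standard Shu-Osher form) applied to a toy right-hand side. The linear-advection example and first-order upwind discretization are assumptions for illustration, not part of the FR solver.

      import numpy as np

      def ssp_rk3_step(u, dt, rhs):
          # One 3-stage, 3rd-order strong-stability-preserving Runge-Kutta step (Shu-Osher form)
          u1 = u + dt * rhs(u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
          return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

      # Toy problem: linear advection du/dt = -a du/dx on a periodic grid
      a, n = 1.0, 200
      dx = 1.0 / n
      x = np.arange(n) * dx
      u = np.exp(-100.0 * (x - 0.5) ** 2)
      rhs = lambda v: -a * (v - np.roll(v, 1)) / dx  # first-order upwind difference

      dt = 0.4 * dx / a
      for _ in range(200):
          u = ssp_rk3_step(u, dt, rhs)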

  12. A cross-national comparison of incident reporting systems implemented in German and Swiss hospitals.

    PubMed

    Manser, Tanja; Imhof, Michael; Lessing, Constanze; Briner, Matthias

    2017-06-01

    This study aimed to empirically compare incident reporting systems (IRS) in two European countries and to explore the relationship of IRS characteristics with context factors such as hospital characteristics and characteristics of clinical risk management (CRM). We performed exploratory, secondary analyses of data on characteristics of IRS from nationwide surveys of CRM practices. The survey was originally sent to 2136 hospitals in Germany and Switzerland. Persons responsible for CRM in 622 hospitals completed the survey (response rate 29%). Differences between IRS in German and Swiss hospitals were assessed using chi-square, Fisher's exact and Freeman-Halton tests, as appropriate. To explore interrelations between IRS characteristics and context factors (i.e. hospital and CRM characteristics) we computed Cramér's V. Comparing participating hospitals across countries, Swiss hospitals had implemented IRS earlier, more frequently and more often provided introductory IRS training systematically. German hospitals had more frequently systematically implemented standardized procedures for event analyses. IRS characteristics were significantly associated with hospital characteristics such as hospital type as well as with CRM characteristics such as existence of strategic CRM objectives and of a dedicated position for central CRM coordination. This study contributes to an improved understanding of differences in the way IRS are set up in two European countries and explores related context factors. This opens up new possibilities for empirically informed, strategic interventions to further improve dissemination of IRS and thus support hospitals in their efforts to move patient safety forward. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
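
    For readers unfamiliar with the association measure used here, the sketch below computes Cramér's V from a contingency table via the chi-square statistic; the hospital-type-by-training counts in the example are hypothetical and are not the study's data.

      import numpy as np
      from scipy.stats import chi2_contingency

      def cramers_v(table):
          # Cramér's V for a two-way contingency table of two categorical variables
          chi2, _, _, _ = chi2_contingency(table)
          n = table.sum()
          r, c = table.shape
          return np.sqrt(chi2 / (n * (min(r, c) - 1)))

      # Hypothetical counts: hospital type (rows) vs. systematic IRS training (columns)
      table = np.array([[35, 15],
                        [20, 40],
                        [10, 25]])
      print(round(cramers_v(table), 3))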

  13. Lost in folding space? Comparing four variants of the thermodynamic model for RNA secondary structure prediction.

    PubMed

    Janssen, Stefan; Schudoma, Christian; Steger, Gerhard; Giegerich, Robert

    2011-11-03

    Many bioinformatics tools for RNA secondary structure analysis are based on a thermodynamic model of RNA folding. They predict a single, "optimal" structure by free energy minimization, they enumerate near-optimal structures, they compute base pair probabilities and dot plots, representative structures of different abstract shapes, or Boltzmann probabilities of structures and shapes. Although all programs refer to the same physical model, they implement it with considerable variation for different tasks, and little is known about the effects of heuristic assumptions and model simplifications used by the programs on the outcome of the analysis. We extract four different models of the thermodynamic folding space which underlie the programs RNAFOLD, RNASHAPES, and RNASUBOPT. Their differences lie within the details of the energy model and the granularity of the folding space. We implement probabilistic shape analysis for all models, and introduce the shape probability shift as a robust measure of model similarity. Using four data sets derived from experimentally solved structures, we provide a quantitative evaluation of the model differences. We find that search space granularity affects the computed shape probabilities less than the over- or underapproximation of free energy by a simplified energy model. Still, the approximations perform similar enough to implementations of the full model to justify their continued use in settings where computational constraints call for simpler algorithms. On the side, we observe that the rarely used level 2 shapes, which predict the complete arrangement of helices, multiloops, internal loops and bulges, include the "true" shape in a rather small number of predicted high probability shapes. This calls for an investigation of new strategies to extract high probability members from the (very large) level 2 shape space of an RNA sequence. We provide implementations of all four models, written in a declarative style that makes them easy to be modified. Based on our study, future work on thermodynamic RNA folding may make a choice of model based on our empirical data. It can take our implementations as a starting point for further program development.

  14. Empirical Analysis of Optical Attenuator Performance in Quantum Key Distribution Systems Using a Particle Model

    DTIC Science & Technology

    2012-03-01

    AFIT/GCS/ENG/12-01; distribution is unlimited. ...challenging as the complexity of actual implementation specifics are considered. Two components common to most quantum key distribution...

  15. Integrating computers in physics teaching: An Indian perspective

    NASA Astrophysics Data System (ADS)

    Jolly, Pratibha

    1997-03-01

    The University of Delhi has around twenty affiliated undergraduate colleges that offer a three-year physics major program to nearly five hundred students. All follow a common curriculum and submit to a centralized examination. This structure of tertiary education makes it relatively difficult to implement radical or rapid changes in the formal curriculum. The technology onslaught has, at last, irrevocably altered this; computers are carving new windows in old citadels and defining the agenda in teaching-learning environments the world over. In 1992, we formally introduced Computational Physics as a core paper in the second year of the Bachelor's program. As yet, the emphasis is on imparting familiarity with computers, a programming language and rudiments of numerical algorithms. In a parallel development, we also introduced a strong component of instrumentation with modern day electronic devices, including microprocessors. Many of us, however, would like to see not just computer presence in our curriculum but a totally new curriculum and teaching strategy that exploits, befittingly, the new technology. The current challenge is to realize in practice the full potential of the computer as the proverbial versatile tool: interfacing laboratory experiments for real-time acquisition and control of data; enabling rigorous analysis and data modeling; simulating micro-worlds and real life phenomena; establishing new cognitive linkages between theory and empirical observation; and between abstract constructs and visual representations.

  16. Prep-ME Software Implementation and Enhancement

    DOT National Transportation Integrated Search

    2017-09-01

    Highway agencies across the United States are moving from empirical design procedures towards the mechanistic-empirical (ME) based pavement design. Even though the Pavement ME Design presents a new paradigm shift with several dramatic improvements, i...

  17. Calibrating the mechanistic-empirical pavement design guide for Kansas.

    DOT National Transportation Integrated Search

    2015-04-01

    The Kansas Department of Transportation (KDOT) is moving toward the implementation of the new American : Association of State Highway and Transportation Officials (AASHTO) Mechanistic-Empirical Pavement Design Guide (MEPDG) : for pavement design. The...

  18. A generic method for evaluating crowding in the emergency department.

    PubMed

    Eiset, Andreas Halgreen; Erlandsen, Mogens; Møllekær, Anders Brøns; Mackenhauer, Julie; Kirkegaard, Hans

    2016-06-14

    Crowding in the emergency department (ED) has been studied intensively using complicated non-generic methods that may prove difficult to implement in a clinical setting. This study sought to develop a generic method to describe and analyse crowding from measurements readily available in the ED and to test the developed method empirically in a clinical setting. We conceptualised a model with ED patient flow divided into separate queues identified by timestamps for predetermined events. With temporal resolution of 30 min, queue lengths were computed as Q(t + 1) = Q(t) + A(t) - D(t), with A(t) = number of arrivals, D(t) = number of departures and t = time interval. Maximum queue lengths for each shift of each day were found and risks of crowding computed. All tests were performed using non-parametric methods. The method was applied in the ED of Aarhus University Hospital, Denmark utilising an open cohort design with prospectively collected data from a one-year observation period. By employing the timestamps already assigned to the patients while in the ED, a generic queuing model can be computed from which crowding can be described and analysed in detail. Depending on availability of data, the model can be extended to include several queues increasing the level of information. When applying the method empirically, 41,693 patients were included. The studied ED had a high risk of bed occupancy rising above 100 % during day and evening shift, especially on weekdays. Further, a 'carry over' effect was shown between shifts and days. The presented method offers an easy and generic way to get detailed insight into the dynamics of crowding in an ED.
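
    A small pandas sketch of the queuing computation described above, assuming only arrival and departure timestamps are available: events are binned at the 30-minute resolution and Q(t + 1) = Q(t) + A(t) - D(t) is accumulated from an initially empty department. The timestamps and column names are hypothetical.

      import pandas as pd

      def queue_lengths(arrivals, departures, freq="30min"):
          # Queue length per interval: Q(t+1) = Q(t) + A(t) - D(t), starting from Q = 0
          a = pd.Series(1, index=pd.to_datetime(arrivals)).resample(freq).sum()
          d = pd.Series(1, index=pd.to_datetime(departures)).resample(freq).sum()
          counts = pd.concat([a.rename("A"), d.rename("D")], axis=1).fillna(0)
          counts["Q"] = (counts["A"] - counts["D"]).cumsum()
          return counts

      # Hypothetical ED timestamps (one queue; the model extends to several queues)
      arrivals   = ["2016-01-04 08:05", "2016-01-04 08:20", "2016-01-04 09:10", "2016-01-04 09:40"]
      departures = ["2016-01-04 09:15", "2016-01-04 10:05", "2016-01-04 11:30", "2016-01-04 11:45"]
      print(queue_lengths(arrivals, departures))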

  19. Empirical source strength correlations for rans-based acoustic analogy methods

    NASA Astrophysics Data System (ADS)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources; quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far-field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions of a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate that there are underlying flaws in JeNo's ability to predict the behavior of a hot jet's acoustic signature at certain rear observer angles, and that this correlation correction is not able to correct these flaws.
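
    The thesis abstract does not give the correction function itself; the sketch below only illustrates how a cross-correlation coefficient between two mean-square source contributions would enter the overall level, with a placeholder (hypothetical) dependence on observer angle and jet Mach number standing in for the empirically derived correction.

      import numpy as np

      def combined_spl(msp_quadrupole, msp_dipole, corr, p_ref=2e-5):
          # <p^2> = <p_q^2> + <p_d^2> + 2*corr*sqrt(<p_q^2><p_d^2>); corr = 0 recovers
          # the original assumption of independent (uncorrelated) sources
          msp = msp_quadrupole + msp_dipole + 2.0 * corr * np.sqrt(msp_quadrupole * msp_dipole)
          return 10.0 * np.log10(msp / p_ref ** 2)

      def corr_correction(theta_deg, mach):
          # Placeholder functional form, NOT the empirically derived correction
          return 0.3 * mach * np.cos(np.radians(theta_deg))

      print(combined_spl(4e-4, 1e-4, corr_correction(60.0, 0.9)))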

  20. Implementation strategies for guidelines at ICUs: a systematic review.

    PubMed

    Jordan, Portia; Mpasa, Ferestas; Ten Ham-Baloyi, Wilma; Bowers, Candice

    2017-05-08

    Purpose: The purpose of this paper is to critically analyze empirical studies related to the implementation strategies for clinical practice guidelines (CPGs) in intensive care units (ICUs). Design/methodology/approach: A systematic review with a narrative synthesis adapted from Popay et al.'s method was conducted. A search using CINAHL, Google Scholar, Academic Search Complete, Cochrane Register for Randomized Controlled Trials, MEDLINE via PubMed and grey literature was conducted in 2014 and updated in 2016 (August). After reading the abstracts, titles and full-text articles, 11 (n=11) research studies met the inclusion criteria. Findings: After critical appraisal, using the Joanna Briggs Critical Appraisal Tools, eight randomized controlled trials conducted in adult and neonatal ICUs using implementation strategies remained. Popay et al.'s method for narrative synthesis was adapted and used to analyze and synthesize the data and formulate concluding statements. Included studies found that multi-faceted strategies appear to be more effective than single strategies. Strategies mostly used were printed educational materials, information sessions, audit, feedback, use of champion leaders, educational outreach visits, and computer or internet usage. Practical training, monitoring visits and grand rounds were less used. Practical implications: Findings can be used by clinicians to implement the best combination of multi-faceted implementation strategies in the ICUs in order to enhance the optimal use of CPGs. Originality/value: No systematic review was previously done on the implementation strategies that should be used best for optimal CPG implementation in the ICU.

  1. A Systematic Review of Strategies for Implementing Empirically Supported Mental Health Interventions

    PubMed Central

    Powell, Byron J.; Proctor, Enola K.; Glass, Joseph E.

    2013-01-01

    Objective This systematic review examines experimental studies that test the effectiveness of strategies intended to integrate empirically supported mental health interventions into routine care settings. Our goal was to characterize the state of the literature and to provide direction for future implementation studies. Methods A literature search was conducted using electronic databases and a manual search. Results Eleven studies were identified that tested implementation strategies with a randomized (n = 10) or controlled clinical trial design (n = 1). The wide range of clinical interventions, implementation strategies, and outcomes evaluated precluded meta-analysis. However, the majority of studies (n = 7; 64%) found a statistically significant effect in the hypothesized direction for at least one implementation or clinical outcome. Conclusions There is a clear need for more rigorous research on the effectiveness of implementation strategies, and we provide several suggestions that could improve this research area. PMID:24791131

  2. Empirical Analysis of Green Supply Chain Management Practices in Indian Automobile Industry

    NASA Astrophysics Data System (ADS)

    Luthra, S.; Garg, D.; Haleem, A.

    2014-04-01

    Environmental sustainability and green environmental issues are increasingly popular among researchers and supply chain practitioners. An attempt has been made to identify and empirically analyze green supply chain management (GSCM) practices in the Indian automobile industry. Six main GSCM practices (comprising 37 sub-practices) and four expected performance outcomes (comprising 16 performance measures) were identified from a literature review. A questionnaire-based survey was conducted to validate these practices and performance outcomes. In total, 123 complete questionnaires were collected from Indian automobile organizations and used for empirical analysis of GSCM practices in the Indian automobile industry. Descriptive statistics were used to assess the current implementation status of GSCM practices, and multiple regression analysis was carried out to estimate the impact of currently adopted GSCM practices on the expected organizational performance outcomes. The results suggest that environmental, economic, social and operational performances improve with the implementation of GSCM practices. This paper may help readers understand various GSCM implementation issues and help practicing managers improve their performance in the supply chain.

  3. Developing an Empirical Model for Jet-Surface Interaction Noise

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2014-01-01

    The process of developing an empirical model for jet-surface interaction noise is described and the resulting model evaluated. Jet-surface interaction noise is generated when the high-speed engine exhaust from modern tightly integrated or conventional high-bypass ratio engine aircraft strikes or flows over the airframe surfaces. An empirical model based on an existing experimental database is developed for use in preliminary design system level studies where computation speed and range of configurations is valued over absolute accuracy to select the most promising (or eliminate the worst) possible designs. The model developed assumes that the jet-surface interaction noise spectra can be separated from the jet mixing noise and described as a parabolic function with three coefficients: peak amplitude, spectral width, and peak frequency. These coefficients are fit to functions of surface length and distance from the jet lipline to form a characteristic spectrum, which is then adjusted for changes in jet velocity and/or observer angle using scaling laws from published theoretical and experimental work. The resulting model is then evaluated for its ability to reproduce the characteristic spectra and then for reproducing spectra measured at other jet velocities and observer angles; successes and limitations are discussed considering the complexity of the jet-surface interaction noise versus the desire for a model that is simple to implement and quick to execute.
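
    A sketch of the three-coefficient spectral shape described above, under the assumption that "parabolic" refers to SPL versus log-frequency; the numerical values are placeholders, and the published model additionally fits the coefficients to surface length and lipline distance and rescales them with jet velocity and observer angle.

      import numpy as np

      def jsi_spectrum(freq, peak_spl, width, peak_freq):
          # Parabolic spectral shape in SPL vs. log-frequency (one plausible reading
          # of the three coefficients: peak amplitude, spectral width, peak frequency)
          return peak_spl - width * np.log10(freq / peak_freq) ** 2

      freq = np.logspace(2, 4, 50)  # 100 Hz .. 10 kHz
      spl = jsi_spectrum(freq, peak_spl=95.0, width=30.0, peak_freq=800.0)
      print(spl.max(), freq[spl.argmax()])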

  4. Developing an Empirical Model for Jet-Surface Interaction Noise

    NASA Technical Reports Server (NTRS)

    Brown, Clif

    2014-01-01

    The process of developing an empirical model for jet-surface interaction noise is described and the resulting model evaluated. Jet-surface interaction noise is generated when the high-speed engine exhaust from modern tightly integrated or conventional high-bypass ratio engine aircraft strikes or flows over the airframe surfaces. An empirical model based on an existing experimental database is developed for use in preliminary design system level studies where computation speed and range of configurations is valued over absolute accuracy to select the most promising (or eliminate the worst) possible designs. The model developed assumes that the jet-surface interaction noise spectra can be separated from the jet mixing noise and described as a parabolic function with three coefficients: peak amplitude, spectral width, and peak frequency. These coefficients are fit to functions of surface length and distance from the jet lipline to form a characteristic spectrum, which is then adjusted for changes in jet velocity and/or observer angle using scaling laws from published theoretical and experimental work. The resulting model is then evaluated for its ability to reproduce the characteristic spectra and then for reproducing spectra measured at other jet velocities and observer angles; successes and limitations are discussed considering the complexity of the jet-surface interaction noise versus the desire for a model that is simple to implement and quick to execute.

  5. Implementation and local calibration of the MEPDG transfer functions in Wyoming.

    DOT National Transportation Integrated Search

    2015-11-01

    The Wyoming Department of Transportation (WYDOT) currently uses the empirical AASHTO Design for Design of : Pavement Structures as their standard pavement design procedure. WYDOT plans to transition to the Mechanistic : Empirical Pavement Design Guid...

  6. Mechanistic-empirical pavement design guide calibration for pavement rehabilitation.

    DOT National Transportation Integrated Search

    2013-01-01

    The Oregon Department of Transportation (ODOT) is in the process of implementing the recently introduced AASHTO : Mechanistic-Empirical Pavement Design Guide (MEPDG) for new pavement sections. The majority of pavement work : conducted by ODOT involve...

  7. Calibrating the mechanistic-empirical pavement design guide for Kansas : [technical summary].

    DOT National Transportation Integrated Search

    2015-04-01

    The Kansas Department of Transportation (KDOT) is moving toward the implementation : of the new American Association of State Highway and Transportation Officials : (AASHTO) Mechanistic-Empirical Pavement Design Guide (MEPDG) for pavement : design. T...

  8. Dynamic Divisive Normalization Predicts Time-Varying Value Coding in Decision-Related Circuits

    PubMed Central

    LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W.

    2014-01-01

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. PMID:25429145
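
    A generic sketch of a dynamic divisive-normalization circuit of the kind discussed above (a simplification, not the paper's exact equations): each unit's drive is divided by a slower pooled signal, which produces a phasic transient followed by a sustained, normalized response. The time constants and semi-saturation constant are illustrative assumptions.

      import numpy as np

      def simulate(values, t_max=2.0, dt=1e-3, tau_r=0.1, tau_g=0.2, sigma=1.0):
          # Fast responses r are divisively normalized by a slower pooled signal g
          n_steps = int(t_max / dt)
          r = np.zeros(len(values))
          g = 0.0
          trace = np.empty((n_steps, len(values)))
          for k in range(n_steps):
              dr = (-r + values / (sigma + g)) / tau_r
              dg = (-g + r.sum()) / tau_g
              r, g = r + dt * dr, g + dt * dg
              trace[k] = r
          return trace

      trace = simulate(np.array([10.0, 5.0, 1.0]))
      print(trace[50], trace[-1])  # early transient (value coding) vs. late normalized response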

  9. Supporting Implementation of Evidence-Based Practices through Practice-Based Coaching

    ERIC Educational Resources Information Center

    Snyder, Patricia A; Hemmeter, Mary Louise; Fox, Lise

    2015-01-01

    In active implementation science frameworks, coaching has been described as an important competency "driver" to ensure evidence-based practices are implemented as intended. Empirical evidence also has identified coaching as a promising job-embedded professional development strategy to support implementation of quality teaching practices.…

  10. An Evolutionary Method for Financial Forecasting in Microscopic High-Speed Trading Environment.

    PubMed

    Huang, Chien-Feng; Li, Hsu-Chih

    2017-01-01

    The advancement of information technology in financial applications nowadays has led to fast market-driven events that prompt flash decision-making and actions issued by computer algorithms. As a result, today's markets experience intense activity in the highly dynamic environment where trading systems respond to others at a much faster pace than before. This new breed of technology involves the implementation of high-speed trading strategies which generate a significant portion of activity in the financial markets and present researchers with a wealth of information not available in traditional low-speed trading environments. In this study, we aim at developing feasible computational intelligence methodologies, particularly genetic algorithms (GA), to shed light on high-speed trading research using price data of stocks on the microscopic level. Our empirical results show that the proposed GA-based system is able to significantly improve the accuracy of price-movement prediction, and we expect this GA-based methodology to advance the current state of research for high-speed trading and other relevant financial applications.

  11. Modeling Magnetic Properties in EZTB

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; vonAllmen, Paul

    2007-01-01

    A software module that calculates magnetic properties of a semiconducting material has been written for incorporation into, and execution within, the Easy (Modular) Tight-Binding (EZTB) software infrastructure. [EZTB is designed to model the electronic structures of semiconductor devices ranging from bulk semiconductors, to quantum wells, quantum wires, and quantum dots. EZTB implements an empirical tight-binding mathematical model of the underlying physics.] This module can model the effect of a magnetic field applied along any direction and does not require any adjustment of model parameters. The module has thus far been applied to study the performance of silicon-based quantum computers in the presence of magnetic fields and of miscut angles in quantum wells. The module is expected to assist experimentalists in fabricating a spin qubit in a Si/SiGe quantum dot. This software can be executed on almost any Unix operating system, utilizes parallel computing, and can be run as a Web-portal application program. The module has been validated by comparison of its predictions with experimental data available in the literature.

  12. Density-functional approach to the three-body dispersion interaction based on the exchange dipole moment

    PubMed Central

    Proynov, Emil; Liu, Fenglai; Gan, Zhengting; Wang, Matthew; Kong, Jing

    2015-01-01

    We implement and compute the density functional nonadditive three-body dispersion interaction using a combination of Tang-Karplus formalism and the exchange-dipole moment model of Becke and Johnson. The computation of the C9 dispersion coefficients is done in a non-empirical fashion. The obtained C9 values of a series of noble atom triplets agree well with highly accurate values in the literature. We also calculate the C9 values for a series of benzene trimers and find a good agreement with high-level ab initio values reported recently in the literature. For the question of damping of the three-body dispersion at short distances, we propose two damping schemes and optimize them based on the benzene trimer data and on analytic potentials of He3 and Ar3 trimers fitted to the results of high-level wavefunction theories available in the literature. Both damping schemes respond well to the optimization of two parameters. PMID:26328836
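
    For context, the nonadditive triple-dipole (Axilrod-Teller-Muto) energy that the C9 coefficients parameterize can be sketched as below; the Fermi-type damping shown is only an illustrative stand-in, not either of the paper's two schemes, and the C9 value used is a placeholder.

      import numpy as np

      def atm_energy(r_ab, r_bc, r_ca, c9, damping=None):
          # Axilrod-Teller-Muto triple-dipole energy for a triangle of atoms:
          # E = C9 * (1 + 3 cos(a) cos(b) cos(c)) / (r_ab * r_bc * r_ca)**3
          cos_a = (r_ab ** 2 + r_ca ** 2 - r_bc ** 2) / (2 * r_ab * r_ca)  # angle at atom A
          cos_b = (r_ab ** 2 + r_bc ** 2 - r_ca ** 2) / (2 * r_ab * r_bc)  # angle at atom B
          cos_c = (r_bc ** 2 + r_ca ** 2 - r_ab ** 2) / (2 * r_bc * r_ca)  # angle at atom C
          e = c9 * (1.0 + 3.0 * cos_a * cos_b * cos_c) / (r_ab * r_bc * r_ca) ** 3
          return e * damping(r_ab, r_bc, r_ca) if damping else e

      def fermi_damping(r_ab, r_bc, r_ca, r0=3.0, beta=4.0):
          # Illustrative short-range damping factor (NOT one of the paper's two schemes)
          f = lambda r: 1.0 / (1.0 + np.exp(-beta * (r / r0 - 1.0)))
          return f(r_ab) * f(r_bc) * f(r_ca)

      # Equilateral Ar3-like triangle in atomic units; the C9 value is a placeholder
      print(atm_energy(7.1, 7.1, 7.1, c9=518.0, damping=fermi_damping))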

  13. Density-functional approach to the three-body dispersion interaction based on the exchange dipole moment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proynov, Emil; Wang, Matthew; Kong, Jing, E-mail: jing.kong@mtsu.edu

    We implement and compute the density functional nonadditive three-body dispersion interaction using a combination of Tang-Karplus formalism and the exchange-dipole moment model of Becke and Johnson. The computation of the C9 dispersion coefficients is done in a non-empirical fashion. The obtained C9 values of a series of noble atom triplets agree well with highly accurate values in the literature. We also calculate the C9 values for a series of benzene trimers and find a good agreement with high-level ab initio values reported recently in the literature. For the question of damping of the three-body dispersion at short distances, we propose two damping schemes and optimize them based on the benzene trimer data and on analytic potentials of He3 and Ar3 trimers fitted to the results of high-level wavefunction theories available in the literature. Both damping schemes respond well to the optimization of two parameters.

  14. An Enduring Dialogue between Computational and Empirical Vision.

    PubMed

    Martinez-Conde, Susana; Macknik, Stephen L; Heeger, David J

    2018-04-01

    In the late 1970s, key discoveries in neurophysiology, psychophysics, computer vision, and image processing had reached a tipping point that would shape visual science for decades to come. David Marr and Ellen Hildreth's 'Theory of edge detection', published in 1980, set out to integrate the newly available wealth of data from behavioral, physiological, and computational approaches in a unifying theory. Although their work had wide and enduring ramifications, their most important contribution may have been to consolidate the foundations of the ongoing dialogue between theoretical and empirical vision science. Copyright © 2018 Elsevier Ltd. All rights reserved.
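
    As a reminder of the algorithmic content of Marr and Hildreth's theory, the sketch below marks zero-crossings of a Laplacian-of-Gaussian filtered image; the synthetic test image and the simple sign-change test are simplifying assumptions for illustration.

      import numpy as np
      from scipy import ndimage

      def marr_hildreth_edges(image, sigma=2.0):
          # Smooth with a Gaussian and take the Laplacian (done jointly by gaussian_laplace),
          # then mark pixels where the result changes sign against a neighbour
          log = ndimage.gaussian_laplace(image.astype(float), sigma)
          zc = np.zeros_like(log, dtype=bool)
          zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
          zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
          return zc

      # Synthetic test image: a bright square on a dark background
      img = np.zeros((64, 64))
      img[16:48, 16:48] = 1.0
      edges = marr_hildreth_edges(img)
      print(edges.sum(), "edge pixels")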

  15. System and method for measuring residual stress

    DOEpatents

    Prime, Michael B.

    2002-01-01

    The present invention is a method and system for determining the residual stress within an elastic object. In the method, an elastic object is cut along a path having a known configuration. The cut creates a portion of the object having a new free surface. The free surface then deforms to a contour which is different from the path. Next, the contour is measured to determine how much deformation has occurred across the new free surface. Points defining the contour are collected in an empirical data set. The portion of the object is then modeled in a computer simulator. The points in the empirical data set are entered into the computer simulator. The computer simulator then calculates the residual stress along the path which caused the points within the object to move to the positions measured in the empirical data set. The calculated residual stress is then presented in a useful format to an analyst.

  16. An empirical method for computing leeside centerline heating on the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Helms, V. T., III

    1981-01-01

    An empirical method is presented for computing top centerline heating on the Space Shuttle Orbiter at simulated reentry conditions. It is shown that the Shuttle's top centerline can be thought of as being under the influence of a swept cylinder flow field. The effective geometry of the flow field, as well as top centerline heating, are directly related to oil-flow patterns on the upper surface of the fuselage. An empirical turbulent swept cylinder heating method was developed based on these considerations. The method takes into account the effects of the vortex-dominated leeside flow field without actually having to compute the detailed properties of such a complex flow. The heating method closely predicts experimental heat-transfer values on the top centerline of a Shuttle model at Mach numbers of 6 and 10 over a wide range in Reynolds number and angle of attack.

  17. Impact of AMS-02 Measurements on Reducing GCR Model Uncertainties

    NASA Technical Reports Server (NTRS)

    Slaba, T. C.; O'Neill, P. M.; Golge, S.; Norbury, J. W.

    2015-01-01

    For vehicle design, shield optimization, mission planning, and astronaut risk assessment, the exposure from galactic cosmic rays (GCR) poses a significant and complex problem both in low Earth orbit and in deep space. To address this problem, various computational tools have been developed to quantify the exposure and risk in a wide range of scenarios. Generally, the tool used to describe the ambient GCR environment provides the input into subsequent computational tools and is therefore a critical component of end-to-end procedures. Over the past few years, several researchers have independently and very carefully compared some of the widely used GCR models to more rigorously characterize model differences and quantify uncertainties. All of the GCR models studied rely heavily on calibrating to available near-Earth measurements of GCR particle energy spectra, typically over restricted energy regions and short time periods. In this work, we first review recent sensitivity studies quantifying the ions and energies in the ambient GCR environment of greatest importance to exposure quantities behind shielding. Currently available measurements used to calibrate and validate GCR models are also summarized within this context. It is shown that the AMS-II measurements will fill a critically important gap in the measurement database. The emergence of AMS-II measurements also provides a unique opportunity to validate existing models against measurements that were not used to calibrate free parameters in the empirical descriptions. Discussion is given regarding rigorous approaches to implement the independent validation efforts, followed by recalibration of empirical parameters.

  18. Machine learning strategies for systems with invariance properties

    NASA Astrophysics Data System (ADS)

    Ling, Julia; Jones, Reese; Templeton, Jeremy

    2016-08-01

    In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
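
    A toy illustration of the two strategies compared above, using rotation invariance of a scalar target in 2-D rather than the paper's turbulence or elasticity cases: method 1 trains on an invariant input basis (here the vector norm), while method 2 trains the same learner on rotation-augmented raw inputs. The random-forest learner, data, and rotation angles are assumptions for illustration.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 2))                 # raw 2-D inputs
      y = np.exp(-np.linalg.norm(X, axis=1))         # target depends only on the invariant |x|

      # Method 1: embed the invariance -- train on an invariant input basis (|x|)
      m1 = RandomForestRegressor(n_estimators=50, random_state=0)
      m1.fit(np.linalg.norm(X, axis=1, keepdims=True), y)

      # Method 2: teach the invariance -- augment raw inputs with random rotations
      angles = rng.uniform(0, 2 * np.pi, size=len(X))
      c, s = np.cos(angles), np.sin(angles)
      X_rot = np.stack([c * X[:, 0] - s * X[:, 1], s * X[:, 0] + c * X[:, 1]], axis=1)
      m2 = RandomForestRegressor(n_estimators=50, random_state=0)
      m2.fit(np.vstack([X, X_rot]), np.concatenate([y, y]))

      # Evaluate on rotated test data: the invariant-basis model generalizes by construction
      Xt = rng.normal(size=(500, 2))
      yt = np.exp(-np.linalg.norm(Xt, axis=1))
      theta = np.pi / 3
      R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
      Xt_rot = Xt @ R.T
      print(m1.score(np.linalg.norm(Xt_rot, axis=1, keepdims=True), yt), m2.score(Xt_rot, yt))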

  19. Implementation of the AASHTO mechanistic-empirical pavement design guide for Colorado.

    DOT National Transportation Integrated Search

    2000-01-01

    The objective of this project was to integrate the American Association of State Highway and Transportation Officials (AASHTO) Mechanistic-Empirical Pavement Design Guide, Interim Edition: A Manual of Practice and its accompanying software into the d...

  20. First principles materials design of novel functional oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, Valentino R.; Voas, Brian K.; Bridges, Craig A.

    2016-05-31

    We review our efforts to develop and implement robust computational approaches for exploring phase stability to facilitate the prediction-to-synthesis process of novel functional oxides. These efforts focus on a synergy between (i) electronic structure calculations for properties predictions, (ii) phenomenological/empirical methods for examining phase stability as related to both phase segregation and temperature-dependent transitions and (iii) experimental validation through synthesis and characterization. We illustrate this philosophy by examining an inaugural study that seeks to discover novel functional oxides with high piezoelectric responses. Lastly, our results show progress towards developing a framework through which solid solutions can be studied to predict materials with enhanced properties that can be synthesized and remain active under device relevant conditions.

  1. Advances in analytical chemistry

    NASA Technical Reports Server (NTRS)

    Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.

    1991-01-01

    Implementation of computer programs based on multivariate statistical algorithms makes possible obtaining reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.

  2. Dynamic mechanistic explanation: computational modeling of circadian rhythms as an exemplar for cognitive science.

    PubMed

    Bechtel, William; Abrahamsen, Adele

    2010-09-01

    We consider computational modeling in two fields: chronobiology and cognitive science. In circadian rhythm models, variables generally correspond to properties of parts and operations of the responsible mechanism. A computational model of this complex mechanism is grounded in empirical discoveries and contributes a more refined understanding of the dynamics of its behavior. In cognitive science, on the other hand, computational modelers typically advance de novo proposals for mechanisms to account for behavior. They offer indirect evidence that a proposed mechanism is adequate to produce particular behavioral data, but typically there is no direct empirical evidence for the hypothesized parts and operations. Models in these two fields differ in the extent of their empirical grounding, but they share the goal of achieving dynamic mechanistic explanation. That is, they augment a proposed mechanistic explanation with a computational model that enables exploration of the mechanism's dynamics. Using exemplars from circadian rhythm research, we extract six specific contributions provided by computational models. We then examine cognitive science models to determine how well they make the same types of contributions. We suggest that the modeling approach used in circadian research may prove useful in cognitive science as researchers develop procedures for experimentally decomposing cognitive mechanisms into parts and operations and begin to understand their nonlinear interactions.

  3. On the feasibility of the computational modelling of the endoluminal vacuum-assisted closure of an oesophageal anastomotic leakage

    PubMed Central

    Bellomo, Facundo J.; Rosales, Iván; del Castillo, Luis F.; Sánchez, Ricardo; Turon, Pau

    2018-01-01

    Endoluminal vacuum-assisted closure (E-VAC) is a promising therapy to treat anastomotic leakages of the oesophagus and bowel which are associated with high morbidity and mortality rates. An open-pore polyurethane foam is introduced into the leakage cavity and connected to a device that applies a suction pressure to accelerate the closure of the defect. Computational analysis of this healing process can advance our understanding of the biomechanical mechanisms at play. To this aim, we use a dual-stage finite-element analysis in which (i) the structural problem addresses the cavity reduction caused by the suction and (ii) a new constitutive formulation models tissue healing via permanent deformations coupled to a stiffness increase. The numerical implementation in an in-house code is described and a qualitative example illustrates the basic characteristics of the model. The computational model successfully reproduces the generic closure of an anastomotic leakage cavity, supporting the hypothesis that suction pressure promotes healing by means of the aforementioned mechanisms. However, the current framework needs to be enriched with empirical data to help advance device designs and treatment guidelines. Nonetheless, this conceptual study confirms that computational analysis can reproduce E-VAC of anastomotic leakages and establishes the bases for better understanding the mechanobiology of anastomotic defect healing. PMID:29515846

  4. A traffic data plan for mechanistic-empirical pavement designs (2002 pavement design guide).

    DOT National Transportation Integrated Search

    2003-01-01

    The Virginia Department of Transportation (VDOT) is preparing to implement the mechanistic-empirical pavement design methodology being developed under the National Cooperative Research Program's Project 1-37A, commonly referred to as the 2002 Pavemen...

  5. Verification and implementation of set-up empirical models in pile design : research project capsule.

    DOT National Transportation Integrated Search

    2016-08-01

    The primary objectives of this research include: performing static and dynamic load tests on : newly instrumented test piles to better understand the set-up mechanism for individual soil : layers, verifying or recalibrating previously developed empir...

  6. Asphalt materials characterization in support of implementation of the proposed mechanistic-empirical pavement design guide.

    DOT National Transportation Integrated Search

    2007-01-01

    The proposed Mechanistic-Empirical Pavement Design Guide (MEPDG) procedure is an improved methodology for pavement design and evaluation of paving materials. Since this new procedure depends heavily on the characterization of the fundamental engineer...

  7. Mechanistic-empirical design, implementation, and monitoring for flexible pavements : a project summary.

    DOT National Transportation Integrated Search

    2014-05-01

    This document is a summary of tasks performed for Project ICT-R27-060. : Mechanistic-empirical (M-E)based flexible pavement design concepts and procedures were : developed in previous Illinois Cooperative Highway Research Program projects (IHR-510...

  8. Ab Initio and Improved Empirical Potentials for the Calculation of the Anharmonic Vibrational States and Intramolecular Mode Coupling of N-Methylacetamide

    NASA Technical Reports Server (NTRS)

    Gregurick, Susan K.; Chaban, Galina M.; Gerber, R. Benny; Kwak, Dochou (Technical Monitor)

    2001-01-01

    The second-order Moller-Plesset ab initio electronic structure method is used to compute points for the anharmonic mode-coupled potential energy surface of N-methylacetamide (NMA) in the trans_ct configuration, including all degrees of freedom. The vibrational states and the spectroscopy are directly computed from this potential surface using the Correlation Corrected Vibrational Self-Consistent Field (CC-VSCF) method. The results are compared with CC-VSCF calculations using both the standard and improved empirical Amber-like force fields and available low temperature experimental matrix data. Analysis of our calculated spectroscopic results shows that: (1) The excellent agreement between the ab initio CC-VSCF calculated frequencies and the experimental data suggests that the computed anharmonic potentials for N-methylacetamide are of a very high quality; (2) For most transitions, the vibrational frequencies obtained from the ab initio CC-VSCF method are superior to those obtained using the empirical CC-VSCF methods, when compared with experimental data. However, the improved empirical force field yields better agreement with the experimental frequencies as compared with a standard AMBER-type force field; (3) The empirical force field in particular overestimates anharmonic couplings for the amide-2 mode, the methyl asymmetric bending modes, the out-of-plane methyl bending modes, and the methyl distortions; (4) Disagreement between the ab initio and empirical anharmonic couplings is greater than the disagreement between the frequencies, and thus the anharmonic part of the empirical potential seems to be less accurate than the harmonic contribution; and (5) Both the empirical and ab initio CC-VSCF calculations predict a negligible anharmonic coupling between the amide-1 and other internal modes. The implication of this is that the intramolecular energy flow between the amide-1 and the other internal modes may be smaller than anticipated. These results may have important implications for the anharmonic force fields of peptides, for which N-methylacetamide is a model.

  9. Global computer-assisted appraisal of osteoporosis risk in Asian women: an innovative study.

    PubMed

    Chang, Shu F; Hong, Chin M; Yang, Rong S

    2011-05-01

    To develop a computer-assisted appraisal system of osteoporosis that can predict osteoporosis health risk in community-dwelling women and to use it in an empirical analysis of the risk in Asian women. As the literature indicates, health risk assessment tools are generally applied in clinical practice for patient diagnosis. However, few studies have explored how to assist community-dwelling women to understand the risk of osteoporosis without invasive data. A longitudinal, evidence-based study. The first stage of this study is to establish a system that combines expertise in nursing, medicine and information technology. This part includes information from random samples (n = 700), including data on bone mineral density, osteoporosis risk factors, knowledge, beliefs and behaviour, which are used as the health risk appraisal system database. The second stage is an empirical study. The relative risks of osteoporosis of the participants (n = 300) were determined with the system. The participants that were classified as at-risk were randomly grouped into experimental and control groups. Each group was treated using different nursing intervention methods. The sensitivity and specificity of the analytical tools were 75%. In the empirical study, the analysis results indicate that the prevalence of osteoporosis was 14.0%. Data indicate that strategic application of multiple nursing interventions can promote osteoporosis prevention knowledge in high-risk women and enhance the effectiveness of preventive action. The system can also provide people in remote areas or with insufficient medical resources a simple and effective means of managing health risk and implement the idea of self-evaluation and self-caring among community-dwelling women at home to achieve the final goal of early detection and early treatment of osteoporosis. This study developed a useful approach for providing Asian women with a reliable, valid, convenient and economical self-health management model. Health care professionals can explore the use of advanced information systems and nursing interventions to increase the effectiveness of osteoporosis prevention programmes for women. © 2011 Blackwell Publishing Ltd.
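
    For reference, the sensitivity and specificity quoted above are defined as below; the screening outcomes in the example are hypothetical and are not the study's data.

      import numpy as np

      def sensitivity_specificity(y_true, y_pred):
          # Sensitivity and specificity of a binary risk-appraisal output against a
          # reference diagnosis (e.g., a bone-mineral-density classification)
          y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
          tp = np.sum(y_true & y_pred)
          tn = np.sum(~y_true & ~y_pred)
          fn = np.sum(y_true & ~y_pred)
          fp = np.sum(~y_true & y_pred)
          return tp / (tp + fn), tn / (tn + fp)

      # Hypothetical screening results: 1 = osteoporosis / flagged as at risk
      reference = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
      flagged   = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
      print(sensitivity_specificity(reference, flagged))  # (0.75, 0.833...)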

  10. An empirical analysis of journal policy effectiveness for computational reproducibility.

    PubMed

    Stodden, Victoria; Seiler, Jennifer; Ma, Zhaokun

    2018-03-13

    A key component of scientific communication is sufficient information for other researchers in the field to reproduce published findings. For computational and data-enabled research, this has often been interpreted to mean making available the raw data from which results were generated, the computer code that generated the findings, and any additional information needed such as workflows and input parameters. Many journals are revising author guidelines to include data and code availability. This work evaluates the effectiveness of journal policy that requires the data and code necessary for reproducibility be made available postpublication by the authors upon request. We assess the effectiveness of such a policy by (i) requesting data and code from authors and (ii) attempting replication of the published findings. We chose a random sample of 204 scientific papers published in the journal Science after the implementation of their policy in February 2011. We found that we were able to obtain artifacts from 44% of our sample and were able to reproduce the findings for 26%. We find this policy (author remission of data and code postpublication upon request) an improvement over no policy, but currently insufficient for reproducibility.

  11. An empirical analysis of journal policy effectiveness for computational reproducibility

    PubMed Central

    Seiler, Jennifer; Ma, Zhaokun

    2018-01-01

    A key component of scientific communication is sufficient information for other researchers in the field to reproduce published findings. For computational and data-enabled research, this has often been interpreted to mean making available the raw data from which results were generated, the computer code that generated the findings, and any additional information needed such as workflows and input parameters. Many journals are revising author guidelines to include data and code availability. This work evaluates the effectiveness of journal policy that requires the data and code necessary for reproducibility be made available postpublication by the authors upon request. We assess the effectiveness of such a policy by (i) requesting data and code from authors and (ii) attempting replication of the published findings. We chose a random sample of 204 scientific papers published in the journal Science after the implementation of their policy in February 2011. We found that we were able to obtain artifacts from 44% of our sample and were able to reproduce the findings for 26%. We find this policy—author remission of data and code postpublication upon request—an improvement over no policy, but currently insufficient for reproducibility. PMID:29531050

  12. Real-time Adaptive EEG Source Separation using Online Recursive Independent Component Analysis

    PubMed Central

    Hsu, Sheng-Hsiou; Mullen, Tim; Jung, Tzyy-Ping; Cauwenberghs, Gert

    2016-01-01

    Independent Component Analysis (ICA) has been widely applied to electroencephalographic (EEG) biosignal processing and brain-computer interfaces. The practical use of ICA, however, is limited by its computational complexity, data requirements for convergence, and assumption of data stationarity, especially for high-density data. Here we study and validate an optimized online recursive ICA algorithm (ORICA) with online recursive least squares (RLS) whitening for blind source separation of high-density EEG data, which offers instantaneous incremental convergence upon presentation of new data. Empirical results of this study demonstrate the algorithm's: (a) suitability for accurate and efficient source identification in high-density (64-channel) realistically-simulated EEG data; (b) capability to detect and adapt to non-stationarity in 64-ch simulated EEG data; and (c) utility for rapidly extracting principal brain and artifact sources in real 61-channel EEG data recorded by a dry and wearable EEG system in a cognitive experiment. ORICA was implemented as functions in BCILAB and EEGLAB and was integrated in an open-source Real-time EEG Source-mapping Toolbox (REST), supporting applications in ICA-based online artifact rejection, feature extraction for real-time biosignal monitoring in clinical environments, and adaptable classifications in brain-computer interfaces. PMID:26685257
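
    A minimal numerical sketch of the kind of update such an online ICA performs is given below. It is not the ORICA recursion itself (and it whitens offline rather than with online RLS whitening); it only illustrates a sample-by-sample natural-gradient unmixing update, with the channel count, nonlinearity and learning rate chosen as assumptions.

        # Simplified online ICA sketch (not the exact ORICA recursion): a
        # natural-gradient unmixing update applied one sample at a time to
        # pre-whitened mixtures of super-Gaussian sources.
        import numpy as np

        rng = np.random.default_rng(0)
        n_ch, n_samples = 8, 20000
        sources = rng.laplace(size=(n_ch, n_samples))     # super-Gaussian sources
        mixing = rng.normal(size=(n_ch, n_ch))
        x = mixing @ sources                              # observed mixtures

        # Offline whitening for brevity; ORICA instead uses online RLS whitening.
        x -= x.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(np.cov(x))
        whiten = (E / np.sqrt(d)) @ E.T
        x_w = whiten @ x

        W = np.eye(n_ch)
        lr = 0.01                                         # fixed small rate for simplicity
        for t in range(n_samples):
            y = W @ x_w[:, t]
            # Natural-gradient update with tanh nonlinearity for super-Gaussian sources.
            W += lr * (np.eye(n_ch) - np.tanh(y)[:, None] * y[None, :]) @ W

        print("unmixing x mixing (should approach a scaled permutation):")
        print(np.round(W @ whiten @ mixing, 2))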

  13. i-PI: A Python interface for ab initio path integral molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Ceriotti, Michele; More, Joshua; Manolopoulos, David E.

    2014-03-01

    Recent developments in path integral methodology have significantly reduced the computational expense of including quantum mechanical effects in the nuclear motion in ab initio molecular dynamics simulations. However, the implementation of these developments requires a considerable programming effort, which has hindered their adoption. Here we describe i-PI, an interface written in Python that has been designed to minimise the effort required to bring state-of-the-art path integral techniques to an electronic structure program. While it is best suited to first principles calculations and path integral molecular dynamics, i-PI can also be used to perform classical molecular dynamics simulations, and can just as easily be interfaced with an empirical forcefield code. To give just one example of the many potential applications of the interface, we use it in conjunction with the CP2K electronic structure package to showcase the importance of nuclear quantum effects in high-pressure water. Catalogue identifier: AERN_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERN_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 138626 No. of bytes in distributed program, including test data, etc.: 3128618 Distribution format: tar.gz Programming language: Python. Computer: Multiple architectures. Operating system: Linux, Mac OSX, Windows. RAM: Less than 256 Mb Classification: 7.7. External routines: NumPy Nature of problem: Bringing the latest developments in the modelling of nuclear quantum effects with path integral molecular dynamics to ab initio electronic structure programs with minimal implementational effort. Solution method: State-of-the-art path integral molecular dynamics techniques are implemented in a Python interface. Any electronic structure code can be patched to receive the atomic coordinates from the Python interface, and to return the forces and energy that are used to integrate the equations of motion. Restrictions: This code only deals with distinguishable particles. It does not include fermionic or bosonic exchanges between equivalent nuclei, which can become important at very low temperatures. Running time: Depends dramatically on the nature of the simulation being performed. A few minutes for short tests with empirical force fields, up to several weeks for production calculations with ab initio forces. The examples provided with the code run in less than an hour.
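
    The division of labour described above (the interface integrates the equations of motion while the client code only maps positions to forces and an energy) can be sketched as follows. This is not i-PI's actual socket protocol, just a schematic stand-in for the force-provider side using a Lennard-Jones pair potential as the empirical force field; the function name and parameter values are illustrative assumptions.

        # Schematic force provider of the kind an i-PI-style interface would call
        # each step: positions in, energy and forces out. Lennard-Jones cluster,
        # no periodic boundaries; all values are illustrative.
        import numpy as np

        def lj_energy_forces(positions, epsilon=1.0, sigma=1.0):
            """Return (energy, forces) for a Lennard-Jones cluster."""
            n = len(positions)
            energy = 0.0
            forces = np.zeros_like(positions)
            for i in range(n):
                for j in range(i + 1, n):
                    rij = positions[i] - positions[j]
                    r2 = rij @ rij
                    inv6 = (sigma ** 2 / r2) ** 3
                    energy += 4.0 * epsilon * (inv6 ** 2 - inv6)
                    # Pair force on atom i, expressed directly in Cartesian components.
                    fij = 24.0 * epsilon * (2.0 * inv6 ** 2 - inv6) / r2 * rij
                    forces[i] += fij
                    forces[j] -= fij
            return energy, forces

        # A driver would call this every step with the bead coordinates it receives.
        pos = np.random.default_rng(1).normal(scale=1.2, size=(8, 3))
        e, f = lj_energy_forces(pos)
        print(f"E = {e:.4f}, max |F| = {np.abs(f).max():.4f}")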

  14. Guidelines for Implementing NCHRP 1-37A M-E Design Procedures in Ohio : Volume 1 -- Summary of Findings, Implementation Plan, and Next Steps

    DOT National Transportation Integrated Search

    2009-11-01

    Highway agencies across the nation are moving towards implementation of the new AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) for pavement design. The benefits of implementing the MEPDG for routine use in Ohio include (1) achieving more...

  15. Mechanistic-Empirical (M-E) Design Implementation & Monitoring for Flexible Pavements : 2018 PROJECT SUMMARY

    DOT National Transportation Integrated Search

    2018-06-01

    This document is a summary of the tasks performed for Project ICT-R27-149-1. Mechanistic-empirical (M-E)-based flexible pavement design concepts and procedures were previously developed in Illinois Cooperative Highway Research Program projects IHR-...

  16. Layer moduli of Nebraska pavements for the new Mechanistic-Empirical Pavement Design Guide (MEPDG).

    DOT National Transportation Integrated Search

    2010-12-01

    As a step-wise implementation effort of the Mechanistic-Empirical Pavement Design Guide (MEPDG) for the design and analysis of Nebraska flexible pavement systems, this research developed a database of layer moduli: dynamic modulus, creep compl...

  17. An empirical analysis of strategy implementation process and performance of construction companies

    NASA Astrophysics Data System (ADS)

    Zaidi, F. I.; Zawawi, E. M. A.; Nordin, R. M.; Ahnuar, E. M.

    2018-02-01

    Strategy implementation is known as the action stage and is considered the most difficult stage in strategic planning. Strategy implementation can influence the whole texture of a company, including its performance. The aim of this research is to provide empirical evidence of the relationship between the strategy implementation process and the performance of construction companies. This research establishes the strategy implementation process and how it influences the performance of construction companies. A quantitative method was used via a questionnaire survey. Respondents were G7 construction companies in Klang Valley, Selangor. Pearson correlation analysis indicates a strong positive relationship between the strategy implementation process and construction companies’ performance. The most important part of the strategy implementation process is providing sufficient training for employees, which directly influences the construction companies’ profit growth and employees’ growth. These results will benefit top management in construction companies conducting strategy implementation. This research may not reflect the whole construction industry in Malaysia; future research may be extended to small- and medium-grade contractors and perhaps to other areas in Malaysia.

  18. Foundations for computer simulation of a low pressure oil flooded single screw air compressor

    NASA Astrophysics Data System (ADS)

    Bein, T. W.

    1981-12-01

    The necessary logic to construct a computer model to predict the performance of an oil flooded, single screw air compressor is developed. The geometric variables and relationships used to describe the general single screw mechanism are developed. The governing equations to describe the processes are developed from their primary relationships. The assumptions used in the development are also defined and justified. The computer model predicts the internal pressure, temperature, and flowrates through the leakage paths throughout the compression cycle of the single screw compressor. The model uses empirical external values as the basis for the internal predictions. The computer values are compared to the empirical values, and conclusions are drawn based on the results. Recommendations are made for future efforts to improve the computer model and to verify some of the conclusions that are drawn.

  19. National Trainers’ Perspectives on Challenges to Implementation of an Empirically-Supported Mental Health Treatment

    PubMed Central

    Hanson, Rochelle F.; Gros, Kirstin Stauffacher; Davidson, Tatiana M.; Barr, Simone; Cohen, Judith; Deblinger, Esther; Mannarino, Anthony P.; Ruggiero, Kenneth J.

    2013-01-01

    This study examined perceived challenges to implementation of an empirically supported mental health treatment for youth (Trauma-Focused Cognitive Behavioral Therapy; TF-CBT) and explored the potential use of technology-based resources in treatment delivery. Thematic interviews were conducted with 19 approved national TF-CBT trainers to assess their perspectives about challenges to implementation of TF-CBT and to explore their perceptions about the potential value of innovative, technology-based solutions to enhance provider fidelity and improve quality of care. These data offer some important insights and implications for training in evidence-based treatments, provider fidelity and competence, and patient engagement, particularly for those interventions targeting trauma-related symptoms among youth. PMID:23605292

  20. EMPIRE: A code for nuclear astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palumbo, A.

    The nuclear reaction code EMPIRE is presented as a useful tool for nuclear astrophysics. EMPIRE combines a variety of reaction models with a comprehensive library of input parameters, providing a diversity of options for the user. With the exclusion of direct-semidirect capture, all reaction mechanisms relevant to the nuclear astrophysics energy range of interest are implemented in the code. Comparison to experimental data shows consistent agreement for all relevant channels.

  1. Computational Thinking in the Wild: Uncovering Complex Collaborative Thinking through Gameplay

    ERIC Educational Resources Information Center

    Berland, Matthew; Duncan, Sean

    2016-01-01

    Surprisingly few empirical studies address how computational thinking works "in the wild" or how games and simulations can support developing computational thinking skills. In this article, the authors report results from a study of computational thinking (CT) as evinced through player discussions around the collaborative board game…

  2. Validating an operational physical method to compute surface radiation from geostationary satellites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Dhere, Neelkanth G.; Wohlgemuth, John H.

    We developed models to compute global horizontal irradiance (GHI) and direct normal irradiance (DNI) over the last three decades. These models can be classified as empirical or physical based on the approach. Empirical models relate ground-based observations with satellite measurements and use these relations to compute surface radiation. Physical models consider the physics behind the radiation received at the satellite and create retrievals to estimate surface radiation. Furthermore, while empirical methods have been traditionally used for computing surface radiation for the solar energy industry, the advent of faster computing has made operational physical models viable. The Global Solar Insolation Project (GSIP) is a physical model that computes DNI and GHI using the visible and infrared channel measurements from a weather satellite. GSIP uses a two-stage scheme that first retrieves cloud properties and uses those properties in a radiative transfer model to calculate GHI and DNI. Developed for polar orbiting satellites, GSIP has been adapted to NOAA's Geostationary Operational Environmental Satellite series and can run operationally at high spatial resolutions. Our method holds the possibility of creating high quality datasets of GHI and DNI for use by the solar energy industry. We present an outline of the methodology and results from running the model as well as a validation study using ground-based instruments.

  3. CLOSED-FIELD CORONAL HEATING DRIVEN BY WAVE TURBULENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downs, Cooper; Lionello, Roberto; Mikić, Zoran

    To simulate the energy balance of coronal plasmas on macroscopic scales, we often require the specification of the coronal heating mechanism in some functional form. To go beyond empirical formulations and to build a more physically motivated heating function, we investigate the wave-turbulence-driven (WTD) phenomenology for the heating of closed coronal loops. Our implementation is designed to capture the large-scale propagation, reflection, and dissipation of wave turbulence along a loop. The parameter space of this model is explored by solving the coupled WTD and hydrodynamic evolution in 1D for an idealized loop. The relevance to a range of solar conditions is also established by computing solutions for over one hundred loops extracted from a realistic 3D coronal field. Due to the implicit dependence of the WTD heating model on loop geometry and plasma properties along the loop and at the footpoints, we find that this model can significantly reduce the number of free parameters when compared to traditional empirical heating models, and still robustly describe a broad range of quiet-Sun and active region conditions. The importance of the self-reflection term in producing relatively short heating scale heights and thermal nonequilibrium cycles is also discussed.

  4. Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs.

    PubMed

    Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua

    2018-01-13

    The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model could not be readily obtained especially for small-scaled fixed-wing UAVs. In this paper, the procedure of computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are firstly calculated through a semi-empirical method based on the aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Secondly, the residuals of each derivative are proposed to be identified or estimated further via an Extended Kalman Filter (EKF), with the observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. Based on a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and the models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the rationality of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters.

  5. Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs

    PubMed Central

    Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua

    2018-01-01

    The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model could not be readily obtained especially for small-scaled fixed-wing UAVs. In this paper, the procedure of computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are firstly calculated through a semi-empirical method based on the aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Secondly, the residuals of each derivative are proposed to be identified or estimated further via an Extended Kalman Filter (EKF), with the observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. Based on a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and the models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the rationality of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters. PMID:29342856
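
    The augmented-state idea behind the identification step, estimating a residual correction to a semi-empirically computed derivative alongside the state from flight observations, can be sketched as follows. The scalar decay model, noise levels and coefficient values are illustrative assumptions, not the paper's aircraft model.

        # Minimal augmented-state EKF sketch: a residual correction dCd to a
        # nominal coefficient Cd0 is appended to the state and estimated from
        # noisy velocity observations. All values are illustrative.
        import numpy as np

        dt, Cd0, dCd_true = 0.02, 0.30, 0.08
        rng = np.random.default_rng(2)

        # Simulated "flight test" velocity data generated with the true coefficient.
        v = 30.0
        zs = []
        for _ in range(400):
            v += dt * (-(Cd0 + dCd_true) * v)
            zs.append(v + rng.normal(scale=0.05))

        # EKF over the augmented state x = [v, dCd]; dCd is modelled as constant.
        x = np.array([30.0, 0.0])
        P = np.diag([1.0, 0.1])
        Q = np.diag([1e-4, 1e-6])
        R = 0.05 ** 2
        H = np.array([[1.0, 0.0]])
        for z in zs:
            # Predict step with Jacobian evaluated at the previous estimate.
            v_hat, dCd = x
            x = np.array([v_hat + dt * (-(Cd0 + dCd) * v_hat), dCd])
            F = np.array([[1.0 - dt * (Cd0 + dCd), -dt * v_hat],
                          [0.0, 1.0]])
            P = F @ P @ F.T + Q
            # Update step with the velocity observation.
            S = H @ P @ H.T + R
            K = P @ H.T / S
            x = x + (K * (z - x[0])).ravel()
            P = (np.eye(2) - K @ H) @ P

        print(f"estimated residual dCd = {x[1]:.3f} (true {dCd_true})")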

  6. Quantum mechanics implementation in drug-design workflows: does it really help?

    PubMed

    Arodola, Olayide A; Soliman, Mahmoud Es

    2017-01-01

    The pharmaceutical industry is progressively operating in an era where development costs are constantly under pressure, higher percentages of drugs are demanded, and the drug-discovery process is a trial-and-error run. The profit that flows in with the discovery of new drugs has always been the motivation for the industry to keep up the pace and keep abreast with the endless demand for medicines. The process of finding a molecule that binds to the target protein using in silico tools has made computational chemistry a valuable tool in drug discovery in both academic research and pharmaceutical industry. However, the complexity of many protein-ligand interactions challenges the accuracy and efficiency of the commonly used empirical methods. The usefulness of quantum mechanics (QM) in drug-protein interaction cannot be overemphasized; however, this approach has little significance in some empirical methods. In this review, we discuss recent developments in, and application of, QM to medically relevant biomolecules. We critically discuss the different types of QM-based methods and their proposed application to incorporating them into drug-design and -discovery workflows while trying to answer a critical question: are QM-based methods of real help in drug-design and -discovery research and industry?

  7. A general intermolecular force field based on tight-binding quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Grimme, Stefan; Bannwarth, Christoph; Caldeweyher, Eike; Pisarek, Jana; Hansen, Andreas

    2017-10-01

    A black-box type procedure is presented for the generation of a molecule-specific, intermolecular potential energy function. The method uses quantum chemical (QC) information from our recently published extended tight-binding semi-empirical scheme (GFN-xTB) and can treat non-covalently bound complexes and aggregates with almost arbitrary chemical structure. The necessary QC information consists of the equilibrium structure, Mulliken atomic charges, charge centers of localized molecular orbitals, and also of frontier orbitals and orbital energies. The molecular pair potential includes model density dependent Pauli repulsion, penetration, as well as point charge electrostatics, the newly developed D4 dispersion energy model, Drude oscillators for polarization, and a charge-transfer term. Only one element-specific and about 20 global empirical parameters are needed to cover systems with nuclear charges up to radon (Z = 86). The method is tested for standard small molecule interaction energy benchmark sets where it provides accurate intermolecular energies and equilibrium distances. Examples for structures with a few hundred atoms including charged systems demonstrate the versatility of the approach. The method is implemented in a stand-alone computer code which enables rigid-body, global minimum energy searches for molecular aggregation or alignment.

  8. Model of community emergence in weighted social networks

    NASA Astrophysics Data System (ADS)

    Kumpula, J. M.; Onnela, J.-P.; Saramäki, J.; Kertész, J.; Kaski, K.

    2009-04-01

    Over the years network theory has proven to be a rapidly expanding methodology for investigating various complex systems, and it has turned out to give quite unparalleled insight into their structure, function, and response through data analysis, modeling, and simulation. For social systems in particular, the network approach has empirically revealed a modular structure due to the interplay between the network topology and the link weights between network nodes or individuals. This inspired us to develop a simple network model that could capture some salient features of mesoscopic community and macroscopic topology formation during network evolution. Our model is based on two fundamental mechanisms of network sociology by which individuals find new friends, namely cyclic closure and focal closure, which are mimicked by local search-link-reinforcement and random global attachment mechanisms, respectively. In addition, we included in the model a node deletion mechanism that removes all of a node's links simultaneously, which corresponds to an individual departing from the network. Here we describe in detail the implementation of our model algorithm, which was found to be computationally efficient and to produce many empirically observed features of large-scale social networks. Thus this model opens a new perspective for studying such collective social phenomena as spreading, structure formation, and evolutionary processes.

  9. Defects diagnosis in laser brazing using near-infrared signals based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Cheng, Liyong; Mi, Gaoyang; Li, Shuo; Wang, Chunming; Hu, Xiyuan

    2018-03-01

    Real-time monitoring of laser welding plays a very important role in modern automated production, and online defect diagnosis needs to be implemented. In this study, the status of laser brazing was monitored in real time using an infrared photoelectric sensor. Four kinds of braze seams (healthy weld, unfilled weld, hole weld and rough surface weld) along with the corresponding near-infrared signals were obtained. Empirical Mode Decomposition (EMD) was then applied to analyze the near-infrared signals. The results showed that the EMD method performed well in suppressing noise in the near-infrared signals. A correlation coefficient was then computed to select the Intrinsic Mode Function (IMF) components most sensitive to the weld defects, and a more accurate signal was reconstructed from the selected IMF components. Simultaneously, the spectrum of the selected IMF components was computed using the fast Fourier transform, clearly revealing the frequency characteristics. The frequency energy in different frequency bands was computed to diagnose the defects, and a significant difference was found among the four types of weld defects. This approach has proved to be an effective and efficient method for monitoring laser brazing defects.
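
    The post-decomposition chain described above (EMD, correlation-based IMF selection, reconstruction, FFT band energies) can be sketched as follows, assuming the PyEMD package (PyPI package "EMD-signal") for the decomposition; the synthetic signal, band edges and the choice of keeping two IMFs are illustrative assumptions.

        # Sketch: decompose a sensor signal into IMFs, keep the IMFs most
        # correlated with the raw signal, and summarise band energies via the FFT.
        # Assumes the PyEMD package is installed; all signal parameters are illustrative.
        import numpy as np
        from PyEMD import EMD

        fs = 2000.0                                    # sampling rate (Hz), assumed
        t = np.arange(0, 1.0, 1.0 / fs)
        clean = np.sin(2 * np.pi * 35 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)
        noisy = clean + 0.3 * np.random.default_rng(3).normal(size=t.size)

        imfs = EMD().emd(noisy)                        # rows are IMFs plus residue

        # 1) Correlation-based selection of the most signal-like IMFs.
        corr = np.array([abs(np.corrcoef(noisy, imf)[0, 1]) for imf in imfs])
        keep = np.argsort(-corr)[:2]
        reconstructed = imfs[keep].sum(axis=0)

        # 2) Band-energy features from the FFT of the reconstruction.
        spectrum = np.abs(np.fft.rfft(reconstructed)) ** 2
        freqs = np.fft.rfftfreq(reconstructed.size, 1.0 / fs)
        bands = [(0, 100), (100, 400), (400, fs / 2)]
        energy = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

        print("kept IMFs:", keep, "band energies:", np.round(energy, 1))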

  10. Compensating the intensity fall-off effect in cone-beam tomography by an empirical weight formula.

    PubMed

    Chen, Zikuan; Calhoun, Vince D; Chang, Shengjiang

    2008-11-10

    The Feldkamp-Davis-Kress (FDK) algorithm is widely adopted for cone-beam reconstruction due to its one-dimensional filtered backprojection structure and parallel implementation. In a reconstruction volume, the conspicuous cone-beam artifact manifests as intensity fall-off along the longitudinal direction (the gantry rotation axis). This effect is inherent to circular cone-beam tomography due to the fact that a cone-beam dataset acquired from circular scanning fails to meet the data sufficiency condition for volume reconstruction. Based on observations of the intensity fall-off phenomenon associated with the FDK reconstruction of a ball phantom, we propose an empirical weight formula to compensate for the fall-off degradation. Specifically, a reciprocal cosine can be used to compensate the voxel values along the longitudinal direction during three-dimensional backprojection reconstruction, in particular to boost the values of voxels at positions with large cone angles. The intensity degradation within the z plane, albeit insignificant, can also be compensated by using the same weight formula through a parameter for radial distance dependence. Computer simulations and phantom experiments are presented to demonstrate the effectiveness of the compensation of the fall-off effect inherent in circular cone-beam tomography.
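
    A minimal numerical sketch of the compensation idea, boosting voxel values by the reciprocal cosine of their cone angle with an optional radial term, is shown below. The geometry values and the quadratic radial term are assumptions for illustration, not the paper's fitted empirical formula.

        # Reciprocal-cosine weighting applied to a reconstructed volume: voxels at
        # larger cone angles (larger |z|) receive a larger boost. Geometry and the
        # optional radial term are illustrative assumptions.
        import numpy as np

        def falloff_weight(z, r, source_to_axis=500.0, radial_gain=0.0):
            """Weight ~ 1/cos(cone angle) for a voxel at height z and radius r (mm)."""
            cone = np.arctan2(z, source_to_axis)
            w_axial = 1.0 / np.cos(cone)               # reciprocal-cosine boost
            w_radial = 1.0 + radial_gain * (r / source_to_axis) ** 2
            return w_axial * w_radial

        # Apply to a reconstructed volume on a regular grid.
        nz, ny, nx = 64, 128, 128
        z = np.linspace(-80, 80, nz)[:, None, None]    # mm along the rotation axis
        y = np.linspace(-100, 100, ny)[None, :, None]
        x = np.linspace(-100, 100, nx)[None, None, :]
        volume = np.ones((nz, ny, nx))                 # placeholder FDK output
        compensated = volume * falloff_weight(z, np.hypot(x, y))

        print("boost at largest cone angle:", round(float(compensated[0].max()), 4))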

  11. Closed-field Coronal Heating Driven by Wave Turbulence

    NASA Astrophysics Data System (ADS)

    Downs, Cooper; Lionello, Roberto; Mikić, Zoran; Linker, Jon A.; Velli, Marco

    2016-12-01

    To simulate the energy balance of coronal plasmas on macroscopic scales, we often require the specification of the coronal heating mechanism in some functional form. To go beyond empirical formulations and to build a more physically motivated heating function, we investigate the wave-turbulence-driven (WTD) phenomenology for the heating of closed coronal loops. Our implementation is designed to capture the large-scale propagation, reflection, and dissipation of wave turbulence along a loop. The parameter space of this model is explored by solving the coupled WTD and hydrodynamic evolution in 1D for an idealized loop. The relevance to a range of solar conditions is also established by computing solutions for over one hundred loops extracted from a realistic 3D coronal field. Due to the implicit dependence of the WTD heating model on loop geometry and plasma properties along the loop and at the footpoints, we find that this model can significantly reduce the number of free parameters when compared to traditional empirical heating models, and still robustly describe a broad range of quiet-Sun and active region conditions. The importance of the self-reflection term in producing relatively short heating scale heights and thermal nonequilibrium cycles is also discussed.

  12. Generalized empirical Bayesian methods for discovery of differential data in high-throughput biology.

    PubMed

    Hardcastle, Thomas J

    2016-01-15

    High-throughput data are now commonplace in biological research. Rapidly changing technologies and applications mean that novel methods for detecting differential behaviour that account for a 'large P, small n' setting are required at an increasing rate. The development of such methods is, in general, being done on an ad hoc basis, leading to repeated development cycles and a lack of standardization between analyses. We present here a generalized method for identifying differential behaviour within high-throughput biological data through empirical Bayesian methods. This approach is based on our baySeq algorithm for identification of differential expression in RNA-seq data based on a negative binomial distribution, and in paired data based on a beta-binomial distribution. Here we show how the same empirical Bayesian approach can be applied to any parametric distribution, removing the need for lengthy development of novel methods for differently distributed data. Comparisons with existing methods developed to address specific problems in high-throughput biological data show that these generic methods can achieve equivalent or better performance. A number of enhancements to the basic algorithm are also presented to increase flexibility and reduce computational costs. The methods are implemented in the R baySeq (v2) package, available on Bioconductor http://www.bioconductor.org/packages/release/bioc/html/baySeq.html. tjh48@cam.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Empirically Based Myths: Astrology, Biorhythms, and ATIs.

    ERIC Educational Resources Information Center

    Ragsdale, Ronald G.

    1980-01-01

    A myth may have an empirical basis through chance occurrence; perhaps Aptitude Treatment Interactions (ATIs) are in this category. While ATIs have great utility in describing, planning, and implementing instruction, few disordinal interactions have been found. Article suggests narrowing of ATI research with replications and estimates of effect…

  14. The Quantum Measurement Problem and Physical reality: A Computation Theoretic Perspective

    NASA Astrophysics Data System (ADS)

    Srikanth, R.

    2006-11-01

    Is the universe computable? If yes, is it computationally a polynomial place? In standard quantum mechanics, which permits infinite parallelism and the infinitely precise specification of states, a negative answer to both questions is not ruled out. On the other hand, empirical evidence suggests that NP-complete problems are intractable in the physical world. Likewise, computational problems known to be algorithmically uncomputable do not seem to be computable by any physical means. We suggest that this close correspondence between the efficiency and power of abstract algorithms on the one hand, and physical computers on the other, finds a natural explanation if the universe is assumed to be algorithmic; that is, that physical reality is the product of discrete sub-physical information processing equivalent to the actions of a probabilistic Turing machine. This assumption can be reconciled with the observed exponentiality of quantum systems at microscopic scales, and the consequent possibility of implementing Shor's quantum polynomial time algorithm at that scale, provided the degree of superposition is intrinsically, finitely upper-bounded. If this bound is associated with the quantum-classical divide (the Heisenberg cut), a natural resolution to the quantum measurement problem arises. From this viewpoint, macroscopic classicality is an evidence that the universe is in BPP, and both questions raised above receive affirmative answers. A recently proposed computational model of quantum measurement, which relates the Heisenberg cut to the discreteness of Hilbert space, is briefly discussed. A connection to quantum gravity is noted. Our results are compatible with the philosophy that mathematical truths are independent of the laws of physics.

  15. A universal preconditioner for simulating condensed phase materials.

    PubMed

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.

  16. A universal preconditioner for simulating condensed phase materials

    NASA Astrophysics Data System (ADS)

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-01

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
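
    A hedged usage sketch of the Python implementation mentioned above is shown below, assuming the ase.optimize.precon interface (PreconLBFGS with an Exp preconditioner) and ASE's built-in EMT calculator as the fast empirical potential; exact class names and defaults may differ between ASE releases.

        # Assumed ASE usage: relax a perturbed Cu supercell with a preconditioned
        # optimiser. The potential (EMT), rattle amplitude and preconditioner
        # parameter are illustrative choices, not the paper's benchmark setup.
        from ase.build import bulk
        from ase.calculators.emt import EMT
        from ase.optimize.precon import Exp, PreconLBFGS

        atoms = bulk("Cu", cubic=True) * (4, 4, 4)     # a few hundred atoms
        atoms.rattle(stdev=0.05, seed=5)               # perturb away from the minimum
        atoms.calc = EMT()

        opt = PreconLBFGS(atoms, precon=Exp(A=3.0))    # neighbourhood-based preconditioner
        opt.run(fmax=0.01)
        print("optimiser steps:", opt.get_number_of_steps())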

  17. Numerical assessment of residual formability in sheet metal products: towards design for sustainability

    NASA Astrophysics Data System (ADS)

    Falsafi, Javad; Demirci, Emrah; Silberschmidt, Vadim. V.

    2016-08-01

    A new computational scheme is presented to address the cold recyclability of sheet-metal products. Cold recycling or re-manufacturing is an emerging area studied mostly empirically; in its current form, it lacks a theoretical foundation, especially in the area of sheet metals. In this study, a re-formability index was introduced based on post-manufacture residual formability in sheet metal products. This index accounts for possible levels of deformation along different strain paths based on the Polar Effective Plastic Strain (PEPS) technique. PEPS is strain-path independent and hence provides a foundation for residual formability analysis. A user-friendly code was developed to implement this assessment in conjunction with advanced finite-element (FE) analysis. The significance of this approach is the advancement towards recycling of sheet metal products without melting them.

  18. Getting ahead: forward models and their place in cognitive architecture.

    PubMed

    Pickering, Martin J; Clark, Andy

    2014-09-01

    The use of forward models (mechanisms that predict the future state of a system) is well established in cognitive and computational neuroscience. We compare and contrast two recent, but interestingly divergent, accounts of the place of forward models in the human cognitive architecture. On the Auxiliary Forward Model (AFM) account, forward models are special-purpose prediction mechanisms implemented by additional circuitry distinct from core mechanisms of perception and action. On the Integral Forward Model (IFM) account, forward models lie at the heart of all forms of perception and action. We compare these neighbouring but importantly different visions and consider their implications for the cognitive sciences. We end by asking what kinds of empirical research might offer evidence favouring one or the other of these approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Complete description of all self-similar models driven by Lévy stable noise

    NASA Astrophysics Data System (ADS)

    Weron, Aleksander; Burnecki, Krzysztof; Mercik, Szymon; Weron, Karina

    2005-01-01

    A canonical decomposition of H-self-similar Lévy symmetric α-stable processes is presented. The resulting components, completely described by both deterministic kernels and the corresponding stochastic integral with respect to the Lévy symmetric α-stable motion, are shown to be related to the dissipative and conservative parts of the dynamics. This result provides stochastic analysis tools for studying anomalous diffusion phenomena in the Langevin equation framework. For example, a simple computer test of the origins of self-similarity is implemented for four real empirical time series recorded from different physical systems: an ionic current flow through a single channel in a biological membrane, an energy of solar flares, a seismic electric signal recorded during seismic Earth activity, and foreign exchange rate daily returns.

  20. The Use of Empirical Studies in the Development of High End Computing Applications

    DTIC Science & Technology

    2009-12-01

    34, Proceeding of 5th ACM-IEEE International Symposium on Empirical Software Engineering (ISESE'06), Rio de Janeiro, Brazil, September, 2006. 8. Jeffrey C...Symposium on Empirical Software Engineering, (ISESE), Rio de Janeiro, September, 2006. [26] Zelkowitz M., V. Basili, S. Asgari, L. Hochstein, J...data is consistently collected across studies. 4. Sanitization of sensitive data. The framework provides external researchers with access to the

  1. Incorporating Applied Behavior Analysis to Assess and Support Educators' Treatment Integrity

    ERIC Educational Resources Information Center

    Collier-Meek, Melissa A.; Sanetti, Lisa M. H.; Fallon, Lindsay M.

    2017-01-01

    For evidence-based interventions to be effective for students they must be consistently implemented, however, many teachers struggle with treatment integrity and require support. Although many implementation support strategies are research based, there is little empirical guidance about the types of treatment integrity, implementers, and contexts…

  2. Computer Games for the Math Achievement of Diverse Students

    ERIC Educational Resources Information Center

    Kim, Sunha; Chang, Mido

    2010-01-01

    Although computer games as a way to improve students' learning have received attention from many educational researchers, no consensus has been reached on the effects of computer games on student achievement. Moreover, there is a lack of empirical research on the differential effects of computer games on diverse learners. In response, this study…

  3. Improving public transportation systems with self-organization: A headway-based model and regulation of passenger alighting and boarding.

    PubMed

    Carreón, Gustavo; Gershenson, Carlos; Pineda, Luis A

    2017-01-01

    The equal headway instability-the fact that a configuration with regular time intervals between vehicles tends to be volatile-is a common regulation problem in public transportation systems. An unsatisfactory regulation results in low efficiency and possible collapses of the service. Computational simulations have shown that self-organizing methods can regulate the headway adaptively beyond the theoretical optimum. In this work, we develop a computer simulation for metro systems fed with real data from the Mexico City Metro to test the current regulatory method with a novel self-organizing approach. The current model considers overall system's data such as minimum and maximum waiting times at stations, while the self-organizing method regulates the headway in a decentralized manner using local information such as the passenger's inflow and the positions of neighboring trains. The simulation shows that the self-organizing method improves the performance over the current one as it adapts to environmental changes at the timescale they occur. The correlation between the simulation of the current model and empirical observations carried out in the Mexico City Metro provides a base to calculate the expected performance of the self-organizing method in case it is implemented in the real system. We also performed a pilot study at the Balderas station to regulate the alighting and boarding of passengers through guide signs on platforms. The analysis of empirical data shows a delay reduction of the waiting time of trains at stations. Finally, we provide recommendations to improve public transportation systems.

  4. Improving public transportation systems with self-organization: A headway-based model and regulation of passenger alighting and boarding

    PubMed Central

    Gershenson, Carlos; Pineda, Luis A.

    2017-01-01

    The equal headway instability—the fact that a configuration with regular time intervals between vehicles tends to be volatile—is a common regulation problem in public transportation systems. An unsatisfactory regulation results in low efficiency and possible collapses of the service. Computational simulations have shown that self-organizing methods can regulate the headway adaptively beyond the theoretical optimum. In this work, we develop a computer simulation for metro systems fed with real data from the Mexico City Metro to test the current regulatory method with a novel self-organizing approach. The current model considers overall system’s data such as minimum and maximum waiting times at stations, while the self-organizing method regulates the headway in a decentralized manner using local information such as the passenger’s inflow and the positions of neighboring trains. The simulation shows that the self-organizing method improves the performance over the current one as it adapts to environmental changes at the timescale they occur. The correlation between the simulation of the current model and empirical observations carried out in the Mexico City Metro provides a base to calculate the expected performance of the self-organizing method in case it is implemented in the real system. We also performed a pilot study at the Balderas station to regulate the alighting and boarding of passengers through guide signs on platforms. The analysis of empirical data shows a delay reduction of the waiting time of trains at stations. Finally, we provide recommendations to improve public transportation systems. PMID:29287120

  5. Examining the Premises Supporting the Empirically Supported Intervention Approach to Social Work Practice

    ERIC Educational Resources Information Center

    McBeath, Bowen; Briggs, Harold E.; Aisenberg, Eugene

    2010-01-01

    Federal, state, and local policymakers and funders have increasingly organized human service delivery functions around the selection and implementation of empirically supported interventions (ESIs), under the expectation that service delivery through such intervention frameworks results in improvements in cost-effectiveness and system performance.…

  6. Technological Advances in the Treatment of Trauma: A Review of Promising Practices

    ERIC Educational Resources Information Center

    Paul, Lisa A.; Hassija, Christina M.; Clapp, Joshua D.

    2012-01-01

    Given the availability of empirically supported practices for addressing posttraumatic stress disorder and other forms of trauma-related distress, the development and implementation of new technology to deliver these treatments is exciting. Technological innovations in this literature aim to expand availability of empirically based intervention,…

  7. Guidelines for Implementing NCHRP 1-37A M-E Design Procedures in Ohio : Volume 3 -- Sensitivity Analysis

    DOT National Transportation Integrated Search

    2009-11-01

    The new Mechanistic-Empirical Pavement Design Guide (NCHRP 1-37A and 1-40D) is based on fundamental engineering principles and is far more comprehensive than the current empirical AASHTO Design Guide developed for conditions more than 40 years previo...

  8. The effect of environmental factors on the implementation of the Mechanistic-empirical pavement design guide (MEPDG).

    DOT National Transportation Integrated Search

    2011-07-01

    Current pavement design based on the AASHTO Design Guide uses an empirical approach from the results of the AASHO Road Test conducted in 1958. To address some of the limitations of the original design guide, AASHTO developed a new guide: Mechanistic ...

  9. Integration of least angle regression with empirical Bayes for multi-locus genome-wide association studies

    USDA-ARS?s Scientific Manuscript database

    Multi-locus genome-wide association studies has become the state-of-the-art procedure to identify quantitative trait loci (QTL) associated with traits simultaneously. However, implementation of multi-locus model is still difficult. In this study, we integrated least angle regression with empirical B...

  10. Empirically Supported Treatment's Impact on Organizational Culture and Climate

    ERIC Educational Resources Information Center

    Patterson-Silver Wolf, David A.; Dulmus, Catherine N.; Maguin, Eugene

    2012-01-01

    Objectives: With the continued push to implement empirically supported treatments (ESTs) into community-based organizations, it is important to investigate whether working condition disruptions occur during this process. While there are many studies investigating best practices and how to adopt them, the literature lacks studies investigating the…

  11. A New Look at the Eclipse Timing Variation Diagram Analysis of Selected 3-body W UMa Systems

    NASA Astrophysics Data System (ADS)

    Christopoulou, P.-E.; Papageorgiou, A.

    2015-07-01

    The light travel effect produced by the presence of tertiary components can reveal much about the origin and evolution of over-contact binaries. Monitoring of W UMa systems over the last decade and/or the use of publicly available photometric surveys (NSVS, ASAS, etc.) has uncovered or suggested the presence of many unseen companions, which calls for an in-depth investigation of the parameters derived from cyclic period variations in order to confirm or reject the assumption of hidden companion(s). Progress in the analysis of eclipse timing variations is summarized here both from the empirical and the theoretical points of view, and a more extensive investigation of the proposed orbital parameters of third bodies is proposed. The code we have developed for this, implemented in Python, is set up to handle heuristic scanning with parameter perturbation in parameter space, and to establish realistic uncertainties from the least squares fitting. A computational example is given for TZ Boo, a W UMa system with a spectroscopically detected third component. Future options to be implemented include MCMC and bootstrapping.
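
    The kind of least-squares fit and uncertainty estimate described above can be sketched as follows, using a circular-orbit light-travel-time term and scipy.optimize.least_squares; the synthetic timings, the parameter names and the circular-orbit simplification are assumptions, not the TZ Boo solution.

        # Fit O-C eclipse timing residuals with a linear ephemeris correction plus
        # a sinusoidal light-travel-time term (circular third-body orbit), and
        # estimate 1-sigma uncertainties from the Jacobian. All data are synthetic.
        import numpy as np
        from scipy.optimize import least_squares

        def ltte(params, epochs):
            amp, p3, phi, dT0, dP = params           # days, cycles, rad, days, days
            return dT0 + dP * epochs + amp * np.sin(2 * np.pi * epochs / p3 + phi)

        rng = np.random.default_rng(6)
        epochs = np.sort(rng.uniform(0, 30000, 250))  # orbital cycle numbers
        truth = (0.009, 12000.0, 0.7, 0.001, 2e-7)
        o_minus_c = ltte(truth, epochs) + rng.normal(scale=0.002, size=epochs.size)

        fit = least_squares(lambda p: ltte(p, epochs) - o_minus_c,
                            x0=(0.005, 10000.0, 0.0, 0.0, 0.0))

        # 1-sigma uncertainties from the Jacobian at the optimum.
        dof = epochs.size - fit.x.size
        cov = np.linalg.inv(fit.jac.T @ fit.jac) * (fit.fun @ fit.fun) / dof
        print("amplitude (d):", round(fit.x[0], 4), "+/-", round(float(np.sqrt(cov[0, 0])), 4))
        print("third-body period (cycles):", round(fit.x[1], 1))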

  12. Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.

    PubMed

    Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W

    2014-11-26

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. Copyright © 2014 the authors 0270-6474/14/3416046-12$15.00/0.
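
    A schematic two-variable version of such a dynamic normalization circuit is sketched below: outputs relax toward value inputs divided by a semisaturation constant plus a slower pooled gain that tracks the summed outputs, which yields a phasic-then-sustained profile. The equations, time constants and input values are illustrative assumptions, not the authors' exact model.

        # Schematic dynamic divisive normalization: R_i relaxes toward V_i/(sigma+G),
        # while the pooled gain G slowly tracks sum(R). Forward-Euler integration.
        # All constants are illustrative assumptions.
        import numpy as np

        dt, T = 0.001, 1.0                            # seconds
        steps = int(T / dt)
        tau_R, tau_G, sigma = 0.02, 0.10, 1.0
        V = np.array([8.0, 4.0, 2.0])                 # values of three options

        R = np.zeros(3)
        G = 0.0
        trace = np.zeros((steps, 3))
        for k in range(steps):
            dR = (-R + V / (sigma + G)) / tau_R
            dG = (-G + R.sum()) / tau_G
            R, G = R + dt * dR, G + dt * dG
            trace[k] = R

        print("peak responses:     ", np.round(trace.max(axis=0), 2))
        print("sustained responses:", np.round(trace[-1], 2))   # phasic-to-sustained drop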

  13. Computerized modeling techniques predict the 3D structure of H₄R: facts and fiction.

    PubMed

    Zaid, Hilal; Ismael-Shanak, Siba; Michaeli, Amit; Rayan, Anwar

    2012-01-01

    The functional characterization of proteins presents a daily challenge for biochemical, medical and computational sciences, especially when structures have not been determined empirically, as in the case of the Histamine H4 Receptor (H₄R). H₄R is a member of the GPCR superfamily that plays a vital role in immune and inflammatory responses. To date, the concept of GPCR modeling is highlighted in textbooks and pharmaceutical pamphlets, and this group of proteins has been the subject of almost 3500 publications in the scientific literature. The dynamic nature of GPCR structure determination was elucidated through elegant and creative modeling methodologies, implemented by many groups around the world. H₄R, which belongs to the GPCR family, was cloned in 2000; understandably, its biological activity has been reported only 65 times in PubMed. Here we attempt to cover the fundamental concepts of H₄R structure modeling and its implementation in drug discovery, especially those that have been experimentally tested, and to highlight some ideas currently being discussed regarding the dynamic nature of H₄R and computerized techniques for GPCR 3D structure modeling.

  14. Characterization of computer network events through simultaneous feature selection and clustering of intrusion alerts

    NASA Astrophysics Data System (ADS)

    Chen, Siyue; Leung, Henry; Dondo, Maxwell

    2014-05-01

    As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.
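
    A stripped-down version of the E/M iteration that such a model builds on is sketched below for a diagonal Gaussian mixture over alert feature vectors; it omits the binary feature-saliency variables of the joint feature-selection model, and the synthetic data, cluster count and iteration budget are assumptions.

        # Basic EM for a diagonal Gaussian mixture: E-step computes responsibilities,
        # M-step re-estimates weights, means and variances. Feature-saliency hidden
        # variables are deliberately omitted; data are synthetic.
        import numpy as np

        rng = np.random.default_rng(7)
        X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(4, 1, (300, 4))])
        n, d, k = X.shape[0], X.shape[1], 2

        pi = np.full(k, 1.0 / k)
        mu = X[rng.choice(n, k, replace=False)]
        var = np.ones((k, d))

        for _ in range(50):
            # E-step: responsibilities under diagonal Gaussians (log-domain for stability).
            log_p = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                             + np.log(2 * np.pi * var)).sum(-1) + np.log(pi))
            log_p -= log_p.max(axis=1, keepdims=True)
            resp = np.exp(log_p)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: update mixture weights, means and variances.
            nk = resp.sum(axis=0)
            pi = nk / n
            mu = (resp.T @ X) / nk[:, None]
            var = (resp.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6

        print("cluster weights:", np.round(pi, 2))
        print("cluster means:\n", np.round(mu, 2))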

  15. Implementation of Multispectral Image Classification on a Remote Adaptive Computer

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna

    1999-01-01

    As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor based systems. The automatic classification of spaceborne multispectral images is an example of a computation intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
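
    A software-only sketch of the probabilistic neural network (Parzen-window) classification step is given below; the synthetic band values, class count and kernel width are illustrative assumptions, and none of the FPGA parallelism is modelled.

        # Probabilistic neural network as a Parzen-window classifier: each pixel is
        # assigned the class whose summed Gaussian-kernel score is largest.
        # Training samples and "band" values are synthetic placeholders.
        import numpy as np

        def pnn_classify(train_x, train_y, pixels, sigma=0.3):
            """Assign each pixel the class with the largest kernel-density score."""
            classes = np.unique(train_y)
            scores = np.empty((pixels.shape[0], classes.size))
            for c_idx, c in enumerate(classes):
                ref = train_x[train_y == c]                     # pattern layer
                d2 = ((pixels[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
                scores[:, c_idx] = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)
            return classes[scores.argmax(axis=1)]

        rng = np.random.default_rng(8)
        centers = np.array([[0.2, 0.3, 0.1, 0.4],
                            [0.7, 0.6, 0.8, 0.5],
                            [0.4, 0.9, 0.2, 0.7]])              # three land-cover classes
        train_y = rng.integers(0, 3, 300)
        train_x = centers[train_y] + rng.normal(scale=0.08, size=(300, 4))
        pixels = centers[rng.integers(0, 3, 1000)] + rng.normal(scale=0.08, size=(1000, 4))

        labels = pnn_classify(train_x, train_y, pixels)
        print("classified pixels per class:", np.bincount(labels))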

  16. A neural network based reputation bootstrapping approach for service selection

    NASA Astrophysics Data System (ADS)

    Wu, Quanwang; Zhu, Qingsheng; Li, Peng

    2015-10-01

    With the concept of service-oriented computing becoming widely accepted in enterprise application integration, more and more computing resources are encapsulated as services and published online. Reputation mechanisms have been studied to establish trust in previously unknown services. One of the limitations of current reputation mechanisms is that they cannot assess the reputation of newly deployed services, as no record of their previous behaviours exists. Most current bootstrapping approaches merely assign default reputation values to newcomers; however, with this kind of method, either newcomers or existing services will be favoured. In this paper, we present a novel reputation bootstrapping approach, where correlations between the features and performance of existing services are learned through an artificial neural network (ANN) and then generalised to establish a tentative reputation when evaluating new and unknown services. Reputations of services published previously by the same provider are also incorporated into reputation bootstrapping when available. The proposed reputation bootstrapping approach is seamlessly embedded into an existing reputation model and implemented in the extended service-oriented architecture. Empirical studies of the proposed approach are presented.
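
    The bootstrapping idea, learning a mapping from service features to observed reputation on existing services and applying it to a newcomer with no history, can be sketched as follows; the feature encoding, network size and use of scikit-learn are assumptions for illustration, not the paper's setup.

        # Train a small regression network on (features, observed reputation) pairs
        # of existing services, then predict a tentative reputation for a newcomer.
        # Features, the ground-truth relation and noise are synthetic assumptions.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(9)
        # Features: [response time (s), price, provider tenure (yrs), QoS guarantees]
        features = rng.uniform([0.1, 1, 0, 0], [2.0, 50, 10, 5], size=(400, 4))
        reputation = (0.9 - 0.2 * features[:, 0] + 0.03 * features[:, 2]
                      + 0.04 * features[:, 3]
                      + rng.normal(scale=0.05, size=400)).clip(0, 1)

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
        model.fit(features, reputation)

        newcomer = np.array([[0.3, 12.0, 0.0, 4.0]])   # no transaction history yet
        print("bootstrapped reputation:", round(float(model.predict(newcomer)[0]), 3))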

  17. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.) This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.

  18. An Evolutionary Method for Financial Forecasting in Microscopic High-Speed Trading Environment

    PubMed Central

    Li, Hsu-Chih

    2017-01-01

    The advancement of information technology in financial applications nowadays has led to fast market-driven events that prompt flash decision-making and actions issued by computer algorithms. As a result, today's markets experience intense activity in the highly dynamic environment where trading systems respond to others at a much faster pace than before. This new breed of technology involves the implementation of high-speed trading strategies which generate a significant portion of activity in the financial markets and present researchers with a wealth of information not available in traditional low-speed trading environments. In this study, we aim at developing feasible computational intelligence methodologies, particularly genetic algorithms (GA), to shed light on high-speed trading research using price data of stocks at the microscopic level. Our empirical results show that the proposed GA-based system is able to improve the accuracy of the prediction significantly for price movement, and we expect this GA-based methodology to advance the current state of research for high-speed trading and other relevant financial applications. PMID:28316618
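
    A toy version of the evolutionary search described above is sketched below: a genetic algorithm tunes the two thresholds of a simple momentum rule against tick-level prices. The synthetic price path, fitness definition and GA settings are illustrative assumptions, not the authors' trading system.

        # Toy GA: evolve (buy threshold, sell threshold) of a momentum rule to
        # maximise cumulative next-tick P&L on a synthetic tick-level price path.
        import numpy as np

        rng = np.random.default_rng(10)
        prices = 100 + np.cumsum(rng.normal(0, 0.02, 20000))      # tick-level path

        def fitness(genome):
            buy_th, sell_th = genome
            momentum = np.diff(prices, prepend=prices[0])
            position = np.where(momentum > buy_th, 1,
                                np.where(momentum < -sell_th, -1, 0))
            pnl = position[:-1] * np.diff(prices)                 # next-tick P&L
            return pnl.sum()

        pop = rng.uniform(0.0, 0.1, size=(40, 2))                 # initial population
        for gen in range(30):
            scores = np.array([fitness(g) for g in pop])
            parents = pop[np.argsort(scores)[-10:]]               # selection (top 25%)
            children = parents[rng.integers(0, 10, (30, 2)), [0, 1]]  # uniform crossover
            children += rng.normal(0, 0.005, children.shape)      # mutation
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(g) for g in pop])]
        print("best thresholds:", np.round(best, 4))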

  19. Sessional, Weekly and Diurnal Patterns of Computer Lab Usage by Students Attending a Regional University in Australia

    ERIC Educational Resources Information Center

    Spennemann, Dirk H. R.; Atkinson, John; Cornforth, David

    2007-01-01

    Most universities have invested in extensive infrastructure in the form of computer laboratories and computer kiosks. However, is this investment justified when it is suggested that students work predominantly from home using their own computers? This paper provides an empirical study investigating how students at a regional multi-campus…

  20. A Study of Student-Teachers' Readiness to Use Computers in Teaching: An Empirical Study

    ERIC Educational Resources Information Center

    Padmavathi, M.

    2016-01-01

    This study attempts to analyze student-teachers' attitude towards the use of computers for classroom teaching. Four dimensions of computer attitude on a Likert-type five-point scale were used: Affect (liking), Perceived usefulness, Perceived Control, and Behaviour Intention to use computers. The effect of student-teachers' subject area, years of…

  1. Multi-Angle Implementation of Atmospheric Correction for MODIS (MAIAC). Part 3: Atmospheric Correction

    NASA Technical Reports Server (NTRS)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Hilker, T.; Hall, F.; Sellers, P.; Tucker, J.; Korkin, S.

    2012-01-01

    This paper describes the atmospheric correction (AC) component of the Multi-Angle Implementation of Atmospheric Correction algorithm (MAIAC) which introduces a new way to compute parameters of the Ross-Thick Li-Sparse (RTLS) Bi-directional reflectance distribution function (BRDF), spectral surface albedo and bidirectional reflectance factors (BRF) from satellite measurements obtained by the Moderate Resolution Imaging Spectroradiometer (MODIS). MAIAC uses a time series and spatial analysis for cloud detection, aerosol retrievals and atmospheric correction. It implements a moving window of up to 16 days of MODIS data gridded to 1 km resolution in a selected projection. The RTLS parameters are computed directly by fitting the cloud-free MODIS top of atmosphere (TOA) reflectance data stored in the processing queue. The RTLS retrieval is applied when the land surface is stable or changes slowly. In the case of rapid or large-magnitude change (caused, for instance, by disturbance), MAIAC follows the MODIS operational BRDF/albedo algorithm and uses a scaling approach where the BRDF shape is assumed stable but its magnitude is adjusted based on the latest single measurement. To assess the stability of the surface, MAIAC features a change detection algorithm which analyzes relative change of reflectance in the Red and NIR bands during the accumulation period. To adjust for the reflectance variability with the sun-observer geometry and allow comparison among different days (view geometries), the BRFs are normalized to the fixed view geometry using the RTLS model. An empirical analysis of MODIS data suggests that the RTLS inversion remains robust when the relative change of geometry-normalized reflectance stays below 15%. This first of two papers introduces the algorithm; a second, companion paper illustrates its potential by analyzing MODIS data over a tropical rainforest and assessing errors and uncertainties of MAIAC compared to conventional MODIS products.
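
    A minimal sketch of the linear RTLS inversion step, with the Ross-Thick and Li-Sparse kernel values replaced by random placeholders (in MAIAC they are computed from each day's sun-view geometry, and the observations come from the atmospherically corrected processing queue):

      import numpy as np

      rng = np.random.default_rng(0)

      # Up to 16 days of cloud-free observations of one 1-km grid cell.
      n_obs = 16
      # Placeholder kernel values; the real volumetric (Ross-Thick) and geometric
      # (Li-Sparse) kernels are functions of the sun and view angles on each day.
      k_vol = rng.uniform(-0.1, 0.6, n_obs)
      k_geo = rng.uniform(-1.2, 0.0, n_obs)

      true_k = np.array([0.08, 0.05, 0.02])            # isotropic, volumetric, geometric weights
      A = np.column_stack([np.ones(n_obs), k_vol, k_geo])
      brf = A @ true_k + rng.normal(0, 0.003, n_obs)   # simulated surface BRF observations

      k_iso, k_v, k_g = np.linalg.lstsq(A, brf, rcond=None)[0]  # linear RTLS inversion
      print("retrieved RTLS weights:", round(k_iso, 3), round(k_v, 3), round(k_g, 3))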

  2. Content-based VLE designs improve learning efficiency in constructivist statistics education.

    PubMed

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a content-based design outperforms the traditional VLE-based design.

  3. Fast randomization of large genomic datasets while preserving alteration counts.

    PubMed

    Gobbi, Andrea; Iorio, Francesco; Dawson, Kevin J; Wedge, David C; Tamborero, David; Alexandrov, Ludmil B; Lopez-Bigas, Nuria; Garnett, Mathew J; Jurman, Giuseppe; Saez-Rodriguez, Julio

    2014-09-01

    Studying combinatorial patterns in cancer genomic datasets has recently emerged as a tool for identifying novel cancer driver networks. Approaches have been devised to quantify, for example, the tendency of a set of genes to be mutated in a 'mutually exclusive' manner. The significance of the proposed metrics is usually evaluated by computing P-values under appropriate null models. To this end, a Monte Carlo method (the switching-algorithm) is used to sample simulated datasets under a null model that preserves patient- and gene-wise mutation rates. In this method, a genomic dataset is represented as a bipartite network, to which Markov chain updates (switching-steps) are applied. These steps modify the network topology, and a minimal number of them must be executed to draw simulated datasets independently under the null model. This number has previously been deduced empirically to be a linear function of the total number of variants, making this process computationally expensive. We present a novel approximate lower bound for the number of switching-steps, derived analytically. Additionally, we have developed the R package BiRewire, including new efficient implementations of the switching-algorithm. We illustrate the performance of BiRewire by applying it to large real cancer genomics datasets. We report vast reductions in time requirement, with respect to existing implementations/bounds and equivalent P-value computations. Thus, we propose BiRewire to study statistical properties in genomic datasets, and other data that can be modeled as bipartite networks. BiRewire is available on BioConductor at http://www.bioconductor.org/packages/2.13/bioc/html/BiRewire.html. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
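
    A minimal sketch of one switching-step on a binary patient-by-gene matrix (not the BiRewire implementation; matrix size, density and the number of steps are illustrative). Each accepted step rewires two mutations while leaving every row and column sum, i.e., the patient- and gene-wise mutation rates, unchanged.

      import numpy as np

      def switching_step(M, rng):
          """One Markov-chain switching step on a binary mutation matrix M
          (rows = patients, columns = genes).  Picks two 1-entries (i,j), (k,l)
          and rewires them to (i,l), (k,j) when that keeps the matrix binary."""
          ones = np.argwhere(M == 1)
          (i, j), (k, l) = ones[rng.choice(len(ones), 2, replace=False)]
          if i != k and j != l and M[i, l] == 0 and M[k, j] == 0:
              M[i, j] = M[k, l] = 0
              M[i, l] = M[k, j] = 1
          return M

      rng = np.random.default_rng(0)
      M = (rng.random((50, 200)) < 0.05).astype(int)   # toy genomic dataset
      row, col = M.sum(1).copy(), M.sum(0).copy()
      for _ in range(10 * M.sum()):                    # number of steps: illustrative only
          switching_step(M, rng)
      assert (M.sum(1) == row).all() and (M.sum(0) == col).all()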

  4. The use of wireless laptop computers for computer-assisted learning in pharmacokinetics.

    PubMed

    Munar, Myrna Y; Singh, Harleen; Belle, Donna; Brackett, Carolyn C; Earle, Sandra B

    2006-02-15

    To implement computer-assisted learning workshops into pharmacokinetics courses in a doctor of pharmacy (PharmD) program. Workshops were designed for students to utilize computer software programs on laptop computers to build pharmacokinetic models to predict drug concentrations resulting from various dosage regimens. In addition, students were able to visualize through graphing programs how altering different parameters changed drug concentration-time curves. Surveys were conducted to measure students' attitudes toward computer technology before and after implementation. Finally, traditional examinations were used to evaluate student learning. Doctor of pharmacy students responded favorably to the use of wireless laptop computers in problem-based pharmacokinetic workshops. Eighty-eight percent (n = 61/69) and 82% (n = 55/67) of PharmD students completed surveys before and after computer implementation, respectively. Prior to implementation, 95% of students agreed that computers would enhance learning in pharmacokinetics. After implementation, 98% of students strongly agreed (p < 0.05) that computers enhanced learning. Examination results were significantly higher after computer implementation (89% with computers vs. 84% without computers; p = 0.01). Implementation of wireless laptop computers in a pharmacokinetic course enabled students to construct their own pharmacokinetic models that could respond to changing parameters. Students had greater comprehension and were better able to interpret results and provide appropriate recommendations. Computer-assisted pharmacokinetic techniques can be powerful tools when making decisions about drug therapy.
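
    A minimal sketch of the kind of model the students built, assuming a one-compartment oral-dosing model with illustrative (not drug-specific) parameters; superposition of single doses compares different dosage regimens.

      import numpy as np

      def concentration(t, dose=500.0, F=0.9, ka=1.2, ke=0.17, Vd=35.0):
          """Plasma concentration (mg/L) after a single oral dose for a
          one-compartment model with first-order absorption and elimination.
          All parameter values here are illustrative, not drug-specific."""
          return (F * dose * ka) / (Vd * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

      t = np.linspace(0, 24, 97)            # hours
      for interval in (8, 12, 24):          # compare dosage regimens by superposition
          doses = np.arange(0, 24, interval)
          c = sum(np.where(t >= d, concentration(np.clip(t - d, 0, None)), 0.0) for d in doses)
          print(f"q{interval}h: peak {c.max():.1f} mg/L, 24-h trough {c[-1]:.1f} mg/L")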

  5. The Use of Wireless Laptop Computers for Computer-Assisted Learning in Pharmacokinetics

    PubMed Central

    Munar, Myrna Y.; Singh, Harleen; Belle, Donna; Brackett, Carolyn C.; Earle, Sandra B.

    2006-01-01

    Objective To implement computer-assisted learning workshops into pharmacokinetics courses in a doctor of pharmacy (PharmD) program. Design Workshops were designed for students to utilize computer software programs on laptop computers to build pharmacokinetic models to predict drug concentrations resulting from various dosage regimens. In addition, students were able to visualize through graphing programs how altering different parameters changed drug concentration-time curves. Surveys were conducted to measure students’ attitudes toward computer technology before and after implementation. Finally, traditional examinations were used to evaluate student learning. Assessment Doctor of pharmacy students responded favorably to the use of wireless laptop computers in problem-based pharmacokinetic workshops. Eighty-eight percent (n = 61/69) and 82% (n = 55/67) of PharmD students completed surveys before and after computer implementation, respectively. Prior to implementation, 95% of students agreed that computers would enhance learning in pharmacokinetics. After implementation, 98% of students strongly agreed (p < 0.05) that computers enhanced learning. Examination results were significantly higher after computer implementation (89% with computers vs. 84% without computers; p = 0.01). Conclusion Implementation of wireless laptop computers in a pharmacokinetic course enabled students to construct their own pharmacokinetic models that could respond to changing parameters. Students had greater comprehension and were better able to interpret results and provide appropriate recommendations. Computer-assisted pharmacokinetic techniques can be powerful tools when making decisions about drug therapy. PMID:17136147

  6. Implementation Issues of Adaptive Energy Detection in Heterogeneous Wireless Networks

    PubMed Central

    Sobron, Iker; Eizmendi, Iñaki; Martins, Wallace A.; Diniz, Paulo S. R.; Ordiales, Juan Luis; Velez, Manuel

    2017-01-01

    Spectrum sensing (SS) enables the coexistence of non-coordinated heterogeneous wireless systems operating in the same band. Due to its computational simplicity, the energy detection (ED) technique has been widely employed in SS applications; nonetheless, the conventional ED may be unreliable under environmental impairments, justifying the use of ED-based variants. Assessing ED algorithms from theoretical and simulation viewpoints relies on several assumptions and simplifications which, eventually, lead to conclusions that do not necessarily meet the requirements imposed by real propagation environments. This work addresses those problems by dealing with practical implementation issues of adaptive least mean square (LMS)-based ED algorithms. The paper proposes a new adaptive ED algorithm that uses a variable step-size guaranteeing the LMS convergence in time-varying environments. Several implementation guidelines are provided and, additionally, an empirical assessment and validation with software-defined-radio-based hardware is carried out. Experimental results show good performance in terms of probabilities of detection (Pd>0.9) and false alarm (Pf∼0.05) in a range of low signal-to-noise ratios around [-4,1] dB, in both single-node and cooperative modes. The proposed sensing methodology enables a seamless monitoring of the radio electromagnetic spectrum in order to provide band occupancy information for an efficient usage among several wireless communications systems. PMID:28441751
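
    A toy sketch of the general idea, not the published algorithm: a one-tap LMS estimator with a heuristic variable step size tracks the noise-floor energy, and the energy-detection statistic of each sensing window is compared against a margin above that estimate (all constants are illustrative).

      import numpy as np

      rng = np.random.default_rng(3)
      N = 256                                   # samples per sensing window

      def window(signal_amp):
          """One real-valued sensing window: unit-power Gaussian noise plus an optional tone."""
          return rng.normal(0, 1.0, N) + signal_amp * np.sin(2 * np.pi * 0.1 * np.arange(N))

      # Variable step-size LMS tracking of the noise-floor energy (toy rule: the step
      # size grows with the instantaneous error and shrinks in steady state; the
      # published algorithm's step-size update is more elaborate).
      est, mu, mu_min, mu_max = 1.5, 0.3, 0.01, 0.3
      for _ in range(200):                      # signal-free windows used for adaptation
          e = np.mean(window(0.0) ** 2) - est
          mu = np.clip(0.9 * mu + 0.1 * abs(e), mu_min, mu_max)
          est += mu * e

      threshold = 1.3 * est                     # detection margin chosen for illustration only
      occupied = np.mean(window(1.0) ** 2) > threshold   # tone at roughly -3 dB SNR
      vacant = np.mean(window(0.0) ** 2) > threshold
      print(f"noise floor {est:.2f}, detects tone: {occupied}, false alarm on noise: {vacant}")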

  7. Implementation of the MEPDG for flexible pavements in Idaho.

    DOT National Transportation Integrated Search

    2012-05-01

    This study was conducted to assist the Idaho Transportation Department (ITD) in the implementation of the Mechanistic-Empirical Pavement Design Guide (MEPDG) for flexible pavements. The main research work in this study focused on establishing a mater...

  8. A Model of Therapist Competencies for the Empirically Supported Interpersonal Psychotherapy for Adolescent Depression

    ERIC Educational Resources Information Center

    Sburlati, Elizabeth S.; Lyneham, Heidi J.; Mufson, Laura H.; Schniering, Carolyn A.

    2012-01-01

    In order to treat adolescent depression, a number of empirically supported treatments (ESTs) have been developed from both the cognitive behavioral therapy (CBT) and interpersonal psychotherapy (IPT-A) frameworks. Research has shown that in order for these treatments to be implemented in routine clinical practice (RCP), effective therapist…

  9. Theoretical Implementations of Various Mobile Applications Used in English Language Learning

    ERIC Educational Resources Information Center

    Small, Melissa

    2014-01-01

    This review of the theoretical framework for Mastery Learning Theory and Sense of Community theories is provided in conjunction with a review of the literature for mobile technology in relation to language learning. Although empirical research is minimal for mobile phone technology as an aid for language learning, the empirical research that…

  10. A Review of Empirical Evidence on Scaffolding for Science Education

    ERIC Educational Resources Information Center

    Lin, Tzu-Chiang; Hsu, Ying-Shao; Lin, Shu-Sheng; Changlai, Maio-Li; Yang, Kun-Yuan; Lai, Ting-Ling

    2012-01-01

    This content analysis of articles in the Social Science Citation Index journals from 1995 to 2009 was conducted to provide science educators with empirical evidence regarding the effects of scaffolding on science learning. It clarifies the definition, design, and implementation of scaffolding in science classrooms and research studies. The results…

  11. Empirical Histograms in Item Response Theory with Ordinal Data

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2007-01-01

    The purpose of this research is to describe, test, and illustrate a new implementation of the empirical histogram (EH) method for ordinal items. The EH method involves the estimation of item response model parameters simultaneously with the approximation of the distribution of the random latent variable (theta) as a histogram. Software for the EH…

  12. The Role of Empirical Evidence in Modeling Speech Segmentation

    ERIC Educational Resources Information Center

    Phillips, Lawrence

    2015-01-01

    Choosing specific implementational details is one of the most important aspects of creating and evaluating a model. In order to properly model cognitive processes, choices for these details must be made based on empirical research. Unfortunately, modelers are often forced to make decisions in the absence of relevant data. My work investigates the…

  13. The Theoretical and Empirical Basis for Meditation as an Intervention for PTSD

    ERIC Educational Resources Information Center

    Lang, Ariel J.; Strauss, Jennifer L.; Bomyea, Jessica; Bormann, Jill E.; Hickman, Steven D.; Good, Raquel C.; Essex, Michael

    2012-01-01

    In spite of the existence of good empirically supported treatments for posttraumatic stress disorder (PTSD), consumers and providers continue to ask for more options for managing this common and often chronic condition. Meditation-based approaches are being widely implemented, but there is minimal research rigorously assessing their effectiveness.…

  14. Sending Scholarship Students Abroad in Ottoman Empire

    ERIC Educational Resources Information Center

    Kulaç, Onur; Özgür, Hüseyin

    2017-01-01

    The practice of sending scholarship students abroad, begun in the 19th century under Sultan Selim III in the Ottoman Empire and continued under later Sultans, became a significant reference point for Turkey's scholarship-abroad policy. The students, who were at first sent abroad mainly for military training, were sent to…

  15. Preparation for implementation of the mechanistic-empirical pavement design guide in Michigan, part 3 : local calibration and validation of the pavement-ME performance models.

    DOT National Transportation Integrated Search

    2014-11-01

    The main objective of Part 3 was to locally calibrate and validate the mechanistic-empirical pavement : design guide (Pavement-ME) performance models to Michigan conditions. The local calibration of the : performance models in the Pavement-ME is a ch...

  16. Deriving Empirically-Based Design Guidelines for Advanced Learning Technologies that Foster Disciplinary Comprehension

    ERIC Educational Resources Information Center

    Poitras, Eric; Trevors, Gregory

    2012-01-01

    Planning, conducting, and reporting leading-edge research requires professionals who are capable of highly skilled reading. This study reports the development of an empirically informed computer-based learning environment designed to foster the acquisition of reading comprehension strategies that mediate expertise in the social sciences. Empirical…

  17. An Empirical Investigation into Programming Language Syntax

    ERIC Educational Resources Information Center

    Stefik, Andreas; Siebert, Susanna

    2013-01-01

    Recent studies in the literature have shown that syntax remains a significant barrier to novice computer science students in the field. While this syntax barrier is known to exist, whether and how it varies across programming languages has not been carefully investigated. For this article, we conducted four empirical studies on programming…

  18. Agent-Based Models in Empirical Social Research

    ERIC Educational Resources Information Center

    Bruch, Elizabeth; Atwell, Jon

    2015-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first…

  19. Evoking Knowledge and Information Awareness for Enhancing Computer-Supported Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Engelmann, Tanja; Tergan, Sigmar-Olaf; Hesse, Friedrich W.

    2010-01-01

    Computer-supported collaboration by spatially distributed group members still involves interaction problems within the group. This article presents an empirical study investigating the question of whether computer-supported collaborative problem solving by spatially distributed group members can be fostered by evoking knowledge and information…

  20. Causal diagrams for empirical legal research: a methodology for identifying causation, avoiding bias and interpreting results

    PubMed Central

    VanderWeele, Tyler J.; Staudt, Nancy

    2014-01-01

    In this paper we introduce methodology—causal directed acyclic graphs—that empirical researchers can use to identify causation, avoid bias, and interpret empirical results. This methodology has become popular in a number of disciplines, including statistics, biostatistics, epidemiology and computer science, but has yet to appear in the empirical legal literature. Accordingly we outline the rules and principles underlying this new methodology and then show how it can assist empirical researchers through both hypothetical and real-world examples found in the extant literature. While causal directed acyclic graphs are certainly not a panacea for all empirical problems, we show they have potential to make the most basic and fundamental tasks, such as selecting covariate controls, relatively easy and straightforward. PMID:25685055
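
    A minimal simulation of the core point about selecting covariate controls, under an assumed DAG Z -> X, Z -> Y, X -> Y: the graph identifies Z as the confounder that must be controlled for an unbiased estimate of the X -> Y effect (all coefficients are made up).

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # Assumed DAG:  Z -> X,  Z -> Y,  X -> Y  (Z confounds the X -> Y relationship).
      Z = rng.normal(size=n)
      X = 0.8 * Z + rng.normal(size=n)
      Y = 1.5 * X + 2.0 * Z + rng.normal(size=n)   # true causal effect of X on Y is 1.5

      def ols_slope_for_X(design):
          beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
          return beta[1]                            # coefficient on X (column 1)

      ones = np.ones(n)
      naive = ols_slope_for_X(np.column_stack([ones, X]))        # no controls: biased
      adjusted = ols_slope_for_X(np.column_stack([ones, X, Z]))  # DAG says: control for Z
      print(f"naive estimate {naive:.2f} vs back-door adjusted estimate {adjusted:.2f} (truth 1.5)")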

  1. Health Status and Health Dynamics in an Empirical Model of Expected Longevity*

    PubMed Central

    Benítez-Silva, Hugo; Ni, Huan

    2010-01-01

    Expected longevity is an important factor influencing older individuals’ decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman (1972), has directly linked longevity with characteristics, behaviors, and decisions by utility maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time. And another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes and even the lagged measures of self-reported health as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics. PMID:18187217

  2. Machine learning strategies for systems with invariance properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan

    Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.
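
    A toy comparison of the two strategies described, assuming a rotation-invariant target and scikit-learn random forests (an illustrative model choice, not the paper's turbulence or elasticity setups): one model is trained on an invariant input basis (the radius), the other on rotation-augmented raw inputs.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)

      def target(xy):                      # rotation-invariant quantity of interest
          return np.sin(np.linalg.norm(xy, axis=1))

      X_raw = rng.uniform(-2, 2, (400, 2))
      y = target(X_raw)

      # Method 1: embed the invariance by training on an invariant input basis.
      inv_model = RandomForestRegressor(n_estimators=100, random_state=0)
      inv_model.fit(np.linalg.norm(X_raw, axis=1, keepdims=True), y)

      # Method 2: train on the raw inputs augmented with random rotations of each sample.
      angles = rng.uniform(0, 2 * np.pi, 5)
      rots = [np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]) for a in angles]
      X_aug = np.concatenate([X_raw @ R.T for R in rots])
      aug_model = RandomForestRegressor(n_estimators=100, random_state=0)
      aug_model.fit(X_aug, np.tile(y, len(rots)))   # labels unchanged: target is invariant

      X_test = rng.uniform(-2, 2, (200, 2))
      err_inv = np.mean((inv_model.predict(np.linalg.norm(X_test, axis=1, keepdims=True)) - target(X_test)) ** 2)
      err_aug = np.mean((aug_model.predict(X_test) - target(X_test)) ** 2)
      print(f"MSE with invariant inputs {err_inv:.4f} vs with rotation augmentation {err_aug:.4f}")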

  3. Machine learning strategies for systems with invariance properties

    DOE PAGES

    Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan

    2016-05-06

    Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.

  4. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  5. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical.

    PubMed

    Baaquie, Belal E; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  6. Determinants of quality management systems implementation in hospitals.

    PubMed

    Wardhani, Viera; Utarini, Adi; van Dijk, Jitse Pieter; Post, Doeke; Groothoff, Johan Willem

    2009-03-01

    To identify the problems and facilitating factors in the implementation of quality management system (QMS) in hospitals through a systematic review. A search strategy was performed on the Medline database for articles written in English published between 1992 and early 2006. Using the thesaurus terms 'Total Quality Management' and 'Quality Assurance Health Care', combined with the term 'hospital' and 'implement*', we identified 533 publications. The screening process was based on empirical articles describing organization-wide QMS implementation. Fourteen empirical articles fulfilled the inclusion criteria and were reviewed in this paper. An organization culture emphasizing standards and values associated with affiliation, teamwork and innovation, assumption of change and risk taking emerges as the key success factor in QMS implementation. This culture needs to be supported by sufficient technical competence to apply a scientific problem-solving approach. A clear distribution of QMS function within the organizational structure is more important than establishing a formal quality structure. In addition to management leadership, physician involvement also plays an important role in implementing QMS. Six supporting and limiting factors determining QMS implementation are identified in this review. These are the organization culture, design, leadership for quality, physician involvement, quality structure and technical competence.

  7. Empirical algorithms for ocean optics parameters

    NASA Astrophysics Data System (ADS)

    Smart, Jeffrey H.

    2007-06-01

    As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.

  8. The theory of reasoned action as parallel constraint satisfaction: towards a dynamic computational model of health behavior.

    PubMed

    Orr, Mark G; Thrush, Roxanne; Plaut, David C

    2013-01-01

    The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual's pre-existing belief structure and the beliefs of others in the individual's social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics.
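
    A minimal sketch of the constraint-satisfaction idea, not the authors' trained network: belief units with hand-picked symmetric weights settle to a stable state, and the intention unit's final activation shifts with the external social-context input.

      import numpy as np

      def settle(weights, external, steps=60, gain=0.2):
          """Parallel constraint satisfaction: each unit drifts toward the squashed
          weighted sum of the other units plus any external (social-context) input."""
          a = np.zeros(len(weights))
          for _ in range(steps):
              a = (1 - gain) * a + gain * np.tanh(weights @ a + external)
          return a

      # Units: 0-2 behavioural beliefs, 3 = behavioural intention.  These symmetric
      # weights stand in for what the published model acquires through learning.
      W = np.array([[ 0.0,  0.5, -0.3,  0.8],
                    [ 0.5,  0.0, -0.2,  0.6],
                    [-0.3, -0.2,  0.0, -0.7],
                    [ 0.8,  0.6, -0.7,  0.0]])

      alone = settle(W, external=np.array([0.5, 0.3, 0.4, 0.0]))
      with_peers = settle(W, external=np.array([0.5, 0.3, 0.4, 0.9]))  # supportive social context
      print(f"intention alone {alone[3]:+.2f} vs under social influence {with_peers[3]:+.2f}")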

  9. The locus of serial processing in reading aloud: orthography-to-phonology computation or speech planning?

    PubMed

    Mousikou, Petroula; Rastle, Kathleen; Besner, Derek; Coltheart, Max

    2015-07-01

    Dual-route theories of reading posit that a sublexical reading mechanism that operates serially and from left to right is involved in the orthography-to-phonology computation. These theories attribute the masked onset priming effect (MOPE) and the phonological Stroop effect (PSE) to the serial left-to-right operation of this mechanism. However, both effects may arise during speech planning, in the phonological encoding process, which also occurs serially and from left to right. In the present paper, we sought to determine the locus of serial processing in reading aloud by testing the contrasting predictions that the dual-route and speech planning accounts make in relation to the MOPE and the PSE. The results from three experiments that used the MOPE and the PSE paradigms in English are inconsistent with the idea that these effects arise during speech planning, and consistent with the claim that a sublexical serially operating reading mechanism is involved in the print-to-sound translation. Simulations of the empirical data on the MOPE with the dual route cascaded (DRC) and connectionist dual process (CDP++) models, which are computational implementations of the dual-route theory of reading, provide further support for the dual-route account. (c) 2015 APA, all rights reserved.

  10. The Theory of Reasoned Action as Parallel Constraint Satisfaction: Towards a Dynamic Computational Model of Health Behavior

    PubMed Central

    Orr, Mark G.; Thrush, Roxanne; Plaut, David C.

    2013-01-01

    The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual’s pre-existing belief structure and the beliefs of others in the individual’s social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics. PMID:23671603

  11. Rethinking our approach to gender and disasters: Needs, responsibilities, and solutions.

    PubMed

    Montano, Samantha; Savitt, Amanda

    2016-01-01

    To explore how the existing literature has discussed the vulnerability and needs of women in a disaster context. It will consider the literature's suggestions of how to minimize vulnerability and address the needs of women, including who involved in emergency management should be responsible for such efforts. Empirical journal articles and book chapters from disaster literature were collected that focused on "women" or "gender," and their results and recommendations were analyzed. This review found existing empirical research on women during disasters focuses on their vulnerabilities more than their needs. Second, when researchers do suggest solutions, they tend not to be comprehensive or supported by empirical evidence. Finally, it is not clear from existing research who is responsible for addressing these needs and implementing solutions. Future research should study the intersection of gender and disasters in terms of needs and solutions including who is responsible for implementing solutions.

  12. Assessing the Claims of Participatory Measurement, Reporting and Verification (PMRV) in Achieving REDD+ Outcomes: A Systematic Review

    PubMed Central

    Hawthorne, Sandra; Boissière, Manuel; Felker, Mary Elizabeth; Atmadja, Stibniati

    2016-01-01

    Participation of local communities in the Measurement, Reporting and Verification (MRV) of forest changes has been promoted as a strategy that lowers the cost of MRV and increases their engagement with REDD+. This systematic review of literature assessed the claims of participatory MRV (PMRV) in achieving REDD+ outcomes. We identified 29 PMRV publications that consisted of 20 peer-reviewed and 9 non peer-reviewed publications, with 14 publications being empirically based studies. The evidence supporting PMRV claims was categorized into empirical finding, citation or assumption. Our analysis of the empirical studies showed that PMRV projects were conducted in 17 countries in three tropical continents and across various forest and land tenure types. Most of these projects tested the feasibility of participatory measurement or monitoring, which limited the participation of local communities to data gathering. PMRV claims of providing accurate local biomass measurements and lowering MRV cost were well-supported with empirical evidence. Claims that PMRV supports REDD+ social outcomes that affect local communities directly, such as increased environmental awareness and equity in benefit sharing, were supported with less empirical evidence than REDD+ technical outcomes. This may be due to the difficulties in measuring social outcomes and the slow progress in the development and implementation of REDD+ components outside of experimental research contexts. Although lessons from other monitoring contexts have been used to support PMRV claims, they are only applicable when the enabling conditions can be replicated in REDD+ contexts. There is a need for more empirical evidence to support PMRV claims on achieving REDD+ social outcomes, which may be addressed with more opportunities and rigorous methods for assessing REDD+ social outcomes. Integrating future PMRV studies into local REDD+ implementations may help create those opportunities, while increasing the participation of local communities as local REDD+ stakeholders. Further development and testing of participatory reporting framework are required to integrate PMRV data with the national database. Publication of empirical PMRV studies is encouraged to guide when, where and how PMRV should be implemented. PMID:27812110

  13. Interactive activation and mutual constraint satisfaction in perception and cognition.

    PubMed

    McClelland, James L; Mirman, Daniel; Bolger, Donald J; Khaitan, Pranav

    2014-08-01

    In a seminal 1977 article, Rumelhart argued that perception required the simultaneous use of multiple sources of information, allowing perceivers to optimally interpret sensory information at many levels of representation in real time as information arrives. Building on Rumelhart's arguments, we present the Interactive Activation hypothesis-the idea that the mechanism used in perception and comprehension to achieve these feats exploits an interactive activation process implemented through the bidirectional propagation of activation among simple processing units. We then examine the interactive activation model of letter and word perception and the TRACE model of speech perception, as early attempts to explore this hypothesis, and review the experimental evidence relevant to their assumptions and predictions. We consider how well these models address the computational challenge posed by the problem of perception, and we consider how consistent they are with evidence from behavioral experiments. We examine empirical and theoretical controversies surrounding the idea of interactive processing, including a controversy that swirls around the relationship between interactive computation and optimal Bayesian inference. Some of the implementation details of early versions of interactive activation models caused deviation from optimality and from aspects of human performance data. More recent versions of these models, however, overcome these deficiencies. Among these is a model called the multinomial interactive activation model, which explicitly links interactive activation and Bayesian computations. We also review evidence from neurophysiological and neuroimaging studies supporting the view that interactive processing is a characteristic of the perceptual processing machinery in the brain. In sum, we argue that a computational analysis, as well as behavioral and neuroscience evidence, all support the Interactive Activation hypothesis. The evidence suggests that contemporary versions of models based on the idea of interactive activation continue to provide a basis for efforts to achieve a fuller understanding of the process of perception. Copyright © 2014 Cognitive Science Society, Inc.

  14. Student Perceptions in the Design of a Computer Card Game for Learning Computer Literacy Issues: A Case Study

    ERIC Educational Resources Information Center

    Kordaki, Maria; Papastergiou, Marina; Psomos, Panagiotis

    2016-01-01

    The aim of this work was twofold. First, an empirical study was designed aimed at investigating the perceptions that entry-level non-computing majors--namely Physical Education and Sport Science (PESS) undergraduate students--hold about basic Computer Literacy (CL) issues. The participants were 90 first-year PESS students, and their perceptions…

  15. Characteristics and Differences of Lifelong Learning Policy Implementation for the Elderly in Thailand

    ERIC Educational Resources Information Center

    Dhirathiti, Nopraenue S.; Pichitpatja, Pojjana

    2018-01-01

    The study examined the process of policy implementation of lifelong learning for the elderly in Thailand, covering four main regions within the country. The study empirically compared inputs, processes, outputs, and outcomes of policy implementation in the north, south, northeast, and central regions of Thailand and captured the rigor of policy…

  16. Pebbles, Rocks, and Boulders: The Implementation of a School-Based Social Engagement Intervention for Children with Autism

    ERIC Educational Resources Information Center

    Locke, Jill; Wolk, Courtney Benjamin; Harker, Colleen; Olsen, Anne; Shingledecker, Travis; Barg, Frances; Mandell, David; Beidas, Rinad

    2017-01-01

    Few evidence-based practices, defined as the use of empirically supported research and clinical expertise for children with autism, have been successfully implemented and sustained in schools. This study examined the perspectives of school personnel (n = 39) on implementing a social engagement intervention for children with autism. Semi-structured…

  17. Implementation Fidelity of MyTeachingPartner Literacy and Language Activities: Association with Preschoolers' Language and Literacy Growth

    ERIC Educational Resources Information Center

    Hamre, Bridget K.; Justice, Laura M.; Pianta, Robert C.; Kilday, Carolyn; Sweeney, Beverly; Downer, Jason T.; Leach, Allison

    2010-01-01

    There is surprisingly little empirical research examining issues of fidelity of implementation within the early childhood education literature. In the MyTeachingPartner project, 154 teachers were provided with materials to implement a supplemental classroom curriculum addressing six aspects of literacy and language development. The present study…

  18. Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort.

    PubMed

    Vassena, Eliana; Holroyd, Clay B; Alexander, William H

    2017-01-01

    In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.

  19. A universal preconditioner for simulating condensed phase materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Packwood, David; Ortner, Christoph, E-mail: c.ortner@warwick.ac.uk; Kermode, James, E-mail: j.r.kermode@warwick.ac.uk

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
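
    A minimal sketch of the general idea, assuming an illustrative form (a graph Laplacian over the cutoff neighbour list plus a small diagonal stabiliser) rather than the published parametrisation; the factorized matrix is then applied to a gradient the way a preconditioner would be used inside a geometry optimiser.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import splu

      def neighbourhood_preconditioner(positions, r_cut=3.0, mu=1.0, stab=0.01):
          """Sparse preconditioner built from the neighbour list: a graph Laplacian
          over all atom pairs closer than r_cut, plus a small diagonal stabiliser.
          (Illustrative coefficients, not the published parametrisation.)"""
          n = len(positions)
          d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
          adj = sp.csr_matrix(((d < r_cut) & (d > 0)).astype(float))
          lap = sp.diags(np.asarray(adj.sum(axis=1)).ravel()) - adj
          return splu((mu * lap + stab * sp.identity(n)).tocsc())

      rng = np.random.default_rng(0)
      pos = rng.uniform(0, 10, (200, 3))
      grad = rng.normal(size=200)           # per-atom scalar stand-in for a force component
      P = neighbourhood_preconditioner(pos)
      step = P.solve(grad)                  # preconditioned descent direction
      print("raw vs preconditioned step norm:", np.linalg.norm(grad), np.linalg.norm(step))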

  20. Obscure Severe Infrarenal Aortoiliac Stenosis With Severe Transient Lactic Acidosis

    PubMed Central

    Nantsupawat, Teerapat; Mankongpaisarnrung, Charoen; Soontrapa, Suthipong; Limsuwat, Chok

    2013-01-01

    A 57-year-old man presented with sudden onset of leg pain, right-sided weakness, aphasia, confusion, drooling, and severe lactic acidosis (15 mmol/L). He had normal peripheral pulses and demonstrated no pain, pallor, poikilothermia, paresthesia, or paralysis. Empiric antibiotics, aspirin, full-dose enoxaparin, and intravenous fluid were initiated. Lactic acid level decreased to 2.5 mmol/L. The patient was subsequently extubated and was alert and oriented with no complaints of leg or abdominal pain. Unexpectedly, the patient developed cardiac arrest, rebound severe lactic acidosis (8.13 mmol/L), and signs of acute limb ischemia. Emergent computed tomography of the aorta confirmed infrarenal aortoiliac thrombosis. Transient leg pain and transient severe lactic acidosis can be unusual presentations of severe infrarenal aortoiliac stenosis. When in doubt, vascular studies should be implemented without delay to identify this catastrophic diagnosis. PMID:26425569

  1. Dispersion correction derived from first principles for density functional theory and Hartree-Fock theory.

    PubMed

    Guidez, Emilie B; Gordon, Mark S

    2015-03-12

    The modeling of dispersion interactions in density functional theory (DFT) is commonly performed using an energy correction that involves empirically fitted parameters for all atom pairs of the system investigated. In this study, the first-principles-derived dispersion energy from the effective fragment potential (EFP) method is implemented for the density functional theory (DFT-D(EFP)) and Hartree-Fock (HF-D(EFP)) energies. Overall, DFT-D(EFP) performs similarly to the semiempirical DFT-D corrections for the test cases investigated in this work. HF-D(EFP) tends to underestimate binding energies and overestimate intermolecular equilibrium distances, relative to coupled cluster theory, most likely due to incomplete accounting for electron correlation. Overall, this first-principles dispersion correction yields results that are in good agreement with coupled-cluster calculations at a low computational cost.
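
    For orientation, a sketch of the generic damped pairwise dispersion correction that such schemes add to the DFT or HF energy; the C6 coefficients, combination rule and damping parameters below are illustrative placeholders, not the EFP-derived values used in the paper.

      import numpy as np

      def dispersion_energy(coords, c6, d=20.0, r0=3.0):
          """Generic damped pairwise dispersion correction
              E_disp = -sum_{i<j} f(R_ij) * C6_ij / R_ij**6,
          with a Fermi-type damping f(R) = 1 / (1 + exp(-d*(R/r0 - 1))).
          Coefficients and damping parameters here are illustrative only."""
          e = 0.0
          n = len(coords)
          for i in range(n):
              for j in range(i + 1, n):
                  r = np.linalg.norm(coords[i] - coords[j])
                  c6_ij = np.sqrt(c6[i] * c6[j])                 # simple combination rule
                  f = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))  # switch off at short range
                  e -= f * c6_ij / r ** 6
          return e

      # Two stacked three-site "molecules" (arbitrary geometry, atomic units).
      coords = np.array([[0, 0, 0], [1.4, 0, 0], [2.8, 0, 0],
                         [0, 0, 6.0], [1.4, 0, 6.0], [2.8, 0, 6.0]], dtype=float)
      c6 = np.full(6, 15.0)
      print("dispersion correction (toy parameters):", dispersion_energy(coords, c6))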

  2. Quantum optimization for training support vector machines.

    PubMed

    Anguita, Davide; Ridella, Sandro; Rivieccio, Fabio; Zunino, Rodolfo

    2003-01-01

    Refined concepts, such as Rademacher estimates of model complexity and nonlinear criteria for weighting empirical classification errors, represent recent and promising approaches to characterize the generalization ability of Support Vector Machines (SVMs). The advantages of those techniques lie in both improving the SVM representation ability and yielding tighter generalization bounds. On the other hand, they often make Quadratic-Programming algorithms no longer applicable, and SVM training cannot benefit from efficient, specialized optimization techniques. The paper considers the application of Quantum Computing to solve the problem of effective SVM training, especially in the case of digital implementations. The presented research compares the behavioral aspects of conventional and enhanced SVMs; experiments on both synthetic and real-world problems support the theoretical analysis. At the same time, the related differences between Quadratic-Programming and Quantum-based optimization techniques are considered.

  3. An Index and Test of Linear Moderated Mediation.

    PubMed

    Hayes, Andrew F

    2015-01-01

    I describe a test of linear moderated mediation in path analysis based on an interval estimate of the parameter of a function linking the indirect effect to values of a moderator, a parameter that I call the index of moderated mediation. This test can be used for models that integrate moderation and mediation in which the relationship between the indirect effect and the moderator is estimated as linear, including many of the models described by Edwards and Lambert (2007) and Preacher, Rucker, and Hayes (2007) as well as extensions of these models to processes involving multiple mediators operating in parallel or in serial. Generalization of the method to latent variable models is straightforward. Three empirical examples describe the computation of the index and the test, and its implementation is illustrated using Mplus and the PROCESS macro for SPSS and SAS.
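
    A minimal sketch of the computation on simulated data, assuming first-stage moderation only (M depends on X, W and their product; Y depends on X and M): the index is the product of the X*W coefficient and the M-to-Y coefficient, and its percentile bootstrap interval provides the test.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 500
      X = rng.normal(size=n)
      W = rng.normal(size=n)                                      # moderator
      M = 0.4 * X + 0.2 * W + 0.3 * X * W + rng.normal(size=n)    # a3 = 0.3
      Y = 0.5 * M + 0.1 * X + rng.normal(size=n)                  # b = 0.5, so true index = 0.15

      def index_of_modmed(X, W, M, Y):
          """Index of moderated mediation a3*b when the X -> M path is linearly
          moderated by W and the M -> Y path is unmoderated (illustrative setup)."""
          ones = np.ones(len(X))
          a = np.linalg.lstsq(np.column_stack([ones, X, W, X * W]), M, rcond=None)[0]
          b = np.linalg.lstsq(np.column_stack([ones, X, M]), Y, rcond=None)[0]
          return a[3] * b[2]

      boot = []
      for _ in range(2000):                                       # percentile bootstrap
          idx = rng.integers(0, n, n)
          boot.append(index_of_modmed(X[idx], W[idx], M[idx], Y[idx]))
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"index = {index_of_modmed(X, W, M, Y):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")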

  4. Comparative study of internet cloud and cloudlet over wireless mesh networks for real-time applications

    NASA Astrophysics Data System (ADS)

    Khan, Kashif A.; Wang, Qi; Luo, Chunbo; Wang, Xinheng; Grecos, Christos

    2014-05-01

    Mobile cloud computing is gaining worldwide momentum for ubiquitous on-demand cloud services for mobile users provided by Amazon, Google etc. with low capital cost. However, Internet-centric clouds introduce wide area network (WAN) delays that are often intolerable for real-time applications such as video streaming. One promising approach to addressing this challenge is to deploy decentralized mini-cloud facilities known as cloudlets to enable localized cloud services. When supported by local wireless connectivity, a wireless cloudlet is expected to offer low cost and high performance cloud services for the users. In this work, we implement a realistic framework that comprises both a popular Internet cloud (Amazon Cloud) and a real-world cloudlet (based on Ubuntu Enterprise Cloud (UEC)) for mobile cloud users in a wireless mesh network. We focus on real-time video streaming over the HTTP standard and implement a typical application. We further perform a comprehensive comparative analysis and empirical evaluation of the application's performance when it is delivered over the Internet cloud and the cloudlet respectively. The study quantifies the influence of the two different cloud networking architectures on supporting real-time video streaming. We also enable movement of the users in the wireless mesh network and investigate the effect of users' mobility on mobile cloud computing over the cloudlet and Amazon cloud respectively. Our experimental results demonstrate the advantages of the cloudlet paradigm over its Internet cloud counterpart in supporting the quality of service of real-time applications.

  5. Integrated, Not Isolated: Defining Typological Proximity in an Integrated Multilingual Architecture

    PubMed Central

    Putnam, Michael T.; Carlson, Matthew; Reitter, David

    2018-01-01

    On the surface, bi- and multilingualism would seem to be an ideal context for exploring questions of typological proximity. The obvious intuition is that the more closely related two languages are, the easier it should be to implement the two languages in one mind. This is the starting point adopted here, but we immediately run into the difficulty that the overwhelming majority of cognitive, computational, and linguistic research on bi- and multilingualism exhibits a monolingual bias (i.e., where monolingual grammars are used as the standard of comparison for outputs from bilingual grammars). The primary questions so far have focused on how bilinguals balance and switch between their two languages, but our perspective on typology leads us to consider the nature of bi- and multi-lingual systems as a whole. Following an initial proposal from Hsin (2014), we conjecture that bilingual grammars are neither isolated, nor (completely) conjoined with one another in the bilingual mind, but rather exist as integrated source grammars that are further mitigated by a common, combined grammar (Cook, 2016; Goldrick et al., 2016a,b; Putnam and Klosinski, 2017). Here we conceive such a combined grammar in a parallel, distributed, and gradient architecture implemented in a shared vector-space model that employs compression through routinization and dimensionality reduction. We discuss the emergence of such representations and their function in the minds of bilinguals. This architecture aims to be consistent with empirical results on bilingual cognition and memory representations in computational cognitive architectures. PMID:29354079

  6. Characterization and effectiveness of pay-for-performance in ophthalmology: a systematic review.

    PubMed

    Herbst, Tim; Emmert, Martin

    2017-06-05

    To identify, characterize and compare existing pay-for-performance approaches and their impact on the quality of care and efficiency in ophthalmology. A systematic evidence-based review was conducted. English, French and German written literature published between 2000 and 2015 were searched in the following databases: Medline (via PubMed), NCBI web site, Scopus, Web of Knowledge, Econlit and the Cochrane Library. Empirical as well as descriptive articles were included. Controlled clinical trials, meta-analyses, randomized controlled studies as well as observational studies were included as empirical articles. Systematic characterization of identified pay-for-performance approaches (P4P approaches) was conducted according to the "Model for Implementing and Monitoring Incentives for Quality" (MIMIQ). Methodological quality of empirical articles was assessed according to the Critical Appraisal Skills Programme (CASP) checklists. Overall, 13 relevant articles were included. Eleven articles were descriptive and two articles included empirical analyses. Based on these articles, four different pay-for-performance approaches implemented in the United States were identified. With regard to quality and incentive elements, systematic comparison showed numerous differences between P4P approaches. Empirical studies showed isolated cost or quality effects, while a simultaneous examination of these effects was missing. Research results show that experiences with pay-for-performance approaches in ophthalmology are limited. Identified approaches differ with regard to quality and incentive elements restricting comparability. Two empirical studies are insufficient to draw strong conclusions about the effectiveness and efficiency of these approaches.

  7. A theoretical method for the analysis and design of axisymmetric bodies. [flow distribution and incompressible fluids

    NASA Technical Reports Server (NTRS)

    Beatty, T. D.

    1975-01-01

    A theoretical method is presented for the computation of the flow field about an axisymmetric body operating in a viscous, incompressible fluid. A potential flow method was used to determine the inviscid flow field and to yield the boundary conditions for the boundary layer solutions. Boundary layer effects, in the form of displacement thickness and empirically modeled separation streamlines, are accounted for in subsequent potential flow solutions. This procedure is repeated until the solutions converge. An empirical method was used to determine base drag allowing configuration drag to be computed.
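
    A schematic sketch of the iteration described, with crude stand-ins for the potential-flow and boundary-layer solutions (the real method uses a panel-type inviscid solution and an integral boundary-layer calculation; all closures and constants below are toy assumptions).

      import numpy as np

      x = np.linspace(0.01, 1.0, 100)                 # axial stations along the body
      radius = 0.1 * np.sin(np.pi * x)                # toy body radius distribution

      def potential_flow_edge_velocity(r_eff):
          return 1.0 + 2.0 * np.gradient(r_eff, x)    # toy stand-in for the inviscid solution

      def displacement_thickness(u_e):
          # Blasius-like flat-plate estimate driven by the local edge velocity (toy closure,
          # unit Reynolds number of 1e6 assumed).
          return 1.721 * np.sqrt(x / (u_e * 1.0e6))

      delta_star = np.zeros_like(x)
      for it in range(50):                            # iterate until the viscous/inviscid coupling converges
          u_e = potential_flow_edge_velocity(radius + delta_star)   # inviscid flow over the displaced body
          new_delta = displacement_thickness(u_e)
          if np.max(np.abs(new_delta - delta_star)) < 1e-8:
              break
          delta_star = 0.5 * delta_star + 0.5 * new_delta           # under-relaxation for stability
      print(f"converged after {it + 1} iterations, max displacement thickness {delta_star.max():.4f}")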

  8. Feasibility of an Empirically Based Program for Parents of Preschoolers with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Dababnah, Sarah; Parish, Susan L.

    2016-01-01

    This article reports on the feasibility of implementing an existing empirically based program, "The Incredible Years," tailored to parents of young children with autism spectrum disorder. Parents raising preschool-aged children (aged 3-6 years) with autism spectrum disorder (N = 17) participated in a 15-week pilot trial of the…

  9. Preparation of the implementation plan of AASHTO Mechanistic-Empirical Pavement Design Guide (M-EPDG) in Connecticut : Phase II : expanded sensitivity analysis and validation with pavement management data.

    DOT National Transportation Integrated Search

    2017-02-08

    The study re-evaluates distress prediction models using the Mechanistic-Empirical Pavement Design Guide (MEPDG) and expands the sensitivity analysis to a wide range of pavement structures and soils. In addition, an extensive validation analysis of th...

  10. Study on the leakage flow through a clearance gap between two stationary walls

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Billdal, J. T.; Nielsen, T. K.; Brekke, H.

    2012-11-01

    In the present paper, the leakage flow in the clearance gap between stationary walls was studied experimentally, theoretically and numerically using computational fluid dynamics (CFD) in order to find the relationship between leakage flow, pressure difference and clearance gap. The experimental set-up of the clearance gap between two stationary walls is a simplification of the gap between the guide vane faces and facing plates in Francis turbines. This model was built in the Waterpower laboratory at the Norwegian University of Science and Technology (NTNU). The empirical formula for calculating the leakage flow rate between the two stationary walls was derived from the empirical study. The experimental model was also simulated by computational fluid dynamics, employing the ANSYS CFX commercial software, in order to study the flow structure. Both the numerical simulation results and the empirical formula results are in good agreement with the experimental results. The correctness of the empirical formula is verified by the experimental data, and the formula has proven to be very useful for quickly predicting the leakage flow rate in the guide vanes of hydraulic turbines.
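
    The abstract reports that an empirical formula relating leakage flow rate, pressure difference, and clearance gap was derived, but does not state the formula itself. As a hedged stand-in, the sketch below uses a generic orifice-type relation, Q = Cd * A * sqrt(2 * dp / rho); the discharge coefficient, gap geometry, and fluid properties are hypothetical placeholders, and this is not the formula derived in the paper.

```python
import math

def leakage_flow_rate(gap_height_m, gap_width_m, delta_p_pa,
                      rho=998.0, discharge_coeff=0.7):
    """Generic orifice-style estimate of leakage through a clearance gap.

    Placeholder relation (Q = Cd * A * sqrt(2 * dp / rho)); the paper's own
    empirical formula is not given in the abstract and is not reproduced here.
    """
    area = gap_height_m * gap_width_m
    return discharge_coeff * area * math.sqrt(2.0 * delta_p_pa / rho)

# Example: a 0.3 mm gap over a 100 mm span under a 5 bar pressure difference.
q = leakage_flow_rate(0.3e-3, 0.1, 5e5)
print(f"estimated leakage flow: {q * 1000:.2f} L/s")
```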

  11. Undergraduate Paramedic Students' Attitudes to E-Learning: Findings from Five University Programs

    ERIC Educational Resources Information Center

    Williams, Brett; Boyle, Malcolm; Molloy, Andrew; Brightwell, Richard; Munro, Graham; Service, Melinda; Brown, Ted

    2011-01-01

    Computers and computer-assisted instruction are being used with increasing frequency in the area of undergraduate paramedic education. Paramedic students' attitudes towards the use of e-learning technology and computer-assisted instruction have received limited attention in the empirical literature to date. The objective of this study was to…

  12. A Meta-Analysis of Effectiveness Studies on Computer Technology-Supported Language Learning

    ERIC Educational Resources Information Center

    Grgurovic, Maja; Chapelle, Carol A.; Shelley, Mack C.

    2013-01-01

    With the aim of summarizing years of research comparing pedagogies for second/foreign language teaching supported with computer technology and pedagogy not-supported by computer technology, a meta-analysis was conducted of empirical research investigating language outcomes. Thirty-seven studies yielding 52 effect sizes were included, following a…

  13. Factors Influencing Skilled Use of the Computer Mouse by School-Aged Children

    ERIC Educational Resources Information Center

    Lane, Alison E.; Ziviani, Jenny M.

    2010-01-01

    Effective use of computers in education for children requires consideration of individual and developmental characteristics of users. There is limited empirical evidence, however, to guide educational programming when it comes to children and their acquisition of computing skills. This paper reports on the influence of previous experience and…

  14. An Empirical Measure of Computer Security Strength for Vulnerability Remediation

    ERIC Educational Resources Information Center

    Villegas, Rafael

    2010-01-01

    Remediating all vulnerabilities on computer systems in a timely and cost effective manner is difficult given that the window of time between the announcement of a new vulnerability and an automated attack has decreased. Hence, organizations need to prioritize the vulnerability remediation process on their computer systems. The goal of this…

  15. Implementing an empirical scalar constitutive relation for ice with flow-induced polycrystalline anisotropy in large-scale ice sheet models

    NASA Astrophysics Data System (ADS)

    Graham, Felicity S.; Morlighem, Mathieu; Warner, Roland C.; Treverrow, Adam

    2018-03-01

    The microstructure of polycrystalline ice evolves under prolonged deformation, leading to anisotropic patterns of crystal orientations. The response of this material to applied stresses is not adequately described by the ice flow relation most commonly used in large-scale ice sheet models - the Glen flow relation. We present a preliminary assessment of the implementation in the Ice Sheet System Model (ISSM) of a computationally efficient, empirical, scalar, constitutive relation which addresses the influence of the dynamically steady-state flow-compatible induced anisotropic crystal orientation patterns that develop when ice is subjected to the same stress regime for a prolonged period - sometimes termed tertiary flow. We call this the ESTAR flow relation. The effect on ice flow dynamics is investigated by comparing idealised simulations using ESTAR and Glen flow relations, where we include in the latter an overall flow enhancement factor. For an idealised embayed ice shelf, the Glen flow relation overestimates velocities by up to 17 % when using an enhancement factor equivalent to the maximum value prescribed in the ESTAR relation. Importantly, no single Glen enhancement factor can accurately capture the spatial variations in flow across the ice shelf generated by the ESTAR flow relation. For flow line studies of idealised grounded flow over varying topography or variable basal friction - both scenarios dominated at depth by bed-parallel shear - the differences between simulated velocities using ESTAR and Glen flow relations depend on the value of the enhancement factor used to calibrate the Glen flow relation. These results demonstrate the importance of describing the deformation of anisotropic ice in a physically realistic manner, and have implications for simulations of ice sheet evolution used to reconstruct paleo-ice sheet extent and predict future ice sheet contributions to sea level.
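
    For context on the comparison above, a minimal sketch of the Glen flow relation with an overall enhancement factor (the baseline against which ESTAR is compared) is shown below. The rate factor and stress values are typical illustrative numbers only, and the ESTAR relation itself is not reproduced here.

```python
import numpy as np

def glen_effective_strain_rate(tau_e_kpa, enhancement=1.0,
                               rate_factor=2.4e-24, n=3.0):
    """Glen flow relation with an overall enhancement factor E:
        effective strain rate = E * A * tau_e**n
    where A is the rate factor (Pa^-3 s^-1, a typical temperate-ice value is
    used by default) and tau_e is the effective deviatoric stress."""
    tau_e_pa = np.asarray(tau_e_kpa) * 1.0e3
    return enhancement * rate_factor * tau_e_pa ** n

# Effective strain rates (s^-1) for a range of stresses, with and without an
# overall enhancement factor in the spirit of the calibration discussed above.
tau = np.array([50.0, 100.0, 150.0])    # kPa
print(glen_effective_strain_rate(tau, enhancement=1.0))
print(glen_effective_strain_rate(tau, enhancement=3.0))
```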

  16. Linking customisation of ERP systems to support effort: an empirical study

    NASA Astrophysics Data System (ADS)

    Koch, Stefan; Mitteregger, Kurt

    2016-01-01

    The amount of customisation to an enterprise resource planning (ERP) system has always been a major concern in the context of the implementation. This article focuses on the phase of maintenance and presents an empirical study about the relationship between the amount of customising and the resulting support effort. We establish a structural equation modelling model that explains support effort using customisation effort, organisational characteristics and scope of implementation. The findings using data from an ERP provider show that there is a statistically significant effect: with an increasing amount of customisation, the quantity of telephone calls to support increases, as well as the duration of each call.

  17. Delivering patient decision aids on the Internet: definitions, theories, current evidence, and emerging research areas

    PubMed Central

    2013-01-01

    Background In 2005, the International Patient Decision Aids Standards Collaboration identified twelve quality dimensions to guide assessment of patient decision aids. One dimension—the delivery of patient decision aids on the Internet—is relevant when the Internet is used to provide some or all components of a patient decision aid. Building on the original background chapter, this paper provides an updated definition for this dimension, outlines a theoretical rationale, describes current evidence, and discusses emerging research areas. Methods An international, multidisciplinary panel of authors examined the relevant theoretical literature and empirical evidence through 2012. Results The updated definition distinguishes Internet-delivery of patient decision aids from online health information and clinical practice guidelines. Theories in cognitive psychology, decision psychology, communication, and education support the value of Internet features for providing interactive information and deliberative support. Dissemination and implementation theories support Internet-delivery for providing the right information (rapidly updated), to the right person (tailored), at the right time (the appropriate point in the decision making process). Additional efforts are needed to integrate the theoretical rationale and empirical evidence from health technology perspectives, such as consumer health informatics, user experience design, and human-computer interaction. Despite Internet usage ranging from 74% to 85% in developed countries and 80% of users searching for health information, it is unknown how many individuals specifically seek patient decision aids on the Internet. Among the 86 randomized controlled trials in the 2011 Cochrane Collaboration’s review of patient decision aids, only four studies focused on Internet-delivery. Given the limited number of published studies, this paper particularly focused on identifying gaps in the empirical evidence base and identifying emerging areas of research. Conclusions As of 2012, the updated theoretical rationale and emerging evidence suggest potential benefits to delivering patient decision aids on the Internet. However, additional research is needed to identify best practices and quality metrics for Internet-based development, evaluation, and dissemination, particularly in the areas of interactivity, multimedia components, socially-generated information, and implementation strategies. PMID:24625064

  18. Empirical knowledge engine of local governance Senegalese artisanal fisheries

    NASA Astrophysics Data System (ADS)

    Mbaye, A.

    2016-02-01

    Fishery resources have long been subject to administrative management premised on the supposed irrationality of artisanal fishermen, and the state has always held a monopoly over such management. These well-established state rules, synonymous with a denial of local populations' knowledge of management and with the expropriation of their fishing territories, came into conflict with existing local rules, thereby weakening the traditional management system. However, aware of the threats to their survival posed by the limitations of state rules and by a technicist view of management, some fishing communities have tried to organize themselves and implement management measures. These measures are implemented on the basis of their own knowledge of the environment. This is the case in Kayar, Nianing, and Bétenty, where local management initiatives have begun to bear fruit despite some difficulties. These examples of successful local management have prompted the Senegalese administration to give more consideration to the knowledge and know-how of fishermen and to be open to co-management of the fishery resource. This communication shows how this new co-management approach is implemented in the governance of Senegalese artisanal fisheries through the consideration of fishermen's empirical knowledge.

  19. Sensitivity of ab Initio vs Empirical Methods in Computing Structural Effects on NMR Chemical Shifts for the Example of Peptides.

    PubMed

    Sumowski, Chris Vanessa; Hanni, Matti; Schweizer, Sabine; Ochsenfeld, Christian

    2014-01-14

    The structural sensitivity of NMR chemical shifts as computed by quantum chemical methods is compared to a variety of empirical approaches for the example of a prototypical peptide, the 38-residue kaliotoxin KTX comprising 573 atoms. Despite the simplicity of empirical chemical shift prediction programs, the agreement with experimental results is rather good, underlining their usefulness. However, we show in our present work that they are highly insensitive to structural changes, which renders their use for validating predicted structures questionable. In contrast, quantum chemical methods show the expected high sensitivity to structural and electronic changes. This appears to be independent of the quantum chemical approach or the inclusion of solvent effects. For the latter, explicit solvent simulations with increasing number of snapshots were performed for two conformers of an eight amino acid sequence. In conclusion, the empirical approaches neither provide the expected magnitude nor the patterns of NMR chemical shifts determined by the clearly more costly ab initio methods upon structural changes. This restricts the use of empirical prediction programs in studies where peptide and protein structures are utilized for the NMR chemical shift evaluation such as in NMR refinement processes, structural model verifications, or calculations of NMR nuclear spin relaxation rates.

  20. Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.

    PubMed

    Lee, Won Hee; Bullmore, Ed; Frangou, Sophia

    2017-02-01

    There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
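
    A minimal sketch of the type of simulation described, Kuramoto phase oscillators coupled through an anatomical connectivity matrix, with simulated functional connectivity taken as pairwise phase coherence, is given below. The connectivity matrix, coupling strength, and natural frequencies are synthetic placeholders rather than the empirical diffusion-spectrum data used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 66                          # anatomical regions (nodes), as in the abstract
C = rng.random((N, N))          # placeholder structural connectivity
C = (C + C.T) / 2.0
np.fill_diagonal(C, 0.0)

omega = 2.0 * np.pi * rng.normal(60.0, 1.0, N)   # natural frequencies (rad/s), illustrative
K, dt, steps = 5.0, 1e-4, 50000
theta = rng.uniform(0.0, 2.0 * np.pi, N)

plv_acc = np.zeros((N, N), dtype=complex)
for _ in range(steps):
    # Kuramoto update: dtheta_i/dt = omega_i + K * sum_j C_ij * sin(theta_j - theta_i)
    coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)
    z = np.exp(1j * theta)
    plv_acc += np.outer(z, np.conj(z))     # accumulate pairwise phase coherence

fc_sim = np.abs(plv_acc) / steps           # simulated functional connectivity (66 x 66)
print(fc_sim.shape, float(fc_sim.mean()))
```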

  1. Health Professionals' readiness to implement electronic medical record system at three hospitals in Ethiopia: a cross sectional study.

    PubMed

    Biruk, Senafekesh; Yilma, Tesfahun; Andualem, Mulusew; Tilahun, Binyam

    2014-12-12

    Electronic medical record systems are being implemented in many countries to support healthcare services. However, their adoption rate remains low, especially in developing countries, due to technological, financial, and organizational factors. There is a lack of solid evidence and empirical research regarding the pre-implementation readiness of healthcare providers. The aim of this study is to assess health professionals' readiness and to identify factors that affect the acceptance and use of an electronic medical record system in the pre-implementation phase at hospitals of North Gondar Zone, Ethiopia. An institution-based cross-sectional quantitative study was conducted on 606 study participants from January to July 2013 at 3 hospitals in northwest Ethiopia. A pretested self-administered questionnaire was used to collect the required data. The data were entered using the Epi-Info version 3.5.1 software and analyzed using SPSS version 16 software. Descriptive statistics and bivariate and multivariate logistic regression analyses were used to describe the study objectives and assess the determinants of health professionals' readiness for the system. Odds ratios at 95% CI were used to describe the association between the study and the outcome variables. Out of 606 study participants, only 328 (54.1%) were found ready to use the electronic medical record system according to our assessment criteria. The majority of the study participants, 432 (71.3%) and 331 (54.6%), had good knowledge and attitude towards the EMR system, respectively. Gender (AOR = 1.87, 95% CI: [1.26, 2.78]), attitude (AOR = 1.56, 95% CI: [1.03, 2.49]), knowledge (AOR = 2.12, 95% CI: [1.32, 3.56]), and computer literacy (AOR = 1.64, 95% CI: [0.99, 2.68]) were significantly associated with readiness for the EMR system. In this study, the overall health professionals' readiness for the electronic medical record system and its utilization were 54.1% and 46.5%, respectively. Gender, knowledge, attitude, and computer-related skills were the determinants of the relatively low readiness for and utilization of the system. Increasing the awareness, knowledge, and skills of healthcare professionals regarding the EMR system before system implementation is necessary to increase its adoption.
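
    The adjusted odds ratios (AORs) quoted above are the kind of quantity produced by a multivariable logistic regression. A minimal sketch on synthetic data is shown below using statsmodels; the variable names, coefficients, and data are invented placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 606                                    # same sample size as the study; data are synthetic

df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "good_knowledge": rng.integers(0, 2, n),
    "good_attitude": rng.integers(0, 2, n),
    "computer_literate": rng.integers(0, 2, n),
})
# Synthetic readiness outcome loosely driven by the predictors.
lin = (-0.8 + 0.6 * df.male + 0.75 * df.good_knowledge
       + 0.45 * df.good_attitude + 0.5 * df.computer_literate)
df["ready"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["male", "good_knowledge", "good_attitude", "computer_literate"]])
fit = sm.Logit(df["ready"], X).fit(disp=False)

aor = np.exp(fit.params)                   # adjusted odds ratios
ci = np.exp(fit.conf_int())                # 95% confidence intervals
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```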

  2. An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei

    2006-01-01

    The objective of this paper is to define empirical parameters for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two codes currently used at Glenn Research Center for Stirling modeling are Fluent and CFD-ACE. The codes' porous-media models are equilibrium models, which assume the solid matrix and fluid are in thermal equilibrium. This is believed to be a poor assumption for Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, use non-equilibrium regenerator models and suggest that regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. Experimentally based information was used to define: hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity, and fluid-solid heat transfer coefficient. Solid effective thermal conductivity was also estimated. Determination of model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Converter (TDC), which uses a random-fiber regenerator matrix. Emphasis is on the use of available data to define the empirical parameters needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates.

  3. Measuring target detection performance in paradigms with high event rates.

    PubMed

    Bendixen, Alexandra; Andersen, Søren K

    2013-05-01

    Combining behavioral and neurophysiological measurements inevitably implies mutual constraints, such as when the neurophysiological measurement requires fast-paced stimulus presentation and hence the attribution of a behavioral response to a particular preceding stimulus becomes ambiguous. We develop and test a method for validly assessing behavioral detection performance in spite of this ambiguity. We examine four approaches taken in the literature to treat such situations. We analytically derive a new variant of computing the classical parameters of signal detection theory, hit and false alarm rates, adapted to fast-paced paradigms. Each of the previous approaches shows specific shortcomings (susceptibility towards response window choice, biased estimates of behavioral detection performance). Superior performance of our new approach is demonstrated for both simulated and empirical behavioral data. Further evidence is provided by reliable correspondence between behavioral performance and the N2b component as an electrophysiological indicator of target detection. The appropriateness of our approach is substantiated by both theoretical and empirical arguments. We demonstrate an easy-to-implement solution for measuring target detection performance independent of the rate of event presentation. Thus overcoming the measurement bias of previous approaches, our method will help to clarify the behavioral relevance of different measures of cortical activation. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
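
    As a simple illustration of the classical signal-detection quantities the authors adapt, the sketch below assigns each response to the most recent event whose response window contains it and then computes hit rate, false-alarm rate, and d'. The window bounds, event stream, and responses are invented for the example; the specific variant derived in the paper is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def detection_stats(event_times, is_target, response_times, window=(0.2, 1.0)):
    """Assign each response to the most recent event whose response window
    contains it, then compute hit rate, false-alarm rate, and d-prime.
    Simple window-based attribution, not the paper's derivation."""
    credited = np.zeros(len(event_times), dtype=bool)
    for r in response_times:
        lags = r - event_times
        candidates = np.where((lags >= window[0]) & (lags <= window[1]))[0]
        if len(candidates):
            credited[candidates[-1]] = True      # credit the most recent eligible event
    n_targets = is_target.sum()
    n_nontargets = (~is_target).sum()
    hit_rate = credited[is_target].sum() / n_targets
    fa_rate = credited[~is_target].sum() / n_nontargets
    # Clip to avoid infinite z-scores at rates of exactly 0 or 1.
    hit_rate = np.clip(hit_rate, 0.5 / n_targets, 1 - 0.5 / n_targets)
    fa_rate = np.clip(fa_rate, 0.5 / n_nontargets, 1 - 0.5 / n_nontargets)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    return hit_rate, fa_rate, d_prime

# Toy fast-paced stream: events every 300 ms, about 20% targets, a few responses.
events = np.arange(0.0, 30.0, 0.3)
targets = np.random.default_rng(3).random(len(events)) < 0.2
responses = events[targets][:15] + 0.45          # simulated responses to some targets
print(detection_stats(events, targets, responses))
```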

  4. MrBayes tgMC3++: A High Performance and Resource-Efficient GPU-Oriented Phylogenetic Analysis Method.

    PubMed

    Ling, Cheng; Hamada, Tsuyoshi; Gao, Jingyang; Zhao, Guoguang; Sun, Donghong; Shi, Weifeng

    2016-01-01

    MrBayes is a widespread phylogenetic inference tool harnessing empirical evolutionary models and Bayesian statistics. However, the computational cost of the likelihood estimation is very high, resulting in undesirably long execution times. Although a number of multi-threaded optimizations have been proposed to speed up MrBayes, there are bottlenecks that severely limit the GPU thread-level parallelism of likelihood estimations. This study proposes a high performance and resource-efficient method for GPU-oriented parallelization of likelihood estimations. Instead of having to rely on empirical programming, the proposed novel decomposition storage model implements high performance data transfers implicitly. In terms of performance improvement, a speedup factor of up to 178 can be achieved on the analysis of simulated datasets by four Tesla K40 cards. In comparison to the other publicly available GPU-oriented MrBayes implementations, the tgMC3++ method (proposed herein) outperforms the tgMC3 (v1.0), nMC3 (v2.1.1) and oMC3 (v1.00) methods by speedup factors of up to 1.6, 1.9 and 2.9, respectively. Moreover, tgMC3++ supports more evolutionary models and gamma categories, which previous GPU-oriented methods fail to support.

  5. IMPACT: a generic tool for modelling and simulating public health policy.

    PubMed

    Ainsworth, J D; Carruthers, E; Couch, P; Green, N; O'Flaherty, M; Sperrin, M; Williams, R; Asghar, Z; Capewell, S; Buchan, I E

    2011-01-01

    Populations are under-served by local health policies and management of resources. This partly reflects a lack of realistically complex models to enable appraisal of a wide range of potential options. Rising computing power coupled with advances in machine learning and healthcare information now enables such models to be constructed and executed. However, such models are not generally accessible to public health practitioners who often lack the requisite technical knowledge or skills. To design and develop a system for creating, executing and analysing the results of simulated public health and healthcare policy interventions, in ways that are accessible and usable by modellers and policy-makers. The system requirements were captured and analysed in parallel with the statistical method development for the simulation engine. From the resulting software requirement specification the system architecture was designed, implemented and tested. A model for Coronary Heart Disease (CHD) was created and validated against empirical data. The system was successfully used to create and validate the CHD model. The initial validation results show concordance between the simulation results and the empirical data. We have demonstrated the ability to connect health policy-modellers and policy-makers in a unified system, thereby making population health models easier to share, maintain, reuse and deploy.

  6. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
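
    A rough sketch of the domain-iteration idea described above, alternating filtered-backprojection reconstructions (the convolution method) with re-imposing the measured projections where they exist, is given below using scikit-image's radon/iradon. The phantom, blocked-angle mask, and fixed iteration count are illustrative; the paper's empirically justified termination criterion is not implemented.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Ground-truth object and its full sinogram.
image = rescale(shepp_logan_phantom(), 0.25)
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sino_full = radon(image, theta=theta, circle=True)

# Simulate incomplete projections: an opaque object blocks a wedge of angles.
blocked = (theta > 60) & (theta < 120)
sino_meas = sino_full.copy()
sino_meas[:, blocked] = 0.0

# Iterate between object and projection domains, reconstructing by filtered
# backprojection (the convolution method) at each pass and re-inserting the
# measured projections wherever they are available.
recon = iradon(sino_meas, theta=theta, circle=True)
for _ in range(10):
    sino_est = radon(np.clip(recon, 0, None), theta=theta, circle=True)
    sino_est[:, ~blocked] = sino_meas[:, ~blocked]    # keep the measured data
    recon = iradon(sino_est, theta=theta, circle=True)

print(recon.shape, float(np.abs(recon - image).mean()))
```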

  7. On the Hilbert-Huang Transform Data Processing System Development

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Huang, Norden E.; Cornwell, Evette; Smith, Darell

    2003-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The Fourier view of nonlinear mechanics that had existed for a long time, and the associated FFT (a fairly recent development), carry strong a-priori assumptions about the source data, such as linearity and stationarity. Natural phenomena measurements are essentially nonlinear and nonstationary. A very recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectrum analysis problems. Using the Empirical Mode Decomposition (EMD) followed by the Hilbert Transform (HT) of the empirical decomposition data, the HHT allows spectrum analysis of nonlinear and nonstationary data by using an engineering a-posteriori data processing approach, based on the EMD algorithm. This results in a non-constrained decomposition of a source real value data vector into a finite set of Intrinsic Mode Functions (IMF) that can be further analyzed for spectrum interpretation by the classical Hilbert Transform. This paper describes phase one of the development of a new engineering tool, the HHT Data Processing System (HHTDPS). The HHTDPS allows applying the HHT to a data vector in a fashion similar to the heritage FFT. It is a generic, low cost, high performance personal computer (PC) based system that implements the HHT computational algorithms in a user friendly, file driven environment. This paper also presents a quantitative analysis for a complex waveform data sample, a summary of technology commercialization efforts and the lessons learned from this new technology development.
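
    The sketch below illustrates the two HHT stages the abstract describes: a much-simplified sifting loop that extracts one intrinsic mode function, followed by a Hilbert transform giving instantaneous amplitude and frequency. Boundary handling, stopping criteria, and extraction of the full IMF set are omitted, so this is only a toy illustration, not the HHTDPS implementation.

```python
import numpy as np
from scipy.signal import argrelextrema, hilbert
from scipy.interpolate import CubicSpline

def sift_once(x, t, n_sift=12):
    """Crude sifting: repeatedly subtract the mean of the upper and lower
    envelopes (cubic splines through local extrema) to obtain one candidate
    intrinsic mode function. End effects are ignored."""
    h = x.astype(float).copy()
    for _ in range(n_sift):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h -= (upper + lower) / 2.0
    return h

# Toy nonstationary signal: a chirp riding on a slow oscillation.
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * (5.0 + 10.0 * t) * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)

imf1 = sift_once(x, t)
residue = x - imf1

# Hilbert spectral analysis of the extracted IMF.
analytic = hilbert(imf1)
amplitude = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2.0 * np.pi)
print(imf1.shape, float(amplitude.mean()), float(inst_freq.mean()))
```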

  8. Modelling soil erosion at European scale: towards harmonization and reproducibility

    NASA Astrophysics Data System (ADS)

    Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.

    2015-02-01

    Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite the efforts, the prediction value of existing models is still limited, especially at regional and continental scale, because a systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is here proposed. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for a better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
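
    For reference, the extended RUSLE structure the authors build on is multiplicative. A minimal sketch is shown below in which the additional stoniness correction appears as one more multiplicative factor; the factor values are placeholders and the climatic ensemble treatment of rainfall erosivity is not reproduced.

```python
def rusle_soil_loss(R, K, LS, C, P, stoniness=1.0):
    """RUSLE-style soil loss estimate (t ha^-1 yr^-1):
        A = R * K * LS * C * P * St
    where St is an additional stoniness correction in the spirit of the
    extended model described above (1.0 = no correction).
    Units: R in MJ mm ha^-1 h^-1 yr^-1, K in t ha h ha^-1 MJ^-1 mm^-1;
    LS, C, P, St are dimensionless."""
    return R * K * LS * C * P * stoniness

# Illustrative cell: moderate erosivity, erodible soil, gentle slope, cropland.
A = rusle_soil_loss(R=700.0, K=0.032, LS=1.2, C=0.20, P=1.0, stoniness=0.85)
print(f"estimated soil loss: {A:.2f} t/ha/yr")
```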

  9. Attention Demands of Spoken Word Planning: A Review

    PubMed Central

    Roelofs, Ardi; Piai, Vitória

    2011-01-01

    Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot proceed without paying some form of attention. Here, we review evidence that word planning requires some but not full attention. The evidence comes from chronometric studies of word planning in picture naming and word reading under divided attention conditions. It is generally assumed that the central attention demands of a process are indexed by the extent that the process delays the performance of a concurrent unrelated task. The studies measured the speed and accuracy of linguistic and non-linguistic responding as well as eye gaze durations reflecting the allocation of attention. First, empirical evidence indicates that in several task situations, processes up to and including phonological encoding in word planning delay, or are delayed by, the performance of concurrent unrelated non-linguistic tasks. These findings suggest that word planning requires central attention. Second, empirical evidence indicates that conflicts in word planning may be resolved while concurrently performing an unrelated non-linguistic task, making a task decision, or making a go/no-go decision. These findings suggest that word planning does not require full central attention. We outline a computationally implemented theory of attention and word planning, and describe at various points the outcomes of computer simulations that demonstrate the utility of the theory in accounting for the key findings. Finally, we indicate how attention deficits may contribute to impaired language performance, such as in individuals with specific language impairment. PMID:22069393

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, A.; Sengupta, M.; Wilcox, S.

    Models to compute Global Horizontal Irradiance (GHI) and Direct Normal Irradiance (DNI) have been in development over the last 3 decades. These models can be classified as empirical or physical, based on the approach. Empirical models relate ground based observations with satellite measurements and use these relations to compute surface radiation. Physical models consider the radiation received from the earth at the satellite and create retrievals to estimate surface radiation. While empirical methods have traditionally been used for computing surface radiation for the solar energy industry, the advent of faster computing has made operational physical models viable. The Global Solar Insolation Project (GSIP) is an operational physical model from NOAA that computes GHI using the visible and infrared channel measurements from the GOES satellites. GSIP uses a two-stage scheme that first retrieves cloud properties and uses those properties in a radiative transfer model to calculate surface radiation. NREL, the University of Wisconsin and NOAA have recently collaborated to adapt GSIP to create a 4 km GHI and DNI product every 30 minutes. This paper presents an outline of the methodology and a comprehensive validation using high quality ground based solar data from the National Oceanic and Atmospheric Administration (NOAA) Surface Radiation (SURFRAD) (http://www.srrb.noaa.gov/surfrad/sitepage.html) and Integrated Surface Insolation Study (ISIS) (http://www.srrb.noaa.gov/isis/isissites.html) networks, the Solar Radiation Research Laboratory (SRRL) at the National Renewable Energy Laboratory (NREL), and Sun Spot One (SS1) stations.

  11. Email networks and the spread of computer viruses

    NASA Astrophysics Data System (ADS)

    Newman, M. E.; Forrest, Stephanie; Balthrop, Justin

    2002-09-01

    Many computer viruses spread via electronic mail, making use of computer users' email address books as a source for email addresses of new victims. These address books form a directed social network of connections between individuals over which the virus spreads. Here we investigate empirically the structure of this network using data drawn from a large computer installation, and discuss the implications of this structure for the understanding and prevention of computer virus epidemics.
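
    As a small illustration of the kind of structure and spreading process described above, the sketch below builds a directed "address book" network and runs a naive susceptible-infected spread over it using networkx. The network is synthetic (the study analyzes an empirical address-book graph), so none of the numbers correspond to the paper's results.

```python
import random
import networkx as nx

random.seed(4)

# Synthetic directed address-book network (the study uses empirical data).
G = nx.gnp_random_graph(2000, 0.003, seed=4, directed=True)

# Naive SI-style spread: an infected user "emails" the virus to everyone in
# their address book (out-neighbours) each round, opened with probability p_open.
infected = {0}
p_open = 0.3
for _ in range(10):
    newly = set()
    for u in infected:
        for v in G.successors(u):
            if v not in infected and random.random() < p_open:
                newly.add(v)
    infected |= newly

print(f"infected after 10 rounds: {len(infected)} of {G.number_of_nodes()}")
print("mean out-degree:", sum(d for _, d in G.out_degree()) / G.number_of_nodes())
```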

  12. A Praxeological Perspective for the Design and Implementation of a Digital Role-Play Game

    ERIC Educational Resources Information Center

    Sanchez, Eric; Monod-Ansaldi, Réjane; Vincent, Caroline; Safadi-Katouzian, Sina

    2017-01-01

    This paper draws on an empirical work dedicated to discussing a theoretical model for design-based research. The context of our study is a research project for the design, the implementation and the analysis of Insectophagia, a digital role-play game implemented in secondary schools. The model presented in this paper aims at conceptualizing…

  13. An Empirical Examination of Fear Appeal's Effect on Behavioral Intention to Comply with Anti-Spyware Software Information Security Recommendations among College Students

    ERIC Educational Resources Information Center

    Brown, David A.

    2017-01-01

    Information security is a concern for managers implementing protection measures. Implementing information security measures requires communicating both the reason and remediation for the protection measure. Examining how an anti-spyware security communication affects an individual's intention to implement a protection measure could help improve…

  14. The feasibility of implementing cognitive remediation for work in community based psychiatric rehabilitation programs.

    PubMed

    McGurk, Susan R; Mueser, Kim T; Watkins, Melanie A; Dalton, Carline M; Deutsch, Heather

    2017-03-01

    Adding cognitive remediation to vocational rehabilitation services improves cognitive and work functioning in people with serious mental illness, but despite interest, the uptake of cognitive programs into community services has been slow. This study evaluated the feasibility of implementing an empirically supported cognitive remediation program in routine rehabilitation services at 2 sites. The Thinking Skills for Work (TSW) program was adapted for implementation at 2 sites of a large psychiatric rehabilitation agency providing prevocational services, but not community-based vocational services, which were provided off-site. Agency staff were trained to deliver TSW to clients with work or educational goals. Cognitive assessments were conducted at baseline and posttreatment, with work and school activity tracked for 2 years. Eighty-three participants enrolled in TSW, of whom 79.5% completed at least 6 of the 24 computer cognitive exercise sessions (M = 16.7) over an average of 18 weeks. Participants improved significantly from baseline to posttreatment in verbal learning and memory, speed of processing, and overall cognitive functioning. Over the follow-up, 25.3% of participants worked and 47.0% were involved in work or school activity. Higher work rates were observed at the site where participants had easier access to vocational services. The results support the feasibility of implementing the TSW program by frontline staff in agencies providing psychiatric rehabilitation, and suggest that ease of access to vocational services may influence work outcomes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Integrated primary care: an inclusive three-world view through process metrics and empirical discrimination.

    PubMed

    Miller, Benjamin F; Mendenhall, Tai J; Malik, Alan D

    2009-03-01

    Integrating behavioral health services within the primary care setting drives higher levels of collaborative care, and is proving to be an essential part of the solution for our struggling American healthcare system. However, justification for implementing and sustaining integrated and collaborative care has shown to be a formidable task. In an attempt to move beyond conflicting terminology found in the literature, we delineate terms and suggest a standardized nomenclature. Further, we maintain that addressing the three principal worlds of healthcare (clinical, operational, financial) is requisite in making sense of the spectrum of available implementations and ultimately transitioning collaborative care into the mainstream. Using a model that deconstructs process metrics into factors/barriers and generalizes behavioral health provider roles into major categories provides a framework to empirically discriminate between implementations across specific settings. This approach offers practical guidelines for care sites implementing integrated and collaborative care and defines a research framework to produce the evidence required for the aforementioned clinical, operational and financial worlds of this important movement.

  16. THE CHROMOSPHERIC SOLAR LIMB BRIGHTENING AT RADIO, MILLIMETER, SUB-MILLIMETER, AND INFRARED WAVELENGTHS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De la Luz, V.

    2016-07-10

    Observations of the emission at radio, millimeter, sub-millimeter, and infrared wavelengths in the center of the solar disk validate the self-consistency of semi-empirical models of the chromosphere. Theoretically, these models must reproduce the emission at the solar limb. In this work, we tested both the VALC and C7 semi-empirical models by computing their emission spectrum in the frequency range from 2 GHz to 10 THz at solar limb altitudes. We calculate the Sun's theoretical radii as well as their limb brightening. Non-local thermodynamic equilibrium was computed for hydrogen, electron density, and H−. In order to solve the radiative transfer equation, a three-dimensional (3D) geometry was employed to determine the ray paths, and Bremsstrahlung, H−, and inverse Bremsstrahlung opacity sources were integrated in the optical depth. We compared the computed solar radii with high-resolution observations at the limb obtained by Clark. We found that there are differences between the observed and computed solar radii of 12,000 km at 20 GHz, 5000 km at 100 GHz, and 1000 km at 3 THz for both semi-empirical models. A difference of 8000 km in the solar radii was found when comparing our results against the heights obtained from H α observations of spicules-off at the solar limb. We conclude that the solar radii cannot be reproduced by the VALC and C7 semi-empirical models at radio to infrared wavelengths. Therefore, the structures in the high chromosphere provide a better measurement of the solar radii and their limb brightening, as shown in previous investigations.

  17. Health status and health dynamics in an empirical model of expected longevity.

    PubMed

    Benítez-Silva, Hugo; Ni, Huan

    2008-05-01

    Expected longevity is an important factor influencing older individuals' decisions such as consumption, savings, purchase of life insurance and annuities, claiming of Social Security benefits, and labor supply. It has also been shown to be a good predictor of actual longevity, which in turn is highly correlated with health status. A relatively new literature on health investments under uncertainty, which builds upon the seminal work by Grossman [Grossman, M., 1972. On the concept of health capital and demand for health. Journal of Political Economy 80, 223-255] has directly linked longevity with characteristics, behaviors, and decisions by utility maximizing agents. Our empirical model can be understood within that theoretical framework as estimating a production function of longevity. Using longitudinal data from the Health and Retirement Study, we directly incorporate health dynamics in explaining the variation in expected longevities, and compare two alternative measures of health dynamics: the self-reported health change, and the computed health change based on self-reports of health status. In 38% of the reports in our sample, computed health changes are inconsistent with the direct report on health changes over time. And another 15% of the sample can suffer from information losses if computed changes are used to assess changes in actual health. These potentially serious problems raise doubts regarding the use and interpretation of the computed health changes and even the lagged measures of self-reported health as controls for health dynamics in a variety of empirical settings. Our empirical results, controlling for both subjective and objective measures of health status and unobserved heterogeneity in reporting, suggest that self-reported health changes are a preferred measure of health dynamics.

  18. Low-Order Modeling of Dynamic Stall on Airfoils in Incompressible Flow

    NASA Astrophysics Data System (ADS)

    Narsipur, Shreyas

    Unsteady aerodynamics has been a topic of research since the late 1930s and has increased in popularity among researchers studying dynamic stall in helicopters, insect/bird flight, micro air vehicles, wind-turbine aerodynamics, and flow-energy harvesting devices. Several experimental and computational studies have helped researchers gain a good understanding of the unsteady flow phenomena, but have proved to be expensive and time-intensive for rapid design and analysis purposes. Since the early 1970s, the push to develop low-order models to solve unsteady flow problems has resulted in several semi-empirical models capable of effectively analyzing unsteady aerodynamics in a fraction of the time required by high-order methods. However, due to the various complexities associated with time-dependent flows, several empirical constants and curve fits derived from existing experimental and computational results are required by the semi-empirical models to be an effective analysis tool. The aim of the current work is to develop a low-order model capable of simulating incompressible dynamic-stall type flow problems with a focus on accurately modeling the unsteady flow physics, with the aim of reducing empirical dependencies. The lumped-vortex-element (LVE) algorithm is used as the baseline unsteady inviscid model to which augmentations are applied to model unsteady viscous effects. The current research is divided into two phases. The first phase focused on augmentations aimed at modeling pure unsteady trailing-edge boundary-layer separation and stall without leading-edge vortex (LEV) formation. The second phase is targeted at adding LEV shedding capabilities to the LVE algorithm and combining them with the trailing-edge separation model from phase one to realize a holistic, optimized, and robust low-order dynamic stall model. In phase one, initial augmentations to theory were focused on modeling the effects of steady trailing-edge separation by implementing a non-linear decambering flap to model the effect of the separated boundary layer. Unsteady RANS results for several pitch and plunge motions showed that the differences in aerodynamic loads between steady and unsteady flows can be attributed to the boundary-layer convection lag, which can be modeled by choosing an appropriate value of the time lag parameter, tau2. In order to provide appropriate viscous corrections to inviscid unsteady calculations, the non-linear decambering flap is applied with a time lag determined by the tau2 value, which was found to be independent of motion kinematics for a given airfoil and Reynolds number. The predictions of the aerodynamic loads, unsteady stall, hysteresis loops, and flow reattachment from the low-order model agree well with CFD and experimental results, both for individual cases and for trends between motions. The model was also found to perform as well as existing semi-empirical models while using only a single empirically defined parameter. Inclusion of LEV shedding capabilities and combining the resulting algorithm with phase one's trailing-edge separation model was the primary objective of phase two. Computational results at low and high Reynolds numbers were used to analyze the flow morphology of the LEV, to identify the common surface signature associated with LEV initiation at both low and high Reynolds numbers, and to relate it to the critical leading-edge suction parameter (LESP) used to control the initiation and termination of LEV shedding in the low-order model. The critical LESP, like the tau2 parameter, was found to be independent of motion kinematics for a given airfoil and Reynolds number. Results from the final low-order model compared excellently with CFD and experimental solutions, both in terms of aerodynamic loads and vortex flow pattern predictions. Overall, the final combined dynamic stall model that resulted from the current research was successful in accurately modeling the physics of unsteady flow, thereby helping restrict the number of empirical coefficients to just two variables while successfully modeling the aerodynamic forces and flow patterns in a simple and precise manner.

  19. Teaching Mathematics: Computers in the Classroom.

    ERIC Educational Resources Information Center

    Borba, Marcelo C.

    1995-01-01

    Discusses some major changes that computers, calculators, and graphing calculators have brought to the mathematics classroom, including quasi-empirical studies in the classroom, use of multiple representations, emphasis on visualization, emphasis on tables, an altered classroom "ecology," and increasing complexity for students. (SR)

  20. Real-time video streaming in mobile cloud over heterogeneous wireless networks

    NASA Astrophysics Data System (ADS)

    Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos

    2012-06-01

    Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising of a test-bed and an emulator, on which our concept of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable continuous streaming experience for a mobile user across the heterogeneous wireless network. Real-time video stream packets are captured for analytical purposes on the mobile user node. Experimental results are obtained and analysed. Future work is identified towards further improvement of the current design and implementation. With this new mobile video networking concept and paradigm implemented and evaluated, results and observations obtained from this study would form the basis of a more in-depth, comprehensive understanding of various challenges and opportunities in supporting high-quality real-time video streaming in mobile cloud over heterogeneous wireless networks.

  1. A Model of Therapist Competencies for the Empirically Supported Cognitive Behavioral Treatment of Child and Adolescent Anxiety and Depressive Disorders

    ERIC Educational Resources Information Center

    Sburlati, Elizabeth S.; Schniering, Carolyn A.; Lyneham, Heidi J.; Rapee, Ronald M.

    2011-01-01

    While a plethora of cognitive behavioral empirically supported treatments (ESTs) are available for treating child and adolescent anxiety and depressive disorders, research has shown that these are not as effective when implemented in routine practice settings. Research is now indicating that this is partly due to ineffective EST training methods,…

  2. Implementing Geographical Key Concepts: Design of a Symbiotic Teacher Training Course Based on Empirical and Theoretical Evidence

    ERIC Educational Resources Information Center

    Fögele, Janis; Mehren, Rainer

    2015-01-01

    A central desideratum for the professionalization of qualified teachers is an improved practice of further teacher education. The present work constitutes a course of in-service training, which is built upon both a review of empirical findings concerning the efficacy of in-service training courses for teachers and theoretical assumptions about the…

  3. Issues in Implementation of Coeducation in Turkish Education System: A Historical Research on 1869 Statute on General Education

    ERIC Educational Resources Information Center

    Kamer, Selman Tunay

    2017-01-01

    Though the Imperial Edict of Gülhane, which is regarded as the real beginning of modernization in the Ottoman Empire, does not contain any direct article on education, "Tanzimat" (Reorganization of the Ottoman Empire) and the process following it directly affected the education system in the country. The boards formed and the regulations…

  4. A collaborative institutional model for integrating computer applications in the medical curriculum.

    PubMed Central

    Friedman, C. P.; Oxford, G. S.; Juliano, E. L.

    1991-01-01

    The introduction and promotion of information technology in an established medical curriculum with existing academic and technical support structures poses a number of challenges. The UNC School of Medicine has developed the Taskforce on Educational Applications in Medicine (TEAM) to coordinate this effort. TEAM works as a confederation of existing research and support units with interests in computers and education, along with a core of interested faculty with curricular responsibilities. Constituent units of the TEAM confederation include the medical center library, medical television studios, basic science teaching laboratories, educational development office, microcomputer and network support groups, academic affairs administration, and a subset of course directors and teaching faculty. Among our efforts have been the establishment of (1) a mini-grant program to support faculty-initiated development and implementation of computer applications in the curriculum, (2) a symposium series with visiting speakers to acquaint faculty with current developments in medical informatics and related curricular efforts at other institutions, (3) 20 computer workstations located in the multipurpose teaching labs where first- and second-year students do much of their academic work, and (4) a demonstration center for evaluation of courseware and technologically advanced delivery systems. The student workstations provide convenient access to electronic mail, University schedules and calendars, the CoSy computer conferencing system, and several software applications integral to their courses in pathology, histology, microbiology, biochemistry, and neurobiology. The progress achieved toward the primary goal has modestly exceeded our initial expectations, while the collegiality and interest expressed toward TEAM activities in the local environment stand as empirical measures of the success of the concept. PMID:1807705

  5. Worldwide Ocean Optics Database (WOOD)

    DTIC Science & Technology

    2001-09-30

    The user can obtain values computed from empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data). Error estimates will also be provided for ... properties, including diffuse attenuation, beam attenuation, and scattering. The database shall be easy to use, Internet accessible, and frequently updated.

  6. Providing Feedback on Computer-Based Algebra Homework in Middle-School Classrooms

    ERIC Educational Resources Information Center

    Fyfe, Emily R.

    2016-01-01

    Homework is transforming at a rapid rate with continuous advances in educational technology. Computer-based homework, in particular, is gaining popularity across a range of schools, with little empirical evidence on how to optimize student learning. The current aim was to test the effects of different types of feedback on computer-based homework.…

  7. Impacts of Mobile Computing on Student Learning in the University: A Comparison of Course Assessment Data

    ERIC Educational Resources Information Center

    Hawkes, Mark; Hategekimana, Claver

    2010-01-01

    This study focuses on the impact of wireless, mobile computing tools on student assessment outcomes. In a campus-wide wireless, mobile computing environment at an upper Midwest university, an empirical analysis is applied to understand the relationship between student performance and Tablet PC use. An experimental/control group comparison of…

  8. An Empirical Look at Business Students' Attitudes towards Laptop Computers in the Classroom

    ERIC Educational Resources Information Center

    Dykstra, DeVee E.; Tracy, Daniel L.; Wergin, Rand

    2013-01-01

    Mobile computing technology has proliferated across university campuses with the goals of enhancing student learning outcomes and making courses more accessible. An increasing amount of research has been conducted about mobile computing's benefits in classroom settings. Yet, the research is still in its infancy. The purpose of this paper is to add…

  9. Synthesizing Results From Empirical Research on Computer-Based Scaffolding in STEM Education

    PubMed Central

    Belland, Brian R.; Walker, Andrew E.; Kim, Nam Ju; Lefler, Mason

    2016-01-01

    Computer-based scaffolding assists students as they generate solutions to complex problems, goals, or tasks, helping increase and integrate their higher order skills in the process. However, despite decades of research on scaffolding in STEM (science, technology, engineering, and mathematics) education, no existing comprehensive meta-analysis has synthesized the results of these studies. This review addresses that need by synthesizing the results of 144 experimental studies (333 outcomes) on the effects of computer-based scaffolding designed to assist the full range of STEM learners (primary through adult education) as they navigated ill-structured, problem-centered curricula. Results of our random effect meta-analysis (a) indicate that computer-based scaffolding showed a consistently positive (ḡ = 0.46) effect on cognitive outcomes across various contexts of use, scaffolding characteristics, and levels of assessment and (b) shed light on many scaffolding debates, including the roles of customization (i.e., fading and adding) and context-specific support. Specifically, scaffolding’s influence on cognitive outcomes did not vary on the basis of context-specificity, presence or absence of scaffolding change, and logic by which scaffolding change is implemented. Scaffolding’s influence was greatest when measured at the principles level and among adult learners. Still scaffolding’s effect was substantial and significantly greater than zero across all age groups and assessment levels. These results suggest that scaffolding is a highly effective intervention across levels of different characteristics and can largely be designed in many different ways while still being highly effective. PMID:28344365

  10. Synthesizing Results From Empirical Research on Computer-Based Scaffolding in STEM Education: A Meta-Analysis.

    PubMed

    Belland, Brian R; Walker, Andrew E; Kim, Nam Ju; Lefler, Mason

    2017-04-01

    Computer-based scaffolding assists students as they generate solutions to complex problems, goals, or tasks, helping increase and integrate their higher order skills in the process. However, despite decades of research on scaffolding in STEM (science, technology, engineering, and mathematics) education, no existing comprehensive meta-analysis has synthesized the results of these studies. This review addresses that need by synthesizing the results of 144 experimental studies (333 outcomes) on the effects of computer-based scaffolding designed to assist the full range of STEM learners (primary through adult education) as they navigated ill-structured, problem-centered curricula. Results of our random effect meta-analysis (a) indicate that computer-based scaffolding showed a consistently positive (ḡ = 0.46) effect on cognitive outcomes across various contexts of use, scaffolding characteristics, and levels of assessment and (b) shed light on many scaffolding debates, including the roles of customization (i.e., fading and adding) and context-specific support. Specifically, scaffolding's influence on cognitive outcomes did not vary on the basis of context-specificity, presence or absence of scaffolding change, and logic by which scaffolding change is implemented. Scaffolding's influence was greatest when measured at the principles level and among adult learners. Still scaffolding's effect was substantial and significantly greater than zero across all age groups and assessment levels. These results suggest that scaffolding is a highly effective intervention across levels of different characteristics and can largely be designed in many different ways while still being highly effective.
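
    A minimal sketch of the DerSimonian-Laird random-effects pooling underlying this kind of meta-analysis (illustrative Hedges' g values and variances, not the review's data or analysis code), in Python with numpy:

      import numpy as np

      def dersimonian_laird(effects, variances):
          """Pool study effect sizes with a DerSimonian-Laird random-effects model."""
          effects = np.asarray(effects, dtype=float)
          variances = np.asarray(variances, dtype=float)
          w_fixed = 1.0 / variances                          # inverse-variance (fixed-effect) weights
          mean_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
          Q = np.sum(w_fixed * (effects - mean_fixed) ** 2)  # Cochran's Q
          df = len(effects) - 1
          C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
          tau2 = max(0.0, (Q - df) / C)                      # between-study variance estimate
          w_random = 1.0 / (variances + tau2)
          pooled = np.sum(w_random * effects) / np.sum(w_random)
          se = np.sqrt(1.0 / np.sum(w_random))
          return pooled, se, tau2

      # Toy example with hypothetical Hedges' g values and their variances.
      g, se, tau2 = dersimonian_laird([0.30, 0.55, 0.48, 0.62], [0.02, 0.03, 0.015, 0.04])
      print(f"pooled g = {g:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")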

  11. A secure protocol for protecting the identity of providers when disclosing data for disease surveillance

    PubMed Central

    Hu, Jun; Mercer, Jay; Peyton, Liam; Kantarcioglu, Murat; Malin, Bradley; Buckeridge, David; Samet, Saeed; Earle, Craig

    2011-01-01

    Background Providers have been reluctant to disclose patient data for public-health purposes. Even if patient privacy is ensured, the desire to protect provider confidentiality has been an important driver of this reluctance. Methods Six requirements for a surveillance protocol were defined that satisfy the confidentiality needs of providers and ensure utility to public health. The authors developed a secure multi-party computation protocol using the Paillier cryptosystem to allow the disclosure of stratified case counts and denominators to meet these requirements. The authors evaluated the protocol in a simulated environment on its computation performance and ability to detect disease outbreak clusters. Results Theoretical and empirical assessments demonstrate that all requirements are met by the protocol. A system implementing the protocol scales linearly in terms of computation time as the number of providers is increased. The absolute time to perform the computations was 12.5 s for data from 3000 practices. This is acceptable performance, given that the reporting would normally be done at 24 h intervals. The accuracy of disease outbreak cluster detection was unchanged compared with a non-secure distributed surveillance protocol, with an F-score higher than 0.92 for outbreaks involving 500 or more cases. Conclusion The protocol and associated software provide a practical method for providers to disclose patient data for sentinel, syndromic or other indicator-based surveillance while protecting patient privacy and the identity of individual providers. PMID:21486880
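
    A minimal sketch of the additively homomorphic aggregation that a Paillier-based protocol of this kind relies on, using the third-party python-paillier (phe) package; the provider counts are hypothetical and this is not the published surveillance protocol itself:

      from phe import paillier

      public_key, private_key = paillier.generate_paillier_keypair()

      # Each provider encrypts its local case count for one stratum.
      provider_counts = [3, 0, 7, 2]
      ciphertexts = [public_key.encrypt(c) for c in provider_counts]

      # An aggregator sums the ciphertexts without learning any individual count.
      encrypted_total = ciphertexts[0]
      for ct in ciphertexts[1:]:
          encrypted_total = encrypted_total + ct

      # Only the holder of the private key can decrypt the stratified total.
      print(private_key.decrypt(encrypted_total))  # 12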

  12. Fast variogram analysis of remotely sensed images in HPC environment

    NASA Astrophysics Data System (ADS)

    Pesquer, Lluís; Cortés, Anna; Masó, Joan; Pons, Xavier

    2013-04-01

    Exploring and describing the spatial variation of images is one of the main applications of geostatistics to remote sensing. The variogram is a very suitable tool to carry out this spatial pattern analysis. Variogram analysis is composed of two steps: empirical variogram generation and fitting a variogram model. Empirical variogram generation is a very quick procedure for most analyses of irregularly distributed samples, but the computing time increases quite significantly for remotely sensed images, because the number of samples (pixels) involved is usually huge (more than 30 million for a Landsat TM scene), depending basically on the extent and spatial resolution of the images. In several remote sensing applications this type of analysis is repeated for each image, sometimes for hundreds of scenes and sometimes for each radiometric band (a high number in the case of hyperspectral images), so that there is a need for a fast implementation. In order to reduce this high execution time, we carried out a parallel solution of the variogram analyses. The solution adopted is the master/worker programming paradigm, in which the master process distributes and coordinates the tasks executed by the worker processes. The code is written in ANSI-C, using MPI (Message Passing Interface) as the message-passing library to communicate the master with the workers. This solution (ANSI-C + MPI) guarantees portability between different computer platforms. The High Performance Computing (HPC) environment is formed by 32 nodes, each with two Dual Core Intel(R) Xeon(R) 3.0 GHz processors and 12 GB of RAM, connected by integrated dual gigabit Ethernet. This IBM cluster is located in the research laboratory of the Computer Architecture and Operating Systems Department of the Universitat Autònoma de Barcelona. The performance results for a 15 km x 15 km subscene of the 198-31 path-row Landsat TM image are shown in Table 1.

    Table 1. Landsat TM subscene (15 km x 15 km):
      N workers   Time (s)   Speedup
      0 (serial)  2975.03       -
      2           2112.33     1.41
      4           1067.45     2.79
      8            534.18     5.57
      12           357.54     8.32
      16           269.00    11.06
      20           216.24    13.76
      24           186.31    15.97

    The proximity between the empirical speedup behaviour and the theoretical linear speedup confirms that a suitable parallel design and implementation were applied. Furthermore, very similar performance results, shown in Table 2, are obtained for CASI images (hyperspectral and of finer spatial resolution than Landsat), demonstrating that the load-distribution design is not defined and optimized for a single type of image, but is a flexible design that maintains good balance and scalability over a range of image dimensions.

    Table 2. CASI image:
      N workers   Time (s)   Speedup
      0 (serial)  5485.03       -
      2           3847.47     1.43
      4           1921.62     2.85
      8            965.55     5.68
      12           644.26     8.51
      16           483.40    11.35
      20           393.67    13.93
      24           347.15    15.80
      28           306.33    17.91
      32           304.39    18.02

    Finally, we conclude that this significant time reduction underlines the utility of distributed environments for processing large amounts of data such as remotely sensed images.
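
    A minimal single-process sketch of empirical (isotropic) semivariogram generation on a pixel sample with numpy; the paper's implementation is ANSI-C + MPI, with the master distributing such pixel samples over worker processes:

      import numpy as np

      def empirical_variogram(image, max_lag, n_bins=20, n_samples=5000, seed=0):
          rng = np.random.default_rng(seed)
          rows, cols = image.shape
          # Random sample of pixel pairs (full images have far too many pairs to use them all).
          idx = rng.integers(0, rows * cols, size=(n_samples, 2))
          r, c = np.unravel_index(idx, (rows, cols))
          d = np.hypot(r[:, 0] - r[:, 1], c[:, 0] - c[:, 1])
          dz2 = (image[r[:, 0], c[:, 0]] - image[r[:, 1], c[:, 1]]) ** 2
          keep = (d > 0) & (d <= max_lag)
          bins = np.linspace(0, max_lag, n_bins + 1)
          which = np.digitize(d[keep], bins)
          lags, gamma = [], []
          for b in range(1, n_bins + 1):
              sel = which == b
              if sel.any():
                  lags.append(d[keep][sel].mean())
                  gamma.append(0.5 * dz2[keep][sel].mean())  # semivariance per lag bin
          return np.array(lags), np.array(gamma)

      lags, gamma = empirical_variogram(np.random.default_rng(1).normal(size=(200, 200)), max_lag=50)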

  13. Masks in Pedagogical Practice

    ERIC Educational Resources Information Center

    Roy, David

    2016-01-01

    In Drama Education mask work is undertaken and presented as both a methodology and knowledge base. There are numerous workshops and journal articles available for teachers that offer knowledge or implementation of mask work. However, empirical examination of the context or potential implementation of masks as a pedagogical tool remains…

  14. Universe creation on a computer

    NASA Astrophysics Data System (ADS)

    McCabe, Gordon

    The purpose of this paper is to provide an account of the epistemology and metaphysics of universe creation on a computer. The paper begins with F.J. Tipler's argument that our experience is indistinguishable from the experience of someone embedded in a perfect computer simulation of our own universe, hence we cannot know whether or not we are part of such a computer program ourselves. Tipler's argument is treated as a special case of epistemological scepticism, in a similar vein to 'brain-in-a-vat' arguments. It is argued that Tipler's hypothesis that our universe is a program running on a digital computer in another universe, generates empirical predictions, and is therefore a falsifiable hypothesis. The computer program hypothesis is also treated as a hypothesis about what exists beyond the physical world, and is compared with Kant's metaphysics of noumena. It is argued that if our universe is a program running on a digital computer, then our universe must have compact spatial topology, and the possibilities of observationally testing this prediction are considered. The possibility of testing the computer program hypothesis with the value of the density parameter Ω0 is also analysed. The informational requirements for a computer to represent a universe exactly and completely are considered. Consequent doubt is thrown upon Tipler's claim that if a hierarchy of computer universes exists, we would not be able to know which 'level of implementation' our universe exists at. It is then argued that a digital computer simulation of a universe, or any other physical system, does not provide a realisation of that universe or system. It is argued that a digital computer simulation of a physical system is not objectively related to that physical system, and therefore cannot exist as anything else other than a physical process occurring upon the components of the computer. It is concluded that Tipler's sceptical hypothesis, and a related hypothesis from Bostrom, cannot be true: it is impossible that our own experience is indistinguishable from the experience of somebody embedded in a digital computer simulation because it is impossible for anybody to be embedded in a digital computer simulation.

  15. PAGANI Toolkit: Parallel graph-theoretical analysis package for brain network big data.

    PubMed

    Du, Haixiao; Xia, Mingrui; Zhao, Kang; Liao, Xuhong; Yang, Huazhong; Wang, Yu; He, Yong

    2018-05-01

    The recent collection of unprecedented quantities of neuroimaging data with high spatial resolution has led to brain network big data. However, a toolkit for fast and scalable computational solutions is still lacking. Here, we developed the PArallel Graph-theoretical ANalysIs (PAGANI) Toolkit based on a hybrid central processing unit-graphics processing unit (CPU-GPU) framework with a graphical user interface to facilitate the mapping and characterization of high-resolution brain networks. Specifically, the toolkit provides flexible parameters for users to customize computations of graph metrics in brain network analyses. As an empirical example, the PAGANI Toolkit was applied to individual voxel-based brain networks with ∼200,000 nodes that were derived from a resting-state fMRI dataset of 624 healthy young adults from the Human Connectome Project. Using a personal computer, this toolbox completed all computations in ∼27 h for one subject, which is markedly less than the 118 h required with a single-thread implementation. The voxel-based functional brain networks exhibited prominent small-world characteristics and densely connected hubs, which were mainly located in the medial and lateral fronto-parietal cortices. Moreover, the female group had significantly higher modularity and nodal betweenness centrality mainly in the medial/lateral fronto-parietal and occipital cortices than the male group. Significant correlations between the intelligence quotient and nodal metrics were also observed in several frontal regions. Collectively, the PAGANI Toolkit shows high computational performance and good scalability for analyzing connectome big data and provides a friendly interface without the complicated configuration of computing environments, thereby facilitating high-resolution connectomics research in health and disease. © 2018 Wiley Periodicals, Inc.
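
    A minimal sketch of the kind of graph metrics such a toolkit computes, here with networkx on a small synthetic correlation network (the toolkit itself targets ~200,000-node voxel networks on a CPU-GPU back end); assumes a recent networkx:

      import numpy as np
      import networkx as nx
      from networkx.algorithms import community

      rng = np.random.default_rng(0)
      corr = np.corrcoef(rng.normal(size=(60, 200)))   # hypothetical 60-node connectivity matrix
      np.fill_diagonal(corr, 0.0)
      adj = (np.abs(corr) > 0.15).astype(int)          # threshold to a binary network
      G = nx.from_numpy_array(adj)

      betweenness = nx.betweenness_centrality(G)
      clustering = nx.average_clustering(G)
      parts = community.greedy_modularity_communities(G)
      Q = community.modularity(G, parts)
      print(f"mean betweenness={np.mean(list(betweenness.values())):.4f}, "
            f"clustering={clustering:.3f}, modularity={Q:.3f}")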

  16. Using antibiograms to improve antibiotic prescribing in skilled nursing facilities.

    PubMed

    Furuno, Jon P; Comer, Angela C; Johnson, J Kristie; Rosenberg, Joseph H; Moore, Susan L; MacKenzie, Thomas D; Hall, Kendall K; Hirshon, Jon Mark

    2014-10-01

    Antibiograms have effectively improved antibiotic prescribing in acute-care settings; however, their effectiveness in skilled nursing facilities (SNFs) is currently unknown. To develop SNF-specific antibiograms and identify opportunities to improve antibiotic prescribing. Cross-sectional and pretest-posttest study among residents of 3 Maryland SNFs. Antibiograms were created using clinical culture data from a 6-month period in each SNF. We also used admission clinical culture data from the acute care facility primarily associated with each SNF for transferred residents. We manually collected all data from medical charts, and antibiograms were created using WHONET software. We then used a pretest-posttest study to evaluate the effectiveness of an antibiogram on changing antibiotic prescribing practices in a single SNF. Appropriate empirical antibiotic therapy was defined as an empirical antibiotic choice that sufficiently covered the infecting organism, considering antibiotic susceptibilities. We reviewed 839 patient charts from SNF and acute care facilities. During the initial assessment period, 85% of initial antibiotic use in the SNFs was empirical, and thus only 15% of initial antibiotics were based on culture results. Fluoroquinolones were the most frequently used empirical antibiotics, accounting for 54.5% of initial prescribing instances. Among patients with available culture data, only 35% of empirical antibiotic prescribing was determined to be appropriate. In the single SNF in which we evaluated antibiogram effectiveness, prevalence of appropriate antibiotic prescribing increased from 32% to 45% after antibiogram implementation; however, this was not statistically significant ([Formula: see text]). Implementation of antibiograms may be effective in improving empirical antibiotic prescribing in SNFs.
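
    A minimal sketch of tabulating an antibiogram (percent susceptible by organism and antibiotic) from culture results with pandas; the column names and toy records are illustrative and this is not the WHONET workflow used in the study:

      import pandas as pd

      cultures = pd.DataFrame({
          "organism":   ["E. coli", "E. coli", "E. coli", "K. pneumoniae", "K. pneumoniae"],
          "antibiotic": ["ciprofloxacin", "ciprofloxacin", "ceftriaxone", "ciprofloxacin", "ceftriaxone"],
          "result":     ["S", "R", "S", "S", "S"],    # S = susceptible, R = resistant
      })

      antibiogram = (cultures.assign(susceptible=cultures["result"].eq("S"))
                              .pivot_table(index="organism", columns="antibiotic",
                                           values="susceptible", aggfunc="mean") * 100)
      print(antibiogram.round(0))   # percent susceptible per organism-antibiotic pair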

  17. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction: it tries to minimize the kriging variance over the whole design region, so a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time demanding that in practice the G-optimal design cannot really be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching for it in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
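
    A minimal numeric sketch of the two competing criteria: the maximum (simple) kriging variance that G-optimality minimizes and the information-matrix determinant that D-optimality maximizes; the exponential covariance and the random candidate design are illustrative assumptions:

      import numpy as np

      def cov(a, b, sill=1.0, rng_par=0.3):
          d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
          return sill * np.exp(-d / rng_par)               # exponential covariance model

      rng = np.random.default_rng(0)
      design = rng.uniform(0, 1, size=(12, 2))             # candidate design points
      grid = np.column_stack([g.ravel() for g in np.meshgrid(np.linspace(0, 1, 40),
                                                             np.linspace(0, 1, 40))])

      C = cov(design, design) + 1e-10 * np.eye(len(design))
      c0 = cov(design, grid)                               # covariances design vs. prediction grid
      # Simple-kriging variance at each grid point: sigma^2(x0) = C(0) - c0' C^{-1} c0
      sk_var = 1.0 - np.einsum('ij,ij->j', c0, np.linalg.solve(C, c0))
      g_criterion = sk_var.max()                           # what a G-optimal design minimizes

      F = np.column_stack([np.ones(len(design)), design])  # constant + linear trend
      d_criterion = np.linalg.det(F.T @ np.linalg.solve(C, F))  # what a D-optimal design maximizes

      print(f"max kriging variance = {g_criterion:.3f}, det(information) = {d_criterion:.3e}")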

  18. Inflated speedups in parallel simulations via malloc()

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    Discrete-event simulation programs make heavy use of dynamic memory allocation in order to support simulation's very dynamic space requirements. When programming in C one is likely to use the malloc() routine. However, a parallel simulation which uses the standard Unix System V malloc() implementation may achieve an overly optimistic speedup, possibly superlinear. An alternate implementation provided on some (but not all systems) can avoid the speedup anomaly, but at the price of significantly reduced available free space. This is especially severe on most parallel architectures, which tend not to support virtual memory. It is shown how a simply implemented user-constructed interface to malloc() can both avoid artificially inflated speedups, and make efficient use of the dynamic memory space. The interface simply catches blocks on the basis of their size. The problem is demonstrated empirically, and the effectiveness of the solution is shown both empirically and analytically.
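
    A conceptual Python analogue of the user-level interface described above, caching freed blocks in bins keyed by size so repeated requests of the same size bypass the allocator; the paper's actual interface wraps C malloc(), so this is only an illustration of the size-binning idea:

      from collections import defaultdict

      class BinnedPool:
          def __init__(self):
              self._bins = defaultdict(list)      # size -> list of reusable buffers

          def alloc(self, size):
              bin_ = self._bins[size]
              return bin_.pop() if bin_ else bytearray(size)   # reuse or allocate fresh

          def free(self, buf):
              self._bins[len(buf)].append(buf)    # cache by size for later reuse

      pool = BinnedPool()
      a = pool.alloc(256)
      pool.free(a)
      b = pool.alloc(256)     # the cached buffer is handed back; no new allocation
      print(a is b)           # True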

  19. Adaptive Educational Hypermedia Accommodating Learning Styles: A Content Analysis of Publications from 2000 to 2011

    ERIC Educational Resources Information Center

    Akbulut, Yavuz; Cardak, Cigdem Suzan

    2012-01-01

    Implementing instructional interventions to accommodate learner differences has received considerable attention. Among these individual difference variables, the empirical evidence regarding the pedagogical value of learning styles has been questioned, but the research on the issue continues. Recent developments in Web-based implementations have…

  20. An Evaluation of Strategies for Training Staff to Implement the Picture Exchange Communication System

    ERIC Educational Resources Information Center

    Barnes, Clarissa S.; Dunning, Johnna L.; Rehfeldt, Ruth Anne

    2011-01-01

    The picture exchange communication system (PECS) is a functional communication system frequently used with individuals diagnosed with autism spectrum disorders who experience severe language delays (Frost & Bondy, 2002). Few empirical investigations have evaluated strategies for training direct care staff how to effectively implement PECS with…

  1. A Contingent Analysis of the Relationship between IS Implementation Strategies and IS Success.

    ERIC Educational Resources Information Center

    Kim, Sang-Hoon; Lee, Jinjoo

    1991-01-01

    Considers approaches to dealing with user attitudes toward newly implemented information systems (IS), and suggests that behavioral management strategies relevant to IS fall into three categories: (1) empirical/rational; (2) normative/reeducative; and (3) power/coercive, based on "planned change" theories. An integrative contingent model…

  2. Graphic Narratives: Cognitive and Pedagogical Choices for Implementation in the English Language Arts Classroom

    ERIC Educational Resources Information Center

    Dulaney, Margaret Anne

    2012-01-01

    There is little empirical research that investigates the implementation of graphic narratives into the English language arts classroom, subsequently leading to misperceptions and misconceptions about their educative uses. Despite sequential arts' long history, graphic narratives continue to experience a marginalized existence within the…

  3. WebArray: an online platform for microarray data analysis

    PubMed Central

    Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng

    2005-01-01

    Background Many cutting-edge microarray analysis tools and algorithms, including commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform we developed an online microarray data analysis platform, WebArray, for bench biologists to utilize these tools to explore data from single/dual color microarray experiments. Results The currently implemented functions were based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, PCA-assisted normalization method and genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weight, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation, chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165
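
    A minimal sketch of Benjamini-Hochberg false discovery rate adjustment, one of the analysis steps listed above (WebArray itself wraps limma/affy in R; this is not its code):

      import numpy as np

      def benjamini_hochberg(pvalues):
          p = np.asarray(pvalues, dtype=float)
          n = p.size
          order = np.argsort(p)
          ranked = p[order] * n / np.arange(1, n + 1)        # p_(i) * n / i
          # Enforce monotonicity from the largest p-value downwards, then cap at 1.
          adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
          adjusted = np.clip(adjusted, 0, 1)
          out = np.empty_like(adjusted)
          out[order] = adjusted                              # restore original ordering
          return out

      print(benjamini_hochberg([0.001, 0.02, 0.03, 0.5, 0.8]))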

  4. The cognitive architecture of anxiety-like behavioral inhibition.

    PubMed

    Bach, Dominik R

    2017-01-01

    The combination of reward and potential threat is termed approach/avoidance conflict and elicits specific behaviors, including passive avoidance and behavioral inhibition (BI). Anxiety-relieving drugs reduce these behaviors, and a rich psychological literature has addressed how personality traits dominated by BI predispose for anxiety disorders. Yet, a formal understanding of the cognitive inference and planning processes underlying anxiety-like BI is lacking. Here, we present and empirically test such formalization in the terminology of reinforcement learning. We capitalize on a human computer game in which participants collect sequentially appearing monetary tokens while under threat of virtual "predation." First, we demonstrate that humans modulate BI according to experienced consequences. This suggests an instrumental implementation of BI generation rather than a Pavlovian mechanism that is agnostic about action outcomes. Second, an internal model that would make BI adaptive is expressed in an independent task that involves no threat. The existence of such internal model is a necessary condition to conclude that BI is under model-based control. These findings relate a plethora of human and nonhuman observations on BI to reinforcement learning theory, and crucially constrain the quest for its neural implementation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Comparison Analysis of Recognition Algorithms of Forest-Cover Objects on Hyperspectral Air-Borne and Space-Borne Images

    NASA Astrophysics Data System (ADS)

    Kozoderov, V. V.; Kondranin, T. V.; Dmitriev, E. V.

    2017-12-01

    The basic model for the recognition of natural and anthropogenic objects using their spectral and textural features is described in the problem of hyperspectral air-borne and space-borne imagery processing. The model is based on improvements of the Bayesian classifier, a computational procedure of statistical decision making in machine-learning methods of pattern recognition. The principal component method is implemented to decompose the hyperspectral measurements on the basis of empirical orthogonal functions. Application examples of various modifications of the Bayesian classifier and the Support Vector Machine method are shown. Examples are provided of comparing these classifiers with a metrical classifier that operates by finding the minimal Euclidean distance between different points and sets in the multidimensional feature space. A comparison is also carried out with the "K-weighted neighbors" method, which is close to the nonparametric Bayesian classifier.
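
    A minimal sketch of the classifier comparison described above, using scikit-learn with PCA (empirical orthogonal functions) followed by a simple Bayesian classifier, an SVM, and a k-nearest-neighbors classifier; the synthetic "spectra" stand in for real hyperspectral pixels:

      from sklearn.datasets import make_classification
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB        # a simple Bayesian classifier
      from sklearn.svm import SVC
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      X, y = make_classification(n_samples=600, n_features=120, n_informative=20,
                                 n_classes=4, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      for name, clf in [("Bayes", GaussianNB()),
                        ("SVM", SVC(kernel="rbf", C=10)),
                        ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
          model = make_pipeline(PCA(n_components=15), clf)   # decompose, then classify
          model.fit(X_train, y_train)
          print(name, round(model.score(X_test, y_test), 3))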

  6. Number of repetitions required to retain single-digit multiplication math facts for elementary students.

    PubMed

    Burns, Matthew K; Ysseldyke, Jim; Nelson, Peter M; Kanive, Rebecca

    2015-09-01

    Computational fluency is an important aspect of math proficiency. Despite widely held beliefs about the differential difficulty of single-digit multiplication math facts, little empirical work has examined this issue. The current study analyzed the number of repetitions needed to master multiplication math facts. Data from 15,402 3rd, 4th, and 5th graders were analyzed using a national database. Results suggested that (a) students with lower math skills required significantly (p < .001) more repetitions than more skilled students; (b) across all students, single-digit multiplication facts with 4s, 5s, 6s, and 7s required significantly (p < .001) more repetition than did 2s and 3s; and (c) the number of practice sessions needed to attain mastery significantly (p < .001) decreased with increasing grade level. Implications for instructional planning and implementation are discussed. (c) 2015 APA, all rights reserved.

  7. Investigation using data from ERTS-1 to develop and implement utilization of living marine resources. [availability and distribution of menhaden fish in Mississippi Sound and Gulf waters

    NASA Technical Reports Server (NTRS)

    Stevenson, W. H. (Principal Investigator); Pastula, E. J., Jr.

    1973-01-01

    The author has identified the following significant results. This 15-month ERTS-1 investigation produced correlations between satellite, aircraft, menhaden fisheries, and environmental sea truth data from the Mississippi Sound. Selected oceanographic, meteorological, and biological parameters were used as indirect indicators of the menhaden resource. Synoptic and near real time sea truth, fishery, satellite imagery, aircraft acquired multispectral, photo and thermal IR information were acquired as data inputs. Computer programs were developed to manipulate these data according to user requirements. Preliminary results indicate a correlation between backscattered light with chlorophyll concentration and water transparency in turbid waters. Eight empirical menhaden distribution models were constructed from combinations of four fisheries-significant oceanographic parameters: water depth, transparency, color, and surface salinity. The models demonstrated their potential for management utilization in areas of resource assessment, prediction, and monitoring.

  8. A system for routing arbitrary directed graphs on SIMD architectures

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl

    1987-01-01

    There are many problems which can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from connecting vertices. A method is given for parallelizing such problems on an SIMD machine model that is bit-serial and uses only nearest neighbor connections for communication. Each vertex of the graph will be assigned to a processor in the machine. Algorithms are given that will be used to implement movement of data along the arcs of the graph. This architecture and algorithms define a system that is relatively simple to build and can do graph processing. All arcs can be traversed in parallel in time O(T), where T is empirically proportional to the diameter of the interconnection network times the average degree of the graph. Modifying or adding a new arc takes the same time as parallel traversal.

  9. 76 FR 52353 - Assumption Buster Workshop: “Current Implementations of Cloud Computing Indicate a New Approach...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-22

    ... explored in this series is cloud computing. The workshop on this topic will be held in Gaithersburg, MD on October 21, 2011. Assertion: ``Current implementations of cloud computing indicate a new approach to security'' Implementations of cloud computing have provided new ways of thinking about how to secure data...

  10. Optimal Sequential Rules for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  11. Computer-Assisted Laboratory Stations.

    ERIC Educational Resources Information Center

    Snyder, William J.; Hanyak, Michael E.

    1985-01-01

    Describes the advantages and features of computer-assisted laboratory stations for use in a chemical engineering program. Also describes a typical experiment at such a station: determining the response times of a solid state humidity sensor at various humidity conditions and developing an empirical model for the sensor. (JN)

  12. Impact of a clinical pathway on appropriate empiric vancomycin use in cancer patients with febrile neutropenia.

    PubMed

    Vicente, Mildred; Al-Nahedh, Mohammad; Parsad, Sandeep; Knoebel, Randall W; Pisano, Jennifer; Pettit, Natasha N

    2017-12-01

    Objectives Febrile neutropenia management guidelines recommend the use of vancomycin as part of an empiric antimicrobial regimen when specific criteria are met. Often, vancomycin use among patients with febrile neutropenia is not indicated and may be overutilized for this indication. We sought to evaluate the impact of implementing a febrile neutropenia clinical pathway on empiric vancomycin use for febrile neutropenia and to identify predictors of vancomycin use when not indicated. Methods Adult febrile neutropenia patients who received initial therapy with an anti-pseudomonal beta-lactam with or without vancomycin were identified before (June 2008 to November 2010) and after (June 2012 to June 2013) pathway implementation. Patients were assessed for appropriateness of therapy based on whether the patient received vancomycin consistent with guideline recommendations. Using a comorbidity index used for risk assessment in high risk hematology/oncology patients, we evaluated whether specific comorbidities are associated with inappropriate vancomycin use in the setting of febrile neutropenia. Results A total of 206 patients were included in the pre-pathway time period with 35.9% of patients receiving vancomycin therapy that was inconsistent with the pathway. A total of 131 patients were included in the post-pathway time period with 11.4% of patients receiving vancomycin inconsistent with the pathway (p = 0.001). None of the comorbidities assessed, nor the comorbidity index score, was found to be a predictor of vancomycin use inconsistent with guideline recommendations. Conclusion Our study has demonstrated that implementation of a febrile neutropenia pathway can significantly improve adherence to national guideline recommendations with respect to empiric vancomycin utilization for febrile neutropenia.

  13. Implementation of cloud computing in higher education

    NASA Astrophysics Data System (ADS)

    Asniar; Budiawan, R.

    2016-04-01

    Cloud computing research is a new trend in distributed computing, in which service-oriented architecture (SOA) based applications have been developed. The technology is particularly useful for higher education. This research studies the need for and feasibility of cloud computing in higher education and then proposes a model of cloud computing services for higher education in Indonesia that can be implemented to support academic activities. A literature study is used as the research methodology to derive the proposed model. Finally, SaaS and IaaS are the cloud computing services proposed for implementation in higher education in Indonesia, and a hybrid cloud is the recommended service model.

  14. Past Research in Instructional Technology: Results of a Content Analysis of Empirical Studies Published in Three Prominent Instructional Technology Journals from the Year 2000 through 2004

    ERIC Educational Resources Information Center

    Hew, Khe Foon; Kale, Ugur; Kim, Nari

    2007-01-01

    This article reviews and categorizes empirical studies related to instructional technology that were published in three prominent journals: "Educational Technology Research and Development, Instructional Science," and the "Journal of Educational Computing Research" from the year 2000 through 2004. Four questions guided this review: 1) What…

  15. An Empirical Evaluation of Sonar Courseware Developed with Intelligent Tutoring Software (InTrain[TM]) at Naval Submarine School.

    ERIC Educational Resources Information Center

    Birchard, Marcy; Dye, Charles; Gordon, John

    With limits on both personnel and time available to conduct effective instruction, the decision is being made increasingly to enhance instructor-led courses with Computer-Based Training (CBT). The effectiveness of this conversion is often unknown and in many cases empirical evaluations are never conducted. This paper describes and discusses the…

  16. Demand for and Satisfaction with Places at University--An Empirical Comparative Study

    ERIC Educational Resources Information Center

    Bischoff, Florian; Gassmann, Freya; Emrich, E.

    2017-01-01

    What features lead a student to choose sport science, chemistry, physics, computer science or musicology as their subject of study and the Saarland University Saarbrücken as their place of study? Empirical analysis shows that study conditions for students of chemistry, physics and music do not play an important role in selecting the place of…

  17. Using a Touch-Based, Computer-Assisted Learning System to Promote Literacy and Math Skills for Low-Income Preschoolers

    ERIC Educational Resources Information Center

    McManis, Mark H.; McManis, Lilla Dale

    2016-01-01

    The use of touch-based technologies by young children to improve academic skills has seen growth outpacing empirical evidence of its effectiveness. Due to the educational challenges low-income children face, the stakes for providing instructional technology with demonstrated efficacy are high. The current work presents an empirical study of the…

  18. Computer Model of the Empirical Knowledge of Physics Formation: Coordination with Testing Results

    ERIC Educational Resources Information Center

    Mayer, Robert V.

    2016-01-01

    The use of method of imitational modeling to study forming the empirical knowledge in pupil's consciousness is discussed. The offered model is based on division of the physical facts into three categories: 1) the facts established in everyday life; 2) the facts, which the pupil can experimentally establish at a physics lesson; 3) the facts which…

  19. Using the TouchMath Program to Teach Mathematical Computation to At-Risk Students and Students with Disabilities

    ERIC Educational Resources Information Center

    Ellingsen, Ryleigh; Clinton, Elias

    2017-01-01

    This manuscript reviews the empirical literature of the TouchMath© instructional program. The TouchMath© program is a commercial mathematics series that uses a dot notation system to provide multisensory instruction of computation skills. Using the program, students are taught to solve computational tasks in a multisensory manner that does not…

  20. An Empirical Study of User Experience on Touch Mice

    ERIC Educational Resources Information Center

    Chou, Jyh Rong

    2016-01-01

    The touch mouse is a new type of computer mouse that provides users with a new way of touch-based environment to interact with computers. For more than a decade, user experience (UX) has grown into a core concept of human-computer interaction (HCI), describing a user's perceptions and responses that result from the use of a product in a particular…

  1. Evaluating a Computational Model of Social Causality and Responsibility

    DTIC Science & Technology

    2006-01-01

    Evaluating a Computational Model of Social Causality and Responsibility. Wenji Mao, University of Southern California Institute for Creative...empirically evaluate a computational model of social causality and responsibility against human social judgments. Results from our experimental...developed a general computational model of social causality and responsibility [10, 11] that formalizes the factors people use in reasoning about...

  2. An Empirical Method Permitting Rapid Determination of the Area, Rate and Distribution of Water-Drop Impingement on an Airfoil of Arbitrary Section at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Bergrun, N. R.

    1951-01-01

    An empirical method for the determination of the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The procedure represents an initial step toward the development of a method which is generally applicable in the design of thermal ice-prevention equipment for airplane wing and tail surfaces. Results given by the proposed empirical method are expected to be sufficiently accurate for the purpose of heated-wing design, and can be obtained from a few numerical computations once the velocity distribution over the airfoil has been determined. The empirical method presented for incompressible flow is based on results of extensive water-drop trajectory computations for five airfoil cases which consisted of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer. The method developed for incompressible flow is extended to the calculation of area and rate of impingement on straight wings in subsonic compressible flow to indicate the probable effects of compressibility for airfoils at low subsonic Mach numbers.

  3. Measurements and empirical model of the acoustic properties of reticulated vitreous carbon.

    PubMed

    Muehleisen, Ralph T; Beamer, C Walter; Tinianov, Brandon D

    2005-02-01

    Reticulated vitreous carbon (RVC) is a highly porous, rigid, open cell carbon foam structure with a high melting point, good chemical inertness, and low bulk thermal conductivity. For the proper design of acoustic devices such as acoustic absorbers and thermoacoustic stacks and regenerators utilizing RVC, the acoustic properties of RVC must be known. From knowledge of the complex characteristic impedance and wave number most other acoustic properties can be computed. In this investigation, the four-microphone transfer matrix measurement method is used to measure the complex characteristic impedance and wave number for 60 to 300 pore-per-inch RVC foams with flow resistivities from 1759 to 10,782 Pa s m(-2) in the frequency range of 330 Hz-2 kHz. The data are found to be poorly predicted by the fibrous material empirical model developed by Delany and Bazley, the open cell plastic foam empirical model developed by Qunli, or the Johnson-Allard microstructural model. A new empirical power law model is developed and is shown to provide good predictions of the acoustic properties over the frequency range of measurement. Uncertainty estimates for the constants of the model are also computed.
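
    A minimal sketch of fitting an empirical power-law model of the general kind proposed above (the functional form, synthetic data, and coefficients are illustrative, not the authors' model), using scipy:

      import numpy as np
      from scipy.optimize import curve_fit

      def power_law(X, a, b):
          # Illustrative normalized-impedance form: 1 + a * X**(-b), with X = rho0*f/sigma.
          return 1.0 + a * X ** (-b)

      rho0, sigma = 1.21, 5000.0                    # air density, assumed flow resistivity (Pa s m^-2)
      freq = np.linspace(330, 2000, 50)
      X = rho0 * freq / sigma
      measured = power_law(X, 0.08, 0.7) + np.random.default_rng(0).normal(0, 0.005, X.size)

      (a, b), _ = curve_fit(power_law, X, measured, p0=[0.1, 0.5])
      print(f"fitted a = {a:.3f}, b = {b:.3f}")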

  4. Measurements and empirical model of the acoustic properties of reticulated vitreous carbon

    NASA Astrophysics Data System (ADS)

    Muehleisen, Ralph T.; Beamer, C. Walter; Tinianov, Brandon D.

    2005-02-01

    Reticulated vitreous carbon (RVC) is a highly porous, rigid, open cell carbon foam structure with a high melting point, good chemical inertness, and low bulk thermal conductivity. For the proper design of acoustic devices such as acoustic absorbers and thermoacoustic stacks and regenerators utilizing RVC, the acoustic properties of RVC must be known. From knowledge of the complex characteristic impedance and wave number most other acoustic properties can be computed. In this investigation, the four-microphone transfer matrix measurement method is used to measure the complex characteristic impedance and wave number for 60 to 300 pore-per-inch RVC foams with flow resistivities from 1759 to 10 782 Pa s m-2 in the frequency range of 330 Hz-2 kHz. The data are found to be poorly predicted by the fibrous material empirical model developed by Delany and Bazley, the open cell plastic foam empirical model developed by Qunli, or the Johnson-Allard microstructural model. A new empirical power law model is developed and is shown to provide good predictions of the acoustic properties over the frequency range of measurement. Uncertainty estimates for the constants of the model are also computed.

  5. Secure Skyline Queries on Cloud Platform.

    PubMed

    Liu, Jinfei; Yang, Juncheng; Xiong, Li; Pei, Jian

    2017-04-01

    Outsourcing data and computation to cloud server provides a cost-effective way to support large scale data storage and query processing. However, due to security and privacy concerns, sensitive data (e.g., medical records) need to be protected from the cloud server and other unauthorized users. One approach is to outsource encrypted data to the cloud server and have the cloud server perform query processing on the encrypted data only. It remains a challenging task to support various queries over encrypted data in a secure and efficient way such that the cloud server does not gain any knowledge about the data, query, and query result. In this paper, we study the problem of secure skyline queries over encrypted data. The skyline query is particularly important for multi-criteria decision making but also presents significant challenges due to its complex computations. We propose a fully secure skyline query protocol on data encrypted using semantically-secure encryption. As a key subroutine, we present a new secure dominance protocol, which can be also used as a building block for other queries. Finally, we provide both serial and parallelized implementations and empirically study the protocols in terms of efficiency and scalability under different parameter settings, verifying the feasibility of our proposed solutions.
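
    A minimal plaintext sketch of the skyline (dominance) computation that the paper's protocol performs over encrypted records; smaller values are treated as better in both dimensions:

      def dominates(p, q):
          """p dominates q if p is no worse in every dimension and strictly better in at least one."""
          return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

      def skyline(points):
          return [p for p in points
                  if not any(dominates(q, p) for q in points if q != p)]

      records = [(2, 8), (4, 4), (9, 1), (5, 6), (3, 9)]   # e.g. (price, distance)
      print(skyline(records))    # [(2, 8), (4, 4), (9, 1)]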

  6. Towards a sustainable framework for computer based health information systems (CHIS) for least developed countries (LDCs).

    PubMed

    Gordon, Abekah Nkrumah; Hinson, Robert Ebo

    2007-01-01

    The purpose of this paper is to argue for a theoretical framework by which development of computer based health information systems (CHIS) can be made sustainable. Health management and promotion thrive on well-articulated CHIS. There are high levels of risk associated with the development of CHIS in the context of least developed countries (LDC), thereby making them unsustainable. This paper is based largely on a literature survey on health promotion and information systems. The main factors accounting for the sustainability problem in less developed countries include poor infrastructure, inappropriate donor policies and strategies, and inadequate human resource capacity. To counter these challenges and to ensure that CHIS deployment in LDCs is sustainable, it is proposed that the activities involved in the implementation of these systems be incorporated into organizational routines. This will ensure and secure the needed resources as well as the relevant support from all stakeholders of the system on a continuous basis. This paper sets out to look at the issue of CHIS sustainability in LDCs, theoretically explains the factors that account for the sustainability problem and develops a conceptual model based on theoretical literature and existing empirical findings.

  7. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    NASA Astrophysics Data System (ADS)

    Tofighi, Elham; Mahdizadeh, Amin

    2016-09-01

    This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on efficiency of the wind turbine control. Classically, these weights are selected based on intuitive understanding of the system dynamics and control objectives. The empirical methods, however, may not yield optimal solutions, especially when the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which the upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of a NMPC controller which drives the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller. The comparison verifies the improved performance of the controller when the weights are computed with the PSO-based technique.
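
    A minimal sketch of an upper-level particle swarm optimization loop tuning two cost-function weights; the closed-loop evaluation is a quadratic surrogate standing in for the NMPC-plus-turbine simulation, not the NREL 5-MW model:

      import numpy as np

      def closed_loop_cost(weights):
          # Hypothetical surrogate: penalize distance from a good setting unknown to the optimizer.
          return np.sum((weights - np.array([2.0, 0.5])) ** 2)

      rng = np.random.default_rng(0)
      n_particles, n_iter, dim = 20, 60, 2
      pos = rng.uniform(0, 5, size=(n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_cost = pos.copy(), np.array([closed_loop_cost(p) for p in pos])
      gbest = pbest[pbest_cost.argmin()].copy()

      w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration coefficients
      for _ in range(n_iter):
          r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          cost = np.array([closed_loop_cost(p) for p in pos])
          improved = cost < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
          gbest = pbest[pbest_cost.argmin()].copy()

      print("tuned weights:", np.round(gbest, 3))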

  8. Computational prediction of atomic structures of helical membrane proteins aided by EM maps.

    PubMed

    Kovacs, Julio A; Yeager, Mark; Abagyan, Ruben

    2007-09-15

    Integral membrane proteins pose a major challenge for protein-structure prediction because only approximately 100 high-resolution structures are available currently, thereby impeding the development of rules or empirical potentials to predict the packing of transmembrane alpha-helices. However, when an intermediate-resolution electron microscopy (EM) map is available, it can be used to provide restraints which, in combination with a suitable computational protocol, make structure prediction feasible. In this work we present such a protocol, which proceeds in three stages: (1) generation of an ensemble of alpha-helices by flexible fitting into each of the density rods in the low-resolution EM map, spanning a range of rotational angles around the main helical axes and translational shifts along the density rods; (2) fast optimization of side chains and scoring of the resulting conformations; and (3) refinement of the lowest-scoring conformations with internal coordinate mechanics, by optimizing the van der Waals, electrostatics, hydrogen bonding, torsional, and solvation energy contributions. In addition, our method implements a penalty term through a so-called tethering map, derived from the EM map, which restrains the positions of the alpha-helices. The protocol was validated on three test cases: GpA, KcsA, and MscL.

  9. A Simulation Framework for Battery Cell Impact Safety Modeling Using LS-DYNA

    DOE PAGES

    Marcicki, James; Zhu, Min; Bartlett, Alexander; ...

    2017-02-04

    The development process of electrified vehicles can benefit significantly from computer-aided engineering tools that predict the multiphysics response of batteries during abusive events. A coupled structural, electrical, electrochemical, and thermal model framework has been developed within the commercially available LS-DYNA software. The finite element model leverages a three-dimensional mesh structure that fully resolves the unit cell components. The mechanical solver predicts the distributed stress and strain response with failure thresholds leading to the onset of an internal short circuit. In this implementation, an arbitrary compressive strain criterion is applied locally to each unit cell. A spatially distributed equivalent circuit model provides an empirical representation of the electrochemical response with minimal computational complexity. The thermal model provides state information to index the electrical model parameters, while simultaneously accepting irreversible and reversible sources of heat generation. The spatially distributed models of the electrical and thermal dynamics allow for the localization of current density and corresponding temperature response. The ability to predict the distributed thermal response of the cell as its stored energy is completely discharged through the short circuit enables an engineering safety assessment. A parametric analysis of an exemplary model is used to demonstrate the simulation capabilities.

  10. Two decades of reforms. Appraisal of the financial reforms in the Russian public healthcare sector.

    PubMed

    Gordeev, Vladimir S; Pavlova, Milena; Groot, Wim

    2011-10-01

    This paper reviews the empirical evidence on the outcomes of the financial reforms in the Russian public healthcare sector. A systematic literature review identified 37 relevant publications that presented empirical evidence on changes in quality, equity, efficiency and sustainability in public healthcare provision due to the Russian public healthcare financial reforms. Evidence suggests that there are substantial inter-regional inequalities across income groups both in terms of financing and access to public healthcare services. There are large efficiency differences between regions, along with inter-regional variations in payment and reimbursement mechanisms. Informal and quasi-formal payments deteriorate access to public healthcare services and undermine the overall financing sustainability. The public healthcare sector is still underfinanced, although the implementation of health insurance gave some premises for future increases of efficiency. Overall, the available empirical data are not sufficient for an evidence-based evaluation of the reforms. More studies on the quality, equity, efficiency and sustainability impact of the reforms are needed. Future reforms should focus on the implementation of cost-efficiency and cost-control mechanisms; provide incentives for better allocation and distribution of resources; tackle problems in equity in access and financing; implement a system of quality controls; and stimulate healthy competition between insurance companies. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  11. Modelling Trial-by-Trial Changes in the Mismatch Negativity

    PubMed Central

    Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.

    2013-01-01

    The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989

  12. Reduction of community alcohol problems: computer simulation experiments in three counties.

    PubMed

    Holder, H D; Blose, J O

    1987-03-01

    A series of alcohol abuse prevention strategies was evaluated using computer simulation for three counties in the United States: Wake County, North Carolina, Washington County, Vermont and Alameda County, California. A system dynamics model composed of a network of interacting variables was developed for the pattern of alcoholic beverage consumption in a community. The relationship of community drinking patterns to various stimulus factors was specified in the model based on available empirical research. Stimulus factors included disposable income, alcoholic beverage prices, advertising exposure, minimum drinking age and changes in cultural norms. After a generic model was developed and validated on the national level, a computer-based system dynamics model was developed for each county, and a series of experiments was conducted to project the potential impact of specific prevention strategies. The project concluded that prevention efforts can both lower current levels of alcohol abuse and reduce projected increases in alcohol-related problems. Without such efforts, already high levels of alcohol-related family disruptions in the three counties could be expected to rise an additional 6% and drinking-related work problems 1-5%, over the next 10 years after controlling for population growth. Of the strategies tested, indexing the price of alcoholic beverages to the consumer price index in conjunction with the implementation of a community educational program with well-defined target audiences has the best potential for significant problem reduction in all three counties.

  13. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Summary Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
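
    A minimal sketch of the baseline correction scheme the paper improves upon: fit a decreasing parametric curve (here exponential) to the per-section mean intensities and rescale each section; the synthetic stack and decay rate are illustrative:

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(z, i0, alpha):
          return i0 * np.exp(-alpha * z)

      rng = np.random.default_rng(0)
      # Synthetic 40-section stack whose intensity fades with depth.
      stack = rng.uniform(50, 200, size=(40, 64, 64)) * np.exp(-0.04 * np.arange(40))[:, None, None]

      z = np.arange(stack.shape[0])
      section_means = stack.mean(axis=(1, 2))
      (i0, alpha), _ = curve_fit(decay, z, section_means, p0=[section_means[0], 0.01])

      correction = decay(0, i0, alpha) / decay(z, i0, alpha)      # per-section gain factors
      corrected = stack * correction[:, None, None]
      print(np.round(corrected.mean(axis=(1, 2))[:5], 1))         # roughly flat after correction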

  14. Mechanisms of Neurofeedback: A Computation-theoretic Approach.

    PubMed

    Davelaar, Eddy J

    2018-05-15

    Neurofeedback training is a form of brain training in which information about a neural measure is fed back to the trainee who is instructed to increase or decrease the value of that particular measure. This paper focuses on electroencephalography (EEG) neurofeedback in which the neural measures of interest are the brain oscillations. To date, the neural mechanisms that underlie successful neurofeedback training are still unexplained. Such an understanding would benefit researchers, funding agencies, clinicians, regulatory bodies, and insurance firms. Based on recent empirical work, an emerging theory couched firmly within computational neuroscience is proposed that advocates a critical role of the striatum in modulating EEG frequencies. The theory is implemented as a computer simulation of peak alpha upregulation, but in principle any frequency band at one or more electrode sites could be addressed. The simulation successfully learns to increase its peak alpha frequency and demonstrates the influence of threshold setting - the threshold that determines whether positive or negative feedback is provided. Analyses of the model suggest that neurofeedback can be likened to a search process that uses importance sampling to estimate the posterior probability distribution over striatal representational space, with each representation being associated with a distribution of values of the target EEG band. The model provides an important proof of concept to address pertinent methodological questions about how to understand and improve EEG neurofeedback success. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Dissemination and implementation of evidence-based practices for child and adolescent mental health: a systematic review.

    PubMed

    Novins, Douglas K; Green, Amy E; Legha, Rupinder K; Aarons, Gregory A

    2013-10-01

    Although there has been a dramatic increase in the number of evidence-based practices (EBPs) to improve child and adolescent mental health, the poor uptake of these EBPs has led to investigations of factors related to their successful dissemination and implementation. The purpose of this systematic review was to identify key findings from empirical studies examining the dissemination and implementation of EBPs for child and adolescent mental health. Of 14,247 citations initially identified, 73 articles drawn from 44 studies met inclusion criteria. The articles were classified by implementation phase (exploration, preparation, implementation, and sustainment) and specific implementation factors examined. These factors were divided into outer (i.e., system level) and inner (i.e., organizational level) contexts. Few studies used true experimental designs; most were observational. Of the many inner context factors that were examined in these studies (e.g., provider characteristics, organizational resources, leadership), fidelity monitoring and supervision had the strongest empirical evidence. Albeit the focus of fewer studies, implementation interventions focused on improving organizational climate and culture were associated with better intervention sustainment as well as child and adolescent outcomes. Outer contextual factors such as training and use of specific technologies to support intervention use were also important in facilitating the implementation process. The further development and testing of dissemination and implementation strategies is needed to more efficiently move EBPs into usual care. Copyright © 2013 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  16. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
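    For concreteness, the sketch below shows the separate-models shortcut the abstract cautions about: fit one mixed model per outcome, extract each subject's empirical Bayes (BLUP) random effects, and associate them in a second stage. It is not the joint MGLMM the authors prefer, it uses linear mixed models for simplicity, and the data frame and column names ('subject', 'time', 'y1', 'y2') are assumptions.

        # Minimal sketch, assuming long-format data with columns 'subject', 'time', 'y1', 'y2'.
        import pandas as pd
        import numpy as np
        import statsmodels.formula.api as smf

        def eb_predictors(df, outcome):
            model = smf.mixedlm(f"{outcome} ~ time", df, groups=df["subject"], re_formula="~time")
            fit = model.fit()
            # random_effects is a dict: subject -> Series of empirical Bayes predictions
            return pd.DataFrame(fit.random_effects).T.add_prefix(f"{outcome}_")

        def second_stage_correlation(df):
            eb = eb_predictors(df, "y1").join(eb_predictors(df, "y2"))
            # naive second-stage association between the two outcomes' random slopes
            return np.corrcoef(eb["y1_time"], eb["y2_time"])[0, 1]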

  17. Implementation of guidelines for management of possible multidrug-resistant pneumonia in intensive care: an observational, multicentre cohort study.

    PubMed

    Kett, Daniel H; Cano, Ennie; Quartin, Andrew A; Mangino, Julie E; Zervos, Marcus J; Peyrani, Paula; Cely, Cynthia M; Ford, Kimbal D; Scerpella, Ernesto G; Ramirez, Julio A

    2011-03-01

    The American Thoracic Society and Infectious Diseases Society of America provide guidelines for management of hospital-acquired, ventilator-associated, and health-care-associated pneumonias, consisting of empirical antibiotic regimens for patients at risk for multidrug-resistant pathogens. We aimed to improve compliance with these guidelines and assess outcomes. We implemented a performance-improvement initiative in four academic medical centres in the USA with protocol-based education and prospective observation of outcomes. Patients were assessed for severity of illness and followed up until death, hospital discharge, or day 28. We included patients in intensive-care units who were at risk for multidrug-resistant pneumonia and were treated empirically. 303 patients at risk for multidrug-resistant pneumonia were treated empirically, and prescribed treatment was guideline compliant in 129 patients and non-compliant in 174 patients. 44 (34%) patients died before 28 days in the compliance group and 35 (20%) died in the non-compliance group. Five patients in the compliance group and seven in the non-compliance group were lost to follow-up after day 14. Kaplan-Meier estimated survival to 28 days was 65% in the compliance group and 79% in the non-compliance group (p=0·0042). This difference persisted after adjustment for severity of illness. Median length of stay and duration of mechanical ventilation did not differ between groups. Compliance failures included non-use of dual treatment for Gram-negative pathogens in 154 patients and absence of meticillin-resistant Staphylococcus aureus coverage in 24 patients. For patients in whom pathogens were subsequently identified, empirical treatment was active in 79 (81%) of 97 patients receiving compliant therapy compared with 109 (85%) of 128 patients receiving non-compliant therapy. Because adherence with empirical treatment was associated with increased mortality, we recommend a randomised trial be done before further implementation of these guidelines. Pfizer, US Medical. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Computational Analysis of Stereospecificity in the Cope Rearrangement

    ERIC Educational Resources Information Center

    Glish, Laura; Hanks, Timothy W.

    2007-01-01

    The Cope rearrangement is a highly stereospecific, concerted reaction of considerable synthetic utility. Experimental product distributions from the reaction of disubstituted 1,5-hexadienes can be readily understood by computer modeling of the various possible transitions states. Semi-empirical methods give relative energies of transition states…

  19. Modeling Spanish Mood Choice in Belief Statements

    ERIC Educational Resources Information Center

    Robinson, Jason R.

    2013-01-01

    This work develops a computational methodology new to linguistics that empirically evaluates competing linguistic theories on Spanish verbal mood choice through the use of computational techniques to learn mood and other hidden linguistic features from Spanish belief statements found in corpora. The machine learned probabilistic linguistic models…

  20. Novel physical constraints on implementation of computational processes

    NASA Astrophysics Data System (ADS)

    Wolpert, David; Kolchinsky, Artemy

    Non-equilibrium statistical physics permits us to analyze computational processes, i.e., ways to drive a physical system such that its coarse-grained dynamics implements some desired map. It is now known how to implement any such desired computation without dissipating work, and what the minimal (dissipationless) work is that such a computation will require (the so-called "generalized Landauer bound"). We consider how these analyses change if we impose realistic constraints on the computational process. First, we analyze how many degrees of freedom of the system must be controlled, in addition to the ones specifying the information-bearing degrees of freedom, in order to avoid dissipating work during a given computation, when local detailed balance holds. We analyze this issue for deterministic computations, deriving a state-space vs. speed trade-off, and use our results to motivate a measure of the complexity of a computation. Second, we consider computations that are implemented with logic circuits, in which only a small number of degrees of freedom are coupled at a time. We show that the way a computation is implemented using circuits affects its minimal work requirements, and relate these minimal work requirements to information-theoretic measures of complexity.
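    For orientation, the generalized Landauer bound referred to above is commonly written as follows; the notation is generic and not taken from the paper. For a computation that maps an input distribution over the information-bearing degrees of freedom to an output distribution, the minimal expected work satisfies

        \[
          \langle W \rangle \;\ge\; k_B T \ln 2 \,\bigl[ H(X_{\mathrm{in}}) - H(X_{\mathrm{out}}) \bigr]
        \]

    where H(.) is the Shannon entropy in bits of the information-bearing degrees of freedom before and after the computation, k_B is Boltzmann's constant, and T is the temperature of the surrounding heat bath.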

  1. Empirical Evidence for Niss' "Implemented Anticipation" in Mathematising Realistic Situations

    ERIC Educational Resources Information Center

    Stillman, Gloria; Brown, Jill P.

    2012-01-01

    Mathematisation of realistic situations is an on-going focus of research. Classroom data from a Year 9 class participating in a program of structured modelling of real situations was analysed for evidence of Niss's theoretical construct, implemented anticipation, during mathematisation. Evidence was found for two of three proposed aspects. In…

  2. School Climate, Connectedness and Academic Achievement: Examining Positive Impacts from High School Mentoring Services

    ERIC Educational Resources Information Center

    Angus, Rebecca; Hughes, Thomas

    2017-01-01

    Schools regularly implement numerous programs to satisfy widespread expectations. Often, implementation is carried out with little follow-up examining data that could help refine or determine the ultimate worth of the intervention. Through utilization of both descriptive and empirical methods, this study delved into the long-term effectiveness of…

  3. Teachers' Reasons for Using Peer Assessment: Positive Experience Predicts Use

    ERIC Educational Resources Information Center

    Panadero, Ernesto; Brown, Gavin T. L.

    2017-01-01

    Peer assessment (PA) is one of the central principles of formative assessment and assessment for learning (AfL) fields. There is ample empirical evidence as to the benefits for students' learning when AfL principles are implemented. However, teachers play a critical role in mediating the implementation of intended policies. Hence, their…

  4. Models for Implementing Response to Intervention: Tools, Outcomes, and Implications

    ERIC Educational Resources Information Center

    Shapiro, Edward S., Ed.; Zigmond, Naomi, Ed.; Wallace, Teri, Ed.; Marston, Doug, Ed.

    2011-01-01

    Providing a unique "on-the-ground" perspective, this book examines the implementation of three empirically supported response-to-intervention (RTI) models in four different school districts. The book addresses the complexity of putting RTI into place in the elementary grades, showing how the process actually took place and what impact it…

  5. A Study of Transformational Change at Three Schools of Nursing Implementing Healthcare Informatics

    ERIC Educational Resources Information Center

    Cornell, Revonda Leota

    2009-01-01

    The "Health Professions Education: A Bridge to Quality" (IOM, 2003) proposed strategies for higher education leaders and faculty to transform their institutions in ways that address the healthcare problems. This study provides higher education leaders and faculty with empirical data about the processes of change involved to implement the…

  6. Tales of Refusal, Adoption, and Maintenance: Evidence-Based Substance Abuse Prevention via School-Extension Collaborations

    ERIC Educational Resources Information Center

    St. Pierre, Tena L.; Kaltreider, D. Lynne

    2004-01-01

    Despite availability of empirically supported school-based substance abuse prevention programs, adoption and implementation fidelity of such programs appear to be low. A replicated case study was conducted to investigate school adoption and implementation processes of the EXSELS model (Project ALERT delivered by program leaders through Cooperative…

  7. Electronic Portfolios in Grades One, Two and Three: A Cautionary Tale

    ERIC Educational Resources Information Center

    Kotsopoulos, Donna; Lee, Joanne; Cordy, Michelle; Bruyns, Susan

    2015-01-01

    Some electronic portfolios (EPs) developers are proposing that EPs are suitable for implementation in primary education (i.e. kindergarten to grade three). Yet, empirical research evaluating the implementation and efficacy of EPs used in primary school settings at both the teacher and the student level is scarce. In this research, the authors…

  8. Essential Characteristics for a Professional Development Program for Promoting the Implementation of a Multidisciplinary Science Module

    ERIC Educational Resources Information Center

    Visser, Talitha C.; Coenders, Fer G. M.; Terlouw, Cees; Pieters, Jules M.

    2010-01-01

    Teachers involved in the implementation of a curriculum innovation can be prepared for this task through a professional development program. In this paper, we describe essential characteristics (identified empirically and theoretically) for such a professional development program that promotes the acquisition of competences by these teachers. The…

  9. Fidelity of Implementation and Instructional Alignment in Response to Intervention Research

    ERIC Educational Resources Information Center

    Hill, David R.; King, Seth A.; Lemons, Christopher J.; Partanen, Jane N.

    2012-01-01

    In this review, we explore the extent to which researchers evaluating the efficacy of Tier 2 elementary reading interventions within the framework of Response to Intervention reported on fidelity of implementation and alignment of instruction between tiers. A literature search identified 22 empirical studies from which conclusions were drawn.…

  10. Real-time LMR control parameter generation using advanced adaptive synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, R.W.; Mott, J.E.

    1990-01-01

    The "reactor delta T", the difference between the average core inlet and outlet temperatures for the liquid-sodium-cooled Experimental Breeder Reactor 2, is empirically synthesized in real time from a multitude of examples of past reactor operation. The real-time empirical synthesis is based on system state analysis (SSA) technology embodied in software on the EBR 2 data acquisition computer. Before the real-time system is put into operation, a selection of reactor plant measurements is made which is predictable over long periods encompassing plant shutdowns, core reconfigurations, core load changes, and plant startups. A serial data link to a personal computer containing SSA software allows rapid verification of the predictability of these plant measurements via graphical means. After the selection is made, the real-time synthesis provides a fault-tolerant estimate of the reactor delta T accurate to ±1%. 5 refs., 7 figs.

  11. Rollover risk prediction of heavy vehicles by reliability index and empirical modelling

    NASA Astrophysics Data System (ADS)

    Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles

    2018-03-01

    This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure to assess the vehicle's safe functioning. In the reliability method, computing the maximum of the LTR requires predicting the vehicle dynamics over the bend, which can in some cases be intractable or time-consuming. With the aim of improving the reliability computation time, an empirical model is developed to substitute the vehicle dynamics and rollover models. This is done by using the SVM (Support Vector Machines) algorithm. The preliminary results demonstrate the effectiveness of the proposed approach.
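    To illustrate the surrogate idea, the sketch below trains an SVM regressor to predict the maximum LTR for an upcoming bend from precomputed simulation data, then compares the prediction against a critical threshold. The features, kernel settings, and threshold value are assumptions, not the paper's configuration.

        # Minimal sketch of an SVM surrogate for the maximum load transfer ratio.
        # Assumes training data: X (bend/vehicle features) and y_max_ltr (simulated max |LTR|).
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        LTR_CRITICAL = 0.9   # assumed rollover threshold on |LTR|

        def train_surrogate(X, y_max_ltr):
            model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
            model.fit(X, y_max_ltr)
            return model

        def rollover_warning(model, bend_features):
            predicted_max_ltr = model.predict(np.atleast_2d(bend_features))[0]
            return predicted_max_ltr > LTR_CRITICAL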

  12. Monte Carlo method for calculating the radiation skyshine produced by electron accelerators

    NASA Astrophysics Data System (ADS)

    Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin

    2005-06-01

    Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from computational results given by empirical formulas. The effect on skyshine dose caused by different accelerator head structures is also discussed in this paper.

  13. Semi-empirical quantum evaluation of peptide - MHC class II binding

    NASA Astrophysics Data System (ADS)

    González, Ronald; Suárez, Carlos F.; Bohórquez, Hugo J.; Patarroyo, Manuel A.; Patarroyo, Manuel E.

    2017-01-01

    Peptide presentation by the major histocompatibility complex (MHC) is a key process for triggering a specific immune response. Studying peptide-MHC (pMHC) binding from a structural-based approach has potential for reducing the costs of investigation into vaccine development. This study involved using two semi-empirical quantum chemistry methods (PM7 and FMO-DFTB) for computing the binding energies of peptides bonded to HLA-DR1 and HLA-DR2. We found that key stabilising water molecules involved in the peptide binding mechanism were required for finding high correlation with IC50 experimental values. Our proposal is computationally non-intensive, and is a reliable alternative for studying pMHC binding interactions.

  14. Home Crafts Days at Mountain Empire Community College Bridge Generation Gap in Mountain Youth's Search for Identity.

    ERIC Educational Resources Information Center

    Turnage, Martha; Moore, Roderick

    Mountain Empire Community College has a commitment to preserve, learn, and teach the heritage of mountain folk. Community participation by those who can teach the heritage of the area is a part of the implementation of this commitment. Some of the older people in the MECC service area either take the course work in folklife or come to the classes…

  15. Do We Need to Understand the Technology to Get to the Science? A Systematic Review of the Concept of Computer Literacy in Preventive Health Programs

    ERIC Educational Resources Information Center

    Dominick, Gregory M.; Friedman, Daniela B.; Hoffman-Goetz, Laurie

    2009-01-01

    Objective: To systematically review definitions and descriptions of computer literacy as related to preventive health education programs. Method: A systematic review of the concept of computer literacy as related to preventive health education was conducted. Empirical studies published between 1994 and 2007 on prevention education programs with a…

  16. Exploratory Mixed-Method Study of End-User Computing within an Information Technology Infrastructure Library U.S. Army Service Delivery Environment

    ERIC Educational Resources Information Center

    Manzano, Sancho J., Jr.

    2012-01-01

    Empirical studies have been conducted on what is known as end-user computing from as early as the 1980s to present-day IT employees. There have been many studies on using quantitative instruments by Cotterman and Kumar (1989) and Rockart and Flannery (1983). Qualitative studies on end-user computing classifications have been conducted by…

  17. Secure Cloud Computing Implementation Study For Singapore Military Operations

    DTIC Science & Technology

    2016-09-01

    Secure Cloud Computing Implementation Study for Singapore Military Operations. Master's thesis by Lai Guoquan, September 2016; thesis advisor: John D. Fulp; co-advisor ... In addition, from the military perspective, the benefits of cloud computing were analyzed from a study of the U.S. Department of Defense. Then, using

  18. Computational Investigation of Graphene-Carbon Nanotube-Polymer Composite

    NASA Astrophysics Data System (ADS)

    Jha, Sanjiv; Roth, Michael; Todde, Guido; Subramanian, Gopinath; Shukla, Manoj; Univ of Southern Mississippi Collaboration; US Army Engineer Research and Development Center, 3909 Halls Ferry Road, Vicksburg, MS 39180, USA Collaboration

    Graphene is a single-atom-thick, two-dimensional carbon sheet in which sp2-hybridized carbon atoms are arranged in a honeycomb structure. The functionalization of graphene and carbon nanotubes (CNTs) with polymer is a route for developing high performance nanocomposite materials. We study the interfacial interactions among graphene, CNT, and Nylon 6 polymer using computational methods based on density functional theory (DFT) and an empirical force field. Our DFT calculations are carried out using the Quantum-ESPRESSO electronic structure code with a van der Waals functional (vdW-DF2), whereas the empirical calculations are performed using LAMMPS with the COMPASS force field. Our results demonstrated that the interactions between (8,8) CNT and graphene, and between CNT/graphene and Nylon 6, are mostly of the van der Waals type. The computed Young's moduli indicated that the mechanical properties of carbon nanostructures are enhanced by their interactions with polymer. The presence of Stone-Wales (SW) defects lowered the Young's moduli of carbon nanostructures.

  19. A Bayesian Analysis of Scale-Invariant Processes

    DTIC Science & Technology

    2012-01-01

    Earth Grid (EASE-Grid). The NED raster elevation data of one arc-second resolution (30 m) over the continental US are derived from multiple satellites ... empirical and ME distributions, yet ensuring computational efficiency. Instead of computing empirical histograms from large amounts of data, only some

  20. Validating Computational Human Behavior Models: Consistency and Accuracy Issues

    DTIC Science & Technology

    2004-06-01

    includes a discussion of SME demographics, content, and organization of the datasets. This research generalizes data from two pilot studies and two base ... meet requirements for validating the varied and complex behavioral models. Through a series of empirical studies, this research identifies subject

  1. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
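    The sketch below illustrates the general idea, not necessarily the paper's exact formulation: solve a weighted batch least-squares problem, then map the post-fit residuals back through the estimator gain to form an empirical covariance alongside the usual formal one. H, W, and y are assumed inputs (measurement matrix, weight matrix, observation vector).

        # Minimal sketch of an empirical state error covariance as a batch side computation.
        import numpy as np

        def batch_estimate_with_empirical_covariance(H, W, y):
            normal = H.T @ W @ H
            gain = np.linalg.solve(normal, H.T @ W)   # K = (H^T W H)^{-1} H^T W
            x_hat = gain @ y                          # weighted least-squares state estimate
            residuals = y - H @ x_hat                 # post-fit measurement residuals
            P_formal = np.linalg.inv(normal)          # formal covariance (W taken as inverse noise covariance)
            P_empirical = gain @ np.outer(residuals, residuals) @ gain.T
            return x_hat, P_formal, P_empirical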

  2. Empirically Assessing the Importance of Computer Skills

    ERIC Educational Resources Information Center

    Baker, William M.

    2013-01-01

    This research determines which computer skills are important for entry-level accountants, and whether some skills are more important than others. Students participated before and after internships in public accounting. Longitudinal analysis is also provided; responses from 2001 are compared to those from 2008-2009. Responses are also compared to…

  3. Artificial Intelligence Methods in Computer-Based Instructional Design. The Minnesota Adaptive Instructional System.

    ERIC Educational Resources Information Center

    Tennyson, Robert

    1984-01-01

    Reviews educational applications of artificial intelligence and presents empirically-based design variables for developing a computer-based instruction management system. Taken from a programmatic research effort based on the Minnesota Adaptive Instructional System, variables include amount and sequence of instruction, display time, advisement,…

  4. Computer modelling of solid alkali metal carboxylates

    NASA Astrophysics Data System (ADS)

    Barreto, L. S.; Mort, K. A.; Jackson, R. A.; Alves, O. L.

    2000-11-01

    A computational study of solid lithium acetate dihydrate and anhydrous sodium acetate is presented. Interatomic potentials are obtained by empirical fitting to experimental structural data for both materials and the resulting potentials were found to be transferable to different phases of the same materials, giving good agreement with the experimental structure.

  5. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
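    As a reminder of the quantity being automated (this is the textbook definition, not the referenced algorithm), coding gain is the reduction in required Eb/N0 at a target bit-error rate when coding is used. The sketch below compares uncoded BPSK against a coded link whose required Eb/N0 is assumed known, e.g. from soft-decision Viterbi performance curves; the 4.4 dB figure is an illustrative assumption.

        # Minimal sketch: coding gain as the Eb/N0 reduction (dB) at a target BER.
        import numpy as np
        from scipy.special import erfc

        def uncoded_bpsk_required_ebn0_db(target_ber):
            ebn0_db = np.arange(0.0, 15.0, 0.01)
            ber = 0.5 * erfc(np.sqrt(10 ** (ebn0_db / 10.0)))   # uncoded coherent BPSK
            return ebn0_db[np.argmax(ber <= target_ber)]        # first Eb/N0 meeting the target

        def coding_gain_db(target_ber, coded_required_ebn0_db):
            return uncoded_bpsk_required_ebn0_db(target_ber) - coded_required_ebn0_db

        print(f"{coding_gain_db(1e-5, 4.4):.1f} dB")   # 4.4 dB is an assumed coded requirement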

  6. The Role of Computer Networks in Aerospace Engineering.

    ERIC Educational Resources Information Center

    Bishop, Ann Peterson

    1994-01-01

    Presents selected results from an empirical investigation into the use of computer networks in aerospace engineering based on data from a national mail survey. The need for user-based studies of electronic networking is discussed, and a copy of the questionnaire used in the survey is appended. (Contains 46 references.) (LRW)

  7. Empirical Validation and Application of the Computing Attitudes Survey

    ERIC Educational Resources Information Center

    Dorn, Brian; Elliott Tew, Allison

    2015-01-01

    Student attitudes play an important role in shaping learning experiences. However, few validated instruments exist for measuring student attitude development in a discipline-specific way. In this paper, we present the design, development, and validation of the computing attitudes survey (CAS). The CAS is an extension of the Colorado Learning…

  8. Validation of the National Solar Radiation Database (NSRDB) (2005-2012): Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Weekley, Andrew; Habte, Aron

    Publicly accessible, high-quality, long-term, satellite-based solar resource data is foundational and critical to solar technologies to quantify system output predictions and deploy solar energy technologies in grid-tied systems. Solar radiation models have been in development for more than three decades. For many years, the National Renewable Energy Laboratory (NREL) developed and/or updated such models through the National Solar Radiation Data Base (NSRDB). There are two widely used approaches to derive solar resource data from models: (a) an empirical approach that relates ground-based observations to satellite measurements and (b) a physics-based approach that considers the radiation received at the satellite and creates retrievals to estimate clouds and surface radiation. Although empirical methods have been traditionally used for computing surface radiation, the advent of faster computing has made operational physical models viable. The Global Solar Insolation Project (GSIP) is an operational physical model from the National Oceanic and Atmospheric Administration (NOAA) that computes global horizontal irradiance (GHI) using the visible and infrared channel measurements from the Geostationary Operational Environmental Satellites (GOES) system. GSIP uses a two-stage scheme that first retrieves cloud properties and then uses those properties in the Satellite Algorithm for Surface Radiation Budget (SASRAB) model to calculate surface radiation. NREL, the University of Wisconsin, and NOAA have recently collaborated to adapt GSIP to create a high temporal and spatial resolution data set. The product initially generates the cloud properties using the AVHRR Pathfinder Atmospheres-Extended (PATMOS-x) algorithms [3], whereas the GHI is calculated using SASRAB. Then NREL implements accurate and high-resolution input parameters such as aerosol optical depth (AOD) and precipitable water vapor (PWV) to compute direct normal irradiance (DNI) using the DISC model. The AOD and PWV, temperature, and pressure data are also combined with the MMAC model to simulate solar radiation under clear-sky conditions. The current NSRDB update is based on a 4-km x 4-km resolution at a 30-minute time interval, which has a higher temporal and spatial resolution. This paper demonstrates the evaluation of the data set using ground-measured data and detailed evaluation statistics. The result of the comparison shows a good correlation to the NSRDB data set. Further, an outline of the new version of the NSRDB and future plans for enhancement and improvement are provided.

  9. 78 FR 79564 - Discontinuance of Annual Financial Assessments-Delay in Implementation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-30

    ... that due to delays in modifying computer software, VA is postponing implementation of this change. FOR... computer matching of income reported to the Internal Revenue Service (IRS) and Social Security... implemented by December 31, 2013. Due to delays in revising and updating supporting computer software, VA is...

  10. Is Openness to Using Empirically Supported Treatments Related to Organizational Culture and Climate?

    PubMed

    Patterson Silver Wolf Adelv Unegv Waya, David A; Dulmus, Catherine N; Maguin, Eugene

    2013-01-01

    The overall purpose of this study is to investigate workers' openness towards implementing a new empirically supported treatment (EST) and whether the workers' openness scores relate to their workplace culture and climate scores. Participants in this study (N=1273) worked in a total of 55 different programs in a large child and family services organization and completed a survey measuring their attitudes toward ESTs. Results indicate that work groups that measure themselves as being more open to using ESTs rated their organizational cultures as being significantly more proficient and significantly less resistant to change. With ESTs becoming the gold standard for professional social work practices, it is important to have accessible pathways to EST implementation.

  11. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  12. Health in All Policies in South Australia: what has supported early implementation?

    PubMed

    Delany, Toni; Lawless, Angela; Baum, Frances; Popay, Jennie; Jones, Laura; McDermott, Dennis; Harris, Elizabeth; Broderick, Danny; Marmot, Michael

    2016-12-01

    Health in All Policies (HiAP) is a policy development approach that facilitates intersectoral responses to addressing the social determinants of health and health equity whilst, at the same time, contributing to policy priorities across the various sectors of government. Given that different models of HiAP have been implemented in at least 16 countries, there is increasing interest in how its effectiveness can be optimized. Much of the existing literature on HiAP remains descriptive, however, and lacks critical, empirically informed analyses of the elements that support implementation. Furthermore, literature on HiAP, and intersectoral action more generally, provides little detail on the practical workings of policy collaborations. This paper contributes empirical findings from a multi-method study of HiAP implementation in South Australia (SA) between 2007 and 2013. It considers the views of public servants and presents analysis of elements that have supported, and impeded, implementation of HiAP in SA. We found that HiAP has been implemented in SA using a combination of interrelated elements. The operation of these elements has provided a strong foundation, which suggests the potential for HiAP to extend beyond being an isolated strategy, to form a more integrated and systemic mechanism of policy-making. We conclude with learnings from the SA experience of HiAP implementation to inform the ongoing development and implementation of HiAP in SA and internationally. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Closed Field Coronal Heating Models Inspired by Wave Turbulence

    NASA Astrophysics Data System (ADS)

    Downs, C.; Lionello, R.; Mikic, Z.; Linker, J.; Velli, M. M.

    2013-12-01

    To simulate the energy balance of coronal plasmas on macroscopic scales, we often require the specification of the coronal heating mechanism in some functional form. To go beyond empirical formulations and to build a more physically motivated heating function, we investigate the wave-turbulence dissipation (WTD) phenomenology for the heating of closed coronal loops. To do so, we employ an implementation of non-WKB equations designed to capture the large-scale propagation, reflection, and dissipation of wave turbulence along a loop. The parameter space of this model is explored by solving the coupled WTD and hydrodynamic equations in 1D for an idealized loop, and the relevance to a range of solar conditions is established by computing solutions for several hundred loops extracted from a realistic 3D coronal field. Due to the implicit dependence of the WTD heating model on loop geometry and plasma properties along the loop and at the footpoints, we find that this model can significantly reduce the number of free parameters when compared to traditional empirical heating models, and still robustly describe a broad range of quiet-sun and active region conditions. The importance of the self-reflection term in producing realistic heating scale heights and thermal non-equilibrium cycles is discussed, and preliminary 3D thermodynamic MHD simulations using this formulation are presented. Research supported by NASA and NSF.

  14. The use of self checks and voting in software error detection - An empirical study

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.

    1990-01-01

    The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not been detected previously by voting 28 versions of the program over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions.
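    To make the contrast concrete, the toy sketch below shows the two error-detection styles compared above: a self check that inspects internal or derived state of a single implementation, and N-version voting that compares only the final results of independent implementations. The square-root routine and its checks are invented examples, not the aerospace application studied.

        # Illustrative sketch of self checks versus N-version voting.
        import math

        def sqrt_with_self_check(x):
            result = math.sqrt(x)
            # self check: examines internal/derived state, not just a vote on outputs
            assert x >= 0 and abs(result * result - x) <= 1e-9 * max(1.0, x), "self check failed"
            return result

        def vote(versions, x, tol=1e-9):
            # N-version voting: compare final results of independent implementations
            results = [version(x) for version in versions]
            for candidate in results:
                if sum(abs(candidate - r) <= tol for r in results) > len(results) // 2:
                    return candidate
            raise RuntimeError("no majority among versions")

        print(sqrt_with_self_check(2.0))
        print(vote([math.sqrt, lambda v: v ** 0.5, lambda v: math.exp(0.5 * math.log(v))], 2.0))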

  15. Views of Ethical Best Practices in Sharing Individual-Level Data From Medical and Public Health Research

    PubMed Central

    Roberts, Nia; Parker, Michael

    2015-01-01

    There is increasing support for sharing individual-level data generated by medical and public health research. This scoping review of empirical research and conceptual literature examined stakeholders’ perspectives of ethical best practices in data sharing, particularly in low- and middle-income settings. Sixty-nine empirical and conceptual articles were reviewed, of which, only five were empirical studies and eight were conceptual articles focusing on low- and middle-income settings. We conclude that support for sharing individual-level data is contingent on the development and implementation of international and local policies and processes to support ethical best practices. Further conceptual and empirical research is needed to ensure data sharing policies and processes in low- and middle-income settings are appropriately informed by stakeholders’ perspectives. PMID:26297745

  16. Contract Learning.

    ERIC Educational Resources Information Center

    Gilbert, Jay

    Academic work carried out through learning contracts at Empire State College is described. Learning contracts are defined and examples are given. Faculty roles, educational advantages, and implementation methods are discussed. (MLH)

  17. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Current research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in the grid environment, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper systematically studies the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification by the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and the relevant verification work in this environment is completed.
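    For readers unfamiliar with WPS, the sketch below shows how a client would probe such a node using the standard OGC WPS 1.0.0 key-value-pair requests (GetCapabilities and DescribeProcess). The endpoint URL and process identifier are placeholders, not services described in the paper.

        # Minimal sketch of standard WPS 1.0.0 client requests against a placeholder endpoint.
        import requests
        import xml.etree.ElementTree as ET

        WPS_ENDPOINT = "http://example.org/wps"          # placeholder endpoint

        def get_capabilities(endpoint):
            params = {"service": "WPS", "version": "1.0.0", "request": "GetCapabilities"}
            response = requests.get(endpoint, params=params, timeout=30)
            response.raise_for_status()
            return ET.fromstring(response.content)       # parsed capabilities document

        def describe_process(endpoint, identifier):
            params = {"service": "WPS", "version": "1.0.0",
                      "request": "DescribeProcess", "identifier": identifier}
            return requests.get(endpoint, params=params, timeout=30).text

        capabilities = get_capabilities(WPS_ENDPOINT)
        print(describe_process(WPS_ENDPOINT, "example:buffer"))   # placeholder process id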

  18. Computer-aided diagnosis of prostate cancer using multi-parametric MRI: comparison between PUN and Tofts models

    NASA Astrophysics Data System (ADS)

    Mazzetti, S.; Giannini, V.; Russo, F.; Regge, D.

    2018-05-01

    Computer-aided diagnosis (CAD) systems are increasingly being used in clinical settings to report multi-parametric magnetic resonance imaging (mp-MRI) of the prostate. Usually, CAD systems automatically highlight cancer-suspicious regions to the radiologist, reducing reader variability and interpretation errors. Nevertheless, implementing this software requires the selection of which mp-MRI parameters can best discriminate between malignant and non-malignant regions. To exploit functional information, some parameters are derived from dynamic contrast-enhanced (DCE) acquisitions. In particular, much CAD software employs pharmacokinetic features, such as K trans and k ep, derived from the Tofts model, to estimate a likelihood map of malignancy. However, non-pharmacokinetic models can be also used to describe DCE-MRI curves, without any requirement for prior knowledge or measurement of the arterial input function, which could potentially lead to large errors in parameter estimation. In this work, we implemented an empirical function derived from the phenomenological universalities (PUN) class to fit DCE-MRI. The parameters of the PUN model are used in combination with T2-weighted and diffusion-weighted acquisitions to feed a support vector machine classifier to produce a voxel-wise malignancy likelihood map of the prostate. The results were all compared to those for a CAD system based on Tofts pharmacokinetic features to describe DCE-MRI curves, using different quality aspects of image segmentation, while also evaluating the number and size of false positive (FP) candidate regions. This study included 61 patients with 70 biopsy-proven prostate cancers (PCa). The metrics used to evaluate segmentation quality between the two CAD systems were not statistically different, although the PUN-based CAD reported a lower number of FP, with reduced size compared to the Tofts-based CAD. In conclusion, the CAD software based on PUN parameters is a feasible means with which to detect PCa, without affecting segmentation quality, and hence it could be successfully applied in clinical settings, improving the automated diagnosis process and reducing computational complexity.
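    For reference, the K trans and k ep features mentioned above come from the standard Tofts pharmacokinetic model of DCE-MRI; the general literature form is reproduced below (the record does not give the CAD system's exact implementation). It relates the tissue contrast-agent concentration C_t(t) to the arterial input function C_p(t):

        \[
          C_t(t) = K^{trans} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau,
          \qquad k_{ep} = \frac{K^{trans}}{v_e}
        \]

    where v_e is the extravascular extracellular volume fraction. The PUN approach avoids this form entirely, which is why it needs no measured arterial input function.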

  19. Content-Based VLE Designs Improve Learning Efficiency in Constructivist Statistics Education

    PubMed Central

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    Background We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific–purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. Objectives The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Methods Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. Results The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a content–based design outperforms the traditional VLE–based design. PMID:21998652

  20. The impact of SOA for achieving healthcare interoperability. An empirical investigation based on a hypothetical adoption.

    PubMed

    Daskalakis, S; Mantas, J

    2009-01-01

    The evaluation of a service-oriented prototype implementation for healthcare interoperability. A prototype framework was developed, aiming to exploit the use of service-oriented architecture (SOA) concepts for achieving healthcare interoperability and to move towards a virtual patient record (VPR) paradigm. The prototype implementation was evaluated for its hypothetical adoption. The evaluation strategy was based on the initial proposition of the DeLone and McLean model of information systems (IS) success [1], as modeled by Iivari [2]. A set of SOA and VPR characteristics were empirically encapsulated within the dimensions of the IS success model, combined with measures from previous research works. The data gathered was analyzed using partial least squares (PLS). The results highlighted that system quality is a partial predictor of system use but not of user satisfaction. On the contrary, information quality proved to be a significant predictor of user satisfaction and partially a strong significant predictor of system use. Moreover, system use did not prove to be a significant predictor of individual impact, whereas the bi-directional relation between use and user satisfaction was not confirmed. Additionally, user satisfaction was found to be a strong significant predictor of individual impact. Finally, individual impact proved to be a strong significant predictor of organizational impact. The empirical study attempted to obtain hypothetical, but still useful, beliefs and perceptions regarding the SOA prototype implementation. The deduced observations can form the basis for further investigation regarding the adaptability of SOA implementations with VPR characteristics in the healthcare domain.

  1. Modified free volume theory of self-diffusion and molecular theory of shear viscosity of liquid carbon dioxide.

    PubMed

    Nasrabad, Afshin Eskandari; Laghaei, Rozita; Eu, Byung Chan

    2005-04-28

    In previous work on the density fluctuation theory of transport coefficients of liquids, it was necessary to use empirical self-diffusion coefficients to calculate the transport coefficients (e.g., shear viscosity of carbon dioxide). In this work, the necessity of empirical input of the self-diffusion coefficients in the calculation of shear viscosity is removed, and the theory is thus made a self-contained molecular theory of transport coefficients of liquids, albeit it contains an empirical parameter in the subcritical regime. The required self-diffusion coefficients of liquid carbon dioxide are calculated by using the modified free volume theory for which the generic van der Waals equation of state and Monte Carlo simulations are combined to accurately compute the mean free volume by means of statistical mechanics. They have been computed as a function of density along four different isotherms and isobars. A Lennard-Jones site-site interaction potential was used to model the molecular carbon dioxide interaction. The density and temperature dependence of the theoretical self-diffusion coefficients are shown to be in excellent agreement with experimental data when the minimum critical free volume is identified with the molecular volume. The self-diffusion coefficients thus computed are then used to compute the density and temperature dependence of the shear viscosity of liquid carbon dioxide by employing the density fluctuation theory formula for shear viscosity as reported in an earlier paper (J. Chem. Phys. 2000, 112, 7118). The theoretical shear viscosity is shown to be robust and yields excellent density and temperature dependence for carbon dioxide. The pair correlation function appearing in the theory has been computed by Monte Carlo simulations.
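    As background for the free volume step described above, a commonly quoted Cohen-Turnbull-type expression for the self-diffusion coefficient is sketched below. This is the generic literature form with an unspecified prefactor, not necessarily the exact modified free volume expression used in the paper, in which the mean free volume is computed from the generic van der Waals equation of state and Monte Carlo simulation.

        \[
          D = A(T)\, \exp\!\left(-\frac{\alpha\, v^{*}}{v_f}\right)
        \]

    where v* is a critical (minimum) free volume required for a diffusive jump, v_f is the mean free volume per molecule, alpha is an overlap parameter of order unity, and A(T) is a temperature-dependent prefactor.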

  2. Photochromic molecular implementations of universal computation.

    PubMed

    Chaplin, Jack C; Krasnogor, Natalio; Russell, Noah A

    2014-12-01

    Unconventional computing is an area of research in which novel materials and paradigms are utilised to implement computation. Previously we have demonstrated how registers, logic gates and logic circuits can be implemented, unconventionally, with a biocompatible molecular switch, NitroBIPS, embedded in a polymer matrix. NitroBIPS and related molecules have been shown elsewhere to be capable of modifying many biological processes in a manner that is dependent on its molecular form. Thus, one possible application of this type of unconventional computing is to embed computational processes into biological systems. Here we expand on our earlier proof-of-principle work and demonstrate that universal computation can be implemented using NitroBIPS. We have previously shown that spatially localised computational elements, including registers and logic gates, can be produced. We explain how parallel registers can be implemented, then demonstrate an application of parallel registers in the form of Turing machine tapes, and demonstrate both parallel registers and logic circuits in the form of elementary cellular automata. The Turing machines and elementary cellular automata utilise the same samples and same hardware to implement their registers, logic gates and logic circuits; and both represent examples of universal computing paradigms. This shows that homogenous photochromic computational devices can be dynamically repurposed without invasive reconfiguration. The result represents an important, necessary step towards demonstrating the general feasibility of interfacial computation embedded in biological systems or other unconventional materials and environments. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  3. An Exploratory Study of the Implementation of Computer Technology in an American Islamic Private School

    ERIC Educational Resources Information Center

    Saleem, Mohammed M.

    2009-01-01

    This exploratory study of the implementation of computer technology in an American Islamic private school leveraged the case study methodology and ethnographic methods informed by symbolic interactionism and the framework of the Muslim Diaspora. The study focused on describing the implementation of computer technology and identifying the…

  4. The Computer as a Research and Teaching Instrument for Students in the Behavioral Sciences.

    ERIC Educational Resources Information Center

    Rowland, David L.; Crisler, Larry J.

    A program designed to provide students a background in computers and computing that was implemented by the Department of Behavioral Sciences at Millikin University, Illinois, is described. The program was implemented in three overlapping stages: faculty preparation; course preparation; and course implementation. The development of faculty…

  5. Models of optical quantum computing

    NASA Astrophysics Data System (ADS)

    Krovi, Hari

    2017-03-01

    I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  6. Systems Biology for Organotypic Cell Cultures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grego, Sonia; Dougherty, Edward R.; Alexander, Francis J.

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data. This consensus report summarizes the discussions held.

  7. Workshop Report: Systems Biology for Organotypic Cell Cultures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grego, Sonia; Dougherty, Edward R.; Alexander, Francis Joseph

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.

  8. Workshop Report: Systems Biology for Organotypic Cell Cultures

    DOE PAGES

    Grego, Sonia; Dougherty, Edward R.; Alexander, Francis Joseph; ...

    2016-11-14

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.

  9. Systems biology for organotypic cell cultures.

    PubMed

    Grego, Sonia; Dougherty, Edward R; Alexander, Francis J; Auerbach, Scott S; Berridge, Brian R; Bittner, Michael L; Casey, Warren; Cooley, Philip C; Dash, Ajit; Ferguson, Stephen S; Fennell, Timothy R; Hawkins, Brian T; Hickey, Anthony J; Kleensang, Andre; Liebman, Michael N J; Martin, Florian; Maull, Elizabeth A; Paragas, Jason; Qiao, Guilin Gary; Ramaiahgari, Sreenivasa; Sumner, Susan J; Yoon, Miyoung

    2017-01-01

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, "organotypic" cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.

  10. Analytical determination of propeller performance degradation due to ice accretion

    NASA Technical Reports Server (NTRS)

    Miller, T. L.

    1986-01-01

    A computer code has been developed which is capable of computing propeller performance for clean, glaze, or rime iced propeller configurations, thereby providing a mechanism for determining the degree of performance degradation which results from a given icing encounter. The inviscid, incompressible flow field at each specified propeller radial location is first computed using the Theodorsen transformation method of conformal mapping. A droplet trajectory computation then calculates droplet impingement points and airfoil collection efficiency for each radial location, at which point several user-selectable empirical correlations are available for determining the aerodynamic penalties which arise due to the ice accretion. Propeller performance is finally computed using strip analysis for either the clean or iced propeller. In the iced mode, the differential thrust and torque coefficient equations are modified by the drag and lift coefficient increments due to ice to obtain the appropriate iced values. Comparison with available experimental propeller icing data shows good agreement in several cases. The code's capability to properly predict iced thrust coefficient, power coefficient, and propeller efficiency is shown to be dependent on the choice of empirical correlation employed as well as proper specification of radial icing extent.
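
    The strip-analysis correction described above can be illustrated with a minimal sketch, not the NASA code itself: standard blade-element relations integrated over radial stations, with optional lift and drag increments (assumed to come from some empirical icing correlation) added before integration. All array values and the two-bladed default are assumptions for illustration.

```python
import numpy as np

def strip_analysis(r, chord, W, phi, cl, cd, rho=1.225, n_blades=2,
                   d_cl=None, d_cd=None):
    """Blade-element ("strip") integration of propeller thrust and torque.

    r, chord, W, phi, cl, cd : arrays over radial stations
        radius [m], chord [m], local resultant velocity [m/s],
        inflow angle [rad], clean lift and drag coefficients.
    d_cl, d_cd : optional arrays of lift/drag increments due to ice
        (hypothetical output of an empirical icing correlation).
    """
    if d_cl is not None:
        cl = cl + d_cl          # iced lift coefficient
    if d_cd is not None:
        cd = cd + d_cd          # iced drag coefficient

    q = 0.5 * rho * W**2 * chord * n_blades                 # loading per unit span
    dT_dr = q * (cl * np.cos(phi) - cd * np.sin(phi))       # thrust per unit radius
    dQ_dr = q * (cl * np.sin(phi) + cd * np.cos(phi)) * r   # torque per unit radius

    thrust = np.trapz(dT_dr, r)
    torque = np.trapz(dQ_dr, r)
    return thrust, torque
```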

  11. Implementing a Contributory Scoring Approach for the "GRE"® Analytical Writing Section: A Comprehensive Empirical Investigation. Research Report. ETS RR-17-14

    ERIC Educational Resources Information Center

    Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent

    2017-01-01

    In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…

  12. Modularisation in the German VET System: A Study of Policy Implementation

    ERIC Educational Resources Information Center

    Li, Junmin; Pilz, Matthias

    2017-01-01

    Modularisation of vocational training courses is a major issue across many European countries. Germany has been slow to implement modularisation in its VET system: the prevailing view of modular concepts in the country is one of great scepticism, but there is very little empirical data to inform the debate. This exploratory study focuses on the…

  13. The Ebb and Flow of Educational Change: Change Agents as Negotiators of Change

    ERIC Educational Resources Information Center

    McGrath, Cormac; Barman, Linda; Stenfors-Hayes, Terese; Roxå, Torgny; Silén, Charlotte; Laksov, Klara Bolander

    2016-01-01

    In this paper, we are concerned with how change agents go about and experience change implementation in higher education. We identified change agents and interviewed them about how they implement change. Empirical data was analysed using a theoretical framework of change. The findings suggest that change in the university is enacted through a…

  14. Course-Level Implementation of First Principles, Goal Orientations, and Cognitive Engagement: A Multilevel Mediation Model

    ERIC Educational Resources Information Center

    Lee, Sunghye; Koszalka, Tiffany A.

    2016-01-01

    The First Principles of Instruction (FPI) represent ideologies found in most instructional design theories and models. Few attempts, however, have been made to empirically test the relationship of these FPI to instructional outcomes. This study addresses whether the degree to which FPI are implemented in courses makes a difference to student…

  15. The Implementation of Modified Parent-Child Interaction Therapy for Youth with Separation Anxiety Disorder

    ERIC Educational Resources Information Center

    Pincus, Donna B.; Santucci, Lauren C.; Ehrenreich, Jill T.; Eyberg, Sheila M.

    2008-01-01

    Separation Anxiety Disorder (SAD) is the most prevalent anxiety disorder experienced by children, and yet empirical treatment studies of SAD in young children are virtually nonexistent. This paper will describe the development and implementation of an innovative treatment for SAD in young children. First, we will highlight the rationale for…

  16. Electronic Medical Records (EMR): An Empirical Testing of Factors Contributing to Healthcare Professionals' Resistance to Use EMR Systems

    ERIC Educational Resources Information Center

    Bazile, Emmanuel Patrick

    2016-01-01

    The benefits of using electronic medical records (EMRs) have been well documented; however, despite numerous financial benefits and cost reductions being offered by the federal government, some healthcare professionals have been reluctant to implement EMR systems. In fact, prior research provides evidence of failed EMR implementations due to…

  17. Theoretical and Empirical Base for Implementation Components of Health-Promoting Schools

    ERIC Educational Resources Information Center

    Samdal, Oddrun; Rowling, Louise

    2011-01-01

    Purpose: Efforts to create a scientific base for the health-promoting school approach have so far not articulated a clear "Science of Delivery". There is thus a need for systematic identification of clearly operationalised implementation components. To address a next step in the refinement of the health-promoting schools' work, this paper sets out…

  18. How an Organization's Environmental Orientation Impacts Environmental Performance and Its Resultant Financial Performance through Green Computing Hiring Practices: An Empirical Investigation of the Natural Resource-Based View of the Firm

    ERIC Educational Resources Information Center

    Aken, Andrew Joseph

    2010-01-01

    This dissertation uses the logic embodied in Strategic Fit Theory, the Natural Resource- Based View of the Firm (NRBV), strategic human resource management, and other relevant literature streams to empirically demonstrate how the environmental orientation of a firm's strategy impacts their environmental performance and resultant financial…

  19. The development, evolution, and status of Holland's theory of vocational personalities: Reflections and future directions for counseling psychology.

    PubMed

    Nauta, Margaret M

    2010-01-01

    This article celebrates the 50th anniversary of the introduction of John L. Holland's (1959) theory of vocational personalities and work environments by describing the theory's development and evolution, its instrumentation, and its current status. Hallmarks of Holland's theory are its empirical testability and its user-friendliness. By constructing measures for operationalizing the theory's constructs, Holland and his colleagues helped ensure that the theory could be implemented in practice on a widespread basis. Empirical data offer considerable support for the existence of Holland's RIASEC types and their ordering among persons and environments. Although Holland's congruence hypotheses have received empirical support, congruence appears to have modest predictive power. Mixed support exists for Holland's hypotheses involving the secondary constructs of differentiation, consistency, and vocational identity. Evidence of the continued impact of Holland's theory on the field of counseling psychology, particularly in the area of interest assessment, can be seen from its frequent implementation in practice and its use by scholars. Ideas for future research and practice using Holland's theory are suggested.

  20. Impact of Company Size on Manufacturing Improvement Practices: An empirical study

    NASA Astrophysics Data System (ADS)

    Syan, C. S.; Ramoutar, K.

    2014-07-01

    There is a constant search for ways to achieve a competitive advantage through new manufacturing techniques. Best performing manufacturing companies tend to use world-class manufacturing (WCM) practices. Although the last few years have witnessed phenomenal growth in the use of WCM techniques, their effectiveness is not well understood, specifically in the context of less developed countries. This paper presents an empirical study that investigates the impact of company size on improving manufacturing performance in manufacturing organizations based in Trinidad and Tobago (T&T). Empirical data were collected via a questionnaire survey sent to 218 manufacturing firms in T&T. Five different company sizes and seven different industry sectors were studied. The analysis of survey data was performed with the aid of the Statistical Package for the Social Sciences (SPSS) software. The study identified facilitating and impeding factors for improving manufacturing performance; their relative impact and importance depend on company size and industry sector. Findings indicate that T&T manufacturers still practice traditional approaches when compared with world-class manufacturers. In the majority of organizations, these practices were not fully implemented even though the implementation process began more than 5 years ago. The findings provide insights for formulating more effective operational strategies and, subsequently, developing action plans toward more effective implementation of WCM among T&T manufacturers.

  1. Revisiting Synchronous Computer-Mediated Communication: Learner Perception and the Meaning of Corrective Feedback

    ERIC Educational Resources Information Center

    Kim, Hye Yeong

    2014-01-01

    Effectively exploring the efficacy of synchronous computer-mediated communication (SCMC) for pedagogical purposes can be achieved through the careful investigation of potentially beneficial, inherent attributes of SCMC. This study provides empirical evidence for the capacity of task-based SCMC to draw learner attention to linguistic forms by…

  2. Anisotropy and temperature dependence of structural, thermodynamic, and elastic properties of crystalline cellulose Iβ: a first-principles investigation

    Treesearch

    ShunLi Shang; Louis G. Hector Jr.; Paul Saxe; Zi-Kui Liu; Robert J. Moon; Pablo D. Zavattieri

    2014-01-01

    Anisotropy and temperature dependence of structural, thermodynamic and elastic properties of crystalline cellulose Iβ were computed with first-principles density functional theory (DFT) and a semi-empirical correction for van der Waals interactions. Specifically, we report the computed temperature variation (up to 500...

  3. Relative User Ratings of MMPI-2 Computer-Based Test Interpretations

    ERIC Educational Resources Information Center

    Williams, John E.; Weed, Nathan C.

    2004-01-01

    There are eight commercially available computer-based test interpretations (CBTIs) for the Minnesota Multiphasic Personality Inventory-2 (MMPI-2), of which few have been empirically evaluated. Prospective users of these programs have little scientific data to guide choice of a program. This study compared ratings of these eight CBTIs. Test users…

  4. Nonoccurrence of Negotiation of Meaning in Task-Based Synchronous Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Van Der Zwaard, Rose; Bannink, Anne

    2016-01-01

    This empirical study investigated the occurrence of meaning negotiation in an interactive synchronous computer-mediated second language (L2) environment. Sixteen dyads (N = 32) consisting of nonnative speakers (NNSs) and native speakers (NSs) of English performed 2 different tasks using videoconferencing and written chat. The data were coded and…

  5. Computer-Based Methods for Collecting Peer Nomination Data: Utility, Practice, and Empirical Support

    ERIC Educational Resources Information Center

    van den Berg, Yvonne H. M.; Gommans, Rob

    2017-01-01

    New technologies have led to several major advances in psychological research over the past few decades. Peer nomination research is no exception. Thanks to these technological innovations, computerized data collection is becoming more common in peer nomination research. However, computer-based assessment is more than simply programming the…

  6. Programming Video Games and Simulations in Science Education: Exploring Computational Thinking through Code Analysis

    ERIC Educational Resources Information Center

    Garneli, Varvara; Chorianopoulos, Konstantinos

    2018-01-01

    Various aspects of computational thinking (CT) could be supported by educational contexts such as simulations and video-games construction. In this field study, potential differences in student motivation and learning were empirically examined through students' code. For this purpose, we performed a teaching intervention that took place over five…

  7. Innovation Attributes, Policy Intervention, and the Diffusion of Computer Applications Among Local Governments

    ERIC Educational Resources Information Center

    Perry, James L.; Kraemer, Kenneth L.

    1978-01-01

    Argues that innovation attributes, together with policies associated with the diffusion on an innovation, account for significant differences in diffusion patterns. An empirical analysis of this thesis focuses on the diffusion of computer applications software in local government. Available from Elsevier Scientific Publishing Co., Box 211,…

  8. Subgroup Discovery with User Interaction Data: An Empirically Guided Approach to Improving Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    Poitras, Eric G.; Lajoie, Susanne P.; Doleck, Tenzin; Jarrell, Amanda

    2016-01-01

    Learner modeling, a challenging and complex endeavor, is an important and oft-studied research theme in computer-supported education. From this perspective, Educational Data Mining (EDM) research has focused on modeling and comprehending various dimensions of learning in computer-based learning environments (CBLE). Researchers and designers are…

  9. Personality Characteristics and Performance on Computer Assisted Instruction and Programmed Text.

    ERIC Educational Resources Information Center

    Blitz, Allan N.; Smith, Timothy

    An empirical study investigated whether personality characteristics have a bearing on an individual's success with particular modes of instruction, in this case, computer-assisted instruction (CAI) and the programed text (PT). The study was developed in an attempt to establish useful criteria on which to base a rationale for choosing suitable…

  10. Model Uncertainty and Robustness: A Computational Framework for Multimodel Analysis

    ERIC Educational Resources Information Center

    Young, Cristobal; Holsteen, Katherine

    2017-01-01

    Model uncertainty is pervasive in social science. A key question is how robust empirical results are to sensible changes in model specification. We present a new approach and applied statistical software for computational multimodel analysis. Our approach proceeds in two steps: First, we estimate the modeling distribution of estimates across all…

  11. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    ERIC Educational Resources Information Center

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  12. The Effectiveness of Computer-Based Cognitive Training Programs

    ERIC Educational Resources Information Center

    Walcott, Christy M.; Phillips, Miranda E.

    2013-01-01

    The purpose of this article is to summarize empirical findings for school-age computer-based cognitive training (CCT) programs and to provide specific guidelines to practitioners who may be consulting with parents and schools about the utility of such programs. CCT programs vary in nature and in their targeted functions, but they share similar…

  13. Community Colleges in the Information Age: Gains Associated with Students' Use of Computer Technology

    ERIC Educational Resources Information Center

    Anderson, Bodi; Horn, Robert

    2012-01-01

    Computer literacy is increasingly important in higher education, and many educational technology experts propose a more prominent integration of technology into pedagogy. Empirical evidence is needed to support these theories. This study examined community college students planning to transfer to 4-year universities and estimated the relationship…

  14. Synthesizing Results from Empirical Research on Computer-Based Scaffolding in STEM Education: A Meta-Analysis

    ERIC Educational Resources Information Center

    Belland, Brian R.; Walker, Andrew E.; Kim, Nam Ju; Lefler, Mason

    2017-01-01

    Computer-based scaffolding assists students as they generate solutions to complex problems, goals, or tasks, helping increase and integrate their higher order skills in the process. However, despite decades of research on scaffolding in STEM (science, technology, engineering, and mathematics) education, no existing comprehensive meta-analysis has…

  15. Pavement-Transportation Computer Assisted Structural Engineering (PCASE) Implementation of the Modified Berggren (ModBerg) Equation for Computing the Frost Penetration Depth within Pavement Structures

    DTIC Science & Technology

    2012-04-01

    ERDC/GSL TR-12-15, Geotechnical and Structures Laboratory, April 2012. Report describing the Pavement-Transportation Computer Assisted Structural Engineering (PCASE) implementation of the Modified Berggren (ModBerg) equation for computing the frost penetration depth within pavement structures.

  16. Faculty of Education Students' Computer Self-Efficacy Beliefs and Their Attitudes towards Computers and Implementing Computer Supported Education

    ERIC Educational Resources Information Center

    Berkant, Hasan Güner

    2016-01-01

    This study investigates faculty of education students' computer self-efficacy beliefs and their attitudes towards computers and implementing computer supported education. This study is descriptive and based on a correlational survey model. The final sample consisted of 414 students studying in the faculty of education of a Turkish university. The…

  17. Comprehensive review: Computational modelling of schizophrenia.

    PubMed

    Valton, Vincent; Romaniuk, Liana; Douglas Steele, J; Lawrie, Stephen; Seriès, Peggy

    2017-12-01

    Computational modelling has been used to address: (1) the variety of symptoms observed in schizophrenia using abstract models of behavior (e.g. Bayesian models - top-down descriptive models of psychopathology); (2) the causes of these symptoms using biologically realistic models involving abnormal neuromodulation and/or receptor imbalance (e.g. connectionist and neural networks - bottom-up realistic models of neural processes). These different levels of analysis have been used to answer different questions (i.e. understanding behavioral vs. neurobiological anomalies) about the nature of the disorder. As such, these computational studies have mostly supported diverging hypotheses of schizophrenia's pathophysiology, resulting in a literature that is not always expanding coherently. Some of these hypotheses are however ripe for revision using novel empirical evidence. Here we present a review that first synthesizes the literature of computational modelling for schizophrenia and psychotic symptoms into categories supporting the dopamine, glutamate, GABA, dysconnection and Bayesian inference hypotheses respectively. Secondly, we compare model predictions against the accumulated empirical evidence and finally we identify specific hypotheses that have been left relatively under-investigated. Copyright © 2017. Published by Elsevier Ltd.

  18. Mass Estimation and Its Applications

    DTIC Science & Technology

    2012-02-23

    Excerpts: ... the rectangular kernel function has fixed width or fixed per-unit size, but the rectangular function used in mass has no parameter ... MassTER is implemented in Java, and the empirical evaluation uses DBSCAN in WEKA [13] and a version of DENCLUE implemented in R (www.r-project.org) ... Proceedings of SIGKDD, 2010, 989-998. [13] I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations.

  19. Development of an empirically based dynamic biomechanical strength model

    NASA Technical Reports Server (NTRS)

    Pandya, A.; Maida, J.; Aldridge, A.; Hasson, S.; Woolford, B.

    1992-01-01

    The focus here is on the development of a dynamic strength model for humans. Our model is based on empirical data. The shoulder, elbow, and wrist joints are characterized in terms of maximum isolated torque, position, and velocity in all rotational planes. This information is reduced by a least-squares regression technique into a table of single-variable, second-degree polynomial equations determining the torque as a function of position and velocity. The isolated joint torque equations are then used to compute forces resulting from a composite motion, which in this case is a ratchet wrench push and pull operation. What is presented here is a comparison of the computed or predicted results of the model with the actual measured values for the composite motion.
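
    The reduction step above, fitting single-variable quadratics to measured joint torque, can be sketched as follows. The data values are hypothetical, and how the original model combines the position and velocity fits for composite motions is not specified here; only the least-squares quadratic fit itself is shown.

```python
import numpy as np

# Hypothetical isolated-joint data: maximum elbow-flexion torque [N*m]
# measured at several joint angles [deg] with angular velocity held fixed.
angle  = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)
torque = np.array([42, 45, 47, 46, 43, 38, 31], dtype=float)

# Least-squares fit of a single-variable, second-degree polynomial, as in
# the reduction step described in the abstract; the same call would be
# repeated for torque vs. angular velocity and tabulated per joint and
# rotational plane.
coeffs = np.polyfit(angle, torque, deg=2)

def max_torque(a_deg):
    """Predicted maximum torque at joint angle a_deg (degrees)."""
    return np.polyval(coeffs, a_deg)

print(max_torque(37.5))
```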

  20. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
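
    The client side of this server-based model can be sketched in a few lines. The abstract describes SOAP/WSDL services, but the endpoint, parameter names, and JSON-over-HTTP framing below are placeholders chosen only to illustrate the pattern of sending inputs to an organizational server and receiving results; they are not the SCEC/CME interfaces.

```python
import json
import urllib.request

# Hypothetical endpoint and payload: the real service names, URLs, and
# parameter schemas are not specified here.
SERVICE_URL = "https://example.org/services/coordinate-conversion"

payload = json.dumps({"easting": 500000.0, "northing": 3762155.0,
                      "zone": 11, "hemisphere": "N"}).encode("utf-8")

request = urllib.request.Request(
    SERVICE_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# The heavy computation runs on the server; the local machine only
# sends inputs and receives results.
with urllib.request.urlopen(request, timeout=30) as response:
    result = json.loads(response.read().decode("utf-8"))

print(result)   # e.g. {"latitude": ..., "longitude": ...}
```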

  1. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  2. The importance of education, understanding, and empirical research in social work: the nuts and bolts of the business.

    PubMed

    Long, Kimberly; Wodarski, John S

    2010-05-01

    Over the past three decades, existing literature has demanded, and continues to demand, accountability in the delivery of social services through empirically based research and implementation of established norms: this is, in and of itself, the true basis of social work. It is through these norms and empirically established models and theories of treatment that a social worker can really do what he/she wants to do: help the client. This article will describe the nuts and bolts of social work, i.e., those theories, models, and established norms of practice. It is the authors' desire that all social workers be educated in the nuts and bolts (basics) and that this education be based on empirical evidence that supports behavioral change through intervention and modification.

  3. Cyber-T web server: differential analysis of high-throughput data.

    PubMed

    Kayala, Matthew A; Baldi, Pierre

    2012-07-01

    The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001;17:509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing data normalization options, including logarithmic and variance stabilizing normalization (VSN) transforms, are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple tests correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
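
    The regularized-variance idea can be illustrated with a minimal sketch in the spirit of the method described above: each probe's variance is shrunk toward a background variance pooled from probes with similar mean intensity. The window size, the prior weight v0, the neighborhood pooling via a running mean, and the effective degrees of freedom are all assumptions of this sketch and may differ from the Cyber-T implementation.

```python
import numpy as np
from scipy import stats

def regularized_ttest(x, y, window=101, v0=10):
    """Sketch of a Bayesian-regularized two-sample t-test.

    x, y : arrays of shape (probes, replicates) for the two conditions.
    """
    def shrunk_var(data):
        n = data.shape[1]
        s2 = data.var(axis=1, ddof=1)
        order = np.argsort(data.mean(axis=1))
        # background variance: running mean of s2 over probes with
        # neighbouring mean intensity (the "neighborhood" pooling)
        kernel = np.ones(window) / window
        bg = np.convolve(s2[order], kernel, mode="same")
        sigma0_sq = np.empty_like(bg)
        sigma0_sq[order] = bg
        # regularized variance: weighted combination of prior and data
        return (v0 * sigma0_sq + (n - 1) * s2) / (v0 + n - 2), n

    vx, nx = shrunk_var(x)
    vy, ny = shrunk_var(y)
    t = (x.mean(axis=1) - y.mean(axis=1)) / np.sqrt(vx / nx + vy / ny)
    df = nx + ny - 2 + 2 * v0          # effective degrees of freedom (assumption)
    p = 2 * stats.t.sf(np.abs(t), df)
    return t, p
```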

  4. The Philosophy, Theoretical Bases, and Implementation of the AHAAH Model for Evaluation of Hazard from Exposure to Intense Sounds

    DTIC Science & Technology

    2018-04-01

    Excerpts: ... empirical, external energy-damage correlation methods for evaluating hearing damage risk associated with impulsive noise exposure. AHAAH applies the ... is validated against the measured results of human exposures to impulsive sounds, and unlike wholly empirical correlation approaches, AHAAH's ... a measured level (LAEQ8 of 85 dB). The approach in MIL-STD-1474E is very different. Previous standards tried to find a correlation between some ...

  5. Why do generic drugs fail to achieve an adequate market share in Greece? Empirical findings and policy suggestions.

    PubMed

    Balasopoulos, T; Charonis, A; Athanasakis, K; Kyriopoulos, J; Pavi, E

    2017-03-01

    Since 2010, the memoranda of understanding have been implemented in Greece as a measure of fiscal adjustment. Public pharmaceutical expenditure was one of the main focuses of this implementation. Numerous policies targeted at pharmaceutical spending reduced the pharmaceutical budget by 60.5%. Yet generics' penetration in Greece remained among the lowest in the OECD. This study aims to highlight the factors that affect the population's perceptions of generic drugs and to suggest effective policy measures. The empirical analysis is based on a national cross-sectional survey conducted with a sample of 2003 individuals, representative of the general population. Two ordinal logistic regression models were constructed in order to identify the determinants that affect respondents' beliefs about the safety and the effectiveness of generic drugs. The empirical findings show a positive and statistically significant correlation with income, bill payment difficulties, perceived safety and effectiveness of drugs, prescription and dispensing preferences, and views toward pharmaceutical companies. Age and trust toward the medical community also have a positive and statistically significant correlation with the perception of the safety of generic drugs. Policy interventions are suggested on the basis of the empirical results in three major categories: (a) information campaigns, (b) incentives for doctors and pharmacists, and (c) strengthening the bioequivalence control framework and the dissemination of results. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. The meaning and measurement of implementation climate

    PubMed Central

    2011-01-01

    Background Climate has a long history in organizational studies, but few theoretical models integrate the complex effects of climate during innovation implementation. In 1996, a theoretical model was proposed that organizations could develop a positive climate for implementation by making use of various policies and practices that promote organizational members' means, motives, and opportunities for innovation use. The model proposes that implementation climate--or the extent to which organizational members perceive that innovation use is expected, supported, and rewarded--is positively associated with implementation effectiveness. The implementation climate construct holds significant promise for advancing scientific knowledge about the organizational determinants of innovation implementation. However, the construct has not received sufficient scholarly attention, despite numerous citations in the scientific literature. In this article, we clarify the meaning of implementation climate, discuss several measurement issues, and propose guidelines for empirical study. Discussion Implementation climate differs from constructs such as organizational climate, culture, or context in two important respects: first, it has a strategic focus (implementation), and second, it is innovation-specific. Measuring implementation climate is challenging because the construct operates at the organizational level, but requires the collection of multi-dimensional perceptual data from many expected innovation users within an organization. In order to avoid problems with construct validity, assessments of within-group agreement of implementation climate measures must be carefully considered. Implementation climate implies a high degree of within-group agreement in climate perceptions. However, researchers might find it useful to distinguish implementation climate level (the average of implementation climate perceptions) from implementation climate strength (the variability of implementation climate perceptions). It is important to recognize that the implementation climate construct applies most readily to innovations that require collective, coordinated behavior change by many organizational members both for successful implementation and for realization of anticipated benefits. For innovations that do not possess these attributes, individual-level theories of behavior change could be more useful in explaining implementation effectiveness. Summary This construct has considerable value in implementation science; however, further debate and development are necessary to refine and distinguish the construct for empirical use. PMID:21781328
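
    The level/strength distinction drawn above reduces to simple aggregation arithmetic, shown in the sketch below with hypothetical survey ratings. Published studies typically use formal agreement indices (e.g., rwg or intraclass correlations) rather than a raw standard deviation; this is only a numerical illustration of the two constructs.

```python
import numpy as np

# Hypothetical survey: rows = respondents within one organization,
# columns = implementation-climate items rated 1-5.
ratings = np.array([
    [4, 5, 4, 3],
    [4, 4, 5, 4],
    [3, 4, 4, 4],
    [5, 4, 4, 5],
], dtype=float)

person_scores = ratings.mean(axis=1)          # each member's climate perception

climate_level = person_scores.mean()          # average perception ("level")
climate_strength = person_scores.std(ddof=1)  # variability ("strength"; lower SD = stronger)

print(f"level = {climate_level:.2f}, strength (SD) = {climate_strength:.2f}")
```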

  7. Optoelectronic Reservoir Computing

    PubMed Central

    Paquot, Y.; Duport, F.; Smerieri, A.; Dambre, J.; Schrauwen, B.; Haelterman, M.; Massar, S.

    2012-01-01

    Reservoir computing is a recently introduced, highly efficient bio-inspired approach for processing time-dependent data. The basic scheme of reservoir computing consists of a nonlinear recurrent dynamical system coupled to a single input layer and a single output layer. Within these constraints many implementations are possible. Here we report an optoelectronic implementation of reservoir computing based on a recently proposed architecture consisting of a single nonlinear node and a delay line. Our implementation is sufficiently fast for real time information processing. We illustrate its performance on tasks of practical importance such as nonlinear channel equalization and speech recognition, and obtain results comparable to state-of-the-art digital implementations. PMID:22371825
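
    A toy, discrete-time caricature of the single-node-with-delay-line idea is sketched below: one nonlinear node, virtual nodes created by a random input mask, and a ridge-regression readout as the only trained part. The update rule, parameters, and demo task are assumptions for illustration and do not reproduce the optoelectronic delay dynamics of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# N virtual nodes, feedback strength alpha, input scaling beta, ridge penalty.
N, alpha, beta, ridge = 50, 0.5, 0.5, 1e-6
mask = rng.uniform(-1, 1, N)          # random input mask over virtual nodes

def run_reservoir(u):
    """u : 1-D input sequence. Returns reservoir states, shape (len(u), N)."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, ut in enumerate(u):
        # simplified virtual-node update (a stand-in for the true
        # delay-differential dynamics of the optoelectronic loop)
        x = np.tanh(alpha * np.roll(x, 1) + beta * mask * ut)
        states[t] = x
    return states

# Demo task: learn to reproduce a delayed copy of the input.
u = rng.uniform(-1, 1, 2000)
target = np.roll(u, 3)
X = run_reservoir(u)

# Linear (ridge-regression) readout.
W = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ target)
pred = X @ W
print("train NMSE:", np.mean((pred - target) ** 2) / np.var(target))
```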

  8. Prolegomena to the field

    NASA Astrophysics Data System (ADS)

    Chen, Su Shing; Caulfield, H. John

    1994-03-01

    Adaptive Computing, as opposed to Classical Computing, is emerging as a field which is the culmination of more than 40 years of work in various scientific and technological areas, including cybernetics, neural networks, pattern recognition networks, learning machines, self-reproducing automata, genetic algorithms, fuzzy logics, probabilistic logics, chaos, electronics, optics, and quantum devices. This volume of “Critical Reviews on Adaptive Computing: Mathematics, Electronics, and Optics” is intended as a synergistic approach to this emerging field. There are many researchers in these areas working on important results. However, we have not seen a general effort to summarize and synthesize these results in theory as well as implementation. In order to reach a higher level of synergism, we propose Adaptive Computing as the field which comprises the above-mentioned computational paradigms and various realizations. The field should include both the Theory (or Mathematics) and the Implementation. Our emphasis is on the interplay of Theory and Implementation. This interplay, an adaptive process itself, is the only "holistic" way to advance our understanding and realization of brain-like computation. We feel that a theory without implementation has the tendency to become unrealistic and "out-of-touch" with reality, while an implementation without theory runs the risk of being superficial and obsolete.

  9. AMDTreat 5.0+ with PHREEQC titration module to compute caustic chemical quantity, effluent quality, and sludge volume

    USGS Publications Warehouse

    Cravotta, Charles A.; Means, Brent P; Arthur, Willam; McKenzie, Robert M; Parkhurst, David L.

    2015-01-01

    Alkaline chemicals are commonly added to discharges from coal mines to increase pH and decrease concentrations of acidity and dissolved aluminum, iron, manganese, and associated metals. The annual cost of chemical treatment depends on the type and quantities of chemicals added and sludge produced. The AMDTreat computer program, initially developed in 2003, is widely used to compute such costs on the basis of the user-specified flow rate and water quality data for the untreated AMD. Although AMDTreat can use results of empirical titration of net-acidic or net-alkaline effluent with caustic chemicals to accurately estimate costs for treatment, such empirical data are rarely available. A titration simulation module using the geochemical program PHREEQC has been incorporated with AMDTreat 5.0+ to improve the capability of AMDTreat to estimate: (1) the quantity and cost of caustic chemicals to attain a target pH, (2) the chemical composition of the treated effluent, and (3) the volume of sludge produced by the treatment. The simulated titration results for selected caustic chemicals (NaOH, CaO, Ca(OH)2, Na2CO3, or NH3) without aeration or with pre-aeration can be compared with or used in place of empirical titration data to estimate chemical quantities, treated effluent composition, sludge volume (precipitated metals plus unreacted chemical), and associated treatment costs. This paper describes the development, evaluation, and potential utilization of the PHREEQC titration module with the new AMDTreat 5.0+ computer program available at http://www.amd.osmre.gov/.
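
    The cost arithmetic that such a tool automates can be shown with a heavily simplified, back-of-the-envelope sketch (not AMDTreat or PHREEQC): estimate an annual NaOH quantity and cost from flow and measured net acidity. The flow, acidity, unit cost, and the 0.8 stoichiometric factor (equivalent weights of 40 g/eq for NaOH vs. 50 g/eq for CaCO3) are stated assumptions; a titration simulation would refine the dose for a specific water chemistry.

```python
# Hypothetical inputs
FLOW_GPM = 250.0            # discharge flow, gallons per minute
ACIDITY_MG_L = 180.0        # net acidity as CaCO3, mg/L
NAOH_COST_PER_TON = 700.0   # delivered cost, USD per short ton

LITERS_PER_GALLON = 3.785
MIN_PER_YEAR = 525_600
G_PER_TON = 907_185.0

# ~0.8 mg/L NaOH neutralizes 1 mg/L acidity as CaCO3 (stoichiometric assumption)
naoh_mg_per_l = 0.8 * ACIDITY_MG_L

liters_per_year = FLOW_GPM * LITERS_PER_GALLON * MIN_PER_YEAR
naoh_tons_per_year = naoh_mg_per_l * liters_per_year / 1000.0 / G_PER_TON
annual_cost = naoh_tons_per_year * NAOH_COST_PER_TON

print(f"NaOH: {naoh_tons_per_year:.1f} tons/yr, cost ~ ${annual_cost:,.0f}/yr")
```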

  10. Fault-tolerant software - Experiment with the sift operating system. [Software Implemented Fault Tolerance computer

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

    Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that, to effectively implement fault-tolerant software design techniques, system requirements will be impacted and suggest that retrofitting fault-tolerant software on existing designs will be inefficient and may require system modification.

  11. Method for implementation of recursive hierarchical segmentation on parallel computers

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.
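
    The recursion-level control described in this record can be sketched as follows. The segmentation and merge steps are placeholders (not the patented algorithm), the quadrant split and level constants are assumptions, and threads stand in for the parallel computing platform purely to keep the example self-contained; a real implementation would distribute the coarse-level work across processes or nodes.

```python
from concurrent.futures import ThreadPoolExecutor

BOTTOM_LEVEL = 4        # recursion stops dividing here
INTERMEDIATE_LEVEL = 2  # parallel above this level, serial below it

def segment_section(section):
    # placeholder for the actual region-growing / merging segmentation
    return section

def quadrants(section):
    h, w = len(section), len(section[0])
    return [
        [row[:w // 2] for row in section[:h // 2]],
        [row[w // 2:] for row in section[:h // 2]],
        [row[:w // 2] for row in section[h // 2:]],
        [row[w // 2:] for row in section[h // 2:]],
    ]

def merge(results):
    # placeholder for recombining sections / convergence checking
    return results

def recursive_segment(section, level=0):
    if level >= BOTTOM_LEVEL:
        return segment_section(section)
    parts = quadrants(section)
    if level < INTERMEDIATE_LEVEL:
        # parallel implementation at the coarse levels of recursion
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(recursive_segment, parts,
                                    [level + 1] * len(parts)))
    else:
        # serial implementation at the finer levels
        results = [recursive_segment(p, level + 1) for p in parts]
    return merge(results)

image = [[float(i * 16 + j) for j in range(16)] for i in range(16)]
result = recursive_segment(image)
```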

  12. Training of Lay Health Educators to Implement an Evidence-Based Behavioral Weight Loss Intervention in Rural Senior Centers

    ERIC Educational Resources Information Center

    Krukowski, Rebecca A.; Lensing, Shelly; Love, ShaRhonda; Prewitt, T. Elaine; Adams, Becky; Cornell, Carol E.; Felix, Holly C.; West, Delia

    2013-01-01

    Purpose of the Study: Lay health educators (LHEs) offer great promise for facilitating the translation of evidence-based health promotion programs to underserved areas; yet, there is little guidance on how to train LHEs to implement these programs, particularly in the crucial area of empirically validated obesity interventions. Design and Methods:…

  13. A Narrative Review of Problem-Based Learning with School-Aged Children: Implementation and Outcomes

    ERIC Educational Resources Information Center

    Jerzembek, Gabi; Murphy, Simon

    2013-01-01

    This paper reviews empirical studies that have evaluated the impact of problem-based learning (PBL) on school-aged pupils, in order to summarise how it has been implemented and to assess its effects on academic and personal development. Following electronic searches of PsychINFO, the British Education Index and the Cochrane review database, six…

  14. ESD Implementation at the School Organisation Level, Part 2--Investigating the Transformative Perspective in School Leaders' Quality Strategies at ESD Schools

    ERIC Educational Resources Information Center

    Mogren, Anna; Gericke, Niklas

    2017-01-01

    Previous research has suggested that adopting a transformative school organisation perspective when implementing ESD may be more productive than the previously recommended transmissive perspectives, but it is not clear how transformative perspectives could be introduced. To address this issue, we conducted an empirical mixed methods study of…

  15. Lessons Learned from Implementing a Check-in/Check-out Behavioral Program in an Urban Middle School

    ERIC Educational Resources Information Center

    Myers, Diane M.; Briere, Donald E., III

    2010-01-01

    Schoolwide positive behavior support (SWPBS) is an empirically supported approach that is implemented by more than 10,000 schools in the United States to support student and staff behavior (www.pbis.org). SWPBS is based on a three-tiered prevention logic: (a) Tier 1 interventions support all students; (b) Tier 2 interventions support targeted…

  16. Bridge over Troubled Water: Using Implementation Science to Facilitate Effective Services in Child Welfare

    ERIC Educational Resources Information Center

    Mildon, Robyn; Shlonsky, Aron

    2011-01-01

    To maximize benefits to children and their families, effective practices need to be used competently in child welfare settings. Since the 1990s, researchers and policy makers have focused attention on empirically supported interventions (ESIs). Much less attention has been paid to what is needed to implement these in a range of real-world…

  17. A new method for QRS detection in ECG signals using QRS-preserving filtering techniques.

    PubMed

    Sharma, Tanushree; Sharma, Kamalesh K

    2018-03-28

    Detection of QRS complexes in ECG signals is required for various purposes such as determination of heart rate, feature extraction and classification. The problem of automatic QRS detection in ECG signals is complicated by the presence of noise spectrally overlapping with the QRS frequency range. As a solution to this problem, we propose the use of least-squares-optimisation-based smoothing techniques that suppress the noise peaks in the ECG while preserving the QRS complexes. We also propose a novel nonlinear transformation technique that is applied after the smoothing operations, which equalises the QRS amplitudes without boosting the suppressed noise peaks. After these preprocessing operations, the R-peaks can finally be detected with high accuracy. The proposed technique has a low computational load and, therefore, it can be used for real-time QRS detection in a wearable device such as a Holter monitor or for fast offline QRS detection. The offline and real-time versions of the proposed technique have been evaluated on the standard MIT-BIH database. The offline implementation is found to perform better than state-of-the-art techniques based on wavelet transforms, empirical mode decomposition, etc. and the real-time implementation also shows improved performance over existing real-time QRS detection techniques.
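
    A hedged sketch of the smooth-then-transform-then-peak-pick pipeline is given below. Savitzky-Golay filtering is used as a generic least-squares smoother and the amplitude-equalising transform is a squared derivative normalised by its local energy; both are stand-ins under stated assumptions, not a reimplementation of the authors' specific filters, and the window lengths and thresholds are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def detect_r_peaks(ecg, fs):
    """Return sample indices of candidate R-peaks in a 1-D ECG at rate fs."""
    # least-squares (polynomial) smoothing to suppress noise peaks
    win = int(0.06 * fs) | 1                     # odd window, ~60 ms
    smooth = savgol_filter(ecg, window_length=win, polyorder=3)

    # crude nonlinear transform: emphasise sharp deflections, then
    # equalise amplitudes with a moving local-energy estimate
    diff = np.gradient(smooth)
    energy = diff ** 2
    avg_win = int(0.15 * fs)
    local = np.convolve(energy, np.ones(avg_win) / avg_win, mode="same") + 1e-12
    feature = energy / local

    # peak picking with a ~200 ms physiological refractory period
    peaks, _ = find_peaks(feature, distance=int(0.2 * fs),
                          height=np.mean(feature))
    return peaks
```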

  18. A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment

    NASA Astrophysics Data System (ADS)

    Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.

    2017-12-01

    Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have been carried out on this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to present a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparing to the size-dependent mobilities predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a modified critical particle size of incipient motion that accounts for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
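
    The lognormal bookkeeping described above can be sketched with synthetic data: fit a lognormal to the pre-entrainment bed PSD, split it at a threshold size of incipient motion, and report the entrained fraction. The synthetic PSD, the threshold value, and the hard size cutoff are assumptions; the mixture-based post-entrainment update and probabilistic smoothing in the actual method are more involved than this truncation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bed_sizes_mm = rng.lognormal(mean=np.log(0.8), sigma=0.6, size=5000)  # synthetic PSD

# fit a lognormal PDF to the pre-entrainment PSD (location fixed at 0)
shape, loc, scale = stats.lognorm.fit(bed_sizes_mm, floc=0)
psd = stats.lognorm(shape, loc=loc, scale=scale)

d_threshold_mm = 1.2   # threshold particle size of incipient motion (assumed)

# fraction of the bed finer than the threshold -> mobilized
entrained_fraction = psd.cdf(d_threshold_mm)

# size-dependent mobility: probability that a grain of size d is entrained
# (hard cutoff here; the paper's probabilistic treatment smooths this)
def mobility(d_mm):
    return np.where(d_mm <= d_threshold_mm, 1.0, 0.0)

print(f"entrained fraction ~ {entrained_fraction:.2f}")
print(mobility(np.array([0.5, 1.0, 2.0])))
```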

  19. Photo-z-SQL: Integrated, flexible photometric redshift computation in a database

    NASA Astrophysics Data System (ADS)

    Beck, R.; Dobos, L.; Budavári, T.; Szalay, A. S.; Csabai, I.

    2017-04-01

    We present a flexible template-based photometric redshift estimation framework, implemented in C#, that can be seamlessly integrated into a SQL database (or DB) server and executed on-demand in SQL. The DB integration eliminates the need to move large photometric datasets outside a database for redshift estimation, and utilizes the computational capabilities of DB hardware. The code is able to perform both maximum likelihood and Bayesian estimation, and can handle inputs of variable photometric filter sets and corresponding broad-band magnitudes. It is possible to take into account the full covariance matrix between filters, and filter zero points can be empirically calibrated using measurements with given redshifts. The list of spectral templates and the prior can be specified flexibly, and the expensive synthetic magnitude computations are done via lazy evaluation, coupled with a caching of results. Parallel execution is fully supported. For large upcoming photometric surveys such as the LSST, the ability to perform in-place photo-z calculation would be a significant advantage. Also, the efficient handling of variable filter sets is a necessity for heterogeneous databases, for example the Hubble Source Catalog, and for cross-match services such as SkyQuery. We illustrate the performance of our code on two reference photo-z estimation testing datasets, and provide an analysis of execution time and scalability with respect to different configurations. The code is available for download at https://github.com/beckrob/Photo-z-SQL.
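
    The maximum-likelihood template fitting that this framework implements can be illustrated, independently of its C#/SQL integration, with a small grid-search sketch. The synthetic template fluxes, filter count, and noise level below are placeholders, not the Photo-z-SQL API; the sketch only shows the chi-square minimisation over a redshift grid with an analytic best-fit amplitude per template.

```python
import numpy as np

z_grid = np.linspace(0.0, 2.0, 201)
n_filters, n_templates = 5, 3

# Synthetic, smoothly z-dependent template fluxes:
# model_flux[i, j, k] = flux of template j at redshift z_grid[i] in filter k.
Z, J, K = np.meshgrid(z_grid, np.arange(n_templates), np.arange(n_filters),
                      indexing="ij")
model_flux = 1.0 + 0.5 * np.sin(2.0 * Z + J + 0.7 * K)

rng = np.random.default_rng(2)
obs_flux = 2.5 * model_flux[120, 1] + rng.normal(0, 0.05, n_filters)  # fake galaxy
obs_err = np.full(n_filters, 0.05)

# Analytic best-fit amplitude per (z, template), then chi-square.
num = np.sum(model_flux * obs_flux / obs_err**2, axis=-1)
den = np.sum(model_flux**2 / obs_err**2, axis=-1)
amp = num / den
resid = obs_flux - amp[..., None] * model_flux
chi2 = np.sum((resid / obs_err) ** 2, axis=-1)

iz, it = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"ML photo-z = {z_grid[iz]:.2f} (template {it}), chi2 = {chi2[iz, it]:.2f}")
```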

  20. Reconstruction of normal forms by learning informed observation geometries from data.

    PubMed

    Yair, Or; Talmon, Ronen; Coifman, Ronald R; Kevrekidis, Ioannis G

    2017-09-19

    The discovery of physical laws consistent with empirical observations is at the heart of (applied) science and engineering. These laws typically take the form of nonlinear differential equations depending on parameters; dynamical systems theory provides, through the appropriate normal forms, an "intrinsic" prototypical characterization of the types of dynamical regimes accessible to a given model. Using an implementation of data-informed geometry learning, we directly reconstruct the relevant "normal forms": a quantitative mapping from empirical observations to prototypical realizations of the underlying dynamics. Interestingly, the state variables and the parameters of these realizations are inferred from the empirical observations; without prior knowledge or understanding, they parametrize the dynamics intrinsically without explicit reference to fundamental physical quantities.
