Systematic Applications of Metabolomics in Metabolic Engineering
Dromms, Robert A.; Styczynski, Mark P.
2012-01-01
The goals of metabolic engineering are well-served by the biological information provided by metabolomics: information on how the cell is currently using its biochemical resources is perhaps one of the best ways to inform strategies to engineer a cell to produce a target compound. Using the analysis of extracellular or intracellular levels of the target compound (or a few closely related molecules) to drive metabolic engineering is quite common. However, there is surprisingly little systematic use of metabolomics datasets, which simultaneously measure hundreds of metabolites rather than just a few, for that same purpose. Here, we review the most common systematic approaches to integrating metabolite data with metabolic engineering, with emphasis on existing efforts to use whole-metabolome datasets. We then review some of the most common approaches for computational modeling of cell-wide metabolism, including constraint-based models, and discuss current computational approaches that explicitly use metabolomics data. We conclude with discussion of the broader potential of computational approaches that systematically use metabolomics data to drive metabolic engineering. PMID:24957776
A Systematic Approach for Understanding Slater-Gaussian Functions in Computational Chemistry
ERIC Educational Resources Information Center
Stewart, Brianna; Hylton, Derrick J.; Ravi, Natarajan
2013-01-01
A systematic way to understand the intricacies of quantum mechanical computations done by a software package known as "Gaussian" is undertaken via an undergraduate research project. These computations involve the evaluation of key parameters in a fitting procedure to express a Slater-type orbital (STO) function in terms of the linear…
When New Boundaries Abound: A Systematic Approach to Redistricting.
ERIC Educational Resources Information Center
Creighton, Roger L.; Irwin, Armond J.
1994-01-01
A systematic approach to school redistricting that was developed over the past half-dozen years utilizes a computer. Crucial to achieving successful results are accuracy of data, enrollment forecasting, and citizen participation. Outlines the major steps of a typical redistricting study. One figure illustrates the redistricting process. (MLF)
Multiconfiguration calculations of electronic isotope shift factors in Al i
NASA Astrophysics Data System (ADS)
Filippin, Livio; Beerwerth, Randolf; Ekman, Jörgen; Fritzsche, Stephan; Godefroid, Michel; Jönsson, Per
2016-12-01
The present work reports results from systematic multiconfiguration Dirac-Hartree-Fock calculations of electronic isotope shift factors for a set of transitions between low-lying levels of neutral aluminium. These electronic quantities together with observed isotope shifts between different pairs of isotopes provide the changes in mean-square charge radii of the atomic nuclei. Two computational approaches are adopted for the estimation of the mass- and field-shift factors. Within these approaches, different models for electron correlation are explored in a systematic way to determine a reliable computational strategy and to estimate theoretical error bars of the isotope shift factors.
Secure Multiparty Quantum Computation for Summation and Multiplication.
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-21
As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods for computing Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs. Compared to classical solutions, our proposed approach ensures unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.
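For contrast with the quantum protocol summarized above, the classical baseline it is compared against can be illustrated with a minimal additive secret-sharing summation. This is a hedged sketch of the generic classical technique, not the authors' protocol; the modulus and helper names are illustrative.

```python
# Minimal sketch of classical secure multiparty summation via additive
# secret sharing over Z_p (the classical baseline the abstract contrasts
# with; the paper's quantum protocol is not reproduced here).
import random

P = 2**61 - 1  # a public prime modulus (illustrative choice)

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def secure_sum(private_inputs):
    """Each party shares its input; parties locally add the shares they
    hold, and only the total is reconstructed."""
    n = len(private_inputs)
    all_shares = [share(x, n) for x in private_inputs]
    # party j holds the j-th share of every input
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
    return sum(partial_sums) % P

if __name__ == "__main__":
    inputs = [12, 7, 30]
    assert secure_sum(inputs) == sum(inputs) % P
    print(secure_sum(inputs))  # 49; no single party saw another's input
```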
A Teaching Exercise for the Identification of Bacteria Using An Interactive Computer Program.
ERIC Educational Resources Information Center
Bryant, Trevor N.; Smith, John E.
1979-01-01
Describes an interactive Fortran computer program which provides an exercise in the identification of bacteria. Provides a way of enhancing a student's approach to systematic bacteriology and numerical identification procedures. (Author/MA)
The soft computing-based approach to investigate allergic diseases: a systematic review.
Tartarisco, Gennaro; Tonacci, Alessandro; Minciullo, Paola Lucia; Billeci, Lucia; Pioggia, Giovanni; Incorvaia, Cristoforo; Gangemi, Sebastiano
2017-01-01
Early recognition of inflammatory markers and their relation to asthma, adverse drug reactions, allergic rhinitis, atopic dermatitis and other allergic diseases is an important goal in allergy. The vast majority of studies in the literature are based on classic statistical methods; however, developments in computational techniques such as soft computing-based approaches hold new promise in this field. The aim of this manuscript is to systematically review the main soft computing-based techniques such as artificial neural networks, support vector machines, Bayesian networks and fuzzy logic to investigate their performance in the field of allergic diseases. The review was conducted following PRISMA guidelines and the protocol was registered within the PROSPERO database (CRD42016038894). The search was performed on PubMed and ScienceDirect, covering the period from September 1, 1990 through April 19, 2016. The review included 27 studies related to allergic diseases and soft computing performance. The included studies, which focused mainly on asthma, reported promising results with an overall accuracy of 86.5%. The review reveals that soft computing-based approaches are suitable for big data analysis and can be very powerful, especially when dealing with uncertainty and poorly characterized parameters. Furthermore, they can provide valuable support in case of lack of data and entangled cause-effect relationships, which make it difficult to assess the evolution of disease. Although most works deal with asthma, we believe the soft computing approach could be a real breakthrough and foster new insights into other allergic diseases as well.
Identification of metabolic pathways using pathfinding approaches: a systematic review.
Abd Algfoor, Zeyad; Shahrizal Sunar, Mohd; Abdullah, Afnizanfaizal; Kolivand, Hoshang
2017-03-01
Metabolic pathways have become increasingly available for various microorganisms. Such pathways have spurred the development of a wide array of computational tools, in particular, mathematical pathfinding approaches. This article aims to facilitate the understanding of computational analysis of metabolic pathways in genomics. Stoichiometric and pathfinding approaches to metabolic pathway analysis are discussed, and three major types of studies are elaborated: stoichiometric identification models, pathway-based graph analysis and pathfinding approaches in cellular metabolism. Furthermore, evaluation of the outcomes of the pathways with mathematical benchmarking metrics is provided. This review should lead to a better understanding of metabolic behavior in living cells from the perspective of computational pathfinding approaches. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
What is Intrinsic Motivation? A Typology of Computational Approaches
Oudeyer, Pierre-Yves; Kaplan, Frederic
2007-01-01
Intrinsic motivation, centrally involved in spontaneous exploration and curiosity, is a crucial concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered growing interest from developmental roboticists in recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches to intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and are even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics. PMID:18958277
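One concrete instance of the computational approaches such a typology covers is prediction-error-based intrinsic reward, where an agent receives a "curiosity" bonus equal to the error of a learned forward model. The sketch below is an illustrative toy, not a model from the paper; the tabular environment and all names are assumptions.

```python
# Hedged sketch of prediction-error-based intrinsic reward: the agent's
# "curiosity" bonus is the error of a learned forward model, which decays
# as the model improves. The toy environment is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2
# simple tabular forward model: predicted next state per (state, action)
model = rng.integers(0, n_states, size=(n_states, n_actions))

def true_step(s, a):
    return (s + (1 if a == 1 else -1)) % n_states  # toy dynamics

def intrinsic_reward(s, a, s_next):
    # reward = prediction error of the forward model (0/1 here)
    return float(model[s, a] != s_next)

s = 0
for t in range(100):
    a = rng.integers(n_actions)
    s_next = true_step(s, a)
    r_int = intrinsic_reward(s, a, s_next)   # drives exploration of novel transitions
    model[s, a] = s_next                     # learn; error (and reward) decays
    s = s_next
```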
2014-01-01
Background This article describes the systematic development of the I Move intervention: a web-based computer-tailored physical activity promotion intervention, aimed at increasing and maintaining physical activity among adults. This intervention is based on the theoretical insights and practical applications of self-determination theory and motivational interviewing. Methods/design Since developing interventions in a systematically planned way increases the likelihood of effectiveness, we used the Intervention Mapping protocol to develop the I Move intervention. In this article, we first describe how we proceeded through each of the six steps of the Intervention Mapping protocol. After that, we describe the content of the I Move intervention and elaborate on the planned randomized controlled trial. Discussion By integrating self-determination theory and motivational interviewing in web-based computer tailoring, the I Move intervention introduces a more participant-centered approach than traditional tailored interventions. Adopting this approach might enhance computer-tailored physical activity interventions both in terms of intervention effectiveness and user appreciation. We will evaluate this in a randomized controlled trial, by comparing the I Move intervention to a more traditional web-based computer-tailored intervention. Trial registration NTR4129 PMID:24580802
Read-across is an important data gap filling technique used within category and analog approaches for regulatory hazard identification and risk assessment. Although much technical guidance is available that describes how to develop category/analog approaches, practical principles...
Hard-spin mean-field theory: A systematic derivation and exact correlations in one dimension
Kabakcioglu
2000-04-01
Hard-spin mean-field theory is an improved mean-field approach which has proven to give accurate results, especially for frustrated spin systems, with relatively little computational effort. In this work, the previous phenomenological derivation is supplanted by a systematic and generic derivation that opens the possibility for systematic improvements, especially for the calculation of long-range correlation functions. A first level of improvement suffices to recover the exact long-range values of the correlation functions in one dimension.
Field Science Ethnography: Methods For Systematic Observation on an Expedition
NASA Technical Reports Server (NTRS)
Clancey, William J.; Clancy, Daniel (Technical Monitor)
2001-01-01
The Haughton-Mars expedition is a multidisciplinary project, exploring an impact crater in an extreme environment to determine how people might live and work on Mars. The expedition seeks to understand and field test Mars facilities, crew roles, operations, and computer tools. I combine an ethnographic approach to establish a baseline understanding of how scientists prefer to live and work when relatively unencumbered, with a participatory design approach of experimenting with procedures and tools in the context of use. This paper focuses on field methods for systematically recording and analyzing the expedition's activities. Systematic photography and time-lapse video are combined with concept mapping to organize and present information. This hybrid approach is generally applicable to the study of modern field expeditions having a dozen or more multidisciplinary participants, spread over a large terrain during multiple field seasons.
Toward a systematic exploration of nano-bio interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Xue; Liu, Fang; Liu, Yin
Many studies of nanomaterials make non-systematic alterations of nanoparticle physicochemical properties. Given the immense size of the property space for nanomaterials, such approaches are not very useful in elucidating fundamental relationships between inherent physicochemical properties of these materials and their interactions with, and effects on, biological systems. Data driven artificial intelligence methods such as machine learning algorithms have proven highly effective in generating models with good predictivity and some degree of interpretability. They can provide a viable method of reducing or eliminating animal testing. However, careful experimental design with the modelling of the results in mind is a proven and efficient way of exploring large materials spaces. This approach, coupled with high speed automated experimental synthesis and characterization technologies now appearing, is the fastest route to developing models that regulatory bodies may find useful. We advocate greatly increased focus on systematic modification of physicochemical properties of nanoparticles combined with comprehensive biological evaluation and computational analysis. This is essential to obtain better mechanistic understanding of nano-bio interactions, and to derive quantitatively predictive and robust models for the properties of nanomaterials that have useful domains of applicability. - Highlights: • Nanomaterials studies make non-systematic alterations to nanoparticle properties. • Vast nanomaterials property spaces require systematic studies of nano-bio interactions. • Experimental design and modelling are efficient ways of exploring materials spaces. • We advocate systematic modification and computational analysis to probe nano-bio interactions.
Lattice Boltzmann and Navier-Stokes Cartesian CFD Approaches for Airframe Noise Predictions
NASA Technical Reports Server (NTRS)
Barad, Michael F.; Kocheemoolayil, Joseph G.; Kiris, Cetin C.
2017-01-01
Lattice Boltzmann (LB) and compressible Navier-Stokes (NS) equations based computational fluid dynamics (CFD) approaches are compared for simulating airframe noise. Both LB and NS CFD approaches are implemented within the Launch Ascent and Vehicle Aerodynamics (LAVA) framework. Both schemes utilize the same underlying Cartesian structured mesh paradigm with provision for local adaptive grid refinement and sub-cycling in time. We choose a prototypical massively separated, wake-dominated flow ideally suited for Cartesian-grid based approaches in this study: the partially-dressed, cavity-closed nose landing gear (PDCC-NLG) noise problem from AIAA's Benchmark problems for Airframe Noise Computations (BANC) series of workshops. The relative accuracy and computational efficiency of the two approaches are systematically compared. Detailed comments are made on the potential held by LB to significantly reduce time-to-solution for a desired level of accuracy within the context of modeling airframe noise from first principles.
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg
2015-07-01
In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.
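The external bias description can be pictured as a persistent stochastic term added to the deterministic simulator output. The sketch below is a minimal illustration of that idea with an AR(1) bias process; the parameter values and the placeholder simulator are assumptions, not values from the study.

```python
# Minimal sketch of an "external bias description": systematic model
# deviations are represented by an AR(1) stochastic process added to the
# deterministic runoff simulation, on top of i.i.d. observation noise.
import numpy as np

rng = np.random.default_rng(1)
T = 200
model_output = 5 + np.sin(np.linspace(0, 6 * np.pi, T))  # placeholder simulator

phi, sigma_b, sigma_eps = 0.9, 0.3, 0.1  # bias persistence / scale, observation noise
bias = np.zeros(T)
for t in range(1, T):
    bias[t] = phi * bias[t - 1] + rng.normal(0, sigma_b * np.sqrt(1 - phi**2))

observations = model_output + bias + rng.normal(0, sigma_eps, T)
# In calibration, (phi, sigma_b, sigma_eps) would be inferred jointly with the
# hydrological parameters, e.g. by MCMC as in the EBD approach described above.
```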
Computer & manual accident typing for bicyclist accidents : administrator's guide
DOT National Transportation Integrated Search
1983-01-01
This guide provides guidelines and procedures for classifying and analyzing bicyclist-motor vehicle accidents. The approach described herein is part of a systematic effort by the National Highway Traffic Safety Administration (NHTSA) to assist states...
Achievements and Challenges in Computational Protein Design.
Samish, Ilan
2017-01-01
Computational protein design (CPD), a yet evolving field, includes computer-aided engineering for partial or full de novo designs of proteins of interest. Designs are defined by a requested structure, function, or working environment. This chapter describes the birth and maturation of the field by presenting 101 CPD examples in a chronological order emphasizing achievements and pending challenges. Integrating these aspects presents the plethora of CPD approaches with the hope of providing a "CPD 101". These reflect on the broader structural bioinformatics and computational biophysics field and include: (1) integration of knowledge-based and energy-based methods, (2) hierarchical designated approach towards local, regional, and global motifs and the integration of high- and low-resolution design schemes that fit each such region, (3) systematic differential approaches towards different protein regions, (4) identification of key hot-spot residues and the relative effect of remote regions, (5) assessment of shape-complementarity, electrostatics and solvation effects, (6) integration of thermal plasticity and functional dynamics, (7) negative design, (8) systematic integration of experimental approaches, (9) objective cross-assessment of methods, and (10) successful ranking of potential designs. Future challenges also include dissemination of CPD software to the general use of life-sciences researchers and the emphasis of success within an in vivo milieu. CPD increases our understanding of protein structure and function and the relationships between the two along with the application of such know-how for the benefit of mankind. Applied aspects range from biological drugs, via healthier and tastier food products to nanotechnology and environmentally friendly enzymes replacing toxic chemicals utilized in the industry.
Computational analysis of conserved RNA secondary structure in transcriptomes and genomes.
Eddy, Sean R
2014-01-01
Transcriptomics experiments and computational predictions both enable systematic discovery of new functional RNAs. However, many putative noncoding transcripts arise instead from artifacts and biological noise, and current computational prediction methods have high false positive rates. I discuss prospects for improving computational methods for analyzing and identifying functional RNAs, with a focus on detecting signatures of conserved RNA secondary structure. An interesting new front is the application of chemical and enzymatic experiments that probe RNA structure on a transcriptome-wide scale. I review several proposed approaches for incorporating structure probing data into the computational prediction of RNA secondary structure. Using probabilistic inference formalisms, I show how all these approaches can be unified in a well-principled framework, which in turn allows RNA probing data to be easily integrated into a wide range of analyses that depend on RNA secondary structure inference. Such analyses include homology search and genome-wide detection of new structural RNAs.
Long-distance effects in B → K*ℓℓ from analyticity
NASA Astrophysics Data System (ADS)
Bobeth, Christoph; Chrzaszcz, Marcin; van Dyk, Danny; Virto, Javier
2018-06-01
We discuss a novel approach to systematically determine the dominant long-distance contribution to B → K*ℓℓ decays in the kinematic region where the dilepton invariant mass is below the open charm threshold. This approach provides the most consistent and reliable determination to date and can be used to compute Standard Model predictions for all observables of interest, including the kinematic region where the dilepton invariant mass lies between the J/ψ and the ψ(2S) resonances. We illustrate the power of our results by performing a New Physics fit to the Wilson coefficient C_9. This approach is systematically improvable from theoretical and experimental sides, and applies to other decay modes of the type B → Vℓℓ, B → Pℓℓ and B → Vγ.
Laboratory services series: a programmed maintenance system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuxbury, D.C.; Srite, B.E.
1980-01-01
The diverse facilities, operations and equipment at a major national research and development laboratory require a systematic, analytical approach to operating equipment maintenance. A computer-scheduled preventive maintenance program is described including program development, equipment identification, maintenance and inspection instructions, scheduling, personnel, and equipment history.
NASA Astrophysics Data System (ADS)
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.
2017-02-01
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show that up to 8.5x improvement at the selected kernel level was achieved with the first approach, and up to 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.
CAMCE: An Environment to Support Multimedia Courseware Projects.
ERIC Educational Resources Information Center
Barrese, R. M.; And Others
1992-01-01
Presents results of CAMCE (Computer-Aided Multimedia Courseware Engineering) project research concerned with definition of a methodology to describe a systematic approach for multimedia courseware development. Discussion covers the CAMCE methodology, requirements of an advanced authoring environment, use of an object-based model in the CAMCE…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.
2007-09-15
We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls.
A new decision sciences for complex systems.
Lempert, Robert J
2002-05-14
Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an approach to decision-making under conditions of deep uncertainty that is ideally suited to applying complex systems to policy analysis. The article demonstrates the approach on the policy problem of global climate change, with a particular focus on the role of technology policies in a robust, adaptive strategy for greenhouse gas abatement.
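The core of such ensemble-based reasoning is to evaluate each candidate policy across many computational experiments and compare policies by a robustness criterion such as worst-case regret. The sketch below illustrates that loop on a toy abatement problem; the cost model, parameter ranges, and policy names are purely illustrative.

```python
# Hedged sketch of the ensemble idea: evaluate candidate policies across many
# computational experiments (scenarios) and compare them by worst-case regret
# rather than by a single best-estimate run. The toy cost model is illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_scenarios = 1000
climate_sensitivity = rng.uniform(1.5, 6.0, n_scenarios)   # deeply uncertain inputs
tech_cost = rng.uniform(0.5, 2.0, n_scenarios)

policies = {"do_little": 0.1, "moderate": 0.5, "aggressive": 0.9}  # abatement levels

def cost(abatement, sens, tech):
    damage = sens * (1.0 - abatement) ** 2
    mitigation = tech * abatement ** 2
    return damage + mitigation

costs = {name: cost(a, climate_sensitivity, tech_cost) for name, a in policies.items()}
best_per_scenario = np.min(np.vstack(list(costs.values())), axis=0)
regret = {name: np.max(c - best_per_scenario) for name, c in costs.items()}
print(min(regret, key=regret.get))  # policy with smallest worst-case regret
```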
A systematic way for the cost reduction of density fitting methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kállay, Mihály, E-mail: kallay@mail.bme.hu
2014-12-28
We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
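The linear-algebra core of the NAF construction can be sketched as an SVD of the three-center integral matrix followed by truncation of small singular values. The snippet below uses a random stand-in for the integrals and an illustrative threshold; it is not the authors' implementation.

```python
# Hedged sketch: given the three-center integral matrix B (rows: orbital
# pairs, columns: auxiliary functions), natural auxiliary functions come from
# its SVD, and the fitting basis is truncated by dropping small singular
# values. The random B is a stand-in for real integrals.
import numpy as np

rng = np.random.default_rng(3)
n_pairs, n_aux = 500, 120
B = rng.standard_normal((n_pairs, n_aux)) @ np.diag(np.logspace(0, -6, n_aux))

U, s, Vt = np.linalg.svd(B, full_matrices=False)
keep = s > 1e-4 * s[0]          # truncation threshold (illustrative)
B_naf = B @ Vt[keep].T          # integrals re-expressed in the reduced NAF basis

print(n_aux, "->", keep.sum(), "auxiliary functions")
# Downstream contractions now scale with the reduced NAF dimension.
```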
This paper presents a new system for automated 2D-3D migration of chemicals in large databases with conformer multiplication. The main advantages of this system are its straightforward performance, reasonable execution time, simplicity, and applicability to building large 3D che...
NASA Astrophysics Data System (ADS)
Motta, Mario; Zhang, Shiwei
2018-05-01
We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
JPRS Report, Soviet Union, Economic Affairs
1988-10-05
for a more detailed and systematic approach to working up data on the GNP. We should mention that today the national income is computed within the...process of reproduction. This approach imposes a strict sequence in the development and analysis of data on the various phases of movement of the...variants for reorganization. The plans were reviewed on a competitive basis and this made it easier to select the best one. There was still one other
The ensemble switch method for computing interfacial tensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitz, Fabian; Virnau, Peter
2015-04-14
We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.
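The underlying free-energy bookkeeping can be illustrated with a generic thermodynamic integration over a coupling parameter; note this is the generic technique, not the ensemble switch machinery itself, and the integrand below is a toy placeholder.

```python
# Hedged sketch of generic thermodynamic integration: the free-energy
# difference between Hamiltonians H0 and H1 is the integral over the coupling
# parameter kappa of the ensemble average of dH/dkappa.
import numpy as np

def mean_dH_dkappa(kappa, n_samples=10_000, rng=np.random.default_rng(4)):
    """Placeholder for <H1 - H0> sampled at coupling kappa; a real
    implementation would run Monte Carlo at H(kappa) = (1-kappa)*H0 + kappa*H1."""
    x = rng.standard_normal(n_samples)
    return np.mean(x**2) + kappa          # toy, smooth integrand

kappas = np.linspace(0.0, 1.0, 11)
integrand = np.array([mean_dH_dkappa(k) for k in kappas])
delta_F = np.trapz(integrand, kappas)     # trapezoidal quadrature over kappa
print(delta_F)
```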
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...
2017-03-20
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Here, preliminary results show that up to 8.5x improvement at the selected kernel level was achieved with the first approach, and up to 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.
A computer-based tutorial structure for teaching and applying a complex process
Daniel L. Schmoldt; William G Bradshaw
1991-01-01
Economic accountability concerns for wildfire prevention planning have led to the development of an ignition management approach to fire problems. The Fire Loss Prevention Planning Process (FLPPP) systematizes fire problem analyses and concomitantly establishes a means for evaluating prescribed prevention programs. However, new users of the FLPPP have experienced...
Brain-computer interfacing under distraction: an evaluation study
NASA Astrophysics Data System (ADS)
Brandl, Stephanie; Frølich, Laura; Höhne, Johannes; Müller, Klaus-Robert; Samek, Wojciech
2016-10-01
Objective. While motor-imagery based brain-computer interfaces (BCIs) have been studied over many years by now, most of these studies have taken place in controlled lab settings. Bringing BCI technology into everyday life is still one of the main challenges in this field of research. Approach. This paper systematically investigates BCI performance under 6 types of distractions that mimic out-of-lab environments. Main results. We report results of 16 participants and show that the performance of the standard common spatial patterns (CSP) + regularized linear discriminant analysis classification pipeline drops significantly in this ‘simulated’ out-of-lab setting. We then investigate three methods for improving the performance: (1) artifact removal, (2) ensemble classification, and (3) a 2-step classification approach. While artifact removal does not enhance the BCI performance significantly, both ensemble classification and the 2-step classification combined with CSP significantly improve the performance compared to the standard procedure. Significance. Systematically analyzing out-of-lab scenarios is crucial when bringing BCI into everyday life. Algorithms must be adapted to overcome nonstationary environments in order to tackle real-world challenges.
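The baseline pipeline referred to above, common spatial patterns followed by regularized linear discriminant analysis, can be sketched as follows. The EEG epochs are random placeholders and the filter counts are illustrative; none of the study's artifact-removal, ensembling, or 2-step extensions are included.

```python
# Hedged sketch of a standard CSP + regularized LDA pipeline on toy "EEG" data.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
n_trials, n_channels, n_samples = 40, 22, 250
X = rng.standard_normal((2 * n_trials, n_channels, n_samples))  # epochs (placeholder)
y = np.array([0] * n_trials + [1] * n_trials)                   # imagery class labels

def class_cov(Xc):
    # average spatial covariance of all trials of one class
    return np.mean([np.cov(trial) for trial in Xc], axis=0)

C0, C1 = class_cov(X[y == 0]), class_cov(X[y == 1])
# CSP filters: generalized eigenvectors of (C0, C0 + C1)
eigvals, W = eigh(C0, C0 + C1)
filters = np.hstack([W[:, :3], W[:, -3:]])        # 3 filters per eigenvalue extreme

def features(Xe):
    proj = np.einsum("cf,tcs->tfs", filters, Xe)  # spatially filtered epochs
    return np.log(proj.var(axis=2))               # log-variance features

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # regularized LDA
clf.fit(features(X), y)
print(clf.score(features(X), y))  # training accuracy on the toy data
```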
Bridging the Gap between Human Judgment and Automated Reasoning in Predictive Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanfilippo, Antonio P.; Riensche, Roderick M.; Unwin, Stephen D.
2010-06-07
Events occur daily that impact the health, security and sustainable growth of our society. If we are to address the challenges that emerge from these events, anticipatory reasoning has to become an everyday activity. Strong advances have been made in using integrated modeling for analysis and decision making. However, a wider impact of predictive analytics is currently hindered by the lack of systematic methods for integrating predictive inferences from computer models with human judgment. In this paper, we present a predictive analytics approach that supports anticipatory analysis and decision-making through a concerted reasoning effort that interleaves human judgment and automated inferences. We describe a systematic methodology for integrating modeling algorithms within a serious gaming environment in which role-playing by human agents provides updates to model nodes and the ensuing model outcomes in turn influence the behavior of the human players. The approach ensures a strong functional partnership between human players and computer models while maintaining a high degree of independence and greatly facilitating the connection between model and game structures.
Chronopoulos, D
2017-01-01
A systematic expression quantifying the wave energy skewing phenomenon as a function of the mechanical characteristics of a non-isotropic structure is derived in this study. A structure of arbitrary anisotropy, layering and geometric complexity is modelled through Finite Elements (FEs) coupled to a periodic structure wave scheme. A generic approach for efficiently computing the angular sensitivity of the wave slowness for each wave type, direction and frequency is presented. The approach does not involve any finite differentiation scheme and is therefore computationally efficient and not prone to the associated numerical errors. Copyright © 2016 Elsevier B.V. All rights reserved.
Ab initio structure prediction of silicon and germanium sulfides for lithium-ion battery materials
NASA Astrophysics Data System (ADS)
Hsueh, Connie; Mayo, Martin; Morris, Andrew J.
Conventional experiment-based approaches to materials discovery, which can rely heavily on trial and error, are time-intensive and costly. We discuss approaches to coupling experimental and computational techniques in order to systematize, automate, and accelerate the process of materials discovery, which is of particular relevance to developing new battery materials. We use the ab initio random structure searching (AIRSS) method to conduct a systematic investigation of Si-S and Ge-S binary compounds in order to search for novel materials for lithium-ion battery (LIB) anodes. AIRSS is a high-throughput, density functional theory-based approach to structure prediction which has been successful at predicting the structures of LIB materials containing sulfur, silicon, and germanium. We propose a lithiation mechanism for Li-GeS2 anodes as well as report new, theoretically stable, layered and porous structures in the Si-S and Ge-S systems that pique experimental interest.
Aladjov, Hristo; Ankley, Gerald; Byrne, Hugh J.; de Knecht, Joop; Heinzle, Elmar; Klambauer, Günter; Landesmann, Brigitte; Luijten, Mirjam; MacKay, Cameron; Maxwell, Gavin; Meek, M. E. (Bette); Paini, Alicia; Perkins, Edward; Sobanski, Tomasz; Villeneuve, Dan; Waters, Katrina M.; Whelan, Maurice
2017-01-01
Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework provides a systematic approach for organizing knowledge that may support such inference. Likewise, computational models of biological systems at various scales provide another means and platform to integrate current biological understanding to facilitate inference and extrapolation. We argue that the systematic organization of knowledge into AOP frameworks can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. This concept was explored as part of a workshop on AOP-Informed Predictive Modeling Approaches for Regulatory Toxicology held September 24–25, 2015. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment, but also as guide for both AOP and complementary model development is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment. PMID:27994170
Revisiting cognitive and learning styles in computer-assisted instruction: not so useful after all.
Cook, David A
2012-06-01
In a previous systematic review, the author proposed that adaptation to learners' cognitive and learning styles (CLSs) could improve the efficiency of computer-assisted instruction (CAI). In the present article, he questions that proposition, arguing that CLSs do not make a substantive difference in CAI. To support this argument, the author performed an updated systematic literature search, pooled new findings with those from the previous review, and reinterpreted this evidence with a focus on aptitude-treatment interactions. (An aptitude-treatment interaction occurs when a student with attribute 1 learns better with instructional approach A than with approach B, whereas a student with attribute 2 learns better with instructional approach B). Of 65 analyses reported in 48 studies, only 9 analyses (14%) showed significant interactions between CLS and instructional approach. It seems that aptitude-treatment interactions with CLSs are at best infrequent and small in magnitude. There are several possible explanations for this lack of effect. First, the influence of strong instructional methods likely dominates the impact of CLSs. Second, current methods for assessing CLSs lack validity evidence and are inadequate to accurately characterize the individual learner. Third, theories are vague, and empiric evidence is virtually nonexistent to guide the planning of style-targeted instructional designs. Adaptation to learners' CLSs thus seems unlikely to enhance CAI. The author recommends that educators focus on employing strong instructional methods. Educators might also consider assessing and adapting to learners' prior knowledge or allowing learners to select among alternate instructional approaches.
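An aptitude-treatment interaction is usually tested by including a product term in a regression of the learning outcome on aptitude and instructional approach. A minimal sketch with simulated data (all effect sizes are made up) is shown below.

```python
# Hedged sketch: test an aptitude-treatment interaction by regressing the
# outcome on aptitude, instructional approach, and their product; a
# significant interaction means the approach's benefit depends on aptitude.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 200
aptitude = rng.normal(0, 1, n)                 # e.g., a cognitive-style score
approach = rng.integers(0, 2, n)               # 0 = method A, 1 = method B
# simulate a small interaction effect plus noise (illustrative numbers)
outcome = 0.5 * aptitude + 0.3 * approach + 0.2 * aptitude * approach + rng.normal(0, 1, n)

df = pd.DataFrame({"outcome": outcome, "aptitude": aptitude, "approach": approach})
model = smf.ols("outcome ~ aptitude * C(approach)", data=df).fit()
print(model.params)
print(model.pvalues["aptitude:C(approach)[T.1]"])  # p-value of the interaction term
```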
Overy, Catherine; Booth, George H; Blunt, N S; Shepherd, James J; Cleland, Deidre; Alavi, Ali
2014-12-28
Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.
A systematic approach to assessing the clinical significance of genetic variants.
Duzkale, H; Shen, J; McLaughlin, H; Alfares, A; Kelly, M A; Pugh, T J; Funke, B H; Rehm, H L; Lebo, M S
2013-11-01
Molecular genetic testing informs diagnosis, prognosis, and risk assessment for patients and their family members. Recent advances in low-cost, high-throughput DNA sequencing and computing technologies have enabled the rapid expansion of genetic test content, resulting in dramatically increased numbers of DNA variants identified per test. To address this challenge, our laboratory has developed a systematic approach to thorough and efficient assessments of variants for pathogenicity determination. We first search for existing data in publications and databases including internal, collaborative and public resources. We then perform full evidence-based assessments through statistical analyses of observations in the general population and disease cohorts, evaluation of experimental data from in vivo or in vitro studies, and computational predictions of potential impacts of each variant. Finally, we weigh all evidence to reach an overall conclusion on the potential for each variant to be disease causing. In this report, we highlight the principles of variant assessment, address the caveats and pitfalls, and provide examples to illustrate the process. By sharing our experience and providing a framework for variant assessment, including access to a freely available customizable tool, we hope to help move towards standardized and consistent approaches to variant assessment. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.
2014-12-01
Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distribution of scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth Science observing satellites and the magnitude of data from climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while managing the uncertainties of scientific conclusions derived from such capabilities. This talk will provide an overview of JPL's efforts in developing a comprehensive architectural approach to data science.
Lattice dynamics calculations based on density-functional perturbation theory in real space
NASA Astrophysics Data System (ADS)
Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias
2017-06-01
A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied for the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated exemplarily for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated and a systematic comparison with finite-difference approaches is performed both for finite (molecules) and extended (periodic) systems. Finally, scaling and scalability tests on massively parallel computer systems demonstrate the computational efficiency of the implementation.
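The finite-difference reference against which DFPT is compared builds the Hessian from displaced-geometry energies (or forces), mass-weights it, and diagonalizes it for harmonic frequencies. The sketch below does this for a toy one-dimensional potential; the energy function and masses are placeholders for a real electronic-structure code.

```python
# Hedged sketch of the finite-difference route to harmonic frequencies:
# central differences build the Hessian, which is mass-weighted and
# diagonalized. The energy function is a toy surrogate for a real code.
import numpy as np

masses = np.array([1.0, 1.0])          # illustrative masses
x0 = np.array([0.0, 1.0])              # equilibrium positions (1D toy model)

def energy(x):
    k = 0.5
    return 0.5 * k * (x[1] - x[0] - 1.0) ** 2   # harmonic bond, toy PES

def hessian_fd(x, h=1e-4):
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp, xpm, xmp, xmm = (x.copy() for _ in range(4))
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            H[i, j] = (energy(xpp) - energy(xpm) - energy(xmp) + energy(xmm)) / (4 * h * h)
    return H

H = hessian_fd(x0)
Hmw = H / np.sqrt(np.outer(masses, masses))     # mass-weighted Hessian
freqs_sq = np.linalg.eigvalsh(Hmw)              # one zero (translation), one finite mode
print(np.sqrt(np.clip(freqs_sq, 0, None)))
```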
Automatic Differentiation as a tool in engineering design
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M.; Hall, Laura E.
1992-01-01
Automatic Differentiation (AD) is a tool that systematically implements the chain rule of differentiation to obtain the derivatives of functions calculated by computer programs. In this paper, it is assessed as a tool for engineering design. The paper discusses the forward and reverse modes of AD, their computing requirements, and approaches to implementing AD. It continues with the application of two different AD tools to two medium-size structural analysis problems to generate sensitivity information typically necessary in an optimization or design situation. The paper concludes with the observation that AD is to be preferred to finite differencing in most cases, as long as sufficient computer storage is available.
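The forward mode mentioned above can be made concrete with dual numbers: each value carries its derivative and every operation applies the chain rule. The sketch below is a minimal illustration, not either of the AD tools evaluated in the paper.

```python
# Minimal sketch of forward-mode AD with dual numbers: each value carries its
# derivative, and the chain rule is applied operation by operation. Reverse
# mode would instead record the operations and propagate adjoints backwards.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot           # value and derivative
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)         # chain rule

def f(x):
    return 3 * x * x + sin(x)       # the "computer program" being differentiated

x = Dual(2.0, 1.0)                  # seed derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)                 # f(2) and f'(2) = 6*2 + cos(2)
```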
ERIC Educational Resources Information Center
Welch, Karla Conn; Hieb, Jeffrey; Graham, James
2015-01-01
Coursework that instills patterns of rigorous logical thought has long been a hallmark of the engineering curriculum. However, today's engineering students are expected to exhibit a wider range of thinking capabilities both to satisfy ABET requirements and to prepare the students to become successful practitioners. This paper presents the initial…
ERIC Educational Resources Information Center
Chieu, Vu Minh; Luengo, Vanda; Vadcard, Lucile; Tonetti, Jerome
2010-01-01
Cognitive approaches have been used for student modeling in intelligent tutoring systems (ITSs). Many of those systems have tackled fundamental subjects such as mathematics, physics, and computer programming. The change of the student's cognitive behavior over time, however, has not been considered and modeled systematically. Furthermore, the…
A Systematic Approach to Improving E-Learning Implementations in High Schools
ERIC Educational Resources Information Center
Pardamean, Bens; Suparyanto, Teddy
2014-01-01
This study was based on the current growing trend of implementing e-learning in high schools. Most endeavors have been inefficient; the objective here was to determine the initial steps that could be taken to improve these efforts by assessing a student population's computer skill levels and performance in an IT course. Demographic factors were…
Simulation of an enzyme-based glucose sensor
NASA Astrophysics Data System (ADS)
Sha, Xianzheng; Jablecki, Michael; Gough, David A.
2001-09-01
An important biosensor application is the continuous monitoring of blood or tissue fluid glucose concentration in people with diabetes. Our research focuses on the development of a glucose sensor based on potentiostatic oxygen electrodes and immobilized glucose oxidase for long-term application as an implant in tissues. As the sensor signal depends on many design variables, a trial-and-error approach to sensor optimization can be time-consuming. Here, the properties of an implantable glucose sensor are optimized by a systematic computational simulation approach.
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING.
Bayard, David S; Schumitzky, Alan
2010-03-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches.
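The particle-filter component of this scheme can be sketched as a bootstrap filter that jointly tracks the pendulum state and its unknown length; the dual-control and forward-dynamic-programming layers are not reproduced. All model constants below are illustrative.

```python
# Hedged sketch of a bootstrap particle filter for a noisy pendulum with an
# unknown length, estimated jointly with the state.
import numpy as np

rng = np.random.default_rng(7)
dt, g = 0.05, 9.81
true_len = 1.5

def step(theta, omega, length):
    # explicit Euler update of a frictionless pendulum
    omega = omega - (g / length) * np.sin(theta) * dt
    return theta + omega * dt, omega

# simulate noisy angle measurements from the "true" pendulum
theta, omega, meas = 0.5, 0.0, []
for _ in range(200):
    theta, omega = step(theta, omega, true_len)
    meas.append(theta + rng.normal(0, 0.05))

# particles carry (theta, omega, length); the length is unknown to the filter
N = 2000
parts = np.column_stack([rng.normal(0.5, 0.2, N), np.zeros(N), rng.uniform(0.5, 3.0, N)])
for z in meas:
    parts[:, 0], parts[:, 1] = step(parts[:, 0], parts[:, 1], parts[:, 2])
    parts[:, 0] += rng.normal(0, 0.01, N)                 # process noise
    w = np.exp(-0.5 * ((z - parts[:, 0]) / 0.05) ** 2) + 1e-300
    w /= w.sum()                                          # measurement update
    parts = parts[rng.choice(N, N, p=w)]                  # multinomial resampling

print(parts[:, 2].mean())   # posterior mean of the unknown pendulum length
```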
Use of agents to implement an integrated computing environment
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that they be made up of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
NASA Astrophysics Data System (ADS)
Landry, Guillaume; Parodi, Katia; Wildberger, Joachim E.; Verhaegen, Frank
2013-08-01
Dedicated methods of in-vivo verification of ion treatment based on the detection of secondary emitted radiation, such as positron-emission-tomography and prompt gamma detection require high accuracy in the assignment of the elemental composition. This especially concerns the content in carbon and oxygen, which are the most abundant elements of human tissue. The standard single-energy computed tomography (SECT) approach to carbon and oxygen concentration determination has been shown to introduce significant discrepancies in the carbon and oxygen content of tissues. We propose a dual-energy CT (DECT)-based approach for carbon and oxygen content assignment and investigate the accuracy gains of the method. SECT and DECT Hounsfield units (HU) were calculated using the stoichiometric calibration procedure for a comprehensive set of human tissues. Fit parameters for the stoichiometric calibration were obtained from phantom scans. Gaussian distributions with standard deviations equal to those derived from phantom scans were subsequently generated for each tissue for several values of the computed tomography dose index (CTDIvol). The assignment of %weight carbon and oxygen (%wC,%wO) was performed based on SECT and DECT. The SECT scheme employed a HU versus %wC,O approach while for DECT we explored a Zeff versus %wC,O approach and a (Zeff, ρe) space approach. The accuracy of each scheme was estimated by calculating the root mean square (RMS) error on %wC,O derived from the input Gaussian distribution of HU for each tissue and also for the noiseless case as a limiting case. The (Zeff, ρe) space approach was also compared to SECT by comparing RMS error for hydrogen and nitrogen (%wH,%wN). Systematic shifts were applied to the tissue HU distributions to assess the robustness of the method against systematic uncertainties in the stoichiometric calibration procedure. In the absence of noise the (Zeff, ρe) space approach showed more accurate %wC,O assignment (largest error of 2%) than the Zeff versus %wC,O and HU versus %wC,O approaches (largest errors of 15% and 30%, respectively). When noise was present, the accuracy of the (Zeff, ρe) space (DECT approach) was decreased but the RMS error over all tissues was lower than the HU versus %wC,O (SECT approach) (5.8%wC versus 7.5%wC at CTDIvol = 20 mGy). The DECT approach showed decreasing RMS error with decreasing image noise (or increasing CTDIvol). At CTDIvol = 80 mGy the RMS error over all tissues was 3.7% for DECT and 6.2% for SECT approaches. However, systematic shifts greater than ±5HU undermined the accuracy gains afforded by DECT at any dose level. DECT provides more accurate %wC,O assignment than SECT when imaging noise and systematic uncertainties in HU values are not considered. The presence of imaging noise degrades the DECT accuracy on %wC,O assignment but it remains superior to SECT. However, DECT was found to be sensitive to systematic shifts of human tissue HU.
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
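A minimal sketch of the kind of regression described, on synthetic data: regressing measured rolling forces on model predictions separates multiplicative and additive systematic errors (slope and intercept) from random scatter (residual standard deviation). All numbers below are invented for illustration, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic example: model predictions vs. measured rolling forces (arbitrary units).
    predicted = rng.uniform(800.0, 1500.0, size=200)
    # Pretend the model over-predicts by 5% and the measurements carry random scatter.
    measured = 0.95 * predicted + rng.normal(0.0, 20.0, size=predicted.size)

    # Ordinary least squares fit: measured = a * predicted + b.
    a, b = np.polyfit(predicted, measured, deg=1)
    residuals = measured - (a * predicted + b)

    print(f"slope a = {a:.3f} (1.0 means no multiplicative bias)")
    print(f"intercept b = {b:.1f} (0.0 means no additive bias)")
    print(f"residual std = {residuals.std(ddof=2):.1f} (random error plus measurement noise)")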
Lumbo-Pelvic-Hip Complex Pain in a Competitive Basketball Player
Reiman, Michael P.; Cox, Kara D.; Jones, Kay S.; Byrd, J. W.
2011-01-01
Establishing the cause of lumbo-pelvic-hip complex pain is a challenge for many clinicians. This case report describes the mechanism of injury, diagnostic process, surgical management, and rehabilitation of a female high school basketball athlete who sustained an injury when falling on her right side. Diagnostics included clinical examination, radiography of the spine and hip joint, magnetic resonance imaging arthrogram, 3-dimensional computed tomography scan, and computed tomography of the hip joint. A systematic multidisciplinary clinical approach resulted in the patient’s return to previous functional levels. PMID:23015993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giner, Emmanuel, E-mail: gnrmnl@unife.it; Angeli, Celestino, E-mail: anc@unife.it
2016-03-14
The present work describes a new method to compute accurate spin densities for open shell systems. The proposed approach follows two steps: first, it provides molecular orbitals that correctly take into account the spin delocalization; second, a proper CI treatment allows one to account for the spin polarization effect while keeping a restricted formalism and avoiding spin contamination. The main idea of the optimization procedure is based on the orbital relaxation of the various charge transfer determinants responsible for the spin delocalization. The algorithm is tested and compared to other existing methods on a series of organic and inorganic open shell systems. The results reported here show that the new approach (almost black-box) provides accurate spin densities at a reasonable computational cost, making it suitable for a systematic study of open shell systems.
Modeling and Simulation of High Dimensional Stochastic Multiscale PDE Systems at the Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kevrekidis, Ioannis
2017-03-22
The thrust of the proposal was to exploit modern data-mining tools in a way that will create a systematic, computer-assisted approach to the representation of random media -- and also to the representation of the solutions of an array of important physicochemical processes that take place in/on such media. A parsimonious representation/parametrization of the random media links directly (via uncertainty quantification tools) to good sampling of the distribution of random media realizations. It also links directly to modern multiscale computational algorithms (like the equation-free approach that has been developed in our group) and plays a crucial role in accelerating the scientific computation of solutions of nonlinear PDE models (deterministic or stochastic) in such media – both solutions in particular realizations of the random media, and estimation of the statistics of the solutions over multiple realizations (e.g. expectations).
Techniques for grid manipulation and adaptation. [computational fluid dynamics
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.
1992-01-01
Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
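The toy sketch below (not the CPF algorithm itself) illustrates the underlying idea of steering grid shape and spacing by moving a control point: a structured grid on the unit square is deformed by a smooth influence function centered on a user-moved control point. The grid size, control-point locations, and Gaussian influence width are arbitrary choices made for illustration only.

    import numpy as np

    # Toy structured grid on the unit square, deformed by dragging one "control point".
    ni, nj = 21, 21
    u, v = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj), indexing="ij")
    x, y = u.copy(), v.copy()

    control = np.array([0.5, 0.5])    # original control-point location (hypothetical)
    target = np.array([0.6, 0.45])    # where the user drags it

    # Gaussian influence function: nodes near the control point follow its displacement.
    r2 = (u - control[0]) ** 2 + (v - control[1]) ** 2
    weight = np.exp(-r2 / 0.05)
    x += weight * (target[0] - control[0])
    y += weight * (target[1] - control[1])

    print(x[10, 10], y[10, 10])   # the interior node has moved toward the dragged control point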
NASA Technical Reports Server (NTRS)
Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris
2011-01-01
A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variation of experimental data sets, such as UH60-A data, DNW test data and HART II test data.
Baumketner, Andrij
2009-01-01
The performance of reaction-field methods for treating electrostatic interactions is tested in simulations of ions solvated in water. The potentials of mean force between a sodium-chloride ion pair and between the side chains of lysine and aspartate are computed using umbrella sampling and molecular dynamics simulations. It is found that, in comparison with lattice sum calculations, the charge-group-based approaches to reaction-field treatments produce a large error in the association energy of the ions that exhibits a strong systematic dependence on the size of the simulation box. The atom-based implementation of the reaction field is seen to (i) improve the overall quality of the potential of mean force and (ii) remove the dependence on the size of the simulation box. It is suggested that the atom-based truncation be used in reaction-field simulations of mixed media. PMID:19292522
Internet and computer based interventions for cannabis use: a meta-analysis.
Tait, Robert J; Spijkerman, Renske; Riper, Heleen
2013-12-01
Worldwide, cannabis is the most prevalently used illegal drug and creates demand for prevention and treatment services that cannot be fulfilled using conventional approaches. Computer and Internet-based interventions may have the potential to meet this need. Therefore, we systematically reviewed the literature and conducted a meta-analysis on the effectiveness of this approach in reducing the frequency of cannabis use. We systematically searched online databases (Medline, PubMed, PsychINFO, Embase) for eligible studies and conducted a meta-analysis. Studies had to use a randomized design, be delivered either via the Internet or computer and report separate outcomes for cannabis use. The principal outcome measure was the frequency of cannabis use. Data were extracted from 10 studies and the meta-analysis involved 10 comparisons with 4,125 participants. The overall effect size was small but significant, g=0.16 (95% confidence interval (CI) 0.09-0.22, P<0.001) at post-treatment. Subgroup analyses did not reveal significant subgroup differences for key factors including type of analysis (intention-to-treat, completers only), type of control (active, waitlist), age group (11-16, 17+ years), gender composition (female only, mixed), type of intervention (prevention, 'treatment'), guided versus unguided programs, mode of delivery (Internet, computer), individual versus family dyad and venue (home, research setting). Also, no significant moderation effects were found for number of sessions and time to follow-up. Finally, there was no evidence of publication bias. Internet and computer interventions appear to be effective in reducing cannabis use in the short-term albeit based on data from few studies and across diverse samples. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
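For readers unfamiliar with how such a pooled effect size is obtained, the sketch below shows a generic inverse-variance (fixed-effect) pooling of per-study Hedges' g values. The study-level numbers are invented, and the actual review may have used a random-effects model; the sketch only illustrates the arithmetic behind a pooled g and its confidence interval.

    import numpy as np

    # Hypothetical per-study effect sizes (Hedges' g) and standard errors; the review
    # itself pooled 10 comparisons to obtain g = 0.16 (95% CI 0.09-0.22).
    g = np.array([0.10, 0.22, 0.05, 0.30, 0.18])
    se = np.array([0.08, 0.10, 0.06, 0.12, 0.09])

    w = 1.0 / se**2                          # inverse-variance weights (fixed-effect model)
    g_pooled = np.sum(w * g) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci_low, ci_high = g_pooled - 1.96 * se_pooled, g_pooled + 1.96 * se_pooled

    print(f"pooled g = {g_pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")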
Wittwehr, Clemens; Aladjov, Hristo; Ankley, Gerald; Byrne, Hugh J; de Knecht, Joop; Heinzle, Elmar; Klambauer, Günter; Landesmann, Brigitte; Luijten, Mirjam; MacKay, Cameron; Maxwell, Gavin; Meek, M E Bette; Paini, Alicia; Perkins, Edward; Sobanski, Tomasz; Villeneuve, Dan; Waters, Katrina M; Whelan, Maurice
2017-02-01
Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework provides a systematic approach for organizing knowledge that may support such inference. Likewise, computational models of biological systems at various scales provide another means and platform to integrate current biological understanding to facilitate inference and extrapolation. We argue that the systematic organization of knowledge into AOP frameworks can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. This concept was explored as part of a workshop on AOP-Informed Predictive Modeling Approaches for Regulatory Toxicology held September 24-25, 2015. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment, but also as guide for both AOP and complementary model development is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology.
Total System Design (TSD) Methodology Assessment.
1983-01-01
hardware implementation. Author: Martin Marietta Aerospace. Title: Total System Design Methodology. Source: Martin Marietta Technical Report MCR-79-646. ...systematic, rational approach to computer systems design is needed. Martin Marietta has produced a Total System Design Methodology to support such design... gathering and ordering. The purpose of the paper is to document the existing TSD methodology at Martin Marietta, describe the supporting tools, and
ERIC Educational Resources Information Center
Slauson, Gayla Jo; Carpenter, Donald; Snyder, Johnny
2011-01-01
Systems in the Foundations of Information Systems course can be used to connect with students in computer information systems programs; a systematic approach to beginning student relationship management in this course is helpful. The authors suggest that four systems be created in the Foundations Course. These theoretical systems include an…
ERIC Educational Resources Information Center
Boh, Larry E.; And Others
1987-01-01
A project to (1) develop and apply a microcomputer simulation program to enhance clinical medication problem solving in preclerkship and clerkship students and (2) perform an initial formative evaluation of the simulation is described. A systematic instructional design approach was used in applying the simulation to the disease state of rheumatoid…
Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao
2014-01-01
Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, Elise; Wolf, Rachel; Sako, Masao
2016-11-09
Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the 'Tripp' and 'Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of $\sim$1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying $\Omega_m$, $w_0$, $\alpha$ and $\beta$ and a magnitude offset parameter, with no systematics we obtain $\Delta(w_0) = w_0^{\rm true} - w_0^{\rm best\,fit} = -0.036 \pm 0.109$ (a $\sim$11% $1\sigma$ uncertainty) using the Tripp metric and $\Delta(w_0) = -0.055 \pm 0.068$ (a $\sim$7% $1\sigma$ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain $\Delta(w_0) = -0.062 \pm 0.132$ (a $\sim$14% $1\sigma$ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on $w_0$ with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
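The core ABC loop is simple to sketch. The toy example below infers the mean of a Gaussian by rejection ABC: draw a parameter from the prior, run the forward model, and accept the draw if a summary of the simulation lies within a tolerance of the observed summary. The forward model, summary statistic, prior, and tolerance are placeholders, far simpler than superABC's light-curve simulations and Tripp/Light Curve metrics.

    import numpy as np

    rng = np.random.default_rng(2)

    # "Observed" data: 500 draws from a Gaussian whose mean we pretend not to know.
    observed = rng.normal(0.3, 1.0, size=500)
    obs_summary = observed.mean()

    def forward_model(theta, n=500):
        # Forward simulation of the data given parameter theta.
        return rng.normal(theta, 1.0, size=n)

    accepted = []
    tolerance = 0.05
    for _ in range(20000):
        theta = rng.uniform(-2.0, 2.0)                  # draw from the prior
        sim_summary = forward_model(theta).mean()       # simulate, then summarize
        if abs(sim_summary - obs_summary) < tolerance:  # accept if close to the data
            accepted.append(theta)

    accepted = np.array(accepted)
    print(f"posterior mean ~ {accepted.mean():.2f}, sd ~ {accepted.std():.2f}")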
Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing
NASA Astrophysics Data System (ADS)
Klems, Markus; Nimis, Jens; Tai, Stefan
On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability for Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measure costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and to compare these costs to conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real world scenarios.
Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August
2018-07-01
Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
The paranasal sinuses: the last frontier in craniofacial biology.
Márquez, Samuel
2008-11-01
This special issue of the Anatomical Record explores the presence and diversity of paranasal sinuses in distinct vertebrate groups. The following topics are addressed in particular: dinosaur physiology; development; physiology; adaptation; imaging; and primate systematics. A variety of approaches and techniques are used to examine and characterize the diversity of paranasal sinus pneumatization in a wide spectrum of vertebrates. These range from dissection to histology, from plain X-rays to computer tomography, from comparative anatomy to natural experimental settings, from mathematical computation to computer model simulation, and 2D to 3D reconstructions. The articles in this issue are a combination of literature review and new, hypothesis-driven anatomical research that highlights the complexities of paranasal sinus growth and development; ontogenetic and disease processes; physiology; paleontology; primate systematics; and human evolution. The issue incorporates a wide variety of vertebrates, encompassing a period of over 65 million years, in an effort to offer insight into the diversity of the paranasal sinus complexes through time and space, and thereby providing a greater understanding and appreciation of these special spaces within the cranium. Copyright 2008 Wiley-Liss, Inc.
Chang, Shao-Hsia; Yu, Nan-Ying
2014-07-01
The objective of this study was to compare the effect of computer-assisted practice with the sensorimotor approach on the remediation of handwriting problems in children with dysgraphia. In a randomized controlled trial, experiments were conducted to verify the intervention effect. Forty two children with handwriting deficit were assigned to computer-assisted instruction, sensorimotor training, or a control group. Handwriting performance was measured using the elementary reading/writing test and computerized handwriting evaluation before and after 6 weeks of intervention. Repeated-measures ANOVA of changed scores were conducted to show whether statistically significant differences across the three groups were present. Significant differences in the elementary reading/writing test were found among the three groups. The computer group showed more significant improvements than the other two groups did. In the kinematic and kinetic analyses, the computer group showed promising results in the remediation of handwriting speed and fluency. This study provided clinical evidence for applying a computer-assisted handwriting program for children with dysgraphia. Clinicians and school teachers are provided with a systematic intervention for the improvement of handwriting difficulties. Copyright © 2014 Elsevier Ltd. All rights reserved.
A cloud computing based platform for sleep behavior and chronic diseases collaborative research.
Kuo, Mu-Hsing; Borycki, Elizabeth; Kushniruk, Andre; Huang, Yueh-Min; Hung, Shu-Hui
2014-01-01
The objective of this study is to propose a Cloud Computing based platform for sleep behavior and chronic disease collaborative research. The platform consists of two main components: (1) a sensing bed sheet with textile sensors to automatically record patient's sleep behaviors and vital signs, and (2) a service-oriented cloud computing architecture (SOCCA) that provides a data repository and allows for sharing and analysis of collected data. Also, we describe our systematic approach to implementing the SOCCA. We believe that the new cloud-based platform can provide nurse and other health professional researchers located in differing geographic locations with a cost effective, flexible, secure and privacy-preserved research environment.
Ajie, Whitney N; Chapman-Novakofski, Karen M
2014-06-01
The purpose of this systematic review was to evaluate recent research regarding the use of computer-based nutrition education interventions targeting adolescent overweight and obesity. Online databases were systematically searched using key words, and bibliographies of related articles were manually searched. Inclusion/exclusion criteria were applied and included studies evaluated for their ability to achieve their objectives and for quality using the Nutrition Evidence Library appraisal guidelines for research design and implementation. Of the 15 studies included, 10 were randomized controlled trials. Two studies targeted weight loss, 2 targeted weight maintenance, and 11 targeted dietary improvement with or without physical activity. At least half of in-school (60%) and nonschool interventions (80%) exhibited significantly positive effects on nutrition- or obesity-related variables. Small changes in diet, physical activity, knowledge, and self-efficacy were shown; however, few results were sustained long term. Recommendations included application of health behavior theory and computer tailoring for feedback messages. Future research should include thorough description of intervention content (messages, theory, multimedia, etc.), application of rigorous methodology, as well as consideration of covariates such as parental involvement and gender. With further research and evidentiary support, this approach to obesity-related nutrition education has the potential to be successful. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Nakayasu, Tomohiro; Yasugi, Masaki; Shiraishi, Soma; Uchida, Seiichi; Watanabe, Eiji
2017-01-01
We studied social approach behaviour in medaka fish using three-dimensional computer graphic (3DCG) animations based on the morphological features and motion characteristics obtained from real fish. This is the first study which used 3DCG animations and examined the relative effects of morphological and motion cues on social approach behaviour in medaka. Various visual stimuli, e.g., lack of motion, lack of colour, alternation in shape, lack of locomotion, lack of body motion, and normal virtual fish in which all four features (colour, shape, locomotion, and body motion) were reconstructed, were created and presented to fish using a computer display. Medaka fish presented with normal virtual fish spent a long time in proximity to the display, whereas time spent near the display was decreased in other groups when compared with normal virtual medaka group. The results suggested that the naturalness of visual cues contributes to the induction of social approach behaviour. Differential effects between body motion and locomotion were also detected. 3DCG animations can be a useful tool to study the mechanisms of visual processing and social behaviour in medaka.
Nakayasu, Tomohiro; Yasugi, Masaki; Shiraishi, Soma; Uchida, Seiichi; Watanabe, Eiji
2017-01-01
We studied social approach behaviour in medaka fish using three-dimensional computer graphic (3DCG) animations based on the morphological features and motion characteristics obtained from real fish. This is the first study which used 3DCG animations and examined the relative effects of morphological and motion cues on social approach behaviour in medaka. Various visual stimuli, e.g., lack of motion, lack of colour, alternation in shape, lack of locomotion, lack of body motion, and normal virtual fish in which all four features (colour, shape, locomotion, and body motion) were reconstructed, were created and presented to fish using a computer display. Medaka fish presented with normal virtual fish spent a long time in proximity to the display, whereas time spent near the display was decreased in other groups when compared with normal virtual medaka group. The results suggested that the naturalness of visual cues contributes to the induction of social approach behaviour. Differential effects between body motion and locomotion were also detected. 3DCG animations can be a useful tool to study the mechanisms of visual processing and social behaviour in medaka. PMID:28399163
Efficient anharmonic vibrational spectroscopy for large molecules using local-mode coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Xiaolu; Steele, Ryan P., E-mail: ryan.steele@utah.edu
This article presents a general computational approach for efficient simulations of anharmonic vibrational spectra in chemical systems. An automated local-mode vibrational approach is presented, which borrows techniques from localized molecular orbitals in electronic structure theory. This approach generates spatially localized vibrational modes, in contrast to the delocalization exhibited by canonical normal modes. The method is rigorously tested across a series of chemical systems, ranging from small molecules to large water clusters and a protonated dipeptide. It is interfaced with exact, grid-based approaches, as well as vibrational self-consistent field methods. Most significantly, this new set of reference coordinates exhibits a well-behaved spatial decay of mode couplings, which allows for a systematic, a priori truncation of mode couplings and increased computational efficiency. Convergence can typically be reached by including modes within only about 4 Å. The local nature of this truncation suggests particular promise for the ab initio simulation of anharmonic vibrational motion in large systems, where connection to experimental spectra is currently most challenging.
New Approach for Investigating Reaction Dynamics and Rates with Ab Initio Calculations.
Fleming, Kelly L; Tiwary, Pratyush; Pfaendtner, Jim
2016-01-21
Herein, we demonstrate a convenient approach to systematically investigate chemical reaction dynamics using the metadynamics (MetaD) family of enhanced sampling methods. Using a symmetric SN2 reaction as a model system, we applied infrequent metadynamics, a theoretical framework based on acceleration factors, to quantitatively estimate the rate of reaction from biased and unbiased simulations. A systematic study of the algorithm and its application to chemical reactions was performed by sampling over 5000 independent reaction events. Additionally, we quantitatively reweighted exhaustive free-energy calculations to obtain the reaction potential-energy surface and showed that infrequent metadynamics works to effectively determine Arrhenius-like activation energies. Exact agreement with unbiased high-temperature kinetics is also shown. The feasibility of using the approach on actual ab initio molecular dynamics calculations is then presented by using Car-Parrinello MD+MetaD to sample the same reaction using only 10-20 calculations of the rare event. Owing to the ease of use and comparatively low cost of computation, the approach has extensive potential applications for catalysis, combustion, pyrolysis, and enzymology.
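A minimal sketch of the infrequent-metadynamics bookkeeping, assuming a recorded series of bias-potential values along the biased trajectory: the physical transition time is the biased time multiplied by the acceleration factor, i.e., the trajectory average of exp(V_bias/kT). The bias profile, time step, and temperature below are synthetic placeholders, not values from the paper.

    import numpy as np

    kT = 2.494                      # kJ/mol at roughly 300 K
    dt = 0.002                      # ps between recorded bias values (hypothetical)

    # Hypothetical bias V(s(t), t) deposited along the biased trajectory (kJ/mol):
    # it grows and then saturates below the barrier, as the method requires.
    t_index = np.arange(20000)
    v_bias = 25.0 * (1.0 - np.exp(-1e-3 * t_index))

    alpha = np.mean(np.exp(v_bias / kT))    # acceleration factor
    t_biased = v_bias.size * dt             # biased time until the observed event (ps)
    t_physical = t_biased * alpha           # rescaled, unbiased transition time

    print(f"acceleration factor = {alpha:.3g}; physical time ~ {t_physical:.3g} ps")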
A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework
NASA Astrophysics Data System (ADS)
Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo
An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, Data Base Management Systems (DBMS), etc. in a heterogeneous environment, and need to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. Identification and hierarchy of the framework requirements and the corresponding solutions for the reference MDO frameworks, the general one and the aircraft oriented one were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improvement of the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without a human interference.
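As an illustration of the AHP step, the sketch below derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and checks Saaty's consistency ratio; the comparison values are invented and unrelated to the paper's actual framework-evaluation criteria.

    import numpy as np

    # Toy AHP step: weights from a pairwise-comparison matrix (entries are illustrative).
    A = np.array([
        [1.0, 3.0, 5.0],   # criterion 1 compared with criteria 1, 2, 3
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                         # normalized priority weights

    # Consistency ratio: Saaty's random index for n = 3 is about 0.58.
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)
    cr = ci / 0.58
    print("weights:", np.round(w, 3), " CR =", round(cr, 3))

In a QFD-style evaluation, weights like these would then scale the scores assigned to each framework requirement.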
NASA Technical Reports Server (NTRS)
Hanks, G. W.; Shomber, H. A.; Dethman, H. A.; Gratzer, L. B.; Maeshiro, A.; Gangsaas, D.; Blight, J. D.; Buchan, S. M.; Crumb, C. B.; Dorwart, R. J.
1981-01-01
An active controls technology (ACT) system architecture was selected based on current technology system elements, and optimal control theory was evaluated for use in analyzing and synthesizing ACT multiple control laws. The system selected employs three redundant computers to implement all of the ACT functions, four redundant smaller computers to implement the crucial pitch-augmented stability function, and a separate maintenance and display computer. The reliability objective of a probability of crucial function failure of less than 1 x 10^-9 per 1-hour flight can be met with current technology system components, if the software is assumed fault free and coverage approaching 1.0 can be provided. The optimal control theory approach to ACT control law synthesis yielded comparable control law performance much more systematically and directly than the classical s-domain approach. The ACT control law performance, although somewhat degraded by the inclusion of representative nonlinearities, remained quite effective. Certain high-frequency gust-load alleviation functions may require increased surface rate capability.
Covariant extension of the GPD overlap representation at low Fock states
Chouika, N.; Mezrag, C.; Moutarde, H.; ...
2017-12-26
Here, we present a novel approach to compute generalized parton distributions within the lightfront wave function overlap framework. We show how to systematically extend generalized parton distributions computed within the DGLAP region to the ERBL one, fulfilling at the same time both the polynomiality and positivity conditions. We exemplify our method using pion lightfront wave functions inspired by recent results of non-perturbative continuum techniques and algebraic nucleon lightfront wave functions. We also test the robustness of our algorithm on reggeized phenomenological parameterizations. This approach paves the way to a better understanding of the nucleon structure from non-perturbative techniques and to a unification of generalized parton distributions and transverse momentum dependent parton distribution functions phenomenology through lightfront wave functions.
Machine learning bandgaps of double perovskites
Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; Ramprasad, R.; Gubernatis, J. E.; Lookman, T.
2016-01-01
The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. The developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance. PMID:26783247
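A rough sketch of the statistical-learning step, assuming a small table of elemental descriptors per compound: kernel ridge regression is fit to a synthetic "bandgap" target. The features, target, and hyperparameters are placeholders; the study itself screened over 1.2 million candidate features against computed double-perovskite bandgaps.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)

    # Synthetic stand-in data: a few elemental descriptors per compound and a fake
    # bandgap that depends on them (e.g., electronegativities, lowest occupied KS levels).
    n = 300
    X = rng.uniform(0.5, 4.0, size=(n, 4))
    y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] * X[:, 3] + rng.normal(0, 0.1, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.3).fit(X_tr, y_tr)

    rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
    print(f"test RMSE = {rmse:.3f} (arbitrary units)")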
Hamel, Lauren M; Robbins, Lorraine B
2013-01-01
To: (1) determine the effect of computer- and web-based interventions on improving eating behavior (e.g. increasing fruit and vegetable consumption; decreasing fat consumption) and/or diet-related physical outcomes (e.g. body mass index) among children and adolescents; and (2) examine what elements enhance success. Children and adolescents are the heaviest they have ever been. Excess weight can carry into adulthood and result in chronic health problems. Because of the capacity to reach large audiences of children and adolescents to promote healthy eating, computer- and web-based interventions hold promise for helping to curb this serious trend. However, evidence to support this approach is lacking. Systematic review using guidelines from the Cochrane Effective Practice and Organisation of Care Group. The following databases were searched for studies from 1998-2011: CINAHL; PubMed; Cochrane; PsycINFO; ERIC; and Proquest. Fifteen randomized controlled trials or quasi-experimental studies were analysed in a systematic review. Although a majority of interventions resulted in statistically significant positive changes in eating behavior and/or diet-related physical outcomes, interventions that included post intervention follow-up, ranging from 3-18 months, showed that changes were not maintained. Elements, such as conducting the intervention at school or using individually tailored feedback, may enhance success. Computer- and web-based interventions can improve eating behavior and diet-related physical outcomes among children and adolescents, particularly when conducted in schools and individually tailored. These interventions can complement and support nursing efforts to give preventive care; however, maintenance efforts are recommended. © 2012 Blackwell Publishing Ltd.
Computer-Aided Analysis of Patents for Product Technology Maturity Forecasting
NASA Astrophysics Data System (ADS)
Liang, Yanhong; Gan, Dequan; Guo, Yingchun; Zhang, Peng
Product technology maturity forecasting is vital for any enterprise seeking to seize opportunities for innovation and remain competitive over the long term. The Theory of Inventive Problem Solving (TRIZ) is acknowledged both as a systematic methodology for innovation and as a powerful tool for technology forecasting. Based on TRIZ, the state of the art in product technology maturity and the limits of its application are discussed. With the application of text mining and patent analysis technologies, this paper proposes a computer-aided approach for product technology maturity forecasting. It can overcome the shortcomings of current methods.
The Nucleon Axial Form Factor and Staggered Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Aaron Scott
The study of neutrino oscillation physics is a major research goal of the worldwide particle physics program over the upcoming decade. Many new experiments are being built to study the properties of neutrinos and to answer questions about the phenomenon of neutrino oscillation. These experiments need precise theoretical cross sections in order to access fundamental neutrino properties. Neutrino oscillation experiments often use large atomic nuclei as scattering targets, which are challenging for theorists to model. Nuclear models rely on free-nucleon amplitudes as inputs. These amplitudes are constrained by scattering experiments with large nuclear targets that rely on the very same nuclear models. The work in this dissertation is the first step of a new initiative to isolate and compute elementary amplitudes with theoretical calculations to support the neutrino oscillation experimental program. Here, the effort focuses on computing the axial form factor, which is the largest contributor of systematic error in the primary signal measurement process for neutrino oscillation studies, quasielastic scattering. Two approaches are taken. First, neutrino scattering data on a deuterium target are reanalyzed with a model-independent parametrization of the axial form factor to quantify the present uncertainty in the free-nucleon amplitudes. The uncertainties on the free-nucleon cross section are found to be underestimated by about an order of magnitude compared to the ubiquitous dipole model parametrization. The second approach uses lattice QCD to perform a first-principles computation of the nucleon axial form factor. The Highly Improved Staggered Quark (HISQ) action is employed for both valence and sea quarks. The results presented in this dissertation are computed at physical pion mass for one lattice spacing. This work presents a computation of the axial form factor at zero momentum transfer, and forms the basis for a computation of the axial form factor momentum dependence with an extrapolation to the continuum limit and a full systematic error budget.
ERIC Educational Resources Information Center
Dominick, Gregory M.; Friedman, Daniela B.; Hoffman-Goetz, Laurie
2009-01-01
Objective: To systematically review definitions and descriptions of computer literacy as related to preventive health education programs. Method: A systematic review of the concept of computer literacy as related to preventive health education was conducted. Empirical studies published between 1994 and 2007 on prevention education programs with a…
Ahmad, Peer Zahoor; Quadri, S M K; Ahmad, Firdous; Bahar, Ali Newaz; Wani, Ghulam Mohammad; Tantary, Shafiq Maqbool
2017-12-01
Quantum-dot cellular automata (QCA) is an extremely small-size, low-power nanotechnology. It is a possible alternative to current CMOS technology. Reversible QCA logic is currently a key approach to reducing power losses. This paper presents a novel reversible logic gate called the F-Gate. It is simple in design and a powerful technique for implementing reversible logic. A systematic approach has been used to implement a novel single-layer reversible Full-Adder, Full-Subtractor and a Full Adder-Subtractor using the F-Gate. The proposed Full Adder-Subtractor achieves significant improvements in overall circuit parameters compared with the most cost-efficient previous designs that exploit the inevitable nano-level issues to perform arithmetic computing. The proposed designs have been verified and simulated using the QCADesigner tool ver. 2.0.3.
Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.
Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William
2017-01-01
Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.
Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments
NASA Astrophysics Data System (ADS)
Munsky, Brian; Shepherd, Douglas
2014-03-01
Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high-precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
On Establishing Big Data Wave Breakwaters with Analytics (Invited)
NASA Astrophysics Data System (ADS)
Riedel, M.
2013-12-01
The Research Data Alliance Big Data Analytics (RDA-BDA) Interest Group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs for utilizing large quantities of data. RDA-BDA seeks to analyze different scientific domain applications and their potential use of various big data analytics techniques. A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. These combinations are complex since a wide variety of different data analysis algorithms exist (e.g. specific algorithms using GPUs for analyzing brain images) that need to work together with multiple analytical tools, ranging from simple (iterative) map-reduce methods (e.g. with Apache Hadoop or Twister) to sophisticated higher-level frameworks that leverage machine learning algorithms (e.g. Apache Mahout). These computational analysis techniques are often augmented with visual analytics techniques (e.g. computational steering on large-scale high performance computing platforms) to put human judgement into the analysis loop, or with new approaches to databases that are designed to support new forms of unstructured or semi-structured data as opposed to the rather traditional structured databases (e.g. relational databases). More recently, data analysis and the underpinning analytics frameworks also have to consider the energy footprints of the underlying resources. To sum up, the aim of this talk is to provide the pieces of information needed to understand big data analytics in the context of science and engineering, using the aforementioned classification as the lighthouse and as the frame of reference for a systematic approach. This talk will provide insights into big data analytics methods in the context of science within various communities and offer different views of how correlation- and causality-based approaches provide complementary methods to advance science and engineering today. The RDA Big Data Analytics Group seeks to understand which approaches are not only technically feasible, but also scientifically feasible. The lighthouse goal of the RDA Big Data Analytics Group is a classification of clever combinations of various technologies and scientific applications in order to provide clear recommendations to the scientific community on which approaches are technically and scientifically feasible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wittwehr, Clemens; Aladjov, Hristo; Ankley, Gerald
Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework has emerged as a systematic approach for organizing knowledge that supports such inference. We argue that this systematic organization of knowledge can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment, but also as a guide for both AOP and complementary model development, is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment.
Shen, Weifeng; Jiang, Libing; Zhang, Mao; Ma, Yuefeng; Jiang, Guanyu; He, Xiaojun
2014-01-01
To systematically review the research methods for mass casualty incidents (MCI) and introduce the concept and characteristics of complexity science and the artificial systems, computational experiments and parallel execution (ACP) method. We searched PubMed, Web of Knowledge, China Wanfang and China Biology Medicine (CBM) databases for relevant studies. Searches were performed without year or language restrictions and used combinations of the following key words: "mass casualty incident", "MCI", "research method", "complexity science", "ACP", "approach", "science", "model", "system" and "response". Articles were retrieved using the above keywords and only those involving the research methods of MCI were enrolled. Research methods for MCI have increased markedly over the past few decades. At present, the dominant research methods for MCI are the theory-based approach, the empirical approach, evidence-based science, mathematical modeling and computer simulation, simulation experiments, experimental methods, the scenario approach and complexity science. This article provides an overview of the development of research methodology for MCI. The progress of routine research approaches and of complexity science is briefly presented in this paper. Furthermore, the authors conclude that the reductionism underlying exact science is not suitable for complex MCI systems, and that the only feasible alternative is complexity science. Finally, this summary is followed by a review showing that the ACP method, which combines artificial systems, computational experiments and parallel execution, provides a new way to address research on complex MCI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Möller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
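The sparse-map algebra lends itself to a compact illustration. In the sketch below a sparse map is represented as a Python dict from indices to sets of indices, with the inversion, chaining and intersection operations mentioned above; the example index sets (atoms, shells, auxiliary shells) are hypothetical, and the real implementation is a compiled tensor library rather than Python dictionaries.

    # Minimal sketch of "sparse maps" as dicts from indices in one set to sets of
    # indices in another, with the elementary operations named in the abstract.
    def invert(m):
        out = {}
        for i, targets in m.items():
            for j in targets:
                out.setdefault(j, set()).add(i)
        return out

    def chain(m1, m2):
        # (m2 after m1)[i] = union of m2[j] over all j in m1[i]
        return {i: set().union(*(m2.get(j, set()) for j in targets))
                for i, targets in m1.items()}

    def intersect(m1, m2):
        return {i: m1[i] & m2.get(i, set()) for i in m1}

    # Example: atoms -> basis-function shells, shells -> auxiliary (fitting) shells.
    atom_to_shell = {0: {0, 1}, 1: {2}}
    shell_to_aux = {0: {0, 1}, 1: {1, 2}, 2: {3}}

    atom_to_aux = chain(atom_to_shell, shell_to_aux)
    print(atom_to_aux)            # {0: {0, 1, 2}, 1: {3}}
    print(invert(atom_to_shell))  # {0: {0}, 1: {0}, 2: {1}}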
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, M.A.; Craig, J.I.
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both systems engineering and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that each agent be built from three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
Automated computation of autonomous spectral submanifolds for nonlinear modal analysis
NASA Astrophysics Data System (ADS)
Ponsioen, Sten; Pedergnana, Tiemo; Haller, George
2018-04-01
We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.
Performance Modeling of Experimental Laser Lightcrafts
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Chen, Yen-Sen; Liu, Jiwen; Myrabo, Leik N.; Mead, Franklin B., Jr.; Turner, Jim (Technical Monitor)
2001-01-01
A computational plasma aerodynamics model is developed to study the performance of a laser-propelled Lightcraft. The computational methodology is based on a time-accurate, three-dimensional, finite-difference, chemically reacting, unstructured grid, pressure-based formulation. The underlying physics are added and tested systematically using a building-block approach. The physics modeled include non-equilibrium thermodynamics, non-equilibrium air-plasma finite-rate kinetics, specular ray tracing, laser beam energy absorption and refraction by plasma, non-equilibrium plasma radiation, and plasma resonance. A series of transient computations is performed at several laser pulse energy levels, and the simulated physics are discussed and compared with test data and the literature. The predicted coupling coefficients for the Lightcraft compared reasonably well with those from tests conducted on a pendulum apparatus.
An introduction to quantum machine learning
NASA Astrophysics Data System (ADS)
Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco
2015-04-01
Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers investigated if quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language of quantum theory. This contribution gives a systematic overview of the emerging field of quantum machine learning. It presents the approaches as well as technical details in an accessible way, and discusses the potential of a future theory of quantum learning.
Automatic differentiation as a tool in engineering design
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois; Hall, Laura E.
1992-01-01
Automatic Differentiation (AD) is a tool that systematically implements the chain rule of differentiation to obtain the derivatives of functions calculated by computer programs. AD is assessed as a tool for engineering design. The forward and reverse modes of AD, their computing requirements, as well as approaches to implementing AD are discussed. The application of two different tools to two medium-size structural analysis problems to generate sensitivity information typically necessary in an optimization or design situation is also discussed. The observation is made that AD is to be preferred to finite differencing in most cases, as long as sufficient computer storage is available; in some instances, AD may be the alternative to consider in lieu of analytical sensitivity analysis.
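As an illustration of the chain-rule propagation described above, the following is a minimal sketch of forward-mode automatic differentiation using dual numbers; it is not one of the AD tools assessed in the paper, and the function being differentiated is arbitrary.

```python
# Minimal sketch of forward-mode automatic differentiation using dual
# numbers. This illustrates the chain-rule propagation the abstract
# describes, not the specific AD tools assessed in the paper.
import math

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def sin(x):
    # Chain rule: d/dt sin(u(t)) = cos(u) * u'
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

# d/dx [x * sin(x)] at x = 1.0, seeded with derivative 1
x = Dual(1.0, 1.0)
y = x * sin(x)
print(y.value, y.deriv)   # f(1) and f'(1) = sin(1) + cos(1)
```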
Koivunen, Marita; Välimäki, Maritta; Jakobsson, Tiina; Pitkänen, Anneli
2008-01-01
This article describes the systematic process in which an evidence-based approach was used to develop a curriculum designed to support the computer and Internet skills of nurses in psychiatric hospitals in Finland. The pressure on organizations to have skilled and motivated nurses who use modern information and communication technology in health care organizations has increased due to rapid technology development at the international and national levels. However, less frequently has the development of those computer education curricula been based on evidence-based knowledge. First, we identified psychiatric nurses' learning experiences and barriers to computer use by examining written essays. Second, nurses' computer skills were surveyed. Last, evidence from the literature was scrutinized to find effective methods that can be used to teach and learn computer use in health care. This information was integrated and used for the development process of an education curriculum designed to support nurses' computer and Internet skills.
NASA Astrophysics Data System (ADS)
Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel
2017-04-01
The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different than others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical approaches through an Observing System Simulation Experiment (OSSE) on a global scale. By changing the size of the random and systematic errors in the OSSE, we can determine the corresponding spatial and temporal resolutions at which useful flux signals could be detected from the OCO-2 data.
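The following toy sketch illustrates the dimension-reduction and temporal-evolution idea described above: a flux field expressed through spatial basis functions whose coefficients follow a first-order vector autoregressive process. All array sizes, the transition matrix, and the noise covariance are assumed placeholders, not the authors' configuration.

```python
# Illustrative sketch (not the authors' code): flux field represented by
# a small set of spatial basis functions whose coefficients evolve as a
# first-order vector autoregressive (VAR(1)) process. All dimensions and
# parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_basis, n_steps = 500, 10, 12      # grid cells, basis functions, time steps

Phi = rng.normal(size=(n_grid, n_basis))    # spatial basis functions (columns)
A = 0.8 * np.eye(n_basis)                   # VAR(1) transition matrix (assumed)
Q = 0.1 * np.eye(n_basis)                   # innovation covariance (assumed)

alpha = np.zeros(n_basis)                   # basis-function coefficients
fluxes = []
for t in range(n_steps):
    alpha = A @ alpha + rng.multivariate_normal(np.zeros(n_basis), Q)
    fluxes.append(Phi @ alpha)              # flux field at time t

fluxes = np.stack(fluxes)                   # shape: (n_steps, n_grid)
print(fluxes.shape)
```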
Computational modeling in melanoma for novel drug discovery.
Pennisi, Marzio; Russo, Giulia; Di Salvatore, Valentina; Candido, Saverio; Libra, Massimo; Pappalardo, Francesco
2016-06-01
There is a growing body of evidence highlighting the applications of computational modeling in the field of biomedicine. It has recently been applied to the in silico analysis of cancer dynamics. In the era of precision medicine, this analysis may allow the discovery of new molecular targets useful for the design of novel therapies and for overcoming resistance to anticancer drugs. According to its molecular behavior, melanoma represents an interesting tumor model in which computational modeling can be applied. Melanoma is an aggressive tumor of the skin with a poor prognosis for patients with advanced disease as it is resistant to current therapeutic approaches. This review discusses the basics of computational modeling in melanoma drug discovery and development. Discussion includes the in silico discovery of novel molecular drug targets, the optimization of immunotherapies and personalized medicine trials. Mathematical and computational models are gradually being used to help understand biomedical data produced by high-throughput analysis. The use of advanced computer models allowing the simulation of complex biological processes provides hypotheses and supports experimental design. The research in fighting aggressive cancers, such as melanoma, is making great strides. Computational models represent the key component to complement these efforts. Due to the combinatorial complexity of new drug discovery, a systematic approach based only on experimentation is not possible. Computational and mathematical models are necessary for bringing cancer drug discovery into the era of omics, big data and personalized medicine.
Baltzer, Pascal Andreas Thomas; Freiberg, Christian; Beger, Sebastian; Vag, Tibor; Dietzel, Matthias; Herzog, Aimee B; Gajda, Mieczyslaw; Camara, Oumar; Kaiser, Werner A
2009-09-01
Enhancement characteristics after administration of a contrast agent are regarded as a major criterion for differential diagnosis in magnetic resonance mammography (MRM). However, no consensus exists about the best measurement method to assess contrast enhancement kinetics. This systematic investigation was performed to compare visual estimation with manual region of interest (ROI) and computer-aided diagnosis (CAD) analysis for time curve measurements in MRM. A total of 329 patients undergoing surgery after MRM (1.5 T) were analyzed prospectively. Dynamic data were measured using visual estimation, including ROI as well as CAD methods, and classified depending on initial signal increase and delayed enhancement. Pathology revealed 469 lesions (279 malignant, 190 benign). Kappa agreement between the methods ranged from 0.78 to 0.81. Diagnostic accuracies of 74.4% (visual), 75.7% (ROI), and 76.6% (CAD) were found without statistical significant differences. According to our results, curve type measurements are useful as a diagnostic criterion in breast lesions irrespective of the method used.
Systematic network coding for two-hop lossy transmissions
NASA Astrophysics Data System (ADS)
Li, Ye; Blostein, Steven; Chan, Wai-Yip
2015-12-01
In this paper, we consider network transmissions over a single or multiple parallel two-hop lossy paths. These scenarios occur in applications such as sensor networks or WiFi offloading. Random linear network coding (RLNC), where previously received packets are re-encoded at intermediate nodes and forwarded, is known to be a capacity-achieving approach for these networks. However, a major drawback of RLNC is its high encoding and decoding complexity. In this work, a systematic network coding method is proposed. We show through both analysis and simulation that the proposed method achieves higher end-to-end rate as well as lower computational cost than RLNC for finite field sizes and finite-sized packet transmissions.
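A minimal sketch of the systematic-coding idea over GF(2) is given below: the source packets are forwarded uncoded, followed by random XOR combinations as repair packets. This only illustrates the general principle; the authors' two-hop scheme, field size, and re-encoding rules are not reproduced here.

```python
# Minimal sketch of systematic coding over GF(2): the K source packets
# are sent uncoded (the "systematic" part) followed by random XOR
# combinations as repair packets. Illustrative only; not the authors'
# two-hop protocol or field size.
import random

def encode_systematic(source, n_repair, seed=1):
    random.seed(seed)
    coded = list(source)                          # systematic packets first
    for _ in range(n_repair):
        mask = [random.randint(0, 1) for _ in source]
        repair = 0
        for bit, pkt in zip(mask, source):
            if bit:
                repair ^= pkt                     # GF(2) addition is XOR
        coded.append(repair)
    return coded

source = [0b1010, 0b0111, 0b1100]                 # toy "packets" as integers
print(encode_systematic(source, n_repair=2))
```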
Active Learning to Understand Infectious Disease Models and Improve Policy Making
Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel
2014-01-01
Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings. PMID:24743387
Active learning to understand infectious disease models and improve policy making.
Willem, Lander; Stijven, Sean; Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel
2014-04-01
Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings.
Methodological approaches of health technology assessment.
Goodman, C S; Ahn, R
1999-12-01
In this era of evolving health care systems throughout the world, technology remains the substance of health care. Medical informatics comprises a growing contribution to the technologies used in the delivery and management of health care. Diverse, evolving technologies include artificial neural networks, computer-assisted surgery, computer-based patient records, hospital information systems, and more. Decision-makers increasingly demand well-founded information to determine whether or how to develop these technologies, allow them on the market, acquire them, use them, pay for their use, and more. The development and wider use of health technology assessment (HTA) reflects this demand. While HTA offers systematic, well-founded approaches for determining the value of medical informatics technologies, HTA must continue to adapt and refine its methods in response to these evolving technologies. This paper provides a basic overview of HTA principles and methods.
A genetic epidemiology approach to cyber-security.
Gil, Santiago; Kott, Alexander; Barabási, Albert-László
2014-07-16
While much attention has been paid to the vulnerability of computer networks to node and link failure, there is limited systematic understanding of the factors that determine the likelihood that a node (computer) is compromised. We therefore collect threat log data in a university network to study the patterns of threat activity for individual hosts. We relate this information to the properties of each host as observed through network-wide scans, establishing associations between the network services a host is running and the kinds of threats to which it is susceptible. We propose a methodology to associate services to threats inspired by the tools used in genetics to identify statistical associations between mutations and diseases. The proposed approach allows us to determine probabilities of infection directly from observation, offering an automated high-throughput strategy to develop comprehensive metrics for cyber-security.
A genetic epidemiology approach to cyber-security
Gil, Santiago; Kott, Alexander; Barabási, Albert-László
2014-01-01
While much attention has been paid to the vulnerability of computer networks to node and link failure, there is limited systematic understanding of the factors that determine the likelihood that a node (computer) is compromised. We therefore collect threat log data in a university network to study the patterns of threat activity for individual hosts. We relate this information to the properties of each host as observed through network-wide scans, establishing associations between the network services a host is running and the kinds of threats to which it is susceptible. We propose a methodology to associate services to threats inspired by the tools used in genetics to identify statistical associations between mutations and diseases. The proposed approach allows us to determine probabilities of infection directly from observation, offering an automated high-throughput strategy to develop comprehensive metrics for cyber-security. PMID:25028059
Conceptual design of distillation-based hybrid separation processes.
Skiborowski, Mirko; Harwardt, Andreas; Marquardt, Wolfgang
2013-01-01
Hybrid separation processes combine different separation principles and constitute a promising design option for the separation of complex mixtures. Particularly, the integration of distillation with other unit operations can significantly improve the separation of close-boiling or azeotropic mixtures. Although the design of single-unit operations is well understood and supported by computational methods, the optimal design of flowsheets of hybrid separation processes is still a challenging task. The large number of operational and design degrees of freedom requires a systematic and optimization-based design approach. To this end, a structured approach, the so-called process synthesis framework, is proposed. This article reviews available computational methods for the conceptual design of distillation-based hybrid processes for the separation of liquid mixtures. Open problems are identified that must be addressed to finally establish a structured process synthesis framework for such processes.
Machine learning bandgaps of double perovskites
Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; ...
2016-01-19
The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. The developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance.
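A generic sketch of the informatics-based learning step described above is shown below, using kernel ridge regression on placeholder features. The feature names (electronegativities, lowest occupied Kohn-Sham levels) follow the abstract, but the data and model settings here are synthetic assumptions, not the authors' dataset or trained model.

```python
# Generic sketch of the statistical-learning approach described: learn
# bandgaps from simple elemental features. The data here are random
# placeholders, not the authors' DFT dataset.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_compounds, n_features = 200, 8          # e.g. per-site electronegativity and KS level
X = rng.normal(size=(n_compounds, n_features))
y = 1.5 + X @ rng.normal(size=n_features) * 0.3 + rng.normal(scale=0.1, size=n_compounds)

model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.1)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print("CV MAE (synthetic units):", -scores.mean())
```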
Semiclassical Virasoro blocks from AdS 3 gravity
Hijano, Eliot; Kraus, Per; Perlmutter, Eric; ...
2015-12-14
We present a unified framework for the holographic computation of Virasoro conformal blocks at large central charge. In particular, we provide bulk constructions that correctly reproduce all semiclassical Virasoro blocks that are known explicitly from conformal field theory computations. The results revolve around the use of geodesic Witten diagrams, recently introduced in [1], evaluated in locally AdS 3 geometries generated by backreaction of heavy operators. We also provide an alternative computation of the heavy-light semiclassical block — in which two external operators become parametrically heavy — as a certain scattering process involving higher spin gauge fields in AdS 3; this approach highlights the chiral nature of Virasoro blocks. Finally, these techniques may be systematically extended to compute corrections to these blocks and to interpolate amongst the different semiclassical regimes.
NASA Astrophysics Data System (ADS)
Crowell, Andrew Rippetoe
This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P.; Marini, J.
1979-01-01
The implementation of satellite-based Doppler positioning systems frequently requires the recovery of transmitter position from a single pass of Doppler data. The least-squares approach to the problem yields conjugate solutions on either side of the satellite subtrack. It is important to develop a procedure for choosing the proper solution which is correct in a high percentage of cases. A test for ambiguity resolution which is the most powerful in the sense that it maximizes the probability of a correct decision is derived. When systematic error sources are properly included in the least-squares reduction process to yield an optimal solution the test reduces to choosing the solution which provides the smaller valuation of the least-squares loss function. When systematic error sources are ignored in the least-squares reduction, the most powerful test is a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudoinverse of a reduced-rank square matrix. A formula for computing the power of the most powerful test is provided. Numerical examples are included in which the power of the test is computed for situations that are relevant to the design of a satellite-aided search and rescue system.
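The decision rule described above, choosing the mirror solution with the smaller least-squares loss, can be sketched as follows. The "Doppler model" in this toy example is a stand-in distance function, not an orbital model.

```python
# Toy illustration of the decision rule described: given two candidate
# (mirror-image) position solutions, choose the one with the smaller
# least-squares loss against the observations. The "model" is a
# stand-in function, not an orbital Doppler model.
import numpy as np

def loss(candidate, observations, model):
    residuals = observations - model(candidate)
    return float(residuals @ residuals)            # sum of squared residuals

def resolve_ambiguity(cand_a, cand_b, observations, model):
    if loss(cand_a, observations, model) <= loss(cand_b, observations, model):
        return cand_a
    return cand_b

# Stand-in "model": distances from the candidate to a few known points
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
model = lambda p: np.linalg.norm(stations - p, axis=1)

true_pos = np.array([0.3, 0.7])
obs = model(true_pos) + np.random.default_rng(0).normal(scale=0.01, size=3)
print(resolve_ambiguity(np.array([0.3, 0.7]), np.array([0.3, -0.7]), obs, model))
```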
NASA Astrophysics Data System (ADS)
Assadi, Amir H.
2001-11-01
Perceptual geometry is an emerging field of interdisciplinary research whose objectives focus on study of geometry from the perspective of visual perception, and in turn, apply such geometric findings to the ecological study of vision. Perceptual geometry attempts to answer fundamental questions in perception of form and representation of space through synthesis of cognitive and biological theories of visual perception with geometric theories of the physical world. Perception of form and space are among fundamental problems in vision science. In recent cognitive and computational models of human perception, natural scenes are used systematically as preferred visual stimuli. Among key problems in perception of form and space, we have examined perception of geometry of natural surfaces and curves, e.g. as in the observer's environment. Besides a systematic mathematical foundation for a remarkably general framework, the advantages of the Gestalt theory of natural surfaces include a concrete computational approach to simulate or recreate images whose geometric invariants and quantities might be perceived and estimated by an observer. The latter is at the very foundation of understanding the nature of perception of space and form, and the (computer graphics) problem of rendering scenes to visually invoke virtual presence.
Marlière, Daniel Amaral Alves; Demétrio, Maurício Silva; Picinini, Leonardo Santos; De Oliveira, Rodrigo Guerra; Chaves Netto, Henrique Duque De Miranda
2018-01-01
Assess clinical studies regarding accuracy between virtual planning of computer-guided surgery and actual outcomes of dental implant placements in total edentulous alveolar ridges. A PubMed search was performed to identify only clinical studies published between 2011 and 2016, searching the following combinations of keywords: “Accuracy AND Computer-Assisted Surgery AND Dental Implants.” Study designs were identified using the terms: Case Reports, Clinical Study, Randomized Controlled Trial, Systematic Reviews, Meta-Analysis, humans. Level of agreement between the authors in the study selection process was substantial (k = 0.767), and agreement on study eligibility was considered excellent (k = 0.863). Seven articles were included in this review. They describe the use of bone- and mucosa-supported guides, with minimum and maximum mean deviations of 1.85–8.4° (angular), 0.17–2.17 mm (cervical), and 0.77–2.86 mm (apical). Angular deviations were least accurate in the maxilla, and accuracy of cervical and apical deviations was also predominantly lower in the maxilla. Despite the similar deviation-measurement approaches described, the clinical relevance of this study is to warn surgeons that safety margins should be maintained in clinical situations. PMID:29657542
Zhang, Guo-Qiang; Xing, Guangming; Cui, Licong
2018-04-01
One of the basic challenges in developing structural methods for systematic auditing of the quality of biomedical ontologies is the computational cost usually involved in exhaustive sub-graph analysis. We introduce ANT-LCA, a new algorithm for computing all non-trivial lowest common ancestors (LCA) of each pair of concepts in the hierarchical order induced by an ontology. The computation of LCA is a fundamental step for the non-lattice approach to ontology quality assurance. Distinct from existing approaches, ANT-LCA only computes LCAs for non-trivial pairs, those having at least one common ancestor. To skip all trivial pairs that may be of no practical interest, ANT-LCA employs a simple but innovative algorithmic strategy combining topological order and dynamic programming to keep track of non-trivial pairs. We provide correctness proofs and demonstrate a substantial reduction in computational time for the two largest biomedical ontologies: SNOMED CT and Gene Ontology (GO). ANT-LCA achieved an average computation time of 30 and 3 sec per version for SNOMED CT and GO, respectively, about 2 orders of magnitude faster than the best known approaches. Our algorithm overcomes a fundamental computational barrier in sub-graph based structural analysis of large ontological systems. It enables the implementation of a new breed of structural auditing methods that not only identify potential problematic areas, but also automatically suggest changes to fix the issues. Such structural auditing methods can lead to more effective tools supporting ontology quality assurance work. Copyright © 2018 Elsevier Inc. All rights reserved.
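A much-simplified illustration of computing lowest common ancestors in a concept hierarchy is sketched below: ancestor sets are accumulated by dynamic programming over a topological order, and the LCAs of a pair are the maximal elements of their common ancestors. This is not the ANT-LCA implementation, and the toy ontology is invented.

```python
# Simplified illustration only (not the ANT-LCA algorithm): ancestor sets
# are accumulated in topological order via dynamic programming, and the
# LCAs of a pair are the maximal elements of their common ancestors.
from graphlib import TopologicalSorter

parents = {                       # concept -> set of parents (toy ontology)
    "A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}, "E": {"C"},
}

order = list(TopologicalSorter(parents).static_order())
ancestors = {n: set() for n in parents}
for n in order:                   # DP pass over the topological order
    for p in parents[n]:
        ancestors[n] |= {p} | ancestors[p]

def lowest_common_ancestors(x, y):
    common = (ancestors[x] | {x}) & (ancestors[y] | {y})
    # Keep only the "lowest" ones: those not strictly above another common ancestor
    return {c for c in common if not any(c in ancestors[d] for d in common)}

print(lowest_common_ancestors("D", "E"))   # {'C'}
```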
Pronk, Nicolaas P; Boucher, Jackie L; Gehling, Eve; Boyle, Raymond G; Jeffery, Robert W
2002-10-01
To describe an integrated, operational platform from which mail- and telephone-based health promotion programs are implemented and to specifically relate this approach to weight management programming in a managed care setting. In-depth description of essential systems structures, including people, computer technology, and decision-support protocols. The roles of support staff, counselors, a librarian, and a manager in delivering a weight management program are described. Information availability using computer technology is a critical component in making this system effective and is presented according to its architectural layout and design. Protocols support counselors and administrative support staff in decision making, and a detailed flowchart presents the layout of this part of the system. This platform is described in the context of a weight management program, and we present baseline characteristics of 1801 participants, their behaviors, self-reported medical conditions, and initial pattern of enrollment in the various treatment options. Considering the prevalence and upward trend of overweight and obesity in the United States, a need exists for robust intervention platforms that can systematically support multiple types of programs. Weight management interventions implemented using this platform are scalable to the population level and are sustainable over time despite the limits of defined resources and budgets. The present article describes an innovative approach to reaching a large population with effective programs in an integrated, coordinated, and systematic manner. This comprehensive, robust platform represents an example of how obesity prevention and treatment research may be translated into the applied setting.
Systematic exploration of unsupervised methods for mapping behavior
NASA Astrophysics Data System (ADS)
Todd, Jeremy G.; Kain, Jamey S.; de Bivort, Benjamin L.
2017-02-01
To fully understand the mechanisms giving rise to behavior, we need to be able to precisely measure it. When coupled with large behavioral data sets, unsupervised clustering methods offer the potential of unbiased mapping of behavioral spaces. However, unsupervised techniques to map behavioral spaces are in their infancy, and there have been few systematic considerations of all the methodological options. We compared the performance of seven distinct mapping methods in clustering a wavelet-transformed data set consisting of the x- and y-positions of the six legs of individual flies. Legs were automatically tracked by small pieces of fluorescent dye, while the fly was tethered and walking on an air-suspended ball. We find that there is considerable variation in the performance of these mapping methods, and that better performance is attained when clustering is done in higher dimensional spaces (which are otherwise less preferable because they are hard to visualize). High dimensionality means that some algorithms, including the non-parametric watershed cluster assignment algorithm, cannot be used. We developed an alternative watershed algorithm which can be used in high-dimensional spaces when a probability density estimate can be computed directly. With these tools in hand, we examined the behavioral space of fly leg postural dynamics and locomotion. We find a striking division of behavior into modes involving the fore legs and modes involving the hind legs, with few direct transitions between them. By computing behavioral clusters using the data from all flies simultaneously, we show that this division appears to be common to all flies. We also identify individual-to-individual differences in behavior and behavioral transitions. Lastly, we suggest a computational pipeline that can achieve satisfactory levels of performance without the taxing computational demands of a systematic combinatorial approach.
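As a stand-in for the density-based cluster assignment discussed above, the sketch below assigns each point to a local density mode by hill climbing on a k-nearest-neighbour graph of a kernel density estimate. It illustrates the idea of clustering where a density estimate is available directly; it is not the authors' high-dimensional watershed algorithm, and the data are synthetic.

```python
# Simplified stand-in for density-based cluster assignment: each point
# follows its highest-density neighbour (hill climbing on a k-NN graph)
# until it reaches a local density mode. Illustrative only; not the
# authors' high-dimensional watershed algorithm.
import numpy as np
from sklearn.neighbors import KernelDensity, NearestNeighbors

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (100, 5)), rng.normal(2, 0.3, (100, 5))])

log_density = KernelDensity(bandwidth=0.5).fit(X).score_samples(X)
_, neighbours = NearestNeighbors(n_neighbors=10).fit(X).kneighbors(X)

# Each point points to its densest neighbour (possibly itself)
parent = np.array([nbrs[np.argmax(log_density[nbrs])] for nbrs in neighbours])

def mode_of(i):
    while parent[i] != i:
        i = parent[i]
    return i

labels = np.array([mode_of(i) for i in range(len(X))])
print(len(np.unique(labels)), "modes found")
```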
Biochemical simulations: stochastic, approximate stochastic and hybrid approaches.
Pahle, Jürgen
2009-01-01
Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem.
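One canonical member of the stochastic simulation family surveyed here is Gillespie's direct method; a minimal sketch for a toy birth-death system follows, with arbitrary rate constants.

```python
# Minimal Gillespie direct-method sketch for a toy birth-death system
# (one of the exact stochastic approaches this review surveys). Rates
# and species are arbitrary illustrations.
import random

def gillespie(x0, k_birth, k_death, t_end, seed=0):
    random.seed(seed)
    t, x, trajectory = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a = [k_birth, k_death * x]              # reaction propensities
        a_total = sum(a)
        if a_total == 0:
            break
        t += random.expovariate(a_total)        # waiting time to next reaction
        x += 1 if random.random() < a[0] / a_total else -1
        trajectory.append((t, x))
    return trajectory

print(gillespie(x0=10, k_birth=1.0, k_death=0.1, t_end=5.0)[-1])
```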
Biochemical simulations: stochastic, approximate stochastic and hybrid approaches
2009-01-01
Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem. PMID:19151097
Bannwarth, Christoph; Seibert, Jakob; Grimme, Stefan
2016-05-01
The electronic circular dichroism (ECD) spectrum of the recently synthesized [16]helicene and a derivative comprising two triisopropylsilyloxy protection groups was computed by means of the very efficient simplified time-dependent density functional theory (sTD-DFT) approach. Different from many previous ECD studies of helicenes, nonequilibrium structure effects were accounted for by computing ECD spectra on "snapshots" obtained from a molecular dynamics (MD) simulation including solvent molecules. The trajectories are based on a molecule-specific classical potential as obtained from the recently developed quantum chemically derived force field (QMDFF) scheme. The reduced computational cost in the MD simulation due to the use of the QMDFF (compared to ab-initio MD) as well as the sTD-DFT approach make realistic spectral simulations feasible for these compounds that comprise more than 100 atoms. While the ECD spectra of [16]helicene and its derivative computed vertically on the respective gas-phase equilibrium geometries show noticeable differences, these are "washed out" when nonequilibrium structures are taken into account. The computed spectra with two recommended density functionals (ωB97X and BHLYP) and extended basis sets compare very well with the experimental one. In addition, we provide an estimate for the missing absolute intensities of the latter. The approach presented here could be used in future studies not only to capture nonequilibrium effects, but also to systematically average ECD spectra over different conformations in more flexible molecules. Chirality 28:365-369, 2016. © 2016 Wiley Periodicals, Inc.
Some aspects of modeling hydrocarbon oxidation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gal, D.; Botar, L.; Danoczy, E.
1981-01-01
A modeling procedure for the study of hydrocarbon oxidation is suggested, and its effectiveness for the oxidation of ethylbenzene is demonstrated. As a first step in modeling, systematization involves compilation of possible mechanisms. Then, by introduction of the concept of kinetic communication, the chaotic set of possible mechanisms is systematized into a network. Experimentation serves both as feedback to the systematic arrangement of information and source of new information. Kinetic treatment of the possible mechanism has been accomplished by two different approaches: by classical inductive calculations starting with a small mechanism and using kinetic approximations, and by computer simulation. The authors have compiled a so-called Main Contributory Mechanism, involving processes - within the possible mechanism - which contribute basically to the formation and consumption of the intermediates, to the consumption of the starting compounds and to the formation of the end products. 24 refs.
Towards resolving the complete fern tree of life.
Lehtonen, Samuli
2011-01-01
In the past two decades, molecular systematic studies have revolutionized our understanding of the evolutionary history of ferns. The availability of large molecular data sets together with efficient computer algorithms now enables us to reconstruct evolutionary histories with previously unseen completeness. Here, the most comprehensive fern phylogeny to date, representing over one-fifth of the extant global fern diversity, is inferred based on four plastid genes. Parsimony and maximum-likelihood analyses provided mostly congruent results and in general supported the prevailing view on higher-level fern systematics. At a deep phylogenetic level, the position of horsetails depended on the optimality criterion chosen, with horsetails positioned as the sister group either of the Marattiopsida-Polypodiopsida clade or of the Polypodiopsida. The analyses demonstrate the power of using a 'supermatrix' approach to resolve large-scale phylogenies and reveal questionable taxonomies. These results provide a valuable background for future research on fern systematics, ecology, biogeography and other evolutionary studies.
Bal, Kristof M; Neyts, Erik C
2018-03-28
A number of recent computational material design studies based on density functional theory (DFT) calculations have put forward a new class of materials with electrically switchable chemical characteristics that can be exploited in the development of tunable gas storage and electrocatalytic applications. We find systematic flaws in almost every computational study of gas adsorption on polarized or charged surfaces, stemming from an improper and unreproducible treatment of periodicity, leading to very large errors of up to 3 eV in some cases. Two simple corrective procedures that lead to consistent results are proposed, constituting a crucial course correction to the research in the field.
Quantum chemical approach to estimating the thermodynamics of metabolic reactions.
Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán
2014-11-12
Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.
Ogurtsova, Katherine; Heise, Thomas L; Linnenkamp, Ute; Dintsios, Charalabos-Markos; Lhachimi, Stefan K; Icks, Andrea
2017-12-29
Type 2 diabetes mellitus (T2DM), a highly prevalent chronic disease, puts a large burden on individual health and health care systems. Computer simulation models, used to evaluate the clinical and economic effectiveness of various interventions to handle T2DM, have become a well-established tool in diabetes research. Despite the broad consensus about the general importance of validation, especially external validation, as a crucial instrument of assessing and controlling for the quality of these models, there are no systematic reviews comparing such validation of diabetes models. As a result, the main objectives of this systematic review are to identify and appraise the different approaches used for the external validation of existing models covering the development and progression of T2DM. We will perform adapted searches by applying respective search strategies to identify suitable studies from 14 electronic databases. Retrieved study records will be included or excluded based on predefined eligibility criteria as defined in this protocol. Among others, a publication filter will exclude studies published before 1995. We will run abstract and full text screenings and then extract data from all selected studies by filling in a predefined data extraction spreadsheet. We will undertake a descriptive, narrative synthesis of findings to address the study objectives. We will pay special attention to aspects of quality of these models in regard to the external validation based upon ISPOR and ADA recommendations as well as Mount Hood Challenge reports. All critical stages within the screening, data extraction and synthesis processes will be conducted by at least two authors. This protocol adheres to PRISMA and PRISMA-P standards. The proposed systematic review will provide a broad overview of the current practice in the external validation of models with respect to T2DM incidence and progression in humans built on simulation techniques. PROSPERO CRD42017069983 .
Variability extraction and modeling for product variants.
Linsbauer, Lukas; Lopez-Herrejon, Roberto Erick; Egyed, Alexander
2017-01-01
Fast-changing hardware and software technologies in addition to larger and more specialized customer bases demand software tailored to meet very diverse requirements. Software development approaches that aim at capturing this diversity on a single consolidated platform often require large upfront investments, e.g., time or budget. Alternatively, companies resort to developing one variant of a software product at a time by reusing as much as possible from already-existing product variants. However, identifying and extracting the parts to reuse is an error-prone and inefficient task compounded by the typically large number of product variants. Hence, more disciplined and systematic approaches are needed to cope with the complexity of developing and maintaining sets of product variants. Such approaches require detailed information about the product variants, the features they provide and their relations. In this paper, we present an approach to extract such variability information from product variants. It identifies traces from features and feature interactions to their implementation artifacts, and computes their dependencies. This work can be useful in many scenarios ranging from ad hoc development approaches such as clone-and-own to systematic reuse approaches such as software product lines. We applied our variability extraction approach to six case studies and provide a detailed evaluation. The results show that the extracted variability information is consistent with the variability in our six case study systems given by their variability models and available product variants.
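A much-reduced sketch of the trace idea described above follows: an artifact is traced to a feature if it appears in every variant providing that feature and in no variant lacking it. The real approach also handles feature interactions and dependencies; the variants and artifacts below are invented.

```python
# Simplified sketch of feature-to-artifact trace computation: an artifact
# is traced to a feature if it occurs in every variant that has the
# feature and in no variant that lacks it. The full approach also covers
# feature interactions and dependencies; this shows only the basic idea.
variants = {
    "v1": {"features": {"base", "print"},         "artifacts": {"core.c", "print.c"}},
    "v2": {"features": {"base", "scan"},          "artifacts": {"core.c", "scan.c"}},
    "v3": {"features": {"base", "print", "scan"}, "artifacts": {"core.c", "print.c", "scan.c"}},
}

def trace(feature):
    having = [v["artifacts"] for v in variants.values() if feature in v["features"]]
    lacking = [v["artifacts"] for v in variants.values() if feature not in v["features"]]
    common = set.intersection(*having) if having else set()
    return common - set().union(*lacking) if lacking else common

for f in ("base", "print", "scan"):
    print(f, "->", trace(f))
```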
NASA Astrophysics Data System (ADS)
Weatherwax Scott, Caroline; Tsareff, Christopher R.
1990-06-01
One of the main goals of process engineering in the semiconductor industry is to improve wafer fabrication productivity and throughput. Engineers must work continuously toward this goal in addition to performing sustaining and development tasks. To accomplish these objectives, managers must make efficient use of engineering resources. One of the tools being used to improve efficiency is the diagnostic expert system. Expert systems are knowledge based computer programs designed to lead the user through the analysis and solution of a problem. Several photolithography diagnostic expert systems have been implemented at the Hughes Technology Center to provide a systematic approach to process problem solving. This systematic approach was achieved by documenting cause and effect analyses for a wide variety of processing problems. This knowledge was organized in the form of IF-THEN rules, a common structure for knowledge representation in expert system technology. These rules form the knowledge base of the expert system which is stored in the computer. The systems also include the problem solving methodology used by the expert when addressing a problem in his area of expertise. Operators now use the expert systems to solve many process problems without engineering assistance. The systems also facilitate the collection of appropriate data to assist engineering in solving unanticipated problems. Currently, several expert systems have been implemented to cover all aspects of the photolithography process. The systems, which have been in use for over a year, include wafer surface preparation (HMDS), photoresist coat and softbake, align and expose on a wafer stepper, and develop inspection. These systems are part of a plan to implement an expert system diagnostic environment throughout the wafer fabrication facility. In this paper, the systems' construction is described, including knowledge acquisition, rule construction, knowledge refinement, testing, and evaluation. The roles played by the process engineering expert and the knowledge engineer are discussed. The features of the systems are shown, particularly the interactive quality of the consultations and the ease of system use.
2014-09-01
...not losing track of the original facts of the situation. However, hippocampal episodic memory also has limitations – it operates one memory at a... ability to strategically control the use of episodic memory. Specific areas of PFC are implicated as these episodic control structures, including... certainly start by encoding the problem into hippocampal episodic memory, so they can retrieve it when interference overtakes the system and they...
On the Genealogy of Tissue Engineering and Regenerative Medicine
2015-01-01
In this article, we identify and discuss a timeline of historical events and scientific breakthroughs that shaped the principles of tissue engineering and regenerative medicine (TERM). We explore the origins of TERM concepts in myths, their application in the ancient era, their resurgence during Enlightenment, and, finally, their systematic codification into an emerging scientific and technological framework in recent past. The development of computational/mathematical approaches in TERM is also briefly discussed. PMID:25343302
How to benchmark methods for structure-based virtual screening of large compound libraries.
Christofferson, Andrew J; Huang, Niu
2012-01-01
Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the designing of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands among a background decoy database.
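One widely used early-enrichment metric for such benchmarks is the enrichment factor at the top fraction of a score-ranked library; a small sketch follows. The cutoff and toy hit list are illustrative assumptions, not part of the protocol described.

```python
# Sketch of a common enrichment metric for a docking screen: the
# enrichment factor at the top fraction of a score-ranked library.
# The exact metric and cutoff used in any given benchmark may differ.
def enrichment_factor(ranked_labels, fraction=0.01):
    """ranked_labels: 1 for annotated ligands, 0 for decoys, best score first."""
    n = len(ranked_labels)
    n_top = max(1, int(n * fraction))
    hits_top = sum(ranked_labels[:n_top])
    hits_total = sum(ranked_labels)
    return (hits_top / n_top) / (hits_total / n)

# Toy ranked list: 3 actives among 20 compounds, one of them ranked early
ranked = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(enrichment_factor(ranked, fraction=0.10))   # ~3.3x over random
```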
EVOLUTIONARY SYSTEMATICS OF THE CHIMPANZEE: IMMUNODIFFUSION COMPUTER APPROACH.
...man and gorilla, and shows increasingly more marked divergence from orangutan, gibbons, cercopithecoids, and ceboids. The method for constructing... the gibbon branch from the remaining hominoids, while the next most distant common ancestor separates the orangutan from man, chimpanzee, and gorilla... cercopithecoid-hominoid separation as 30 million years, the chimpanzee-man-gorilla separations were dated at about 6 million years, the orangutan at 14 million years, and the gibbon at about 19 million years. (Author)
On the genealogy of tissue engineering and regenerative medicine.
Kaul, Himanshu; Ventikos, Yiannis
2015-04-01
In this article, we identify and discuss a timeline of historical events and scientific breakthroughs that shaped the principles of tissue engineering and regenerative medicine (TERM). We explore the origins of TERM concepts in myths, their application in the ancient era, their resurgence during Enlightenment, and, finally, their systematic codification into an emerging scientific and technological framework in recent past. The development of computational/mathematical approaches in TERM is also briefly discussed.
Chiral extrapolation of the leading hadronic contribution to the muon anomalous magnetic moment
NASA Astrophysics Data System (ADS)
Golterman, Maarten; Maltman, Kim; Peris, Santiago
2017-04-01
A lattice computation of the leading-order hadronic contribution to the muon anomalous magnetic moment can potentially help reduce the error on the Standard Model prediction for this quantity, if sufficient control of all systematic errors affecting such a computation can be achieved. One of these systematic errors is that associated with the extrapolation to the physical pion mass from values on the lattice larger than the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 200 to 400 MeV with the help of two-loop chiral perturbation theory, and we find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various tricks to improve the reliability of the chiral extrapolation employed in the literature are taken into account. In addition, while chiral perturbation theory also predicts the dependence on the pion mass of the leading-order hadronic contribution to the muon anomalous magnetic moment as the chiral limit is approached, this prediction turns out to be of no practical use because the physical pion mass is larger than the muon mass that sets the scale for the onset of this behavior.
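Purely as an illustration of the kind of extrapolation at issue, the sketch below fits synthetic values at pion masses of 200-400 MeV to a simple polynomial-plus-chiral-log ansatz and evaluates the fit at the physical point. The ansatz and numbers are made up for illustration; they are not the two-loop chiral perturbation theory expressions analysed in the paper.

```python
# Generic illustration of a chiral extrapolation: fit synthetic "lattice"
# values at pion masses of 200-400 MeV to the toy ansatz
# a0 + a1*mpi^2 + a2*mpi^2*log(mpi^2) and evaluate at the physical point.
# Not the two-loop ChPT expression analysed in the paper; data are made up.
import numpy as np

mpi = np.array([200.0, 250.0, 300.0, 350.0, 400.0])        # MeV
vals = np.array([6.8, 6.3, 5.9, 5.6, 5.3])                 # synthetic data (arb. units)

m2 = (mpi / 1000.0) ** 2                                    # GeV^2
design = np.column_stack([np.ones_like(m2), m2, m2 * np.log(m2)])
coeffs, *_ = np.linalg.lstsq(design, vals, rcond=None)

m2_phys = 0.135 ** 2                                        # physical pion mass in GeV
extrapolated = coeffs @ [1.0, m2_phys, m2_phys * np.log(m2_phys)]
print("extrapolated value at physical pion mass:", extrapolated)
```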
A Computing Method for Sound Propagation Through a Nonuniform Jet Stream
NASA Technical Reports Server (NTRS)
Padula, S. L.; Liu, C. H.
1974-01-01
Understanding the principles of jet noise propagation is an essential ingredient of systematic noise reduction research. High speed computer methods offer a unique potential for dealing with complex real life physical systems whereas analytical solutions are restricted to sophisticated idealized models. The classical formulation of sound propagation through a jet flow was found to be inadequate for computer solutions and a more suitable approach was needed. Previous investigations selected the phase and amplitude of the acoustic pressure as dependent variables requiring the solution of a system of nonlinear algebraic equations. The nonlinearities complicated both the analysis and the computation. A reformulation of the convective wave equation in terms of a new set of dependent variables is developed with a special emphasis on its suitability for numerical solutions on fast computers. The technique is very attractive because the resulting equations are linear in nonwaving variables. The computer solution to such a linear system of algebraic equations may be obtained by well-defined and direct means which are conservative of computer time and storage space. Typical examples are illustrated and computational results are compared with available numerical and experimental data.
Performance Modeling of an Experimental Laser Propelled Lightcraft
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Chen, Yen-Sen; Liu, Jiwen; Myrabo, Leik N.; Mead, Franklin B., Jr.
2000-01-01
A computational plasma aerodynamics model is developed to study the performance of an experimental laser propelled lightcraft. The computational methodology is based on a time-accurate, three-dimensional, finite-difference, chemically reacting, unstructured grid, pressure-based formulation. The underlying physics are added and tested systematically using a building-block approach. The physics modeled include non-equilibrium thermodynamics, non-equilibrium air-plasma finite-rate kinetics, specular ray tracing, laser beam energy absorption and refraction by plasma, non-equilibrium plasma radiation, and plasma resonance. A series of transient computations are performed at several laser pulse energy levels, and the simulated physics are discussed and compared with those of tests and the literature. The predicted coupling coefficients for the lightcraft compared reasonably well with those of tests conducted on a pendulum apparatus.
Mathematical modeling and computational prediction of cancer drug resistance.
Sun, Xiaoqiang; Hu, Bin
2017-06-23
Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of computational methods for studying drug resistance, including inferring drug-induced signaling networks, multiscale modeling, drug combinations and precision medicine. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
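As one toy instance of the "ordinary differential equation model of cellular dynamics" class mentioned above, the sketch below evolves drug-sensitive and drug-resistant tumour subpopulations under a drug that kills only the sensitive cells; all parameters are arbitrary.

```python
# Toy instance of the "ODE model of cellular dynamics" class mentioned
# above: logistic growth of drug-sensitive (S) and resistant (R) tumour
# subpopulations, with the drug killing only S. All parameters are
# arbitrary and purely illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def tumour(t, y, r_s=0.4, r_r=0.3, K=1e9, kill=0.5):
    S, R = y
    total = S + R
    dS = r_s * S * (1 - total / K) - kill * S     # drug acts on sensitive cells
    dR = r_r * R * (1 - total / K)                # resistant cells escape the drug
    return [dS, dR]

sol = solve_ivp(tumour, (0, 60), [1e6, 1e3], t_eval=np.linspace(0, 60, 7))
for t, S, R in zip(sol.t, *sol.y):
    print(f"day {t:4.0f}:  sensitive {S:10.3g}  resistant {R:10.3g}")
```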
Detonation product EOS studies: Using ISLS to refine CHEETAH
NASA Astrophysics Data System (ADS)
Zaug, Joseph; Fried, Larry; Hansen, Donald
2001-06-01
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a suite of non-ideal simple fluids and fluid mixtures. Impulsive Stimulated Light Scattering conducted in the diamond-anvil cell offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model CHEETAH. Computational models are systematically improved with each addition of experimental data. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
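The structure-tensor step underlying such a shape-driven method can be sketched as follows: local orientation is read off from the smoothed outer products of image gradients. The actual directional sinogram interpolation built on top of this is not reproduced here.

```python
# Minimal sketch of the structure-tensor step: local orientation is the
# direction along image structures, obtained from the smoothed gradient
# outer-product matrix. The shape-driven sinogram interpolation built on
# top of this is not shown.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def local_orientation(image, sigma=2.0):
    gx, gy = sobel(image, axis=1), sobel(image, axis=0)
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    # Orientation of the eigenvector with the smaller eigenvalue
    return 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy) + np.pi / 2

image = np.tile(np.sin(np.linspace(0, 6 * np.pi, 64)), (64, 1))  # vertical stripes
print(np.degrees(local_orientation(image)[32, 32]).round(1))     # ~90 degrees
```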
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F + H2 yields HF + H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1988-01-01
Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F+H2 yields HF+H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.
Computational Approaches to Phenotyping
Lussier, Yves A.; Liu, Yang
2007-01-01
The recent completion of the Human Genome Project has made possible a high-throughput “systems approach” for accelerating the elucidation of molecular underpinnings of human diseases, and subsequent derivation of molecular-based strategies to more effectively prevent, diagnose, and treat these diseases. Although altered phenotypes are among the most reliable manifestations of altered gene functions, research using systematic analysis of phenotype relationships to study human biology is still in its infancy. This article focuses on the emerging field of high-throughput phenotyping (HTP) phenomics research, which aims to capitalize on novel high-throughput computation and informatics technology developments to derive genomewide molecular networks of genotype–phenotype associations, or “phenomic associations.” The HTP phenomics research field faces the challenge of technological research and development to generate novel tools in computation and informatics that will allow researchers to amass, access, integrate, organize, and manage phenotypic databases across species and enable genomewide analysis to associate phenotypic information with genomic data at different scales of biology. Key state-of-the-art technological advancements critical for HTP phenomics research are covered in this review. In particular, we highlight the power of computational approaches to conduct large-scale phenomics studies. PMID:17202287
Understanding the Scalability of Bayesian Network Inference using Clique Tree Growth Curves
NASA Technical Reports Server (NTRS)
Mengshoel, Ole Jakob
2009-01-01
Bayesian networks (BNs) are used to represent and efficiently compute with multi-variate probability distributions in a wide range of disciplines. One of the main approaches to perform computation in BNs is clique tree clustering and propagation. In this approach, BN computation consists of propagation in a clique tree compiled from a Bayesian network. There is a lack of understanding of how clique tree computation time, and BN computation time more generally, depends on variations in BN size and structure. On the one hand, complexity results tell us that many interesting BN queries are NP-hard or worse to answer, and it is not hard to find application BNs where the clique tree approach in practice cannot be used. On the other hand, it is well-known that tree-structured BNs can be used to answer probabilistic queries in polynomial time. In this article, we develop an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, or (ii) the expected number of moral edges in their moral graphs. Our approach is based on combining analytical and experimental results. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for each set. For the special case of bipartite BNs, we consequently have two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, we systematically increase the degree of the root nodes in bipartite Bayesian networks, and find that root clique growth is well-approximated by Gompertz growth curves. It is believed that this research improves the understanding of the scaling behavior of clique tree clustering, provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms, and presents an aid for analytical trade-off studies of clique tree clustering using growth curves.
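The Gompertz form mentioned above is f(x) = a·exp(-b·exp(-c·x)). A hedged sketch of fitting it to clique-size measurements with SciPy follows; the data points and starting values are made-up placeholders, not results from the paper.

```python
# Hedged sketch: fit a Gompertz growth curve f(x) = a*exp(-b*exp(-c*x))
# to observed clique sizes as the root-node degree increases.
# The data below are made-up placeholders, not results from the paper.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(x, a, b, c):
    return a * np.exp(-b * np.exp(-c * x))

degree = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)            # root-node degree
clique_size = np.array([3, 5, 9, 17, 30, 46, 58, 63], dtype=float)  # illustrative

params, _ = curve_fit(gompertz, degree, clique_size, p0=(70.0, 5.0, 0.5), maxfev=10000)
print("fitted (a, b, c):", params)
```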
System and method for generating and/or screening potential metal-organic frameworks
Wilmer, Christopher E; Leaf, Michael; Snurr, Randall Q; Farha, Omar K; Hupp, Joseph T
2015-04-21
A system and method for systematically generating potential metal-organic framework (MOF) structures given an input library of building blocks is provided herein. One or more material properties of the potential MOFs are evaluated using computational simulations. A range of material properties (surface area, pore volume, pore size distribution, powder x-ray diffraction pattern, methane adsorption capability, and the like) can be estimated and, in doing so, can illuminate previously unidentified structure-property relationships that might only be recognized by taking a global view of MOF structures. In addition to identifying structure-property relationships, this systematic approach is used to identify one or more MOFs that may be useful for high-pressure methane storage.
System and method for generating and/or screening potential metal-organic frameworks
Wilmer, Christopher E; Leaf, Michael; Snurr, Randall Q; Farha, Omar K; Hupp, Joseph T
2014-12-02
A system and method for systematically generating potential metal-organic framework (MOF) structures given an input library of building blocks is provided herein. One or more material properties of the potential MOFs are evaluated using computational simulations. A range of material properties (surface area, pore volume, pore size distribution, powder x-ray diffraction pattern, methane adsorption capability, and the like) can be estimated and, in doing so, can illuminate previously unidentified structure-property relationships that might only be recognized by taking a global view of MOF structures. In addition to identifying structure-property relationships, this systematic approach is used to identify one or more MOFs that may be useful for high-pressure methane storage.
Alternative Computer Access for Young Handicapped Children: A Systematic Selection Procedure.
ERIC Educational Resources Information Center
Morris, Karen J.
The paper describes the type of computer access products appropriate for use by handicapped children and presents a systematic procedure for selection of such input and output devices. Modification of computer input is accomplished by three strategies: modifying the keyboard, adding alternative keyboards, and attaching switches to the keyboard.…
Genetic Network Inference: From Co-Expression Clustering to Reverse Engineering
NASA Technical Reports Server (NTRS)
Dhaeseleer, Patrik; Liang, Shoudan; Somogyi, Roland
2000-01-01
Advances in molecular biological, analytical, and computational technologies are enabling us to systematically investigate the complex molecular processes underlying biological systems. In particular, using high-throughput gene expression assays, we are able to measure the output of the gene regulatory network. We aim here to review data-mining and modeling approaches for conceptualizing and unraveling the functional relationships implicit in these datasets. Clustering of co-expression profiles allows us to infer shared regulatory inputs and functional pathways. We discuss various aspects of clustering, ranging from distance measures to clustering algorithms and multiple-cluster memberships. More advanced analysis aims to infer causal connections between genes directly, i.e., who is regulating whom and how. We discuss several approaches to the problem of reverse engineering of genetic networks, from discrete Boolean networks, to continuous linear and non-linear models. We conclude that the combination of predictive modeling with systematic experimental verification will be required to gain a deeper insight into living organisms, therapeutic targeting, and bioengineering.
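As a concrete illustration of the co-expression clustering step discussed above, the sketch below hierarchically clusters genes by a correlation-based distance. It is a minimal example on synthetic data, not any specific method endorsed by the review.

```python
# Minimal sketch of co-expression clustering: hierarchical clustering of
# genes using a correlation-based distance. Illustrative only; the review
# discusses many distance measures and algorithms beyond this one.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 12))            # 50 genes x 12 conditions (synthetic)

corr = np.corrcoef(expr)                    # gene-gene Pearson correlation
dist = 1.0 - corr                           # correlation distance
np.fill_diagonal(dist, 0.0)
condensed = squareform(dist, checks=False)  # condensed form expected by linkage

tree = linkage(condensed, method="average")
clusters = fcluster(tree, t=5, criterion="maxclust")
print("cluster labels:", clusters)
```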
A dictionary based informational genome analysis
2012-01-01
Background In the post-genomic era, several methods of computational genomics are emerging to understand how information is structured within whole genomes. The literature of the last five years includes several alignment-free methods, which have arisen as alternative metrics for the dissimilarity of biological sequences. Among others, recent approaches are based on the empirical frequencies of DNA k-mers in whole genomes. Results Any set of words (factors) occurring in a genome provides a genomic dictionary. About sixty genomes were analyzed by means of informational indexes based on genomic dictionaries, where a systemic view replaces local sequence analysis. A software prototype applying the methodology outlined here carried out computations on genomic data. We computed informational indexes and built genomic dictionaries of different sizes, along with their frequency distributions. The software performed three main tasks: computation of informational indexes, storage of these indexes in a database, and index analysis and visualization. Validation was done by investigating the genomes of various organisms. A systematic analysis of genomic repeats of several lengths, which is of keen interest in biology (for example, to detect over-represented functional sequences such as promoters), was discussed and suggested a method to define synthetic genetic networks. Conclusions We introduced a dictionary-based methodology and an efficient motif-finding software application for comparative genomics. This approach could be extended along many lines of investigation, for example to other contexts of computational genomics or as a basis for the discrimination of genomic pathologies. PMID:22985068
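A minimal sketch of the core object described above, a genomic k-mer dictionary with its empirical frequency distribution, is given below; the function name and toy sequence are illustrative, not part of the published prototype.

```python
# Sketch of building a genomic k-mer dictionary and its frequency
# distribution, the basic object behind dictionary-based informational
# indexes. Names and the toy sequence are illustrative.
from collections import Counter

def kmer_dictionary(sequence, k):
    """Return a Counter mapping each k-mer (factor) to its frequency."""
    sequence = sequence.upper()
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

genome = "ATGCGATACGCTTGACGATCGTAGCTAGCTAGGATCC"   # toy sequence
d = kmer_dictionary(genome, k=3)
total = sum(d.values())
# Empirical k-mer frequencies, e.g. as input to entropy-style indexes.
freqs = {kmer: count / total for kmer, count in d.items()}
print(len(d), "distinct 3-mers;", d.most_common(3))
```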
A systematic and efficient method to compute multi-loop master integrals
NASA Astrophysics Data System (ADS)
Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu
2018-04-01
We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
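In generic form, the setup described above can be written as a first-order linear system in an auxiliary parameter η; this is an illustrative statement of the idea, not the authors' exact construction.

```latex
% Generic form (illustrative): the vector of master integrals I(eta) obeys a
% first-order linear ODE system in an auxiliary parameter eta, integrated
% numerically from simple boundary values toward the physical point.
\frac{\partial}{\partial \eta}\,\vec{I}(\eta) \;=\; A(\eta)\,\vec{I}(\eta),
\qquad
\vec{I}(\eta_0)\ \text{known (nearly trivial boundary conditions)},
\qquad
\vec{I}_{\mathrm{phys}} \;=\; \lim_{\eta \to \eta_{\mathrm{phys}}} \vec{I}(\eta).
```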
Quantum Iterative Deepening with an Application to the Halting Problem
Tarrataca, Luís; Wichert, Andreas
2013-01-01
Classical models of computation traditionally resort to halting schemes in order to enquire about the state of a computation. In such schemes, a computational process is responsible for signaling an end of a calculation by setting a halt bit, which needs to be systematically checked by an observer. The capacity of quantum computational models to operate on a superposition of states requires an alternative approach. From a quantum perspective, any measurement of an equivalent halt qubit would have the potential to inherently interfere with the computation by provoking a random collapse amongst the states. This issue is exacerbated by undecidable problems such as the Entscheidungsproblem which require universal computational models, e.g. the classical Turing machine, to be able to proceed indefinitely. In this work we present an alternative view of quantum computation based on production system theory in conjunction with Grover's amplitude amplification scheme that allows for (1) a detection of halt states without interfering with the final result of a computation; (2) the possibility of non-terminating computation and (3) an inherent speedup to occur during computations susceptible of parallelization. We discuss how such a strategy can be employed in order to simulate classical Turing machines. PMID:23520465
Evaluation and integration of existing methods for computational prediction of allergens
2013-01-01
Background Allergy involves a series of complex reactions and factors that contribute to the development of the disease and the triggering of symptoms, including rhinitis, asthma, atopic eczema, skin sensitivity, and even acute and fatal anaphylactic shock. Prediction and evaluation of potential allergenicity is important for the safety evaluation of foods and other environmental factors. Although several computational approaches for assessing the potential allergenicity of proteins have been developed, their performance and relative merits and shortcomings have not been compared systematically. Results To evaluate and improve the existing methods for allergen prediction, we collected an up-to-date definitive dataset consisting of 989 known allergens and massive putative non-allergens. The three most widely used computational allergen prediction approaches, including sequence-, motif- and SVM-based (Support Vector Machine) methods, were systematically compared using the defined parameters, and we found that the SVM-based method outperformed the other two methods with higher accuracy and specificity. The sequence-based method with the criteria defined by FAO/WHO (FAO: Food and Agriculture Organization of the United Nations; WHO: World Health Organization) has a high sensitivity of over 98%, but a low specificity. The advantage of the motif-based method is the ability to visualize the key motif within the allergen. Notably, the performance of the sequence-based method defined by FAO/WHO and of the motif-eliciting strategy could be improved by optimization of parameters. To facilitate allergen prediction, we integrated these three methods in a web-based application, proAP, which provides a global search of the known allergens and a powerful tool for allergen prediction. Flexible parameter setting and batch prediction were also implemented. The proAP can be accessed at http://gmobl.sjtu.edu.cn/proAP/main.html. Conclusions This study comprehensively evaluated sequence-, motif- and SVM-based computational prediction approaches for allergens and optimized their parameters to obtain better performance. These findings may provide helpful guidance for researchers in allergen prediction. Furthermore, we integrated these methods into a web application, proAP, greatly facilitating customizable allergen search and prediction for users. PMID:23514097
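For orientation, the FAO/WHO sequence criteria referred to above are commonly stated as at least 35% identity over an 80-residue window, or an exact match of six contiguous amino acids. The sketch below assumes that formulation and uses a simple ungapped comparison; the evaluated tools use proper alignment and tuned parameters.

```python
# Minimal sketch of the FAO/WHO-style sequence criteria as commonly stated:
# flag a query if it shares >= 35% identity with a known allergen over an
# 80-residue window, or an exact 6-mer match. Ungapped and illustrative only.

def identity_over_window(query, allergen, window=80, threshold=0.35):
    for i in range(len(query) - window + 1):
        for j in range(len(allergen) - window + 1):
            matches = sum(q == a for q, a in zip(query[i:i + window],
                                                 allergen[j:j + window]))
            if matches / window >= threshold:
                return True
    return False

def shared_kmer(query, allergen, k=6):
    kmers = {query[i:i + k] for i in range(len(query) - k + 1)}
    return any(allergen[j:j + k] in kmers for j in range(len(allergen) - k + 1))

def faowho_flag(query, allergen):
    return identity_over_window(query, allergen) or shared_kmer(query, allergen)

# Trivial demo with identical toy sequences (flag is obviously True here).
print(faowho_flag("MKTAYIAKQR" * 10, "MKTAYIAKQR" * 10))
```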
Evaluation and integration of existing methods for computational prediction of allergens.
Wang, Jing; Yu, Yabin; Zhao, Yunan; Zhang, Dabing; Li, Jing
2013-01-01
Allergy involves a series of complex reactions and factors that contribute to the development of the disease and the triggering of symptoms, including rhinitis, asthma, atopic eczema, skin sensitivity, and even acute and fatal anaphylactic shock. Prediction and evaluation of potential allergenicity is important for the safety evaluation of foods and other environmental factors. Although several computational approaches for assessing the potential allergenicity of proteins have been developed, their performance and relative merits and shortcomings have not been compared systematically. To evaluate and improve the existing methods for allergen prediction, we collected an up-to-date definitive dataset consisting of 989 known allergens and massive putative non-allergens. The three most widely used computational allergen prediction approaches, including sequence-, motif- and SVM-based (Support Vector Machine) methods, were systematically compared using the defined parameters, and we found that the SVM-based method outperformed the other two methods with higher accuracy and specificity. The sequence-based method with the criteria defined by FAO/WHO (FAO: Food and Agriculture Organization of the United Nations; WHO: World Health Organization) has a high sensitivity of over 98%, but a low specificity. The advantage of the motif-based method is the ability to visualize the key motif within the allergen. Notably, the performance of the sequence-based method defined by FAO/WHO and of the motif-eliciting strategy could be improved by optimization of parameters. To facilitate allergen prediction, we integrated these three methods in a web-based application, proAP, which provides a global search of the known allergens and a powerful tool for allergen prediction. Flexible parameter setting and batch prediction were also implemented. The proAP can be accessed at http://gmobl.sjtu.edu.cn/proAP/main.html. This study comprehensively evaluated sequence-, motif- and SVM-based computational prediction approaches for allergens and optimized their parameters to obtain better performance. These findings may provide helpful guidance for researchers in allergen prediction. Furthermore, we integrated these methods into a web application, proAP, greatly facilitating customizable allergen search and prediction for users.
Dauz, Emily; Moore, Jan; Smith, Carol E; Puno, Florence; Schaag, Helen
2004-01-01
This article describes the experiences of nurses who, as part of a large clinical trial, brought the Internet into older adults' homes by installing a computer, if needed, and connecting to a patient education Web site. Most of these patients had not previously used the Internet and were taught even basic computer skills when necessary. Because of increasing use of the Internet in patient education, assessment, and home monitoring, nurses in various roles currently connect with patients to monitor their progress, teach about medications, and answer questions about appointments and treatments. Thus, nurses find themselves playing the role of technology managers for patients with home-based Internet connections. This article provides step-by-step procedures for computer installation and training in the form of protocols, checklists, and patient user guides. By following these procedures, nurses can install computers, arrange Internet access, teach and connect to their patients, and prepare themselves to install future generations of technological devices.
van den Broek, Evert; van Lieshout, Stef; Rausch, Christian; Ylstra, Bauke; van de Wiel, Mark A; Meijer, Gerrit A; Fijneman, Remond J A; Abeln, Sanne
2016-01-01
Development of cancer is driven by somatic alterations, including numerical and structural chromosomal aberrations. Currently, several computational methods are available and are widely applied to detect numerical copy number aberrations (CNAs) of chromosomal segments in tumor genomes. However, there is a lack of computational methods that systematically detect structural chromosomal aberrations by virtue of the genomic location of CNA-associated chromosomal breaks and identify genes that appear non-randomly affected by chromosomal breakpoints across (large) series of tumor samples. 'GeneBreak' was developed to systematically identify genes recurrently affected by the genomic location of chromosomal CNA-associated breaks by a genome-wide approach, which can be applied to DNA copy number data obtained by array-Comparative Genomic Hybridization (CGH) or by (low-pass) whole genome sequencing (WGS). First, 'GeneBreak' collects the genomic locations of chromosomal CNA-associated breaks that were previously pinpointed by the segmentation algorithm that was applied to obtain CNA profiles. Next, a tailored annotation approach for breakpoint-to-gene mapping is implemented. Finally, dedicated cohort-based statistics are incorporated, with correction for covariates that influence the probability of a gene being a breakpoint gene. In addition, multiple testing correction is integrated to reveal recurrent breakpoint events. This easy-to-use algorithm, 'GeneBreak', is implemented in R ( www.cran.r-project.org ) and is available from Bioconductor ( www.bioconductor.org/packages/release/bioc/html/GeneBreak.html ).
A systematic assessment of normalization approaches for the Infinium 450K methylation platform.
Wu, Michael C; Joubert, Bonnie R; Kuan, Pei-fen; Håberg, Siri E; Nystad, Wenche; Peddada, Shyamal D; London, Stephanie J
2014-02-01
The Illumina Infinium HumanMethylation450 BeadChip has emerged as one of the most popular platforms for genome wide profiling of DNA methylation. While the technology is wide-spread, systematic technical biases are believed to be present in the data. For example, this array incorporates two different chemical assays, i.e., Type I and Type II probes, which exhibit different technical characteristics and potentially complicate the computational and statistical analysis. Several normalization methods have been introduced recently to adjust for possible biases. However, there is considerable debate within the field on which normalization procedure should be used and indeed whether normalization is even necessary. Yet despite the importance of the question, there has been little comprehensive comparison of normalization methods. We sought to systematically compare several popular normalization approaches using the Norwegian Mother and Child Cohort Study (MoBa) methylation data set and the technical replicates analyzed with it as a case study. We assessed both the reproducibility between technical replicates following normalization and the effect of normalization on association analysis. Results indicate that the raw data are already highly reproducible, some normalization approaches can slightly improve reproducibility, but other normalization approaches may introduce more variability into the data. Results also suggest that differences in association analysis after applying different normalizations are not large when the signal is strong, but when the signal is more modest, different normalizations can yield very different numbers of findings that meet a weaker statistical significance threshold. Overall, our work provides useful, objective assessment of the effectiveness of key normalization methods.
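As a point of reference for what a simple normalization does, the sketch below applies generic quantile normalization across samples; it does not implement probe-type-specific methods such as BMIQ or SWAN and is shown only to illustrate the idea.

```python
# Generic quantile normalization across samples, one of the simpler strategies
# in this space. It is not a probe-type-specific 450K method (e.g. BMIQ, SWAN);
# shown only to illustrate the idea.
import numpy as np

def quantile_normalize(beta):
    """beta: probes x samples matrix of beta-values; returns a normalized copy."""
    ranks = np.argsort(np.argsort(beta, axis=0), axis=0)      # rank within each sample
    mean_quantiles = np.mean(np.sort(beta, axis=0), axis=1)   # mean value at each rank
    return mean_quantiles[ranks]

beta = np.random.default_rng(1).beta(2.0, 5.0, size=(1000, 6))  # synthetic beta-values
normalized = quantile_normalize(beta)
print(normalized.shape, normalized.min(), normalized.max())
```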
ERIC Educational Resources Information Center
Davies, T. Claire; Mudge, Suzie; Ameratunga, Shanthi; Stott, N. Susan
2010-01-01
Aim: The purpose of this study was to systematically review published evidence on the development, use, and effectiveness of devices and technologies that enable or enhance self-directed computer access by individuals with cerebral palsy (CP). Methods: Nine electronic databases were searched using keywords "computer", "software", "spastic",…
Comparison of two methods to determine fan performance curves using computational fluid dynamics
NASA Astrophysics Data System (ADS)
Onma, Patinya; Chantrasmi, Tonkid
2018-01-01
This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The generated performance curves from both methods are compared with data from an experimental setup in accordance with the AMCA fan performance testing standard.
Y2K compliance readiness and contingency planning.
Stahl, S; Cohan, D
1999-09-01
As the millennium approaches, discussion of "Y2K compliance" will shift to discussion of "Y2K readiness." While "compliance" focuses on the technological functioning of one's own computers, "readiness" focuses on the operational planning required in a world of interdependence, in which the functionality of one's own computers is only part of the story. "Readiness" includes the ability to cope with potential Y2K failures of vendors, suppliers, staff, banks, utility companies, and others. Administrators must apply their traditional skills of analysis, inquiry and diligence to the manifold imaginable challenges which Y2K will thrust upon their facilities. The SPICE template can be used as a systematic tool to guide planning for this historic event.
Implementation of quantum logic gates using polar molecules in pendular states.
Zhu, Jing; Kais, Sabre; Wei, Qi; Herschbach, Dudley; Friedrich, Bretislav
2013-01-14
We present a systematic approach to implementation of basic quantum logic gates operating on polar molecules in pendular states as qubits for a quantum computer. A static electric field prevents quenching of the dipole moments by rotation, thereby creating the pendular states; also, the field gradient enables distinguishing among qubit sites. Multi-target optimal control theory is used as a means of optimizing the initial-to-target transition probability via a laser field. We give detailed calculations for the SrO molecule, a favorite candidate for proposed quantum computers. Our simulation results indicate that NOT, Hadamard and CNOT gates can be realized with high fidelity, as high as 0.985, for such pendular qubit states.
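The target unitaries named above (NOT, Hadamard, CNOT) and a simple gate-overlap fidelity F = |Tr(U†V)|/d can be written down directly. The snippet below is plain linear algebra, not the multi-target optimal control simulation used in the paper, and the "imperfect" gate is a made-up example.

```python
# Target unitaries mentioned in the abstract, plus a simple gate-overlap
# fidelity F = |Tr(U_ideal^dagger U_achieved)| / d. Illustrative only.
import numpy as np

NOT = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def gate_fidelity(u_ideal, u_achieved):
    d = u_ideal.shape[0]
    return abs(np.trace(u_ideal.conj().T @ u_achieved)) / d

# A slightly imperfect Hadamard, standing in for the output of some pulse optimization.
theta = np.pi / 4 + 0.01
h_achieved = np.array([[np.cos(theta), np.sin(theta)],
                       [np.sin(theta), -np.cos(theta)]], dtype=complex)
print(gate_fidelity(H, h_achieved))   # close to, but below, 1.0
```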
Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.
Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K
2016-08-01
The deconvolution in perfusion weighted imaging (PWI) plays an important role in quantifying the MR perfusion parameters. The PWI application to stroke and brain tumor studies has become a standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). The FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.
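A sketch of the frequency-domain deconvolution idea underlying FDD is shown below, using a Wiener-style regularization that is only illustrative; the paper's analytical Fourier and Showalter spectral filters are different filter choices.

```python
# Sketch of frequency-domain deconvolution for perfusion quantification:
# recover the flow-scaled residue function from the tissue curve C(t) and the
# arterial input function AIF(t). The Wiener-style regularization here is
# illustrative; it is not the paper's analytical Fourier or Showalter filter.
import numpy as np

def fft_deconvolve(c_tissue, aif, dt, reg=0.1):
    n = 2 * len(c_tissue)                      # zero-pad to reduce wrap-around
    C = np.fft.fft(c_tissue, n)
    A = np.fft.fft(aif, n)
    filt = np.conj(A) / (np.abs(A) ** 2 + (reg * np.max(np.abs(A))) ** 2)
    return np.real(np.fft.ifft(C * filt))[: len(c_tissue)] / dt

t = np.arange(0, 60, 1.0)
aif = np.exp(-((t - 15) ** 2) / 20.0)          # synthetic AIF
resid = np.exp(-t / 8.0)                       # synthetic residue function, R(0) = 1
c = np.convolve(aif, resid)[: len(t)] * 1.0    # forward model with dt = 1 s
r_est = fft_deconvolve(c, aif, dt=1.0)
print("estimated R(0) (flow-scaled):", r_est.max())
```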
Quantum Chemical Approach to Estimating the Thermodynamics of Metabolic Reactions
Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán
2014-01-01
Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism. PMID:25387603
Transformation in the pharmaceutical industry--a systematic review of the literature.
Shafiei, Nader; Ford, James L; Morecroft, Charles W; Lisboa, Paulo J; Taylor, Mark J; Mouzughi, Yusra
2013-01-01
The evolutionary development of pharmaceutical transformation was studied through systematic review of the literature. Fourteen triggers were identified that will affect the pharmaceutical business, regulatory science, and enabling technologies in future years. The relative importance ranking of the transformation triggers was computed based on their prevalence within the articles studied. The four main triggers with the strongest literature evidence were Fully Integrated Pharma Network, Personalized Medicine, Translational Research, and Pervasive Computing. The theoretical quality risks for each of the four main transformation triggers are examined, and the remaining ten triggers are described. The pharmaceutical industry is currently going through changes that affect the way it performs its research, manufacturing, and regulatory activities (this is termed pharmaceutical transformation). The impact of these changes on the approaches to quality risk management requires more understanding. In this paper, a comprehensive review of the academic, regulatory, and industry literature were used to identify 14 triggers that influence pharmaceutical transformation. The four main triggers, namely Fully Integrated Pharma Network, Personalized Medicine, Translational Research, and Pervasive Computing, were selected as the most important based on the strength of the evidence found during the literature review activity described in this paper. Theoretical quality risks for each of the four main transformation triggers are examined, and the remaining ten triggers are described.
How adverse outcome pathways can aid the development and ...
Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework has emerged as a systematic approach for organizing knowledge that supports such inference. We argue that this systematic organization of knowledge can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment but also as a guide for both AOP and complementary model development, is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment. The present manuscript reports on expert opinion and case studies that came out of a European Commission, Joint Research Centre-sponsored work
Evaluation of nucleus segmentation in digital pathology images through large scale image synthesis
NASA Astrophysics Data System (ADS)
Zhou, Naiyun; Yu, Xiaxia; Zhao, Tianhao; Wen, Si; Wang, Fusheng; Zhu, Wei; Kurc, Tahsin; Tannenbaum, Allen; Saltz, Joel; Gao, Yi
2017-03-01
Digital histopathology images with more than 1 gigapixel are drawing more and more attention in the clinical, biomedical research, and computer vision fields. Among the multiple observable features spanning multiple scales in the pathology images, nuclear morphology is one of the central criteria for diagnosis and grading. As a result it is also the most studied target in image computing. A large number of research papers have been devoted to the problem of extracting nuclei from digital pathology images, which is the foundation of any further correlation study. However, the validation and evaluation of nucleus extraction have not yet been formulated rigorously and systematically. Some studies report human-verified segmentations with thousands of nuclei, whereas a single whole slide image may contain up to a million. The main obstacle lies in the difficulty of obtaining such a large number of validated nuclei, which is essentially an impossible task for a pathologist. We propose a systematic validation and evaluation approach based on large-scale image synthesis. This could facilitate a more quantitatively validated study for the current and future histopathology image analysis field.
Tomesko, Jennifer; Touger-Decker, Riva; Dreker, Margaret; Zelig, Rena; Parrott, James Scott
2017-01-01
To explore knowledge and skill acquisition outcomes related to learning physical examination (PE) through computer-assisted instruction (CAI) compared with a face-to-face (F2F) approach, a systematic review and meta-analysis of literature published between January 2001 and December 2016 was conducted. Databases searched included Medline, Cochrane, CINAHL, ERIC, Ebsco, Scopus, and Web of Science. Studies were synthesized by study design, intervention, and outcomes. Statistical analyses included the DerSimonian-Laird random-effects model. In total, 7 studies were included in the review, and 5 in the meta-analysis. There were no statistically significant differences for knowledge (mean difference [MD] = 5.39, 95% confidence interval [CI]: -2.05 to 12.84) or skill acquisition (MD = 0.35, 95% CI: -5.30 to 6.01). The evidence does not suggest a strong consistent preference for either CAI or F2F instruction to teach students/trainees PE. Further research is needed to identify conditions under which knowledge and skill acquisition outcomes favor one mode of instruction over the other.
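For orientation, the DerSimonian-Laird random-effects model mentioned above pools study effects y_i with within-study variances v_i as follows; this is the standard formulation, shown for context rather than taken from the paper.

```latex
% Standard DerSimonian-Laird random-effects estimator (usual formulation, for
% orientation). Fixed-effect weights w_i = 1/v_i for k studies:
Q = \sum_{i=1}^{k} w_i \left( y_i - \frac{\sum_j w_j y_j}{\sum_j w_j} \right)^{2},
\qquad
\hat{\tau}^{2} = \max\!\left( 0,\; \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^{2} / \sum_i w_i} \right),
\qquad
\hat{\mu}_{\mathrm{RE}} = \frac{\sum_i y_i / (v_i + \hat{\tau}^{2})}{\sum_i 1 / (v_i + \hat{\tau}^{2})}.
```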
Detonation Product EOS Studies: Using ISLS to Refine Cheetah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaug, J M; Howard, W M; Fried, L E
2001-08-08
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Computational models are systematically improved with each addition of experimental data.
O'Mara-Eves, Alison; Thomas, James; McNaught, John; Miwa, Makoto; Ananiadou, Sophia
2015-01-14
The large and growing number of published studies, and their increasing rate of publication, makes the task of identifying relevant studies in an unbiased way for inclusion in systematic reviews both complex and time consuming. Text mining has been offered as a potential solution: through automating some of the screening process, reviewer time can be saved. The evidence base around the use of text mining for screening has not yet been pulled together systematically; this systematic review fills that research gap. Focusing mainly on non-technical issues, the review aims to increase awareness of the potential of these technologies and promote further collaborative research between the computer science and systematic review communities. Five research questions led our review: what is the state of the evidence base; how has workload reduction been evaluated; what are the purposes of semi-automation and how effective are they; how have key contextual problems of applying text mining to the systematic review field been addressed; and what challenges to implementation have emerged? We answered these questions using standard systematic review methods: systematic and exhaustive searching, quality-assured data extraction and a narrative synthesis to synthesise findings. The evidence base is active and diverse; there is almost no replication between studies or collaboration between research teams and, whilst it is difficult to establish any overall conclusions about best approaches, it is clear that efficiencies and reductions in workload are potentially achievable. On the whole, most studies suggested that a saving in workload of between 30% and 70% might be possible, though sometimes the saving in workload is accompanied by the loss of 5% of relevant studies (i.e. a 95% recall). Using text mining to prioritise the order in which items are screened should be considered safe and ready for use in 'live' reviews. Text mining may also be cautiously used as a 'second screener'. The use of text mining to eliminate studies automatically should be considered promising, but not yet fully proven. In highly technical/clinical areas, it may be used with a high degree of confidence; but more developmental and evaluative work is needed in other disciplines.
Unsworth, Carolyn A; Baker, Anne
2014-10-01
Driver rehabilitation has the potential to improve on-road safety and is commonly recommended to clients. The aim of this systematic review was to identify what intervention approaches are used by occupational therapists as part of driver rehabilitation programmes, and to determine the effectiveness of these interventions. Six electronic databases (MEDLINE, CINAHL, PsycInfo, Embase, The Cochrane Library, and OTDBase) were searched. Two authors independently reviewed studies reporting all types of research designs and for all patient populations, provided the interventions could be administered by occupational therapists. The methodological quality of studies was assessed using the 'Downs and Black Instrument', and the level of evidence for each intervention approach was established using 'Centre for Evidence Based Medicine' criteria. Sixteen studies were included in the review. The most common type of intervention approach used was computer-based driving simulator training (n=8), followed by off-road skill-specific training (n=4), and off-road education programmes (n=3). Car adaptations/modifications were used in one of the included studies. There was significant variability between studies with regard to the frequency, duration, and total number of intervention sessions, and the diagnoses of the participants. Of the four intervention approaches, there is evidence to support the effectiveness of off-road skill-specific training (with older clients), and computer-based driving simulator training (with both older clients and participants with acquired brain injury). Three types of intervention approaches are commonly reported; however, there is limited evidence to determine the effectiveness of these in improving fitness-to-drive. Further research is required with clients from a range of diagnostic groups to establish evidence-based interventions and determine their effectiveness in improving these clients' on-road fitness-to-drive. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Mewes, André; Hensen, Bennet; Wacker, Frank; Hansen, Christian
2017-02-01
In this article, we systematically examine the current state of research of systems that focus on touchless human-computer interaction in operating rooms and interventional radiology suites. We further discuss the drawbacks of current solutions and underline promising technologies for future development. A systematic literature search of scientific papers that deal with touchless control of medical software in the immediate environment of the operation room and interventional radiology suite was performed. This includes methods for touchless gesture interaction, voice control and eye tracking. Fifty-five research papers were identified and analyzed in detail including 33 journal publications. Most of the identified literature (62 %) deals with the control of medical image viewers. The others present interaction techniques for laparoscopic assistance (13 %), telerobotic assistance and operating room control (9 % each) as well as for robotic operating room assistance and intraoperative registration (3.5 % each). Only 8 systems (14.5 %) were tested in a real clinical environment, and 7 (12.7 %) were not evaluated at all. In the last 10 years, many advancements have led to robust touchless interaction approaches. However, only a few have been systematically evaluated in real operating room settings. Further research is required to cope with current limitations of touchless software interfaces in clinical environments. The main challenges for future research are the improvement and evaluation of usability and intuitiveness of touchless human-computer interaction and the full integration into productive systems as well as the reduction of necessary interaction steps and further development of hands-free interaction.
Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang
2015-07-31
We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.
NASA Astrophysics Data System (ADS)
Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang
2015-07-01
We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.
NASA Astrophysics Data System (ADS)
Vu, Tuan V.; Papavassiliou, Dimitrios V.
2018-05-01
In order to investigate the interfacial region between oil and water with the presence of surfactants using coarse-grained computations, both the interaction between different components of the system and the number of surfactant molecules present at the interface play an important role. However, in many prior studies, the amount of surfactants used was chosen rather arbitrarily. In this work, a systematic approach to develop coarse-grained models for anionic surfactants (such as sodium dodecyl sulfate) and nonionic surfactants (such as octaethylene glycol monododecyl ether) in oil-water interfaces is presented. The key is to place the theoretically calculated number of surfactant molecules on the interface at the critical micelle concentration. Based on this approach, the molecular description of surfactants and the effects of various interaction parameters on the interfacial tension are investigated. The results indicate that the interfacial tension is affected mostly by the head-water and tail-oil interaction. Even though the procedure presented herein is used with dissipative particle dynamics models, it can be applied for other coarse-grained methods to obtain the appropriate set of parameters (or force fields) to describe the surfactant behavior on the oil-water interface.
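The key step described above, placing the theoretically expected number of surfactant molecules on the interface at the CMC, amounts to N = A/a0, with a0 the area per molecule (obtainable, for example, from the maximum surface excess via the Gibbs adsorption isotherm). The numbers in the sketch below are placeholders, not values from the paper.

```python
# Back-of-the-envelope count of surfactant molecules to place on the interface,
# following the idea of matching the surface coverage expected at the CMC.
# N = A_interface / a0, with a0 = 1/(N_A * Gamma_max) from the Gibbs adsorption
# isotherm. All numerical values are placeholders.
AVOGADRO = 6.022e23                      # 1/mol

box_x_nm, box_y_nm = 20.0, 20.0          # lateral box size of the CG simulation
area_nm2 = box_x_nm * box_y_nm

gamma_max_mol_per_m2 = 3.0e-6            # placeholder maximum surface excess
a0_nm2 = 1.0e18 / (AVOGADRO * gamma_max_mol_per_m2)   # area per molecule in nm^2

n_surfactants = int(round(area_nm2 / a0_nm2))
print(f"area per molecule ~ {a0_nm2:.2f} nm^2 -> place {n_surfactants} molecules")
```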
Keijzers, Gerben; Sithirasenan, Vasugi
2012-02-01
To assess the chest computed tomography (CT) imaging interpreting skills of emergency department (ED) doctors and to study the effect of a CT chest imaging interpretation lecture on these skills. Sixty doctors in two EDs were randomized, using computerized randomization, to either attend a chest CT interpretation lecture or not to attend this lecture. Within 2 weeks of the lecture, the participants completed a questionnaire on demographic variables, anatomical knowledge, and diagnostic interpretation of 10 chest CT studies. Outcome measures included anatomical knowledge score, diagnosis score, and the combined overall score, all expressed as a percentage of correctly answered questions (0-100). Data on 58 doctors were analyzed, of which 27 were randomized to attend the lecture. The CT interpretation lecture did not have an effect on anatomy knowledge scores (72.9 vs. 70.2%), diagnosis scores (71.2 vs. 69.2%), or overall scores (71.4 vs. 69.5%). Twenty-nine percent of doctors stated that they had a systematic approach to chest CT interpretation. Overall self-perceived competency for interpreting CT imaging (brain, chest, abdomen) was low (between 3.2 and 5.2 on a 10-point Visual Analogue Scale). A single chest CT interpretation lecture did not improve chest CT interpretation by ED doctors. Less than one-third of doctors had a systematic approach to chest CT interpretation. A standardized systematic approach may improve interpretation skills.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadri, Keyvan, E-mail: keyvan.sadri@pci.uni-heidelberg.de; Meyer, Hans-Dieter, E-mail: hans-dieter.meyer@pci.uni-heidelberg.de; Lauvergnat, David, E-mail: david.lauvergnat@u-psud.fr
2014-09-21
For computational rovibrational spectroscopy the choice of the frame is critical for an approximate separation of overall rotation from internal motions. To minimize the coupling between internal coordinates and rotation, Eckart proposed a condition [“Some studies concerning rotating axes and polyatomic molecules,” Phys. Rev. 47, 552–558 (1935)] and a frame that fulfills this condition is hence called an Eckart frame. A method is developed to introduce in a systematic way the Eckart frame for the expression of the kinetic energy operator (KEO) in the polyspherical approach. The computed energy levels of a water molecule are compared with those obtained using a KEO in the standard definition of the Body-fixed frame of the polyspherical approach. The KEO in the Eckart frame leads to a faster convergence especially for large J states and vibrationally excited states. To provide an example with more degrees of freedom, rotational states of the vibrational ground state of the trans nitrous acid (HONO) are also investigated.
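For context, the Eckart conditions that such a frame must satisfy are, in their standard textbook form (with nuclear masses m_α, reference positions a_α and instantaneous positions r_α expressed in the body-fixed frame):

```latex
% Standard statement of the Eckart conditions (textbook form, shown for context;
% the paper's contribution is imposing them within the polyspherical KEO).
\sum_{\alpha} m_{\alpha}\,\bigl(\mathbf{r}_{\alpha} - \mathbf{a}_{\alpha}\bigr) = \mathbf{0}
\quad \text{(translational condition)},
\qquad
\sum_{\alpha} m_{\alpha}\,\mathbf{a}_{\alpha} \times \mathbf{r}_{\alpha} = \mathbf{0}
\quad \text{(rotational Eckart condition)}.
```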
Walker, Mirella; Schönborn, Sandro; Greifeneder, Rainer; Vetter, Thomas
2018-01-01
Upon a first encounter, individuals spontaneously associate faces with certain personality dimensions. Such first impressions can strongly impact judgments and decisions and may prove highly consequential. Researchers investigating the impact of facial information often rely on (a) real photographs that have been selected to vary on the dimension of interest, (b) morphed photographs, or (c) computer-generated faces (avatars). All three approaches have distinct advantages. Here we present the Basel Face Database, which combines these advantages. In particular, the Basel Face Database consists of real photographs that are subtly, but systematically manipulated to show variations in the perception of the Big Two and the Big Five personality dimensions. To this end, the information specific to each psychological dimension is isolated and modeled in new photographs. Two studies serve as systematic validation of the Basel Face Database. The Basel Face Database opens a new pathway for researchers across psychological disciplines to investigate effects of perceived personality. PMID:29590124
Walker, Mirella; Schönborn, Sandro; Greifeneder, Rainer; Vetter, Thomas
2018-01-01
Upon a first encounter, individuals spontaneously associate faces with certain personality dimensions. Such first impressions can strongly impact judgments and decisions and may prove highly consequential. Researchers investigating the impact of facial information often rely on (a) real photographs that have been selected to vary on the dimension of interest, (b) morphed photographs, or (c) computer-generated faces (avatars). All three approaches have distinct advantages. Here we present the Basel Face Database, which combines these advantages. In particular, the Basel Face Database consists of real photographs that are subtly, but systematically manipulated to show variations in the perception of the Big Two and the Big Five personality dimensions. To this end, the information specific to each psychological dimension is isolated and modeled in new photographs. Two studies serve as systematic validation of the Basel Face Database. The Basel Face Database opens a new pathway for researchers across psychological disciplines to investigate effects of perceived personality.
Revealed Preference Methods for Studying Bicycle Route Choice-A Systematic Review.
Pritchard, Ray
2018-03-07
One fundamental aspect of promoting utilitarian bicycle use involves making modifications to the built environment to improve the safety, efficiency and enjoyability of cycling. Revealed preference data on bicycle route choice can assist greatly in understanding the actual behaviour of a highly heterogeneous group of users, which in turn assists the prioritisation of infrastructure or other built environment initiatives. This systematic review seeks to compare the relative strengths and weaknesses of the empirical approaches for evaluating whole journey route choices of bicyclists. Two electronic databases were systematically searched for a selection of keywords pertaining to bicycle and route choice. In total seven families of methods are identified: GPS devices, smartphone applications, crowdsourcing, participant-recalled routes, accompanied journeys, egocentric cameras and virtual reality. The study illustrates a trade-off in the quality of data obtainable and the average number of participants. Future additional methods could include dockless bikeshare, multiple camera solutions using computer vision and immersive bicycle simulator environments.
Revealed Preference Methods for Studying Bicycle Route Choice—A Systematic Review
2018-01-01
One fundamental aspect of promoting utilitarian bicycle use involves making modifications to the built environment to improve the safety, efficiency and enjoyability of cycling. Revealed preference data on bicycle route choice can assist greatly in understanding the actual behaviour of a highly heterogeneous group of users, which in turn assists the prioritisation of infrastructure or other built environment initiatives. This systematic review seeks to compare the relative strengths and weaknesses of the empirical approaches for evaluating whole journey route choices of bicyclists. Two electronic databases were systematically searched for a selection of keywords pertaining to bicycle and route choice. In total seven families of methods are identified: GPS devices, smartphone applications, crowdsourcing, participant-recalled routes, accompanied journeys, egocentric cameras and virtual reality. The study illustrates a trade-off in the quality of data obtainable and the average number of participants. Future additional methods could include dockless bikeshare, multiple camera solutions using computer vision and immersive bicycle simulator environments. PMID:29518991
Error, Marc; Ashby, Shaelene; Orlandi, Richard R; Alt, Jeremiah A
2018-01-01
Objective: To determine if the introduction of a systematic preoperative sinus computed tomography (CT) checklist improves identification of critical anatomic variations in sinus anatomy among patients undergoing endoscopic sinus surgery. Study Design: Single-blinded prospective cohort study. Setting: Tertiary care hospital. Subjects and Methods: Otolaryngology residents were asked to identify critical surgical sinus anatomy on preoperative CT scans before and after introduction of a systematic approach to reviewing sinus CT scans. The percentage of correctly identified structures was documented and compared with a 2-sample t test. Results: A total of 57 scans were reviewed: 28 preimplementation and 29 postimplementation. Implementation of the sinus CT checklist improved identification of critical sinus anatomy from 24% to 84% correct (P < .001). All residents, junior and senior, demonstrated significant improvement in identification of sinus anatomic variants, including those not directly included in the systematic review implemented. Conclusion: The implementation of a preoperative endoscopic sinus surgery radiographic checklist improves identification of critical anatomic sinus variations in a training population.
Certainty Equivalence M-MRAC for Systems with Unmatched Uncertainties
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
The paper presents a certainty equivalence state feedback indirect adaptive control design method for the systems of any relative degree with unmatched uncertainties. The approach is based on the parameter identification (estimation) model, which is completely separated from the control design and is capable of producing parameter estimates as fast as the computing power allows without generating high frequency oscillations. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters.
An Event-Based Approach to Distributed Diagnosis of Continuous Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhurry, Indranil; Biswas, Gautam; Koutsoukos, Xenofon
2010-01-01
Distributed fault diagnosis solutions are becoming necessary due to the complexity of modern engineering systems, and the advent of smart sensors and computing elements. This paper presents a novel event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, based on a qualitative abstraction of measurement deviations from the nominal behavior. We systematically derive dynamic fault signatures expressed as event-based fault models. We develop a distributed diagnoser design algorithm that uses these models for designing local event-based diagnosers based on global diagnosability analysis. The local diagnosers each generate globally correct diagnosis results locally, without a centralized coordinator, and by communicating a minimal number of measurements between themselves. The proposed approach is applied to a multi-tank system, and results demonstrate a marked improvement in scalability compared to a centralized approach.
Network Security Validation Using Game Theory
NASA Astrophysics Data System (ADS)
Papadopoulou, Vicky; Gregoriades, Andreas
Non-functional requirements (NFR) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validate these requirements given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, has limited the immunity of the distributed systems that depend on these networks. Security requirements specification needs a proactive approach. Network infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee the minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented along with an example that demonstrates the application of the approach.
A Systematic Approach to Error Free Telemetry
2017-06-28
A Systematic Approach to Error-Free Telemetry (412TW-TIM-17-03) is a Technical Information Memorandum covering work from February 2016 onward, submitted by the Commander, 412th Test Wing, Edwards AFB, California 93524. Distribution A: approved for public release; distribution is unlimited.
Computational Prediction of Metabolism: Sites, Products, SAR, P450 Enzyme Dynamics, and Mechanisms
2012-01-01
Metabolism of xenobiotics remains a central challenge for the discovery and development of drugs, cosmetics, nutritional supplements, and agrochemicals. Metabolic transformations are frequently related to the incidence of toxic effects that may result from the emergence of reactive species, the systemic accumulation of metabolites, or by induction of metabolic pathways. Experimental investigation of the metabolism of small organic molecules is particularly resource demanding; hence, computational methods are of considerable interest to complement experimental approaches. This review provides a broad overview of structure- and ligand-based computational methods for the prediction of xenobiotic metabolism. Current computational approaches to address xenobiotic metabolism are discussed from three major perspectives: (i) prediction of sites of metabolism (SOMs), (ii) elucidation of potential metabolites and their chemical structures, and (iii) prediction of direct and indirect effects of xenobiotics on metabolizing enzymes, where the focus is on the cytochrome P450 (CYP) superfamily of enzymes, the cardinal xenobiotics metabolizing enzymes. For each of these domains, a variety of approaches and their applications are systematically reviewed, including expert systems, data mining approaches, quantitative structure–activity relationships (QSARs), and machine learning-based methods, pharmacophore-based algorithms, shape-focused techniques, molecular interaction fields (MIFs), reactivity-focused techniques, protein–ligand docking, molecular dynamics (MD) simulations, and combinations of methods. Predictive metabolism is a developing area, and there is still enormous potential for improvement. However, it is clear that the combination of rapidly increasing amounts of available ligand- and structure-related experimental data (in particular, quantitative data) with novel and diverse simulation and modeling approaches is accelerating the development of effective tools for prediction of in vivo metabolism, which is reflected by the diverse and comprehensive data sources and methods for metabolism prediction reviewed here. This review attempts to survey the range and scope of computational methods applied to metabolism prediction and also to compare and contrast their applicability and performance. PMID:22339582
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palazzo, S.; Vagliasindi, G.; Arena, P.
2010-08-15
In recent years cameras have become increasingly common tools in scientific applications. They are now quite systematically used in magnetic confinement fusion, to the point that infrared imaging is starting to be used systematically for real-time machine protection in major devices. However, in order to guarantee that the control system can always react rapidly in case of critical situations, the time required for the processing of the images must be as predictable as possible. The approach described in this paper combines the new computational paradigm of cellular nonlinear networks (CNNs) with field-programmable gate arrays and has been tested in an application for the detection of hot spots on the plasma facing components in JET. The developed system is able to perform real-time hot spot recognition, by processing the image stream captured by the JET wide-angle infrared camera, with the guarantee that computational time is constant and deterministic. The statistical results obtained from a quite extensive set of examples show that this solution approximates very well an ad hoc serial software algorithm, with no false or missed alarms and an almost perfect overlapping of alarm intervals. The computational time can be reduced to a millisecond time scale for 8 bit 496x560-sized images. Moreover, in our implementation, the computational time, besides being deterministic, is practically independent of the number of iterations performed by the CNN - unlike software CNN implementations.
A depth-first search algorithm to compute elementary flux modes by linear programming.
Quek, Lake-Ee; Nielsen, Lars K
2014-07-30
The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
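The core primitive in such an LP-based enumeration is a flux feasibility test: given the stoichiometric matrix and a set of reactions forced to zero, ask whether a nonzero steady-state flux still exists. The sketch below illustrates that test on a hypothetical three-metabolite toy network using scipy; it is not the authors' algorithm or code, and the network, bounds and reaction indices are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def flux_feasible(S, suppressed, active):
    """Return True if a steady-state flux v (S v = 0) exists that is zero on the
    'suppressed' reactions and carries at least unit flux through the 'active'
    reaction. All reactions are treated as irreversible in this toy example."""
    n = S.shape[1]
    c = np.zeros(n)                                   # feasibility only: minimize 0
    bounds = [(0.0, 0.0) if j in suppressed else (0.0, 1000.0) for j in range(n)]
    bounds[active] = (1.0, 1000.0)                    # force a nonzero flux
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
    return res.success

# Hypothetical toy network: r0: -> A, r1: A -> B, r2: B -> C, r3: A -> C, r4: C ->
S = np.array([
    [1, -1,  0, -1,  0],   # metabolite A
    [0,  1, -1,  0,  0],   # metabolite B
    [0,  0,  1,  1, -1],   # metabolite C
])
print(flux_feasible(S, suppressed={3}, active=1))      # True: route via B still works
print(flux_feasible(S, suppressed={1, 3}, active=0))   # False: A can no longer be consumed
```

In a depth-first enumeration, a test of this kind is what prunes infeasible candidate supports before they are expanded further.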
NASA Astrophysics Data System (ADS)
Pineda, M.; Stamatakis, M.
2017-07-01
Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
Monge-Pereira, Esther; Ibañez-Pereda, Jaime; Alguacil-Diego, Isabel M; Serrano, Jose I; Spottorno-Rubio, María P; Molina-Rueda, Francisco
2017-09-01
Brain-computer interface (BCI) systems have been suggested as a promising tool for neurorehabilitation. However, to date, there is a lack of homogeneous findings. Furthermore, no systematic reviews have analyzed the degree of validation of these interventions for upper limb (UL) motor rehabilitation poststroke. The study aims were to compile all available studies that assess a UL intervention based on an electroencephalography (EEG) BCI system in stroke; to analyze the methodological quality of the studies retrieved; and to determine the effects of these interventions on the improvement of motor abilities. Study type: systematic review. Searches were conducted in PubMed, PEDro, Embase, Cumulative Index to Nursing and Allied Health, Web of Science, and Cochrane Central Register of Controlled Trials from inception to September 30, 2015. This systematic review compiles all available studies that assess UL intervention based on an EEG-BCI system in patients with stroke, analyzing their methodological quality using the Critical Review Form for Quantitative Studies, and determining the grade of recommendation of these interventions for improving motor abilities as established by the Oxford Centre for Evidence-based Medicine. The articles were selected according to the following criteria: studies evaluating an EEG-based BCI intervention; studies including patients with a stroke and hemiplegia, regardless of lesion origin or temporal evolution; interventions using an EEG-based BCI to restore functional abilities of the affected UL, regardless of the interface used or its combination with other therapies; and studies using validated tools to evaluate motor function. After the literature search, 13 articles were included in this review: 4 studies were randomized controlled trials; 1 study was a controlled study; 4 studies were case series studies; and 4 studies were case reports. The methodological quality of the included papers ranged from 6 to 15, and the level of evidence varied from 1b to 5. The articles included in this review involved a total of 141 stroke patients. This systematic review suggests that BCI interventions may be a promising rehabilitation approach in subjects with stroke. Level of evidence: II. Copyright © 2017 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Mastering cognitive development theory in computer science education
NASA Astrophysics Data System (ADS)
Gluga, Richard; Kay, Judy; Lister, Raymond; Simon; Kleitman, Sabina
2013-03-01
To design an effective computer science curriculum, educators require a systematic method of classifying the difficulty level of learning activities and assessment tasks. This is important for curriculum design and implementation and for communication between educators. Different educators must be able to use the method consistently, so that classified activities and assessments are comparable across the subjects of a degree, and, ideally, comparable across institutions. One widespread approach to supporting this is to write learning objects in terms of Bloom's Taxonomy. This, or other such classifications, is likely to be more effective if educators can use them consistently, in the way experts would use them. To this end, we present the design and evaluation of our online interactive web-based tutorial system, which can be configured and used to offer training in different classification schemes. We report on results from three evaluations. First, 17 computer science educators complete a tutorial on using Bloom's Taxonomy to classify programming examination questions. Second, 20 computer science educators complete a Neo-Piagetian tutorial. Third evaluation was a comparison of inter-rater reliability scores of computer science educators classifying programming questions using Bloom's Taxonomy, before and after taking our tutorial. Based on the results from these evaluations, we discuss the effectiveness of our tutorial system design for teaching computer science educators how to systematically and consistently classify programming examination questions. We also discuss the suitability of Bloom's Taxonomy and Neo-Piagetian theory for achieving this goal. The Bloom's and Neo-Piagetian tutorials are made available as a community resource. The contributions of this paper are the following: the tutorial system for learning classification schemes for the purpose of coding the difficulty of computing learning materials; its evaluation; new insights into the consistency that computing educators can achieve using Bloom; and first insights into the use of Neo-Piagetian theory by a group of classifiers.
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
A Systems Approach to Costing in the Blood Bank
Delon, Gerald L.; Smalley, Harold E.
1969-01-01
A macroscopic approach to departmental cost finding is combined with a microscopic approach to the weighting of laboratory tests in a mathematical model which, when incorporated into a relative unit value format, yields unit costs for such tests under a wide variety of operational conditions. The task of updating such costs to reflect changing conditions can be facilitated by a computer program incorporating the capability of pricing the various tests to achieve any desired profit or loss or to break even. Among other potential uses of such a technique, the effects on unit cost per test caused by increasing or decreasing the number of technicians or the volume of tests can be systematically examined, and pricing can be updated each year as hospital costs change. PMID:5799486
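As a rough illustration of the relative-unit-value idea described above, the sketch below spreads a department's total cost over weighted test volumes and then prices each test to reach a target margin. All figures, test names and weights are hypothetical and are not taken from the paper.

```python
# Hypothetical figures for illustration only.
department_cost = 120_000.0              # total blood bank cost for the period
tests = {                                # test name: (relative value units, volume)
    "ABO/Rh type":     (1.0, 4000),
    "Antibody screen": (2.5, 2500),
    "Crossmatch":      (3.0, 1800),
}

total_units = sum(rvu * vol for rvu, vol in tests.values())
cost_per_unit = department_cost / total_units   # macroscopic cost spread over weighted units

target_margin = 0.05                            # desired profit margin (0 to break even)
for name, (rvu, vol) in tests.items():
    unit_cost = rvu * cost_per_unit
    price = unit_cost * (1 + target_margin)
    print(f"{name:16s} cost/test = {unit_cost:6.2f}  price = {price:6.2f}")
```

Changing the department cost, the weights or the volumes and rerunning the computation corresponds to the updating exercise the abstract describes.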
Reavley, Nicola; Livingston, Jenni; Buchbinder, Rachelle; Bennell, Kim; Stecki, Chris; Osborne, Richard Harry
2010-02-01
Despite demands for evidence-based research and practice, little attention has been given to systematic approaches to the development of complex interventions to tackle workplace health problems. This paper outlines an approach to the initial stages of workplace program development that integrates health promotion and disease management. The approach commences with systematic and genuine processes of obtaining information from key stakeholders with broad experience of these interventions. This information is constructed into a program framework in which practice-based and research-informed elements are both valued. We used this approach to develop a workplace education program to reduce the onset and impact of a common chronic disease - osteoarthritis. To gain information systematically at a national level, a structured concept mapping workshop with 47 participants from across Australia was undertaken. Participants were selected to maximise the whole-of-workplace perspective and included health education providers, academics, clinicians and policymakers. Participants generated statements in response to a seeding statement: Thinking as broadly as possible, what changes in education and support should occur in the workplace to help in the prevention and management of arthritis? Participants grouped the resulting statements into conceptually coherent groups and a computer program was used to generate a 'cluster map' along with a list of statements sorted according to cluster membership. In combination with research-based evidence, the concept map informed the development of a program logic model incorporating the program's guiding principles, possible service providers, services, training modes, program elements and the causal processes by which participants might benefit. The program logic model components were further validated through research findings from diverse fields, including health education, coaching, organisational learning, workplace interventions, workforce development and osteoarthritis disability prevention. In summary, wide and genuine consultation, concept mapping, and evidence-based program logic development were integrated to develop a whole-of-system complex intervention in which potential effectiveness and assimilation into the workplace were optimised. Copyright 2009 Elsevier Ltd. All rights reserved.
How to decompose arbitrary continuous-variable quantum operations.
Sefi, Seckin; van Loock, Peter
2011-10-21
We present a general, systematic, and efficient method for decomposing any given exponential operator of bosonic mode operators, describing an arbitrary multimode Hamiltonian evolution, into a set of universal unitary gates. Although our approach is mainly oriented towards continuous-variable quantum computation, it may be used more generally whenever quantum states are to be transformed deterministically, e.g., in quantum control, discrete-variable quantum computation, or Hamiltonian simulation. We illustrate our scheme by presenting decompositions for various nonlinear Hamiltonians including quartic Kerr interactions. Finally, we conclude with two potential experiments utilizing offline-prepared optical cubic states and homodyne detections, in which quantum information is processed optically or in an atomic memory using quadratic light-atom interactions. © 2011 American Physical Society
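One standard identity underlying decompositions of exponential operators is the Lie-Trotter product formula, together with its lowest-order commutator correction (valid under suitable conditions on the operators). The paper's systematic construction goes well beyond this, but the formulas below indicate the kind of operator splitting involved.

```latex
e^{t(\hat{A}+\hat{B})} = \lim_{n\to\infty}\left(e^{t\hat{A}/n}\, e^{t\hat{B}/n}\right)^{n},
\qquad
e^{t(\hat{A}+\hat{B})} = e^{t\hat{A}}\, e^{t\hat{B}}\, e^{-\frac{t^{2}}{2}[\hat{A},\hat{B}]} + \mathcal{O}(t^{3}).
```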
Beyond BCS pairing in high-density neutron matter
NASA Astrophysics Data System (ADS)
Rios, A.; Ding, D.; Dussan, H.; Dickhoff, W. H.; Witte, S. J.; Polls, A.
2018-01-01
Pairing gaps in neutron matter need to be computed in a wide range of densities to address open questions in neutron star phenomenology. Traditionally, the Bardeen-Cooper-Schrieffer approach has been used to compute gaps from bare nucleon-nucleon interactions. Here, we incorporate the influence of short- and long-range correlations into pairing properties. Short-range correlations are treated including the appropriate fragmentation of single-particle states, and they suppress the gaps substantially. Long-range correlations dress the pairing interaction via density and spin modes, and provide a relatively small correction. We use three different interactions as a starting point to control for any systematic effects. Results are relevant for neutron-star cooling scenarios, in particular in view of the recent observational data on Cassiopeia A.
OPDOT: A computer program for the optimum preliminary design of a transport airplane
NASA Technical Reports Server (NTRS)
Sliwa, S. M.; Arbuckle, P. D.
1980-01-01
A description of a computer program, OPDOT, for the optimal preliminary design of transport aircraft is given. OPDOT utilizes constrained parameter optimization to minimize a performance index (e.g., direct operating cost per block hour) while satisfying operating constraints. The approach in OPDOT uses geometric descriptors as independent design variables. The independent design variables are systematically iterated to find the optimum design. The technical development of the program is provided and a program listing with sample input and output are utilized to illustrate its use in preliminary design. It is not meant to be a user's guide, but rather a description of a useful design tool developed for studying the application of new technologies to transport airplanes.
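OPDOT itself is a historical NASA code, but the underlying idea, constrained minimization of a performance index such as direct operating cost over geometric design variables, can be sketched with a modern optimizer. In the sketch below the cost surrogate, the constraint function and all numbers are invented placeholders, not OPDOT's internal models.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical design variables: x = [wing area (m^2), aspect ratio]
def direct_operating_cost(x):
    area, ar = x
    # Made-up smooth surrogate: fuel burn falls with aspect ratio,
    # structural/ownership cost grows with area and aspect ratio.
    return 5000.0 / ar + 2.0 * area + 0.5 * ar**2

def field_length_constraint(x):        # >= 0 when the constraint is satisfied
    area, ar = x
    return area - 100.0                 # illustrative minimum wing area

res = minimize(
    direct_operating_cost,
    x0=np.array([120.0, 8.0]),
    bounds=[(80.0, 250.0), (6.0, 14.0)],
    constraints=[{"type": "ineq", "fun": field_length_constraint}],
    method="SLSQP",
)
print("optimal design variables:", res.x, "cost index:", res.fun)
```

The systematic iteration of independent design variables described in the abstract corresponds to the optimizer's search over x subject to the bounds and constraints.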
The role of voice input for human-machine communication.
Cohen, P R; Oviatt, S L
1995-01-01
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803
The next step in biology: a periodic table?
Dhar, Pawan K
2007-08-01
Systems biology is an approach to explain the behaviour of a system in relation to its individual components. Synthetic biology uses key hierarchical and modular concepts of systems biology to engineer novel biological systems. In my opinion, the next step in biology is to take molecule-to-phenotype data from these approaches and integrate them in the form of a periodic table. A periodic table in biology would provide a chassis to classify, systematize and compare the diversity of component properties vis-a-vis system behaviour. Using such a periodic table, it could be possible to compute higher-level interactions from component properties. This paper examines the concept of building a bio-periodic table using the protein fold as the fundamental unit.
Dolgov, Igor; Birchfield, David A; McBeath, Michael K; Thornburg, Harvey; Todd, Christopher G
2009-04-01
Perception of floor-projected moving geometric shapes was examined in the context of the Situated Multimedia Arts Learning Laboratory (SMALLab), an immersive, mixed-reality learning environment. As predicted, the projected destinations of shapes which retreated in depth (proximal origin) were judged significantly less accurately than those that approached (distal origin). Participants maintained similar magnitudes of error throughout the session, and no effect of practice was observed. Shape perception in an immersive multimedia environment is comparable to the real world. One may conclude that systematic exploration of basic psychological phenomena in novel mediated environments is integral to an understanding of human behavior in novel human-computer interaction architectures.
Implementation of efficient trajectories for an ultrasonic scanner using chaotic maps
NASA Astrophysics Data System (ADS)
Almeda, A.; Baltazar, A.; Treesatayapun, C.; Mijarez, R.
2012-05-01
Typical ultrasonic methodology for nondestructive scanning evaluation uses systematic scanning paths. In many cases, this approach is time inefficient and also energy and computational power consuming. Here, a methodology for the scanning of defects using an ultrasonic echo-pulse scanning technique combined with chaotic trajectory generation is proposed. This is implemented in a Cartesian coordinate robotic system developed in our lab. To cover the entire search area, a chaotic function and a proposed mirror mapping were incorporated. To improve detection probability, our proposed scanning methodology is complemented with a probabilistic approach of discontinuity detection. The developed methodology was found to be more efficient than traditional ones used to localize and characterize hidden flaws.
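The specific chaotic function and mirror mapping used by the authors are not reproduced here; as a hedged illustration of the general idea, the sketch below uses the logistic map (a common chaotic generator) to produce scan coordinates over a rectangular inspection area. The seeds, map parameter and area dimensions are arbitrary.

```python
import numpy as np

def logistic_trajectory(n_points, x0=0.37, y0=0.71, r=3.99,
                        width=200.0, height=150.0):
    """Generate n_points (x, y) scan positions from two logistic maps.
    The logistic map and the linear scaling used here are illustrative
    choices, not the scanner's actual chaotic function or mirror mapping."""
    xs, ys = np.empty(n_points), np.empty(n_points)
    x, y = x0, y0
    for i in range(n_points):
        x = r * x * (1.0 - x)          # chaotic iterate in (0, 1)
        y = r * y * (1.0 - y)
        xs[i] = x * width              # map onto the physical scan area (mm)
        ys[i] = y * height
    return xs, ys

xs, ys = logistic_trajectory(500)
print(xs[:3], ys[:3])
```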
Dynamic contrast enhanced CT in nodule characterization: How we review and report.
Qureshi, Nagmi R; Shah, Andrew; Eaton, Rosemary J; Miles, Ken; Gilbert, Fiona J
2016-07-18
Incidental indeterminate solitary pulmonary nodules (SPN) that measure less than 3 cm in size are an increasingly common finding on computed tomography (CT) worldwide. Once identified, there are a number of imaging strategies that can be performed to help with nodule characterization. These include interval CT, dynamic contrast enhanced computed tomography (DCE-CT), and (18)F-fluorodeoxyglucose positron emission tomography-computed tomography ((18)F-FDG-PET-CT). To date, the most cost-effective and efficient non-invasive test or combination of tests for optimal nodule characterization has yet to be determined. DCE-CT is a functional test that involves the acquisition of a dynamic series of images of a nodule before and following the administration of intravenous iodinated contrast medium. This article provides an overview of the current indications and limitations of DCE-CT in nodule characterization and a systematic approach to how to perform, analyse and interpret a DCE-CT scan.
Dimitrov, Borislav D; Motterlini, Nicola; Fahey, Tom
2015-01-01
Objective Estimating calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are neither published nor accessible or sufficient or no individual participant or patient data are available. Our aims were to describe a simplified approach for outcomes prediction and calibration assessment and evaluate its functionality and validity. Study design and methods: Methodological study of systematic reviews of validation studies of CPRs: a) ABCD2 rule for prediction of 7 day stroke; and b) CRB-65 rule for prediction of 30 day mortality. Predicted outcomes in a sample validation study were computed by CPR distribution patterns (“derivation model”). As confirmation, a logistic regression model (with derivation study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of “predicted:observed” risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I2) on forest plots (fixed and random effects models), with and without adjustment of intercepts. The above approach was also applied to the CRB-65 rule. Results Our simplified method, applied to ABCD2 rule in three risk strata (low, 0–3; intermediate, 4–5; high, 6–7 points), indicated that predictions are identical to those computed by univariate, CPR-based logistic regression model. Discrimination was good (c-statistics =0.61–0.82), however, calibration in some studies was low. In such cases with miscalibration, the under-prediction (RRs =0.73–0.91, 95% CIs 0.41–1.48) could be further corrected by intercept adjustment to account for incidence differences. An improvement of both heterogeneities and P-values (Hosmer-Lemeshow goodness-of-fit test) was observed. Better calibration and improved pooled RRs (0.90–1.06), with narrower 95% CIs (0.57–1.41) were achieved. Conclusion Our results have an immediate clinical implication in situations when predicted outcomes in CPR validation studies are lacking or deficient by describing how such predictions can be obtained by everyone using the derivation study alone, without any need for highly specialized knowledge or sophisticated statistics. PMID:25931829
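The pooling step described above can be illustrated with a minimal fixed-effect (inverse-variance) meta-analysis of predicted:observed risk ratios. The study counts below are hypothetical, and the log-RR variance is a rough delta-method approximation rather than the authors' exact computation.

```python
import numpy as np

# Hypothetical per-study counts: (predicted events, observed events, cohort size)
studies = [(12, 15, 500), (30, 28, 1200), (8, 11, 400)]

log_rr, weights = [], []
for pred, obs, n in studies:
    rr = (pred / n) / (obs / n)                      # predicted:observed risk ratio
    # Approximate variance of log RR (delta method; illustrative only, since the
    # predicted and observed counts come from the same cohort).
    var = 1.0 / pred - 1.0 / n + 1.0 / obs - 1.0 / n
    log_rr.append(np.log(rr))
    weights.append(1.0 / var)

log_rr, weights = np.array(log_rr), np.array(weights)
pooled = np.sum(weights * log_rr) / np.sum(weights)  # fixed-effect estimate
se = np.sqrt(1.0 / np.sum(weights))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print("pooled RR =", np.exp(pooled), "95% CI =", ci)
```

A random-effects pooling or an intercept adjustment, as discussed in the abstract, would modify the weights or shift the predicted risks before this step.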
Methods for converging correlation energies within the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
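A common complete-basis-set extrapolation for correlation energies assumes E(X) = E_CBS + A X^(-3) in the cardinal number X of the basis set; the specific extrapolation model used in this work may differ, and the energies below are placeholders. A two-point version of that formula is sketched here.

```python
def cbs_two_point(e_x, e_y, x, y, power=3):
    """Two-point extrapolation assuming E(X) = E_CBS + A / X**power, solved for E_CBS.
    The X**-3 form is a common choice for correlation energies; the scheme used
    in the work above may differ."""
    return (x**power * e_x - y**power * e_y) / (x**power - y**power)

# Placeholder correlation energies (hartree) for cardinal numbers X = 3 (TZ) and X = 4 (QZ)
e_tz, e_qz = -0.3050, -0.3180
print(cbs_two_point(e_tz, e_qz, 3, 4))
```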
Method to manage integration error in the Green-Kubo method.
Oliveira, Laura de Sousa; Greaney, P Alex
2017-02-01
The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.
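A minimal sketch of the Green-Kubo workflow the paper analyzes is given below: accumulate the heat-flux autocorrelation, integrate it up to a chosen truncation time, and scale by a prefactor. The flux here is a synthetic, unitless correlated series, so the output is not a physical conductivity, and the prefactor convention depends on how the flux is defined; the truncation time t_cut is exactly the knob whose systematic-error-versus-noise trade-off the paper addresses.

```python
import numpy as np

def autocorrelation(j, n_lags):
    """Unbiased one-sided autocorrelation of a zero-mean series, via FFT."""
    n = len(j)
    f = np.fft.rfft(j, 2 * n)                      # zero-pad to avoid circular wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n_lags]
    return acf / np.arange(n, n - n_lags, -1)      # divide by the sample count per lag

def green_kubo_kappa(flux, dt, t_cut, volume, temperature, kb=1.380649e-23):
    """kappa ~ V / (kB T^2) * integral_0^{t_cut} <J(0) J(t)> dt for a scalar flux.
    The prefactor convention depends on how the flux is defined; t_cut trades
    systematic truncation error against integrated noise."""
    n_cut = int(t_cut / dt)
    acf = autocorrelation(flux - flux.mean(), n_cut)
    return volume / (kb * temperature**2) * acf.sum() * dt

# Synthetic flux: exponentially correlated AR(1) series (unitless, illustration only).
rng = np.random.default_rng(0)
dt, n, tau = 1e-15, 50_000, 2e-12
a = np.exp(-dt / tau)
flux = np.empty(n)
flux[0] = rng.normal()
for i in range(1, n):
    flux[i] = a * flux[i - 1] + np.sqrt(1.0 - a**2) * rng.normal()

print(green_kubo_kappa(flux, dt, t_cut=10e-12, volume=1e-26, temperature=300.0))
```

Repeating the integration for a range of t_cut values and inspecting how the result drifts is the simplest way to see the growing random-walk envelope the abstract describes.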
Lando, David; Stevens, Tim J; Basu, Srinjan; Laue, Ernest D
2018-01-01
Single-cell chromosome conformation capture approaches are revealing the extent of cell-to-cell variability in the organization and packaging of genomes. These single-cell methods, unlike their multi-cell counterparts, allow straightforward computation of realistic chromosome conformations that may be compared and combined with other, independent, techniques to study 3D structure. Here we discuss how single-cell Hi-C and subsequent 3D genome structure determination allows comparison with data from microscopy. We then carry out a systematic evaluation of recently published single-cell Hi-C datasets to establish a computational approach for the evaluation of single-cell Hi-C protocols. We show that the calculation of genome structures provides a useful tool for assessing the quality of single-cell Hi-C data because it requires a self-consistent network of interactions, relating to the underlying 3D conformation, with few errors, as well as sufficient longer-range cis- and trans-chromosomal contacts.
Fulcher, Ben D; Jones, Nick S
2017-11-22
Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
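hctsa itself is a MATLAB toolbox; purely as a conceptual sketch of the highly comparative idea (compute many features per series, then rank them by how well they separate labeled groups), the Python snippet below uses a tiny hand-picked feature set and a Welch-style statistic. None of this reflects the hctsa API or its feature library.

```python
import numpy as np

def features(ts):
    """A tiny, illustrative feature vector (hctsa computes thousands of such features)."""
    diffs = np.diff(ts)
    return np.array([
        ts.mean(), ts.std(),
        np.abs(np.corrcoef(ts[:-1], ts[1:])[0, 1]),          # lag-1 autocorrelation (magnitude)
        (np.sign(diffs[:-1]) != np.sign(diffs[1:])).mean(),   # turning-point rate
    ])

rng = np.random.default_rng(1)
group_a = [np.cumsum(rng.normal(size=300)) for _ in range(20)]   # random walks
group_b = [rng.normal(size=300) for _ in range(20)]              # white noise

fa = np.array([features(t) for t in group_a])
fb = np.array([features(t) for t in group_b])

# Rank features by an absolute Welch-style t statistic between the two groups.
t = np.abs(fa.mean(0) - fb.mean(0)) / np.sqrt(fa.var(0) / len(fa) + fb.var(0) / len(fb))
names = ["mean", "std", "lag-1 ac", "turning rate"]
for name, score in sorted(zip(names, t), key=lambda p: -p[1]):
    print(f"{name:14s} {score:.2f}")
```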
2013-01-01
Background The regenerative response of Schwann cells after peripheral nerve injury is a critical process directly related to the pathophysiology of a number of neurodegenerative diseases. This SC injury response is dependent on an intricate gene regulatory program coordinated by a number of transcription factors and microRNAs, but the interactions among them remain largely unknown. Uncovering the transcriptional and post-transcriptional regulatory networks governing the Schwann cell injury response is a key step towards a better understanding of Schwann cell biology and may help develop novel therapies for related diseases. Performing such comprehensive network analysis requires systematic bioinformatics methods to integrate multiple genomic datasets. Results In this study we present a computational pipeline to infer transcription factor and microRNA regulatory networks. Our approach combined mRNA and microRNA expression profiling data, ChIP-Seq data of transcription factors, and computational transcription factor and microRNA target prediction. Using mRNA and microRNA expression data collected in a Schwann cell injury model, we constructed a regulatory network and studied regulatory pathways involved in Schwann cell response to injury. Furthermore, we analyzed network motifs and obtained insights on cooperative regulation of transcription factors and microRNAs in Schwann cell injury recovery. Conclusions This work demonstrates a systematic method for gene regulatory network inference that may be used to gain new information on gene regulation by transcription factors and microRNAs. PMID:23387820
NASA Astrophysics Data System (ADS)
Perdigão, R. A. P.
2017-12-01
Predictability assessments are traditionally made on a case-by-case basis, often by running the particular model of interest with randomly perturbed initial/boundary conditions and parameters, producing computationally expensive ensembles. These approaches provide a lumped statistical view of uncertainty evolution, without eliciting the fundamental processes and interactions at play in the uncertainty dynamics. In order to address these limitations, we introduce a systematic dynamical framework for predictability assessment and forecast, by analytically deriving governing equations of predictability in terms of the fundamental architecture of dynamical systems, independent of any particular problem under consideration. The framework further relates multiple uncertainty sources along with their coevolutionary interplay, enabling a comprehensive and explicit treatment of uncertainty dynamics along time, without requiring the actual model to be run. In doing so, computational resources are freed and a quick and effective a-priori systematic dynamic evaluation is made of predictability evolution and its challenges, including aspects in the model architecture and intervening variables that may require optimization ahead of initiating any model runs. It further brings out universal dynamic features in the error dynamics elusive to any case specific treatment, ultimately shedding fundamental light on the challenging issue of predictability. The formulated approach, framed with broad mathematical physics generality in mind, is then implemented in dynamic models of nonlinear geophysical systems with various degrees of complexity, in order to evaluate their limitations and provide informed assistance on how to optimize their design and improve their predictability in fundamental dynamical terms.
Computational tools for comparative phenomics; the role and promise of ontologies
Gkoutos, Georgios V.; Schofield, Paul N.; Hoehndorf, Robert
2012-01-01
A major aim of the biological sciences is to gain an understanding of human physiology and disease. One important step towards such a goal is the discovery of the function of genes that will lead to better understanding of the physiology and pathophysiology of organisms ultimately providing better understanding, diagnosis, and therapy. Our increasing ability to phenotypically characterise genetic variants of model organisms coupled with systematic and hypothesis-driven mutagenesis is resulting in a wealth of information that could potentially provide insight to the functions of all genes in an organism. The challenge we are now facing is to develop computational methods that can integrate and analyse such data. The introduction of formal ontologies that make their semantics explicit and accessible to automated reasoning promises the tantalizing possibility of standardizing biomedical knowledge allowing for novel, powerful queries that bridge multiple domains, disciplines, species and levels of granularity. We review recent computational approaches that facilitate the integration of experimental data from model organisms with clinical observations in humans. These methods foster novel cross species analysis approaches, thereby enabling comparative phenomics and leading to the potential of translating basic discoveries from the model systems into diagnostic and therapeutic advances at the clinical level. PMID:22814867
A six step approach for developing computer based assessment in medical education.
Hassanien, Mohammed Ahmed; Al-Hayani, Abdulmoneam; Abu-Kamer, Rasha; Almazrooa, Adnan
2013-01-01
Assessment, which entails the systematic evaluation of student learning, is an integral part of any educational process. Computer-based assessment (CBA) techniques provide a valuable resource to students seeking to evaluate their academic progress through instantaneous, personalized feedback. CBA reduces examination, grading and reviewing workloads and facilitates training. This paper describes a six step approach for developing CBA in higher education and evaluates student perceptions of computer-based summative assessment at the College of Medicine, King Abdulaziz University. A set of questionnaires were distributed to 341 third year medical students (161 female and 180 male) immediately after examinations in order to assess the adequacy of the system for the exam program. The respondents expressed high satisfaction with the first Saudi experience of CBA for final examinations. However, about 50% of them preferred the use of a pilot CBA before its formal application; hence, many did not recommend its use for future examinations. Both male and female respondents reported that the range of advantages offered by CBA outweighed any disadvantages. Further studies are required to monitor the extended employment of CBA technology for larger classes and for a variety of subjects at universities.
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematic model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous papers.
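The optimization engine named above is simulated annealing; a generic annealing loop of that kind is sketched below. The energy function and neighborhood here are toy placeholders (a bit-vector target count standing in for the column-vector construction model), not the paper's actual VC objective or constraints.

```python
import math
import random

def simulated_annealing(initial, energy, neighbor, t0=1.0, cooling=0.999, steps=20000):
    """Generic simulated-annealing loop; 'energy' and 'neighbor' are problem-specific."""
    state, e = initial, energy(initial)
    best, best_e = state, e
    t = t0
    for _ in range(steps):
        cand = neighbor(state)
        ce = energy(cand)
        if ce <= e or random.random() < math.exp(-(ce - e) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= cooling
    return best, best_e

# Toy placeholder: find a 0/1 vector with a target number of ones, standing in
# for the search over candidate encryption column vectors.
def flip_one_bit(v):
    w = list(v)
    i = random.randrange(len(w))
    w[i] ^= 1
    return w

best, best_e = simulated_annealing([0] * 16, lambda v: abs(sum(v) - 6), flip_one_bit)
print(best, best_e)
```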
NASA Astrophysics Data System (ADS)
Odry, Benjamin L.; Kiraly, Atilla P.; Novak, Carol L.; Naidich, David P.; Lerallut, Jean-Francois
2006-03-01
Pulmonary diseases such as bronchiectasis, asthma, and emphysema are characterized by abnormalities in airway dimensions. Multi-slice computed tomography (MSCT) has become one of the primary means to depict these abnormalities, as the availability of high-resolution near-isotropic data makes it possible to evaluate airways at oblique angles to the scanner plane. However, currently, clinical evaluation of airways is typically limited to subjective visual inspection only: systematic evaluation of the airways to take advantage of high-resolution data has not proved practical without automation. We present an automated method to quantitatively evaluate airway lumen diameter, wall thickness and broncho-arterial ratios. In addition, our method provides 3D visualization of these values, graphically illustrating the location and extent of disease. Our algorithm begins by automatic airway segmentation to extract paths to the distal airways, and to create a map of airway diameters. Normally, airway diameters decrease as paths progress distally; failure to taper indicates abnormal dilatation. Our approach monitors airway lumen diameters along each airway path in order to detect abnormal profiles, allowing even subtle degrees of pathologic dilatation to be identified. Our method also systematically computes the broncho-arterial ratio at every terminal branch of the tree model, as a ratio above 1 indicates potentially abnormal bronchial dilatation. Finally, the airway wall thickness is computed at corresponding locations. These measurements are used to highlight abnormal branches for closer inspection, and can be summed to compute a quantitative global score for the entire airway tree, allowing reproducible longitudinal assessment of disease severity. Preliminary tests on patients diagnosed with bronchiectasis demonstrated rapid identification of lack of tapering, which also was confirmed by corresponding demonstration of elevated broncho-arterial ratios.
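A simplified version of the tapering and broncho-arterial checks described above can be sketched as follows: walk a path of lumen diameters from central to distal, flag positions that exceed the running upstream minimum, and compare terminal lumen diameters with adjacent artery diameters. The diameters and tolerance below are hypothetical; the real method operates on segmented 3D airway tree models.

```python
import numpy as np

def flag_dilatation(lumen_mm, artery_mm=None, taper_tol=0.0):
    """Flag positions where the lumen diameter fails to taper along the path
    (exceeds the running minimum so far by more than taper_tol) and, if artery
    diameters are given, where the broncho-arterial ratio exceeds 1.
    Thresholds are illustrative."""
    lumen = np.asarray(lumen_mm, dtype=float)
    running_min = np.minimum.accumulate(lumen)
    no_taper = lumen > running_min + taper_tol        # dilated relative to upstream minimum
    ba_ratio = None
    if artery_mm is not None:
        ba_ratio = lumen / np.asarray(artery_mm, dtype=float)
    return no_taper, ba_ratio

# Hypothetical path: diameters should shrink distally but dilate near the end.
lumen = [8.0, 6.5, 5.2, 4.1, 4.9, 5.3]
artery = [7.5, 6.8, 5.5, 4.5, 4.0, 3.9]
no_taper, ba = flag_dilatation(lumen, artery)
print("non-tapering points:", np.where(no_taper)[0])
print("broncho-arterial ratio > 1 at:", np.where(ba > 1.0)[0])
```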
Systematic size study of an insect antifreeze protein and its interaction with ice.
Liu, Kai; Jia, Zongchao; Chen, Guangju; Tung, Chenho; Liu, Ruozhuang
2005-02-01
Because of their remarkable ability to depress the freezing point of aqueous solutions, antifreeze proteins (AFPs) play a critical role in helping many organisms survive subzero temperatures. The beta-helical insect AFP structures solved to date, consisting of multiple repeating circular loops or coils, are perhaps the most regular protein structures discovered thus far. Taking an exceptional advantage of the unusually high structural regularity of insect AFPs, we have employed both semiempirical and quantum mechanics computational approaches to systematically investigate the relationship between the number of AFP coils and the AFP-ice interaction energy, an indicator of antifreeze activity. We generated a series of AFP models with varying numbers of 12-residue coils (sequence TCTxSxxCxxAx) and calculated their interaction energies with ice. Using several independent computational methods, we found that the AFP-ice interaction energy increased as the number of coils increased, until an upper bound was reached. The increase of interaction energy was significant for each of the first five coils, and there was a clear synergism that gradually diminished and even decreased with further increase of the number of coils. Our results are in excellent agreement with the recently reported experimental observations.
Tomesko, Jennifer; Touger-Decker, Riva; Dreker, Margaret; Zelig, Rena; Parrott, James Scott
2017-01-01
Purpose: To explore knowledge and skill acquisition outcomes related to learning physical examination (PE) through computer-assisted instruction (CAI) compared with a face-to-face (F2F) approach. Method: A systematic literature review and meta-analysis published between January 2001 and December 2016 was conducted. Databases searched included Medline, Cochrane, CINAHL, ERIC, Ebsco, Scopus, and Web of Science. Studies were synthesized by study design, intervention, and outcomes. Statistical analyses included DerSimonian-Laird random-effects model. Results: In total, 7 studies were included in the review, and 5 in the meta-analysis. There were no statistically significant differences for knowledge (mean difference [MD] = 5.39, 95% confidence interval [CI]: −2.05 to 12.84) or skill acquisition (MD = 0.35, 95% CI: −5.30 to 6.01). Conclusions: The evidence does not suggest a strong consistent preference for either CAI or F2F instruction to teach students/trainees PE. Further research is needed to identify conditions which examine knowledge and skill acquisition outcomes that favor one mode of instruction over the other. PMID:29349338
Fisher information and steric effect: study of the internal rotation barrier of ethane.
Esquivel, Rodolfo O; Liu, Shubin; Angulo, Juan Carlos; Dehesa, Jesús S; Antolín, Juan; Molina-Espíritu, Moyocoyani
2011-05-05
On the basis of a density-based quantification of the steric effect [Liu, S. B. J. Chem. Phys.2007, 126, 244103], the origin of the internal rotation barrier between the eclipsed and staggered conformers of ethane is systematically investigated in this work from an information-theoretical point of view by using the Fisher information measure in conjugated spaces. Two kinds of computational approaches are considered in this work: adiabatic (with optimal structure) and vertical (with fixed geometry). The analyses are performed systematically by following, in each case, the conformeric path by changing the dihedral angle from 0 to 180° . This is calculated at the HF, MP2, B3LYP, and CCSD(T) levels of theory and with several basis sets. Selected descriptors of the densities are utilized to support the observations. Our results show that in the adiabatic case the eclipsed conformer possesses a larger steric repulsion than the staggered conformer, but in the vertical cases the staggered conformer retains a larger steric repulsion. Our results verify the plausibility for defining and computing the steric effect in the post-Hartree-Fock level of theory according to the scheme proposed by Liu.
Effectiveness of technologies in the treatment of post-stroke anomia: A systematic review.
Lavoie, Monica; Macoir, Joël; Bier, Nathalie
Technologies are becoming increasingly popular in the treatment of language disorders and offer numerous possibilities, but little is known about their effectiveness and limitations. The aim of this systematic review was to investigate the effectiveness of treatments delivered by technology in the management of post-stroke anomia. As a guideline for conducting this review, we used the PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions. We conducted a systematic search of publications in PubMed, PsycInfo and Current Contents. We also consulted Google Scholar. Without any limitations as to publication date, we selected studies designed to assess the effectiveness of an intervention delivered by a technology, namely computer or smart tablet, to specifically improve anomia in post-stroke participants. The main outcomes studied were improvement in naming skills and generalisation to untreated items and daily communication. We examined 23 studies in this review. To date, computers constitute the most popular technology by far; only a few studies explored the effectiveness of smart tablets. In some studies, technology was used as a therapy tool in a clinical setting, in the presence of the clinician, while in others, therapy with technology was self-administered at home, without the clinician. All studies confirmed the effectiveness of therapy provided by technology to improve naming of trained items. However, generalisation to untrained items is unclear and assessment of generalisation to daily communication is rare. The results of this systematic review confirm that technology is an efficient approach in the management of post-stroke anomia. In future studies, ecological tasks aimed at evaluating therapy's effectiveness with word retrieval in real-life situations should be added since the ultimate goal of improving anomia is to increase the ability to retrieve words more easily in everyday life. Copyright © 2017 Elsevier Inc. All rights reserved.
Time and Learning Efficiency in Internet-Based Learning: A Systematic Review and Meta-Analysis
ERIC Educational Resources Information Center
Cook, David A.; Levinson, Anthony J.; Garside, Sarah
2010-01-01
Authors have claimed that Internet-based instruction promotes greater learning efficiency than non-computer methods. Objectives Determine, through a systematic synthesis of evidence in health professions education, how Internet-based instruction compares with non-computer instruction in time spent learning, and what features of Internet-based…
Feedbacks of sedimentation on crustal heat flow - New insights from the Vøring Basin, Norwegian Sea
NASA Astrophysics Data System (ADS)
Theissen, S.; Ruepke, L. H.
2009-04-01
Information on the nature and origin of rift basins is preserved in the presently observed stratigraphy. Basin modeling aims at recovering this information with the goal of quantifying a basin's structural and thermal evolution. Decompaction and backstripping analysis is a classic and still popular approach to basin reconstruction [Steckler and Watts, 1978]. The total and tectonic subsidences, as well as sedimentation rates are calculated by the consecutive decompaction and removal of individual layers. The thermal history has to be computed separately using forward thermal models. An alternative is coupled forward modeling, where the structural and thermal history is computed simultaneously. A key difference between these reconstruction methods is that feedbacks of sedimentation on crustal heat flow are often neglected in backstripping methods. In this work we use the coupled basin modeling approach presented by Rüpke et al. [2008] to quantify some of the feedbacks between sedimentation and heat flow and to explore the differences between both reconstruction approaches in a case study from the Vøring Basin, Norwegian Sea. In a series of synthetic model runs we have reviewed the effects of sedimentation on basement heat flow. These example calculations clearly confirm the well-known blanketing effect of sedimentation and show that it is largest for high sedimentation rates. Recovery of sedimentation rates from the stratigraphy is, however, not straightforward. Decompaction-based methods may systematically underestimate sedimentation rates as sediment thickness is assumed to not change/thin during stretching. We present a new method for computing sedimentation rates based on forward modeling and demonstrate the differences between both methods in terms of rates and thermal feedbacks in a reconstruction of the Vøring basin (Euromargin transect 2). We find that sedimentation rates are systematically higher in forward models and heat flow is clearly depressed during times of high sedimentation. In addition, computed subsidence curves can differ significantly between backtripping and forward modeling methods. This shows that integrated basin modeling is important for improved reconstructions of sedimentary basins and passive margins. Rupke, L. H., et al. (2008), Automated thermotectonostratigraphic basin reconstruction: Viking Graben case study, AAPG Bulletin, 92(3), 309-326. Steckler, M. S., and A. B. Watts (1978), SUBSIDENCE OF ATLANTIC-TYPE CONTINENTAL-MARGIN OFF NEW-YORK, Earth and Planetary Science Letters, 41(1), 1-13.
Health literacy screening instruments for eHealth applications: a systematic review.
Collins, Sarah A; Currie, Leanne M; Bakken, Suzanne; Vawdrey, David K; Stone, Patricia W
2012-06-01
To systematically review current health literacy (HL) instruments for use in consumer-facing and mobile health information technology screening and evaluation tools. The databases, PubMed, OVID, Google Scholar, Cochrane Library and Science Citation Index, were searched for health literacy assessment instruments using the terms "health", "literacy", "computer-based," and "psychometrics". All instruments identified by this method were critically appraised according to their reported psychometric properties and clinical feasibility. Eleven different health literacy instruments were found. Screening questions, such as asking a patient about his/her need for assistance in navigating health information, were evaluated in seven different studies and are promising for use as a valid, reliable, and feasible computer-based approach to identify patients that struggle with low health literacy. However, there was a lack of consistency in the types of screening questions proposed. There is also a lack of information regarding the psychometric properties of computer-based health literacy instruments. Only English language health literacy assessment instruments were reviewed and analyzed. Current health literacy screening tools demonstrate varying benefits depending on the context of their use. In many cases, it seems that a single screening question may be a reliable, valid, and feasible means for establishing health literacy. A combination of screening questions that assess health literacy and technological literacy may enable tailoring eHealth applications to user needs. Further research should determine the best screening question(s) and the best synthesis of various instruments' content and methodologies for computer-based health literacy screening and assessment. Copyright © 2012 Elsevier Inc. All rights reserved.
An Approach to Remove the Systematic Bias from the Storm Surge forecasts in the Venice Lagoon
NASA Astrophysics Data System (ADS)
Canestrelli, A.
2017-12-01
In this work a novel approach is proposed for removing the systematic bias from the storm surge forecast computed by a two-dimensional shallow-water model. The model covers both the Adriatic and Mediterranean seas and provides the forecast at the entrance of the Venice Lagoon. The wind drag coefficient at the water-air interface is treated as a calibration parameter, with a different value for each range of wind velocities and wind directions. This sums up to a total of 16-64 parameters to be calibrated, depending on the chosen resolution. The best set of parameters is determined by means of an optimization procedure, which minimizes the RMS error between measured and modeled water level in Venice for the period 2011-2015. It is shown that a bias is present, in that the peaks of wind velocity provided by the weather forecast are largely underestimated, and that the calibration procedure removes this bias. When the calibrated model is used to reproduce events not included in the calibration dataset, the forecast error is strongly reduced, thus confirming the quality of our procedure. The proposed approach is not site-specific and could be applied to different situations, such as storm surges caused by intense hurricanes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruneval, Fabien; Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720; Department of Physics, University of California, Berkeley, California 94720
2015-06-28
The predictive power of the ab initio Bethe-Salpeter equation (BSE) approach, rigorously based on many-body Green’s function theory but incorporating information from density functional theory, has already been demonstrated for the optical gaps and spectra of solid-state systems. Interest in photoactive hybrid organic/inorganic systems has recently increased and so has the use of the BSE for computing neutral excitations of organic molecules. However, no systematic benchmarks of the BSE for neutral electronic excitations of organic molecules exist. Here, we study the performance of the BSE for the 28 small molecules in Thiel’s widely used time-dependent density functional theory benchmark set [Schreiber et al., J. Chem. Phys. 128, 134110 (2008)]. We observe that the BSE produces results that depend critically on the mean-field starting point employed in the perturbative approach. We find that this starting point dependence is mainly introduced through the quasiparticle energies obtained at the intermediate GW step and that with a judicious choice of starting mean-field, singlet excitation energies obtained from BSE are in excellent quantitative agreement with higher-level wavefunction methods. The quality of the triplet excitations is slightly less satisfactory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-02
We present the Clenshaw–Curtis Spectral Quadrature (SQ) method for real-space O(N) Density Functional Theory (DFT) calculations. In this approach, all quantities of interest are expressed as bilinear forms or sums over bilinear forms, which are then approximated by spatially localized Clenshaw–Curtis quadrature rules. This technique is identically applicable to both insulating and metallic systems, and in conjunction with local reformulation of the electrostatics, enables the O(N) evaluation of the electronic density, energy, and atomic forces. The SQ approach also permits infinite-cell calculations without recourse to Brillouin zone integration or large supercells. We employ a finite difference representation in order to exploit the locality of electronic interactions in real space, enable systematic convergence, and facilitate large-scale parallel implementation. In particular, we derive expressions for the electronic density, total energy, and atomic forces that can be evaluated in O(N) operations. We demonstrate the systematic convergence of energies and forces with respect to quadrature order as well as truncation radius to the exact diagonalization result. In addition, we show convergence with respect to mesh size to established O(N^3) planewave results. In conclusion, we establish the efficiency of the proposed approach for high temperature calculations and discuss its particular suitability for large-scale parallel computation.
Variable neighborhood search for reverse engineering of gene regulatory networks.
Nicholson, Charles; Goodwin, Leslie; Clark, Corey
2017-01-01
A new search heuristic, Divided Neighborhood Exploration Search, designed to be used with inference algorithms such as Bayesian networks to improve on the reverse engineering of gene regulatory networks is presented. The approach systematically moves through the search space to find topologies representative of gene regulatory networks that are more likely to explain microarray data. In empirical testing it is demonstrated that the novel method is superior to the widely employed greedy search techniques in both the quality of the inferred networks and computational time. Copyright © 2016 Elsevier Inc. All rights reserved.
Although the MYC oncogene has been implicated in cancer, a systematic assessment of alterations of MYC, related transcription factors, and co-regulatory proteins, forming the proximal MYC network (PMN), across human cancers is lacking. Using computational approaches, we define genomic and proteomic features associated with MYC and the PMN across the 33 cancers of The Cancer Genome Atlas. Pan-cancer, 28% of all samples had at least one of the MYC paralogs amplified.
Skinner, Sarah
2015-08-01
Thoracic imaging is commonly ordered in general practice. Guidelines exist for ordering thoracic imaging but few are specific for general practice. This article summarises current indications for imaging the thorax with chest X-ray and computed tomography. A simple framework for interpretation of the chest X-ray, suitable for trainees and practitioners providing primary care imaging in rural and remote locations, is presented. Interpretation of thoracic imaging is best done using a systematic approach. Radiological investigation is not warranted in uncomplicated upper respiratory tract infections or asthma, minor trauma or acute-on-chronic chest pain.
NASA Astrophysics Data System (ADS)
Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg
2014-06-01
A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.
Making the Most of What We Already Know: A Three-Stage Approach to Systematic Reviewing.
Rebelo Da Silva, Natalie; Zaranyika, Hazel; Langer, Laurenz; Randall, Nicola; Muchiri, Evans; Stewart, Ruth
2016-09-06
Conducting a systematic review in social policy is a resource-intensive process in terms of time and funds. It is thus important to understand the scope of the evidence base of a topic area prior to conducting a synthesis of primary research in order to maximize these resources. One approach to conserving resources is to map out the available evidence prior to undertaking a traditional synthesis. A few examples of this approach exist in the form of gap maps, overviews of reviews, and systematic maps supported by social policy and systematic review agencies alike. Despite this growing call for alternative approaches to systematic reviews, it is still common for systematic review teams to embark on a traditional in-depth review only. This article describes a three-stage approach to systematic reviewing that was applied to a systematic review focusing on interventions for smallholder farmers in Africa. We argue that this approach proved useful in helping us to understand the evidence base. By applying preliminary steps as part of a three-stage approach, we were able to make the best use of the resources needed to conduct a traditional systematic review on a more focused research question. This enabled us to identify and fill real knowledge gaps, build on work that had already been done, and avoid wasting resources on areas of work that would have no useful outcome. It also facilitated meaningful engagement between the review team and our key policy stakeholders. © The Author(s) 2016.
Agent-Based Modeling in Public Health: Current Applications and Future Directions.
Tracy, Melissa; Cerdá, Magdalena; Keyes, Katherine M
2018-04-01
Agent-based modeling is a computational approach in which agents with a specified set of characteristics interact with each other and with their environment according to predefined rules. We review key areas in public health where agent-based modeling has been adopted, including both communicable and noncommunicable disease, health behaviors, and social epidemiology. We also describe the main strengths and limitations of this approach for questions with public health relevance. Finally, we describe both methodologic and substantive future directions that we believe will enhance the value of agent-based modeling for public health. In particular, advances in model validation, comparisons with other causal modeling procedures, and the expansion of the models to consider comorbidity and joint influences more systematically will improve the utility of this approach to inform public health research, practice, and policy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Mickan, Sharon; Tilson, Julie K; Atherton, Helen; Roberts, Nia Wyn; Heneghan, Carl
2013-10-28
Handheld computers and mobile devices provide instant access to vast amounts and types of useful information for health care professionals. Their reduced size and increased processing speed have led to rapid adoption in health care. Thus, it is important to identify whether handheld computers are actually effective in clinical practice. A scoping review of systematic reviews was designed to provide a quick overview of the documented evidence of effectiveness for health care professionals using handheld computers in their clinical work. A detailed search, sensitive for systematic reviews, was applied to the Cochrane, Medline, EMBASE, PsycINFO, Allied and Complementary Medicine Database (AMED), Global Health, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. All outcomes that demonstrated effectiveness in clinical practice were included. Classroom learning and patient use of handheld computers were excluded. Quality was assessed using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. A previously published conceptual framework was used as the basis for dual data extraction. Reported outcomes were summarized according to the primary function of the handheld computer. Five systematic reviews met the inclusion and quality criteria. Together, they reviewed 138 unique primary studies. Most reviewed descriptive intervention studies, in which physicians, pharmacists, or medical students used personal digital assistants. Effectiveness was demonstrated across four distinct functions of handheld computers: patient documentation, patient care, information seeking, and professional work patterns. Within each of these functions, a range of positive outcomes was reported using both objective and self-report measures. The use of handheld computers improved patient documentation through more complete recording, fewer documentation errors, and increased efficiency. Handheld computers provided easy access to clinical decision support systems and patient management systems, which improved decision making for patient care. Handheld computers saved time and gave earlier access to new information. There were also reports that handheld computers enhanced work patterns and efficiency. This scoping review summarizes the secondary evidence for the effectiveness of handheld computers and mHealth. It provides a snapshot of effective use by health care professionals across four key functions. We identified evidence to suggest that handheld computers provide easy and timely access to information and enable accurate and complete documentation. Further, they can give health care professionals instant access to evidence-based decision support and patient management systems to improve clinical decision making. Finally, there is evidence that handheld computers allow health professionals to be more efficient in their work practices. It is anticipated that this evidence will guide clinicians and managers in implementing handheld computers in clinical practice and in designing future research.
NASA Astrophysics Data System (ADS)
Sanyal, Tanmoy; Shell, M. Scott
2016-07-01
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
Systematic Proteomic Approach to Characterize the Impacts of ...
Chemical interactions have posed a big challenge in toxicity characterization and human health risk assessment of environmental mixtures. To characterize the impacts of chemical interactions on protein and cytotoxicity responses to environmental mixtures, we established a systems biology approach integrating proteomics, bioinformatics, statistics, and computational toxicology to measure expression or phosphorylation levels of 21 critical toxicity pathway regulators and 445 downstream proteins in human BEAS-2B cells treated with 4 concentrations of nickel, 2 concentrations each of cadmium and chromium, as well as 12 defined binary and 8 defined ternary mixtures of these metals in vitro. Multivariate statistical analysis and mathematical modeling of the metal-mediated proteomic response patterns showed a high correlation between changes in protein expression or phosphorylation and cellular toxic responses to both individual metals and metal mixtures. Of the identified correlated proteins, only a small set of proteins including HIF-1a is likely to be responsible for selective cytotoxic responses to different metals and metal mixtures. Furthermore, support vector machine learning was utilized to computationally predict protein responses to uncharacterized metal mixtures using experimentally generated protein response profiles corresponding to known metal mixtures. This study provides a novel proteomic approach for characterization and prediction of toxicities of
Frozen-Orbital and Downfolding Calculations with Auxiliary-Field Quantum Monte Carlo.
Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry
2013-11-12
We describe the implementation of the frozen-orbital and downfolding approximations in the auxiliary-field quantum Monte Carlo (AFQMC) method. These approaches can provide significant computational savings, compared to fully correlating all of the electrons. While the many-body wave function is never explicit in AFQMC, its random walkers are Slater determinants, whose orbitals may be expressed in terms of any one-particle orbital basis. It is therefore straightforward to partition the full N-particle Hilbert space into active and inactive parts to implement the frozen-orbital method. In the frozen-core approximation, for example, the core electrons can be eliminated in the correlated part of the calculations, greatly increasing the computational efficiency, especially for heavy atoms. Scalar relativistic effects are easily included using the Douglas-Kroll-Hess theory. Using this method, we obtain a way to effectively eliminate the error due to single-projector, norm-conserving pseudopotentials in AFQMC. We also illustrate a generalization of the frozen-orbital approach that downfolds high-energy basis states to a physically relevant low-energy sector, which allows a systematic approach to produce realistic model Hamiltonians to further increase efficiency for extended systems.
Multifidelity Analysis and Optimization for Supersonic Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory
2010-01-01
Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including aerodynamic tools for supersonic aircraft configurations; a systematic way to manage model uncertainty; and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework and include four analysis routines to estimate the lift and drag of a supersonic airfoil, as well as a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.
Metabolomics and Diabetes: Analytical and Computational Approaches
Sas, Kelli M.; Karnovsky, Alla; Michailidis, George
2015-01-01
Diabetes is characterized by altered metabolism of key molecules and regulatory pathways. The phenotypic expression of diabetes and associated complications encompasses complex interactions between genetic, environmental, and tissue-specific factors that require an integrated understanding of perturbations in the network of genes, proteins, and metabolites. Metabolomics attempts to systematically identify and quantitate small molecule metabolites from biological systems. The recent rapid development of a variety of analytical platforms based on mass spectrometry and nuclear magnetic resonance has enabled identification of complex metabolic phenotypes. Continued development of bioinformatics and analytical strategies has facilitated the discovery of causal links in understanding the pathophysiology of diabetes and its complications. Here, we summarize the metabolomics workflow, including analytical, statistical, and computational tools, highlight recent applications of metabolomics in diabetes research, and discuss the challenges in the field. PMID:25713200
Lindén, Rolf O; Eronen, Ville-Pekka; Aittokallio, Tero
2011-03-24
High-throughput genetic screening approaches have enabled systematic means to study how interactions among gene mutations contribute to quantitative fitness phenotypes, with the aim of providing insights into the functional wiring diagrams of genetic interaction networks on a global scale. However, it is poorly known how well these quantitative interaction measurements agree across the screening approaches, which hinders their integrated use toward improving the coverage and quality of the genetic interaction maps in yeast and other organisms. Using large-scale data matrices from epistatic miniarray profiling (E-MAP), genetic interaction mapping (GIM), and synthetic genetic array (SGA) approaches, we carried out here a systematic comparative evaluation among these quantitative maps of genetic interactions in yeast. The relatively low association between the original interaction measurements or their customized scores could be improved using a matrix-based modelling framework, which enables the use of single- and double-mutant fitness estimates and measurements, respectively, when scoring genetic interactions. Toward an integrative analysis, we show how the detections from the different screening approaches can be combined to suggest novel positive and negative interactions which are complementary to those obtained using any single screening approach alone. The matrix approximation procedure has been made available to support the design and analysis of the future screening studies. We have shown here that even if the correlation between the currently available quantitative genetic interaction maps in yeast is relatively low, their comparability can be improved by means of our computational matrix approximation procedure, which will enable integrative analysis and detection of a wider spectrum of genetic interactions using data from the complementary screening approaches.
Personal Health Records: A Systematic Literature Review
2017-01-01
Background Information and communication technology (ICT) has transformed the health care field worldwide. One of the main drivers of this change is the electronic health record (EHR). However, there are still open issues and challenges because the EHR usually reflects the partial view of a health care provider without the ability for patients to control or interact with their data. Furthermore, with the growth of mobile and ubiquitous computing, the number of records regarding personal health is increasing exponentially. This movement has been characterized as the Internet of Things (IoT), including the widespread development of wearable computing technology and assorted types of health-related sensors. This leads to the need for an integrated method of storing health-related data, defined as the personal health record (PHR), which could be used by health care providers and patients. This approach could combine EHRs with data gathered from sensors or other wearable computing devices. This unified view of patients’ health could be shared with providers, who may not only use previous health-related records but also expand them with data resulting from their interactions. Another PHR advantage is that patients can interact with their health data, making decisions that may positively affect their health. Objective This work aimed to explore the recent literature related to PHRs by defining the taxonomy and identifying challenges and open questions. In addition, this study specifically sought to identify data types, standards, profiles, goals, methods, functions, and architecture with regard to PHRs. Methods The method to achieve these objectives consists of using the systematic literature review approach, which is guided by research questions using the population, intervention, comparison, outcome, and context (PICOC) criteria. Results As a result, we reviewed more than 5000 scientific studies published in the last 10 years, selected the most significant approaches, and thoroughly surveyed the health care field related to PHRs. We developed an updated taxonomy and identified challenges, open questions, and current data types, related standards, main profiles, input strategies, goals, functions, and architectures of the PHR. Conclusions All of these results contribute to the achievement of a significant degree of coverage regarding the technology related to PHRs. PMID:28062391
Khalid-de Bakker, C; Jonkers, D; Smits, K; Mesters, I; Masclee, A; Stockbrügger, R
2011-12-01
Colorectal cancer (CRC) screening is implemented by an increasing number of countries. Participation rates of screening programs influence the health benefit and cost-effectiveness of the applied method. The aim was to systematically review participation rate after first-time invitation for CRC screening with fecal occult blood test (FOBT), sigmoidoscopy, colonoscopy, and/or computed tomography (CT) colonography. A systematic literature search was performed prior to October 1 2009. Prospective CRC screening studies of unselected populations reporting participation rates were included. After meta-analyses, overall participation rates were found to be 47 % for FOBT, 42 % for fecal immunologic tests (FITs), 35 % for sigmoidoscopy, 41 % for sigmoidoscopy combined with FIT/FOBT, 28 % for colonoscopy, and 22 % for CT colonography. Studies comparing screening methods showed higher participation rates for less invasive methods. Studies comparing invitation methods showed higher participation rates with general practitioner involvement, a more personalized recruitment approach, and reduction of barriers that discourage participation. Knowledge of identified factors affecting CRC screening participation can be used to improve screening programs. © Georg Thieme Verlag KG Stuttgart · New York.
A depth-first search algorithm to compute elementary flux modes by linear programming
2014-01-01
Background The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Results Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. Conclusions The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints. PMID:25074068
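As a rough illustration of the idea of combining depth-first search with LP feasibility pruning (a toy sketch only, not the authors' algorithm: it assumes all reactions are irreversible, uses scipy.optimize.linprog for the feasibility test, and the helper names and the example network are ours):

import numpy as np
from scipy.optimize import linprog

def feasible(S, excluded):
    # LP feasibility test: does a non-zero steady-state flux v >= 0 exist
    # with S @ v = 0 and v[i] = 0 for every excluded reaction i?
    n = S.shape[1]
    bounds = [(0.0, 0.0) if i in excluded else (0.0, None) for i in range(n)]
    force_nonzero = [[0.0 if i in excluded else -1.0 for i in range(n)]]  # sum of free fluxes >= 1
    res = linprog(c=np.zeros(n), A_ub=force_nonzero, b_ub=[-1.0],
                  A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
    return res.status == 0

def dfs_efms(S, k=0, excluded=frozenset(), found=None):
    # Depth-first enumeration of support-minimal flux modes: branch on forcing
    # each reaction to zero (explored first) or leaving it free, and prune any
    # branch whose remaining reactions admit no non-zero steady-state flux.
    if found is None:
        found = set()
    if not feasible(S, excluded):
        return found
    if k == S.shape[1]:
        support = frozenset(set(range(S.shape[1])) - excluded)
        if not any(f < support for f in found):   # keep only minimal supports
            found.add(support)
        return found
    dfs_efms(S, k + 1, excluded | {k}, found)
    dfs_efms(S, k + 1, excluded, found)
    return found

# Toy network (our example): R0 imports A, R1: A -> B, R2 exports B, R3 exports A.
S = np.array([[1.0, -1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0, 0.0]])
print(dfs_efms(S))   # expected supports: {R0, R3} and {R0, R1, R2}

Seeding the excluded set, or adding further flux bounds, restricts the enumeration to a constrained subset of modes, which is the property the paper exploits to split the search into independent, parallelizable sub-jobs.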
Goudey, Benjamin; Abedini, Mani; Hopper, John L; Inouye, Michael; Makalic, Enes; Schmidt, Daniel F; Wagner, John; Zhou, Zeyu; Zobel, Justin; Reumann, Matthias
2015-01-01
Genome-wide association studies (GWAS) are a common approach for systematic discovery of single nucleotide polymorphisms (SNPs) which are associated with a given disease. Univariate analysis approaches commonly employed may miss important SNP associations that only appear through multivariate analysis in complex diseases. However, multivariate SNP analysis is currently limited by its inherent computational complexity. In this work, we present a computational framework that harnesses supercomputers. Based on our results, we estimate a three-way interaction analysis on 1.1 million SNP GWAS data requiring over 5.8 years on the full "Avoca" IBM Blue Gene/Q installation at the Victorian Life Sciences Computation Initiative. This is hundreds of times faster than estimates for other CPU based methods and four times faster than runtimes estimated for GPU methods, indicating how the improvement in the level of hardware applied to interaction analysis may alter the types of analysis that can be performed. Furthermore, the same analysis would take under 3 months on the currently largest IBM Blue Gene/Q supercomputer "Sequoia" at the Lawrence Livermore National Laboratory assuming linear scaling is maintained as our results suggest. Given that the implementation used in this study can be further optimised, this runtime means it is becoming feasible to carry out exhaustive analysis of higher order interaction studies on large modern GWAS.
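To put the combinatorial burden in perspective (our back-of-the-envelope arithmetic, not a figure from the paper): an exhaustive three-way scan over m = 1.1 \times 10^6 SNPs must evaluate

\binom{m}{3} = \frac{m(m-1)(m-2)}{6} \approx 2.2 \times 10^{17}

SNP triples, which is why runtimes are quoted in machine-years even on leadership-class supercomputers and why near-linear scaling across nodes matters so much.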
Accelerating Sequential Gaussian Simulation with a constant path
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus
2018-03-01
Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
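A minimal sketch of the constant-path idea in one dimension (simple kriging with zero mean and an exponential covariance; the grid size, neighbourhood size and covariance parameters are arbitrary choices of ours, not values from the study):

import numpy as np

def cov(h, sill=1.0, rng_len=10.0):
    # exponential covariance model (an illustrative choice)
    return sill * np.exp(-3.0 * np.abs(h) / rng_len)

def sgs_constant_path(n_nodes=100, n_real=20, n_neigh=8, seed=0):
    # 1-D sequential Gaussian simulation reusing one random path -- and hence
    # one set of kriging weights -- for all realizations.
    rng = np.random.default_rng(seed)
    path = rng.permutation(n_nodes)            # constant path, drawn once
    weights, neighbours, sigmas = {}, {}, {}
    realizations = np.zeros((n_real, n_nodes))
    for r in range(n_real):
        sim = np.full(n_nodes, np.nan)
        visited = []
        for node in path:
            if visited:
                # nearest already-simulated nodes form the kriging neighbourhood
                nb = sorted(visited, key=lambda j: abs(j - node))[:n_neigh]
                if node not in weights:        # solve the kriging system only once
                    C = cov(np.subtract.outer(nb, nb))
                    c0 = cov(np.array(nb) - node)
                    w = np.linalg.solve(C, c0)
                    weights[node], neighbours[node] = w, nb
                    sigmas[node] = np.sqrt(max(cov(0) - w @ c0, 0.0))
                w, nb = weights[node], neighbours[node]
                mean, sd = w @ sim[nb], sigmas[node]
            else:
                mean, sd = 0.0, np.sqrt(cov(0))
            sim[node] = mean + sd * rng.standard_normal()
            visited.append(node)
        realizations[r] = sim
    return realizations

Because the path, and therefore every neighbourhood configuration, is identical across realizations, the kriging systems are solved only during the first realization and the stored weights are reused afterwards, which is where the reported computational gain comes from.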
The Potentials of Using Cloud Computing in Schools: A Systematic Literature Review
ERIC Educational Resources Information Center
Hartmann, Simon Birk; Braae, Lotte Qulleq Nygaard; Pedersen, Sine; Khalid, Md. Saifuddin
2017-01-01
Cloud Computing (CC) refers to the physical structure of a communications network, where data is stored in large data centers and can be accessed anywhere, at any time, and from different devices. This systematic literature review identifies and categorizes the potential and barriers of cloud-based teaching in schools from an international…
ERIC Educational Resources Information Center
Rosenberg, Harold; Grad, Helen A.; Matear, David W.
2003-01-01
Performed a systematic review of the published literature comparing computer-aided learning (CAL) with other teaching methods in dental education. Concluded that CAL is as effective as other methods of teaching and can be used as an adjunct to traditional education or as a means of self-instruction. (EV)
Analyzing the Use of Concept Maps in Computer Science: A Systematic Mapping Study
ERIC Educational Resources Information Center
dos Santos, Vinicius; de Souza, Érica F.; Felizardo, Katia R; Vijaykumar, Nandamudi L.
2017-01-01
Context: Concept Maps (CMs) enable the creation of a schematic representation of domain knowledge. For this reason, CMs have been applied in different research areas, including Computer Science. Objective: the objective of this paper is to present the results of a systematic mapping study conducted to collect and evaluate existing research on…
ERIC Educational Resources Information Center
Sandlund, Marlene; McDonough, Suzanne; Hager-Ross, Charlotte
2009-01-01
The aim of this review was to examine systematically the evidence for the application of interactive computer play in the rehabilitation of children with sensorimotor disorders. A literature search of 11 electronic databases was conducted to identify articles published between January 1995 and May 2008. The review was restricted to reports of…
System Dynamics Modeling for Public Health: Background and Opportunities
Homer, Jack B.; Hirsch, Gary B.
2006-01-01
The systems modeling methodology of system dynamics is well suited to address the dynamic complexity that characterizes many public health issues. The system dynamics approach involves the development of computer simulation models that portray processes of accumulation and feedback and that may be tested systematically to find effective policies for overcoming policy resistance. System dynamics modeling of chronic disease prevention should seek to incorporate all the basic elements of a modern ecological approach, including disease outcomes, health and risk behaviors, environmental factors, and health-related resources and delivery systems. System dynamics shows promise as a means of modeling multiple interacting diseases and risks, the interaction of delivery systems and diseased populations, and matters of national and state policy. PMID:16449591
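As a minimal illustration of the stock-and-flow style of model the authors describe (a toy prevalence model with made-up parameters, integrated by simple Euler stepping; it is not taken from the paper):

def simulate(years=20.0, dt=0.01):
    healthy, diseased = 9000.0, 1000.0      # stocks
    contact_rate, recovery_rate = 0.3, 0.1  # assumed constants
    history = []
    for step in range(int(years / dt)):
        total = healthy + diseased
        onset = contact_rate * healthy * diseased / total   # flow: new cases (feedback loop)
        recovery = recovery_rate * diseased                  # flow: return to healthy
        healthy += (recovery - onset) * dt                   # accumulate flows into stocks
        diseased += (onset - recovery) * dt
        history.append((step * dt, healthy, diseased))
    return history

trajectory = simulate()   # e.g. inspect prevalence over time via trajectory[-1]

Policy tests in system dynamics amount to changing parameters or adding feedback structure (for example, a prevention program that lowers contact_rate) and rerunning the simulation to see whether the intended effect survives the feedbacks.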
A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation.
Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro
2012-06-01
To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers using third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm(2) to 40 × 40 cm(2). The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within 2% for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide for a clinically feasible approach to characterizing a kV energy spectrum to be used for patient specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm. © 2012 American Association of Physicists in Medicine.
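For reference, the standard definition underlying this characterization (written in our notation, not quoted from the paper) is that the modeled spectrum \Phi(E), truncated at the measured kVp, must reproduce the measured HVL through the air-kerma-weighted transmission condition

\frac{\int \Phi(E)\,E\,(\mu_{en}/\rho)_{\mathrm{air}}(E)\,e^{-\mu_{\mathrm{Al}}(E)\,t_{\mathrm{HVL}}}\,\mathrm{d}E}{\int \Phi(E)\,E\,(\mu_{en}/\rho)_{\mathrm{air}}(E)\,\mathrm{d}E} = \frac{1}{2},

so that candidate spectra generated by a tool such as Spektr can be selected by matching just these two measured quantities.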
Cummins, Carla A; McInerney, James O
2011-12-01
Current phylogenetic methods attempt to account for evolutionary rate variation across characters in a matrix. This is generally achieved by the use of sophisticated evolutionary models, combined with dense sampling of large numbers of characters. However, systematic biases and superimposed substitutions make this task very difficult. Model adequacy can sometimes be achieved at the cost of adding large numbers of free parameters, with each parameter being optimized according to some criterion, resulting in increased computation times and large variances in the model estimates. In this study, we develop a simple approach that estimates the relative evolutionary rate of each homologous character. The method that we describe uses the similarity between characters as a proxy for evolutionary rate. In this article, we work on the premise that if the character-state distribution of a homologous character is similar to many other characters, then this character is likely to be relatively slowly evolving. If the character-state distribution of a homologous character is not similar to many or any of the rest of the characters in a data set, then it is likely to be the result of rapid evolution. We show that in some test cases, at least, the premise can hold and the inferences are robust. Importantly, the method does not use a "starting tree" to make the inference and therefore is tree independent. We demonstrate that this approach can work as well as a maximum likelihood (ML) approach, though the ML method needs to have a known phylogeny, or at least a very good estimate of that phylogeny. We then demonstrate some uses for this method of analysis, including the improvement in phylogeny reconstruction for both deep-level and recent relationships and overcoming systematic biases such as base composition bias. Furthermore, we compare this approach to two well-established methods for reweighting or removing characters. These other methods are tree-based and we show that they can be systematically biased. We feel this method can be useful for phylogeny reconstruction, understanding evolutionary rate variation, and for understanding selection variation on different characters.
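A crude operationalization of the similarity-as-rate-proxy idea (our own toy scoring, not the authors' exact method; it treats a character as slow-evolving when many other characters group the taxa in the same way):

import numpy as np
from itertools import combinations

def similarity(col_a, col_b):
    # fraction of taxon pairs that the two characters group in the same way
    # (either both together or both apart)
    n = len(col_a)
    agree = sum((col_a[i] == col_a[j]) == (col_b[i] == col_b[j])
                for i, j in combinations(range(n), 2))
    return agree / (n * (n - 1) / 2)

def relative_rates(alignment):
    # tree-independent rate proxy: higher value = faster (less similar to the rest)
    columns = list(zip(*alignment))            # alignment: list of equal-length strings
    sims = np.array([[similarity(a, b) for b in columns] for a in columns])
    np.fill_diagonal(sims, np.nan)
    return 1.0 - np.nanmean(sims, axis=1)

rates = relative_rates(["ACCA", "ACCA", "GTTG", "GTAG"])   # hypothetical 4-taxon alignment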
Performing Systematic Literature Reviews with Novices: An Iterative Approach
ERIC Educational Resources Information Center
Lavallée, Mathieu; Robillard, Pierre-N.; Mirsalari, Reza
2014-01-01
Reviewers performing systematic literature reviews require understanding of the review process and of the knowledge domain. This paper presents an iterative approach for conducting systematic literature reviews that addresses the problems faced by reviewers who are novices in one or both levels of understanding. This approach is derived from…
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Training programs at DOE facilities should prepare personnel to safely and efficiently operate and maintain the facilities in accordance with DOE requirements. This guide presents good practices for a systematic approach to on-the-job training (OJT) and OJT programs and should be used in conjunction with DOE Training Program Handbook: A Systematic Approach to Training, and with the DOE Handbook entitled Alternative Systematic Approaches to Training to develop performance-based OJT programs. DOE contractors may also use this guide to modify existing OJT programs that do not meet the systematic approach to training (SAT) objectives.
NASA Astrophysics Data System (ADS)
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling computing self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives such as increasing the lift and alleviating the fluctuating drag and lift is also discussed.
Mohiuddin, Syed; Busby, John; Savović, Jelena; Richards, Alison; Northstone, Kate; Hollingworth, William; Donovan, Jenny L; Vasilakis, Christos
2017-01-01
Objectives Overcrowding in the emergency department (ED) is common in the UK as in other countries worldwide. Computer simulation is one approach used for understanding the causes of ED overcrowding and assessing the likely impact of changes to the delivery of emergency care. However, little is known about the usefulness of computer simulation for analysis of ED patient flow. We undertook a systematic review to investigate the different computer simulation methods and their contribution for analysis of patient flow within EDs in the UK. Methods We searched eight bibliographic databases (MEDLINE, EMBASE, COCHRANE, WEB OF SCIENCE, CINAHL, INSPEC, MATHSCINET and ACM DIGITAL LIBRARY) from date of inception until 31 March 2016. Studies were included if they used a computer simulation method to capture patient progression within the ED of an established UK National Health Service hospital. Studies were summarised in terms of simulation method, key assumptions, input and output data, conclusions drawn and implementation of results. Results Twenty-one studies met the inclusion criteria. Of these, 19 used discrete event simulation and 2 used system dynamics models. The purpose of many of these studies (n=16; 76%) centred on service redesign. Seven studies (33%) provided no details about the ED being investigated. Most studies (n=18; 86%) used specific hospital models of ED patient flow. Overall, the reporting of underlying modelling assumptions was poor. Nineteen studies (90%) considered patient waiting or throughput times as the key outcome measure. Twelve studies (57%) reported some involvement of stakeholders in the simulation study. However, only three studies (14%) reported on the implementation of changes supported by the simulation. Conclusions We found that computer simulation can provide a means to pretest changes to ED care delivery before implementation in a safe and efficient manner. However, the evidence base is small and poorly developed. There are some methodological, data, stakeholder, implementation and reporting issues, which must be addressed by future studies. PMID:28487459
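For illustration, a bare-bones discrete event simulation of ED patient flow in the spirit of the reviewed studies (a toy model built with the simpy library; the arrival rate, treatment time and bay count are assumptions of ours, not parameters from any included study):

import random
import simpy

def patient(env, triage, waits):
    # a patient arrives, queues for a treatment bay, is treated, then leaves
    arrival = env.now
    with triage.request() as bay:
        yield bay                                      # wait for a free bay
        waits.append(env.now - arrival)                # record waiting time
        yield env.timeout(random.expovariate(1 / 30))  # treatment ~30 min on average

def arrivals(env, triage, waits):
    while True:
        yield env.timeout(random.expovariate(1 / 10))  # a new arrival roughly every 10 min
        env.process(patient(env, triage, waits))

random.seed(42)
waits = []
env = simpy.Environment()
triage = simpy.Resource(env, capacity=3)               # three treatment bays
env.process(arrivals(env, triage, waits))
env.run(until=8 * 60)                                  # simulate one 8-hour shift
print(f"patients seen: {len(waits)}, mean wait: {sum(waits) / len(waits):.1f} min")

Service redesign questions of the kind reported above are then explored by rerunning the model with different capacities, arrival profiles or process routings and comparing waiting-time distributions.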
A Systematic Review of Serious Games in Training Health Care Professionals.
Wang, Ryan; DeMaria, Samuel; Goldberg, Andrew; Katz, Daniel
2016-02-01
Serious games are computer-based games designed for training purposes. They are poised to expand their role in medical education. This systematic review, conducted in accordance with PRISMA guidelines, aimed to synthesize current serious gaming trends in health care training, especially those pertaining to developmental methodologies and game evaluation. PubMed, EMBASE, and Cochrane databases were queried for relevant documents published through December 2014. Of the 3737 publications identified, 48 of them, covering 42 serious games, were included. From 2007 to 2014, they demonstrate a growth from 2 games and 2 genres to 42 games and 8 genres. Overall, study design was heterogeneous and methodological quality by MERSQI score averaged 10.5/18, which is modest. Seventy-nine percent of serious games were evaluated for training outcomes. As the number of serious games for health care training continues to grow, having schemas that organize how educators approach their development and evaluation is essential for their success.
Electronic communication based interventions for hazardous young drinkers: A systematic review.
O Rourke, L; Humphris, G; Baldacchino, A
2016-09-01
Previous reviews have specifically looked at computer-based or Internet-based approaches. However, there has been no systematic review focused upon electronic communication based interventions for hazardous young drinkers. Out of 3298 relevant citations, 13 papers consisting of 11 studies met the inclusion criteria. Effectiveness of intervention delivery was assessed using behavioural outcomes. Eight papers delivered interventions using the Web, three implemented text messaging, one used a mobile phone app and the remaining paper used a social networking site. The ability to provide personalized electronic feedback resulted in a reduction in alcohol consumption, frequency of binge drinking, and drinking in a non-risky way. However, intervention length did not appear to have an impact on overall effectiveness. Usage of text messaging and Social Network Sites (SNS) increased accessibility and ease of engaging in an intervention that is appealing and acceptable for young adults. Copyright © 2016 Elsevier Ltd. All rights reserved.
Xu, Min; Chai, Xiaoqi; Muthakana, Hariank; Liang, Xiaodan; Yang, Ge; Zeev-Ben-Mordehai, Tzviya; Xing, Eric P.
2017-01-01
Abstract Motivation: Cellular Electron CryoTomography (CECT) enables 3D visualization of cellular organization at near-native state and in sub-molecular resolution, making it a powerful tool for analyzing structures of macromolecular complexes and their spatial organizations inside single cells. However, the high degree of structural complexity together with practical imaging limitations makes the systematic de novo discovery of structures within cells challenging. It would likely require averaging and classifying millions of subtomograms potentially containing hundreds of highly heterogeneous structural classes. Although it is no longer difficult to acquire CECT data containing such amounts of subtomograms due to advances in data acquisition automation, existing computational approaches have very limited scalability or discrimination ability, making them incapable of processing such amounts of data. Results: To complement existing approaches, in this article we propose a new approach for subdividing subtomograms into smaller but relatively homogeneous subsets. The structures in these subsets can then be separately recovered using existing computation-intensive methods. Our approach is based on supervised structural feature extraction using deep learning, in combination with unsupervised clustering and reference-free classification. Our experiments show that, compared with existing unsupervised rotation invariant feature and pose-normalization based approaches, our new approach achieves significant improvements in both discrimination ability and scalability. More importantly, our new approach is able to discover new structural classes and recover structures that do not exist in training data. Availability and Implementation: Source code freely available at http://www.cs.cmu.edu/~mxu1/software. Contact: mxu1@cs.cmu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881965
A simulation based method to assess inversion algorithms for transverse relaxation data
NASA Astrophysics Data System (ADS)
Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong
2008-04-01
NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) in low field. Most samples usually have a distribution of T2 values. Extraction of this distribution of T2s from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to date to solve this problem. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285], was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching the inversion results from a series of true decay data and noisy simulated data. In addition to simulation studies, the same approach was also applied to real experimental data to support the simulation results.
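The simulation-based assessment can be pictured with a short sketch (illustrative only; the inversion below is a generic Tikhonov-regularized non-negative least-squares fit standing in for UPEN, and all T2 values, amplitudes and noise levels are invented):

import numpy as np
from scipy.optimize import nnls

def simulate_cpmg(t, t2_values, amplitudes, noise_sd, rng):
    # synthetic CPMG decay: sum of exponentials plus Gaussian noise
    kernel = np.exp(-np.outer(t, 1.0 / t2_values))
    return kernel @ amplitudes + noise_sd * rng.standard_normal(len(t))

def invert_t2(t, decay, t2_grid, alpha):
    # regularized NNLS inversion onto a fixed T2 grid
    K = np.exp(-np.outer(t, 1.0 / t2_grid))
    K_aug = np.vstack([K, alpha * np.eye(len(t2_grid))])   # Tikhonov: append alpha * I
    y_aug = np.concatenate([decay, np.zeros(len(t2_grid))])
    f, _ = nnls(K_aug, y_aug)
    return f

rng = np.random.default_rng(1)
t = np.arange(1, 2001) * 1e-3                     # echo times in seconds (assumed)
true_t2, true_amp = np.array([0.05, 0.4]), np.array([0.6, 0.4])   # hypothetical sample
decay = simulate_cpmg(t, true_t2, true_amp, noise_sd=0.005, rng=rng)
t2_grid = np.logspace(-2.5, 0.5, 100)
f_est = invert_t2(t, decay, t2_grid, alpha=0.1)   # compare f_est against the known input

Repeating the inversion over many noise realizations and acquisition settings, and comparing the recovered distribution with the known input, is the kind of systematic check the abstract describes.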
ERIC Educational Resources Information Center
Simpson, Andrea; El-Refaie, Amr; Stephenson, Caitlin; Chen, Yi-Ping Phoebe; Deng, Dennis; Erickson, Shane; Tay, David; Morris, Meg E.; Doube, Wendy; Caelli, Terry
2015-01-01
The purpose of this systematic review was to examine whether online or computer-based technologies were effective in assisting the development of speech and language skills in children with hearing loss. Relevant studies of children with hearing loss were analysed with reference to (1) therapy outcomes, (2) factors affecting outcomes, and (3)…
Computational Approaches to Viral Evolution and Rational Vaccine Design
NASA Astrophysics Data System (ADS)
Bhattacharya, Tanmoy
2006-10-01
Viral pandemics, including HIV, are a major health concern across the world. Experimental techniques available today have uncovered a great wealth of information about how these viruses infect, grow, and cause disease, as well as how our body attempts to defend itself against them. Nevertheless, due to the high variability and fast evolution of many of these viruses, the traditional method of developing vaccines by presenting a heuristically chosen strain to the body fails, and an effective intervention strategy still eludes us. A large amount of carefully curated genomic data on a number of these viruses is now available, often annotated with disease and immunological context. The availability of parallel computers has now made it possible to carry out a systematic analysis of these data within an evolutionary framework. I will describe, as an example, how computations on such data have allowed us to understand the origins and diversification of HIV, the causative agent of AIDS. On the practical side, computations on the same data are now being used to inform the choice or design of optimal vaccine strains.
Quantum computation with realistic magic-state factories
NASA Astrophysics Data System (ADS)
O'Gorman, Joe; Campbell, Earl T.
2017-03-01
Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10^-4 error gates and the availability of long-range interactions.
Linshiz, Gregory; Goldberg, Alex; Konry, Tania; Hillson, Nathan J
2012-01-01
Synthetic biology is a nascent field that emerged in earnest only around the turn of the millennium. It aims to engineer new biological systems and impart new biological functionality, often through genetic modifications. The design and construction of new biological systems is a complex, multistep process, requiring multidisciplinary collaborative efforts from "fusion" scientists who have formal training in computer science or engineering, as well as hands-on biological expertise. The public has high expectations for synthetic biology and eagerly anticipates the development of solutions to the major challenges facing humanity. This article discusses laboratory practices and the conduct of research in synthetic biology. It argues that the fusion science approach, which integrates biology with computer science and engineering best practices, including standardization, process optimization, computer-aided design and laboratory automation, miniaturization, and systematic management, will increase the predictability and reproducibility of experiments and lead to breakthroughs in the construction of new biological systems. The article also discusses several successful fusion projects, including the development of software tools for DNA construction design automation, recursive DNA construction, and the development of integrated microfluidics systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raugei, Simone; DuBois, Daniel L.; Rousseau, Roger J.
Rational design of molecular catalysts requires a systematic approach to designing ligands with specific functionality and precisely tailored electronic and steric properties. It then becomes possible to devise computer protocols to predict accurately the required properties and ultimately to design catalysts by computer. In this account we first review how thermodynamic properties such as oxidation-reduction potentials (E0), acidities (pKa), and hydride donor abilities (ΔGH-) form the basis for a systematic design of molecular catalysts for reactions that are critical for a secure energy future (hydrogen evolution and oxidation, oxygen and nitrogen reduction, and carbon dioxide reduction). We highlight how density functional theory allows us to determine and predict these properties within “chemical” accuracy (~ 0.06 eV for redox potentials, ~ 1 pKa unit for pKa values, and ~ 1.5 kcal/mol for hydricities). These quantities determine free energy maps and profiles associated with catalytic cycles, i.e. the relative energies of intermediates, and help us distinguish between desirable and high-energy pathways and mechanisms. Good catalysts have flat profiles that avoid high activation barriers due to low and high energy intermediates. We illustrate how the criterion of a flat energy profile lends itself to the prediction of design points by computer for optimum catalysts. This research was carried out in the Center for Molecular Electro-catalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences. Pacific Northwest National Laboratory (PNNL) is operated for the DOE by Battelle.
A Systematic Literature Mapping of Risk Analysis of Big Data in Cloud Computing Environment
NASA Astrophysics Data System (ADS)
Bee Yusof Ali, Hazirah; Marziana Abdullah, Lili; Kartiwi, Mira; Nordin, Azlin; Salleh, Norsaremah; Sham Awang Abu Bakar, Normi
2018-05-01
This paper investigates previous literature that focuses on three elements: risk assessment, big data and cloud. We use a systematic literature mapping method to search for journals and proceedings. The systematic literature mapping process is utilized to obtain a properly screened and focused body of literature. With the help of inclusion and exclusion criteria, the search is further narrowed. Classification helps us group the literature into categories. At the end of the mapping, gaps can be seen; these gaps indicate where our focus should be in analysing the risk of big data in a cloud computing environment. Thus, a framework for assessing the risk of security, privacy and trust associated with big data and the cloud computing environment is highly needed.
Preprocessing and meta-classification for brain-computer interfaces.
Hammon, Paul S; de Sa, Virginia R
2007-03-01
A brain-computer interface (BCI) is a system which allows direct translation of brain states into actions, bypassing the usual muscular pathways. A BCI system works by extracting user brain signals, applying machine learning algorithms to classify the user's brain state, and performing a computer-controlled action. Our goal is to improve brain state classification. Perhaps the most obvious way to improve classification performance is the selection of an advanced learning algorithm. However, it is now well known in the BCI community that careful selection of preprocessing steps is crucial to the success of any classification scheme. Furthermore, recent work indicates that combining the output of multiple classifiers (meta-classification) leads to improved classification rates relative to single classifiers (Dornhege et al., 2004). In this paper, we develop an automated approach which systematically analyzes the relative contributions of different preprocessing and meta-classification approaches. We apply this procedure to three data sets drawn from BCI Competition 2003 (Blankertz et al., 2004) and BCI Competition III (Blankertz et al., 2006), each of which exhibit very different characteristics. Our final classification results compare favorably with those from past BCI competitions. Additionally, we analyze the relative contributions of individual preprocessing and meta-classification choices and discuss which types of BCI data benefit most from specific algorithms.
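As a rough illustration of the comparison the authors automate, the sketch below cross-validates a single classifier against a soft-voting meta-classifier on synthetic band-power features using scikit-learn; the features, classifiers, and data are stand-ins, not the competition pipelines analyzed in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))           # 200 trials x 16 band-power features (synthetic)
y = rng.integers(0, 2, size=200)         # binary brain-state labels (synthetic)

single = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
meta = make_pipeline(
    StandardScaler(),
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("svm", SVC(probability=True))],
        voting="soft"),
)
for name, clf in [("single classifier", single), ("meta-classifier", meta)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.2f}")
```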
2013-01-01
Background Handheld computers and mobile devices provide instant access to vast amounts and types of useful information for health care professionals. Their reduced size and increased processing speed have led to rapid adoption in health care. Thus, it is important to identify whether handheld computers are actually effective in clinical practice. Objective A scoping review of systematic reviews was designed to provide a quick overview of the documented evidence of effectiveness for health care professionals using handheld computers in their clinical work. Methods A detailed search, sensitive for systematic reviews, was applied to the Cochrane, Medline, EMBASE, PsycINFO, Allied and Complementary Medicine Database (AMED), Global Health, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. All outcomes that demonstrated effectiveness in clinical practice were included. Classroom learning and patient use of handheld computers were excluded. Quality was assessed using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. A previously published conceptual framework was used as the basis for dual data extraction. Reported outcomes were summarized according to the primary function of the handheld computer. Results Five systematic reviews met the inclusion and quality criteria. Together, they reviewed 138 unique primary studies. Most reviewed descriptive intervention studies in which physicians, pharmacists, or medical students used personal digital assistants. Effectiveness was demonstrated across four distinct functions of handheld computers: patient documentation, patient care, information seeking, and professional work patterns. Within each of these functions, a range of positive outcomes was reported using both objective and self-report measures. The use of handheld computers improved patient documentation through more complete recording, fewer documentation errors, and increased efficiency. Handheld computers provided easy access to clinical decision support systems and patient management systems, which improved decision making for patient care. Handheld computers saved time and gave earlier access to new information. There were also reports that handheld computers enhanced work patterns and efficiency. Conclusions This scoping review summarizes the secondary evidence for the effectiveness of handheld computers and mHealth. It provides a snapshot of effective use by health care professionals across four key functions. We identified evidence to suggest that handheld computers provide easy and timely access to information and enable accurate and complete documentation. Further, they can give health care professionals instant access to evidence-based decision support and patient management systems to improve clinical decision making. Finally, there is evidence that handheld computers allow health professionals to be more efficient in their work practices. It is anticipated that this evidence will guide clinicians and managers in implementing handheld computers in clinical practice and in designing future research. PMID:24165786
Structural Optimization of a Force Balance Using a Computational Experiment Design
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2002-01-01
This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, are undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives, thereby providing a systematic foundation for advancements in structural design.
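A minimal sketch of the design-of-experiments idea under stated assumptions: a two-variable toy response stands in for the finite element analysis, a small three-level design is evaluated, a quadratic response surface is fitted, and its predicted optimum is located. The function, levels, and grid are illustrative only, not the balance design variables from the paper.

```python
import numpy as np
from itertools import product

def analysis(x1, x2):                      # placeholder for the FEA response
    return (x1 - 0.3) ** 2 + 2.0 * (x2 + 0.2) ** 2 + 0.1 * x1 * x2

levels = [-1.0, 0.0, 1.0]                  # coded design-variable levels
pts = np.array(list(product(levels, levels)))       # 3x3 two-factor design, 9 runs
resp = np.array([analysis(a, b) for a, b in pts])

# Quadratic model y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] * pts[:, 1], pts[:, 0] ** 2, pts[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(X, resp, rcond=None)

# Evaluate the fitted surface on a fine grid and report the predicted optimum.
g = np.linspace(-1.0, 1.0, 201)
G1, G2 = np.meshgrid(g, g)
surf = (coef[0] + coef[1] * G1 + coef[2] * G2 + coef[3] * G1 * G2
        + coef[4] * G1 ** 2 + coef[5] * G2 ** 2)
i, j = np.unravel_index(np.argmin(surf), surf.shape)
print("predicted optimum (coded units):", G1[i, j], G2[i, j])
```

A sequential strategy, as described above, would refine the design region around this predicted optimum and re-run the analysis code there rather than covering the whole space at once.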
Model-Driven Useware Engineering
NASA Astrophysics Data System (ADS)
Meixner, Gerrit; Seissler, Marc; Breiner, Kai
User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.
A time domain frequency-selective multivariate Granger causality approach.
Leistritz, Lutz; Witte, Herbert
2016-08-01
The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
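The sketch below illustrates the underlying principle rather than the authors' algorithm: cancel one spectral band of a candidate driver signal and compare the one-step AR prediction error of the target with and without that modification. The AR order, band edges, and simulated coupling are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 5                                   # samples, AR order (assumed)
x = rng.normal(size=n)                           # candidate driver signal
y = np.zeros(n)
for t in range(1, n):                            # y is driven by lagged x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def band_cancel(sig, f_lo, f_hi, fs=1.0):
    """Zero out one frequency band of a signal via the FFT."""
    F = np.fft.rfft(sig)
    f = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    F[(f >= f_lo) & (f <= f_hi)] = 0.0
    return np.fft.irfft(F, n=sig.size)

def one_step_error(target, driver, p):
    """Residual variance of an AR(p) prediction of target from both signals."""
    rows = [np.concatenate([target[t - p:t], driver[t - p:t]])
            for t in range(p, target.size)]
    A = np.column_stack([np.ones(len(rows)), np.array(rows)])
    b = target[p:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.var(b - A @ coef)

e_full = one_step_error(y, x, p)
e_cancel = one_step_error(y, band_cancel(x, 0.0, 0.1), p)
print("Granger-type index for the cancelled band:", np.log(e_cancel / e_full))
```

A positive index indicates that the cancelled band of the driver carried information useful for predicting the target, which is the frequency-selective reading of Granger's predictability principle.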
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Veawab, A.
2013-03-01
This study proposes a sequential factorial analysis (SFA) approach for supporting regional air quality management under uncertainty. SFA is capable not only of examining the interactive effects of input parameters, but also of analyzing the effects of constraints. When there are too many factors involved in practical applications, SFA has the advantage of conducting a sequence of factorial analyses for characterizing the effects of factors in a systematic manner. The factor-screening strategy employed in SFA is effective in greatly reducing the computational effort. The proposed SFA approach is applied to a regional air quality management problem for demonstrating its applicability. The results indicate that the effects of factors are evaluated quantitatively, which can help decision makers identify the key factors that have significant influence on system performance and explore the valuable information that may be veiled beneath their interrelationships.
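A minimal sketch of the factorial-analysis building block behind such an approach, assuming a toy three-parameter response in place of the air quality management model: main and two-factor interaction effects are estimated from a 2^3 design.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def response(a, b, c):                      # toy stand-in for the system model
    return 3.0 * a - 1.5 * b + 0.5 * a * b + 0.2 * c + rng.normal(0.0, 0.01)

levels = [-1, 1]
design = np.array(list(product(levels, repeat=3)), dtype=float)   # 2^3 = 8 runs
y = np.array([response(*row) for row in design])

def effect(cols):
    """Factorial effect: (2/N) * sum over runs of (product of coded columns) * y."""
    contrast = np.prod(design[:, cols], axis=1)
    return 2.0 * np.mean(contrast * y)

for name, cols in [("A", [0]), ("B", [1]), ("C", [2]),
                   ("AxB", [0, 1]), ("AxC", [0, 2]), ("BxC", [1, 2])]:
    print(f"effect of {name}: {effect(cols):+.2f}")
```

Screening out factors whose estimated effects are negligible before running further, finer designs is the step that gives the sequential strategy its computational savings.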
Multi-Step Usage of in Vivo Models During Rational Drug Design and Discovery
Williams, Charles H.; Hong, Charles C.
2011-01-01
In this article we propose a systematic development method for rational drug design while reviewing paradigms in industry and emerging techniques and technologies in the field. Although the process of drug development today has been accelerated by the emergence of computational methodologies, it remains a herculean challenge requiring exorbitant resources and often fails to yield clinically viable results. The current paradigm of target-based drug design is often misguided and tends to yield compounds with poor absorption, distribution, metabolism, excretion, and toxicology (ADMET) properties. Therefore, an in vivo organism-based approach allowing for a multidisciplinary inquiry into potent and selective molecules is an excellent place to begin rational drug design. We will review how organisms like the zebrafish and Caenorhabditis elegans can not only be starting points, but can be used at various steps of the drug development process, from target identification to pre-clinical trial models. This systems biology-based approach, paired with the power of computational biology, genetics, and developmental biology, provides a methodological framework to avoid the pitfalls of traditional target-based drug design. PMID:21731440
Quantum chemical approaches in structure-based virtual screening and lead optimization
NASA Astrophysics Data System (ADS)
Cavasotto, Claudio N.; Adler, Natalia S.; Aucar, Maria G.
2018-05-01
Today computational chemistry is a consolidated tool in drug lead discovery endeavors. Due to methodological developments and to the enormous advance in computer hardware, methods based on quantum mechanics (QM) have gained great attention in the last 10 years, and calculations on biomacromolecules are becoming increasingly explored, aiming to provide better accuracy in the description of protein-ligand interactions and the prediction of binding affinities. In principle, the QM formulation includes all contributions to the energy, accounting for terms usually missing in molecular mechanics force-fields, such as electronic polarization effects, metal coordination, and covalent binding; moreover, QM methods are systematically improvable, and provide a greater degree of transferability. In this mini-review we present recent applications of explicit QM-based methods in small-molecule docking and scoring, and in the calculation of binding free-energy in protein-ligand systems. Although the routine use of QM-based approaches in an industrial drug lead discovery setting remains a formidable challenging task, it is likely they will increasingly become active players within the drug discovery pipeline.
Coarse-Graining of Polymer Dynamics via Energy Renormalization
NASA Astrophysics Data System (ADS)
Xia, Wenjie; Song, Jake; Phelan, Frederick; Douglas, Jack; Keten, Sinan
The computational prediction of the properties of polymeric materials to serve the needs of materials design and prediction of their performance is a grand challenge due to the prohibitive computational times of all-atomistic (AA) simulations. Coarse-grained (CG) modeling is an essential strategy for making progress on this problem. While there has been intense activity in this area, effective methods of coarse-graining have been slow to develop. Our approach to this fundamental problem starts from the observation that integrating out degrees of freedom of the AA model leads to a strong modification of the configurational entropy and cohesive interaction. Based on this observation, we propose a temperature-dependent systematic renormalization of the cohesive interaction in the CG modeling to recover the thermodynamic modifications in the system and the dynamics of the AA model. Here, we show that this energy renormalization approach to CG can faithfully estimate the diffusive, segmental and glassy dynamics of the AA model over a large temperature range spanning from the Arrhenius melt to the non-equilibrium glassy states. Our proposed CG strategy offers a promising route for developing thermodynamically consistent CG models with temperature transferability.
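A minimal sketch of the renormalization idea, assuming a Lennard-Jones coarse-grained pair potential and a sigmoidal temperature dependence of the cohesive-energy scaling factor; the functional form and its constants are placeholders, not the fitted values from the study.

```python
import numpy as np

def lj(r, eps, sigma=1.0):
    """12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def eps_renormalized(T, eps_aa=1.0, alpha_hi=1.4, alpha_lo=1.0, Tg=0.45, k=20.0):
    """Sigmoidal crossover of the renormalization factor across Tg (assumed form)."""
    alpha = alpha_lo + (alpha_hi - alpha_lo) / (1.0 + np.exp(k * (T - Tg)))
    return alpha * eps_aa

r_min = 2.0 ** (1.0 / 6.0)                       # LJ potential minimum for sigma = 1
for T in (0.3, 0.45, 0.8):                       # glassy, near-Tg, melt (reduced units)
    eps = eps_renormalized(T)
    print(f"T = {T}: eps_CG = {eps:.2f}, U(r_min) = {lj(r_min, eps):.2f}")
```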
Schumann, Marcel; Armen, Roger S
2013-05-30
Molecular docking of small-molecules is an important procedure for computer-aided drug design. Modeling receptor side chain flexibility is often important or even crucial, as it allows the receptor to adopt new conformations as induced by ligand binding. However, the accurate and efficient incorporation of receptor side chain flexibility has proven to be a challenge due to the huge computational complexity required to adequately address this problem. Here we describe a new docking approach with a very fast, graph-based optimization algorithm for assignment of the near-optimal set of residue rotamers. We extensively validate our approach using the 40 DUD target benchmarks commonly used to assess virtual screening performance and demonstrate a large improvement using the developed side chain optimization over rigid receptor docking (average ROC AUC of 0.693 vs. 0.623). Compared to numerous benchmarks, the overall performance is better than nearly all other commonly used procedures. Furthermore, we provide a detailed analysis of the level of receptor flexibility observed in docking results for different classes of residues and elucidate potential avenues for further improvement. Copyright © 2013 Wiley Periodicals, Inc.
Semiclassical Path Integral Calculation of Nonlinear Optical Spectroscopy.
Provazza, Justin; Segatta, Francesco; Garavelli, Marco; Coker, David F
2018-02-13
Computation of nonlinear optical response functions allows for an in-depth connection between theory and experiment. Experimentally recorded spectra provide a high density of information, but to objectively disentangle overlapping signals and to reach a detailed and reliable understanding of the system dynamics, measurements must be integrated with theoretical approaches. Here, we present a new, highly accurate and efficient trajectory-based semiclassical path integral method for computing higher order nonlinear optical response functions for non-Markovian open quantum systems. The approach is, in principle, applicable to general Hamiltonians and does not require any restrictions on the form of the intrasystem or system-bath couplings. This method is systematically improvable and is shown to be valid in parameter regimes where perturbation theory-based methods qualitatively break down. As a test of the methodology presented here, we study a system-bath model for a coupled dimer for which we compare against numerically exact results and standard approximate perturbation theory-based calculations. Additionally, we study a monomer with discrete vibronic states that serves as the starting point for future investigation of vibronic signatures in nonlinear electronic spectroscopy.
CBF measured by Xe-CT: Approach to analysis and normal values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yonas, H.; Darby, J.M.; Marks, E.C.
1991-09-01
Normal reference values and a practical approach to CBF analysis are needed for routine clinical analysis and interpretation of xenon-enhanced computed tomography (CT) CBF studies. The authors measured CBF in 67 normal individuals with the GE 9800 CT scanner adapted for CBF imaging with stable Xe. CBF values for vascular territories were systematically analyzed using the clustering of contiguous 2-cm circular regions of interest (ROIs) placed within the cortical mantle and basal ganglia. Mixed cortical flows averaged 51 ± 10 ml·100 g^-1·min^-1. High and low flow compartments, sampled by placing 5-mm circular ROIs in regions containing the highest and lowest flow values in each hemisphere, averaged 84 ± 14 and 20 ± 5 ml·100 g^-1·min^-1, respectively. Mixed cortical flow values as well as values within the high flow compartment demonstrated significant decline with age; however, there were no significant age-related changes in the low flow compartment. The clustering of systematically placed cortical and subcortical ROIs has provided a normative data base for Xe-CT CBF and a flexible and uncomplicated method for the analysis of CBF maps generated by Xe-enhanced CT.
Using Laser Scanners to Augment the Systematic Error Pointing Model
NASA Astrophysics Data System (ADS)
Wernicke, D. R.
2016-08-01
The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.
NASA Astrophysics Data System (ADS)
Lindsay, Jan M.; Robertson, Richard E. A.
2018-04-01
We report on the process of generating the first suite of integrated volcanic hazard zonation maps for the islands of Dominica, Grenada (including Kick 'em Jenny and Ronde/Caille), Nevis, Saba, St. Eustatius, St. Kitts, Saint Lucia and St Vincent in the Lesser Antilles. We developed a systematic approach that accommodated the range in prior knowledge of the volcanoes in the region. A first-order hazard assessment for each island was used to develop one or more scenario(s) of likely future activity, for which scenario-based hazard maps were generated. For the most-likely scenario on each island we also produced a poster-sized integrated volcanic hazard zonation map, which combined the individual hazardous phenomena depicted in the scenario-based hazard maps into integrated hazard zones. We document the philosophy behind the generation of this suite of maps, and the method by which hazard information was combined to create integrated hazard zonation maps, and illustrate our approach through a case study of St. Vincent. We also outline some of the challenges we faced using this approach, and the lessons we have learned by observing how stakeholders have interacted with the maps over the past 10 years. Based on our experience, we recommend that future map makers involve stakeholders in the entire map generation process, especially when making design choices such as type of base map, use of colour and gradational boundaries, and indeed what to depict on the map. We also recommend careful consideration of how to evaluate and depict offshore hazard of island volcanoes, and recommend computer-assisted modelling of all phenomena to generate more realistic hazard footprints. Finally, although our systematic approach to integrating individual hazard data into zones generally worked well, we suggest that a better approach might be to treat the integration of hazards on a case-by-case basis to ensure the final product meets map users' needs. We hope that the documentation of our experience might be useful for other map makers to take into account when creating new or updating existing maps.
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
NASA Astrophysics Data System (ADS)
Anugu, N.; Garcia, P.
2016-04-01
Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards the integer pixels; these errors are called systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images were constructed using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms was investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The sub-pixel-grid region-of-interest images are generated with bicubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography (Sjödahl 1994); the technique is applied here to solar wavefront sensing. A large dynamic range and better accuracy in the measurements are achieved by combining original-pixel-grid correlation matching over a large field of view with sub-pixel interpolated-grid correlation matching within a small field of view. The results reveal that the proposed method outperforms all the peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5-times-improved image sampling is used, at the expense of twice the computational cost. With the 5-times-improved image sampling, the wavefront accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wavefront sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics.
Also, by choosing an appropriate image-sampling increment as a trade-off between the computational speed limitation and the targeted sub-pixel image-shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source, a laser guide star, and a Galactic Center extended scene). The results are planned for submission to the Optics Express journal.
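A minimal sketch of the two-step shift estimate under stated assumptions: an integer-pixel FFT cross-correlation followed by a brute-force correlation search on a finer grid around that estimate, using SciPy's cubic-spline resampling in place of the bicubic interpolation described above. Image size, search window, and upsampling factor are illustrative choices.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

rng = np.random.default_rng(0)
ref = rng.normal(size=(32, 32))                  # stand-in for a sub-aperture image
true_shift = (1.3, -0.6)
img = nd_shift(ref, true_shift, order=3, mode="wrap")

def xcorr_peak(a, b):
    """Integer shift of b relative to a from the FFT cross-correlation peak."""
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    wrap = lambda k, n: k - n if k > n // 2 else k
    return wrap(iy, a.shape[0]), wrap(ix, a.shape[1])

dy0, dx0 = xcorr_peak(img, ref)                  # step 1: original 1-pixel grid

up = 5                                           # step 2: 1/5-pixel grid
best, best_val = (dy0, dx0), -np.inf
for sy in np.arange(dy0 - 2, dy0 + 2, 1.0 / up):
    for sx in np.arange(dx0 - 2, dx0 + 2, 1.0 / up):
        cand = nd_shift(ref, (sy, sx), order=3, mode="wrap")   # interpolated resampling
        val = np.sum(cand * img)                 # correlation restricted to the small window
        if val > best_val:
            best, best_val = (sy, sx), val
print("true shift:", true_shift, "estimated:", best)
```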
Structure, function, and behaviour of computational models in systems biology
2013-01-01
Background Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such “bio-models” necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But, even if computational bio-models themselves are represented precisely in terms of mathematical expressions, their full meaning is not yet formally specified and is only described in natural language. Results We present a conceptual framework – the meaning facets – which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model’s components (structure), the meaning of the model’s intended use (function), and the meaning of the model’s dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. Conclusions The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research. PMID:23721297
ERIC Educational Resources Information Center
Kankaanpää, Irja; Isomäki, Hannakaisa
2013-01-01
This paper reviews research literature on the production and commercialization of IT-enabled higher education in computer science. Systematic literature review (SLR) was carried out in order to find out to what extent this area has been studied, more specifically how much it has been studied and to what detail. The results of this paper make a…
The cyber threat landscape: Challenges and future research directions
NASA Astrophysics Data System (ADS)
Gil, Santiago; Kott, Alexander; Barabási, Albert-László
2014-07-01
While much attention has been paid to the vulnerability of computer networks to node and link failure, there is limited systematic understanding of the factors that determine the likelihood that a node (computer) is compromised. We therefore collect threat log data in a university network to study the patterns of threat activity for individual hosts. We relate this information to the properties of each host as observed through network-wide scans, establishing associations between the network services a host is running and the kinds of threats to which it is susceptible. We propose a methodology to associate services to threats inspired by the tools used in genetics to identify statistical associations between mutations and diseases. The proposed approach allows us to determine probabilities of infection directly from observation, offering an automated high-throughput strategy to develop comprehensive metrics for cyber-security.
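A toy sketch in that spirit, assuming synthetic host data and a simple 2x2 Fisher exact test rather than the genetics-inspired methodology itself: hosts exposing a given service are tested for a higher infection rate, and the conditional infection probability is read directly from the counts.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n_hosts = 500
runs_service = rng.random(n_hosts) < 0.3                   # host exposes the service
p_infect = np.where(runs_service, 0.20, 0.05)              # service raises infection risk
infected = rng.random(n_hosts) < p_infect

table = np.array([
    [np.sum(runs_service & infected),  np.sum(runs_service & ~infected)],
    [np.sum(~runs_service & infected), np.sum(~runs_service & ~infected)],
])
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
print("P(infected | service) =", table[0, 0] / table[0].sum())
```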
Simulation of air pollution due to marine engines
NASA Astrophysics Data System (ADS)
Stan, L. C.
2017-08-01
This paper simulates combustion inside marine engines using the newest computational methods and technologies, yielding a diverse and rich palette of solutions that are extremely useful for studying and predicting the complex phenomena of fuel combustion. The paper contributes to the theoretical systematization of the field by providing a thorough inventory of the thermodynamic description of the phenomena that take place during combustion in marine diesel engines; an in-depth description of multidimensional combustion models, along with the interdisciplinary phenomenology they involve; finite element (FEA) modelling of the combustion chemistry under the non-premixed mixture approach; and a CFD (Computational Fluid Dynamics) model of the combustion region, yielding a rich palette of results of interest to any researcher of the process.
Toxcast and the Use of Human Relevant In Vitro Exposures ...
The path for incorporating new approach methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. These challenges include sufficient coverage of toxicological mechanisms to meaningfully interpret negative test results, development of increasingly relevant test systems, computational modeling to integrate experimental data, putting results in a dose and exposure context, characterizing uncertainty, and efficient validation of the test systems and computational models. The presentation will cover progress at the U.S. EPA in systematically addressing each of these challenges and delivering more human-relevant risk-based assessments. This abstract does not necessarily reflect U.S. EPA policy. Presentation at the British Toxicological Society Annual Congress on ToxCast and the Use of Human Relevant In Vitro Exposures: Incorporating high-throughput exposure and toxicity testing data for 21st century risk assessments .
A Development Architecture for Serious Games Using BCI (Brain Computer Interface) Sensors
Sung, Yunsick; Cho, Kyungeun; Um, Kyhyun
2012-01-01
Games that use brainwaves via brain–computer interface (BCI) devices, to improve brain functions are known as BCI serious games. Due to the difficulty of developing BCI serious games, various BCI engines and authoring tools are required, and these reduce the development time and cost. However, it is desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers. Moreover, a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe an architecture, authoring tools, and development process of the proposed methodology, and apply it to a game development approach for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories. PMID:23202227
A development architecture for serious games using BCI (brain computer interface) sensors.
Sung, Yunsick; Cho, Kyungeun; Um, Kyhyun
2012-11-12
Games that use brainwaves via brain-computer interface (BCI) devices, to improve brain functions are known as BCI serious games. Due to the difficulty of developing BCI serious games, various BCI engines and authoring tools are required, and these reduce the development time and cost. However, it is desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers. Moreover, a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe an architecture, authoring tools, and development process of the proposed methodology, and apply it to a game development approach for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories.
Large-scale structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1983-01-01
Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2006-01-01
Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
Rosen, Eyal; Taschieri, Silvio; Del Fabbro, Massimo; Beitlitum, Ilan; Tsesis, Igor
2015-07-01
The aim of this study was to evaluate the diagnostic efficacy of cone-beam computed tomographic (CBCT) imaging in endodontics based on a systematic search and analysis of the literature using an efficacy model. A systematic search of the literature was performed to identify studies evaluating the use of CBCT imaging in endodontics. The identified studies were subjected to strict inclusion criteria followed by an analysis using a hierarchical model of efficacy (model) designed for appraisal of the literature on the levels of efficacy of a diagnostic imaging modality. Initially, 485 possible relevant articles were identified. After title and abstract screening and a full-text evaluation, 58 articles (12%) that met the inclusion criteria were analyzed and allocated to levels of efficacy. Most eligible articles (n = 52, 90%) evaluated technical characteristics or the accuracy of CBCT imaging, which was defined in this model as low levels of efficacy. Only 6 articles (10%) proclaimed to evaluate the efficacy of CBCT imaging to support the practitioner's decision making; treatment planning; and, ultimately, the treatment outcome, which was defined as higher levels of efficacy. The expected ultimate benefit of CBCT imaging to the endodontic patient as evaluated by its level of diagnostic efficacy is unclear and is mainly limited to its technical and diagnostic accuracy efficacies. Even for these low levels of efficacy, current knowledge is limited. Therefore, a cautious and rational approach is advised when considering CBCT imaging for endodontic purposes. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
2013-01-01
Background MicroRNAs (miRNAs) are important post-transcriptional regulators that have been demonstrated to play an important role in human diseases. Elucidating the associations between miRNAs and diseases at the systematic level will deepen our understanding of the molecular mechanisms of diseases. However, miRNA-disease associations identified by previous computational methods are far from completeness and more effort is needed. Results We developed a computational framework to identify miRNA-disease associations by performing random walk analysis, and focused on the functional link between miRNA targets and disease genes in protein-protein interaction (PPI) networks. Furthermore, a bipartite miRNA-disease network was constructed, from which several miRNA-disease co-regulated modules were identified by hierarchical clustering analysis. Our approach achieved satisfactory performance in identifying known cancer-related miRNAs for nine human cancers with an area under the ROC curve (AUC) ranging from 71.3% to 91.3%. By systematically analyzing the global properties of the miRNA-disease network, we found that only a small number of miRNAs regulated genes involved in various diseases, genes associated with neurological diseases were preferentially regulated by miRNAs and some immunological diseases were associated with several specific miRNAs. We also observed that most diseases in the same co-regulated module tended to belong to the same disease category, indicating that these diseases might share similar miRNA regulatory mechanisms. Conclusions In this study, we present a computational framework to identify miRNA-disease associations, and further construct a bipartite miRNA-disease network for systematically analyzing the global properties of miRNA regulation of disease genes. Our findings provide a broad perspective on the relationships between miRNAs and diseases and could potentially aid future research efforts concerning miRNA involvement in disease pathogenesis. PMID:24103777
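A minimal sketch of the random-walk-with-restart step on a toy protein-protein interaction network; the adjacency matrix, seed genes, and restart probability are illustrative assumptions, and the actual analysis runs on a genome-scale PPI network with known disease genes.

```python
import numpy as np

# Symmetric adjacency matrix of a toy 6-protein PPI network.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
W = A / A.sum(axis=0)                    # column-normalized transition matrix

seeds = np.array([1.0, 1.0, 0, 0, 0, 0]) # miRNA target genes (nodes 0 and 1)
p0 = seeds / seeds.sum()
r = 0.7                                  # restart probability (assumed)

p = p0.copy()
for _ in range(100):                     # iterate p <- (1 - r) W p + r p0 to convergence
    p_next = (1 - r) * W @ p + r * p0
    if np.abs(p_next - p).sum() < 1e-10:
        break
    p = p_next

disease_genes = [4, 5]                   # hypothetical disease gene indices
print("miRNA-disease association score:", p[disease_genes].sum())
```

Scoring every miRNA-disease pair this way, then thresholding against randomized networks, is the kind of procedure that yields the bipartite miRNA-disease network described above.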
NASA Technical Reports Server (NTRS)
Jourdan, Didier; Gautier, Catherine
1995-01-01
Comprehensive Ocean-Atmosphere Data Set (COADS) and satellite-derived parameters are input to a similarity theory-based model and treated in completely equivalent ways to compute global latent heat flux (LHF). In order to compute LHF exclusively from satellite measurements, an empirical relationship (Q-W relationship) is used to compute the air mixing ratio from Special Sensor Microwave/Imager (SSM/I) precipitable water W, and a new one is derived to compute the air temperature, also from retrieved W (T-W relationship). First analyses indicate that in situ and satellite LHF computations compare within 40%, but systematic errors increase the differences up to 100% in some regions. By investigating more closely the origin of the discrepancies, the spatial sampling of ship reports has been found to be an important source of error in the observed differences. When the number of in situ data records increases (more than 20 per month), the agreement is about 50 W/sq m rms (40 W/sq m rms for multiyear averages). Limitations of both empirical relationships and W retrieval errors strongly affect the LHF computation. Systematic LHF overestimation occurs in strong subsidence regions and LHF underestimation occurs within surface convergence zones and over oceanic upwelling areas. The analysis of time series of the different parameters in these regions confirms that systematic LHF discrepancies are negatively correlated with the differences between COADS and satellite-derived values of the air mixing ratio and air temperature. To reduce the systematic differences in satellite-derived LHF, a preliminary ship-satellite blending procedure has been developed for the air mixing ratio and air temperature.
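For orientation, a constant-coefficient bulk aerodynamic estimate of LHF is sketched below; the paper's similarity-theory model derives the exchange coefficient from stability rather than fixing it, so the coefficient and sample inputs here are only illustrative. In the satellite branch, qa would come from the W-based empirical relationships described above.

```python
# Bulk aerodynamic formula: LHF = rho * Lv * Ce * U * (qs - qa)
rho = 1.2       # air density, kg m^-3
Lv = 2.45e6     # latent heat of vaporization, J kg^-1
Ce = 1.3e-3     # bulk exchange coefficient for moisture (assumed constant here)
U = 7.0         # near-surface wind speed, m s^-1
qs = 0.018      # saturation mixing ratio at the sea surface, kg kg^-1
qa = 0.014      # air mixing ratio (ship data or Q-W relationship), kg kg^-1

lhf = rho * Lv * Ce * U * (qs - qa)
print(f"latent heat flux = {lhf:.0f} W m^-2")
```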
Language-Agnostic Reproducible Data Analysis Using Literate Programming.
Vassilev, Boris; Louhimo, Riku; Ikonen, Elina; Hautaniemi, Sampsa
2016-01-01
A modern biomedical research project can easily contain hundreds of analysis steps and lack of reproducibility of the analyses has been recognized as a severe issue. While thorough documentation enables reproducibility, the number of analysis programs used can be so large that in reality reproducibility cannot be easily achieved. Literate programming is an approach to present computer programs to human readers. The code is rearranged to follow the logic of the program, and to explain that logic in a natural language. The code executed by the computer is extracted from the literate source code. As such, literate programming is an ideal formalism for systematizing analysis steps in biomedical research. We have developed the reproducible computing tool Lir (literate, reproducible computing) that allows a tool-agnostic approach to biomedical data analysis. We demonstrate the utility of Lir by applying it to a case study. Our aim was to investigate the role of endosomal trafficking regulators to the progression of breast cancer. In this analysis, a variety of tools were combined to interpret the available data: a relational database, standard command-line tools, and a statistical computing environment. The analysis revealed that the lipid transport related genes LAPTM4B and NDRG1 are coamplified in breast cancer patients, and identified genes potentially cooperating with LAPTM4B in breast cancer progression. Our case study demonstrates that with Lir, an array of tools can be combined in the same data analysis to improve efficiency, reproducibility, and ease of understanding. Lir is an open-source software available at github.com/borisvassilev/lir.
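A minimal sketch of the "tangle" step at the heart of literate programming, using a generic chunk delimiter rather than Lir's actual file format; the extracted calls (load_samples, coamplification, report) are hypothetical names used only to make the example concrete, so the tangled code is printed rather than executed.

```python
import re

literate_source = """\
We first load the measurements and keep the primary tumours only.

<<code>>
samples = [s for s in load_samples("cohort.tsv") if s.primary]
<<end>>

Then we test LAPTM4B/NDRG1 co-amplification in those samples.

<<code>>
report(coamplification(samples, "LAPTM4B", "NDRG1"))
<<end>>
"""

# Extract the executable chunks in document order ("tangling").
chunks = re.findall(r"<<code>>\n(.*?)<<end>>", literate_source, flags=re.DOTALL)
tangled = "\n".join(chunks)
print(tangled)   # this text is what would be handed to the interpreter
```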
A direct method for nonlinear ill-posed problems
NASA Astrophysics Data System (ADS)
Lakhal, A.
2018-02-01
We propose a direct method for solving nonlinear ill-posed problems in Banach-spaces. The method is based on a stable inversion formula we explicitly compute by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.
A data mining method to facilitate SAR transfer.
Wassermann, Anne Mai; Bajorath, Jürgen
2011-08-22
A challenging practical problem in medicinal chemistry is the transfer of SAR information from one chemical series to another. Currently, there are no computational methods available to rationalize or support this process. Herein, we present a data mining approach that enables the identification of alternative analog series with different core structures, corresponding substitution patterns, and comparable potency progression. Scaffolds can be exchanged between these series and new analogs suggested that incorporate preferred R-groups. The methodology can be applied to search for alternative analog series if one series is known or, alternatively, to systematically assess SAR transfer potential in compound databases.
An insight into cyanobacterial genomics--a perspective.
Lakshmi, Palaniswamy Thanga Velan
2007-05-20
At the turn of the millennium, cyanobacteria deserve to be reviewed in order to understand their past, present and future. The advent of post-genomic research, which encompasses functional genomics, structural genomics, transcriptomics, pharmacogenomics, proteomics and metabolomics, allows a system-wide approach to biological studies. Thus, by exploiting genomic and associated protein information through computational analyses, the fledgling information generated by biotechnological analyses could be extrapolated to fill the lacuna of scarce information on cyanobacteria; to that end, this paper highlights the perspectives available and encourages researchers to concentrate on the field of cyanobacterial informatics.
NASA Astrophysics Data System (ADS)
Siontorou, Christina G.
2012-12-01
Herbal products have gained increasing popularity in the last decades, and are now broadly used to treat illness and improve health. Notwithstanding public opinion, both safety and efficacy are major sources of dispute within the scientific community, mainly due to the lack (or scarcity, or scattered nature) of conclusive data linking a herbal constituent to pharmacological action in vivo in a way that shows benefit overriding risk. This paper presents a methodological framework for addressing natural medicine in a systematic and holistic way, with a view to providing medicinal products based on interactive chemical/herbal ingredients.
Tensorial Minkowski functionals of triply periodic minimal surfaces
Mickel, Walter; Schröder-Turk, Gerd E.; Mecke, Klaus
2012-01-01
A fundamental understanding of the formation and properties of a complex spatial structure relies on robust quantitative tools to characterize morphology. A systematic approach to the characterization of average properties of anisotropic complex interfacial geometries is provided by integral geometry which furnishes a family of morphological descriptors known as tensorial Minkowski functionals. These functionals are curvature-weighted integrals of tensor products of position vectors and surface normal vectors over the interfacial surface. We here demonstrate their use by application to non-cubic triply periodic minimal surface model geometries, whose Weierstrass parametrizations allow for accurate numerical computation of the Minkowski tensors. PMID:24098847
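A minimal sketch of one of the simplest such tensors, the area-weighted surface integral of the outer product of unit normals, evaluated here on a toy triangulated box rather than a Weierstrass-parametrized minimal surface; eigenvalue ratios of the resulting 3x3 matrix quantify interfacial anisotropy.

```python
import numpy as np

def normal_area(tri):
    """Unit normal and area of one triangle given as a 3x3 array of vertices."""
    v1, v2 = tri[1] - tri[0], tri[2] - tri[0]
    c = np.cross(v1, v2)
    area = 0.5 * np.linalg.norm(c)
    return c / np.linalg.norm(c), area

def W02(triangles):
    """Surface integral of the normal-normal outer product: sum over triangles of A * n n^T."""
    W = np.zeros((3, 3))
    for tri in triangles:
        n, a = normal_area(np.asarray(tri, dtype=float))
        W += a * np.outer(n, n)
    return W

def box_triangles(lx, ly, lz):
    """Triangulated closed surface of an axis-aligned box (toy geometry)."""
    p = lambda x, y, z: np.array([x * lx, y * ly, z * lz])
    quads = [  # the six faces, each as four corners
        (p(0, 0, 0), p(1, 0, 0), p(1, 1, 0), p(0, 1, 0)),
        (p(0, 0, 1), p(1, 0, 1), p(1, 1, 1), p(0, 1, 1)),
        (p(0, 0, 0), p(1, 0, 0), p(1, 0, 1), p(0, 0, 1)),
        (p(0, 1, 0), p(1, 1, 0), p(1, 1, 1), p(0, 1, 1)),
        (p(0, 0, 0), p(0, 1, 0), p(0, 1, 1), p(0, 0, 1)),
        (p(1, 0, 0), p(1, 1, 0), p(1, 1, 1), p(1, 0, 1)),
    ]
    return [(a, b, c) for a, b, c, d in quads] + [(a, c, d) for a, b, c, d in quads]

W = W02(box_triangles(1.0, 1.0, 2.0))
print("eigenvalues (anisotropy measure):", np.linalg.eigvalsh(W))
```

For an elongated box the smallest and largest eigenvalues differ, whereas for a cube they coincide, which is exactly the kind of anisotropy signal these functionals provide for interfacial geometries.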
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hattrick-Simpers, Jason R.; Gregoire, John M.; Kusne, A. Gilad
With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue remains transforming structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. Here, we review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have in the generation of phase diagrams and beyond.
High-Order Entropy Stable Formulations for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Fisher, Travis C.
2013-01-01
A systematic approach is presented for developing entropy stable (SS) formulations of any order for the Navier-Stokes equations. These SS formulations discretely conserve mass, momentum, energy and satisfy a mathematical entropy inequality. They are valid for smooth as well as discontinuous flows provided sufficient dissipation is added at shocks and discontinuities. Entropy stable formulations exist for all diagonal norm, summation-by-parts (SBP) operators, including all centered finite-difference operators, Legendre collocation finite-element operators, and certain finite-volume operators. Examples are presented using various entropy stable formulations that demonstrate the current state-of-the-art of these schemes.
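A minimal numerical check of the summation-by-parts structure that these formulations build on, using the classical second-order diagonal-norm operator; this illustrates only the property D = H^{-1} Q with Q + Q^T = B = diag(-1, 0, ..., 0, 1), not an entropy stable Navier-Stokes discretization.

```python
import numpy as np

def sbp_2nd_order(n, h):
    """Second-order accurate diagonal-norm SBP first-derivative operator."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0                 # boundary-modified norm matrix
    D = np.zeros((n, n))
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h          # one-sided boundary closures
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
    for i in range(1, n - 1):                     # central interior stencil
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    return H, D

n, h = 11, 0.1
H, D = sbp_2nd_order(n, h)
Q = H @ D
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
print("SBP property Q + Q^T = B holds:", np.allclose(Q + Q.T, B))

# Discrete integration by parts: u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0.
x = np.linspace(0.0, 1.0, n)
u, v = np.sin(x), np.cos(x)
lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
print("discrete IBP:", lhs, " vs boundary term:", u[-1] * v[-1] - u[0] * v[0])
```

The discrete mimicry of integration by parts is what allows entropy estimates of the continuous equations to carry over to the scheme, which is the mechanism behind the entropy inequality mentioned above.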
Quantum mechanics/coarse-grained molecular mechanics (QM/CG-MM)
NASA Astrophysics Data System (ADS)
Sinitskiy, Anton V.; Voth, Gregory A.
2018-01-01
Numerous molecular systems, including solutions, proteins, and composite materials, can be modeled using mixed-resolution representations, of which the quantum mechanics/molecular mechanics (QM/MM) approach has become the most widely used. However, the QM/MM approach often faces a number of challenges, including the high cost of repetitive QM computations, the slow sampling even for the MM part in those cases where a system under investigation has a complex dynamics, and a difficulty in providing a simple, qualitative interpretation of numerical results in terms of the influence of the molecular environment upon the active QM region. In this paper, we address these issues by combining QM/MM modeling with the methodology of "bottom-up" coarse-graining (CG) to provide the theoretical basis for a systematic quantum-mechanical/coarse-grained molecular mechanics (QM/CG-MM) mixed resolution approach. A derivation of the method is presented based on a combination of statistical mechanics and quantum mechanics, leading to an equation for the effective Hamiltonian of the QM part, a central concept in the QM/CG-MM theory. A detailed analysis of different contributions to the effective Hamiltonian from electrostatic, induction, dispersion, and exchange interactions between the QM part and the surroundings is provided, serving as a foundation for a potential hierarchy of QM/CG-MM methods varying in their accuracy and computational cost. A relationship of the QM/CG-MM methodology to other mixed resolution approaches is also discussed.
Gravitational decoupling and the Picard-Lefschetz approach
NASA Astrophysics Data System (ADS)
Brown, Jon; Cole, Alex; Shiu, Gary; Cottrell, William
2018-01-01
In this work, we consider tunneling between nonmetastable states in gravitational theories. Such processes arise in various contexts, e.g., in inflationary scenarios where the inflaton potential involves multiple fields or multiple branches. They are also relevant for bubble wall nucleation in some cosmological settings. However, we show that the transition amplitudes computed using the Euclidean method generally do not approach the corresponding field theory limit as Mp→∞ . This implies that in the Euclidean framework, there is no systematic expansion in powers of GN for such processes. Such considerations also carry over directly to no-boundary scenarios involving Hawking-Turok instantons. In this note, we illustrate this failure of decoupling in the Euclidean approach with a simple model of axion monodromy and then argue that the situation can be remedied with a Lorentzian prescription such as the Picard-Lefschetz theory. As a proof of concept, we illustrate with a simple model how tunneling transition amplitudes can be calculated using the Picard-Lefschetz approach.
Hybrid Density Functional Study of the Local Structures and Energy Levels of CaAl2O4:Ce3+.
Lou, Bibo; Jing, Weiguo; Lou, Liren; Zhang, Yongfan; Yin, Min; Duan, Chang-Kui
2018-05-03
First-principles calculations were carried out for the electronic structures of Ce3+ in calcium aluminate phosphors, CaAl2O4, and their effects on luminescence properties. Hybrid density functional approaches were used to overcome the well-known underestimation of band gaps of conventional density functional approaches and to calculate the energy levels of Ce3+ ions more accurately. The obtained 4f-5d excitation and emission energies show good consistency with measured values. A detailed energy diagram of all three sites is obtained, which explains qualitatively all of the luminescent phenomena. With the results of energy levels calculated by combining the hybrid functional of Heyd, Scuseria, and Ernzerhof (HSE06) and the constraint occupancy approach, we are able to construct a configurational coordinate diagram to analyze the processes of capture of a hole or an electron and luminescence. This approach can be applied for systematic high-throughput calculations in predicting Ce3+ activated luminescent materials with a moderate computing requirement.
NASA Astrophysics Data System (ADS)
Pierson, Kyle D.; Hochhalter, Jacob D.; Spear, Ashley D.
2018-05-01
Systematic correlation analysis was performed between simulated micromechanical fields in an uncracked polycrystal and the known path of an eventual fatigue-crack surface based on experimental observation. Concurrent multiscale finite-element simulation of cyclic loading was performed using a high-fidelity representation of grain structure obtained from near-field high-energy x-ray diffraction microscopy measurements. An algorithm was developed to parameterize and systematically correlate the three-dimensional (3D) micromechanical fields from simulation with the 3D fatigue-failure surface from experiment. For comparison, correlation coefficients were also computed between the micromechanical fields and hypothetical, alternative surfaces. The correlation of the fields with hypothetical surfaces was found to be consistently weaker than that with the known crack surface, suggesting that the micromechanical fields of the cyclically loaded, uncracked microstructure might provide some degree of predictiveness for microstructurally small fatigue-crack paths, although the extent of such predictiveness remains to be tested. In general, gradients of the field variables exhibit stronger correlations with crack path than the field variables themselves. Results from the data-driven approach implemented here can be leveraged in future model development for prediction of fatigue-failure surfaces (for example, to facilitate univariate feature selection required by convolution-based models).
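As a rough illustration of the correlation step, the sketch below computes a voxel-wise correlation coefficient between a scalar micromechanical field (and its gradient magnitude) and a binary crack-surface mask on a shared 3D grid; both arrays are random placeholders rather than simulation or experimental data.

```python
import numpy as np

rng = np.random.default_rng(0)
field = rng.normal(size=(32, 32, 32))        # e.g. accumulated plastic strain (placeholder)
crack_mask = np.zeros_like(field, dtype=bool)
crack_mask[:, :, 16] = True                  # hypothetical planar crack surface

def point_biserial(values, mask):
    """Correlation between a continuous field and a binary surface indicator."""
    return np.corrcoef(values.ravel().astype(float), mask.ravel().astype(float))[0, 1]

# gradients of the field variables can be compared against the raw field itself
grad_mag = np.linalg.norm(np.gradient(field), axis=0)

print("field vs crack    :", point_biserial(field, crack_mask))
print("gradient vs crack :", point_biserial(grad_mag, crack_mask))
```

In the study, the strength of such coefficients is judged relative to those obtained for hypothetical alternative surfaces.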
MaPLE: A MapReduce Pipeline for Lattice-based Evaluation and Its Application to SNOMED CT
Zhang, Guo-Qiang; Zhu, Wei; Sun, Mengmeng; Tao, Shiqiang; Bodenreider, Olivier; Cui, Licong
2015-01-01
Non-lattice fragments are often indicative of structural anomalies in ontological systems and, as such, represent possible areas of focus for subsequent quality assurance work. However, extracting the non-lattice fragments in large ontological systems is computationally expensive if not prohibitive, using a traditional sequential approach. In this paper we present a general MapReduce pipeline, called MaPLE (MapReduce Pipeline for Lattice-based Evaluation), for extracting non-lattice fragments in large partially ordered sets and demonstrate its applicability in ontology quality assurance. Using MaPLE in a 30-node Hadoop local cloud, we systematically extracted non-lattice fragments in 8 SNOMED CT versions from 2009 to 2014 (each containing over 300k concepts), with an average total computing time of less than 3 hours per version. With dramatically reduced time, MaPLE makes it feasible not only to perform exhaustive structural analysis of large ontological hierarchies, but also to systematically track structural changes between versions. Our change analysis showed that the average change rates on the non-lattice pairs are up to 38.6 times higher than the change rates of the background structure (concept nodes). This demonstrates that fragments around non-lattice pairs exhibit significantly higher rates of change in the process of ontological evolution. PMID:25705725
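The lattice test itself is compact: a concept pair is non-lattice if its set of shared descendants has more than one maximal element (the dual test on shared ancestors is analogous). Below is a small sequential sketch with networkx, not the MapReduce pipeline, assuming the hierarchy is given as a DAG with an edge u -> v meaning v is a subconcept of u.

```python
import networkx as nx
from itertools import combinations

def maximal_common_descendants(dag, a, b):
    """Maximal elements of the shared (reflexive) descendants of a and b."""
    common = (nx.descendants(dag, a) | {a}) & (nx.descendants(dag, b) | {b})
    # a common descendant is maximal if none of its ancestors is also common
    return {c for c in common if not (nx.ancestors(dag, c) & common)}

def non_lattice_pairs(dag):
    """Pairs of concepts whose shared descendants have more than one maximal element."""
    for a, b in combinations(dag.nodes, 2):
        if len(maximal_common_descendants(dag, a, b)) > 1:
            yield a, b

# toy fragment: A and B both subsume C and D, but C and D are unrelated -> non-lattice pair
G = nx.DiGraph([("root", "A"), ("root", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D")])
print(list(non_lattice_pairs(G)))   # [('A', 'B')]
```

The quadratic loop over pairs is exactly what becomes prohibitive at SNOMED CT scale and what MaPLE distributes across a Hadoop cluster.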
Suleimanov, Yury V; Green, William H
2015-09-08
We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using double- and single-ended transition-state optimization algorithms in cooperation: the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of importance in combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect, without any human intervention, not only "known" reaction pathways, manually identified in previous studies, but also new, previously "unknown" reaction pathways that involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.
Peukert, Peter; Sieslack, Sonja; Barth, Gottfried; Batra, Anil
2010-07-01
Excessive and addictive internet use and computer game playing are reported as an increasing problem in outpatient care. The aim of this paper is to give an overview of the current scientific discussion of the overuse of, and addiction to, the internet and computer games. Pubmed was used for a systematic literature search considering original papers and review articles dealing with internet/computer game addiction. Recent epidemiological data from Germany suggest that 1.5-3.5 % of adolescent computer and internet users show signs of overuse or addictive use of computer and video games. Moreover, there is evidence that the disorder is associated with higher rates of depression and anxiety, as well as lower achievement, e.g. at school. Although the nosological assignment still remains unclear, there is some evidence from neurobiological data that the disorder can be conceptualized as a behavioural addiction. CBT techniques have been proposed as a treatment strategy, but there is still a lack of controlled clinical trials concerning their efficacy. Since addicted persons often show little motivation for behavioural change, we consider it a promising approach to treat and train their relatives with the aim of increasing the motivation of the addicted person for behavioural change.
Thermodynamics and proton activities of protic ionic liquids with quantum cluster equilibrium theory
NASA Astrophysics Data System (ADS)
Ingenmey, Johannes; von Domaros, Michael; Perlt, Eva; Verevkin, Sergey P.; Kirchner, Barbara
2018-05-01
We applied the binary Quantum Cluster Equilibrium (bQCE) method to a number of alkylammonium-based protic ionic liquids in order to predict boiling points, vaporization enthalpies, and proton activities. The theory combines statistical thermodynamics of van-der-Waals-type clusters with ab initio quantum chemistry and yields the partition functions (and associated thermodynamic potentials) of binary mixtures over a wide range of thermodynamic phase points. Unlike conventional cluster approaches that are limited to the prediction of thermodynamic properties, dissociation reactions can be effortlessly included into the bQCE formalism, giving access to ionicities, as well. The method is open to quantum chemical methods at any level of theory, but combination with low-cost composite density functional theory methods and the proposed systematic approach to generate cluster sets provides a computationally inexpensive and mostly parameter-free way to predict such properties at good-to-excellent accuracy. Boiling points can be predicted within an accuracy of 50 K, reaching excellent accuracy for ethylammonium nitrate. Vaporization enthalpies are predicted within an accuracy of 20 kJ mol⁻¹ and can be systematically interpreted on a molecular level. We present the first theoretical approach to predict proton activities in protic ionic liquids, with results fitting well into the experimentally observed correlation. Furthermore, enthalpies of vaporization were measured experimentally for some alkylammonium nitrates and an excellent linear correlation with vaporization enthalpies of their respective parent amines is observed.
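Schematically, a QCE-type treatment assigns the binary mixture a cluster-based partition function of the form below (a sketch of the general structure, not the bQCE working equations), where cluster i contains i_a monomers of species a and i_b monomers of species b:

```latex
Q(\{N_i\},V,T) \;=\; \prod_i \frac{q_i(V,T)^{\,N_i}}{N_i!},
\qquad
\sum_i i_a N_i = N_a, \qquad \sum_i i_b N_i = N_b ,
```

with the cluster populations N_i fixed by minimizing the associated free energy under the two monomer-conservation constraints; each q_i collects the electronic, translational, rotational and vibrational contributions of cluster i computed quantum chemically.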
Causal learning with local computations.
Fernbach, Philip M; Sloman, Steven A
2009-05-01
The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require relatively small amounts of data, and need not respect normative prescriptions as inferences that are principled locally may violate those principles when combined. Over a series of 3 experiments, the authors found (a) systematic inferences from small amounts of data; (b) systematic inference of extraneous causal links; (c) influence of data presentation order on inferences; and (d) error reduction through pretraining. Without pretraining, a model based on local computations fitted data better than a Bayesian structural inference model. The data suggest that local computations serve as a heuristic for learning causal structure. Copyright 2009 APA, all rights reserved.
Up on the Roof: A Systematic Approach to Roof Maintenance.
ERIC Educational Resources Information Center
Burd, William
1979-01-01
A systematic roof maintenance program is characterized by carefully prepared long- and short-range plans. An essential feature of a systematic approach to roof maintenance is the stress on preventive measures rather than the patching of leaks. (Author)
Bertalan, Tom; Wu, Yan; Laing, Carlo; Gear, C. William; Kevrekidis, Ioannis G.
2017-01-01
Finding accurate reduced descriptions for large, complex, dynamically evolving networks is a crucial enabler to their simulation, analysis, and ultimately design. Here, we propose and illustrate a systematic and powerful approach to obtaining good collective coarse-grained observables—variables successfully summarizing the detailed state of such networks. Finding such variables can naturally lead to successful reduced dynamic models for the networks. The main premise enabling our approach is the assumption that the behavior of a node in the network depends (after a short initial transient) on the node identity: a set of descriptors that quantify the node properties, whether intrinsic (e.g., parameters in the node evolution equations) or structural (imparted to the node by its connectivity in the particular network structure). The approach creates a natural link with modeling and “computational enabling technology” developed in the context of Uncertainty Quantification. In our case, however, we will not focus on ensembles of different realizations of a problem, each with parameters randomly selected from a distribution. We will instead study many coupled heterogeneous units, each characterized by randomly assigned (heterogeneous) parameter value(s). One could then coin the term Heterogeneity Quantification for this approach, which we illustrate through a model dynamic network consisting of coupled oscillators with one intrinsic heterogeneity (oscillator individual frequency) and one structural heterogeneity (oscillator degree in the undirected network). The computational implementation of the approach, its shortcomings and possible extensions are also discussed. PMID:28659781
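A minimal sketch of such a heterogeneous dynamic network, using Kuramoto-type phase oscillators as a stand-in for the paper's model: one intrinsic heterogeneity (natural frequency) and one structural heterogeneity (degree in a random undirected graph). All parameters and the network itself are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
omega = rng.normal(0.0, 0.5, N)            # intrinsic heterogeneity: natural frequencies
A = (rng.random((N, N)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T             # undirected random graph (structural heterogeneity)
degree = A.sum(axis=1)

K, dt, steps = 2.0, 0.01, 5000
theta = rng.uniform(0.0, 2.0 * np.pi, N)
for _ in range(steps):                     # explicit Euler integration of the phase dynamics
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)

r = np.abs(np.exp(1j * theta).mean())      # global synchrony order parameter
print("mean degree:", degree.mean(), " order parameter:", round(r, 3))
```

Coarse observables in the spirit of the paper would then be expressed as functions of the node identity (frequency, degree) rather than of the node index.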
ERIC Educational Resources Information Center
Boody, Charles G., Ed.
1986-01-01
Six articles on music and computing address development of computer-based music technology, computer assisted instruction (CAI) in ear training and music fundamentals, a machine-independent data structure for musical pitch relationship representation, touch tablet input device in a melodic dictation CAI game, and systematic evaluation strategies…
Flight Dynamics Mission Support and Quality Assurance Process
NASA Technical Reports Server (NTRS)
Oh, InHwan
1996-01-01
This paper summarizes the Computer Sciences Corporation Flight Dynamics Operation (FDO) quality assurance approach to supporting the National Aeronautics and Space Administration Goddard Space Flight Center Flight Dynamics Support Branch. Historically, a strong need has existed for developing systematic quality assurance using methods that account for the unique nature and environment of satellite Flight Dynamics mission support. Over the past few years FDO has developed and implemented proactive quality assurance processes applied to each of the six phases of the Flight Dynamics mission support life cycle: systems and operations concept, system requirements and specifications, software development support, operations planning and training, launch support, and on-orbit mission operations. Rather than performing quality assurance as a final step after work is completed, quality assurance has been built in as work progresses in the form of process assurance. Process assurance activities occur throughout the Flight Dynamics mission support life cycle. The FDO Product Assurance Office developed process checklists for prephase process reviews, mission team orientations, in-progress reviews, and end-of-phase audits. This paper will outline the evolving history of FDO quality assurance approaches, discuss the tailoring of Computer Sciences Corporation's process assurance cycle procedures, describe some of the quality assurance approaches that have been or are being developed, and present some of the successful results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Tanmoy; Shell, M. Scott, E-mail: shell@engineering.ucsb.edu
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
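In the embedded-atom spirit described above, the coarse-grained energy with a local-density correction can be written schematically as follows (a sketch of the general functional form, not the authors' specific parameterization):

```latex
U_{\mathrm{CG}} \;=\; \sum_{i<j} u_{\mathrm{pair}}(r_{ij}) \;+\; \sum_i f(\rho_i),
\qquad
\rho_i \;=\; \sum_{j \neq i} \varphi(r_{ij}),
```

where u_pair is the usual pair potential, φ is a short-ranged weighting function that defines the local density ρ_i of selected species around site i, and f is an embedding-like function of that density; in the approach above, these ingredients are parameterized within the relative entropy framework against all-atom reference simulations.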
Identification of sequence-structure RNA binding motifs for SELEX-derived aptamers.
Hoinka, Jan; Zotenko, Elena; Friedman, Adam; Sauna, Zuben E; Przytycka, Teresa M
2012-06-15
Systematic Evolution of Ligands by EXponential Enrichment (SELEX) represents a state-of-the-art technology to isolate single-stranded (ribo)nucleic acid fragments, named aptamers, which bind to a molecule (or molecules) of interest via specific structural regions induced by their sequence-dependent fold. This powerful method has applications in designing protein inhibitors, molecular detection systems, therapeutic drugs and antibody replacements, among others. However, full understanding and consequently optimal utilization of the process has lagged behind its wide application due to the lack of dedicated computational approaches. At the same time, the combination of SELEX with novel sequencing technologies is beginning to provide the data that will allow the examination of a variety of properties of the selection process. To close this gap we developed Aptamotif, a computational method for the identification of sequence-structure motifs in SELEX-derived aptamers. To increase the chances of identifying functional motifs, Aptamotif uses an ensemble-based approach. We validated the method using two published aptamer datasets containing experimentally determined motifs of increasing complexity. We were able to recreate the authors' findings to a high degree, thus proving the capability of our approach to identify binding motifs in SELEX data. Additionally, using our new experimental dataset, we illustrate the application of Aptamotif to elucidate several properties of the selection process.
Benassi, Enrico
2017-01-15
A number of programs and tools that simulate 1H and 13C nuclear magnetic resonance (NMR) chemical shifts using empirical approaches are available. These tools are user-friendly, but they provide a very rough (and sometimes misleading) estimation of the NMR properties, especially for complex systems. Rigorous and reliable ways to predict and interpret NMR properties of simple and complex systems are available in many popular computational program packages. Nevertheless, experimentalists keep relying on these "unreliable" tools in their daily work because, to have a sufficiently high accuracy, these rigorous quantum mechanical methods need high levels of theory. An alternative, efficient, semi-empirical approach has been proposed by Bally, Rablen, Tantillo, and coworkers. This idea consists of creating linear calibration models based on the application of different combinations of functionals and basis sets. Following this approach, the predictive capability of a wider range of popular functionals was systematically investigated and tested. The NMR chemical shifts were computed in the solvated phase at the density functional theory level, using 30 different functionals coupled with three different triple-ζ basis sets. © 2016 Wiley Periodicals, Inc.
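The linear-calibration idea reduces to a short regression: fit computed isotropic shieldings against experimental shifts for a training set at a given functional/basis-set/solvent combination, then apply the fit to new computed shieldings. The numbers below are hypothetical placeholders, not values from the study.

```python
import numpy as np

# hypothetical training data: computed isotropic shieldings (ppm) and experimental shifts (ppm)
sigma_calc = np.array([31.8, 29.5, 25.1, 23.9, 18.2, 12.4])
delta_exp  = np.array([0.9,   3.1,  7.3,  8.4, 14.1, 19.8])

# least-squares fit: delta_exp ~= slope * sigma_calc + intercept
slope, intercept = np.polyfit(sigma_calc, delta_exp, 1)

def predict_shift(sigma):
    """Empirically scaled chemical shift from a computed shielding."""
    return slope * sigma + intercept

print("slope, intercept:", round(slope, 3), round(intercept, 2))
print("predicted shift for sigma = 21.0 ppm:", round(predict_shift(21.0), 2))
```

Separate calibrations are normally built per nucleus (1H, 13C) and per level of theory.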
Xu, Min; Chai, Xiaoqi; Muthakana, Hariank; Liang, Xiaodan; Yang, Ge; Zeev-Ben-Mordehai, Tzviya; Xing, Eric P
2017-07-15
Cellular Electron CryoTomography (CECT) enables 3D visualization of cellular organization at near-native state and in sub-molecular resolution, making it a powerful tool for analyzing structures of macromolecular complexes and their spatial organizations inside single cells. However, the high degree of structural complexity together with practical imaging limitations makes the systematic de novo discovery of structures within cells challenging. It would likely require averaging and classifying millions of subtomograms potentially containing hundreds of highly heterogeneous structural classes. Although it is no longer difficult to acquire CECT data containing such numbers of subtomograms due to advances in data acquisition automation, existing computational approaches have very limited scalability or discrimination ability, making them incapable of processing such amounts of data. To complement existing approaches, in this article we propose a new approach for subdividing subtomograms into smaller but relatively homogeneous subsets. The structures in these subsets can then be separately recovered using existing computation-intensive methods. Our approach is based on supervised structural feature extraction using deep learning, in combination with unsupervised clustering and reference-free classification. Our experiments show that, compared with existing unsupervised rotation-invariant feature and pose-normalization based approaches, our new approach achieves significant improvements in both discrimination ability and scalability. More importantly, our new approach is able to discover new structural classes and recover structures that do not exist in training data. Source code freely available at http://www.cs.cmu.edu/∼mxu1/software . mxu1@cs.cmu.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Computers in the Schools: State/Provincial Implications.
ERIC Educational Resources Information Center
Thiessen, S. J.
The Alberta government has attempted to systematically address educational computing issues through programs of the provincial (K-12) education department (Alberta Education), which have included the development of computer literacy curricula for elementary, junior, and senior high schools; the Computer Technology Project (CTP); evaluation studies;…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.
2008-09-01
Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high consequence decision-making.
Usmanova, Dinara R; Bogatyreva, Natalya S; Ariño Bernad, Joan; Eremina, Aleksandra A; Gorshkova, Anastasiya A; Kanevskiy, German M; Lonishin, Lyubov R; Meister, Alexander V; Yakupova, Alisa G; Kondrashov, Fyodor A; Ivankov, Dmitry N
2018-05-02
Computational prediction of the effect of mutations on protein stability is used by researchers in many fields. The utility of the prediction methods is affected by their accuracy and bias. Bias, a systematic shift of the predicted change of stability, has been noted as an issue for several methods, but has not been investigated systematically. The presence of bias may lead to misleading results, especially when exploring the effects of combinations of different mutations. Here we use a protocol to measure the bias as a function of the number of introduced mutations. It is based on a self-consistency test of the reciprocity of the effect of a mutation. An advantage of this approach is that it relies solely on crystal structures without experimentally measured stability values. We applied the protocol to four popular algorithms predicting change of protein stability upon mutation, FoldX, Eris, Rosetta, and I-Mutant, and found an inherent bias. For one program, FoldX, we managed to substantially reduce the bias using additional relaxation by Modeller. Authors using algorithms for predicting effects of mutations should be aware of the bias described here. ivankov13@gmail.com. Supplementary data are available at Bioinformatics online.
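The reciprocity check behind the bias measurement can be sketched in a few lines: for an unbiased predictor, the ΔΔG of a mutation computed on the wild-type structure and the ΔΔG of the reverse mutation computed on the mutant structure should sum to zero, so the mean of these sums estimates the per-mutation bias. The values below are hypothetical placeholders, not output of any of the four programs.

```python
import numpy as np

# hypothetical predictions (kcal/mol): forward mutations scored on the wild-type structure,
# reverse mutations scored on the corresponding mutant structure
ddG_forward = np.array([1.2, -0.4, 2.1, 0.3, 1.8])
ddG_reverse = np.array([-0.6, 0.9, -1.1, 0.4, -0.9])

residual = ddG_forward + ddG_reverse   # exactly zero for a perfectly antisymmetric predictor
bias = residual.mean() / 2.0           # additive shift carried by each single prediction
print("per-mutation bias estimate:", round(bias, 3), "kcal/mol")
# under an additive-shift model the error accumulates roughly linearly with the number
# of stacked mutations, which is why bias matters for combined-mutation predictions
```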
Equation-free analysis of agent-based models and systematic parameter determination
NASA Astrophysics Data System (ADS)
Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.
2016-12-01
Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by: (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes potentially leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic, can be applied to any 'black-box' simulator, and determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence-constraint with corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order-of-magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches. We demonstrate our method with application to several ABM models, revealing parameter dependence, bifurcation and stability analysis of these complex systems and giving a deep understanding of their dynamical behaviour in a way that is not otherwise easily obtainable. In each case we demonstrate our systematic parameter determination stage for configuring the system-specific EF parameters.
Computer-tailored dietary behaviour change interventions: a systematic review
Neville, Leonie M.; O'Hara, Blythe; Milat, Andrew J.
2009-01-01
Improving dietary behaviours such as increasing fruit and vegetable consumption and reducing saturated fat intake are important in the promotion of better health. Computer tailoring has shown promise as a strategy to promote such behaviours. A narrative systematic review was conducted to describe the available evidence on ‘second’-generation computer-tailored primary prevention interventions for dietary behaviour change and to determine their effectiveness and key characteristics of success. Systematic literature searches were conducted through five databases: Medline, Embase, PsycINFO, CINAHL and All EBM Reviews and by examining the reference lists of relevant articles to identify studies published in English from January 1996 to 2008. Randomized controlled trials or quasi-experimental designs with pre-test and post-test behavioural outcome data were included. A total of 13 articles were reviewed, describing the evaluation of 12 interventions, seven of which found significant positive effects of the computer-tailored interventions for dietary behaviour outcomes, one also for weight reduction outcomes. Although the evidence of short-term efficacy for computer-tailored dietary behaviour change interventions is fairly strong, the uncertainty lies in whether the reported effects are generalizable and sustained long term. Further research is required to address these limitations of the evidence. PMID:19286893
Fukunaga, Tsukasa; Iwasaki, Wataru
2017-01-19
With rapid advances in genome sequencing and editing technologies, systematic and quantitative analysis of animal behavior is expected to be another key to facilitating data-driven behavioral genetics. The nematode Caenorhabditis elegans is a model organism in this field. Several video-tracking systems are available for automatically recording behavioral data for the nematode, but computational methods for analyzing these data are still under development. In this study, we applied the Gaussian mixture model-based binning method to time-series postural data for 322 C. elegans strains. We revealed that the occurrence patterns of the postural states and the transition patterns among these states have a relationship as expected, and such a relationship must be taken into account to identify strains with atypical behaviors that are different from those of wild type. Based on this observation, we identified several strains that exhibit atypical transition patterns that cannot be fully explained by their occurrence patterns of postural states. Surprisingly, we found that two simple factors, overall acceleration of postural movement and elimination of inactivity periods, explained the behavioral characteristics of strains with very atypical transition patterns; therefore, computational analysis of animal behavior must be accompanied by evaluation of the effects of these simple factors. Finally, we found that the npr-1 and npr-3 mutants have similar behavioral patterns that were not predictable by sequence homology, proving that our data-driven approach can reveal the functions of genes that have not yet been characterized. We propose that elimination of inactivity periods and overall acceleration of postural change speed can explain behavioral phenotypes of strains with very atypical postural transition patterns. Our methods and results constitute guidelines for effectively finding strains that show "truly" interesting behaviors and systematically uncovering novel gene functions by bioimage-informatic approaches.
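A hedged sketch of GMM-based binning of postural time series with scikit-learn, assuming each video frame has already been reduced to a small postural feature vector (e.g. eigenworm amplitudes); the data and the number of states here are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
frames = rng.normal(size=(5000, 4))          # placeholder postural features per frame

K = 8                                        # number of postural states (a modeling choice)
gmm = GaussianMixture(n_components=K, covariance_type="full", random_state=0).fit(frames)
states = gmm.predict(frames)                 # discrete postural state per frame

occupancy = np.bincount(states, minlength=K) / len(states)

transitions = np.zeros((K, K))
for s, t in zip(states[:-1], states[1:]):
    transitions[s, t] += 1
transitions /= np.maximum(transitions.sum(axis=1, keepdims=True), 1)   # row-normalized

print("occupancy:", np.round(occupancy, 3))
```

Comparing strains by state occupancy alone versus by the row-normalized transition matrix is what separates expected from atypical transition patterns in the analysis above.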
Asymptotic modal analysis and statistical energy analysis
NASA Technical Reports Server (NTRS)
Dowell, Earl H.
1992-01-01
Asymptotic Modal Analysis (AMA) is a method which is used to model linear dynamical systems with many participating modes. The AMA method was originally developed to show the relationship between statistical energy analysis (SEA) and classical modal analysis (CMA). In the limit of a large number of modes of a vibrating system, the classical modal analysis result can be shown to be equivalent to the statistical energy analysis result. As the CMA result evolves into the SEA result, a number of systematic assumptions are made. Most of these assumptions are based upon the supposition that the number of modes approaches infinity. It is for this reason that the term 'asymptotic' is used. AMA is the asymptotic result of taking the limit of CMA as the number of modes approaches infinity. AMA refers to any of the intermediate results between CMA and SEA, as well as the SEA result which is derived from CMA. The main advantage of the AMA method is that individual modal characteristics are not required in the model or computations. By contrast, CMA requires that each modal parameter be evaluated at each frequency. In the latter, contributions from each mode are computed and the final answer is obtained by summing over all the modes in the particular band of interest. AMA evaluates modal parameters only at their center frequency and does not sum the individual contributions from each mode in order to obtain a final result. The method is similar to SEA in this respect. However, SEA is only capable of obtaining spatial averages or means, as it is a statistical method. Since AMA is systematically derived from CMA, it can obtain local spatial information as well.
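The passage from the modal sum to its asymptotic form can be indicated schematically (a sketch of the idea only, not the paper's derivation): the classical modal sum for a band-limited mean-square response is replaced, as the mode count N grows large, by N times a band average with modal parameters evaluated at the band-center frequency.

```latex
\langle y^2 \rangle_{\mathrm{CMA}} \;=\; \sum_{n=1}^{N} |H_n(\omega)|^2\, S_n
\qquad\longrightarrow\qquad
\langle y^2 \rangle_{\mathrm{AMA}} \;\approx\; N\,\overline{|H(\omega_c)|^2\, S},
```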
Impact of Overt and Subclinical Hypothyroidism on Exercise Tolerance: A Systematic Review
ERIC Educational Resources Information Center
Lankhaar, Jeannette A. C.; de Vries, Wouter R.; Jansen, Jaap A. C. G.; Zelissen, Pierre M. J.; Backx, Frank J. G.
2014-01-01
Purpose: This systematic review describes the state of the art of the impact of hypothyroidism on exercise tolerance and physical performance capacity in untreated and treated patients with hypothyroidism. Method: A systematic computer-aided search was conducted using biomedical databases. Relevant studies in English, German, and Dutch, published…
Olorisade, Babatunde Kazeem; Brereton, Pearl; Andras, Peter
2017-09-01
Independent validation of published scientific results through study replication is a pre-condition for accepting the validity of such results. In computational research, full replication is often unrealistic for independent results validation; therefore, study reproduction has been justified as the minimum acceptable standard to evaluate the validity of scientific claims. The application of text mining techniques to citation screening in the context of systematic literature reviews is a relatively young and growing computational field with high relevance for software engineering, medical research and other fields. However, there is little work so far on reproduction studies in the field. In this paper, we investigate the reproducibility of studies in this area based on information contained in published articles and we propose reporting guidelines that could improve reproducibility. The study was approached in two ways. Initially we attempted to reproduce results from six studies, which were based on the same raw dataset. Then, based on this experience, we identified steps considered essential to successful reproduction of text mining experiments and characterized them to measure how reproducible a study is given the information provided on these steps. 33 articles were systematically assessed for reproducibility using this approach. Our work revealed that it is currently difficult if not impossible to independently reproduce the results published in any of the studies investigated. The lack of information about the datasets used limits reproducibility of about 80% of the studies assessed. Also, information about the machine learning algorithms is inadequate in about 27% of the papers. On the plus side, the third-party software tools used are mostly free and available. The reproducibility potential of most of the studies can be significantly improved if more attention is paid to information provided on the datasets used, how they were partitioned and utilized, and how any randomization was controlled. We introduce a checklist of information that needs to be provided in order to ensure that a published study can be reproduced. Copyright © 2017 Elsevier Inc. All rights reserved.
Morphological computation of multi-gaited robot locomotion based on free vibration.
Reis, Murat; Yu, Xiaoxiang; Maheshwari, Nandan; Iida, Fumiya
2013-01-01
In recent years, there has been increasing interest in the study of gait patterns in both animals and robots, because it allows us to systematically investigate the underlying mechanisms of energetics, dexterity, and autonomy of adaptive systems. In particular, for morphological computation research, the control of dynamic legged robots and their gait transitions provides additional insights into the guiding principles from a synthetic viewpoint for the emergence of sensible self-organizing behaviors in more-degrees-of-freedom systems. This article presents a novel approach to the study of gait patterns, which makes use of the intrinsic mechanical dynamics of robotic systems. Each of the robots consists of a U-shaped elastic beam and exploits free vibration to generate different locomotion patterns. We developed a simplified physics model of these robots, and through experiments in simulation and real-world robotic platforms, we show three distinctive mechanisms for generating different gait patterns in these robots.
Discrete Roughness Effects on Shuttle Orbiter at Mach 6
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Hamilton, H. Harris, II
2002-01-01
Discrete roughness boundary layer transition results on a Shuttle Orbiter model in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel have been reanalyzed with new boundary layer calculations to provide consistency for comparison to other published results. The experimental results were previously obtained utilizing the phosphor thermography system to monitor the status of the boundary layer via global heat transfer images of the Orbiter windward surface. The size and location of discrete roughness elements were systematically varied along the centerline of the 0.0075-scale model at an angle of attack of 40 deg and the boundary layer response recorded. Various correlative approaches were attempted, with the roughness transition correlations based on edge properties providing the most reliable results. When a consistent computational method is used to compute edge conditions, transition datasets for different configurations at several angles of attack have been shown to collapse to a well-behaved correlation.
The natural mathematics of behavior analysis.
Li, Don; Hautus, Michael J; Elliffe, Douglas
2018-04-19
Models that generate event records have very general scope regarding the dimensions of the target behavior that we measure. From a set of predicted event records, we can generate predictions for any dependent variable that we could compute from the event records of our subjects. In this sense, models that generate event records permit us a freely multivariate analysis. To explore this proposition, we conducted a multivariate examination of Catania's Operant Reserve on single VI schedules in transition using a Markov Chain Monte Carlo scheme for Approximate Bayesian Computation. Although we found systematic deviations between our implementation of Catania's Operant Reserve and our observed data (e.g., mismatches in the shape of the interresponse time distributions), the general approach that we have demonstrated represents an avenue for modelling behavior that transcends the typical constraints of algebraic models. © 2018 Society for the Experimental Analysis of Behavior.
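The simulation-based fitting loop can be illustrated with plain ABC rejection (the study uses an MCMC scheme for ABC; the simulator and summary statistics below are hypothetical stand-ins, not Catania's Operant Reserve itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_event_records(theta, duration=600.0):
    """Hypothetical stand-in: return response times from a model with parameter theta."""
    rate = max(theta, 1e-6)
    n = rng.poisson(rate * duration)
    return np.sort(rng.uniform(0.0, duration, n))

def summaries(times):
    """Summary statistics computed from an event record (response count, mean IRT)."""
    irts = np.diff(times)
    return np.array([len(times), irts.mean() if len(irts) else 0.0])

observed = summaries(simulate_event_records(0.5))      # pretend these are the data

accepted = []
for _ in range(20000):                                  # ABC rejection loop
    theta = rng.uniform(0.0, 2.0)                       # draw from the prior
    d = np.abs(summaries(simulate_event_records(theta)) - observed)
    if d[0] < 20 and d[1] < 0.3:                        # crude tolerance on each summary
        accepted.append(theta)

print("posterior mean of theta:", round(np.mean(accepted), 3))
```

Because the simulator produces full event records, the same accepted parameters can be reused to predict any other dependent variable computable from those records, which is the multivariate point made above.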
Efficient parallel architecture for highly coupled real-time linear system applications
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo
1988-01-01
A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.
Connectivity ranking of heterogeneous random conductivity models
NASA Astrophysics Data System (ADS)
Rizzo, C. B.; de Barros, F.
2017-12-01
To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state of the art provides several methods to generate 2D or 3D random K-fields, such as the classic multi-Gaussian fields or non-Gaussian fields, training-image-based fields and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be closely correlated with the early-time arrival of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
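A sketch of the graph-based measure described above: build a lattice graph over a conductivity field, weight each edge by its local resistance (proportional to 1/K), and take the least-resistance path between the inflow and outflow boundaries; repeating this over Monte Carlo realizations gives the distribution of minimum hydraulic resistance. The log-normal field generator below is a simple placeholder, not a geostatistical simulator.

```python
import numpy as np
import networkx as nx

def min_hydraulic_resistance(logK, dx=1.0):
    """Least cumulative resistance (~ sum of dx/K) from left to right boundary of a 2D field."""
    ny, nx_ = logK.shape
    K = np.exp(logK)
    G = nx.grid_2d_graph(ny, nx_)
    for (a, b) in G.edges:
        # series resistance of the two half-cells joined by this edge
        G.edges[a, b]["w"] = dx * 0.5 * (1.0 / K[a] + 1.0 / K[b])
    G.add_node("in"); G.add_node("out")
    for i in range(ny):                       # zero-cost connections to the two boundaries
        G.add_edge("in", (i, 0), w=0.0)
        G.add_edge("out", (i, nx_ - 1), w=0.0)
    return nx.shortest_path_length(G, "in", "out", weight="w")

rng = np.random.default_rng(0)
samples = [min_hydraulic_resistance(rng.normal(0.0, 1.0, (32, 32))) for _ in range(50)]
print("mean, std of minimum resistance:", round(np.mean(samples), 2), round(np.std(samples), 2))
```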
NASA Technical Reports Server (NTRS)
Yang, Guowei; Pasareanu, Corina S.; Khurshid, Sarfraz
2012-01-01
This paper introduces memoized symbolic execution (Memoise), a novel approach for more efficient application of forward symbolic execution, which is a well-studied technique for systematic exploration of program behaviors based on bounded execution paths. Our key insight is that application of symbolic execution often requires several successive runs of the technique on largely similar underlying problems, e.g., running it once to check a program to find a bug, fixing the bug, and running it again to check the modified program. Memoise introduces a trie-based data structure that stores the key elements of a run of symbolic execution. Maintenance of the trie during successive runs allows re-use of previously computed results of symbolic execution without the need for re-computing them as is traditionally done. Experiments using our prototype embodiment of Memoise show the benefits it holds in various standard scenarios of using symbolic execution, e.g., with iterative deepening of exploration depth, to perform regression analysis, or to enhance coverage.
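The core data structure is straightforward to sketch: a trie keyed on branch decisions, so that a later run can recognize path prefixes whose constraints were already solved and reuse or skip them. This is a simplified illustration of the idea, not the tool's actual implementation.

```python
class TrieNode:
    __slots__ = ("children", "explored")
    def __init__(self):
        self.children = {}       # branch decision (e.g. 'T'/'F' at a condition) -> TrieNode
        self.explored = False    # whether the subtree below this prefix was fully explored

class PathTrie:
    def __init__(self):
        self.root = TrieNode()

    def record(self, decisions):
        """Store one fully explored symbolic path, given its sequence of branch decisions."""
        node = self.root
        for d in decisions:
            node = node.children.setdefault(d, TrieNode())
        node.explored = True

    def already_explored(self, decisions):
        """True if this path prefix reaches a subtree completed in a previous run."""
        node = self.root
        for d in decisions:
            node = node.children.get(d)
            if node is None:
                return False
            if node.explored:
                return True
        return node.explored

trie = PathTrie()
trie.record(["c1:T", "c2:F"])                             # path from a previous run
print(trie.already_explored(["c1:T", "c2:F", "c3:T"]))    # True  -> results can be reused
print(trie.already_explored(["c1:F"]))                    # False -> must be executed afresh
```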
Anharmonic Vibrational Spectroscopy on Transition Metal Complexes
NASA Astrophysics Data System (ADS)
Latouche, Camille; Bloino, Julien; Barone, Vincenzo
2014-06-01
Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems. The systematic interpretation of experimental data and the full characterization of complex molecules can then be facilitated. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double harmonic approximation, so that more details become available. However, routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been carried out on organic molecules. Nevertheless, benchmarks of organometallic or inorganic metal complexes at this level are sorely lacking, despite the interest in these systems due to their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications to systems of direct technological or biological interest.
From empirical data to time-inhomogeneous continuous Markov processes.
Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G
2016-03-01
We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion is given of the bridge between rigorous mathematical results on the existence of generators and their computational implementation. Our detection algorithm is shown to be effective in more than 60% of tested matrices, typically 80% to 90%, and for those an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, possible applications of our framework to problems in different fields are briefly discussed.
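For the time-homogeneous special case the standard embedding check is short: take the matrix logarithm of the empirical transition matrix and test whether it is a valid generator (real, non-negative off-diagonal entries, rows summing to zero). The matrices below are toy examples; the contribution above is precisely the extension of such tests to time-inhomogeneous generators.

```python
import numpy as np
from scipy.linalg import logm

def homogeneous_generator_check(P, tol=1e-8):
    """Principal matrix logarithm of a stochastic matrix P, plus a test of whether
    it is a valid generator: real, non-negative off-diagonals, zero row sums."""
    L = logm(P)
    is_real = not np.iscomplexobj(L) or np.allclose(L.imag, 0.0, atol=tol)
    L = np.real(L)
    off_ok = np.all(L - np.diag(np.diag(L)) >= -tol)
    rows_ok = np.allclose(L.sum(axis=1), 0.0, atol=tol)
    return L, (is_real and off_ok and rows_ok)

P_ok = np.array([[0.90, 0.08, 0.02],
                 [0.05, 0.90, 0.05],
                 [0.02, 0.08, 0.90]])
P_bad = np.array([[0.1, 0.9],
                  [0.9, 0.1]])   # strongly oscillatory chain: no valid real generator

for P in (P_ok, P_bad):
    _, ok = homogeneous_generator_check(P)
    print("embeddable with a time-homogeneous generator:", ok)
```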
Mining integrated semantic networks for drug repositioning opportunities
Mullen, Joseph; Tipney, Hannah
2016-01-01
Current research and development approaches to drug discovery have become less fruitful and more costly. One alternative paradigm is that of drug repositioning. Many marketed examples of repositioned drugs have been identified through serendipitous or rational observations, highlighting the need for more systematic methodologies to tackle the problem. Systems-level approaches have the potential to enable the development of novel methods to understand the action of therapeutic compounds, but require an integrative approach to biological data. Integrated networks can facilitate systems-level analyses by combining multiple sources of evidence to provide a rich description of drugs, their targets and their interactions. Classically, such networks can be mined manually, where a skilled person is able to identify portions of the graph (semantic subgraphs) that are indicative of relationships between drugs and highlight possible repositioning opportunities. However, this approach is not scalable. Automated approaches are required to systematically mine integrated networks for these subgraphs and bring them to the attention of the user. We introduce a formal framework for the definition of integrated networks and their associated semantic subgraphs for drug interaction analysis and describe DReSMin, an algorithm for mining semantically rich networks for occurrences of a given semantic subgraph. This algorithm allows instances of complex semantic subgraphs that contain data about putative drug repositioning opportunities to be identified in a computationally tractable fashion, scaling close to linearly with network data. We demonstrate the utility of our approach by mining an integrated drug interaction network built from 11 sources. This work identified and ranked 9,643,061 putative drug-target interactions, showing a strong correlation between highly scored associations and those supported by literature. We discuss the 20 top-ranked associations in more detail, of which 14 are novel and 6 are supported by the literature. We also show that our approach better prioritizes known drug-target interactions than other state-of-the-art approaches for predicting such interactions. PMID:26844016
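The semantic-subgraph idea can be illustrated with a label-aware subgraph isomorphism search in networkx; this is a toy stand-in for DReSMin (which implements its own scalable mining), with made-up node names and a pattern of two drugs sharing a protein target linked to a disease.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# toy integrated network: nodes labeled by semantic type (hypothetical data)
G = nx.Graph()
G.add_nodes_from([("d1", {"type": "Drug"}), ("d2", {"type": "Drug"}),
                  ("p1", {"type": "Protein"}), ("dis1", {"type": "Disease"})])
G.add_edges_from([("d1", "p1"), ("d2", "p1"), ("p1", "dis1")])

# semantic subgraph: two distinct drugs sharing a protein target linked to a disease
Q = nx.Graph()
Q.add_nodes_from([("D_a", {"type": "Drug"}), ("D_b", {"type": "Drug"}),
                  ("P", {"type": "Protein"}), ("Dis", {"type": "Disease"})])
Q.add_edges_from([("D_a", "P"), ("D_b", "P"), ("P", "Dis")])

matcher = isomorphism.GraphMatcher(
    G, Q, node_match=lambda a, b: a["type"] == b["type"])
hits = list(matcher.subgraph_isomorphisms_iter())
print(len(hits), "occurrence(s), e.g.:", hits[0] if hits else None)
```

Each match maps concrete drugs onto the abstract pattern; ranking and scoring such matches across millions of candidate interactions is where the dedicated algorithm is needed.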
Wong, Kin-Yiu; Gao, Jiali
2008-09-09
In this paper, we describe an automated integration-free path-integral (AIF-PI) method, based on Kleinert's variational perturbation (KP) theory, to treat internuclear quantum-statistical effects in molecular systems. We have developed an analytical method to obtain the centroid potential as a function of the variational parameter in the KP theory, which avoids numerical difficulties in path-integral Monte Carlo or molecular dynamics simulations, especially at the limit of zero-temperature. Consequently, the variational calculations using the KP theory can be efficiently carried out beyond the first order, i.e., the Giachetti-Tognetti-Feynman-Kleinert variational approach, for realistic chemical applications. By making use of the approximation of independent instantaneous normal modes (INM), the AIF-PI method can readily be applied to many-body systems. Previously, we have shown that in the INM approximation, the AIF-PI method is accurate for computing the quantum partition function of a water molecule (3 degrees of freedom) and the quantum correction factor for the collinear H(3) reaction rate (2 degrees of freedom). In this work, the accuracy and properties of the KP theory are further investigated by using the first three order perturbations on an asymmetric double-well potential, the bond vibrations of H(2), HF, and HCl represented by the Morse potential, and a proton-transfer barrier modeled by the Eckart potential. The zero-point energy, quantum partition function, and tunneling factor for these systems have been determined and are found to be in excellent agreement with the exact quantum results. Using our new analytical results at the zero-temperature limit, we show that the minimum value of the computed centroid potential in the KP theory is in excellent agreement with the ground state energy (zero-point energy) and the position of the centroid potential minimum is the expectation value of particle position in wave mechanics. The fast convergent property of the KP theory is further examined in comparison with results from the traditional Rayleigh-Ritz variational approach and Rayleigh-Schrödinger perturbation theory in wave mechanics. The present method can be used for thermodynamic and quantum dynamic calculations, including to systematically determine the exact value of zero-point energy and to study kinetic isotope effects for chemical reactions in solution and in enzymes.
Personal Health Records: A Systematic Literature Review.
Roehrs, Alex; da Costa, Cristiano André; Righi, Rodrigo da Rosa; de Oliveira, Kleinner Silva Farias
2017-01-06
Information and communication technology (ICT) has transformed the health care field worldwide. One of the main drivers of this change is the electronic health record (EHR). However, there are still open issues and challenges because the EHR usually reflects the partial view of a health care provider without the ability for patients to control or interact with their data. Furthermore, with the growth of mobile and ubiquitous computing, the number of records regarding personal health is increasing exponentially. This movement has been characterized as the Internet of Things (IoT), including the widespread development of wearable computing technology and assorted types of health-related sensors. This leads to the need for an integrated method of storing health-related data, defined as the personal health record (PHR), which could be used by health care providers and patients. This approach could combine EHRs with data gathered from sensors or other wearable computing devices. This unified view of patients' health could be shared with providers, who may not only use previous health-related records but also expand them with data resulting from their interactions. Another PHR advantage is that patients can interact with their health data, making decisions that may positively affect their health. This work aimed to explore the recent literature related to PHRs by defining the taxonomy and identifying challenges and open questions. In addition, this study specifically sought to identify data types, standards, profiles, goals, methods, functions, and architecture with regard to PHRs. The method to achieve these objectives consists of using the systematic literature review approach, which is guided by research questions using the population, intervention, comparison, outcome, and context (PICOC) criteria. As a result, we reviewed more than 5000 scientific studies published in the last 10 years, selected the most significant approaches, and thoroughly surveyed the health care field related to PHRs. We developed an updated taxonomy and identified challenges, open questions, and current data types, related standards, main profiles, input strategies, goals, functions, and architectures of the PHR. All of these results contribute to the achievement of a significant degree of coverage regarding the technology related to PHRs. ©Alex Roehrs, Cristiano André da Costa, Rodrigo da Rosa Righi, Kleinner Silva Farias de Oliveira. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 06.01.2017.
A fast, robust and tunable synthetic gene oscillator.
Stricker, Jesse; Cookson, Scott; Bennett, Matthew R; Mather, William H; Tsimring, Lev S; Hasty, Jeff
2008-11-27
One defining goal of synthetic biology is the development of engineering-based approaches that enable the construction of gene-regulatory networks according to 'design specifications' generated from computational modelling. This approach provides a systematic framework for exploring how a given regulatory network generates a particular phenotypic behaviour. Several fundamental gene circuits have been developed using this approach, including toggle switches and oscillators, and these have been applied in new contexts such as triggered biofilm development and cellular population control. Here we describe an engineered genetic oscillator in Escherichia coli that is fast, robust and persistent, with tunable oscillatory periods as fast as 13 min. The oscillator was designed using a previously modelled network architecture comprising linked positive and negative feedback loops. Using a microfluidic platform tailored for single-cell microscopy, we precisely control environmental conditions and monitor oscillations in individual cells through multiple cycles. Experiments reveal remarkable robustness and persistence of oscillations in the designed circuit; almost every cell exhibited large-amplitude fluorescence oscillations throughout observation runs. The oscillatory period can be tuned by altering inducer levels, temperature and the media source. Computational modelling demonstrates that the key design principle for constructing a robust oscillator is a time delay in the negative feedback loop, which can mechanistically arise from the cascade of cellular processes involved in forming a functional transcription factor. The positive feedback loop increases the robustness of the oscillations and allows for greater tunability. Examination of our refined model suggested the existence of a simplified oscillator design without positive feedback, and we construct an oscillator strain confirming this computational prediction.
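The delay-in-negative-feedback design principle highlighted above can be illustrated with a generic delayed-repression model (a minimal sketch with arbitrary parameter values, not the authors' published circuit model):

```python
import numpy as np

# Generic delayed negative-feedback oscillator (illustrative only; parameters are arbitrary).
beta, gamma = 10.0, 1.0     # maximal production and degradation rates (1/min)
K, n, tau = 1.0, 4.0, 10.0  # repression threshold, Hill coefficient, delay (min)
dt, t_end = 0.01, 300.0

steps = int(t_end / dt)
lag = int(tau / dt)
x = np.zeros(steps)
x[0] = 0.1  # initial protein level

for i in range(steps - 1):
    x_delayed = x[i - lag] if i >= lag else x[0]          # constant history before t = tau
    production = beta / (1.0 + (x_delayed / K) ** n)      # repression by the delayed protein level
    x[i + 1] = x[i] + dt * (production - gamma * x[i])    # explicit Euler step

# Rough period estimate from successive maxima.
peaks = [i for i in range(1, steps - 1) if x[i - 1] < x[i] > x[i + 1]]
if len(peaks) > 3:
    print("approximate period (min):", np.mean(np.diff(peaks[1:])) * dt)
```

Lengthening the delay or steepening the repression strengthens the oscillations, consistent with the time-delay design principle described in the abstract.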
Hack City Summer: Computer Camps Can Bring a Vacation of Keyboard Delights.
ERIC Educational Resources Information Center
Shell, Ellen Ruppel
1983-01-01
Activities at a summer computer camp (Camp Atari, held at East Stroudsburg State College, PA) are described. The curriculum, using logic, systematic analysis, and other fundamental programming skills, teaches students to interact effectively and creatively with computers. Sources for finding a computer camp are included. (JN)
The diversity and evolution of ecological and environmental citizen science.
Pocock, Michael J O; Tweddle, John C; Savage, Joanna; Robinson, Lucy D; Roy, Helen E
2017-01-01
Citizen science-the involvement of volunteers in data collection, analysis and interpretation-simultaneously supports research and public engagement with science, and its profile is rapidly rising. Citizen science represents a diverse range of approaches, but until now this diversity has not been quantitatively explored. We conducted a systematic internet search and discovered 509 environmental and ecological citizen science projects. We scored each project for 32 attributes based on publicly obtainable information and used multiple factor analysis to summarise this variation to assess citizen science approaches. We found that projects varied according to their methodological approach from 'mass participation' (e.g. easy participation by anyone anywhere) to 'systematic monitoring' (e.g. trained volunteers repeatedly sampling at specific locations). They also varied in complexity from approaches that are 'simple' to those that are 'elaborate' (e.g. provide lots of support to gather rich, detailed datasets). There was a separate cluster of entirely computer-based projects but, in general, we found that the range of citizen science projects in ecology and the environment showed continuous variation and cannot be neatly categorised into distinct types of activity. While the diversity of projects begun in each time period (pre 1990, 1990-99, 2000-09 and 2010-13) has not increased, we found that projects tended to have become increasingly different from each other as time progressed (possibly due to changing opportunities, including technological innovation). Most projects were still active so consequently we found that the overall diversity of active projects (available for participation) increased as time progressed. Overall, understanding the landscape of citizen science in ecology and the environment (and its change over time) is valuable because it informs the comparative evaluation of the 'success' of different citizen science approaches. Comparative evaluation provides an evidence-base to inform the future development of citizen science activities.
Foulquier, Nathan; Redou, Pascal; Le Gal, Christophe; Rouvière, Bénédicte; Pers, Jacques-Olivier; Saraux, Alain
2018-05-17
Big data analysis has become a common way to extract information from complex and large datasets in most scientific domains. This approach is now used to study large cohorts of patients in medicine. This work is a review of publications that have used artificial intelligence and advanced machine learning techniques to study physiopathogenesis-based treatments in primary Sjögren's syndrome (pSS). A systematic literature review retrieved all articles reporting on the use of advanced statistical analysis applied to the study of systemic autoimmune diseases (SADs) over the last decade. An automatic bibliography screening method has been developed to perform this task. The program called BIBOT was designed to fetch and analyze articles from the PubMed database using a list of keywords and Natural Language Processing approaches. The evolution of trends in statistical approaches, sizes of cohorts and number of publications over this period were also computed in the process. In all, 44077 abstracts were screened and 1017 publications were analyzed. The mean number of selected articles was 101.0 (S.D. 19.16) per year, but increased significantly over time (from 74 articles in 2008 to 138 in 2017). Among them, only 12 focused on pSS, and none of them emphasized pathogenesis-based treatments. To conclude, medicine is progressively entering the era of big data analysis and artificial intelligence, but these approaches are not yet used to describe pSS-specific pathogenesis-based treatment. Nevertheless, large multicentre studies are investigating this aspect with advanced algorithmic tools on large cohorts of SADs patients.
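A keyword-driven PubMed retrieval step of the kind described for BIBOT could look roughly like the following (a hypothetical sketch using NCBI's public E-utilities endpoints; this is not the BIBOT code itself, and the keyword list is made up):

```python
import requests

# Hypothetical sketch of keyword-based PubMed retrieval via NCBI E-utilities (not BIBOT itself).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(keywords, start_year, end_year, retmax=100):
    """Return a list of PubMed IDs matching the keywords within a publication-year range."""
    term = " AND ".join(keywords) + f" AND {start_year}:{end_year}[dp]"
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    reply = requests.get(f"{EUTILS}/esearch.fcgi", params=params, timeout=30)
    reply.raise_for_status()
    return reply.json()["esearchresult"]["idlist"]

def fetch_abstracts(pmids):
    """Fetch plain-text abstracts for a list of PubMed IDs, ready for NLP screening."""
    params = {"db": "pubmed", "id": ",".join(pmids), "rettype": "abstract", "retmode": "text"}
    reply = requests.get(f"{EUTILS}/efetch.fcgi", params=params, timeout=60)
    reply.raise_for_status()
    return reply.text

if __name__ == "__main__":
    ids = search_pubmed(["systemic autoimmune diseases", "machine learning"], 2008, 2017)
    print(len(ids), "abstracts retrieved for downstream screening")
```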
A study on strategic provisioning of cloud computing services.
Whaiduzzaman, Md; Haque, Mohammad Nazmul; Rejaul Karim Chowdhury, Md; Gani, Abdullah
2014-01-01
Cloud computing is currently emerging as an ever-changing, growing paradigm that models "everything-as-a-service." Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for the customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics can be guaranteed by service provisioning. Hence, continuous service provisioning that satisfies the user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provision techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified.
Advanced Engineering Strategies for Periodontal Complex Regeneration.
Park, Chan Ho; Kim, Kyoung-Hwa; Lee, Yong-Moo; Seol, Yang-Jo
2016-01-18
The regeneration and integration of multiple tissue types are critical for efforts to restore the function of the musculoskeletal complex. In particular, the neogenesis of periodontal constructs for systematic tooth-supporting functions is a current challenge due to micron-scaled tissue compartmentalization, oblique/perpendicular orientations of fibrous connective tissues to the tooth root surface and the orchestration of multiple regenerated tissues. Although there have been various biological and biochemical achievements, periodontal tissue regeneration remains limited and unpredictable. The purpose of this paper is to discuss current advanced engineering approaches for periodontal complex formation; computer-designed, customized scaffolding architectures; cell sheet technology-based multi-phasic approaches; and patient-specific constructs using bioresorbable polymeric material and 3-D printing technology for clinical application. The review covers various advanced technologies for periodontal complex regeneration and state-of-the-art therapeutic avenues in periodontal tissue engineering.
Marker optimization for facial motion acquisition and deformation.
Le, Binh H; Zhu, Mingyang; Deng, Zhigang
2013-11-01
A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layout. Despite its wide range of potential applications, this problem has not been systematically explored to date. This paper describes an approach to compute optimized marker layouts for facial motion acquisition as the optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate its two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.
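One simple way to pick a small set of informative control points from mesh sequences (a generic rank-revealing-selection illustration on synthetic data, not the paper's thin-shell optimization or constraints) is column selection on the stacked vertex-trajectory matrix:

```python
import numpy as np
from scipy.linalg import qr, lstsq

# Generic control-point selection sketch: pick k mesh vertices whose trajectories
# best explain all other vertex trajectories (synthetic stand-in data).
rng = np.random.default_rng(0)
n_frames, n_vertices, k = 200, 500, 12

basis = rng.standard_normal((n_frames, 8))                 # low-dimensional "expression" basis
mixing = rng.standard_normal((8, n_vertices * 3))
X = basis @ mixing + 0.01 * rng.standard_normal((n_frames, n_vertices * 3))

# Rank-revealing QR with column pivoting ranks coordinate columns by how much new
# information they add; map the leading columns back to vertex indices.
_, _, piv = qr(X, mode="economic", pivoting=True)
marker_vertices = sorted({p // 3 for p in piv[: 3 * k]})[:k]

# Evaluate: reconstruct all trajectories linearly from the selected marker trajectories.
cols = [3 * v + c for v in marker_vertices for c in range(3)]
W, *_ = lstsq(X[:, cols], X)
rel_err = np.linalg.norm(X - X[:, cols] @ W) / np.linalg.norm(X)
print(f"{len(marker_vertices)} markers, relative reconstruction error {rel_err:.3f}")
```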
Bishop, Warrick; Girao, Gary
2017-06-01
A strategy that discharges chest pain patients with negative high-sensitivity troponin and non-ischaemic electrocardiography changes may still result in 0.44% of patients experiencing myocardial infarction within 30 days. We observed that a pragmatic approach that systematically discharged 25 patients on the cardio-protective medications aspirin, metoprolol and atorvastatin, followed by prompt (<10 days) coronary computed tomography angiography, resulted in no major adverse cardiac events or adverse drug reactions at 30 days post-presentation. The strategy resulted in three patients (12%) ultimately diagnosed with likely unstable angina, which required planned coronary intervention in two patients and medical management in one patient. No unplanned readmissions for chest pain were noted from initial presentation through to 6-month follow up. © 2017 Royal Australasian College of Physicians.
NASA Technical Reports Server (NTRS)
Nikravesh, Parviz E.; Gim, Gwanghum; Arabyan, Ara; Rein, Udo
1989-01-01
The formulation of a method known as the joint coordinate method for automatic generation of the equations of motion for multibody systems is summarized. For systems containing open or closed kinematic loops, the equations of motion can be reduced systematically to a minimum number of second order differential equations. The application of recursive and nonrecursive algorithms to this formulation, computational considerations and the feasibility of implementing this formulation on multiprocessor computers are discussed.
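For context, a generic minimal-coordinate form of such reduced equations of motion (a textbook schematic, not taken from the report) is

\[
\mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}} = \mathbf{Q}\big(\mathbf{q}, \dot{\mathbf{q}}, t\big),
\]

where \(\mathbf{q}\) collects the independent joint coordinates, \(\mathbf{M}\) is the generalized mass matrix projected onto those coordinates, and \(\mathbf{Q}\) gathers applied and velocity-dependent generalized forces; for closed kinematic loops, the remaining loop-closure conditions are appended as algebraic constraint equations.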
Development of a Cloud Resolving Model for Heterogeneous Supercomputers
NASA Astrophysics Data System (ADS)
Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.
2017-12-01
A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.
Theory and algorithms to compute Helfrich bending forces: a review.
Guckenberger, Achim; Gekle, Stephan
2017-05-24
Cell membranes are vital to shield a cell's interior from the environment. At the same time they determine to a large extent the cell's mechanical resistance to external forces. In recent years there has been considerable interest in the accurate computational modeling of such membranes, driven mainly by the amazing variety of shapes that red blood cells and model systems such as vesicles can assume in external flows. Given that the typical height of a membrane is only a few nanometers while the surface of the cell extends over many micrometers, physical modeling approaches mostly consider the interface as a two-dimensional elastic continuum. Here we review recent modeling efforts focusing on one of the computationally most intricate components, namely the membrane's bending resistance. We start with a short background on the most widely used bending model due to Helfrich. While the Helfrich bending energy by itself is an extremely simple model equation, the computation of the resulting forces is far from trivial. At the heart of these difficulties lies the fact that the forces involve second order derivatives of the local surface curvature which by itself is the second derivative of the membrane geometry. We systematically derive and compare the different routes to obtain bending forces from the Helfrich energy, namely the variational approach and the thin-shell theory. While both routes lead to mathematically identical expressions, so-called linear bending models are shown to reproduce only the leading order term while higher orders differ. The main part of the review contains a description of various computational strategies which we classify into three categories: the force, the strong and the weak formulation. We finally give some examples for the application of these strategies in actual simulations.
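For orientation, the Helfrich bending energy referred to above has the standard form (a textbook expression under the common convention, with mean curvature \(H\), Gaussian curvature \(K\), and spontaneous curvature \(c_0\)):

\[
E_B = \frac{\kappa_B}{2}\int_S \left(2H - c_0\right)^2 \, \mathrm{d}A \;+\; \kappa_K \int_S K \, \mathrm{d}A .
\]

Obtaining the bending force density requires the variational derivative of \(E_B\) with respect to the membrane shape; for \(c_0 = 0\) and constant moduli, the normal force density contains terms proportional to \(\Delta_S H\) and \(H\left(H^2 - K\right)\), where \(\Delta_S\) is the surface Laplacian. This "second derivative of a second derivative of the geometry" structure is exactly the difficulty the review discusses.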
An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates
Khan, Usman; Falconi, Christian
2014-01-01
Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
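To see where the modified Bessel functions come from, consider a simplified, hypothetical radial model (not the paper's full matrix formulation): a thin circular-symmetric membrane exchanging heat with the ambient, whose steady-state temperature rise \(\theta(r) = T(r) - T_\infty\) obeys a fin-type equation

\[
\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}\!\left(r\,\frac{\mathrm{d}\theta}{\mathrm{d}r}\right) - \frac{\theta}{L^2} = 0,
\qquad
\theta(r) = A\, I_0\!\left(\frac{r}{L}\right) + B\, K_0\!\left(\frac{r}{L}\right),
\]

where \(L\) is a characteristic thermal length set by the membrane conductance and the heat-loss coefficient, \(I_0\) and \(K_0\) are modified Bessel functions of the first and second kind, and the constants \(A\), \(B\) follow from the boundary conditions at the heater edge and the membrane rim.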
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2014-01-01
Over the past two decades finite element (FE) analysis has become a popular tool for researchers seeking to simulate the biomechanics of the healthy and diabetic foot. The primary aims of these simulations have been to improve our understanding of the foot's complicated mechanical loading in health and disease and to inform interventions designed to prevent plantar ulceration, a major complication of diabetes. This article provides a systematic review and summary of the findings from FE analysis-based computational simulations of the diabetic foot. A systematic literature search was carried out and 31 relevant articles were identified covering three primary themes: methodological aspects relevant to modelling the diabetic foot; investigations of the pathomechanics of the diabetic foot; and simulation-based design of interventions to reduce ulceration risk. Methodological studies illustrated appropriate use of FE analysis for simulation of foot mechanics, incorporating nonlinear tissue mechanics, contact and rigid body movements. FE studies of pathomechanics have provided estimates of internal soft tissue stresses, and suggest that such stresses may often be considerably larger than those measured at the plantar surface and are proportionally greater in the diabetic foot compared to controls. FE analysis allowed evaluation of insole performance and development of new insole designs, footwear and corrective surgery to effectively provide intervention strategies. The technique also presents the opportunity to simulate the effect of changes associated with the diabetic foot on non-mechanical factors such as blood supply to local tissues. While significant advancement in diabetic foot research has been made possible by the use of FE analysis, translational utility of this powerful tool for routine clinical care at the patient level requires adoption of cost-effective (both in terms of labour and computation) and reliable approaches with clear clinical validity for decision making.
Hu, Jianfei; Neiswinger, Johnathan; Zhang, Jin; Zhu, Heng; Qian, Jiang
2015-01-01
Scaffold proteins play a crucial role in facilitating signal transduction in eukaryotes by bringing together multiple signaling components. In this study, we performed a systematic analysis of scaffold proteins in signal transduction by integrating protein-protein interaction and kinase-substrate relationship networks. We predicted 212 scaffold proteins that are involved in 605 distinct signaling pathways. The computational prediction was validated using a protein microarray-based approach. The predicted scaffold proteins showed several interesting characteristics, as we expected from the functionality of scaffold proteins. We found that the scaffold proteins are likely to interact with each other, which is consistent with previous finding that scaffold proteins tend to form homodimers and heterodimers. Interestingly, a single scaffold protein can be involved in multiple signaling pathways by interacting with other scaffold protein partners. Furthermore, we propose two possible regulatory mechanisms by which the activity of scaffold proteins is coordinated with their associated pathways through phosphorylation process. PMID:26393507
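The core network-integration idea can be caricatured in a few lines (a simplified heuristic sketch with made-up example data, not the authors' actual prediction pipeline or thresholds): a candidate scaffold is a protein that physically interacts with a kinase and with one or more of that kinase's substrates.

```python
from collections import defaultdict

# Toy protein-protein interaction (PPI) network and kinase-substrate relationships.
# These example tuples are made up for illustration only.
ppi = {("SCAF1", "KIN1"), ("SCAF1", "SUB1"), ("SCAF1", "SUB2"),
       ("SCAF2", "KIN2"), ("OTHER", "SUB1")}
kinase_substrates = {"KIN1": {"SUB1", "SUB2"}, "KIN2": {"SUB3"}}

# Symmetrize the PPI network for neighbor lookup.
neighbors = defaultdict(set)
for a, b in ppi:
    neighbors[a].add(b)
    neighbors[b].add(a)

def candidate_scaffolds(min_substrates=1):
    """Proteins interacting with a kinase and at least `min_substrates` of its substrates."""
    hits = defaultdict(list)
    for protein, partners in neighbors.items():
        for kinase, substrates in kinase_substrates.items():
            if protein == kinase:
                continue
            if kinase in partners and len(partners & substrates) >= min_substrates:
                hits[protein].append(kinase)
    return dict(hits)

print(candidate_scaffolds())  # {'SCAF1': ['KIN1']}
```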
Reynoso, G. A.; March, A. D.; Berra, C. M.; Strobietto, R. P.; Barani, M.; Iubatti, M.; Chiaradio, M. P.; Serebrisky, D.; Kahn, A.; Vaccarezza, O. A.; Leguiza, J. L.; Ceitlin, M.; Luna, D. A.; Bernaldo de Quirós, F. G.; Otegui, M. I.; Puga, M. C.; Vallejos, M.
2000-01-01
This presentation features linguistic and terminology management issues related to the development of the Spanish version of the Systematized Nomenclature of Medicine (SNOMED). It aims at describing the aspects of translating and the difficulties encountered in delivering a natural and consistent medical nomenclature. Bunge's three-layered model is referenced to analyze the sequence of symbolic concept representations. It further explains how a communicative translation based on a concept-to-concept approach was used to achieve the highest level of flawlessness and naturalness for the Spanish rendition of SNOMED. Translation procedures and techniques are described and exemplified. Both the computer-aided and human translation methods are portrayed. The scientific and translation team tasks are detailed, with focus on Newmark's four-level principle for the translation process, extended with a fifth further level relevant to the ontology to control the consistency of the typology of concepts. Finally the convenience for a common methodology to develop non-English versions of SNOMED is suggested. PMID:11079973
Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin
2016-07-01
Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor. Copyright © 2016 the American Physiological Society.
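A stripped-down version of the train-on-synthetic-anatomies, predict-per-lesion workflow could be sketched as follows (hypothetical features, a synthetic surrogate target, and a generic regressor; this is not the authors' model, feature set, or training data):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical geometric features per lesion of a synthetic coronary tree:
# [stenosis degree, lesion length (mm), reference diameter (mm), distance from ostium (mm)].
rng = np.random.default_rng(42)
n = 20000
X = np.column_stack([rng.uniform(0, 0.9, n), rng.uniform(1, 40, n),
                     rng.uniform(1.5, 4.5, n), rng.uniform(0, 120, n)])

# Stand-in for the physics-based (CFD) FFR used as the regression target during training.
ffr = np.clip(1.0 - 0.6 * X[:, 0]**2 - 0.004 * X[:, 1] * X[:, 0]
              + 0.01 * rng.standard_normal(n), 0.3, 1.0)

X_tr, X_te, y_tr, y_te = train_test_split(X, ffr, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("mean absolute error vs. surrogate CFD target:", np.abs(pred - y_te).mean())
# Functionally significant lesions are flagged with the clinical cutoff FFR <= 0.80.
print("flagged as significant:", int((pred <= 0.80).sum()), "of", len(pred))
```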
NASA Astrophysics Data System (ADS)
Bender, Jason D.
Understanding hypersonic aerodynamics is important for the design of next-generation aerospace vehicles for space exploration, national security, and other applications. Ground-level experimental studies of hypersonic flows are difficult and expensive; thus, computational science plays a crucial role in this field. Computational fluid dynamics (CFD) simulations of extremely high-speed flows require models of chemical and thermal nonequilibrium processes, such as dissociation of diatomic molecules and vibrational energy relaxation. Current models are outdated and inadequate for advanced applications. We describe a multiscale computational study of gas-phase thermochemical processes in hypersonic flows, starting at the atomic scale and building systematically up to the continuum scale. The project was part of a larger effort centered on collaborations between aerospace scientists and computational chemists. We discuss the construction of potential energy surfaces for the N4, N2O2, and O4 systems, focusing especially on the multi-dimensional fitting problem. A new local fitting method named L-IMLS-G2 is presented and compared with a global fitting method. Then, we describe the theory of the quasiclassical trajectory (QCT) approach for modeling molecular collisions. We explain how we implemented the approach in a new parallel code for high-performance computing platforms. Results from billions of QCT simulations of high-energy N2 + N2, N2 + N, and N2 + O2 collisions are reported and analyzed. Reaction rate constants are calculated and sets of reactive trajectories are characterized at both thermal equilibrium and nonequilibrium conditions. The data shed light on fundamental mechanisms of dissociation and exchange reactions -- and their coupling to internal energy transfer processes -- in thermal environments typical of hypersonic flows. We discuss how the outcomes of this investigation and other related studies lay a rigorous foundation for new macroscopic models for hypersonic CFD. This research was supported by the Department of Energy Computational Science Graduate Fellowship and by the Air Force Office of Scientific Research Multidisciplinary University Research Initiative.
Correlated sequential tunneling through a double barrier for interacting one-dimensional electrons
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-07-01
The problem of resonant tunneling through a quantum dot weakly coupled to spinless Tomonaga-Luttinger liquids has been studied. We compute the linear conductance due to sequential tunneling processes upon employing a master equation approach. Besides the previously used lowest-order golden rule rates describing uncorrelated sequential tunneling processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects can be important. Focusing mainly on the temperature dependence of the peak conductance, we discuss the relation of these findings to previous theoretical and experimental results.
Correlated sequential tunneling in Tomonaga-Luttinger liquid quantum dots
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-02-01
We investigate tunneling through a quantum dot formed by two strong impurities in a spinless Tomonaga-Luttinger liquid. Upon employing a Markovian master equation approach, we compute the linear conductance due to sequential tunneling processes. Besides the previously used lowest-order Golden Rule rates describing uncorrelated sequential tunneling (UST) processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects are shown to dominate over UST. Focusing mainly on the temperature dependence of the conductance maximum, we discuss the relation of our results to previous theoretical and experimental results.
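For reference, the sequential-tunneling (master-equation) framework used in both papers above starts, schematically, from rate equations for the dot occupation probabilities (a generic textbook form, not the correlated-tunneling rates derived in these works):

\[
\frac{\mathrm{d}P_n}{\mathrm{d}t} = \sum_{m \neq n} \left( W_{n \leftarrow m} P_m - W_{m \leftarrow n} P_n \right),
\qquad
G = \left. \frac{\partial I}{\partial V} \right|_{V \to 0},
\]

with the stationary occupations \(P_n\) obtained from \(\mathrm{d}P_n/\mathrm{d}t = 0\) and the current built from the tunneling rates through one barrier; correlated sequential tunneling enters through higher-order contributions to the rates \(W\).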
Lee, K-E; Lee, E-J; Park, H-S
2016-08-30
Recent advances in computational epigenetics have provided new opportunities to evaluate n-gram probabilistic language models. In this paper, we describe a systematic genome-wide approach for predicting functional roles in inactive chromatin regions by using a sequence-based Markovian chromatin map of the human genome. We demonstrate that Markov chains of sequences can be used as a precursor to predict functional roles in heterochromatin regions and provide an example comparing two publicly available chromatin annotations of large-scale epigenomics projects: ENCODE project consortium and Roadmap Epigenomics consortium.
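The basic building block of such a sequence-based Markovian map, a k-th order Markov chain estimated from DNA sequence, can be sketched as follows (a generic illustration with toy sequences, not the authors' genome-wide pipeline):

```python
from collections import defaultdict
import math

def train_markov(sequences, k=2):
    """Estimate k-th order transition probabilities P(next base | previous k bases)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(len(seq) - k):
            counts[seq[i:i + k]][seq[i + k]] += 1
    model = {}
    for context, nxt in counts.items():
        total = sum(nxt.values())
        model[context] = {base: c / total for base, c in nxt.items()}
    return model

def log_likelihood(seq, model, k=2, floor=1e-6):
    """Score a sequence under the model; higher means more 'typical' of the training set."""
    ll = 0.0
    for i in range(len(seq) - k):
        ll += math.log(model.get(seq[i:i + k], {}).get(seq[i + k], floor))
    return ll

# Toy example: train on GC-rich fragments and score two new fragments.
train = ["GCGCGGCGCCGCGGCGCG", "CGCGGCCGCGCGGCGGCC"]
model = train_markov(train, k=2)
print(log_likelihood("GCGCGGCG", model), log_likelihood("ATATATAT", model))
```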
Flight test trajectory control analysis
NASA Technical Reports Server (NTRS)
Walker, R.; Gupta, N.
1983-01-01
Recent extensions to optimal control theory applied to meaningful linear models with sufficiently flexible software tools provide powerful techniques for designing flight test trajectory controllers (FTTCs). This report describes the principal steps for systematic development of flight trajectory controllers, which can be summarized as planning, modeling, designing, and validating a trajectory controller. The techniques have been kept as general as possible and should apply to a wide range of problems where quantities must be computed and displayed to a pilot to improve pilot effectiveness and to reduce workload and fatigue. To illustrate the approach, a detailed trajectory guidance law is developed and demonstrated for the F-15 aircraft flying the zoom-and-pushover maneuver.
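As background for the optimal-control machinery mentioned above (standard linear-quadratic regulator relations often used in such designs, not details taken from the report), the controller gains for a linear model \(\dot{x} = Ax + Bu\) minimize a quadratic cost and follow from an algebraic Riccati equation:

\[
J = \int_0^{\infty} \left( x^{\mathsf{T}} Q x + u^{\mathsf{T}} R u \right) \mathrm{d}t,
\qquad
u = -Kx,\quad K = R^{-1} B^{\mathsf{T}} P,
\qquad
A^{\mathsf{T}} P + P A - P B R^{-1} B^{\mathsf{T}} P + Q = 0 .
\]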
NASA Technical Reports Server (NTRS)
Chen, M. H.; Berger, R. D.; Saul, J. P.; Stevenson, K.; Cohen, R. J.
1987-01-01
We report a new method for the noninvasive characterization of the frequency response of the autonomic nervous system (ANS) in mediating fluctuations in heart rate (HR). The approach entails computation of the transfer function magnitude and phase between instantaneous lung volume and HR. Broad band fluctuations in lung volume were initiated when subjects breathed on cue to a sequence of beeps spaced randomly in time. We studied 10 subjects in both supine and standing positions. The transfer function, averaged among all the subjects, showed systematic differences between the two postures, reflecting the differing frequency responses of the sympathetic and parasympathetic divisions of the ANS.
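A cross-spectral estimate of this kind of transfer function (a generic sketch on synthetic signals, not the authors' processing chain) can be written directly with standard spectral estimators, \(H(f) = P_{xy}(f)/P_{xx}(f)\):

```python
import numpy as np
from scipy.signal import welch, csd

# Synthetic stand-ins: broad-band "lung volume" input and a smoothed, noisy
# "heart rate" response (illustrative only).
fs = 4.0                        # samples per second
t = np.arange(0, 600, 1 / fs)   # 10 minutes of data
rng = np.random.default_rng(1)
lung_volume = rng.standard_normal(t.size)
heart_rate = np.convolve(lung_volume, np.exp(-np.arange(0, 10, 1 / fs)), mode="same")
heart_rate += 0.5 * rng.standard_normal(t.size)

# Transfer function estimate H(f) = P_xy(f) / P_xx(f).
f, Pxx = welch(lung_volume, fs=fs, nperseg=256)
_, Pxy = csd(lung_volume, heart_rate, fs=fs, nperseg=256)
H = Pxy / Pxx

magnitude = np.abs(H)
phase = np.unwrap(np.angle(H))
print("gain near 0.1 Hz:", magnitude[np.argmin(np.abs(f - 0.1))])
```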
An Observation-Driven Agent-Based Modeling and Analysis Framework for C. elegans Embryogenesis.
Wang, Zi; Ramsey, Benjamin J; Wang, Dali; Wong, Kwai; Li, Husheng; Wang, Eric; Bao, Zhirong
2016-01-01
With cutting-edge live microscopy and image analysis, biologists can now systematically track individual cells in complex tissues and quantify cellular behavior over extended time windows. Computational approaches that utilize the systematic and quantitative data are needed to understand how cells interact in vivo to give rise to the different cell types and 3D morphology of tissues. An agent-based, minimum descriptive modeling and analysis framework is presented in this paper to study C. elegans embryogenesis. The framework is designed to incorporate the large amounts of experimental observations on cellular behavior and reserve data structures/interfaces that allow regulatory mechanisms to be added as more insights are gained. Observed cellular behaviors are organized into lineage identity, timing and direction of cell division, and path of cell movement. The framework also includes global parameters such as the eggshell and a clock. Division and movement behaviors are driven by statistical models of the observations. Data structures/interfaces are reserved for gene list, cell-cell interaction, cell fate and landscape, and other global parameters until the descriptive model is replaced by a regulatory mechanism. This approach provides a framework to handle the ongoing experiments of single-cell analysis of complex tissues where mechanistic insights lag data collection and need to be validated on complex observations.
Dretzke, Janine; Ensor, Joie; Bayliss, Sue; Hodgkinson, James; Lordkipanidzé, Marie; Riley, Richard D; Fitzmaurice, David; Moore, David
2014-12-03
Prognostic factors are associated with the risk of future health outcomes in individuals with a particular health condition. The prognostic ability of such factors is increasingly being assessed in both primary research and systematic reviews. Systematic review methodology in this area is continuing to evolve, reflected in variable approaches to key methodological aspects. The aim of this article was to (i) explore and compare the methodology of systematic reviews of prognostic factors undertaken for the same clinical question, (ii) to discuss implications for review findings, and (iii) to present recommendations on what might be considered to be 'good practice' approaches. The sample was comprised of eight systematic reviews addressing the same clinical question, namely whether 'aspirin resistance' (a potential prognostic factor) has prognostic utility relative to future vascular events in patients on aspirin therapy for secondary prevention. A detailed comparison of methods around study identification, study selection, quality assessment, approaches to analysis, and reporting of findings was undertaken and the implications discussed. These were summarised into key considerations that may be transferable to future systematic reviews of prognostic factors. Across systematic reviews addressing the same clinical question, there were considerable differences in the numbers of studies identified and overlap between included studies, which could only partially be explained by different study eligibility criteria. Incomplete reporting and differences in terminology within primary studies hampered study identification and selection process across reviews. Quality assessment was highly variable and only one systematic review considered a checklist for studies of prognostic questions. There was inconsistency between reviews in approaches towards analysis, synthesis, addressing heterogeneity and reporting of results. Different methodological approaches may ultimately affect the findings and interpretation of systematic reviews of prognostic research, with implications for clinical decision-making.
[The radiologist physician in major trauma evaluation].
Motta-Ramírez, Gaspar Alberto
2016-01-01
Trauma is the most common cause of death in young adults. A multidisciplinary trauma team consists of at least a surgical team, an anesthesiology team, a radiologic team, and an emergency department team. It is important to recognize the integration of the multidisciplinary medical team in managing the trauma patient, which must include the radiologist physician responsible for the institutional approach to the systematization of imaging in the trauma patient, with emphasis on FAST (Focused Assessment with Sonography in Trauma)/USTA and whole-body computed tomography. Ultrasound is a cross-sectional method available for use in patients with major trauma. Whole-body multidetector computed tomography became the imaging modality of choice in the late 1990s. In patients with major trauma, the FAST examination is often the initial imaging examination, extended to extra-abdominal regions. Patients who have multitrauma from blunt mechanisms often require multiple diagnostic examinations, including computed tomography imaging of the torso as well as abdominopelvic computed tomography angiography. Multiphasic whole-body trauma imaging is feasible, helps detect clinically relevant vascular injuries, and results in diagnostic image quality in the majority of patients. Computed tomography has gained importance in the early diagnostic phase of trauma care in the emergency room. With a single continuous acquisition, whole-body computed tomography angiography is able to demonstrate all potentially injured organs, as well as vascular and bone structures, from the circle of Willis to the symphysis pubis.
Extended polarization in 3rd order SCC-DFTB from chemical potential equalization
Kaminski, Steve; Giese, Timothy J.; Gaus, Michael; York, Darrin M.; Elstner, Marcus
2012-01-01
In this work we augment the approximate density functional method SCC-DFTB (DFTB3) with the chemical potential equalization (CPE) approach in order to improve the performance for molecular electronic polarizabilities. The CPE method, originally implemented for NDDO-type methods by Giese and York, has been shown to significantly improve minimal basis methods with respect to response properties, and has been applied to SCC-DFTB recently. CPE makes it possible to overcome this inherent limitation of minimal basis methods by supplying an additional response density. The systematic underestimation is thereby corrected quantitatively without the need to extend the atomic orbital basis, i.e. without increasing the overall computational cost significantly. In particular, the dependence of the polarizability on the molecular charge state was significantly improved by the CPE extension of DFTB3. The empirical parameters introduced by the CPE approach were optimized for 172 organic molecules in order to match the results from density functional theory (DFT) methods using large basis sets. However, the first order derivatives of molecular polarizabilities, as required, e.g., to compute Raman activities, are not improved by the current CPE implementation, i.e. Raman spectra are not improved. PMID:22894819
Tanglegrams for rooted phylogenetic trees and networks
Scornavacca, Celine; Zickmann, Franziska; Huson, Daniel H.
2011-01-01
Motivation: In systematic biology, one is often faced with the task of comparing different phylogenetic trees, in particular in multi-gene analysis or cospeciation studies. One approach is to use a tanglegram in which two rooted phylogenetic trees are drawn opposite each other, using auxiliary lines to connect matching taxa. There is an increasing interest in using rooted phylogenetic networks to represent evolutionary history, so as to explicitly represent reticulate events, such as horizontal gene transfer, hybridization or reassortment. Thus, the question arises how to define and compute a tanglegram for such networks. Results: In this article, we present the first formal definition of a tanglegram for rooted phylogenetic networks and present a heuristic approach for computing one, called the NN-tanglegram method. We compare the performance of our method with existing tree tanglegram algorithms and also show a typical application to real biological datasets. For maximum usability, the algorithm does not require that the trees or networks are bifurcating or bicombining, or that they are on identical taxon sets. Availability: The algorithm is implemented in our program Dendroscope 3, which is freely available from www.dendroscope.org. Contact: scornava@informatik.uni-tuebingen.de; huson@informatik.uni-tuebingen.de PMID:21685078
NASA Astrophysics Data System (ADS)
Buongiorno Nardelli, Marco
2015-03-01
High-Throughput Quantum-Mechanics computation of materials properties by ab initio methods has become the foundation of an effective approach to materials design, discovery and characterization. This data-driven approach to materials science currently presents the most promising path to the development of advanced technological materials that could solve or mitigate important social and economic challenges of the 21st century. In particular, the rapid proliferation of computational data on materials properties presents the possibility to complement and extend materials property databases where the experimental data is lacking and difficult to obtain. Enhanced repositories such as AFLOWLIB open novel opportunities for structure discovery and optimization, including uncovering of unsuspected compounds, metastable structures and correlations between various properties. The practical realization of these opportunities depends on the design of efficient algorithms for electronic structure simulations of realistic material systems, the systematic compilation and classification of the generated data, and its presentation in an easily accessed form to the materials science community, the primary mission of the AFLOW consortium. This work was supported by ONR-MURI under Contract N00014-13-1-0635 and the Duke University Center for Materials Genomics.
Is Internet search better than structured instruction for web-based health education?
Finkelstein, Joseph; Bedra, McKenzie
2013-01-01
Internet provides access to vast amounts of comprehensive information regarding any health-related subject. Patients increasingly use this information for health education using a search engine to identify education materials. An alternative approach of health education via Internet is based on utilizing a verified web site which provides structured interactive education guided by adult learning theories. Comparison of these two approaches in older patients was not performed systematically. The aim of this study was to compare the efficacy of a web-based computer-assisted education (CO-ED) system versus searching the Internet for learning about hypertension. Sixty hypertensive older adults (age 45+) were randomized into control or intervention groups. The control patients spent 30 to 40 minutes searching the Internet using a search engine for information about hypertension. The intervention patients spent 30 to 40 minutes using the CO-ED system, which provided computer-assisted instruction about major hypertension topics. Analysis of pre- and post- knowledge scores indicated a significant improvement among CO-ED users (14.6%) as opposed to Internet users (2%). Additionally, patients using the CO-ED program rated their learning experience more positively than those using the Internet.
NASA Astrophysics Data System (ADS)
Cacciotti, R.; Valach, J.; Kuneš, P.; Čerňanský, M.; Blaško, M.; Křemen, P.
2013-07-01
The complex nature of cultural heritage conservation gives rise to the need for a systematic yet flexible organization of expert knowledge in the field. Such organization should address comprehensively the interrelations and complementarity among the different factors that come into play in the understanding of diagnostic and intervention problems. The purpose of MONDIS is to support this kind of organization. The approach consists of applying an ontological representation to the field of heritage conservation in order to establish an appropriate processing of data. The system replicates, in a computer-readable form, the basic dependencies among factors influencing the description, diagnosis, and intervention for damage to immovable objects. More specifically, MONDIS allows users to input and search entries concerning object description, structural evolution, location characteristics and risk, component, material properties, surveys and measurements, damage typology, damage triggering events and possible interventions. The system supports searching features typical of standard databases, as it allows for the digitalization of a wide range of information including professional reports, books, articles and scientific papers. It also allows for computer-aided retrieval of information tailored to the user's requirements. The foreseen outputs will include a web user interface and a mobile application for visual inspection purposes.
Large scale analysis of the mutational landscape in HT-SELEX improves aptamer discovery
Hoinka, Jan; Berezhnoy, Alexey; Dao, Phuong; Sauna, Zuben E.; Gilboa, Eli; Przytycka, Teresa M.
2015-01-01
High-Throughput (HT) SELEX combines SELEX (Systematic Evolution of Ligands by EXponential Enrichment), a method for aptamer discovery, with massively parallel sequencing technologies. This emerging technology provides data for a global analysis of the selection process and for simultaneous discovery of a large number of candidates but currently lacks dedicated computational approaches for their analysis. To close this gap, we developed novel in-silico methods to analyze HT-SELEX data and utilized them to study the emergence of polymerase errors during HT-SELEX. Rather than considering these errors as a nuisance, we demonstrated their utility for guiding aptamer discovery. Our approach builds on two main advancements in aptamer analysis: AptaMut—a novel technique allowing for the identification of polymerase errors conferring an improved binding affinity relative to the ‘parent’ sequence and AptaCluster—an aptamer clustering algorithm which is to our best knowledge, the only currently available tool capable of efficiently clustering entire aptamer pools. We applied these methods to an HT-SELEX experiment developing aptamers against Interleukin 10 receptor alpha chain (IL-10RA) and experimentally confirmed our predictions thus validating our computational methods. PMID:25870409
Hasan, Md Mehedi; Khatun, Mst Shamima; Mollah, Md Nurul Haque; Yong, Cao; Guo, Dianjing
2017-01-01
Lysine succinylation, an important type of protein posttranslational modification, plays significant roles in many cellular processes. Accurate identification of succinylation sites can facilitate our understanding of the molecular mechanism and potential roles of lysine succinylation. However, even in well-studied systems, a majority of the succinylation sites remain undetected because the traditional experimental approaches to succinylation site identification are often costly, time-consuming, and laborious. In silico approaches, on the other hand, are potentially an alternative strategy to predict succinylation substrates. In this paper, a novel computational predictor, SuccinSite2.0, was developed for predicting generic and species-specific protein succinylation sites. This predictor takes profile-based amino acid composition and orthogonal binary features as input, which are used to train a random forest classifier. We demonstrated that the proposed SuccinSite2.0 predictor outperformed other currently existing implementations on a complementary independent dataset. Furthermore, the important features that make visible contributions to species-specific and cross-species-specific prediction of protein succinylation sites were analyzed. The proposed predictor is anticipated to be a useful computational resource for lysine succinylation site prediction. The integrated species-specific online tool of SuccinSite2.0 is publicly accessible.
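The classification step can be illustrated with a toy version of an orthogonal binary (one-hot) window encoding fed to a random forest (entirely synthetic example data; this is not the SuccinSite2.0 feature set, window size, or training data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(7)

def one_hot(window):
    """Orthogonal binary encoding of a residue window centered on a lysine."""
    vec = np.zeros(len(window) * len(AA))
    for i, aa in enumerate(window):
        vec[i * len(AA) + AA.index(aa)] = 1.0
    return vec

def random_window(bias=False):
    """Synthetic length-15 window with a central lysine; 'positives' get a crude sequence bias."""
    w = [AA[rng.integers(len(AA))] for _ in range(15)]
    if bias:
        w[6], w[8] = "E", "D"
    w[7] = "K"
    return "".join(w)

windows = [random_window(bias=True) for _ in range(300)] + [random_window() for _ in range(300)]
X = np.array([one_hot(w) for w in windows])
y = np.array([1] * 300 + [0] * 300)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy on toy data:", cross_val_score(clf, X, y, cv=5).mean())
```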
Molecular Dynamics Analysis of Lysozyme Protein in Ethanol-Water Mixed Solvent Environment
NASA Astrophysics Data System (ADS)
Ochije, Henry Ikechukwu
The effect of protein-solvent interaction on protein structure is widely studied using both experimental and computational techniques. Despite such extensive studies, the molecular-level interaction of proteins with even simple solvents is still not fully understood. This work focuses on detailed molecular dynamics simulations to study the solvent effect on the lysozyme protein, using water, alcohol and different concentrations of water-alcohol mixtures as solvents. The lysozyme protein structure in water, alcohol and alcohol-water mixture (0-12% alcohol) was studied using the GROMACS molecular dynamics simulation code. Compared to the water environment, the lysozyme structure showed remarkable changes in solvents with increasing alcohol concentration. In particular, significant changes were observed in the protein secondary structure involving alpha helices. The influence of alcohol on the lysozyme protein was investigated by studying thermodynamic and structural properties. With increasing ethanol concentration we observed a systematic increase in total energy, enthalpy, root mean square deviation (RMSD), and radius of gyration. These properties were then fitted as functions of ethanol concentration using a polynomial interpolation approach. Using the resulting polynomial equation, we could determine the above quantities for any intermediate alcohol percentage. In order to validate this approach, we selected an intermediate ethanol percentage and carried out a full MD simulation. The results from the MD simulation were in reasonably good agreement with those obtained using the polynomial approach. Hence, the polynomial-based approach proposed here eliminates the need for computationally intensive full MD analysis for concentrations within the range (0-12%) studied in this work.
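The polynomial interpolation step described above amounts to fitting each property as a function of ethanol concentration and evaluating the fit at an untried concentration (a minimal sketch with made-up numbers, not the simulation data):

```python
import numpy as np

# Made-up radius-of-gyration values (nm) at the simulated ethanol concentrations (%).
concentration = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
radius_gyr = np.array([1.43, 1.44, 1.46, 1.49, 1.53])

# Fit a low-order polynomial to the trend and evaluate it at an intermediate concentration.
coeffs = np.polyfit(concentration, radius_gyr, deg=2)
poly = np.poly1d(coeffs)
print("predicted Rg at 7.5% ethanol:", round(float(poly(7.5)), 3), "nm")
```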
Lakin, Matthew R.; Brown, Carl W.; Horwitz, Eli K.; Fanning, M. Leigh; West, Hannah E.; Stefanovic, Darko; Graves, Steven W.
2014-01-01
The development of large-scale molecular computational networks is a promising approach to implementing logical decision making at the nanoscale, analogous to cellular signaling and regulatory cascades. DNA strands with catalytic activity (DNAzymes) are one means of systematically constructing molecular computation networks with inherent signal amplification. Linking multiple DNAzymes into a computational circuit requires the design of substrate molecules that allow a signal to be passed from one DNAzyme to another through programmed biochemical interactions. In this paper, we chronicle an iterative design process guided by biophysical and kinetic constraints on the desired reaction pathways and use the resulting substrate design to implement heterogeneous DNAzyme signaling cascades. A key aspect of our design process is the use of secondary structure in the substrate molecule to sequester a downstream effector sequence prior to cleavage by an upstream DNAzyme. Our goal was to develop a concrete substrate molecule design to achieve efficient signal propagation with maximal activation and minimal leakage. We have previously employed the resulting design to develop high-performance DNAzyme-based signaling systems with applications in pathogen detection and autonomous theranostics. PMID:25347066
NASA Astrophysics Data System (ADS)
Aigner, M.; Köpplmayr, T.; Kneidinger, C.; Miethlinger, J.
2014-05-01
Barrier screws are widely used in the plastics industry. Due to the extreme diversity of their geometries, describing the flow behavior is difficult and rarely done in practice. We present a systematic approach based on networks that uses tensor algebra and numerical methods to model and calculate selected barrier screw geometries in terms of pressure, mass flow, and residence time. In addition, we report the results of three-dimensional simulations using the commercially available ANSYS Polyflow software. The major drawbacks of three-dimensional finite-element-method (FEM) simulations are that they require vast computational power and large quantities of memory, and consume considerable time to create a geometric model by computer-aided design (CAD) and to complete a flow calculation. Consequently, a modified 2.5-dimensional finite volume method, termed network analysis, is preferable. The results obtained by network analysis and FEM simulations correlated well. Network analysis provides an efficient alternative to complex FEM software in terms of computing power and memory consumption. Furthermore, typical barrier screw geometries can be parameterized and used for flow calculations without time-consuming CAD constructions.
Daubechies wavelets for linear scaling density functional theory.
Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan
2014-05-28
We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.
Boosting antibody developability through rational sequence optimization.
Seeliger, Daniel; Schulz, Patrick; Litzenburger, Tobias; Spitz, Julia; Hoerer, Stefan; Blech, Michaela; Enenkel, Barbara; Studts, Joey M; Garidel, Patrick; Karow, Anne R
2015-01-01
The application of monoclonal antibodies as commercial therapeutics poses substantial demands on the stability and properties of an antibody. Therapeutic molecules that exhibit favorable properties increase the success rate in development. However, it is not yet fully understood how the protein sequence of an antibody translates into favorable in vitro molecule properties. In this work, computational design strategies based on heuristic sequence analysis were used to systematically modify an antibody that exhibited a tendency to precipitate in vitro. The resulting series of closely related antibodies showed improved stability as assessed by biophysical methods and long-term stability experiments. As a notable observation, expression levels also improved in comparison with the wild-type candidate. The methods employed to optimize the protein sequences, as well as the biophysical data used to determine the effect on stability under conditions commonly used in the formulation of therapeutic proteins, are described. Together, the experimental and computational data led to consistent conclusions regarding the effect of the introduced mutations. Our approach exemplifies how computational methods can be used to guide antibody optimization for increased stability.
Ranasinghesagara, Janaka C.; Hayakawa, Carole K.; Davis, Mitchell A.; Dunn, Andrew K.; Potma, Eric O.; Venugopalan, Vasan
2014-01-01
We develop an efficient method for accurately calculating the electric field of tightly focused laser beams in the presence of specific configurations of microscopic scatterers. This Huygens–Fresnel wave-based electric field superposition (HF-WEFS) method computes the amplitude and phase of the scattered electric field in excellent agreement with finite difference time-domain (FDTD) solutions of Maxwell’s equations. Our HF-WEFS implementation is 2–4 orders of magnitude faster than the FDTD method and enables systematic investigations of the effects of scatterer size and configuration on the focal field. We demonstrate the power of the new HF-WEFS approach by mapping several metrics of focal field distortion as a function of scatterer position. This analysis shows that the maximum focal field distortion occurs for single scatterers placed below the focal plane with an offset from the optical axis. The HF-WEFS method represents an important first step toward the development of a computational model of laser-scanning microscopy of thick cellular/tissue specimens. PMID:25121440
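The superposition idea at the heart of a Huygens–Fresnel wavelet approach can be sketched generically (this toy script sums free-space spherical wavelets from point sources; it is not the HF-WEFS implementation, which handles focused beams and scatterer boundary conditions):

```python
import numpy as np

# Toy Huygens-type superposition: the field at observation points is the coherent sum
# of spherical wavelets exp(i k r) / r emitted by a set of secondary point sources.
wavelength = 0.8e-6                     # metres
k = 2 * np.pi / wavelength

# Secondary sources spread over a small aperture in the z = 0 plane (illustrative only).
xs = np.linspace(-5e-6, 5e-6, 101)
sources = np.stack([xs, np.zeros_like(xs), np.zeros_like(xs)], axis=1)
amplitudes = np.ones(len(sources), dtype=complex)

# Observation points along x in a plane 20 micrometres downstream.
xo = np.linspace(-10e-6, 10e-6, 201)
obs = np.stack([xo, np.zeros_like(xo), np.full_like(xo, 20e-6)], axis=1)

# Pairwise distances between observation points and sources, then coherent summation.
r = np.linalg.norm(obs[:, None, :] - sources[None, :, :], axis=2)
field = (amplitudes[None, :] * np.exp(1j * k * r) / r).sum(axis=1)

intensity = np.abs(field) ** 2
print("peak-to-edge intensity ratio:", float(intensity.max() / intensity[0]))
```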
Computational methods for analysis and inference of kinase/inhibitor relationships
Ferrè, Fabrizio; Palmeri, Antonio; Helmer-Citterich, Manuela
2014-01-01
The central role of kinases in virtually all signal transduction networks is the driving motivation for the development of compounds modulating their activity. ATP-mimetic inhibitors are essential tools for elucidating signaling pathways and are emerging as promising therapeutic agents. However, off-target ligand binding and complex and sometimes unexpected kinase/inhibitor relationships can occur for seemingly unrelated kinases, stressing that computational approaches are needed for learning the interaction determinants and for inferring the effect of small compounds on a given kinase. Recently published high-throughput profiling studies assessed the effects of thousands of small compound inhibitors, covering a substantial portion of the kinome. This wealth of data paved the road for computational resources and methods that can offer a major contribution to understanding the reasons for inhibition, helping in the rational design of more specific molecules, in the in silico prediction of inhibition for those neglected kinases for which no systematic analysis has been carried out yet, and in the selection of novel inhibitors with desired selectivity, while also opening novel avenues for personalized therapies. PMID:25071826
Long Penetration Mode Counterflowing Jets for Supersonic Slender Configurations - A Numerical Study
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Cheng, Gary; Chang, Chau-Layn; Zichettello, Benjamin; Bilyeu, David L.
2013-01-01
A novel approach of using counterflowing jets positioned strategically on the aircraft and exploiting its long penetration mode (LPM) of interaction towards sonic-boom mitigation forms the motivation for this study. Given that most previous studies on the counterflowing LPM jet have all been on blunt bodies and at high supersonic or hypersonic flow conditions, exploring the feasibility to obtain a LPM jet issuing from a slender body against low supersonic freestream conditions is the main focus of this study. Computational fluid dynamics computations of axisymmetric models (cone-cylinder and quartic geometry), of relevance to NASA's High Speed project, are carried out using the space-time conservation element solution element viscous flow solver with unstructured meshes. A systematic parametric study is conducted to determine the optimum combination of counterflowing jet size, mass flow rate, and nozzle geometry for obtaining LPM jets. Details from these computations will be used to assess the potential of the LPM counterflowing supersonic jet as a means of active flow control for enabling supersonic flight over land and to establish the knowledge base for possible future implementation of such technologies.
2015-01-01
The 0–0 energies of 80 medium and large molecules have been computed with a large panel of theoretical formalisms. We have used an approach computationally tractable for large molecules, that is, the structural and vibrational parameters are obtained with TD-DFT, the solvent effects are accounted for with the PCM model, whereas the total and transition energies have been determined with TD-DFT and with five wave function approaches accounting for contributions from double excitations, namely, CIS(D), ADC(2), CC2, SCS-CC2, and SOS-CC2, as well as Green’s function based BSE/GW approach. Atomic basis sets including diffuse functions have been systematically applied, and several variations of the PCM have been evaluated. Using solvent corrections obtained with corrected linear-response approach, we found that three schemes, namely, ADC(2), CC2, and BSE/GW allow one to reach a mean absolute deviation smaller than 0.15 eV compared to the measurements, the two former yielding slightly better correlation with experiments than the latter. CIS(D), SCS-CC2, and SOS-CC2 provide significantly larger deviations, though the latter approach delivers highly consistent transition energies. In addition, we show that (i) ADC(2) and CC2 values are extremely close to each other but for systems absorbing at low energies; (ii) the linear-response PCM scheme tends to overestimate solvation effects; and that (iii) the average impact of nonequilibrium correction on 0–0 energies is negligible. PMID:26574326
Bondü, Rebecca; Scheithauer, Herbert
2009-01-01
In March and September 2009, the school shootings in Winnenden and Ansbach once again demonstrated the need for preventive approaches to avert further offences in Germany. However, the low frequency of such offences and the low specificity of the risk factors known so far make prediction and prevention difficult. Nonetheless, several preventive approaches are currently under discussion. The present article highlights these approaches and their specific advantages and disadvantages. As school shootings are multicausally determined, approaches focusing only on single aspects (e.g., prohibiting violent computer games or further tightening gun laws) do not meet the requirements. Other measures, such as installing technical safety devices or optimizing the responses of police and school attendants, are intended to reduce harm in case of an emergency. Promising, scientifically founded preventive approaches instead focus on secondary prevention and employ the threat assessment approach, which is widespread in the USA. In this framework, the responsible occupational groups, such as teachers, school psychologists and police officers, are to be trained in identifying students' warning signs, systematically assessing the danger such students pose to themselves and others, and initiating suitable interventions.
Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.
Aji, Ablimit; Wang, Fusheng; Saltz, Joel H
2012-11-06
Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven by not only geospatial problems in numerous fields, but also emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amount of data with fast response, which is faced with two major challenges: the "big data" challenge and the high computation complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
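The partition-merge idea described above can be illustrated with a toy, in-memory sketch: objects (here, bounding boxes) are hashed to grid tiles in a "map" step and candidate pairs are joined per tile in a "reduce" step. This is not the authors' MapReduce framework or index structure, just a hypothetical illustration of partitioned spatial joining with invented data.

```python
from collections import defaultdict
from itertools import product

# Each object: (id, (xmin, ymin, xmax, ymax)); two toy layers to be joined.
layer_a = [("a1", (0.0, 0.0, 1.0, 1.0)), ("a2", (4.2, 4.2, 5.0, 5.0))]
layer_b = [("b1", (0.5, 0.5, 1.5, 1.5)), ("b2", (8.0, 8.0, 9.0, 9.0))]

TILE = 2.0  # grid tile size used to partition space

def tiles(box):
    """Map step: yield every grid tile a bounding box overlaps."""
    xmin, ymin, xmax, ymax = box
    xs = range(int(xmin // TILE), int(xmax // TILE) + 1)
    ys = range(int(ymin // TILE), int(ymax // TILE) + 1)
    return product(xs, ys)

def intersects(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

# Partition both layers by tile (the "map" phase).
partitions = defaultdict(lambda: ([], []))
for oid, box in layer_a:
    for t in tiles(box):
        partitions[t][0].append((oid, box))
for oid, box in layer_b:
    for t in tiles(box):
        partitions[t][1].append((oid, box))

# Join within each partition, then merge results while de-duplicating pairs.
result = set()
for part_a, part_b in partitions.values():
    for (ida, boxa), (idb, boxb) in product(part_a, part_b):
        if intersects(boxa, boxb):
            result.add((ida, idb))
print(sorted(result))  # [('a1', 'b1')]
```

In the real system each tile's work would run as a distributed task over on-demand spatial indexes rather than as an in-memory loop.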
Computer work and musculoskeletal disorders of the neck and upper extremity: A systematic review
2010-01-01
Background This review examines the evidence for an association between computer work and neck and upper extremity disorders (except carpal tunnel syndrome). Methods A systematic critical review of studies of computer work and musculoskeletal disorders verified by a physical examination was performed. Results A total of 22 studies (26 articles) fulfilled the inclusion criteria. Results show limited evidence for a causal relationship between computer work per se, computer mouse and keyboard time related to a diagnosis of wrist tendonitis, and for an association between computer mouse time and forearm disorders. Limited evidence was also found for a causal relationship between computer work per se and computer mouse time related to tension neck syndrome, but the evidence for keyboard time was insufficient. Insufficient evidence was found for an association between other musculoskeletal diagnoses of the neck and upper extremities, including shoulder tendonitis and epicondylitis, and any aspect of computer work. Conclusions There is limited epidemiological evidence for an association between aspects of computer work and some of the clinical diagnoses studied. None of the evidence was considered as moderate or strong and there is a need for more and better documentation. PMID:20429925
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almeida, Leandro G.; Physics Department, Brookhaven National Laboratory, Upton, New York 11973; Sturm, Christian
2010-09-01
Light quark masses can be determined through lattice simulations in regularization invariant momentum-subtraction (RI/MOM) schemes. Subsequently, matching factors, computed in continuum perturbation theory, are used in order to convert these quark masses from an RI/MOM scheme to the MS-bar scheme. We calculate the two-loop corrections in QCD to these matching factors as well as the three-loop mass anomalous dimensions for the RI/SMOM and RI/SMOM_{γμ} schemes. These two schemes are characterized by a symmetric subtraction point. Providing the conversion factors in the two different schemes allows for a better understanding of the systematic uncertainties. The two-loop expansion coefficients of the matching factors for both schemes turn out to be small compared to the traditional RI/MOM schemes. For n_f = 3 quark flavors they are about 0.6%-0.7% and 2%, respectively, of the leading order result at scales of about 2 GeV. Therefore, they will allow for a significant reduction of the systematic uncertainty of light quark mass determinations obtained through this approach. The determination of these matching factors requires the computation of amputated Green's functions with the insertions of quark bilinear operators. As a by-product of our calculation we also provide the corresponding results for the tensor operator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturm, C.; Almeida, L.
2010-04-26
Light quark masses can be determined through lattice simulations in regularization invariant momentum-subtraction (RI/MOM) schemes. Subsequently, matching factors, computed in continuum perturbation theory, are used in order to convert these quark masses from an RI/MOM scheme to the MS-bar scheme. We calculate the two-loop corrections in QCD to these matching factors as well as the three-loop mass anomalous dimensions for the RI/SMOM and RI/SMOM_{γμ} schemes. These two schemes are characterized by a symmetric subtraction point. Providing the conversion factors in the two different schemes allows for a better understanding of the systematic uncertainties. The two-loop expansion coefficients of the matching factors for both schemes turn out to be small compared to the traditional RI/MOM schemes. For n_f = 3 quark flavors they are about 0.6%-0.7% and 2%, respectively, of the leading order result at scales of about 2 GeV. Therefore, they will allow for a significant reduction of the systematic uncertainty of light quark mass determinations obtained through this approach. The determination of these matching factors requires the computation of amputated Green's functions with the insertions of quark bilinear operators. As a by-product of our calculation we also provide the corresponding results for the tensor operator.
Li, Haiquan; Dai, Xinbin; Zhao, Xuechun
2008-05-01
Membrane transport proteins play a crucial role in the import and export of ions, small molecules or macromolecules across biological membranes. Currently, there are a limited number of published computational tools which enable the systematic discovery and categorization of transporters prior to costly experimental validation. To approach this problem, we utilized a nearest neighbor method which seamlessly integrates homologous search and topological analysis into a machine-learning framework. Our approach satisfactorily distinguished 484 transporter families in the Transporter Classification Database, a curated and representative database for transporters. A five-fold cross-validation on the database achieved a positive classification rate of 72.3% on average. Furthermore, this method successfully detected transporters in seven model and four non-model organisms, ranging from archaean to mammalian species. A preliminary literature-based validation has cross-validated 65.8% of our predictions on the 11 organisms, including 55.9% of our predictions overlapping with 83.6% of the predicted transporters in TransportDB.
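A hedged sketch of the evaluation protocol described above (nearest-neighbour classification assessed by five-fold cross-validation), using scikit-learn on synthetic feature vectors; the real method integrates homology search and topology analysis into its features, which are not reproduced here, so the dataset and accuracy are purely illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-protein feature vectors and family labels.
n_proteins, n_features, n_families = 600, 20, 5
X = rng.normal(size=(n_proteins, n_features))
y = rng.integers(0, n_families, size=n_proteins)
X += y[:, None] * 0.8  # give each family a slight, detectable signal

# Nearest-neighbour classifier evaluated by five-fold cross-validation.
clf = KNeighborsClassifier(n_neighbors=1)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean positive classification rate: {scores.mean():.3f}")
```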
Application of zonal model on indoor air sensor network design
NASA Astrophysics Data System (ADS)
Chen, Y. Lisa; Wen, Jin
2007-04-01
Growing concerns over the safety of the indoor environment have made the use of sensors ubiquitous. Sensors that detect chemical and biological warfare agents can offer early warning of dangerous contaminants. However, current sensor system design is informed more by intuition and experience than by systematic design. To develop a sensor system design methodology, a proper indoor airflow modeling approach is needed. Various indoor airflow modeling techniques, from complicated computational fluid dynamics approaches to simplified multi-zone approaches, exist in the literature. In this study, the effects of two airflow modeling techniques, the multi-zone modeling technique and the zonal modeling technique, on indoor air protection sensor system design are discussed. Common building attack scenarios, using a typical CBW agent, are simulated. Both multi-zone and zonal models are used to predict airflows and contaminant dispersion. A genetic algorithm is then applied to optimize the sensor location and quantity. Differences in the sensor system design resulting from the two airflow models are discussed for a typical office environment and a large hall environment.
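To make the optimization step concrete, here is a minimal evolutionary search (mutation-only, for brevity) that selects sensor locations among candidate zones so as to maximize a toy coverage/detection score. The zone count, detection matrix and search parameters are invented for illustration; the study's actual objective is built on contaminant dispersion predicted by the airflow models.

```python
import random

random.seed(1)

N_ZONES, N_SENSORS, POP, GENERATIONS = 12, 3, 30, 60

# Hypothetical detection score of a sensor in zone j for a release in zone i.
detect = [[random.random() for _ in range(N_ZONES)] for _ in range(N_ZONES)]

def fitness(layout):
    # For each release scenario, credit the best-placed sensor.
    return sum(max(detect[i][j] for j in layout) for i in range(N_ZONES))

def random_layout():
    return tuple(sorted(random.sample(range(N_ZONES), N_SENSORS)))

def mutate(layout):
    # Swap one chosen zone for an unused one.
    free = [z for z in range(N_ZONES) if z not in layout]
    out = list(layout)
    out[random.randrange(N_SENSORS)] = random.choice(free)
    return tuple(sorted(out))

population = [random_layout() for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]          # keep the fitter half
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP - len(parents))]

best = max(population, key=fitness)
print("best sensor zones:", best, "score:", round(fitness(best), 3))
```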
NASA Technical Reports Server (NTRS)
Davis, D. R.; Greenberg, R.; Hebert, F.
1985-01-01
Models of lunar origin in which the Moon accretes in orbit about the Earth from material approaching the Earth from heliocentric orbits must overcome a fundamental problem: the approach orbits of such material would be, in the simplest approximation, equally likely to be prograde or retrograde about the Earth, with the result that accretion of such material adds mass but not angular momentum to circumterrestrial satellites. Satellite orbits would then decay due to the resulting drag, ultimately impacting onto the Earth. One possibility for adding both material and angular momentum to Earth orbit is investigated: imbalance in the delivered angular momentum between pro and retrograde Earth passing orbits which arises from the three body dynamics of planetesimals approaching the Earth from heliocentric space. In order to study angular momentum delivery to circumterrestrial satellites, the near Earth velocities were numerically computed as a function of distance from the Earth for a large array of orbits systematically spanning heliocentric phase space.
NASA Technical Reports Server (NTRS)
Lewis, Adam; Lymburner, Leo; Purss, Matthew B. J.; Brooke, Brendan; Evans, Ben; Ip, Alex; Dekker, Arnold G.; Irons, James R.; Minchin, Stuart; Mueller, Norman
2015-01-01
The effort and cost required to convert satellite Earth Observation (EO) data into meaningful geophysical variables has prevented the systematic analysis of all available observations. To overcome these problems, we utilise an integrated High Performance Computing and Data environment to rapidly process, restructure and analyse the Australian Landsat data archive. In this approach, the EO data are assigned to a common grid framework that spans the full geospatial and temporal extent of the observations - the EO Data Cube. This approach is pixel-based and incorporates geometric and spectral calibration and quality assurance of each Earth surface reflectance measurement. We demonstrate the utility of the approach with rapid time-series mapping of surface water across the entire Australian continent using 27 years of continuous, 25 m resolution observations. Our preliminary analysis of the Landsat archive shows how the EO Data Cube can effectively liberate high-resolution EO data from their complex sensor-specific data structures and revolutionise our ability to measure environmental change.
A Variational Approach to the Analysis of Dissipative Electromechanical Systems
Allison, Andrew; Pearce, Charles E. M.; Abbott, Derek
2014-01-01
We develop a method for systematically constructing Lagrangian functions for dissipative mechanical, electrical, and electromechanical systems. We derive the equations of motion for some typical electromechanical systems using deterministic principles that are strictly variational. We do not use any ad hoc features that are added on after the analysis has been completed, such as the Rayleigh dissipation function. We generalise the concept of potential, and define generalised potentials for dissipative lumped system elements. Our innovation offers a unified approach to the analysis of electromechanical systems where there are energy and power terms in both the mechanical and electrical parts of the system. Using our novel technique, we can take advantage of the analytic approach from mechanics, and we can apply these powerful analytical methods to electrical and to electromechanical systems. We can analyse systems that include non-conservative forces. Our methodology is deterministic, does not require any special intuition, and is thus suitable for automation via a computer-based algebra package. PMID:24586221
Viljoen, Charle André; Scott Millar, Rob; Engel, Mark E; Shelton, Mary; Burch, Vanessa
2017-12-26
Although ECG interpretation is an essential skill in clinical medicine, medical students and residents often lack ECG competence. Novel teaching methods are increasingly being implemented and investigated to improve ECG training. Computer-assisted instruction is one such method under investigation; however, its efficacy in achieving better ECG competence among medical students and residents remains uncertain. This article describes the protocol for a systematic review and meta-analysis that will compare the effectiveness of computer-assisted instruction with other teaching methods used for the ECG training of medical students and residents. Only studies with a comparative research design will be considered. Articles will be searched for in electronic databases (PubMed, Scopus, Web of Science, Academic Search Premier, CINAHL, PsycINFO, Education Resources Information Center, Africa-Wide Information and Teacher Reference Center). In addition, we will review citation indexes and conduct a grey literature search. Data extraction will be done on articles that meet the predefined eligibility criteria. A descriptive analysis of the different teaching modalities will be provided and their educational impact will be assessed in terms of effect size and a modified version of the Kirkpatrick framework for the evaluation of educational interventions. This systematic review aims to provide evidence as to whether computer-assisted instruction is an effective teaching modality for ECG training. It is hoped that the information garnered from this systematic review will assist in future curricular development and improve ECG training. As this research is a systematic review of published literature, ethical approval is not required. The results will be reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement and will be submitted to a peer-reviewed journal. The protocol and systematic review will be included in a PhD dissertation. CRD42017067054; Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Protecting genomic data analytics in the cloud: state of the art and opportunities.
Tang, Haixu; Jiang, Xiaoqian; Wang, Xiaofeng; Wang, Shuang; Sofia, Heidi; Fox, Dov; Lauter, Kristin; Malin, Bradley; Telenti, Amalio; Xiong, Li; Ohno-Machado, Lucila
2016-10-13
The outsourcing of genomic data into public cloud computing settings raises concerns over privacy and security. Significant advancements in secure computation methods have emerged over the past several years, but such techniques need to be rigorously evaluated for their ability to support the analysis of human genomic data in an efficient and cost-effective manner. With respect to public cloud environments, there are concerns about the inadvertent exposure of human genomic data to unauthorized users. In analyses involving multiple institutions, there is additional concern about data being used beyond the agreed research scope and being processed in untrusted computational environments, which may not satisfy institutional policies. To systematically investigate these issues, the NIH-funded National Center for Biomedical Computing iDASH (integrating Data for Analysis, 'anonymization' and SHaring) hosted the second Critical Assessment of Data Privacy and Protection competition to assess the capacity of cryptographic technologies for protecting computation over human genomes in the cloud and promoting cross-institutional collaboration. Data scientists were challenged to design and engineer practical algorithms for secure outsourcing of genome computation tasks in working software, whereby analyses are performed only on encrypted data. They were also challenged to develop approaches to enable secure collaboration on data from genomic studies generated by multiple organizations (e.g., medical centers) to jointly compute aggregate statistics without sharing individual-level records. The results of the competition indicated that secure computation techniques can enable comparative analysis of human genomes, but greater efficiency (in terms of compute time and memory utilization) is needed before they are sufficiently practical for real world environments.
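One simple way to compute an aggregate statistic without sharing individual-level records, in the spirit of the collaboration track described above, is additive secret sharing: each institution splits its local count into random shares, distributes them, and only the sum of all shares is ever reconstructed. This is a generic textbook sketch with invented counts, not one of the competition entries or the iDASH software.

```python
import secrets

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split an integer into n additive shares that sum to value (mod PRIME)."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three institutions hold private local case counts.
local_counts = [1042, 877, 1315]
n = len(local_counts)

# Each institution sends one share of its count to every party.
all_shares = [share(c, n) for c in local_counts]

# Each party locally sums the shares it received...
partial_sums = [sum(all_shares[i][p] for i in range(n)) % PRIME for p in range(n)]

# ...and only the combination of all partial sums reveals the aggregate.
aggregate = sum(partial_sums) % PRIME
print(aggregate, "==", sum(local_counts))
```

Reconstructing any single institution's count would require all of its shares, which no other single party ever holds.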
The CPU and You: Mastering the Microcomputer.
ERIC Educational Resources Information Center
Kansky, Robert
1983-01-01
Computers are both understandable and controllable. Educators need some understanding of a computer's cognitive profile, component parts, and systematic nature in order to set it to work on some of the teaching tasks that need to be done. Much computer-related vocabulary is discussed. (MP)
Teaching Reading for Students with Intellectual Disabilities: A Systematic Review
ERIC Educational Resources Information Center
Alnahdi, Ghaleb Hamad
2015-01-01
A systematic review of the literature related to instructional strategies to improve reading skills for students with intellectual disabilities was conducted. Studies reviewed fell within three categories: early reading approaches, comprehensive approaches, and one-method approaches. It was concluded that students with intellectual disabilities are…
Functional genomics platform for pooled screening and mammalian genetic interaction maps
Kampmann, Martin; Bassik, Michael C.; Weissman, Jonathan S.
2014-01-01
Systematic genetic interaction maps in microorganisms are powerful tools for identifying functional relationships between genes and defining the function of uncharacterized genes. We have recently implemented this strategy in mammalian cells as a two-stage approach. First, genes of interest are robustly identified in a pooled genome-wide screen using complex shRNA libraries. Second, phenotypes for all pairwise combinations of hit genes are measured in a double-shRNA screen and used to construct a genetic interaction map. Our protocol allows for rapid pooled screening under various conditions without a requirement for robotics, in contrast to arrayed approaches. Each stage of the protocol can be implemented in ~2 weeks, with additional time for analysis and generation of reagents. We discuss considerations for screen design, and present complete experimental procedures as well as a full computational analysis suite for identification of hits in pooled screens and generation of genetic interaction maps. While the protocols outlined here were developed for our original shRNA-based approach, they can be applied more generally, including to CRISPR-based approaches. PMID:24992097
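A common way to turn double-perturbation measurements into an interaction map is to score each gene pair by the deviation of its observed double-knockdown phenotype from the expectation formed from the two single-knockdown phenotypes. The sketch below uses a simple additive expectation on invented phenotype values; it illustrates the scoring idea only and is not the published analysis suite.

```python
# Hypothetical single-shRNA phenotypes (e.g., growth defect) for four genes.
single = {"geneA": -0.30, "geneB": -0.10, "geneC": 0.05, "geneD": -0.20}

# Hypothetical measured double-shRNA phenotypes for gene pairs.
double = {
    ("geneA", "geneB"): -0.70,  # stronger than expected -> negative interaction
    ("geneA", "geneC"): -0.20,
    ("geneB", "geneD"): -0.05,  # weaker than expected -> positive interaction
}

# Interaction score: observed double phenotype minus additive expectation.
scores = {
    pair: obs - (single[pair[0]] + single[pair[1]])
    for pair, obs in double.items()
}

for (g1, g2), s in sorted(scores.items(), key=lambda kv: kv[1]):
    kind = "negative (synergistic)" if s < 0 else "positive (buffering)"
    print(f"{g1}-{g2}: score {s:+.2f} -> {kind}")
```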
Stepwise construction of a metabolic network in Event-B: The heat shock response.
Sanwal, Usman; Petre, Luigia; Petre, Ion
2017-12-01
There is a high interest in constructing large, detailed computational models for biological processes. This is often done by putting together existing submodels and adding to them extra details/knowledge. The result of such approaches is usually a model that can only answer questions on a very specific level of detail, and thus, ultimately, is of limited use. We focus instead on an approach to systematically add details to a model, with formal verification of its consistency at each step. In this way, one obtains a set of reusable models, at different levels of abstraction, to be used for different purposes depending on the question to address. We demonstrate this approach using Event-B, a computational framework introduced to develop formal specifications of distributed software systems. We first describe how to model generic metabolic networks in Event-B. Then, we apply this method for modeling the biological heat shock response in eukaryotic cells, using Event-B refinement techniques. The advantage of using Event-B consists in having refinement as an intrinsic feature; this provides as a final result not only a correct model, but a chain of models automatically linked by refinement, each of which is provably correct and reusable. This is a proof-of-concept that refinement in Event-B is suitable for biomodeling, serving for mastering biological complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Giesel, F L; Delorme, S; Sibbel, R; Kauczor, H-U; Krix, M
2009-06-01
The aim of the study was to conduct a cost-minimization analysis of contrast-enhanced ultrasound (CEUS) compared to multi-phase computed tomography (M-CT) as the diagnostic standard for diagnosing incidental liver lesions. Different scenarios of a cost-covering realization of CEUS in the ambulant sector in the general health insurance system of Germany were compared to the current cost situation. The absolute savings potential was estimated using different approaches for the calculation of the incidence of liver lesions which require further characterization. CEUS was the more cost-effective method in all scenarios in which CEUS examinations were performed at specialized centers (122.18-186.53 euro) compared to M-CT (223.19 euro). With about 40,000 relevant liver lesions per year, systematic implementation of CEUS would result in a cost savings of 4 million euro per year. However, the scenario of a cost-covering CEUS examination for all physicians who perform liver ultrasound would be the most cost-intensive approach (e.g., 407.87 euro at an average ultrasound machine utilization of 25% and a CEUS ratio of 5%). A cost-covering realization of the CEUS method can result in cost savings in the German healthcare system. A centralized approach as proposed by the DEGUM should be targeted.
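The savings estimate quoted above follows from simple per-case arithmetic, reproduced here as a sketch; the per-examination costs and the roughly 40,000 relevant lesions per year are the figures given in the abstract, while the scenario labels are illustrative.

```python
# Figures taken from the abstract (euro per examination).
COST_MCT = 223.19
CEUS_SCENARIOS = {
    "specialized centre, lower bound": 122.18,
    "specialized centre, upper bound": 186.53,
    "all liver-ultrasound physicians": 407.87,
}
N_LESIONS_PER_YEAR = 40_000  # relevant incidental liver lesions per year

for label, cost_ceus in CEUS_SCENARIOS.items():
    saving = (COST_MCT - cost_ceus) * N_LESIONS_PER_YEAR
    print(f"{label}: {saving / 1e6:+.1f} million euro per year")
# The lower-bound centre scenario gives roughly +4.0 million euro per year,
# while the decentralised scenario produces a net extra cost.
```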
NASA Astrophysics Data System (ADS)
Georgiou, Harris
2009-10-01
Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. In a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.
Identification of sequence–structure RNA binding motifs for SELEX-derived aptamers
Hoinka, Jan; Zotenko, Elena; Friedman, Adam; Sauna, Zuben E.; Przytycka, Teresa M.
2012-01-01
Motivation: Systematic Evolution of Ligands by EXponential Enrichment (SELEX) represents a state-of-the-art technology to isolate single-stranded (ribo)nucleic acid fragments, named aptamers, which bind to a molecule (or molecules) of interest via specific structural regions induced by their sequence-dependent fold. This powerful method has applications in designing protein inhibitors, molecular detection systems, therapeutic drugs and antibody replacement among others. However, full understanding and consequently optimal utilization of the process has lagged behind its wide application due to the lack of dedicated computational approaches. At the same time, the combination of SELEX with novel sequencing technologies is beginning to provide the data that will allow the examination of a variety of properties of the selection process. Results: To close this gap we developed Aptamotif, a computational method for the identification of sequence–structure motifs in SELEX-derived aptamers. To increase the chances of identifying functional motifs, Aptamotif uses an ensemble-based approach. We validated the method using two published aptamer datasets containing experimentally determined motifs of increasing complexity. We were able to recreate the authors' findings to a high degree, thus proving the capability of our approach to identify binding motifs in SELEX data. Additionally, using our new experimental dataset, we illustrate the application of Aptamotif to elucidate several properties of the selection process. Contact: przytyck@ncbi.nlm.nih.gov, Zuben.Sauna@fda.hhs.gov PMID:22689764
A Computational Framework for Bioimaging Simulation.
Watabe, Masaki; Arjunan, Satya N V; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi
2015-01-01
Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units.
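As a toy version of the "simulate, then add the systematic effects of the imaging system" idea, the sketch below takes a synthetic ground-truth fluorophore density, blurs it with a Gaussian point-spread function and applies Poisson photon-counting noise plus a camera offset. The PSF width, photon scaling and offset are invented parameters; the framework described above models the optics and detector far more faithfully.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# Ground-truth fluorophore density: a few bright spots on a 128x128 field.
truth = np.zeros((128, 128))
for y, x in [(32, 40), (64, 64), (90, 100)]:
    truth[y, x] = 1.0

# Systematic effects of the imaging system (toy parameters):
psf_sigma = 2.5         # optical blur (point-spread function width, pixels)
photons_per_unit = 500  # conversion from density to expected photon counts
camera_offset = 100     # detector baseline counts

expected = gaussian_filter(truth, psf_sigma) * photons_per_unit + camera_offset
image = rng.poisson(expected)  # photon-counting (shot) noise

print("peak expected counts:", round(float(expected.max()), 1))
print("peak simulated counts:", int(image.max()))
```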
Sanz-Santos, José; Serra, Pere; Torky, Mohamed; Andreo, Felipe; Centeno, Carmen; Mendiluce, Leire; Martínez-Barenys, Carlos; López de Castro, Pedro; Ruiz-Manzano, Juan
2018-04-06
To evaluate the accuracy of systematic mediastinal staging by endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA) (sampling of all visible nodes measuring ≥5 mm from stations N3 to N1 regardless of their positron emission tomography/computed tomography (PET/CT) features) and compare this staging approach with targeted EBUS-TBNA staging (sampling only 18F-fluorodeoxyglucose (FDG)-avid nodes) in patients with N2 non-small cell lung cancer (NSCLC) on PET/CT. Retrospective study of 107 patients who underwent systematic EBUS-TBNA mediastinal staging. The results were compared with those of a hypothetical scenario where only FDG-avid nodes on PET/CT would be sampled. Systematic EBUS-TBNA sampling demonstrated N3 disease in 3 patients, N2 disease in 60 (42 single-station or N2a, 18 multiple-station or N2b) and N0/N1 disease in 44. Of these 44, seven underwent mediastinoscopy, which did not show mediastinal disease; six of the seven proceeded to lung resection, which also showed no mediastinal disease. Thirty-four N0/N1 patients after EBUS-TBNA underwent lung resection directly: N0/N1 was found in 30 and N2 in four (one N2b with a PET/CT showing N2a disease, three N2a). Sensitivity, specificity, negative predictive value, positive predictive value, and overall accuracy of systematic EBUS-TBNA were 94%, 100%, 90%, 100% and 96%, respectively. Compared to targeted EBUS-TBNA, systematic EBUS-TBNA sampling provided additional important clinical information in 14 cases (13%): three N3 cases would have passed unnoticed, and 11 N2b cases would have been staged as N2a. In clinical practice, systematic sampling of the mediastinum by EBUS-TBNA, regardless of PET/CT features, is to be recommended over targeted sampling. Copyright © 2018. Published by Elsevier Inc.
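The reported accuracy figures follow from a standard 2×2 calculation; below is a sketch using counts inferred from the abstract (63 patients with N2/N3 detected by EBUS-TBNA, 4 false negatives found at surgery, 40 true negatives, no false positives), so the exact cell values are a reconstruction rather than data taken from the paper's tables.

```python
# Counts inferred from the abstract (a reconstruction, not the paper's table).
TP = 63  # N2/N3 disease detected by EBUS-TBNA
FN = 4   # N2 disease missed by EBUS-TBNA, found at resection
TN = 40  # negative EBUS-TBNA confirmed by mediastinoscopy/resection
FP = 0   # no false-positive EBUS-TBNA results

sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
ppv = TP / (TP + FP)
npv = TN / (TN + FN)
accuracy = (TP + TN) / (TP + TN + FP + FN)

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}, accuracy {accuracy:.0%}")
# -> sensitivity 94%, specificity 100%, PPV 100%, NPV 91%, accuracy 96%
```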
NASA Astrophysics Data System (ADS)
Abou Chakra, Charbel; Somma, Janine; Elali, Taha; Drapeau, Laurent
2017-04-01
Climate change and its negative impact on water resources are well described. For countries like Lebanon, which is undergoing a major population rise and already faces decreasing precipitation, effective water resources management is crucial. Continuous and systematic monitoring of these resources over long periods of time is therefore an important activity for investigating drought risk scenarios for the Lebanese territory. Snow cover on the Lebanese mountains is the most important water reserve. Consequently, systematic observation of snow cover dynamics plays a major role in supporting hydrologic research with accurate data on snow cover volumes over the melting season. Over the last 20 years, few studies have been conducted on Lebanese snow cover; they focused on estimating the snow cover surface using remote sensing and terrestrial measurements, without obtaining accurate maps for the sampled locations. Indeed, estimating both snow cover area and volume is difficult due to the very high variability of snow accumulation and the topographic heterogeneity of the slopes of the Lebanese mountain chains. Therefore, measuring the snow cover relief in its three-dimensional aspect and computing its Digital Elevation Model are essential to estimate snow cover volume. Despite the need to cover the whole Lebanese territory, we favored an experimental terrestrial topographic site approach because of the cost of high-resolution satellite imagery, its limited accessibility and its acquisition restrictions. It is also very challenging to model snow cover at the national scale. We therefore selected a representative witness sinkhole located at Ouyoun el Siman to undertake systematic and continuous observations based on a topographic approach using a total station. After four years of continuous observations, we established the relation between the snowmelt rate, the date of total melting and the discharges of neighboring springs. Consequently, we are able to forecast, early in the season, the dates of total snowmelt and the low-water flows of springs that are essentially fed by snowmelt water. Simulations were run to predict the snow level between two sampled dates; they provided promising results for extrapolation to the national scale.
Computer Use by School Teachers in Teaching-Learning Process
ERIC Educational Resources Information Center
Bhalla, Jyoti
2013-01-01
Developing countries have a responsibility not merely to provide computers for schools, but also to foster a habit of infusing a variety of ways in which computers can be integrated in teaching-learning amongst the end users of these tools. Earlier research lacked a systematic study of the manner and the extent of computer use by teachers. The…
ERIC Educational Resources Information Center
Baeza-Baeza, Juan J.; Garcia-Alvarez-Coque, M. Celia
2012-01-01
A general systematic approach including ionic strength effects is proposed for the numerical calculation of concentrations of chemical species in multiequilibrium problems. This approach extends the versatility of the approach presented in a previous article and is applied using the Solver option of the Excel spreadsheet to solve real problems…
From research to evidence-informed decision making: a systematic approach
Poot, Charlotte C; van der Kleij, Rianne M; Brakema, Evelyn A; Vermond, Debbie; Williams, Siân; Cragg, Liza; van den Broek, Jos M; Chavannes, Niels H
2018-01-01
Background Knowledge creation forms an integral part of the knowledge-to-action framework aimed at bridging the gap between research and evidence-informed decision making. Although principles of science communication, data visualisation and user-centred design largely impact the effectiveness of communication, their role in knowledge creation is still limited. Hence, this article aims to provide researchers a systematic approach on how knowledge creation can be put into practice. Methods A systematic two-phased approach towards knowledge creation was formulated and executed. First, during a preparation phase the purpose and audience of the knowledge were defined. Subsequently, a developmental phase facilitated how the content is ‘said’ (language) and communicated (channel). This developmental phase proceeded via two pathways: a translational cycle and design cycle, during which core translational and design components were incorporated. The entire approach was demonstrated by a case study. Results The case study demonstrated how the phases in this systematic approach can be operationalised. It furthermore illustrated how created knowledge can be delivered. Conclusion The proposed approach offers researchers a systematic, practical and easy-to-implement tool to facilitate effective knowledge creation towards decision-makers in healthcare. Through the integration of core components of knowledge creation, evidence-informed decision making will ultimately be optimized. PMID:29538728
Virtual reality therapy: an effective treatment for phobias.
North, M M; North, S M; Coble, J R
1998-01-01
Behavioral therapy techniques for treating phobias often include graded exposure of the patient to anxiety-producing stimuli (Systematic Desensitization). However, in utilizing systematic desensitization, research reviews demonstrate that many patients appear to have difficulty in applying imaginative techniques. This chapter describes Virtual Reality Therapy (VRT), a new therapeutic approach that can be used to overcome some of the difficulties inherent in the traditional treatment of phobias. VRT, like current imaginal and in vivo modalities, can generate stimuli that could be utilized in desensitization therapy. Like systematic desensitization therapy, VRT can provide stimuli for patients who have difficulty in imagining scenes and/or are too phobic to experience real situations. As far as we know, the idea of using virtual reality technology to combat psychological disorders was first conceived within the Human-Computer Interaction Group at Clark Atlanta University in November 1992. Since then, we have successfully conducted the first known pilot experiments in the use of virtual reality technologies in the treatment of specific phobias: fear of flying, fear of heights, fear of being in certain situations (such as a dark barn, an enclosed bridge over a river, and in the presence of an animal [a black cat] in a dark room), and fear of public speaking. The results of these experiments are described.
Hallberg, Sílvia Cristina Marceliano; Lisboa, Carolina Saraiva de Macedo; de Souza, Déborah Brandão; Mester, Ariela; Braga, Andréia Zambon; Strey, Artur Marques; da Silva, Camila Sartori
2015-01-01
Information and communication technologies (ICTs) are devices, services and knowledge that reproduce, process and distribute information. Psychotherapy has been influenced by these technologies, and there is a tendency for their role to expand. To describe the current panorama of the scientific literature on psychotherapy and ICTs. This is a systematic and descriptive review. Searches were run on the electronic databases Biblioteca Virtual em Saude (BVS), PsycINFO, Scopus, PePSIC, ScienceDirect and Index Psi, using the Boolean operator AND and the descriptors psychotherapy, computers, Internet, cell phones and social networks. A considerable volume of empirical research was found, published recently in many different parts of the world, especially in the United States. There is very little Brazilian research on the subject. The majority of the studies identified assess the efficacy or describe the development of techniques and psychotherapies, via ICTs, for prevention, diagnosis or treatment of mental and behavioral disorders. The psychopathology most investigated in this area is depression, and it was not possible to determine whether research on the subject is on an increasing trend. The technology most investigated was the Internet, and cognitive-behavioral therapy was the most common theoretical approach in these studies. Systematic reviews of published studies can detect gaps in the research agenda within a specific field of knowledge.
Whitlock, Evelyn P; Eder, Michelle; Thompson, Jamie H; Jonas, Daniel E; Evans, Corinne V; Guirguis-Blake, Janelle M; Lin, Jennifer S
2017-03-02
Guideline developers and other users of systematic reviews need information about whether a medical or preventive intervention is likely to benefit or harm some patients more (or less) than the average in order to make clinical practice recommendations tailored to these populations. However, guidance is lacking on how to include patient subpopulation considerations into the systematic reviews upon which guidelines are often based. In this article, we describe methods developed to consistently consider the evidence for relevant subpopulations in systematic reviews conducted to support primary care clinical preventive service recommendations made by the U.S. Preventive Services Task Force (USPSTF). Our approach is grounded in our experience conducting systematic reviews for the USPSTF and informed by a review of existing guidance on subgroup analysis and subpopulation issues. We developed and refined our approach based on feedback from the Subpopulation Workgroup of the USPSTF and pilot testing on reviews being conducted for the USPSTF. This paper provides processes and tools for incorporating evidence-based identification of important sources of potential heterogeneity of intervention effects into all phases of systematic reviews. Key components of our proposed approach include targeted literature searches and key informant interviews to identify the most important subpopulations a priori during topic scoping, a framework for assessing the credibility of subgroup analyses reported in studies, and structured investigation of sources of heterogeneity of intervention effects. Further testing and evaluation are necessary to refine this proposed approach and demonstrate its utility to the producers and users of systematic reviews beyond the context of the USPSTF. Gaps in the evidence on important subpopulations identified by routinely applying this process in systematic reviews will also inform future research needs.
Validation of a common data model for active safety surveillance research
Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E
2011-01-01
Objective Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893
NASA Astrophysics Data System (ADS)
Liou, Jyun-you; Smith, Elliot H.; Bateman, Lisa M.; McKhann, Guy M., II; Goodman, Robert R.; Greger, Bradley; Davis, Tyler S.; Kellis, Spencer S.; House, Paul A.; Schevon, Catherine A.
2017-08-01
Objective. Epileptiform discharges, an electrophysiological hallmark of seizures, can propagate across cortical tissue in a manner similar to traveling waves. Recent work has focused attention on the origination and propagation patterns of these discharges, yielding important clues to their source location and mechanism of travel. However, systematic studies of methods for measuring propagation are lacking. Approach. We analyzed epileptiform discharges in microelectrode array recordings of human seizures. The array records multiunit activity and local field potentials at 400 micron spatial resolution, from a small cortical site free of obstructions. We evaluated several computationally efficient statistical methods for calculating traveling wave velocity, benchmarking them to analyses of associated neuronal burst firing. Main results. Over 90% of discharges met statistical criteria for propagation across the sampled cortical territory. Detection rate, direction and speed estimates derived from a multiunit estimator were compared to four field potential-based estimators: negative peak, maximum descent, high gamma power, and cross-correlation. Interestingly, the methods that were computationally simplest and most efficient (negative peak and maximal descent) offer non-inferior results in predicting neuronal traveling wave velocities compared to the other two, more complex methods. Moreover, the negative peak and maximal descent methods proved to be more robust against reduced spatial sampling challenges. Using least absolute deviation in place of least squares error minimized the impact of outliers, and reduced the discrepancies between local field potential-based and multiunit estimators. Significance. Our findings suggest that ictal epileptiform discharges typically take the form of exceptionally strong, rapidly traveling waves, with propagation detectable across millimeter distances. The sequential activation of neurons in space can be inferred from clinically-observable EEG data, with a variety of straightforward computation methods available. This opens possibilities for systematic assessments of ictal discharge propagation in clinical and research settings.
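To illustrate the kind of estimator being compared, the sketch below fits a plane to per-electrode discharge times (for example, negative-peak times) as a function of electrode position; the gradient of that plane gives the wave slowness, from which speed and direction follow. It uses ordinary least squares via NumPy on a synthetic planar wave, whereas the study above also considers least-absolute-deviation fits and multiunit-based timings; the grid geometry and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Electrode positions on a 10x10 grid with 0.4 mm pitch (Utah-style array).
pitch = 0.4
xs, ys = np.meshgrid(np.arange(10) * pitch, np.arange(10) * pitch)
pos = np.column_stack([xs.ravel(), ys.ravel()])

# Synthetic traveling wave: true speed 0.3 mm/ms heading 30 degrees.
speed_true, theta = 0.3, np.deg2rad(30)
slowness = np.array([np.cos(theta), np.sin(theta)]) / speed_true  # ms per mm
t = pos @ slowness + rng.normal(scale=0.3, size=len(pos))  # noisy peak times (ms)

# Least-squares plane fit: t ~ a*x + b*y + c ; (a, b) is the estimated slowness.
A = np.column_stack([pos, np.ones(len(pos))])
(a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
speed_est = 1.0 / np.hypot(a, b)
direction_est = np.degrees(np.arctan2(b, a))
print(f"estimated speed {speed_est:.2f} mm/ms, direction {direction_est:.1f} deg")
```

Replacing the least-squares fit with a least-absolute-deviation fit would reduce the influence of outlier channels, in line with the robustness point made above.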
Chaos control in delayed phase space constructed by the Takens embedding theory
NASA Astrophysics Data System (ADS)
Hajiloo, R.; Salarieh, H.; Alasty, A.
2018-01-01
In this paper, the problem of chaos control in discrete-time chaotic systems with unknown governing equations and limited measurable states is investigated. Using the time-series of only one measurable state, an algorithm is proposed to stabilize unstable fixed points. The approach consists of three steps: first, using Takens embedding theory, a delayed phase space preserving the topological characteristics of the unknown system is reconstructed. Second, a dynamic model is identified by recursive least squares method to estimate the time-series data in the delayed phase space. Finally, based on the reconstructed model, an appropriate linear delayed feedback controller is obtained for stabilizing unstable fixed points of the system. Controller gains are computed using a systematic approach. The effectiveness of the proposed algorithm is examined by applying it to the generalized hyperchaotic Henon system, prey-predator population map, and the discrete-time Lorenz system.
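A stripped-down sketch of the first two steps, on synthetic data: build a delay embedding from a single measured state of the Hénon map, then identify a one-step predictor in that delayed phase space by ordinary least squares (standing in for the recursive least squares and the model structure a real implementation would use). The embedding dimension, delay and polynomial model form are illustrative choices.

```python
import numpy as np

# Generate a scalar time series from the Henon map, "measuring" only x.
a, b = 1.4, 0.3
x = np.empty(3000)
x[0], x[1] = 0.1, 0.2
for n in range(1, len(x) - 1):
    x[n + 1] = 1.0 - a * x[n] ** 2 + b * x[n - 1]

# Delay embedding with dimension 2 and delay 1: z_n = (x_n, x_{n-1}).
z_now, z_prev, target = x[1:-1], x[:-2], x[2:]

# Identify a one-step model x_{n+1} = c0 + c1*x_n + c2*x_{n-1} + c3*x_n^2
# by least squares in the reconstructed (delayed) phase space.
features = np.column_stack([np.ones_like(z_now), z_now, z_prev, z_now ** 2])
coef, *_ = np.linalg.lstsq(features, target, rcond=None)
print("identified coefficients:", np.round(coef, 3))  # ~ [1.0, 0.0, 0.3, -1.4]
```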
Bayesian component separation: The Planck experience
NASA Astrophysics Data System (ADS)
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
The renormalization group and the implicit function theorem for amplitude equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkinis, Eleftherios
2008-07-15
This article lays down the foundations of the renormalization group (RG) approach for differential equations characterized by multiple scales. The renormalization of constants through an elimination process and the subsequent derivation of the amplitude equation [Chen et al., Phys. Rev. E 54, 376 (1996)] are given a rigorous but not abstract mathematical form whose justification is based on the implicit function theorem. Developing the theoretical framework that underlies the RG approach leads to a systematization of the renormalization process and to the derivation of explicit closed-form expressions for the amplitude equations that can be carried out with symbolic computation for both linear and nonlinear scalar differential equations and first order systems but independently of their particular forms. Certain nonlinear singular perturbation problems are considered that illustrate the formalism and recover well-known results from the literature as special cases.
Calcium dynamics and signaling in vascular regulation: computational models
Tsoukias, Nikolaos Michael
2013-01-01
Calcium is a universal signaling molecule with a central role in a number of vascular functions including in the regulation of tone and blood flow. Experimentation has provided insights into signaling pathways that lead to or are affected by Ca2+ mobilization in the vasculature. Mathematical modeling offers a systematic approach to the analysis of these mechanisms and can serve as a tool for data interpretation and for guiding new experimental studies. Comprehensive models of calcium dynamics are well advanced for some systems such as the heart. This review summarizes the progress that has been made in modeling Ca2+ dynamics and signaling in vascular cells. Model simulations show how Ca2+ signaling emerges as a result of complex, nonlinear interactions that cannot be properly analyzed using only a reductionist's approach. A strategy of integrative modeling in the vasculature is outlined that will allow linking macroscale pathophysiological responses to the underlying cellular mechanisms. PMID:21061306
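As a flavor of the kind of model being reviewed, here is a minimal two-variable ODE sketch of cytosolic/store Ca2+ exchange (constant influx, a SERCA-like uptake term with Hill kinetics, and a passive leak from the store), integrated with SciPy. The equations and parameter values are generic illustrations, not a model taken from the review.

```python
import numpy as np
from scipy.integrate import solve_ivp

def calcium_rhs(t, y, j_in=0.8, v_serca=2.0, k_serca=0.3, k_leak=0.05, k_out=0.5):
    """Toy cytosolic (c) / store (s) Ca2+ dynamics (concentrations in uM, time in s)."""
    c, s = y
    uptake = v_serca * c**2 / (k_serca**2 + c**2)  # SERCA-like pump into the store
    leak = k_leak * s                              # passive leak back to the cytosol
    dc = j_in - uptake + leak - k_out * c          # cytosolic balance
    ds = uptake - leak                             # store balance
    return [dc, ds]

sol = solve_ivp(calcium_rhs, (0.0, 200.0), [0.1, 1.0], max_step=0.5)
print(f"cytosolic Ca2+ settles near {sol.y[0, -1]:.2f} uM, store near {sol.y[1, -1]:.2f} uM")
```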
NASA Technical Reports Server (NTRS)
Ketchum, E.
1988-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) will be responsible for performing ground attitude determination for Gamma Ray Observatory (GRO) support. The study reported in this paper provides the FDD and the GRO project with ground attitude determination error information and illustrates several uses of the Generalized Calibration System (GCS). GCS, an institutional software tool in the FDD, automates the computation of the expected attitude determination uncertainty that a spacecraft will encounter during its mission. The GRO project is particularly interested in the uncertainty in the attitude determination using Sun sensors and a magnetometer when both star trackers are inoperable. In order to examine the expected attitude errors for GRO, a systematic approach was developed including various parametric studies. The approach identifies pertinent parameters and combines them to form a matrix of test runs in GCS. This matrix formed the basis for this study.
Questioned document workflow for handwriting with automated tools
NASA Astrophysics Data System (ADS)
Das, Krishnanand; Srihari, Sargur N.; Srinivasan, Harish
2012-01-01
During the last few years many document recognition methods have been developed to determine whether a handwriting specimen can be attributed to a known writer. However, in practice, the workflow of the document examiner continues to be manual-intensive. Before a systematic or computational approach can be developed, an articulation of the steps involved in handwriting comparison is needed. We describe the workflow of handwritten questioned document examination, as described in a standards manual, and the steps where existing automation tools can be used. A well-known ransom note case is considered as an example, where one encounters testing for multiple writers of the same document, determining whether the writing is disguised, known writing being formal while the questioned writing is informal, etc. The findings for the particular ransom note case using the tools are given. Observations are also made toward developing a more fully automated approach to handwriting examination.
Efficient similarity-based data clustering by optimal object to cluster reallocation.
Rossignol, Mathias; Lagrange, Mathieu; Cont, Arshia
2018-01-01
We present an iterative flat hard clustering algorithm designed to operate on arbitrary similarity matrices, with the only constraint that these matrices be symmetrical. Although functionally very close to kernel k-means, our proposal performs a maximization of average intra-class similarity, instead of a squared distance minimization, in order to remain closer to the semantics of similarities. We show that this approach permits the relaxing of some conditions on usable affinity matrices like semi-positiveness, as well as opening possibilities for computational optimization required for large datasets. Systematic evaluation on a variety of data sets shows that compared with kernel k-means and the spectral clustering methods, the proposed approach gives equivalent or better performance, while running much faster. Most notably, it significantly reduces memory access, which makes it a good choice for large data collections. Material enabling the reproducibility of the results is made available online.
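A minimal sketch of the reallocation idea described above, assuming only a symmetric similarity matrix; the published algorithm's exact update criterion and the optimizations for large collections are not reproduced here.

```python
import numpy as np

def similarity_clustering(S, k, n_iter=100, seed=0):
    """Greedy reallocation clustering on a symmetric similarity matrix S.

    Each object is moved to the cluster whose current members it is most
    similar to on average, which tends to increase average intra-class
    similarity. Illustrative sketch only; the paper's exact rule may differ.
    """
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        changed = False
        for i in range(n):
            # average similarity of object i to the members of each cluster
            scores = np.full(k, -np.inf)
            for c in range(k):
                members = np.flatnonzero((labels == c) & (np.arange(n) != i))
                if members.size:
                    scores[c] = S[i, members].mean()
            best = int(np.argmax(scores))
            if np.isfinite(scores[best]) and best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels

# toy usage: two blocks with high within-block similarity separate cleanly
S = np.block([[np.full((5, 5), 0.9), np.full((5, 5), 0.1)],
              [np.full((5, 5), 0.1), np.full((5, 5), 0.9)]])
np.fill_diagonal(S, 1.0)
print(similarity_clustering(S, k=2))
```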
Using Fitness Landscapes for Rational Hepatitis C Immunogen Design
NASA Astrophysics Data System (ADS)
Hart, Gregory; Ferguson, Andrew
2015-03-01
Hepatitis C virus afflicts 170 million people worldwide, 2-3% of the global population. Prophylactic vaccination offers the most realistic and cost-effective hope of controlling this epidemic, particularly in the developing world where expensive drug therapies are unavailable. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with the Potts model, we have developed a computational approach to systematically identify viral vulnerabilities and perform rational design of vaccine immunogens. We applied our approach to the nonstructural proteins NS3, NS4A, NS5A, and NS5B, which are crucial for viral replication. The predictions of our model are in good accord with experimental measurements and clinical observations, and we have used our model to design immunogen candidates to elicit T-cell responses against vulnerable regions of these viral proteins.
Interpreter of maladies: redescription mining applied to biomedical data analysis.
Waltman, Peter; Pearlman, Alex; Mishra, Bud
2006-04-01
Comprehensive, systematic and integrated data-centric statistical approaches to disease modeling can provide powerful frameworks for understanding disease etiology. Here, one such computational framework based on redescription mining in both its incarnations, static and dynamic, is discussed. The static framework provides bioinformatic tools applicable to multifaceted datasets, containing genetic, transcriptomic, proteomic, and clinical data for diseased patients and normal subjects. The dynamic redescription framework provides systems biology tools to model complex sets of regulatory, metabolic and signaling pathways in the initiation and progression of a disease. As an example, the case of chronic fatigue syndrome (CFS) is considered, which has so far remained intractable and unpredictable in its etiology and nosology. The redescription mining approaches can be applied to the Centers for Disease Control and Prevention's Wichita (KS, USA) dataset, integrating transcriptomic, epidemiological and clinical data, and can also be used to study how pathways in the hypothalamic-pituitary-adrenal axis affect CFS patients.
Orbital relaxation effects on Kohn–Sham frontier orbital energies in density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, DaDi; Zheng, Xiao, E-mail: xz58@ustc.edu.cn; Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026
2015-04-21
We explore effects of orbital relaxation on Kohn–Sham frontier orbital energies in density functional theory by using a nonempirical scaling correction approach developed in Zheng et al. [J. Chem. Phys. 138, 174105 (2013)]. Relaxation of Kohn–Sham orbitals upon addition/removal of a fractional number of electrons to/from a finite system is determined by a systematic perturbative treatment. The information of orbital relaxation is then used to improve the accuracy of predicted Kohn–Sham frontier orbital energies by Hartree–Fock, local density approximation, and generalized gradient approximation methods. The results clearly highlight the significance of capturing the orbital relaxation effects. Moreover, the proposed scaling correction approach provides a useful way of computing derivative gaps and Fukui quantities of N-electron finite systems (N is an integer), without the need to perform self-consistent-field calculations for (N ± 1)-electron systems.
Contemporary evaluation and management of renal trauma.
Chouhan, Jyoti D; Winer, Andrew G; Johnson, Christina; Weiss, Jeffrey P; Hyacinthe, Llewellyn M
2016-04-01
Renal trauma occurs in approximately 1%-5% of all trauma cases. Improvements in imaging and management over the last two decades have caused a shift in the treatment of this clinical condition. A systematic search of PubMed was performed to identify relevant and contemporary articles that referred to the management and evaluation of renal trauma. Computed tomography remains a mainstay of radiological evaluation in hemodynamically stable patients. There is a growing body of literature showing that conservative, non-operative management of renal trauma is safe, even for Grade IV-V renal injuries. If surgical exploration is planned due to other injuries, a conservative approach to the kidney can often be utilized. Follow-up imaging may be warranted in certain circumstances. Urinoma, delayed bleeding, and hypertension are complications that require follow-up. Appropriate imaging and conservative approaches are a mainstay of current renal trauma management.
Duroy, David; Boutron, Isabelle; Baron, Gabriel; Ravaud, Philippe; Estellat, Candice; Lejoyeux, Michel
2016-08-01
To assess the impact of a computer-assisted Screening, Brief Intervention, and Referral to Treatment (SBIRT) on daily consumption of alcohol by patients with hazardous drinking disorder detected after systematic screening during their admission to an emergency department (ED). Two-arm, parallel group, multicentre, randomized controlled trial with a centralised computer-generated randomization procedure. Four EDs in university hospitals located in the Paris area in France. Patients admitted to the ED for any reason, with hazardous drinking disorder detected after systematic screening (i.e., Alcohol Use Disorder Identification Test score ≥5 for women and ≥8 for men, or self-reported alcohol consumption of ≥7 drinks per week for women and ≥14 for men). The experimental intervention was computer-assisted SBIRT and the comparator was a placebo-controlled intervention (i.e., a computer-assisted education program on nutrition). Interventions were administered in the ED and followed by phone reinforcements at 1 and 3 months. The primary outcome was the mean number of alcohol drinks per day in the previous week, at 12 months. From May 2005 to February 2011, 286 patients were randomized to the computer-assisted SBIRT and 286 to the comparator intervention. The two groups did not differ in the primary outcome, with an adjusted mean difference of 0.12 (95% confidence interval, -0.88 to 1.11). There was no additional benefit of the computer-assisted alcohol SBIRT as compared with the computer-assisted education program on nutrition among patients with hazardous drinking disorder detected by systematic screening during their admission to an ED. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Computer-aided design of high-frequency transistor amplifiers.
NASA Technical Reports Server (NTRS)
Hsieh, C.-C.; Chan, S.-P.
1972-01-01
A systematic step-by-step computer-aided procedure for designing high-frequency transistor amplifiers is described. The technique makes it possible to determine the optimum source impedance which gives a minimum noise figure.
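As standard background (not reproduced from the 1972 paper), the two-port noise relation that motivates searching for an optimum source impedance expresses the noise figure in terms of the source admittance:

```latex
\[
  F \;=\; F_{\min} \;+\; \frac{R_n}{G_s}\,\bigl| Y_s - Y_{\mathrm{opt}} \bigr|^{2},
  \qquad Y_s = G_s + jB_s ,
\]
% where R_n is the equivalent noise resistance and Y_opt the optimum source admittance;
% F reaches its minimum F_min when Y_s = Y_opt, which is why determining the optimum
% source impedance yields the minimum noise figure.
```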
The Emergence of Systematic Review in Toxicology
Stephens, Martin L.; Betts, Kellyn; Beck, Nancy B.; Cogliano, Vincent; Dickersin, Kay; Fitzpatrick, Suzanne; Freeman, James; Gray, George; Hartung, Thomas; McPartland, Jennifer; Rooney, Andrew A.; Scherer, Roberta W.; Verloo, Didier; Hoffmann, Sebastian
2016-01-01
The Evidence-based Toxicology Collaboration hosted a workshop on “The Emergence of Systematic Review and Related Evidence-based Approaches in Toxicology,” on November 21, 2014 in Baltimore, Maryland. The workshop featured speakers from agencies and organizations applying systematic review approaches to questions in toxicology, speakers with experience in conducting systematic reviews in medicine and healthcare, and stakeholders in industry, government, academia, and non-governmental organizations. Based on the workshop presentations and discussion, here we address the state of systematic review methods in toxicology, historical antecedents in both medicine and toxicology, challenges to the translation of systematic review from medicine to toxicology, and thoughts on the way forward. We conclude with a recommendation that as various agencies and organizations adapt systematic review methods, they continue to work together to ensure that there is a harmonized process for how the basic elements of systematic review methods are applied in toxicology. PMID:27208075
Donovan-Maiye, Rory M; Langmead, Christopher J; Zuckerman, Daniel M
2018-01-09
Motivated by the extremely high computing costs associated with estimates of free energies for biological systems using molecular simulations, we further the exploration of existing "belief propagation" (BP) algorithms for fixed-backbone peptide and protein systems. The precalculation of pairwise interactions among discretized libraries of side-chain conformations, along with representation of protein side chains as nodes in a graphical model, enables direct application of the BP approach, which requires only ∼1 s of single-processor run time after the precalculation stage. We use a "loopy BP" algorithm, which can be seen as an approximate generalization of the transfer-matrix approach to highly connected (i.e., loopy) graphs, and it has previously been applied to protein calculations. We examine the application of loopy BP to several peptides as well as the binding site of the T4 lysozyme L99A mutant. The present study reports on (i) the comparison of the approximate BP results with estimates from unbiased estimators based on the Amber99SB force field; (ii) investigation of the effects of varying library size on BP predictions; and (iii) a theoretical discussion of the discretization effects that can arise in BP calculations. The data suggest that, despite their approximate nature, BP free-energy estimates are highly accurate; indeed, they never fall outside confidence intervals from unbiased estimators for the systems where independent results could be obtained. Furthermore, we find that libraries of sufficiently fine discretization (which diminish library-size sensitivity) can be obtained with standard computing resources in most cases. Altogether, the extremely low computing times and accurate results suggest the BP approach warrants further study.
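For readers unfamiliar with the machinery, the sketch below runs generic sum-product loopy BP on a toy pairwise model and checks the node marginals against brute-force enumeration. It is illustrative only and is not the side-chain free-energy code described above; rotamer libraries, force-field energies, and the free-energy evaluation are omitted.

```python
import itertools
import numpy as np

def loopy_bp(psi_node, psi_pair, n_sweeps=200, damping=0.5):
    """Sum-product loopy belief propagation on a discrete pairwise model.

    psi_node: list of 1D arrays of node potentials.
    psi_pair: dict mapping an edge (i, j) to a 2D array psi[x_i, x_j].
    Returns approximate single-node marginals (beliefs).
    """
    n = len(psi_node)
    directed = list(psi_pair) + [(j, i) for (i, j) in psi_pair]
    msgs = {(i, j): np.ones(len(psi_node[j])) for (i, j) in directed}
    for _ in range(n_sweeps):
        new = {}
        for (i, j) in directed:
            pot = psi_pair[(i, j)] if (i, j) in psi_pair else psi_pair[(j, i)].T
            incoming = psi_node[i].copy()
            for (k, l) in directed:
                if l == i and k != j:
                    incoming = incoming * msgs[(k, i)]
            m = pot.T @ incoming          # marginalize over x_i
            m = m / m.sum()
            new[(i, j)] = damping * msgs[(i, j)] + (1.0 - damping) * m
        msgs = new
    beliefs = []
    for i in range(n):
        b = psi_node[i].copy()
        for (k, l) in directed:
            if l == i:
                b = b * msgs[(k, i)]
        beliefs.append(b / b.sum())
    return np.array(beliefs)

# toy model: binary variables on a 4-node cycle (small enough to enumerate exactly)
rng = np.random.default_rng(1)
psi_node = [rng.random(2) + 0.5 for _ in range(4)]
psi_pair = {(0, 1): rng.random((2, 2)) + 0.5, (1, 2): rng.random((2, 2)) + 0.5,
            (2, 3): rng.random((2, 2)) + 0.5, (3, 0): rng.random((2, 2)) + 0.5}

approx = loopy_bp(psi_node, psi_pair)

# exact marginals by brute force, feasible only for tiny systems
exact = np.zeros((4, 2))
for x in itertools.product(range(2), repeat=4):
    w = np.prod([psi_node[i][x[i]] for i in range(4)])
    w *= np.prod([psi_pair[e][x[e[0]], x[e[1]]] for e in psi_pair])
    for i in range(4):
        exact[i, x[i]] += w
exact /= exact.sum(axis=1, keepdims=True)
print(np.round(approx, 3))
print(np.round(exact, 3))
```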
Understanding the Scalability of Bayesian Network Inference Using Clique Tree Growth Curves
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.
2010-01-01
One of the main approaches to performing computation in Bayesian networks (BNs) is clique tree clustering and propagation. The clique tree approach consists of propagation in a clique tree compiled from a Bayesian network, and while it was introduced in the 1980s, there is still a lack of understanding of how clique tree computation time depends on variations in BN size and structure. In this article, we improve this understanding by developing an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, and (ii) the expected number of moral edges in their moral graphs. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for the total size of each set. For the special case of bipartite BNs, there are two sets and two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, where random bipartite BNs generated using the BPART algorithm are studied, we systematically increase the out-degree of the root nodes in bipartite Bayesian networks by increasing the number of leaf nodes. Surprisingly, root clique growth is well-approximated by Gompertz growth curves, an S-shaped family of curves that has previously been used to describe growth processes in biology, medicine, and neuroscience. We believe that this research improves the understanding of the scaling behavior of clique tree clustering for a certain class of Bayesian networks; presents an aid for trade-off studies of clique tree clustering using growth curves; and ultimately provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms.
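As an illustration of the curve fitting involved, the sketch below fits a Gompertz curve to synthetic size-versus-out-degree data; the numbers are invented for demonstration and are not the BPART experimental results.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(x, a, b, c):
    """Gompertz growth curve: a * exp(-b * exp(-c * x))."""
    return a * np.exp(-b * np.exp(-c * x))

# hypothetical data: total root-clique size versus root-node out-degree
degree = np.arange(1, 21, dtype=float)
size = gompertz(degree, 5000.0, 8.0, 0.4)
size *= 1 + 0.02 * np.random.default_rng(0).standard_normal(degree.size)

params, _ = curve_fit(gompertz, degree, size, p0=(size.max(), 5.0, 0.5))
print("fitted (a, b, c):", np.round(params, 3))
```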
Conducting systematic reviews of association (etiology): The Joanna Briggs Institute's approach.
Moola, Sandeep; Munn, Zachary; Sears, Kim; Sfetcu, Raluca; Currie, Marian; Lisy, Karolina; Tufanaru, Catalin; Qureshi, Rubab; Mattis, Patrick; Mu, Peifan
2015-09-01
The systematic review of evidence is the research method which underpins the traditional approach to evidence-based healthcare. There is currently no uniform methodology for conducting a systematic review of association (etiology). This study outlines and describes the Joanna Briggs Institute's approach and guidance for synthesizing evidence related to association with a predominant focus on etiology and contributes to the emerging field of systematic review methodologies. It should be noted that questions of association typically address etiological or prognostic issues. The systematic review of studies to answer questions of etiology follows the same basic principles of systematic review of other types of data. An a priori protocol must inform the conduct of the systematic review, comprehensive searching must be performed and critical appraisal of retrieved studies must be carried out. The overarching objective of systematic reviews of etiology is to identify and synthesize the best available evidence on the factors of interest that are associated with a particular disease or outcome. The traditional PICO (population, interventions, comparators and outcomes) format for systematic reviews of effects does not align with questions relating to etiology. A systematic review of etiology should include the following aspects: population, exposure of interest (independent variable) and outcome (dependent variable). Studies of etiology are predominantly explanatory or predictive. The objective of reviews of explanatory or predictive studies is to contribute to, and improve our understanding of, the relationship of health-related events or outcomes by examining the association between variables. When interpreting possible associations between variables based on observational study data, caution must be exercised due to the likely presence of confounding variables or moderators that may impact on the results. As with all systematic reviews, there are various approaches to present the results, including a narrative, graphical or tabular summary, or meta-analysis. When meta-analysis is not possible, a set of alternative methods for synthesizing research is available. On the basis of the research question and objectives, narrative, tabular and/or visual approaches can be used for data synthesis. There are some special considerations when conducting meta-analysis for questions related to risk and correlation. These include, but are not limited to, causal inference. Systematic review and meta-analysis of studies related to etiology is an emerging methodology in the field of evidence synthesis. These reviews can provide useful information for healthcare professionals and policymakers on the burden of disease. The standardized Joanna Briggs Institute approach offers a rigorous and transparent method to conduct reviews of etiology.
ERIC Educational Resources Information Center
Huang, Xi
2018-01-01
Computer-supported collaborative learning facilitates the extension of second language acquisition into social practice. Studies on its achievement effects speak directly to the pedagogical notion of treating communicative practice in synchronous computer-mediated communication (SCMC): real-time communication that takes place between human beings…
Performance of a computer-based assessment of cognitive function measures in two cohorts of seniors
USDA-ARS?s Scientific Manuscript database
Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool f...
Computer-Based Training: Capitalizing on Lessons Learned
ERIC Educational Resources Information Center
Bedwell, Wendy L.; Salas, Eduardo
2010-01-01
Computer-based training (CBT) is a methodology for providing systematic, structured learning; a useful tool when properly designed. CBT has seen a resurgence given the serious games movement, which is at the forefront of integrating primarily entertainment computer-based games into education and training. This effort represents a multidisciplinary…
Computer-Based Education for Patients with Hypertension: A Systematic Review
ERIC Educational Resources Information Center
Saksena, Anuraag
2010-01-01
Objective: To evaluate the benefits of using computer-based interventions to provide patient education to individuals with hypertension. Methods: MEDLINE, Web of Knowledge, CINAHL, ERIC, EMBASE, and PsychINFO were searched from 1995 to April 2009 using keywords related to "computers," "hypertension," "education," and "clinical trial." Additional…
ERIC Educational Resources Information Center
Asikainen, Henna; Gijbels, David
2017-01-01
The focus of the present paper is on the contribution of the research in the student approaches to learning tradition. Several studies in this field have started from the assumption that students' approaches to learning develop towards more deep approaches to learning in higher education. This paper reports on a systematic review of longitudinal…
Systematic Approach to Calculate the Concentration of Chemical Species in Multi-Equilibrium Problems
ERIC Educational Resources Information Center
Baeza-Baeza, Juan Jose; Garcia-Alvarez-Coque, Maria Celia
2011-01-01
A general systematic approach is proposed for the numerical calculation of multi-equilibrium problems. The approach involves several steps: (i) the establishment of balances involving the chemical species in solution (e.g., mass balances, charge balance, and stoichiometric balance for the reaction products), (ii) the selection of the unknowns (the…
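A minimal numerical sketch of the balance-based scheme described above, applied to a generic weak monoprotic acid rather than the authors' worked problems: the mass and charge balances and the equilibrium expressions are solved simultaneously for the species concentrations.

```python
import numpy as np
from scipy.optimize import fsolve

# illustrative constants for a 0.01 M weak monoprotic acid (acetic-acid-like)
Ka, Kw, C_T = 1.8e-5, 1.0e-14, 0.01

def balances(v):
    h, oh, a, ha = v  # [H+], [OH-], [A-], [HA]
    return [(ha + a) / C_T - 1.0,     # mass balance on the total acid
            (h - oh - a) / C_T,       # charge balance (electroneutrality)
            h * a / (Ka * ha) - 1.0,  # acid dissociation equilibrium
            h * oh / Kw - 1.0]        # water autoprotolysis

h, oh, a, ha = fsolve(balances, [1e-3, 1e-11, 1e-3, C_T])
print("pH =", round(-np.log10(h), 2))
```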
NASA Astrophysics Data System (ADS)
Lavrentiev, N. A.; Rodimova, O. B.; Fazliev, A. Z.; Vigasin, A. A.
2017-11-01
An approach is suggested for the formation of applied ontologies in subject domains where results are represented in graphical form. An approach is also given for systematizing research graphics that contain information on weakly bound carbon dioxide complexes. The results of systematizing research plots and images that characterize the spectral properties of the CO2 complexes are presented.
OPTHYLIC: An Optimised Tool for Hybrid Limits Computation
NASA Astrophysics Data System (ADS)
Busato, Emmanuel; Calvet, David; Theveneaux-Pelzer, Timothée
2018-05-01
A software tool, computing observed and expected upper limits on Poissonian process rates using a hybrid frequentist-Bayesian CLs method, is presented. This tool can be used for simple counting experiments where only signal, background and observed yields are provided or for multi-bin experiments where binned distributions of discriminating variables are provided. It allows the combination of several channels and takes into account statistical and systematic uncertainties, as well as correlations of systematic uncertainties between channels. It has been validated against other software tools and analytical calculations, for several realistic cases.
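To make the method class concrete, here is a toy hybrid CLs calculation for a single counting channel. This is an independent sketch of the general approach, not the OPTHYLIC code: the treatment of nuisances, test statistic options and validation in the tool are more elaborate.

```python
import numpy as np

def cls_counting(n_obs, s, b, db_rel, n_toys=100_000, seed=0):
    """Toy hybrid frequentist-Bayesian CLs for one counting channel.

    The background systematic is marginalized by sampling a truncated
    Gaussian nuisance, and CLs = CLsb / CLb is built from Poisson toys
    using a LEP-style likelihood-ratio test statistic. Sketch only.
    """
    rng = np.random.default_rng(seed)

    def q(n):
        # -2 ln [ P(n | s + b) / P(n | b) ] with nominal expectations
        return -2.0 * (n * np.log((s + b) / b) - s)

    q_obs = q(n_obs)
    # sample the background nuisance for each toy, truncated at zero
    b_toys = np.clip(rng.normal(b, db_rel * b, n_toys), 1e-9, None)
    q_sb = q(rng.poisson(s + b_toys))   # signal-plus-background ensemble
    q_b = q(rng.poisson(b_toys))        # background-only ensemble
    cl_sb = np.mean(q_sb >= q_obs)
    cl_b = np.mean(q_b >= q_obs)
    return cl_sb / cl_b if cl_b > 0 else 1.0

# example: 4 observed events, expected background 3.0 with 20% uncertainty
for s in (2.0, 5.0, 8.0):
    print("s =", s, " CLs =", round(cls_counting(4, s, 3.0, 0.20), 3))
```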
Intelligent process mapping through systematic improvement of heuristics
NASA Technical Reports Server (NTRS)
Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.
1992-01-01
The present system for automatic learning/evaluation of novel heuristic methods applicable to the mapping of communication-process sets on a computer network has its basis in the testing of a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new postgame-analysis heuristic methods, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.
Begon, Mickaël; Andersen, Michael Skipper; Dumas, Raphaël
2018-03-01
Multibody kinematics optimization (MKO) aims to reduce soft tissue artefact (STA) and is a key step in musculoskeletal modeling. The objective of this review was to identify the numerical methods, their validation and performance for the estimation of the human joint kinematics using MKO. Seventy-four papers were extracted from a systematized search in five databases and cross-referencing. Model-derived kinematics were obtained using either constrained optimization or Kalman filtering to minimize the difference between measured (i.e., by skin markers, electromagnetic or inertial sensors) and model-derived positions and/or orientations. While hinge, universal, and spherical joints prevail, advanced models (e.g., parallel and four-bar mechanisms, elastic joint) have been introduced, mainly for the knee and shoulder joints. Models and methods were evaluated using: (i) simulated data based, however, on oversimplified STA and joint models; (ii) reconstruction residual errors, ranging from 4 mm to 40 mm; (iii) sensitivity analyses which highlighted the effect (up to 36 deg and 12 mm) of model geometrical parameters, joint models, and computational methods; (iv) comparison with other approaches (i.e., single body kinematics optimization and nonoptimized kinematics); (v) repeatability studies that showed low intra- and inter-observer variability; and (vi) validation against ground-truth bone kinematics (with errors between 1 deg and 22 deg for tibiofemoral rotations and between 3 deg and 10 deg for glenohumeral rotations). Moreover, MKO was applied to various movements (e.g., walking, running, arm elevation). Additional validations, especially for the upper limb, should be undertaken and we recommend a more systematic approach for the evaluation of MKO. In addition, further model development, scaling, and personalization methods are required to better estimate the secondary degrees-of-freedom (DoF).
Quantifying confidence in density functional theory predictions of magnetic ground states
NASA Astrophysics Data System (ADS)
Houchins, Gregory; Viswanathan, Venkatasubramanian
2017-10-01
Density functional theory (DFT) simulations, at the generalized gradient approximation (GGA) level, are being routinely used for material discovery based on high-throughput descriptor-based searches. The success of descriptor-based material design relies on eliminating bad candidates and keeping good candidates for further investigation. While DFT has been widely successful for the former, oftentimes good candidates are lost due to the uncertainty associated with the DFT-predicted material properties. Uncertainty associated with DFT predictions has gained prominence and has led to the development of exchange correlation functionals that have built-in error estimation capability. In this work, we demonstrate the use of built-in error estimation capabilities within the BEEF-vdW exchange correlation functional for quantifying the uncertainty associated with the magnetic ground state of solids. We demonstrate this approach by calculating the uncertainty estimate for the energy difference between the different magnetic states of solids and compare them against a range of GGA exchange correlation functionals as is done in many first-principles calculations of materials. We show that this estimate reasonably bounds the range of values obtained with the different GGA functionals. The estimate is determined as a postprocessing step and thus provides a computationally robust and systematic approach to estimating uncertainty associated with predictions of magnetic ground states. We define a confidence value (c-value) that incorporates all calculated magnetic states in order to quantify the concurrence of the prediction at the GGA level and argue that predictions of magnetic ground states from GGA-level DFT are incomplete without an accompanying c-value. We demonstrate the utility of this method using a case study of Li-ion and Na-ion cathode materials, and the c-value metric correctly identifies that GGA-level DFT will have low predictability for NaFePO4F. Further, there needs to be a systematic test of a collection of plausible magnetic states, especially in identifying antiferromagnetic (AFM) ground states. We believe that our approach of estimating uncertainty can be readily incorporated into all high-throughput computational material discovery efforts and this will lead to a dramatic increase in the likelihood of finding good candidate materials.
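One plausible way to compute such a confidence value from BEEF-vdW ensemble energies is sketched below. The function name, the interpretation as an agreement fraction, and the synthetic numbers are assumptions for illustration; the paper's precise definition may differ in detail.

```python
import numpy as np

def c_value(ensemble_energies):
    """Fraction of ensemble functionals agreeing with the best-estimate ground state.

    ensemble_energies: array of shape (n_states, n_ensemble) holding the
    BEEF-vdW ensemble total energies of each candidate magnetic state
    (e.g., FM and several AFM orderings). Sketch of one reading of the c-value.
    """
    e = np.asarray(ensemble_energies)
    best = np.argmin(e.mean(axis=1))   # ground state of the best-fit functional
    winners = np.argmin(e, axis=0)     # ground state for each ensemble member
    return np.mean(winners == best)

# hypothetical ensemble: 3 magnetic states, 2000 ensemble functionals
rng = np.random.default_rng(0)
energies = np.array([0.00, 0.05, 0.12])[:, None] + 0.04 * rng.standard_normal((3, 2000))
print("c-value:", round(c_value(energies), 3))
```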
Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.
Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
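Since the abstract names the alternating direction method of multipliers without detail, the toy consensus-ADMM iteration below may help fix ideas. It is a generic sketch with simplified quadratic costs, not the paper's optimal power flow decomposition.

```python
import numpy as np

def consensus_admm(a, rho=1.0, n_iter=100):
    """Consensus ADMM on a toy quadratic sharing problem.

    Each 'agent' i (a stand-in for a customer-owned PV system) privately
    minimizes (x_i - a_i)^2 while all agents must agree on a common value z
    (a stand-in for a coordinated setpoint). Illustrative only.
    """
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)
    u = np.zeros_like(a)   # scaled dual variables
    z = 0.0
    for _ in range(n_iter):
        # local (decentralized) updates: closed form for quadratic costs
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
        # coordination step: average of local variables plus duals
        z = np.mean(x + u)
        # dual update enforcing consensus
        u = u + x - z
    return x, z

x, z = consensus_admm([1.0, 2.0, 6.0])
print(np.round(x, 3), round(z, 3))   # all x_i and z converge to mean(a) = 3.0
```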
Recursive linearization of multibody dynamics equations of motion
NASA Technical Reports Server (NTRS)
Lin, Tsung-Chieh; Yae, K. Harold
1989-01-01
The equations of motion of a multibody system are nonlinear in nature, and thus pose a difficult problem in linear control design. One approach is to form a first-order approximation through numerical perturbations at a given configuration, and to design a control law based on the linearized model. Here, a linearized model is generated analytically by following the steps of the recursive derivation of the equations of motion. The equations of motion are first written in a Newton-Euler form, which is systematic and easy to construct; then, they are transformed into a relative coordinate representation, which is more efficient in computation. A new computational method for linearization is obtained by applying a series of first-order analytical approximations to the recursive kinematic relationships. The method has proved to be computationally more efficient because of its recursive nature. It has also turned out to be more accurate because analytical perturbation circumvents numerical differentiation and other associated numerical operations that may accumulate computational error, requiring only analytical operations on matrices and vectors. The power of the proposed linearization algorithm is demonstrated, in comparison to a numerical perturbation method, with a two-link manipulator and a seven-degree-of-freedom robotic manipulator. Its application to control design is also demonstrated.
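For contrast, the numerical-perturbation baseline that the analytical recursion is compared against can be sketched in a few lines: a generic central-difference linearization of xdot = f(x, u). The recursive analytical scheme itself is not reproduced here.

```python
import numpy as np

def linearize_fd(f, x0, u0, eps=1e-6):
    """Central-difference linearization of xdot = f(x, u) about (x0, u0).

    Returns A = df/dx and B = df/du evaluated at the given configuration.
    """
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# toy example: damped pendulum with a torque input
def pendulum(x, u):
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) - 0.1 * omega + u[0]])

A, B = linearize_fd(pendulum, np.array([0.0, 0.0]), np.array([0.0]))
print(A, B, sep="\n")
```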
Knowledge-based computer systems for radiotherapy planning.
Kalet, I J; Paluszynski, W
1990-08-01
Radiation therapy is one of the first areas of clinical medicine to utilize computers in support of routine clinical decision making. The role of the computer has evolved from simple dose calculations to elaborate interactive graphic three-dimensional simulations. These simulations can combine external irradiation from megavoltage photons, electrons, and particle beams with interstitial and intracavitary sources. With the flexibility and power of modern radiotherapy equipment, and the ability of computer programs to simulate anything the machinery can do, we now face the challenge of utilizing this capability to design more effective radiation treatments. How can we manage the increased complexity of sophisticated treatment planning? A promising approach will be to use artificial intelligence techniques to systematize our present knowledge about the design of treatment plans, and to provide a framework for developing new treatment strategies. Far from replacing the physician, physicist, or dosimetrist, artificial intelligence-based software tools can assist the treatment planning team in producing more powerful and effective treatment plans. Research in progress using knowledge-based (AI) programming in treatment planning has already indicated the usefulness of such concepts as rule-based reasoning, hierarchical organization of knowledge, and reasoning from prototypes. Problems to be solved include how to handle continuously varying parameters and how to evaluate plans in order to direct improvements.
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.
2015-01-01
A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed. These included an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.
Hattrick-Simpers, Jason R.; Gregoire, John M.; Kusne, A. Gilad
2016-05-26
With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue remains transforming structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. Here, we review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have in the generation of phase diagrams and beyond.
Comparative analysis of data mining techniques for business data
NASA Astrophysics Data System (ADS)
Jamil, Jastini Mohd; Shaharanee, Izwan Nizal Mohd
2014-12-01
Data mining is the process of employing one or more computer learning techniques to automatically analyze and extract knowledge from data contained within a database. Companies are using this tool to further understand their customers, to design targeted sales and marketing campaigns, to predict what product customers will buy and the frequency of purchase, and to spot trends in customer preferences that can lead to new product development. In this paper, we conduct a systematic approach to explore several data mining techniques in business applications. The experimental results reveal that all data mining techniques accomplish their goals perfectly, but each technique has its own characteristics and specifications that demonstrate its accuracy, proficiency and preference.
[Carpus and distal radioulnar joint: Clinical and radiological examination].
Spies, C K; Langer, M F; Unglaub, F; Mühldorfer-Fodor, M; Müller, L P; Ahrens, C; Schlindwein, S F
2016-08-01
A precise medical history and specific symptom-oriented clinical tests of the wrist joint should always precede any radiological, computed tomography (CT) or magnetic resonance imaging (MRI) diagnostics. In many cases, specific clinical tests of the wrist joint allow at least a preliminary diagnosis, which can be supported by standard radiography using correct projections. A systematic approach is recommended covering the radiocarpal, midcarpal, ulnocarpal and distal radioulnar joints. Exact identification of the palpable anatomic landmarks is mandatory for correct application and interpretation of the various clinical tests. The results of the clinical tests in combination with radiological imaging can often precisely detect ruptures of distinct wrist joint ligaments and localized arthritis.
Conformational equilibrium in supramolecular chemistry: Dibutyltriuret case.
Mroczyńska, Karina; Kaczorowska, Małgorzata; Kolehmainen, Erkki; Grubecki, Ireneusz; Pietrzak, Marek; Ośmiałowski, Borys
2015-01-01
The association of substituted benzoates and naphthyridine dianions was used to study the complexation of dibutyltriuret. The title molecule is the simplest molecule able to form two intramolecular hydrogen bonds. The naphthyridine salt was used to break two intramolecular hydrogen bonds at a time, while the use of substituted benzoates enabled a systematic approach to studying association. Both titrations and variable-temperature measurements shed light on the importance of conformational equilibrium and its influence on association in solution. Moreover, the associates were observed by mass spectrometry. The DFT-based computations for the complexes and single-bond rotational barriers support the experimental data and help in understanding the properties of multiply hydrogen-bonded complexes.
Conducting survey research at nursing conferences.
Sleutel, M R
2001-01-01
Conferences can provide large numbers of potential subjects in one location, yet there is little published guidance on how to collect data at a conference site. A computer search revealed no citations on this topic. This article outlines a systematic strategy to plan and perform research at conferences. This article provides a step-by-step process to guide researchers in planning and conducting survey research at conferences. Initial components in planning data collection at a conference include making a timeline and getting permission. Detailed advanced planning involves specific strategies for attracting participants, and for distributing and collecting the questionnaires. Travel provisions and on-site logistical approaches are explained, followed by suggestions for post-conference activities.
Modeling Electronic Quantum Transport with Machine Learning
Lopez Bezanilla, Alejandro; von Lilienfeld Toal, Otto A.
2014-06-11
We present a machine learning approach to solve electronic quantum transport equations of one-dimensional nanostructures. The transmission coefficients of disordered systems were computed to provide training and test data sets to the machine. The system’s representation encodes energetic as well as geometrical information to characterize similarities between disordered configurations, while the Euclidean norm is used as a measure of similarity. Errors for out-of-sample predictions systematically decrease with training set size, enabling the accurate and fast prediction of new transmission coefficients. The remarkable performance of our model to capture the complexity of interference phenomena lends further support to its viability in dealing with transport problems of undulatory nature.
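A generic sketch in this spirit is kernel ridge regression with a Gaussian kernel on Euclidean distances between configuration descriptors. The descriptors, kernel width and target values below are synthetic stand-ins, not the representation or data used in the paper.

```python
import numpy as np

def krr_train(X, y, gamma=0.5, lam=1e-6):
    """Fit kernel ridge regression with a Gaussian (Euclidean-distance) kernel."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * d2)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_test, gamma=0.5):
    """Predict targets for new configurations from the trained weights."""
    d2 = np.sum((X_test[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2) @ alpha

# synthetic "disorder descriptor -> transmission" data, for illustration only
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 8))        # disorder configurations
y = np.exp(-np.linalg.norm(X, axis=1))       # stand-in transmission values
alpha = krr_train(X[:150], y[:150])
pred = krr_predict(X[:150], alpha, X[150:])
print("test RMSE:", np.sqrt(np.mean((pred - y[150:]) ** 2)))
```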
Tuning the critical solution temperature of polymers by copolymerization
NASA Astrophysics Data System (ADS)
Schulz, Bernhard; Chudoba, Richard; Heyda, Jan; Dzubiella, Joachim
2015-12-01
We study statistical copolymerization effects on the upper critical solution temperature (CST) of generic homopolymers by means of coarse-grained Langevin dynamics computer simulations and mean-field theory. Our systematic investigation reveals that the CST can change monotonically or non-monotonically with copolymerization, as observed in experimental studies, depending on the degree of non-additivity of the monomer (A-B) cross-interactions. The simulation findings are confirmed and qualitatively explained by a combination of a two-component Flory-de Gennes model for polymer collapse and a simple thermodynamic expansion approach. Our findings provide some rationale behind the effects of copolymerization and may be helpful for tuning CST behavior of polymers in soft material design.
Quantitative, steady-state properties of Catania's computational model of the operant reserve.
Berg, John P; McDowell, J J
2011-05-01
Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady state behavior. Copyright © 2011 Elsevier B.V. All rights reserved.
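For reference, the generalized matching law against which such steady-state behavior is commonly evaluated can be written as follows; this is standard background, not a result of the article.

```latex
\[
  \log\!\left(\frac{B_1}{B_2}\right) \;=\; a\,\log\!\left(\frac{R_1}{R_2}\right) \;+\; \log b ,
\]
% where B_1, B_2 are response rates on the two alternatives, R_1, R_2 the obtained
% reinforcement rates, a the sensitivity exponent and b a bias parameter;
% strict matching corresponds to a = 1 and b = 1.
```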
Uribe-Convers, Simon; Duke, Justin R.; Moore, Michael J.; Tank, David C.
2014-01-01
• Premise of the study: We present an alternative approach for molecular systematic studies that combines long PCR and next-generation sequencing. Our approach can be used to generate templates from any DNA source for next-generation sequencing. Here we test our approach by amplifying complete chloroplast genomes, and we present a set of 58 potentially universal primers for angiosperms to do so. Additionally, this approach is likely to be particularly useful for nuclear and mitochondrial regions. • Methods and Results: Chloroplast genomes of 30 species across angiosperms were amplified to test our approach. Amplification success varied depending on whether PCR conditions were optimized for a given taxon. To further test our approach, some amplicons were sequenced on an Illumina HiSeq 2000. • Conclusions: Although here we tested this approach by sequencing plastomes, long PCR amplicons could be generated using DNA from any genome, expanding the possibilities of this approach for molecular systematic studies. PMID:25202592
An Alternative Approach to the Teaching of Systematic Transition Metal Chemistry.
ERIC Educational Resources Information Center
Hathaway, Brian
1979-01-01
Presents an alternative approach to teaching Systematic Transition Metal Chemistry built around a "skeleton" of the transition metal chemistry features of interest. The "skeleton" is intended as a guide to predicting the chemistry of a selected compound. (Author/SA)
Likić, Vladimir A
2009-01-01
Gas chromatography-mass spectrometry (GC-MS) is a widely used analytical technique for the identification and quantification of trace chemicals in complex mixtures. When complex samples are analyzed by GC-MS it is common to observe co-elution of two or more components, resulting in an overlap of signal peaks observed in the total ion chromatogram. In such situations manual signal analysis is often the most reliable means for the extraction of pure component signals; however, a systematic manual analysis over a number of samples is both tedious and prone to error. In the past 30 years a number of computational approaches were proposed to assist in the process of the extraction of pure signals from co-eluting GC-MS components. This includes empirical methods, comparison with library spectra, eigenvalue analysis, regression and others. However, to date no approach has been recognized as best, nor accepted as standard. This situation hampers general GC-MS capabilities, and in particular has implications for the development of robust, high-throughput GC-MS analytical protocols required in metabolic profiling and biomarker discovery. Here we first discuss the nature of GC-MS data, and then review some of the approaches proposed for the extraction of pure signals from co-eluting components. We summarize and classify different approaches to this problem, and examine why so many approaches proposed in the past have failed to live up to their full promise. Finally, we give some thoughts on the future developments in this field, and suggest that the progress in general computing capabilities attained in the past two decades has opened new horizons for tackling this important problem. PMID:19818154
NASA Astrophysics Data System (ADS)
Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang
2010-05-01
CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate dependence on uncertain parameters (porosity, permeability etc.) and design parameters (injection rate, depth etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al., Computational Geosciences 13, 2009). A reasonable compromise between computational effort and precision was already reached with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification for modeling CO2 injection, and the consequences can be stronger than when neglecting several physical phenomena (e.g. phase transition, convective mixing, capillary forces etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
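The response-surface idea can be illustrated with a deliberately simplified sketch: ordinary least squares on a second-order polynomial basis evaluated at randomly drawn parameter points, standing in for the expensive simulator. The probabilistic collocation method proper uses orthogonal polynomials and specific collocation nodes, which are not reproduced here.

```python
import numpy as np

def poly2_design(theta):
    """Second-order polynomial basis in two uncertain/design parameters."""
    t1, t2 = theta[:, 0], theta[:, 1]
    return np.column_stack([np.ones_like(t1), t1, t2, t1 * t2, t1**2, t2**2])

# stand-in for an expensive simulator output (e.g., a predicted leakage rate)
def expensive_model(theta):
    return np.exp(0.8 * theta[:, 0]) + 0.3 * theta[:, 1] ** 2

rng = np.random.default_rng(0)
nodes = rng.standard_normal((30, 2))             # sampled points in parameter space
coeff, *_ = np.linalg.lstsq(poly2_design(nodes), expensive_model(nodes), rcond=None)

# cheap Monte Carlo on the surrogate instead of the full simulator
samples = rng.standard_normal((100_000, 2))
surrogate = poly2_design(samples) @ coeff
print("mean:", surrogate.mean(), " 95th percentile:", np.percentile(surrogate, 95))
```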
Computational Biorheology of Human Blood Flow in Health and Disease
Fedosov, Dmitry A.; Dao, Ming; Karniadakis, George Em; Suresh, Subra
2014-01-01
Hematologic disorders arising from infectious diseases, hereditary factors and environmental influences can lead to, and can be influenced by, significant changes in the shape, mechanical and physical properties of red blood cells (RBCs), and the biorheology of blood flow. Hence, modeling of hematologic disorders should take into account the multiphase nature of blood flow, especially in arterioles and capillaries. We present here an overview of a general computational framework based on dissipative particle dynamics (DPD) which has broad applicability in cell biophysics with implications for diagnostics, therapeutics and drug efficacy assessments for a wide variety of human diseases. This computational approach, validated by independent experimental results, is capable of modeling the biorheology of whole blood and its individual components during blood flow so as to investigate cell mechanistic processes in health and disease. DPD is a Lagrangian method that can be derived from systematic coarse-graining of molecular dynamics but can scale efficiently up to arterioles and can also be used to model RBCs down to the spectrin level. We start from experimental measurements of a single RBC to extract the relevant biophysical parameters, using single-cell measurements involving such methods as optical tweezers, atomic force microscopy and micropipette aspiration, and cell-population experiments involving microfluidic devices. We then use these validated RBC models to predict the biorheological behavior of whole blood in healthy or pathological states, and compare the simulations with experimental results involving apparent viscosity and other relevant parameters. While the approach discussed here is sufficiently general to address a broad spectrum of hematologic disorders including certain types of cancer, this paper specifically deals with results obtained using this computational framework for blood flow in malaria and sickle cell anemia. PMID:24419829
A Research and Development Strategy for High Performance Computing.
ERIC Educational Resources Information Center
Office of Science and Technology Policy, Washington, DC.
This report is the result of a systematic review of the status and directions of high performance computing and its relationship to federal research and development. Conducted by the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), the review involved a series of workshops attended by numerous computer scientists and…
ERIC Educational Resources Information Center
Wild, Mary
2009-01-01
The paper reports the results of a randomised control trial investigating the use of computer-aided instruction (CAI) for practising phonological awareness skills with beginning readers. Two intervention groups followed the same phonological awareness programme: one group undertook practice exercises using a computer and the other group undertook…
Multiscale solvers and systematic upscaling in computational physics
NASA Astrophysics Data System (ADS)
Brandt, A.
2005-07-01
Multiscale algorithms can overcome the scale-born bottlenecks that plague most computations in physics. These algorithms employ separate processing at each scale of the physical space, combined with interscale iterative interactions, in ways which use finer scales very sparingly. Having been developed first and well known as multigrid solvers for partial differential equations, highly efficient multiscale techniques have more recently been developed for many other types of computational tasks, including: inverse PDE problems; highly indefinite (e.g., standing wave) equations; Dirac equations in disordered gauge fields; fast computation and updating of large determinants (as needed in QCD); fast integral transforms; integral equations; astrophysics; molecular dynamics of macromolecules and fluids; many-atom electronic structures; global and discrete-state optimization; practical graph problems; image segmentation and recognition; tomography (medical imaging); fast Monte-Carlo sampling in statistical physics; and general, systematic methods of upscaling (accurate numerical derivation of large-scale equations from microscopic laws).
Living systematic review: 1. Introduction-the why, what, when, and how.
Elliott, Julian H; Synnot, Anneliese; Turner, Tari; Simmonds, Mark; Akl, Elie A; McDonald, Steve; Salanti, Georgia; Meerpohl, Joerg; MacLehose, Harriet; Hilton, John; Tovey, David; Shemilt, Ian; Thomas, James
2017-11-01
Systematic reviews are difficult to keep up to date, but failure to do so leads to a decay in review currency, accuracy, and utility. We are developing a novel approach to systematic review updating termed "Living systematic review" (LSR): systematic reviews that are continually updated, incorporating relevant new evidence as it becomes available. LSRs may be particularly important in fields where research evidence is emerging rapidly, current evidence is uncertain, and new research may change policy or practice decisions. We hypothesize that a continual approach to updating will achieve greater currency and validity, and increase the benefits to end users, with feasible resource requirements over time. Copyright © 2017 Elsevier Inc. All rights reserved.
A systematic approach to embedded biomedical decision making.
Song, Zhe; Ji, Zhongkai; Ma, Jian-Guo; Sputh, Bernhard; Acharya, U Rajendra; Faust, Oliver
2012-11-01
Embedded decision making is a key feature of many biomedical systems. In most cases human life directly depends on correct decisions made by these systems, therefore they have to work reliably. This paper describes how we applied systems engineering principles to design a high-performance embedded classification system in a systematic and well-structured way. We introduce the structured design approach by discussing requirements capturing, specification refinement, implementation and testing. Thereby, we follow systems engineering principles and execute each of these processes as formally as possible. The requirements, which motivate the system design, describe an automated decision making system for diagnostic support. These requirements are refined into the implementation of a support vector machine (SVM) algorithm which enables us to integrate automated decision making in embedded systems. With a formal model we establish the functionality, stability and reliability of the system. Furthermore, we investigated different parallel processing configurations of this computationally complex algorithm. We found that, by adding SVM processes, an almost linear speedup is possible. Once we established these system properties, we translated the formal model into an implementation. The resulting implementation was tested using XMOS processors with both normal and failure cases, to build up trust in the implementation. Finally, we demonstrated that our parallel implementation achieves the speedup predicted by the formal model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Efficient Mean Field Variational Algorithm for Data Assimilation (Invited)
NASA Astrophysics Data System (ADS)
Vrettas, M. D.; Cornford, D.; Opper, M.
2013-12-01
Data assimilation algorithms combine available observations of physical systems with the assumed model dynamics in a systematic manner, to produce better estimates of initial conditions for prediction. Broadly, they can be categorized into three main approaches: (a) sequential algorithms, (b) sampling methods and (c) variational algorithms which transform the density estimation problem to an optimization problem. However, given finite computational resources, only a handful of ensemble Kalman filters and 4DVar algorithms have been applied operationally to very high dimensional geophysical applications, such as weather forecasting. In this paper we present a recent extension to our variational Bayesian algorithm which seeks the 'optimal' posterior distribution over the continuous time states, within a family of non-stationary Gaussian processes. Our initial work on variational Bayesian approaches to data assimilation, unlike the well-known 4DVar method which seeks only the most probable solution, computes the best time varying Gaussian process approximation to the posterior smoothing distribution for dynamical systems that can be represented by stochastic differential equations. This approach was based on minimising the Kullback-Leibler divergence, over paths, between the true posterior and our Gaussian process approximation. Whilst the observations were informative enough to keep the posterior smoothing density close to Gaussian, the algorithm proved very effective on low-dimensional systems (e.g. O(10)D). However, for higher dimensional systems, the high computational demands make the algorithm prohibitively expensive. To overcome the difficulties presented in the original framework and make our approach more efficient in higher dimensional systems, we have been developing a new mean field version of the algorithm which treats the state variables at any given time as being independent in the posterior approximation, while still accounting for their relationships in the mean solution arising from the original system dynamics. Here we present this new mean field approach, illustrating its performance on a range of benchmark data assimilation problems whose dimensionality varies from O(10) to O(10^3)D. We emphasise that the variational Bayesian approach we adopt, unlike other variational approaches, provides a natural bound on the marginal likelihood of the observations given the model parameters, which also allows for inference of (hyper-) parameters such as observational errors, parameters in the dynamical model and model error representation. We also stress that since our approach is intrinsically parallel it can be implemented very efficiently to address very long data assimilation time windows. Moreover, like most traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem, therefore its complexity can be tuned to the available computational resources. We finish with a sketch of possible future directions.
Computational Precision of Mental Inference as Critical Source of Human Choice Suboptimality.
Drugowitsch, Jan; Wyart, Valentin; Devauchelle, Anne-Dominique; Koechlin, Etienne
2016-12-21
Making decisions in uncertain environments often requires combining multiple pieces of ambiguous information from external cues. In such conditions, human choices resemble optimal Bayesian inference, but typically show a large suboptimal variability whose origin remains poorly understood. In particular, this choice suboptimality might arise from imperfections in mental inference rather than in peripheral stages, such as sensory processing and response selection. Here, we dissociate these three sources of suboptimality in human choices based on combining multiple ambiguous cues. Using a novel quantitative approach for identifying the origin and structure of choice variability, we show that imperfections in inference alone cause a dominant fraction of suboptimal choices. Furthermore, two-thirds of this suboptimality appear to derive from the limited precision of neural computations implementing inference rather than from systematic deviations from Bayes-optimal inference. These findings set an upper bound on the accuracy and ultimate predictability of human choices in uncertain environments. Copyright © 2016 Elsevier Inc. All rights reserved.
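One simple way to formalize the distinction the authors draw is to write the ideal observer's decision variable as a sum of cue log-likelihood ratios and to add noise to each inference step. The specific parametrization below (Gaussian noise of variance sigma^2 on each cue's contribution) is an illustrative assumption, not the authors' model.

```latex
% Illustrative noisy-inference decision variable for N ambiguous cues c_n.
\[
z \;=\; \log\frac{P(H_1)}{P(H_2)}
      \;+\; \sum_{n=1}^{N} \left[ \log\frac{P(c_n \mid H_1)}{P(c_n \mid H_2)} + \varepsilon_n \right],
\qquad \varepsilon_n \sim \mathcal{N}(0, \sigma^2),
\]
```

where the choice follows the sign of z. With sigma = 0 this reduces to exact Bayesian inference; sigma > 0 captures limited precision of the mental inference itself, as opposed to noise added only at sensory processing or response selection.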
Simplified and refined structural modeling for economical flutter analysis and design
NASA Technical Reports Server (NTRS)
Ricketts, R. H.; Sobieszczanski, J.
1977-01-01
A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed a refined model (RM), represents a high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called a simplified model (SM), has a relatively much smaller number of elements and degrees-of-freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to the corresponding substructures of RM. The structural data are automatically transferred between the two models. The bulk of analysis is performed on the SM with periodical verifications carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and acceleration of the job turn-around.
Extrapolating Single Organic Ion Solvation Thermochemistry from Simulated Water Nanodroplets.
Coles, Jonathan P; Houriez, Céline; Meot-Ner Mautner, Michael; Masella, Michel
2016-09-08
We compute the ion/water interaction energies of methylated ammonium cations and alkylated carboxylate anions solvated in large nanodroplets of 10 000 water molecules using 10 ns molecular dynamics simulations and an all-atom polarizable force-field approach. Together with our earlier results concerning the solvation of these organic ions in nanodroplets whose molecular sizes range from 50 to 1000, these new data allow us to discuss the reliability of extrapolating absolute single-ion bulk solvation energies from small ion/water droplets using common power-law functions of cluster size. We show that reliable estimates of these energies can be extrapolated from a small data set comprising the results of three droplets whose sizes are between 100 and 1000 using a basic power-law function of droplet size. This agrees with an earlier conclusion drawn from a model built within the mean spherical framework and paves the way toward a theoretical protocol to systematically compute the solvation energies of complex organic ions.
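A minimal sketch of the kind of power-law extrapolation described here is given below. The 1/n^(1/3) exponent (droplet-radius scaling) and all numerical values are illustrative assumptions, not the authors' fitted parameters.

```python
# Hedged sketch: extrapolate a bulk solvation energy from three droplet sizes
# with a power law in droplet size n. All values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

n = np.array([100.0, 600.0, 1000.0])      # droplet sizes (number of water molecules)
E = np.array([-95.0, -102.0, -103.5])     # toy ion/water interaction energies

def power_law(n, e_bulk, a):
    """E(n) = E_bulk + a * n**(-1/3); the exponent is an assumed radius scaling."""
    return e_bulk + a * n ** (-1.0 / 3.0)

(e_bulk, a), _ = curve_fit(power_law, n, E)
print(f"extrapolated bulk value: {e_bulk:.1f}")
```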
Exploratory Lattice QCD Study of the Rare Kaon Decay K^{+}→π^{+}νν[over ¯].
Bai, Ziyuan; Christ, Norman H; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T
2017-06-23
We report a first, complete lattice QCD calculation of the long-distance contribution to the K^{+}→π^{+}νν[over ¯] decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.
NASA Astrophysics Data System (ADS)
Khan, Shehryar; Kubica-Misztal, Aleksandra; Kruk, Danuta; Kowalewski, Jozef; Odelius, Michael
2015-01-01
The zero-field splitting (ZFS) of the electronic ground state in paramagnetic ions is a sensitive probe of the variations in the electronic and molecular structure with an impact on fields ranging from fundamental physical chemistry to medical applications. A detailed analysis of the ZFS in a series of symmetric Gd(III) complexes is presented in order to establish the applicability and accuracy of computational methods using multiconfigurational complete-active-space self-consistent field wave functions and of density functional theory calculations. The various computational schemes are then applied to larger complexes Gd(III)DOTA(H2O)-, Gd(III)DTPA(H2O)2-, and Gd(III)(H2O)83+ in order to analyze how the theoretical results compare to experimentally derived parameters. In contrast to approximations based on density functional theory, the multiconfigurational methods produce results for the ZFS of Gd(III) complexes on the correct order of magnitude.
NASA Astrophysics Data System (ADS)
Ling, Shenglong; Wang, Wei; Yu, Lu; Peng, Junhui; Cai, Xiaoying; Xiong, Ying; Hayati, Zahra; Zhang, Longhua; Zhang, Zhiyong; Song, Likai; Tian, Changlin
2016-01-01
Electron paramagnetic resonance (EPR)-based hybrid experimental and computational approaches were applied to determine the structure of a full-length E. coli integral membrane sulfurtransferase, dimeric YgaP, and its structural and dynamic changes upon ligand binding. The solution NMR structures of the YgaP transmembrane domain (TMD) and cytosolic catalytic rhodanese domain were reported recently, but the tertiary fold of full-length YgaP was not yet available. Here, systematic site-specific EPR analysis defined a helix-loop-helix secondary structure of the YgaP-TMD monomers using mobility, accessibility and membrane immersion measurements. The tertiary folds of dimeric YgaP-TMD and full-length YgaP in detergent micelles were determined through inter- and intra-monomer distance mapping and rigid-body computation. Further EPR analysis demonstrated the tight packing of the two YgaP second transmembrane helices upon binding of the catalytic product SCN-, which provides insight into the thiocyanate exportation mechanism of YgaP in the E. coli membrane.
Multidimensional Modeling of Atmospheric Effects and Surface Heterogeneities on Remote Sensing
NASA Technical Reports Server (NTRS)
Gerstl, S. A. W.; Simmer, C.; Zardecki, A. (Principal Investigator)
1985-01-01
The overall goal of this project is to establish a modeling capability that allows a quantitative determination of atmospheric effects on remote sensing including the effects of surface heterogeneities. This includes an improved understanding of aerosol and haze effects in connection with structural, angular, and spatial surface heterogeneities. One important objective of the research is the possible identification of intrinsic surface or canopy characteristics that might be invariant to atmospheric perturbations so that they could be used for scene identification. Conversely, an equally important objective is to find a correction algorithm for atmospheric effects in satellite-sensed surface reflectances. The technical approach is centered around a systematic model and code development effort based on existing, highly advanced computer codes that were originally developed for nuclear radiation shielding applications. Computational techniques for the numerical solution of the radiative transfer equation are adapted on the basis of the discrete-ordinates finite-element method which proved highly successful for one and two-dimensional radiative transfer problems with fully resolved angular representation of the radiation field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annapureddy, HVR; Motkuri, RK; Nguyen, PTM
In this review, we describe recent efforts to systematically study nano-structured metal organic frameworks (MOFs), also known as metal organic heat carriers, with particular emphasis on their application in heating and cooling processes. We used both molecular dynamics and grand canonical Monte Carlo simulation techniques to gain a molecular-level understanding of the adsorption mechanism of gases in these porous materials. We investigated the uptake of various gases such as refrigerants R12 and R143a. We also evaluated the effects of temperature and pressure on the uptake mechanism. Our computed results compared reasonably well with available measurements from experiments, thus validating our potential models and approaches. In addition, we investigated the structural, diffusive and adsorption properties of different hydrocarbons in Ni-2(dhtp). Finally, to elucidate the mechanism of nanoparticle dispersion in condensed phases, we studied the interactions among nanoparticles in various liquids, such as n-hexane, water and methanol.
NASA Astrophysics Data System (ADS)
Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.
2017-12-01
Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination all over the world. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. As SAR data become ubiquitous, the technological and scientific challenge is to maximize the exploitation of such a huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of large spatial scale deformation time series in an efficient, automatic and systematic way. Such a DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps, to finally compute deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform, and a thorough analysis of the attained parallel performance has been carried out to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. Such an experiment confirms the big advantage of exploiting the large computational and storage resources of Cloud Computing platforms for large scale DInSAR analysis. The presented Cloud Computing P-SBAS processing chain can be a valuable tool for developing operational services for the EO scientific community related to hazard monitoring and risk prevention and mitigation.
Diagnosis and kidney-sparing treatments for upper tract urothelial carcinoma: state of the art.
Territo, Angelo; Foerster, Bear; Shariat, Shahrokh F; Rouprêt, Morgan; Gaya, Jose M; Palou, Joan; Breda, Alberto
2018-02-01
Conservative management of upper tract urothelial cancer (UTUC) is becoming increasingly popular: the key to success is correct selection of patients with low-risk UTUC based on size (≤ 2 cm), focality (single lesion), stage (< T2), and grade (low grade). Despite the recent growing interest in the conservative approach to UTUC, the diagnostic process is still a challenge, and kidney-sparing surgery (KSS) is traditionally reserved for patients with contraindications to radical nephroureterectomy. In order to explore the "state of the art" in the diagnosis and conservative treatment of UTUC, a systematic review of the literature was performed. A PubMed, Scopus, and Cochrane search for peer-reviewed studies was performed using the keywords "upper tract urothelial carcinoma" OR "UTUC" OR "upper urinary tract" AND "biopsy" OR "diagnosis" OR "endomicroscopy" OR "imaging" AND "URS" OR "ureteroscopy" OR "kidney-sparing surgery" OR "laser ablation" OR "ureterectomy". We considered as relevant comparative prospective studies (randomized, quasi-randomized, non-randomized), retrospective studies, meta-analyses, systematic reviews, and case report series written in the English language. Letters to the editor and contributions written in languages other than English were not considered of value for this review. Eligible articles were reviewed according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) criteria. Two hundred and sixty-three (263) records were identified using the above-mentioned keywords. Overall, 30 studies were considered relevant for the purpose of this systematic review and for the evidence evaluation process during qualitative synthesis. The outcomes evaluated in this review were the current diagnostic methods and the KSS approaches in UTUC. Furthermore, we included in the review the emerging technology for distinguishing between normal tissue, low-grade UTUC, and high-grade UTUC. Conclusive diagnosis is fundamental to the decision-making process in patients who could benefit from conservative treatment of UTUC. The most relevant diagnostic modalities are computed tomography urography, local urine cytology, and ureteroscopy with acquisition of an adequate biopsy sample for histology. KSS includes the endourological approach and segmental ureterectomy. Promising technology in the endourological management of UTUC helps in providing intraoperative information on UTUC grading and staging, with high accuracy. Patients treated conservatively have to undergo stringent postoperative follow-up in order to detect and, if necessary, treat any recurrence promptly. Further larger and multicenter studies are needed to confirm these findings.
A Computational Framework for Bioimaging Simulation
Watabe, Masaki; Arjunan, Satya N. V.; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi
2015-01-01
Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher-level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units. PMID:26147508
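To make the idea of comparison at the level of photon-counting units concrete, the sketch below draws shot-noise-limited pixel counts from a Poisson distribution around an expected photon flux. The array size, offset and flux values are assumptions for illustration, not parameters of the authors' framework.

```python
# Illustrative photon-counting forward model: expected photons per pixel plus
# Poisson shot noise and a constant detector offset. All values are placeholders.
import numpy as np

rng = np.random.default_rng(42)
expected_photons = np.full((64, 64), 20.0)   # toy expected signal per pixel
camera_offset = 100                          # assumed constant camera offset (counts)
image = rng.poisson(expected_photons) + camera_offset
print(image.mean(), image.std())
```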
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
Papadimitriou, Konstantinos I.; Stan, Guy-Bart V.; Drakakis, Emmanuel M.
2013-01-01
This paper presents a novel method for the systematic implementation of low-power microelectronic circuits aimed at computing nonlinear cellular and molecular dynamics. The method proposed is based on the Nonlinear Bernoulli Cell Formalism (NBCF), an advanced mathematical framework stemming from the Bernoulli Cell Formalism (BCF) originally exploited for the modular synthesis and analysis of linear, time-invariant, high dynamic range, logarithmic filters. Our approach identifies and exploits the striking similarities existing between the NBCF and coupled nonlinear ordinary differential equations (ODEs) typically appearing in models of naturally encountered biochemical systems. The resulting continuous-time, continuous-value, low-power CytoMimetic electronic circuits succeed in simulating fast and with good accuracy cellular and molecular dynamics. The application of the method is illustrated by synthesising for the first time microelectronic CytoMimetic topologies which simulate successfully: 1) a nonlinear intracellular calcium oscillations model for several Hill coefficient values and 2) a gene-protein regulatory system model. The dynamic behaviours generated by the proposed CytoMimetic circuits are compared and found to be in very good agreement with their biological counterparts. The circuits exploit the exponential law codifying the low-power subthreshold operation regime and have been simulated with realistic parameters from a commercially available CMOS process. They occupy an area of a fraction of a square-millimetre, while consuming between 1 and 12 microwatts of power. Simulations of fabrication-related variability results are also presented. PMID:23393550
Evaluating the utility of dynamical downscaling in agricultural impacts projections
Glotter, Michael; Elliott, Joshua; McInerney, David; Best, Neil; Foster, Ian; Moyer, Elisabeth J.
2014-01-01
Interest in estimating the potential socioeconomic costs of climate change has led to the increasing use of dynamical downscaling—nested modeling in which regional climate models (RCMs) are driven with general circulation model (GCM) output—to produce fine-spatial-scale climate projections for impacts assessments. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield, one of the greatest concerns under climate change. Our results suggest that it does not. We simulate US maize yields under current and future CO2 concentrations with the widely used Decision Support System for Agrotechnology Transfer crop model, driven by a variety of climate inputs including two GCMs, each in turn downscaled by two RCMs. We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven US maize yields are essentially indistinguishable in all scenarios (<10% discrepancy, equivalent to error from observations). Although RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kilometers) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the benefits for impacts assessments of dynamically downscaling raw GCM output may not be sufficient to justify its computational demands. Progress on fidelity of yield projections may benefit more from continuing efforts to understand and minimize systematic error in underlying climate projections. PMID:24872455
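The abstract notes that climate-model output had to be bias-corrected before it could reproduce observation-driven yields. The study's exact correction procedure is not reproduced here; the sketch below shows one of the simplest forms such a correction can take, a mean "delta" shift, purely as orientation.

```python
# Minimal mean-bias ("delta") correction sketch; not the paper's procedure.
import numpy as np

def mean_bias_correct(model_hist, obs_hist, model_future):
    """Remove the historical model-minus-observation mean bias from future values."""
    bias = np.mean(model_hist) - np.mean(obs_hist)
    return np.asarray(model_future) - bias

# toy example: the historical model runs warm by about 1.25 degrees
print(mean_bias_correct([15.0, 16.0], [14.0, 14.5], [18.0, 19.0]))
```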
DGCA: A comprehensive R package for Differential Gene Correlation Analysis.
McKenzie, Andrew T; Katsyv, Igor; Song, Won-Min; Wang, Minghui; Zhang, Bin
2016-11-15
Dissecting the regulatory relationships between genes is a critical step towards building accurate predictive models of biological systems. A powerful approach towards this end is to systematically study the differences in correlation between gene pairs in more than one distinct condition. In this study we develop an R package, DGCA (for Differential Gene Correlation Analysis), which offers a suite of tools for computing and analyzing differential correlations between gene pairs across multiple conditions. To minimize parametric assumptions, DGCA computes empirical p-values via permutation testing. To understand differential correlations at a systems level, DGCA performs higher-order analyses such as measuring the average difference in correlation and multiscale clustering analysis of differential correlation networks. Through a simulation study, we show that the straightforward z-score based method that DGCA employs significantly outperforms the existing alternative methods for calculating differential correlation. Application of DGCA to the TCGA RNA-seq data in breast cancer not only identifies key changes in the regulatory relationships between TP53 and PTEN and their target genes in the presence of inactivating mutations, but also reveals an immune-related differential correlation module that is specific to triple negative breast cancer (TNBC). DGCA is an R package for systematically assessing the difference in gene-gene regulatory relationships under different conditions. This user-friendly, effective, and comprehensive software tool will greatly facilitate the application of differential correlation analysis in many biological studies and thus will help identification of novel signaling pathways, biomarkers, and targets in complex biological systems and diseases.
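The z-score comparison that DGCA builds on can be illustrated with the classical Fisher-transform test for the difference between two correlation coefficients. The sketch below is a plain Python rendering of that test, not the R package's code, and DGCA additionally replaces the parametric p-value with an empirical permutation p-value.

```python
# Fisher z-test for the difference between two correlations (illustrative).
import numpy as np
from scipy import stats

def diff_corr_z(r1, n1, r2, n2):
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher z-transform
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of the difference
    z = (z1 - z2) / se
    p = 2.0 * stats.norm.sf(abs(z))                # two-sided p-value
    return z, p

print(diff_corr_z(0.7, 100, 0.2, 120))
```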
Financial forecasts accuracy in Brazil's social security system.
Silva, Carlos Patrick Alves da; Puty, Claudio Alberto Castelo Branco; Silva, Marcelino Silva da; Carvalho, Solon Venâncio de; Francês, Carlos Renato Lisboa
2017-01-01
Long-term social security statistical forecasts produced and disseminated by the Brazilian government aim to provide accurate results that would serve as background information for optimal policy decisions. These forecasts are being used as support for the government's proposed pension reform that plans to radically change the Brazilian Constitution insofar as Social Security is concerned. However, the reliability of official results is uncertain since no systematic evaluation of these forecasts has ever been published by the Brazilian government or anyone else. This paper aims to present a study of the accuracy and methodology of the instruments used by the Brazilian government to carry out long-term actuarial forecasts. We base our research on an empirical and probabilistic analysis of the official models. Our empirical analysis shows that the long-term Social Security forecasts are systematically biased in the short term and have significant errors that render them meaningless in the long run. Moreover, the low level of transparency in the methods impaired the replication of results published by the Brazilian Government and the use of outdated data compromises forecast results. In the theoretical analysis, based on a mathematical modeling approach, we discuss the complexity and limitations of the macroeconomic forecast through the computation of confidence intervals. We demonstrate the problems related to error measurement inherent to any forecasting process. We then extend this exercise to the computation of confidence intervals for Social Security forecasts. This mathematical exercise raises questions about the degree of reliability of the Social Security forecasts.
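As a hedged illustration of the confidence-interval exercise described here, the toy simulation below propagates an uncertain growth rate over a long horizon and reads off a 95% interval for the final value. The distributions and numbers are assumptions for illustration and bear no relation to the official Brazilian models or to the paper's results.

```python
# Toy Monte Carlo confidence interval for a long-run forecast; values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
horizon, n_sims = 40, 10_000
base_value = 100.0
growth = rng.normal(loc=0.02, scale=0.01, size=(n_sims, horizon))  # assumed growth rates
paths = base_value * np.cumprod(1.0 + growth, axis=1)
low, high = np.percentile(paths[:, -1], [2.5, 97.5])
print(f"95% interval at year {horizon}: [{low:.1f}, {high:.1f}]")
```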
Molecular Imaging of Atherothrombotic Diseases: Seeing Is Believing.
Wang, Xiaowei; Peter, Karlheinz
2017-06-01
Molecular imaging, with major advances in the development of both innovative targeted contrast agents/particles and radiotracers, as well as various imaging technologies, is a fascinating, rapidly growing field with many preclinical and clinical applications, particularly for personalized medicine. Thrombosis in either the venous or the arterial system, the latter typically caused by rupture of unstable atherosclerotic plaques, is a major determinant of mortality and morbidity in patients. However, imaging of the various thrombotic complications and the identification of plaques that are prone to rupture are at best indirect, mostly unreliable, or not available at all. The development of molecular imaging toward diagnosis and prevention of thrombotic disease holds promise for major advance in this clinically important field. Here, we review the medical need and clinical importance of direct molecular imaging of thrombi and unstable atherosclerotic plaques that are prone to rupture, thereby causing thrombotic complications such as myocardial infarction and ischemic stroke. We systematically compare the advantages/disadvantages of the various molecular imaging modalities, including X-ray computed tomography, magnetic resonance imaging, positron emission tomography, single-photon emission computed tomography, fluorescence imaging, and ultrasound. We further systematically discuss molecular targets specific for thrombi and those characterizing unstable, potentially thrombogenic atherosclerotic plaques. Finally, we provide examples for first theranostic approaches in thrombosis, combining diagnosis, targeted therapy, and monitoring of therapeutic success or failure. Overall, molecular imaging is a rapidly advancing field that holds promise of major benefits to many patients with atherothrombotic diseases. © 2017 American Heart Association, Inc.
Systematic Dissemination of Research and Development Program Improvement Efforts.
ERIC Educational Resources Information Center
Sanders, Carol S.
A systematic approach to disseminaton of vocational education research and development program improvement efforts is comprehensive, effective, and efficient. Systematic dissemination is a prerequisite link to assessing impact of research and development--for program improvement to occur, successful dissemination is crucial. A systematic approach…
The Emergence of Systematic Review in Toxicology.
Stephens, Martin L; Betts, Kellyn; Beck, Nancy B; Cogliano, Vincent; Dickersin, Kay; Fitzpatrick, Suzanne; Freeman, James; Gray, George; Hartung, Thomas; McPartland, Jennifer; Rooney, Andrew A; Scherer, Roberta W; Verloo, Didier; Hoffmann, Sebastian
2016-07-01
The Evidence-based Toxicology Collaboration hosted a workshop on "The Emergence of Systematic Review and Related Evidence-based Approaches in Toxicology," on November 21, 2014 in Baltimore, Maryland. The workshop featured speakers from agencies and organizations applying systematic review approaches to questions in toxicology, speakers with experience in conducting systematic reviews in medicine and healthcare, and stakeholders in industry, government, academia, and non-governmental organizations. Based on the workshop presentations and discussion, here we address the state of systematic review methods in toxicology, historical antecedents in both medicine and toxicology, challenges to the translation of systematic review from medicine to toxicology, and thoughts on the way forward. We conclude with a recommendation that as various agencies and organizations adapt systematic review methods, they continue to work together to ensure that there is a harmonized process for how the basic elements of systematic review methods are applied in toxicology. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology.
Chances and Limitations of Video Games in the Fight against Childhood Obesity-A Systematic Review.
Mack, Isabelle; Bayer, Carolin; Schäffeler, Norbert; Reiband, Nadine; Brölz, Ellen; Zurstiege, Guido; Fernandez-Aranda, Fernando; Gawrilow, Caterina; Zipfel, Stephan
2017-07-01
A systematic literature search was conducted to assess the chances and limitations of video games to combat and prevent childhood obesity. This search included studies with video or computer games targeting nutrition, physical activity and obesity for children between 7 and 15 years of age. The study distinguished between games that aimed to (i) improve knowledge about nutrition, eating habits and exercise; (ii) increase physical activity; or (iii) combine both approaches. Overall, the games were well accepted. On a qualitative level, most studies reported positive effects on obesity-related outcomes (improvement of weight-related parameters, physical activity or dietary behaviour/knowledge). However, the observed effects were small. The games did not address psychosocial aspects. Using video games for weight management exclusively does not deliver satisfying results. Video games as an additional guided component of prevention and treatment programs have the potential to increase compliance and thus enhance treatment outcome. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.
Wildenhain, Jan; Spitzer, Michaela; Dolma, Sonam; Jarvik, Nick; White, Rachel; Roy, Marcia; Griffiths, Emma; Bellows, David S.; Wright, Gerard D.; Tyers, Mike
2016-01-01
The network structure of biological systems suggests that effective therapeutic intervention may require combinations of agents that act synergistically. However, a dearth of systematic chemical combination datasets has limited the development of predictive algorithms for chemical synergism. Here, we report two large datasets of linked chemical-genetic and chemical-chemical interactions in the budding yeast Saccharomyces cerevisiae. We screened 5,518 unique compounds against 242 diverse yeast gene deletion strains to generate an extended chemical-genetic matrix (CGM) of 492,126 chemical-gene interaction measurements. This CGM dataset contained 1,434 genotype-specific inhibitors, termed cryptagens. We selected 128 structurally diverse cryptagens and tested all pairwise combinations to generate a benchmark dataset of 8,128 pairwise chemical-chemical interaction tests for synergy prediction, termed the cryptagen matrix (CM). An accompanying database resource called ChemGRID was developed to enable analysis, visualisation and downloads of all data. The CGM and CM datasets will facilitate the benchmarking of computational approaches for synergy prediction, as well as chemical structure-activity relationship models for anti-fungal drug discovery. PMID:27874849
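One common baseline against which synergy predictions of the kind these datasets enable are scored is the Bliss independence model; the sketch below computes a Bliss excess for a single compound pair. This metric is a widely used convention rather than something the abstract itself prescribes, and the numbers are placeholders.

```python
# Bliss independence excess for one compound pair (illustrative baseline metric).
def bliss_excess(inhibition_a, inhibition_b, inhibition_ab):
    """Observed minus expected combined inhibition; inputs are fractions in [0, 1]."""
    expected = inhibition_a + inhibition_b - inhibition_a * inhibition_b
    return inhibition_ab - expected

print(bliss_excess(0.30, 0.40, 0.75))   # positive excess is suggestive of synergy
```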
How hot? Systematic convergence of the replica exchange method using multiple reservoirs.
Ruscio, Jory Z; Fawzi, Nicolas L; Head-Gordon, Teresa
2010-02-01
We have devised a systematic approach to converge a replica exchange molecular dynamics simulation by dividing the full temperature range into a series of higher temperature reservoirs and a finite number of lower temperature subreplicas. A defined highest temperature reservoir of equilibrium conformations is used to help converge a lower but still hot temperature subreplica, which in turn serves as the high-temperature reservoir for the next set of lower temperature subreplicas. The process is continued until an optimal temperature reservoir is reached to converge the simulation at the target temperature. This gradual convergence of subreplicas allows for better and faster convergence at the temperature of interest and all intermediate temperatures for thermodynamic analysis, as well as optimizing the use of multiple processors. We illustrate the overall effectiveness of our multiple reservoir replica exchange strategy by comparing sampling and computational efficiency with respect to replica exchange, as well as comparing methods when converging the structural ensemble of the disordered Abeta(21-30) peptide simulated with explicit water by comparing calculated Rotating Overhauser Effect Spectroscopy intensities to experimentally measured values. Copyright 2009 Wiley Periodicals, Inc.
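For reference, replica (or reservoir-replica) swaps in such simulations are typically accepted with the standard Metropolis criterion shown below; reservoir variants can modify this expression, so it is given only as the generic form, with inverse temperatures beta and potential energies E as assumed notation.

```latex
% Generic replica-exchange acceptance probability for swapping replicas i and j.
\[
P_{\mathrm{acc}}(i \leftrightarrow j)
  \;=\; \min\!\left\{ 1,\; \exp\!\left[ \left( \beta_i - \beta_j \right)\left( E_i - E_j \right) \right] \right\},
\qquad \beta = \frac{1}{k_B T}.
\]
```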
Flux analysis and metabolomics for systematic metabolic engineering of microorganisms.
Toya, Yoshihiro; Shimizu, Hiroshi
2013-11-01
Rational engineering of metabolism is important for bio-production using microorganisms. Metabolic design based on in silico simulations and experimental validation of the metabolic state in the engineered strain helps in accomplishing systematic metabolic engineering. Flux balance analysis (FBA) is a method for the prediction of metabolic phenotype, and many applications have been developed using FBA to design metabolic networks. Elementary mode analysis (EMA) and ensemble modeling techniques are also useful tools for in silico strain design. The metabolome and flux distribution of the metabolic pathways enable us to evaluate the metabolic state and provide useful clues to improve target productivity. Here, we reviewed several computational applications for metabolic engineering by using genome-scale metabolic models of microorganisms. We also discussed the recent progress made in the field of metabolomics and (13)C-metabolic flux analysis techniques, and reviewed these applications pertaining to bio-production development. Because these in silico or experimental approaches have their respective advantages and disadvantages, the combined usage of these methods is complementary and effective for metabolic engineering. Copyright © 2013 Elsevier Inc. All rights reserved.
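As a minimal illustration of the flux balance analysis (FBA) step mentioned here, the sketch below poses FBA as a linear program on a toy three-reaction network. Genome-scale applications use dedicated tools such as COBRApy; the network, bounds and objective below are assumptions for illustration.

```python
# Toy FBA: maximize biomass flux v3 subject to steady state S v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Rows: metabolites A, B; columns: reactions v1 (uptake -> A), v2 (A -> B), v3 (B -> biomass)
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0.0, 10.0)] * 3          # assumed flux bounds
c = np.array([0.0, 0.0, -1.0])      # linprog minimizes, so maximize v3 via -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal flux distribution:", res.x)
```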
Havugimana, Pierre C; Hu, Pingzhao; Emili, Andrew
2017-10-01
Elucidation of the networks of physical (functional) interactions present in cells and tissues is fundamental for understanding the molecular organization of biological systems, the mechanistic basis of essential and disease-related processes, and for functional annotation of previously uncharacterized proteins (via guilt-by-association or -correlation). After a decade in the field, we felt it timely to document our own experiences in the systematic analysis of protein interaction networks. Areas covered: Researchers worldwide have contributed innovative experimental and computational approaches that have driven the rapidly evolving field of 'functional proteomics'. These include mass spectrometry-based methods to characterize macromolecular complexes on a global-scale and sophisticated data analysis tools - most notably machine learning - that allow for the generation of high-quality protein association maps. Expert commentary: Here, we recount some key lessons learned, with an emphasis on successful workflows, and challenges, arising from our own and other groups' ongoing efforts to generate, interpret and report proteome-scale interaction networks in increasingly diverse biological contexts.
New Vistas in Chemical Product and Process Design.
Zhang, Lei; Babi, Deenesh K; Gani, Rafiqul
2016-06-07
Design of chemicals-based products is broadly classified into those that are process centered and those that are product centered. In this article, the designs of both classes of products are reviewed from a process systems point of view; developments related to the design of the chemical product, its corresponding process, and its integration are highlighted. Although significant advances have been made in the development of systematic model-based techniques for process design (also for optimization, operation, and control), much work is needed to reach the same level for product design. Timeline diagrams illustrating key contributions in product design, process design, and integrated product-process design are presented. The search for novel, innovative, and sustainable solutions must be matched by consideration of issues related to the multidisciplinary nature of problems, the lack of data needed for model development, solution strategies that incorporate multiscale options, and reliability versus predictive power. The need for an integrated model-experiment-based design approach is discussed together with benefits of employing a systematic computer-aided framework with built-in design templates.
Structural changes in cross-border liabilities: A multidimensional approach
NASA Astrophysics Data System (ADS)
Araújo, Tanya; Spelta, Alessandro
2014-01-01
We study the international interbank market through a geometric analysis of empirical data. The geometric analysis of the time series of cross-country liabilities shows that the systematic information of the international interbank market is contained in a space of small dimension. Geometric spaces of financial relations across countries are developed, for which the space volume, multivariate skewness and multivariate kurtosis are computed. The behavior of these coefficients reveals an important modification in the financial linkages since 1997 and allows us to relate the shape of the geometric space that has emerged in recent years to the globally turbulent period that has characterized financial systems since the late 1990s. Here we show that, besides a persistent decrease in the volume of the geometric space since 1997, a generalized increase in the values of the multivariate skewness and kurtosis sheds some light on the behavior of cross-border interdependencies during periods of financial crises. This was found to occur in such a systematic fashion that these coefficients may be used as a proxy for systemic risk.
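The multivariate skewness and kurtosis coefficients mentioned here can be computed, for example, with Mardia's definitions. This is one standard convention and may differ from the authors' exact choice; the data below are random placeholders.

```python
# Mardia's multivariate skewness and kurtosis (one standard convention).
import numpy as np

def mardia(X):
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    D = Xc @ S_inv @ Xc.T                  # matrix of Mahalanobis inner products
    skewness = (D ** 3).sum() / n ** 2
    kurtosis = (np.diag(D) ** 2).sum() / n
    return skewness, kurtosis

print(mardia(np.random.default_rng(1).normal(size=(500, 3))))
```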
Dixon, Brian E; Gamache, Roland E; Grannis, Shaun J
2013-05-01
To summarize the literature describing computer-based interventions aimed at improving bidirectional communication between clinical and public health. A systematic review of English articles using MEDLINE and Google Scholar. Search terms included public health, epidemiology, electronic health records, decision support, expert systems, and decision-making. Only articles that described the communication of information regarding emerging health threats from public health agencies to clinicians or provider organizations were included. Each article was independently reviewed by two authors. Ten peer-reviewed articles highlight a nascent but promising area of research and practice related to alerting clinicians about emerging threats. Current literature suggests that additional research and development in bidirectional communication infrastructure should focus on defining a coherent architecture, improving interoperability, establishing clear governance, and creating usable systems that will effectively deliver targeted, specific information to clinicians in support of patient and population decision-making. Increasingly available clinical information systems make it possible to deliver timely, relevant knowledge to frontline clinicians in support of population health. Future work should focus on developing a flexible, interoperable infrastructure for bidirectional communications capable of integrating public health knowledge into clinical systems and workflows.
Commonality and Variability Analysis for Xenon Family of Separation Virtual Machine Monitors (CVAX)
2017-07-18
... by the evolving open-source Xen hypervisor. The technical approach is a systematic application of Software Product Line Engineering (SPLE). A systematic application requires describing the family and ...
Theory comparison and numerical benchmarking on neoclassical toroidal viscosity torque
NASA Astrophysics Data System (ADS)
Wang, Zhirui; Park, Jong-Kyu; Liu, Yueqiang; Logan, Nikolas; Kim, Kimin; Menard, Jonathan E.
2014-04-01
Systematic comparison and numerical benchmarking have been successfully carried out among three different approaches of neoclassical toroidal viscosity (NTV) theory and the corresponding codes: IPEC-PENT is developed based on the combined NTV theory but without geometric simplifications [Park et al., Phys. Rev. Lett. 102, 065002 (2009)]; MARS-Q includes smoothly connected NTV formula [Shaing et al., Nucl. Fusion 50, 025022 (2010)] based on Shaing's analytic formulation in various collisionality regimes; MARS-K, originally computing the drift kinetic energy, is upgraded to compute the NTV torque based on the equivalence between drift kinetic energy and NTV torque [J.-K. Park, Phys. Plasma 18, 110702 (2011)]. The derivation and numerical results both indicate that the imaginary part of drift kinetic energy computed by MARS-K is equivalent to the NTV torque in IPEC-PENT. In the benchmark of precession resonance between MARS-Q and MARS-K/IPEC-PENT, the agreement and correlation between the connected NTV formula and the combined NTV theory in different collisionality regimes are shown for the first time. Additionally, both IPEC-PENT and MARS-K indicate the importance of the bounce harmonic resonance which can greatly enhance the NTV torque when E ×B drift frequency reaches the bounce resonance condition.
Computational and Experimental Analysis of the Secretome of Methylococcus capsulatus (Bath)
Indrelid, Stine; Mathiesen, Geir; Jacobsen, Morten; Lea, Tor; Kleiveland, Charlotte R.
2014-01-01
The Gram-negative methanotroph Methylococcus capsulatus (Bath) was recently demonstrated to abrogate inflammation in a murine model of inflammatory bowel disease, suggesting interactions with cells involved in maintaining mucosal homeostasis and emphasizing the importance of understanding the many properties of M. capsulatus. Secreted proteins determine how bacteria may interact with their environment, and a comprehensive knowledge of such proteins is therefore vital to understand bacterial physiology and behavior. The aim of this study was to systematically analyze protein secretion in M. capsulatus (Bath) by identifying the secretion systems present and the respective secreted substrates. Computational analysis revealed that in addition to previously recognized type II secretion systems and a type VII secretion system, a type Vb (two-partner) secretion system and putative type I secretion systems are present in M. capsulatus (Bath). In silico analysis suggests that the diverse secretion systems in M. capsulatus transport proteins likely to be involved in adhesion, colonization, nutrient acquisition and homeostasis maintenance. Results of the computational analysis were verified and extended by an experimental approach showing that, in addition, an uncharacterized protein and putative moonlighting proteins are released to the medium during exponential growth of M. capsulatus (Bath). PMID:25479164
Multibody simulation of vehicles equipped with an automatic transmission
NASA Astrophysics Data System (ADS)
Olivier, B.; Kouroussis, G.
2016-09-01
Nowadays, automotive vehicles remain one of the most widely used modes of transportation. Furthermore, automatic transmissions are increasingly used to provide better driving comfort and potential optimization of engine performance (by placing gear shifts at specific engine and vehicle speeds). This paper presents an effective vehicle model based on the multibody methodology (numerically computed with EasyDyn, an open-source, in-house library dedicated to multibody simulations). The transmission part of the vehicle, however, is described by the usual equations of motion computed using a systematic matrix approach: del Castillo's methodology for planetary gear trains. By coupling the analytic equations of the transmission with the equations computed by the multibody methodology, the performance of any vehicle can be obtained if the characteristics of each element in the vehicle are known. The multibody methodology offers the possibility to extend the vehicle model from 1D motion to 3D motion by taking the rotations into account and implementing tire models. The modeling presented in this paper remains very efficient and provides an easy and quick vehicle simulation tool that could be used to calibrate the automatic transmission.
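Where the abstract refers to a systematic matrix approach for planetary gear trains, the underlying kinematic constraint for a single planetary stage is the classical Willis relation shown below. It is given for orientation only and is not taken from del Castillo's formulation; tooth numbers Z and angular velocities omega are assumed notation.

```latex
% Willis relation for one simple planetary gear set (sun, ring, carrier).
\[
\frac{\omega_{\mathrm{ring}} - \omega_{\mathrm{carrier}}}{\omega_{\mathrm{sun}} - \omega_{\mathrm{carrier}}}
  \;=\; -\,\frac{Z_{\mathrm{sun}}}{Z_{\mathrm{ring}}}.
\]
```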
Interactions between pool geometry and hydraulics
Thompson, Douglas M.; Nelson, Jonathan M.; Wohl, Ellen E.
1998-01-01
An experimental and computational research approach was used to determine interactions between pool geometry and hydraulics. A 20-m-long, 1.8-m-wide flume was used to investigate the effect of four different geometric aspects of pool shape on flow velocity. Plywood sections were used to systematically alter constriction width, pool depth, pool length, and pool exit-slope gradient, each at two separate levels. Using the resulting 16 unique geometries with measured pool velocities in four-way factorial analyses produced an empirical assessment of the role of the four geometric aspects on the pool flow patterns and hence the stability of the pool. To complement the conclusions of these analyses, a two-dimensional computational flow model was used to investigate the relationships between pool geometry and flow patterns over a wider range of conditions. Both experimental and computational results show that constriction and depth effects dominate in the jet section of the pool and that pool length exhibits an increasing effect within the recirculating-eddy system. The pool exit slope appears to force flow reattachment. Pool length controls recirculating-eddy length and vena contracta strength. In turn, the vena contracta and recirculating eddy control velocities throughout the pool.
Planning in Education--A Systematic Approach
ERIC Educational Resources Information Center
Barker, L. J.
1977-01-01
Presents a case for and poses a procedure including techniques for a systematic approach to planning in education as a means of improving efficiency and effectiveness. Available from: Australian College of Education, 916 Swanston Street, Carlton, Victoria 3053, Australia, $2.50 single copy. (Author/MLF)