Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-22
... explored in this series is cloud computing. The workshop on this topic will be held in Gaithersburg, MD on October 21, 2011. Assertion: "Current implementations of cloud computing indicate a new approach to security." Implementations of cloud computing have provided new ways of thinking about how to secure data...
The Fifth Generation and Training Strategies
ERIC Educational Resources Information Center
Ennals, Richard
2008-01-01
Fifth Generation computers should not simply be regarded as an enhancement of current computer technology: the intention is that a fresh approach should be taken to computer science and to the use of computers. The argument of this paper is that the fresh approach must encompass education and training, with implications that extend far beyond the…
One of the strategic objectives of the Computational Toxicology Program is to develop approaches for prioritizing chemicals for subsequent screening and testing. Approaches currently available for this process require extensive resources. Therefore, less costly and time-extensi...
Computer-aided drug discovery.
Bajorath, Jürgen
2015-01-01
Computational approaches are an integral part of interdisciplinary drug discovery research. Understanding the science behind computational tools, their opportunities, and limitations is essential to make a true impact on drug discovery at different levels. If applied in a scientifically meaningful way, computational methods improve the ability to identify and evaluate potential drug molecules, but there remain weaknesses in the methods that preclude naïve applications. Herein, current trends in computer-aided drug discovery are reviewed, and selected computational areas are discussed. Approaches are highlighted that aid in the identification and optimization of new drug candidates. Emphasis is put on the presentation and discussion of computational concepts and methods, rather than case studies or application examples. As such, this contribution aims to provide an overview of the current methodological spectrum of computational drug discovery for a broad audience.
Future Approach to tier-0 extension
NASA Astrophysics Data System (ADS)
Jones, B.; McCance, G.; Cordeiro, C.; Giordano, D.; Traylen, S.; Moreno García, D.
2017-10-01
The current tier-0 processing at CERN is done on two managed sites, the CERN computer centre and the Wigner computer centre. With the proliferation of public cloud resources at increasingly competitive prices, we have been investigating how to transparently increase our compute capacity to include these providers. The approach taken has been to integrate these resources using our existing deployment and computer management tools and to provide them in a way that exposes them to users as part of the same site. The paper will describe the architecture, the toolset and the current production experiences of this model.
MUMPS Based Integration of Disparate Computer-Assisted Medical Diagnosis Modules
1989-12-12
...Abdominal and Chest Pain modules use a Bayesian approach, while the Ophthalmology module uses a Rule Based approach. In the current effort, MUMPS is used to develop an...
Integrating autonomous distributed control into a human-centric C4ISR environment
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2017-05-01
This paper considers incorporating autonomy into human-centric Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) environments. Specifically, it focuses on identifying ways that current autonomy technologies can augment human control and the challenges presented by additive autonomy. Three approaches to this challenge are considered, stemming from prior work in two converging areas. In the first, the problem is framed as augmenting what humans currently do with automation. In the second, humans are treated as actors within a cyber-physical system-of-systems (an approach stemming from robotic distributed computing). A third approach combines elements of both.
Tutorial: Computer architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gajski, D.D.; Milutinovic, V.M.; Siegel, H.J.
1986-01-01
This book presents the state-of-the-art in advanced computer architecture. It deals with the concepts underlying current architectures and covers approaches and techniques being used in the design of advanced computer systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knio, Omar
2017-05-05
The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.
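To make the subdomain-agreement idea above concrete, here is a minimal sketch under invented assumptions: a 1D Laplace problem is split into two subdomains, the interface value is treated as uncertain, independent subdomain solves are run for sampled interface values, and the belief is tightened by reweighting samples so that the fluxes of neighboring solutions agree. It illustrates only the statistical update loop, not the project's actual solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D Laplace problem u'' = 0 on [0, 1] with u(0) = 0, u(1) = 1,
# split into two subdomains that meet at x = 0.5.  The interface
# value g = u(0.5) is treated as uncertain (true answer: 0.5).
g_mean, g_std = 0.2, 0.3          # initial (poor) belief about the interface

def subdomain_flux(g):
    """Solve both subdomains for a given interface value and return
    the flux (du/dx) each local solution predicts at the interface."""
    flux_left = (g - 0.0) / 0.5    # linear solution on [0, 0.5]
    flux_right = (1.0 - g) / 0.5   # linear solution on [0.5, 1]
    return flux_left, flux_right

for it in range(5):
    g_samples = rng.normal(g_mean, g_std, size=2000)   # independent subdomain runs
    fl, fr = subdomain_flux(g_samples)
    mismatch = fl - fr                                  # neighboring solutions must agree
    weights = np.exp(-0.5 * (mismatch / 0.5) ** 2)      # favor consistent samples
    g_mean = np.average(g_samples, weights=weights)
    g_std = np.sqrt(np.average((g_samples - g_mean) ** 2, weights=weights))
    print(f"iter {it}: interface estimate {g_mean:.3f} ± {g_std:.3f}")
```

Successive iterations pull the interface estimate toward 0.5 while the spread shrinks, mirroring the "uncertainty reduced through successive iterations" behavior described in the abstract.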
Critiquing: A Different Approach to Expert Computer Advice in Medicine
Miller, Perry L.
1984-01-01
The traditional approach to computer-based advice in medicine has been to design systems which simulate a physician's decision process. This paper describes a different approach to computer advice in medicine: a critiquing approach. A critiquing system first asks how the physician is planning to manage his patient and then critiques that plan, discussing the advantages and disadvantages of the proposed approach, compared to other approaches which might be reasonable or preferred. Several critiquing systems are currently in different stages of implementation. The paper describes these systems and discusses the characteristics which make each domain suitable for critiquing. The critiquing approach may prove especially well-suited in domains where decisions involve a great deal of subjective judgement.
The implementation of AI technologies in computer wargames
NASA Astrophysics Data System (ADS)
Tiller, John A.
2004-08-01
Computer wargames involve the most in-depth analysis of general game theory. The enumerated turns of a game like chess are dwarfed by the exponentially larger possibilities of even a simple computer wargame. Implementing challenging AI in computer wargames is an important goal in both the commercial and military environments. In the commercial marketplace, customers demand a challenging AI opponent when they play a computer wargame and are frustrated by a lack of competence on the part of the AI. In the military environment, challenging AI opponents are important for several reasons. A challenging AI opponent will force the military professional to avoid routine or set-piece approaches to situations and cause them to think much more deeply about military situations before taking action. A good AI opponent would also include national characteristics of the opponent being simulated, thus providing the military professional with even more of a challenge in planning and approach. Implementing current AI technologies in computer wargames is a technological challenge. The goal is to join the needs of AI in computer wargames with the solutions of current AI technologies. This talk will address several of those issues, possible solutions, and currently unsolved problems.
WPS mediation: An approach to process geospatial data on different computing backends
NASA Astrophysics Data System (ADS)
Giuliani, Gregory; Nativi, Stefano; Lehmann, Anthony; Ray, Nicolas
2012-10-01
The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities and various problems emerge when trying to use them in data- and computing-intensive domains such as environmental sciences. These problems are usually not or only partially solvable using single computing resources. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by distributed computing (e.g., Grids, Clouds) related methods and technologies. Currently, there is no commonly agreed approach to grid-enable WPS. No implementation allows one to seamlessly execute a geoprocessing calculation following user requirements on different computing backends, ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept by mediating different geospatial and Grid software packages, and by proposing an extension of the WPS specification through two optional parameters. The applicability of this approach will be demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits and issues that need to be further investigated to improve performance.
Methodical Approaches to Teaching of Computer Modeling in Computer Science Course
ERIC Educational Resources Information Center
Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina
2015-01-01
The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The need to study computer modeling stems from the fact that current trends toward strengthening the general-educational and worldview functions of computer science call for additional research into the…
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1992-01-01
Research conducted during the period from July 1991 through December 1992 is covered. A method based upon the quasi-analytical approach was developed for computing the aerodynamic sensitivity coefficients of three dimensional wings in transonic and subsonic flow. In addition, the method computes for comparison purposes the aerodynamic sensitivity coefficients using the finite difference approach. The accuracy and validity of the methods are currently under investigation.
Efficient non-hydrostatic modelling of 3D wave-induced currents using a subgrid approach
NASA Astrophysics Data System (ADS)
Rijnsdorp, Dirk P.; Smit, Pieter B.; Zijlema, Marcel; Reniers, Ad J. H. M.
2017-08-01
Wave-induced currents are a ubiquitous feature in coastal waters that can spread material over the surf zone and the inner shelf. These currents are typically under-resolved in non-hydrostatic wave-flow models due to computational constraints. Specifically, the low vertical resolutions adequate to describe the wave dynamics - and required to feasibly compute at the scales of a field site - are too coarse to account for the relevant details of the three-dimensional (3D) flow field. To describe the relevant dynamics of both waves and currents, while retaining a model framework that can be applied at field scales, we propose a two-grid approach to solve the governing equations. With this approach, the vertical accelerations and non-hydrostatic pressures are resolved on a relatively coarse vertical grid (which is sufficient to accurately resolve the wave dynamics), whereas the horizontal velocities and turbulent stresses are resolved on a much finer subgrid (whose resolution is dictated by the vertical scale of the mean flows). This approach ensures that the discrete pressure Poisson equation - the solution of which dominates the computational effort - is evaluated on the coarse grid scale, thereby greatly improving efficiency, while providing a fine vertical resolution to resolve the vertical variation of the mean flow. This work presents the general methodology, and discusses the numerical implementation in the SWASH wave-flow model. Model predictions are compared with observations of three flume experiments to demonstrate that the subgrid approach captures both the nearshore evolution of the waves, and the wave-induced flows like the undertow profile and longshore current. The accuracy of the subgrid predictions is comparable to fully resolved 3D simulations - but at much reduced computational costs. The findings of this work thereby demonstrate that the subgrid approach has the potential to make 3D non-hydrostatic simulations feasible at the scale of a realistic coastal region.
1991-12-01
…Summary of Current or Potential Approaches: many approaches to context analysis were discussed by the group, including Causal Trees and SWOT…
Computation of ancestry scores with mixed families and unrelated individuals.
Zhou, Yi-Hui; Marron, James S; Wright, Fred A
2018-03-01
The issue of robustness to family relationships in computing genotype ancestry scores such as eigenvector projections has received increased attention in genetic association, and is particularly challenging when sets of both unrelated individuals and closely related family members are included. The current standard is to compute loadings (left singular vectors) using unrelated individuals and to compute projected scores for remaining family members. However, projected ancestry scores from this approach suffer from shrinkage toward zero. We consider two main novel strategies: (i) matrix substitution based on decomposition of a target family-orthogonalized covariance matrix, and (ii) using family-averaged data to obtain loadings. We illustrate the performance via simulations, including resampling from 1000 Genomes Project data, and analysis of a cystic fibrosis dataset. The matrix substitution approach has similar performance to the current standard, but is simple and uses only a genotype covariance matrix, while the family-average method shows superior performance. Our approaches are accompanied by novel ancillary approaches that provide considerable insight, including individual-specific eigenvalue scree plots. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
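The following numpy sketch illustrates only the "current standard" baseline described above, on simulated genotypes: singular vectors are computed from the unrelated individuals, and family members are projected onto them (these projected scores are the ones reported to shrink toward zero). It is not the authors' matrix-substitution or family-average method, and all sizes and frequencies are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated genotype matrix: rows = individuals, columns = SNPs (0/1/2 allele counts).
n_unrelated, n_family, n_snps = 200, 20, 1000
freqs = rng.uniform(0.05, 0.5, n_snps)
G_unrel = rng.binomial(2, freqs, size=(n_unrelated, n_snps)).astype(float)
G_fam = rng.binomial(2, freqs, size=(n_family, n_snps)).astype(float)

# Standardize using allele frequencies estimated from the unrelated set only.
p_hat = G_unrel.mean(axis=0) / 2.0
scale = np.sqrt(2.0 * p_hat * (1.0 - p_hat))
Zu = (G_unrel - 2.0 * p_hat) / scale
Zf = (G_fam - 2.0 * p_hat) / scale

# SVD on unrelated individuals; keep the top k axes of variation.
k = 5
U, s, Vt = np.linalg.svd(Zu, full_matrices=False)
scores_unrelated = U[:, :k] * s[:k]        # ancestry scores for unrelated samples
scores_family = Zf @ Vt[:k].T              # projected scores for family members

print(scores_unrelated.shape, scores_family.shape)
```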
Computational Protein Engineering: Bridging the Gap between Rational Design and Laboratory Evolution
Barrozo, Alexandre; Borstnar, Rok; Marloie, Gaël; Kamerlin, Shina Caroline Lynn
2012-01-01
Enzymes are tremendously proficient catalysts, which can be used as extracellular catalysts for a whole host of processes, from chemical synthesis to the generation of novel biofuels. For them to be more amenable to the needs of biotechnology, however, it is often necessary to be able to manipulate their physico-chemical properties in an efficient and streamlined manner, and, ideally, to be able to train them to catalyze completely new reactions. Recent years have seen an explosion of interest in different approaches to achieve this, both in the laboratory, and in silico. There remains, however, a gap between current approaches to computational enzyme design, which have primarily focused on the early stages of the design process, and laboratory evolution, which is an extremely powerful tool for enzyme redesign, but will always be limited by the vastness of sequence space combined with the low frequency for desirable mutations. This review discusses different approaches towards computational enzyme design and demonstrates how combining newly developed screening approaches that can rapidly predict potential mutation “hotspots” with approaches that can quantitatively and reliably dissect the catalytic step can bridge the gap that currently exists between computational enzyme design and laboratory evolution studies. PMID:23202907
NASA Astrophysics Data System (ADS)
Bertin, N.; Upadhyay, M. V.; Pradalier, C.; Capolungo, L.
2015-09-01
In this paper, we propose a novel full-field approach based on the fast Fourier transform (FFT) technique to compute mechanical fields in periodic discrete dislocation dynamics (DDD) simulations for anisotropic materials: the DDD-FFT approach. By coupling the FFT-based approach to the discrete continuous model, the present approach benefits from the high computational efficiency of the FFT algorithm, while allowing for a discrete representation of dislocation lines. It is demonstrated that the computational time associated with the new DDD-FFT approach is significantly lower than that of current DDD approaches when a large number of dislocation segments is involved, for both isotropic and anisotropic elasticity. Furthermore, for fine Fourier grids, the treatment of anisotropic elasticity comes at a similar computational cost to that of isotropic simulation. Thus, the proposed approach paves the way towards achieving scale transition from DDD to mesoscale plasticity, especially due to the method's ability to incorporate inhomogeneous elasticity.
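The DDD-FFT method itself couples dislocation eigenstrains to an anisotropic elasticity operator; the small sketch below only demonstrates the spectral kernel such FFT-based solvers exploit, namely solving a periodic field equation by pointwise division in Fourier space, using a 2D Poisson problem as a stand-in.

```python
import numpy as np

# Solve the periodic Poisson problem  lap(u) = f  on a 2D unit cell with FFTs.
# This only illustrates the spectral trick FFT-based mechanics solvers rely on;
# the real DDD-FFT method solves the anisotropic elasticity operator instead.
n = 128
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)     # sample zero-mean periodic source

k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)          # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                        # avoid division by zero for the mean mode

f_hat = np.fft.fft2(f)
u_hat = -f_hat / k2                                   # Fourier symbol of the Laplacian is -(kx^2 + ky^2)
u_hat[0, 0] = 0.0                                     # fix the (arbitrary) mean of the solution
u = np.real(np.fft.ifft2(u_hat))

# Spectral consistency check: the Laplacian of u should reproduce f.
lap_u = np.real(np.fft.ifft2(-k2 * np.fft.fft2(u)))
print("max residual:", np.abs(lap_u - f).max())
```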
Computationally driven drug discovery meeting-3 - Verona (Italy): 4 - 6th of March 2014.
Costantino, Gabriele
2014-12-01
The following article reports on the results and outcome of a meeting organised at the Aptuit Auditorium in Verona (Italy), which highlighted the current applications of state-of-the-art computational science to drug design in Italy. The meeting, which had > 100 people in attendance, consisted of over 40 presentations and included keynote lectures given by world-renowned speakers. The topics covered areas related to ligand- and structure-based design, library design and screening, and chemometrics. The meeting also stressed the importance of public-private collaboration and reviewed the different approaches to computationally driven drug discovery taken within academia and industry. The meeting helped define the current position of state-of-the-art computational drug discovery in Italy, pointing out both critical issues and assets. Focused meetings of this kind are valuable because they give a restricted yet representative community of fellow professionals the opportunity to discuss current methodological approaches in depth and to outline future perspectives for computationally driven drug discovery.
NASA Technical Reports Server (NTRS)
Hamilton, George S.; Williams, Jermaine C.
1998-01-01
This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber-coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate fidelity, usable tool which will run on current notebook computers.
Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L
2008-01-15
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.
2007-01-01
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812
A Research Roadmap for Computation-Based Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Mandelli, Diego; Joe, Jeffrey
2015-08-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.
Systematic Applications of Metabolomics in Metabolic Engineering
Dromms, Robert A.; Styczynski, Mark P.
2012-01-01
The goals of metabolic engineering are well-served by the biological information provided by metabolomics: information on how the cell is currently using its biochemical resources is perhaps one of the best ways to inform strategies to engineer a cell to produce a target compound. Using the analysis of extracellular or intracellular levels of the target compound (or a few closely related molecules) to drive metabolic engineering is quite common. However, there is surprisingly little systematic use of metabolomics datasets, which simultaneously measure hundreds of metabolites rather than just a few, for that same purpose. Here, we review the most common systematic approaches to integrating metabolite data with metabolic engineering, with emphasis on existing efforts to use whole-metabolome datasets. We then review some of the most common approaches for computational modeling of cell-wide metabolism, including constraint-based models, and discuss current computational approaches that explicitly use metabolomics data. We conclude with discussion of the broader potential of computational approaches that systematically use metabolomics data to drive metabolic engineering. PMID:24957776
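As a concrete toy example of the constraint-based modeling mentioned above, the sketch below solves a minimal flux balance analysis problem with scipy: maximize a product-export flux subject to the steady-state balance S·v = 0 and flux bounds. The network, reaction names, and bounds are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy metabolic network (invented for illustration):
#   R1:  -> A      (substrate uptake, capped)
#   R2: A -> B
#   R3: B ->       (target product export; the objective)
#   R4: A ->       (byproduct drain)
# Stoichiometric matrix: rows = metabolites [A, B], columns = reactions.
S = np.array([[ 1.0, -1.0,  0.0, -1.0],
              [ 0.0,  1.0, -1.0,  0.0]])

c = np.array([0.0, 0.0, -1.0, 0.0])          # linprog minimizes, so negate to maximize v3
bounds = [(0.0, 10.0), (0.0, None), (0.0, None), (0.0, None)]

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)              # expect all uptake routed to the product
print("maximal product flux:", -res.fun)
```

Metabolomics data would enter such a model as additional constraints or objectives (e.g., bounds informed by measured metabolite levels), which is the kind of integration the review surveys.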
NASA Technical Reports Server (NTRS)
Hanks, G. W.; Shomber, H. A.; Dethman, H. A.; Gratzer, L. B.; Maeshiro, A.; Gangsaas, D.; Blight, J. D.; Buchan, S. M.; Crumb, C. B.; Dorwart, R. J.
1981-01-01
An active controls technology (ACT) system architecture was selected based on current technology system elements and optimal control theory was evaluated for use in analyzing and synthesizing ACT multiple control laws. The system selected employs three redundant computers to implement all of the ACT functions, four redundant smaller computers to implement the crucial pitch-augmented stability function, and a separate maintenance and display computer. The reliability objective of a probability of crucial-function failure of less than 1 × 10^-9 per 1-hour flight can be met with current technology system components, if the software is assumed fault-free and coverage approaching 1.0 can be provided. The optimal control theory approach to ACT control law synthesis yielded comparable control law performance much more systematically and directly than the classical s-domain approach. The ACT control law performance, although somewhat degraded by the inclusion of representative nonlinearities, remained quite effective. Certain high-frequency gust-load alleviation functions may require increased surface rate capability.
Computational Aeroelastic Modeling of Airframes and TurboMachinery: Progress and Challenges
NASA Technical Reports Server (NTRS)
Bartels, R. E.; Sayma, A. I.
2006-01-01
Computational analyses such as computational fluid dynamics and computational structural dynamics have made major advances toward maturity as engineering tools. Computational aeroelasticity is the integration of these disciplines. As computational aeroelasticity matures it too finds an increasing role in the design and analysis of aerospace vehicles. This paper presents a survey of the current state of computational aeroelasticity with a discussion of recent research, success and continuing challenges in its progressive integration into multidisciplinary aerospace design. This paper approaches computational aeroelasticity from the perspective of the two main areas of application: airframe and turbomachinery design. An overview will be presented of the different prediction methods used for each field of application. Differing levels of nonlinear modeling will be discussed with insight into accuracy versus complexity and computational requirements. Subjects will include current advanced methods (linear and nonlinear), nonlinear flow models, use of order reduction techniques and future trends in incorporating structural nonlinearity. Examples in which computational aeroelasticity is currently being integrated into the design of airframes and turbomachinery will be presented.
ERIC Educational Resources Information Center
Ragin, Tracey B.
2013-01-01
Fundamental computer skills are vital in the current technology-driven society. The purpose of this study was to investigate the development needs of students at a rural community college in the Southeast who lacked the computer literacy skills required in a basic computer course. Guided by Greenwood's pragmatic approach as a reformative force in…
ERIC Educational Resources Information Center
Heslin, J. Alexander, Jr.
In senior-level undergraduate research courses in Computer Information Systems (CIS), students are required to read and assimilate a large volume of current research literature. One course objective is to demonstrate to the student that there are patterns or models or paradigms of research. A new approach in identifying research paradigms is…
Future in biomolecular computation
NASA Astrophysics Data System (ADS)
Wimmer, E.
1988-01-01
Large-scale computations for biomolecules are dominated by three levels of theory: rigorous quantum mechanical calculations for molecules with up to about 30 atoms, semi-empirical quantum mechanical calculations for systems with up to several hundred atoms, and force-field molecular dynamics studies of biomacromolecules with 10,000 atoms and more, including surrounding solvent molecules. It can be anticipated that increased computational power will allow the treatment of larger systems of ever growing complexity. Due to the scaling of the computational requirements with increasing number of atoms, the force-field approaches will benefit the most from increased computational power. On the other hand, progress in methodologies such as density functional theory will enable us to treat larger systems on a fully quantum mechanical level, and a combination of molecular dynamics and quantum mechanics can be envisioned. One of the greatest challenges in biomolecular computation is the protein folding problem. It is unclear at this point whether an approach with current methodologies will lead to a satisfactory answer or if unconventional, new approaches will be necessary. In any event, due to the complexity of biomolecular systems, a hierarchy of approaches will have to be established and used in order to capture the wide ranges of length-scales and time-scales involved in biological processes. In terms of hardware development, speed and power of computers will increase while the price/performance ratio will become more and more favorable. Parallelism can be anticipated to become an integral architectural feature in a range of computers. It is unclear at this point how quickly massively parallel systems will become easy enough to use so that new methodological developments can be pursued on such computers. Current trends show that distributed processing, such as the combination of convenient graphics workstations and powerful general-purpose supercomputers, will lead to a new style of computing in which the calculations are monitored and manipulated as they proceed. The combination of a numeric approach with artificial-intelligence approaches can be expected to open up entirely new possibilities. Ultimately, the most exciting aspect of the future in biomolecular computing will be the unexpected discoveries.
Integrating Human and Computer Intelligence. Technical Report No. 32.
ERIC Educational Resources Information Center
Pea, Roy D.
This paper explores the thesis that advances in computer applications and artificial intelligence have important implications for the study of development and learning in psychology. Current approaches to the use of computers as devices for problem solving, reasoning, and thinking--i.e., expert systems and intelligent tutoring systems--are…
Report on the Total System Computer Program for Medical Libraries.
ERIC Educational Resources Information Center
Divett, Robert T.; Jones, W. Wayne
The objective of this project was to develop an integrated computer program for the total operations of a medical library including acquisitions, cataloging, circulation, reference, a computer catalog, serials controls, and current awareness services. The report describes two systems approaches: the batch system and the terminal system. The batch…
CFD Methods and Tools for Multi-Element Airfoil Analysis
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; George, Michael W. (Technical Monitor)
1995-01-01
This lecture will discuss the computational tools currently available for high-lift multi-element airfoil analysis. It will present an overview of a number of different numerical approaches, their current capabilities, shortcomings, and computational costs. The lecture will be limited to viscous methods, including inviscid/boundary layer coupling methods, and incompressible and compressible Reynolds-averaged Navier-Stokes methods. Both structured and unstructured grid generation approaches will be presented. Two different structured grid procedures are outlined, one using multi-block patched grids, the other using overset chimera grids. Turbulence and transition modeling will be discussed.
Mathematical modeling and computational prediction of cancer drug resistance.
Sun, Xiaoqiang; Hu, Bin
2017-06-23
Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of computational methods for studying drug resistance, including inferring drug-induced signaling networks, multiscale modeling, drug combinations and precision medicine. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
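To make one of the listed model classes concrete, here is a minimal, hypothetical ordinary-differential-equation sketch of drug-sensitive and drug-resistant tumor subpopulations under constant drug exposure. All parameter values are invented; the point is only to show how such a cellular-dynamics model is set up and integrated.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-compartment model: sensitive (S) and resistant (R) cells.
# Logistic growth with a shared carrying capacity; the drug kills only S cells,
# and S cells convert to R cells at a small rate (acquired resistance).
r_s, r_r = 0.5, 0.35      # growth rates (1/day)
K = 1e9                   # carrying capacity (cells)
d_drug = 0.6              # drug-induced death rate of sensitive cells (1/day)
mu = 1e-4                 # sensitive -> resistant conversion rate (1/day)

def rhs(t, y):
    S, R = y
    total = S + R
    dS = r_s * S * (1 - total / K) - d_drug * S - mu * S
    dR = r_r * R * (1 - total / K) + mu * S
    return [dS, dR]

sol = solve_ivp(rhs, (0.0, 120.0), [1e7, 1e2], dense_output=True)
t = np.linspace(0.0, 120.0, 7)
S, R = sol.sol(t)
for ti, si, ri in zip(t, S, R):
    print(f"day {ti:5.1f}: sensitive {si:10.3e}  resistant {ri:10.3e}")
```

Under these assumed rates the sensitive population collapses while the resistant clone expands, the qualitative relapse pattern such models are used to explore.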
Toward A Simulation-Based Tool for the Treatment of Vocal Fold Paralysis
Mittal, Rajat; Zheng, Xudong; Bhardwaj, Rajneesh; Seo, Jung Hee; Xue, Qian; Bielamowicz, Steven
2011-01-01
Advances in high-performance computing are enabling a new generation of software tools that employ computational modeling for surgical planning. Surgical management of laryngeal paralysis is one area where such computational tools could have a significant impact. The current paper describes a comprehensive effort to develop a software tool for planning medialization laryngoplasty where a prosthetic implant is inserted into the larynx in order to medialize the paralyzed vocal fold (VF). While this is one of the most common procedures used to restore voice in patients with VF paralysis, it has a relatively high revision rate, and the tool being developed is expected to improve surgical outcomes. This software tool models the biomechanics of airflow-induced vibration in the human larynx and incorporates sophisticated approaches for modeling the turbulent laryngeal flow, the complex dynamics of the VFs, as well as the production of voiced sound. The current paper describes the key elements of the modeling approach, presents computational results that demonstrate the utility of the approach and also describes some of the limitations and challenges. PMID:21556320
ERIC Educational Resources Information Center
Eow, Yee Leng; Ali, Wan Zah bte Wan; Mahmud, Rosnaini bt.; Baki, Roselan
2010-01-01
Creativity is an important entity in developing human capital while computer games are the current generation's contemporary tool. This study focused on the teaching of computer games development in order to enhance the creative perception of secondary school children. The study applied randomised subjects, with control group experimental design,…
Parental Perceptions and Recommendations of Computing Majors: A Technology Acceptance Model Approach
ERIC Educational Resources Information Center
Powell, Loreen; Wimmer, Hayden
2017-01-01
Currently, there are more technology-related jobs than there are graduates to fill them. The need to understand user acceptance of computing degrees is the first step in increasing enrollment in computing fields. Additionally, valid measurement scales for predicting user acceptance of Information Technology degree programs are required. The majority…
Computational intelligence approaches for pattern discovery in biological systems.
Fogel, Gary B
2008-07-01
Biology, chemistry and medicine are faced by tremendous challenges caused by an overwhelming amount of data and the need for rapid interpretation. Computational intelligence (CI) approaches such as artificial neural networks, fuzzy systems and evolutionary computation are being used with increasing frequency to contend with this problem, in light of noise, non-linearity and temporal dynamics in the data. Such methods can be used to develop robust models of processes either on their own or in combination with standard statistical approaches. This is especially true for database mining, where modeling is a key component of scientific understanding. This review provides an introduction to current CI methods, their application to biological problems, and concludes with a commentary about the anticipated impact of these approaches in bioinformatics.
Theoretical Framework for Interaction Game Design
2016-05-19
modeling. We take a data-driven quantitative approach to understand conversational behaviors by measuring conversational behaviors using advanced sensing...current state of the art, human computing is considered to be a reasonable approach to break through the current limitation. To solicit high quality and...proper resources in conversation to enable smooth and effective interaction. The last technique is about conversation measurement, analysis, and
Sparse approximation of currents for statistics on curves and surfaces.
Durrleman, Stanley; Pennec, Xavier; Trouvé, Alain; Ayache, Nicholas
2008-01-01
Computing, processing, and visualizing statistics on shapes such as curves or surfaces is a real challenge, with applications ranging from medical image analysis to computational geometry. Modelling such geometrical primitives with currents avoids both feature-based approaches and point-correspondence methods. This framework has proved powerful for registering brain surfaces and measuring geometrical invariants. However, while state-of-the-art methods perform pairwise registrations efficiently, new numerical schemes are required to process groupwise statistics, because the complexity increases as the database grows. Statistics such as the mean and principal modes of a set of shapes often have a heavy and highly redundant representation. We therefore propose to find an adapted basis on which the mean and principal modes have a sparse decomposition. Besides the computational improvement, this sparse representation offers a way to visualize and interpret statistics on currents. Experiments show the relevance of the approach on 34 sets of 70 sulcal lines and on 50 sets of 10 meshes of deep brain structures.
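The sketch below shows a generic greedy matching pursuit in numpy, the kind of sparse decomposition the approach relies on. It operates on plain vectors with a synthetic random dictionary rather than on currents and shape dictionaries, so it only illustrates the selection loop, not the authors' adapted-basis construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic over-complete dictionary (columns = atoms) and a signal built
# from 3 atoms plus a little noise.
n_dim, n_atoms = 64, 256
D = rng.normal(size=(n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
true_idx = rng.choice(n_atoms, size=3, replace=False)
signal = D[:, true_idx] @ np.array([2.0, -1.5, 1.0]) + 0.01 * rng.normal(size=n_dim)

def matching_pursuit(D, x, n_iter=10, tol=1e-3):
    """Greedy matching pursuit: repeatedly pick the atom most correlated
    with the residual and subtract its contribution."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        j = np.argmax(np.abs(corr))
        coeffs[j] += corr[j]
        residual -= corr[j] * D[:, j]
        if np.linalg.norm(residual) < tol * np.linalg.norm(x):
            break
    return coeffs, residual

coeffs, residual = matching_pursuit(D, signal, n_iter=20)
print("selected atoms:", np.nonzero(coeffs)[0], "(true:", sorted(true_idx), ")")
print("relative residual:", np.linalg.norm(residual) / np.linalg.norm(signal))
```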
Bioinformatics approaches to predict target genes from transcription factor binding data.
Essebier, Alexandra; Lamprecht, Marnie; Piper, Michael; Bodén, Mikael
2017-12-01
Transcription factors regulate gene expression and play an essential role in development by maintaining proliferative states, driving cellular differentiation and determining cell fate. Transcription factors are capable of regulating multiple genes over potentially long distances making target gene identification challenging. Currently available experimental approaches to detect distal interactions have multiple weaknesses that have motivated the development of computational approaches. Although an improvement over experimental approaches, existing computational approaches are still limited in their application, with different weaknesses depending on the approach. Here, we review computational approaches with a focus on data dependency, cell type specificity and usability. With the aim of identifying transcription factor target genes, we apply available approaches to typical transcription factor experimental datasets. We show that approaches are not always capable of annotating all transcription factor binding sites; binding sites should be treated disparately; and a combination of approaches can increase the biological relevance of the set of genes identified as targets. Copyright © 2017 Elsevier Inc. All rights reserved.
Computation of elementary modes: a unifying framework and the new binary approach
Gagneur, Julien; Klamt, Steffen
2004-01-01
Background Metabolic pathway analysis has been recognized as a central approach to the structural analysis of metabolic networks. The concept of elementary (flux) modes provides a rigorous formalism to describe and assess pathways and has proven to be valuable for many applications. However, computing elementary modes is a hard computational task. Recent years have seen a proliferation of algorithms dedicated to it, calling for a summarizing point of view and continued improvement of the current methods. Results We show that computing the set of elementary modes is equivalent to computing the set of extreme rays of a convex cone. This standard mathematical representation provides a unified framework that encompasses the most prominent algorithmic methods that compute elementary modes and allows a clear comparison between them. Taking lessons from this benchmark, we here introduce a new method, the binary approach, which computes the elementary modes as binary patterns of participating reactions from which the respective stoichiometric coefficients can be computed in a post-processing step. We implemented the binary approach in FluxAnalyzer 5.1, a software that is free for academics. The binary approach decreases the memory demand up to 96% without loss of speed, giving the most efficient method available for computing elementary modes to date. Conclusions The equivalence between elementary modes and extreme ray computations offers opportunities for employing tools from polyhedral computation for metabolic pathway analysis. The new binary approach introduced herein was derived from this general theoretical framework and facilitates the computation of elementary modes in considerably larger networks. PMID:15527509
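A small numpy illustration of the post-processing step described above: given a candidate binary pattern of participating reactions, the flux coefficients are recovered from the null space of the stoichiometric matrix restricted to those reactions. The toy network is invented, and a real implementation would of course also have to enumerate and test the binary patterns themselves.

```python
import numpy as np

# Toy stoichiometric matrix (rows = internal metabolites, columns = reactions).
#   R1:  -> A,  R2: A -> B,  R3: B -> ,  R4: A -> C,  R5: C ->
S = np.array([[ 1, -1,  0, -1,  0],    # A
              [ 0,  1, -1,  0,  0],    # B
              [ 0,  0,  0,  1, -1]],   # C
             dtype=float)

def fluxes_from_pattern(S, pattern):
    """Recover flux coefficients for a binary pattern of active reactions.
    Returns None if the pattern does not define a single valid flux mode."""
    active = np.flatnonzero(pattern)
    # Null space of the submatrix restricted to the active reactions.
    _, s, Vt = np.linalg.svd(S[:, active])
    s_padded = np.concatenate([s, np.zeros(len(active) - len(s))])
    if (s_padded < 1e-10).sum() != 1:
        return None                     # pattern is under- or over-determined
    v = Vt[-1]
    if np.all(v <= 1e-10):
        v = -v                          # flip sign if the null vector is non-positive
    if np.any(v < -1e-10):
        return None                     # irreversible fluxes must be non-negative
    full = np.zeros(S.shape[1])
    full[active] = v / np.max(np.abs(v))
    return full

print(fluxes_from_pattern(S, [1, 1, 1, 0, 0]))   # A -> B pathway
print(fluxes_from_pattern(S, [1, 0, 0, 1, 1]))   # A -> C pathway
```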
Approaches to Studying and Students' Use of a Computer Supported Learning Environment
ERIC Educational Resources Information Center
Foster, Jonathan; Lin, Angela
2007-01-01
Although studies of students' study approaches in face-to-face learning environments are commonplace, studies investigating the role of students' study approaches in online learning environments are currently less common. This paper presents the findings of a survey aimed at investigating the relationship between students' approaches to…
An overview of computer vision
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1982-01-01
An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.
Toward a computational psycholinguistics of reference production.
van Deemter, Kees; Gatt, Albert; van Gompel, Roger P G; Krahmer, Emiel
2012-04-01
This article introduces the topic "Production of Referring Expressions: Bridging the Gap between Computational and Empirical Approaches to Reference" of the journal Topics in Cognitive Science. We argue that computational and psycholinguistic approaches to reference production can benefit from closer interaction, and that this is likely to result in the construction of algorithms that differ markedly from the ones currently known in the computational literature. We focus particularly on determinism, the feature of existing algorithms that is perhaps most clearly at odds with psycholinguistic results, discussing how future algorithms might include non-determinism, and how new psycholinguistic experiments could inform the development of such algorithms. Copyright © 2012 Cognitive Science Society, Inc.
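As a toy illustration of the determinism issue, the sketch below implements a simple incremental attribute-selection procedure for referring expressions in which the preference order over attributes is shuffled on each call, so repeated runs can yield different but still distinguishing descriptions. The domain, entities, and attributes are invented, and the algorithm is a generic sketch, not one proposed in the article.

```python
import random

# Toy domain: each entity is a dict of attribute values (invented example).
domain = {
    "e1": {"type": "dog", "color": "brown", "size": "small"},
    "e2": {"type": "dog", "color": "black", "size": "large"},
    "e3": {"type": "cat", "color": "brown", "size": "small"},
}

def refer(target, domain, rng):
    """Incrementally add attributes that rule out distractors, scanning the
    attributes in a randomized preference order (the non-deterministic part)."""
    attributes = list(domain[target].keys())
    rng.shuffle(attributes)
    distractors = {e for e in domain if e != target}
    description = {}
    for attr in attributes:
        value = domain[target][attr]
        ruled_out = {e for e in distractors if domain[e].get(attr) != value}
        if ruled_out:
            description[attr] = value
            distractors -= ruled_out
        if not distractors:
            break
    return description if not distractors else None

rng = random.Random()
for _ in range(3):
    # e.g. {'type': 'dog', 'color': 'brown'} on one run, {'size': 'small', 'type': 'dog'} on another
    print(refer("e1", domain, rng))
```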
Development and Validation of a Computational Model for Androgen Receptor Activity
Testing thousands of chemicals to identify potential androgen receptor (AR) agonists or antagonists would cost millions of dollars and take decades to complete using current validated methods. High-throughput in vitro screening (HTS) and computational toxicology approaches can mo...
Prediction of Patient-Controlled Analgesic Consumption: A Multimodel Regression Tree Approach.
Hu, Yuh-Jyh; Ku, Tien-Hsiung; Yang, Yu-Hung; Shen, Jia-Ying
2018-01-01
Several factors contribute to individual variability in postoperative pain, therefore, individuals consume postoperative analgesics at different rates. Although many statistical studies have analyzed postoperative pain and analgesic consumption, most have identified only the correlation and have not subjected the statistical model to further tests in order to evaluate its predictive accuracy. In this study involving 3052 patients, a multistrategy computational approach was developed for analgesic consumption prediction. This approach uses data on patient-controlled analgesia demand behavior over time and combines clustering, classification, and regression to mitigate the limitations of current statistical models. Cross-validation results indicated that the proposed approach significantly outperforms various existing regression methods. Moreover, a comparison between the predictions by anesthesiologists and medical specialists and those of the computational approach for an independent test data set of 60 patients further evidenced the superiority of the computational approach in predicting analgesic consumption because it produced markedly lower root mean squared errors.
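A hedged scikit-learn sketch of the general multistrategy idea (cluster patients by their early demand behavior, then fit a separate regressor per cluster and route new patients accordingly). The data are synthetic and the specific features, clustering, and regressor are stand-ins, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic patients: early PCA demand counts (first 4 hours) plus covariates;
# target = total analgesic consumption.  Purely illustrative data.
n = 600
early_demand = rng.poisson(lam=rng.uniform(0.5, 4.0, size=(n, 1)), size=(n, 4))
age = rng.uniform(20, 80, size=(n, 1))
weight = rng.uniform(45, 110, size=(n, 1))
X = np.hstack([early_demand, age, weight])
y = 5.0 * early_demand.sum(axis=1) + 0.3 * weight[:, 0] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: cluster patients by early demand behaviour.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr[:, :4])

# Step 2: fit one regressor per cluster.
models = {}
for c in range(3):
    mask = km.labels_ == c
    models[c] = RandomForestRegressor(random_state=0).fit(X_tr[mask], y_tr[mask])

# Step 3: route each test patient to its cluster's model.
clusters_te = km.predict(X_te[:, :4])
pred = np.array([models[c].predict(x[None, :])[0] for c, x in zip(clusters_te, X_te)])
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"per-cluster model RMSE: {rmse:.2f}")
```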
Generative Strategies and Computer-Based Instruction for Teaching Adult Students
ERIC Educational Resources Information Center
Knowlton, Dave S.; Simms, Julia
2009-01-01
Educational interventions that are currently in vogue in higher education settings are based upon constructivist approaches, whereby students learn content within the context of authentic activities and problem-based scenarios. Certainly these approaches have value, but proponents of these approaches have been somewhat successful in convincing…
Computer-aided design development transition for IPAD environment
NASA Technical Reports Server (NTRS)
Owens, H. G.; Mock, W. D.; Mitchell, J. C.
1980-01-01
The relationship of federally sponsored computer-aided design/computer-aided manufacturing (CAD/CAM) programs to the aircraft life cycle design process, an overview of NAAD's CAD development program, an evaluation of the CAD design process, a discussion of the current computing environment within which NAAD is developing its CAD system, some of the advantages/disadvantages of the NAAD-IPAD approach, and CAD developments during transition into the IPAD system are discussed.
Prediction of Skin Sensitization Potency Using Machine Learning Approaches
Replacing animal tests currently used for regulatory hazard classification of skin sensitizers is one of ICCVAM’s top priorities. Accordingly, U.S. federal agency scientists are developing and evaluating computational approaches to classify substances as sensitizers or nons...
NASA Technical Reports Server (NTRS)
Drake, Jeffrey T.; Prasad, Nadipuram R.
1999-01-01
This paper surveys recent advances in communications that utilize soft computing approaches to phase synchronization. Soft computing, as opposed to hard computing, is a collection of complementary methodologies that act in producing the most desirable control, decision, or estimation strategies. Recently, the communications area has explored the use of the principal constituents of soft computing, namely, fuzzy logic, neural networks, and genetic algorithms, for modeling, control, and most recently for the estimation of phase in phase-coherent communications. If the receiver in a digital communications system is phase-coherent, as is often the case, phase synchronization is required. Synchronization thus requires estimation and/or control at the receiver of an unknown or random phase offset.
Insights into enzymatic halogenation from computational studies
Senn, Hans M.
2014-01-01
The halogenases are a group of enzymes that have only come to the fore over the last 10 years thanks to the discovery and characterization of several novel representatives. They have revealed the fascinating variety of distinct chemical mechanisms that nature utilizes to activate halogens and introduce them into organic substrates. Computational studies using a range of approaches have already elucidated many details of the mechanisms of these enzymes, often in synergistic combination with experiment. This Review summarizes the main insights gained from these studies. It also seeks to identify open questions that are amenable to computational investigations. The studies discussed herein serve to illustrate some of the limitations of the current computational approaches and the challenges encountered in computational mechanistic enzymology. PMID:25426489
Computer animation challenges for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine
2012-07-01
Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
Load Balancing Strategies for Multiphase Flows on Structured Grids
NASA Astrophysics Data System (ADS)
Olshefski, Kristopher; Owkes, Mark
2017-11-01
The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are only designed for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated, including brute-force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies is tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
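The toy sketch below illustrates the imbalance problem on a structured 1D grid under invented costs: interface cells are assumed to cost several times more than bulk cells, and cutting the grid at equal cumulative work rather than equal cell counts reduces the slowest rank's load. It is only a schematic of the problem, not the NGA implementation.

```python
import numpy as np

# 1D structured grid: most cells are cheap bulk cells, a band of cells near a
# gas-liquid interface costs 8x more (hypothetical cost ratio).
n_cells, n_ranks = 1024, 8
cost = np.ones(n_cells)
cost[300:420] = 8.0

def slowest_rank(bounds, cost):
    """Work of the most heavily loaded rank for contiguous partitions."""
    return max(cost[a:b].sum() for a, b in zip(bounds[:-1], bounds[1:]))

# Naive partition: equal number of cells per rank.
naive = np.linspace(0, n_cells, n_ranks + 1).astype(int)

# Balanced partition: cut the cumulative cost curve into equal-work chunks.
cum = np.concatenate([[0.0], np.cumsum(cost)])
targets = np.linspace(0.0, cum[-1], n_ranks + 1)
balanced = np.searchsorted(cum, targets)
balanced[0], balanced[-1] = 0, n_cells

print("max rank load, naive    :", slowest_rank(naive, cost))
print("max rank load, balanced :", slowest_rank(balanced, cost))
```

The simulation time is set by the slowest rank, so the reduction in the maximum load is what translates into the wall-clock savings the abstract reports.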
ERIC Educational Resources Information Center
Sawyer, Janet; Penman, Joy
2012-01-01
This study investigated the pattern of teaching of healthy computing skills to high school students in South Australia. A survey approach was used to collect data, specifically to determine the emphasis placed by schools on ergonomics that relate to computer use. Participating schools were recruited through the Department for Education and Child…
ERIC Educational Resources Information Center
Ratnabalasuriar, Sheruni
2012-01-01
This study explores experiences of women as they pursue post-secondary computing education in various contexts. Using in-depth interviews, the current study employs qualitative methods and draws from an intersectional approach to focus on how the various barriers emerge for women in different types of computing cultures. In-depth interviews with…
Teachers' Changing Roles in Computer Assisted Roles in Kenyan Secondary Schools
ERIC Educational Resources Information Center
Tanui, Edward K.; Kiboss, Joel K.; Walaba, Aggrey A.; Nassiuma, Dankit
2008-01-01
The use of computer technology in Kenyan schools is a relatively new approach that is currently being included in the school curriculum. The introduction of computer technology for use in teaching does not always seem to be accepted outright by most teachers. The purpose of the study reported in this paper was to investigate the teachers' changing…
Advanced flight computer. Special study
NASA Technical Reports Server (NTRS)
Coo, Dennis
1995-01-01
This report documents a special study to define a 32-bit radiation hardened, SEU tolerant flight computer architecture, and to investigate current or near-term technologies and development efforts that contribute to the Advanced Flight Computer (AFC) design and development. An AFC processing node architecture is defined. Each node may consist of a multi-chip processor as needed. The modular, building block approach uses VLSI technology and packaging methods that demonstrate a feasible AFC module in 1998 that meets the AFC goals. The defined architecture and approach demonstrate a clear low-risk, low-cost path to the 1998 production goal, with intermediate prototypes in 1996.
Supporting Blended-Learning: Tool Requirements and Solutions with OWLish
ERIC Educational Resources Information Center
Álvarez, Ainhoa; Martín, Maite; Fernández-Castro, Isabel; Urretavizcaya, Maite
2016-01-01
Currently, most of the educational approaches applied to higher education combine face-to-face (F2F) and computer-mediated instruction in a Blended-Learning (B-Learning) approach. One of the main challenges of these approaches is fully integrating the traditional brick-and-mortar classes with online learning environments in an efficient and…
General Information: Chapman Conference on Magnetospheric Current Systems
NASA Technical Reports Server (NTRS)
Spicer, Daniel S.; Curtis, Steven
1999-01-01
The goal of this conference is to address recent achievements of observational, computational, theoretical, and modeling studies, and to foster communication among people working with different approaches. Electric current systems play an important role in the energetics of the magnetosphere. This conference will target outstanding issues related to magnetospheric current systems, placing its emphasis on interregional processes and driving mechanisms of current systems.
Magnetic field oscillations of the critical current in long ballistic graphene Josephson junctions
NASA Astrophysics Data System (ADS)
Rakyta, Péter; Kormányos, Andor; Cserti, József
2016-06-01
We study the Josephson current in long ballistic superconductor-monolayer graphene-superconductor junctions. As a first step, we have developed an efficient computational approach to calculate the Josephson current in tight-binding systems. This approach can be particularly useful in the long-junction limit, which has hitherto attracted less theoretical interest but has recently become experimentally relevant. We use this computational approach to study the dependence of the critical current on the junction geometry, doping level, and an applied perpendicular magnetic field B. In zero magnetic field we find a good qualitative agreement with the recent experiment of M. Ben Shalom et al. [Nat. Phys. 12, 318 (2016), 10.1038/nphys3592] for the length dependence of the critical current. For highly doped samples our numerical calculations show a broad agreement with the results of the quasiclassical formalism. In this case the critical current exhibits Fraunhofer-like oscillations as a function of B. However, for lower doping levels, where the cyclotron orbit becomes comparable to the characteristic geometrical length scales of the system, deviations from the results of the quasiclassical formalism appear. We argue that due to the exceptional tunability and long mean free path of graphene systems a new regime can be explored where geometrical and dynamical effects are equally important to understand the magnetic field dependence of the critical current.
Human-computer interaction in multitask situations
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1977-01-01
Human-computer interaction in multitask decisionmaking situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.
Computer Aided Enzyme Design and Catalytic Concepts
Frushicheva, Maria P.; Mills, Matthew J. L.; Schopf, Patrick; Singh, Manoj K.; Warshel, Arieh
2014-01-01
Gaining a deeper understanding of enzyme catalysis is of great practical and fundamental importance. Over the years it has become clear that despite advances made in experimental mutational studies, a quantitative understanding of enzyme catalysis will not be possible without the use of computer modeling approaches. While we believe that electrostatic preorganization is by far the most important catalytic factor, convincing the wider scientific community of this may require the demonstration of effective rational enzyme design. Here we make the point that the main current advances in enzyme design are basically advances in directed evolution and that computer aided enzyme design must involve approaches that can reproduce catalysis in well-defined test cases. Such an approach is provided by the empirical valence bond method. PMID:24814389
Microfabrication Technology for Photonics
1990-06-01
specifically addressed by a "folded," parallel architecture currently being proposed by A. Huang(35), who calls it "Computational Origami." [...] A. Huang, "Computational Origami," U.S. Patent Pending; H.M. Lu, "Computational Origami: A Geometric Approach to Regular Multiprocessing," MIT Master's Thesis in…
Adaptive Device Context Based Mobile Learning Systems
ERIC Educational Resources Information Center
Pu, Haitao; Lin, Jinjiao; Song, Yanwei; Liu, Fasheng
2011-01-01
Mobile learning is e-learning delivered through mobile computing devices, which represents the next stage of computer-aided, multi-media based learning. Therefore, mobile learning is transforming the way of traditional education. However, as most current e-learning systems and their contents are not suitable for mobile devices, an approach for…
NASA Astrophysics Data System (ADS)
Yoon, S.
2016-12-01
To define a geodetic reference frame using GPS data collected by the Continuously Operating Reference Stations (CORS) network, historical GPS data needs to be reprocessed regularly. Reprocessing GPS data collected by up to 2000 CORS sites over the last two decades requires substantial computational resources. At the National Geodetic Survey (NGS), one reprocessing was completed in 2011, and the second reprocessing is currently under way. For the first reprocessing effort, in-house computing resources were utilized. In the current second reprocessing effort, an outsourced cloud computing platform is being utilized. In this presentation, the data processing strategy at NGS is outlined, as well as the effort to parallelize the data processing procedure in order to maximize the benefit of cloud computing. The time and cost savings realized by utilizing the cloud computing approach will also be discussed.
A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations
NASA Technical Reports Server (NTRS)
Dydson, Roger W.; Goodrich, John W.
2000-01-01
Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128 bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient when compared to previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving varying wavelength scales which occur in noise generation simulations. Finally, the sources of round-off error which affect the very high order methods are examined, and remedies are provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
Plank, Gernot; Zhou, Lufang; Greenstein, Joseph L; Cortassa, Sonia; Winslow, Raimond L; O'Rourke, Brian; Trayanova, Natalia A
2008-01-01
Computer simulations of electrical behaviour in the whole ventricles have become commonplace during the last few years. The goals of this article are (i) to review the techniques that are currently employed to model cardiac electrical activity in the heart, discussing the strengths and weaknesses of the various approaches, and (ii) to implement a novel modelling approach, based on physiological reasoning, that lifts some of the restrictions imposed by current state-of-the-art ionic models. To illustrate the latter approach, the present study uses a recently developed ionic model of the ventricular myocyte that incorporates an excitation–contraction coupling and mitochondrial energetics model. A paradigm to bridge the vastly disparate spatial and temporal scales, from subcellular processes to the entire organ, and from sub-microseconds to minutes, is presented. Achieving sufficient computational efficiency is the key to success in the quest to develop multiscale realistic models that are expected to lead to better understanding of the mechanisms of arrhythmia induction following failure at the organelle level, and ultimately to the development of novel therapeutic applications. PMID:18603526
Viscous Incompressible Flow Computations for 3-D Steady and Unsteady Flows
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2001-01-01
This viewgraph presentation gives an overview of viscous incompressible flow computations for three-dimensional steady and unsteady flows. Details are given on the use of computational fluid dynamics (CFD) as an engineering tool, solution methods for incompressible Navier-Stokes equations, numerical and physical characteristics of the primitive variable approach, and the role of CFD in the past and in current engineering and research applications.
Novel 3-D Computer Model Can Help Predict Pathogens’ Roles in Cancer | Poster
To understand how bacterial and viral infections contribute to human cancers, four NCI at Frederick scientists turned not to the lab bench, but to a computer. The team has created the world’s first—and currently, only—3-D computational approach for studying interactions between pathogen proteins and human proteins based on a molecular adaptation known as interface mimicry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasenkamp, Daren; Sim, Alexander; Wehner, Michael
Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running, it can make progress, whereas the MPI analysis job fails as soon as one MPI node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.
The UK Human Genome Mapping Project online computing service.
Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W
1992-04-01
This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.
Generic, Type-Safe and Object Oriented Computer Algebra Software
NASA Astrophysics Data System (ADS)
Kredel, Heinz; Jolly, Raphael
Advances in computer science, in particular object oriented programming, and software engineering have had little practical impact on computer algebra systems in the last 30 years. The software design of existing systems is still dominated by ad-hoc memory management, weakly typed algorithm libraries and proprietary domain specific interactive expression interpreters. We discuss a modular approach to computer algebra software: usage of state-of-the-art memory management and run-time systems (e.g. JVM), usage of strongly typed, generic, object oriented programming languages (e.g. Java), and usage of general purpose, dynamic interactive expression interpreters (e.g. Python). To illustrate the workability of this approach, we have implemented and studied computer algebra systems in Java and Scala. In this paper we report on the current state of this work by presenting new examples.
Butera, R J; Wilson, C G; Delnegro, C A; Smith, J C
2001-12-01
We present a novel approach to implementing the dynamic-clamp protocol (Sharp et al., 1993), commonly used in neurophysiology and cardiac electrophysiology experiments. Our approach is based on real-time extensions to the Linux operating system. Conventional PC-based approaches have typically utilized single-cycle computational rates of 10 kHz or slower. In this paper, we demonstrate reliable cycle-to-cycle rates as fast as 50 kHz. Our system, which we call model reference current injection (MRCI, pronounced 'merci'), is also capable of episodic logging of internal state variables and interactive manipulation of model parameters. The limiting factor in achieving high speeds was not processor speed or model complexity, but cycle jitter inherent in the CPU/motherboard performance. We demonstrate these high speeds and flexibility with two examples: 1) adding action-potential ionic currents to a mammalian neuron under whole-cell patch-clamp and 2) altering a cell's intrinsic dynamics via MRCI while simultaneously coupling it via artificial synapses to an internal computational model cell. These higher rates greatly extend the applicability of this technique to the study of fast electrophysiological currents, such as fast Na currents, and fast excitatory/inhibitory synapses.
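As a conceptual illustration of the dynamic-clamp cycle described above (sample the membrane potential, evaluate a model current, inject it), a minimal Python sketch follows; the DAQ read/write callables and the artificial synaptic conductance are hypothetical placeholders, and the real MRCI system runs compiled code under real-time Linux rather than Python.

    # Conceptual sketch of one dynamic-clamp cycle; the real MRCI system runs
    # under real-time Linux with hardware I/O.  read_membrane_voltage() and
    # write_command_current() are hypothetical placeholders for DAQ calls.
    def artificial_synapse_current(v_m, g_syn, e_rev=0.0):
        """Current (nA) that mimics a synaptic conductance g_syn (uS) at Vm (mV)."""
        return g_syn * (e_rev - v_m)

    def dynamic_clamp_step(read_membrane_voltage, write_command_current, g_syn):
        v_m = read_membrane_voltage()        # sample Vm from the amplifier
        i_cmd = artificial_synapse_current(v_m, g_syn)
        write_command_current(i_cmd)         # drive the amplifier current command
        return v_m, i_cmd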
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D; Fasenfest, B; Rieben, R
2006-09-08
We are concerned with the solution of time-dependent electromagnetic eddy current problems using a finite element formulation on three-dimensional unstructured meshes. We allow for multiple conducting regions, and our goal is to develop an efficient computational method that does not require a computational mesh of the air/vacuum regions. This requires a sophisticated global boundary condition specifying the total fields on the conductor boundaries. We propose a Biot-Savart law based volume-to-surface boundary condition to meet this requirement. This Biot-Savart approach is demonstrated to be very accurate. In addition, this approach can be accelerated via a low-rank QR approximation of the discretized Biot-Savart law.
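For reference, the volume-to-surface boundary condition is built on the standard Biot-Savart law (quoted here in its textbook form, not copied from the report):

\[
\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \int_{V} \frac{\mathbf{J}(\mathbf{r}') \times (\mathbf{r} - \mathbf{r}')}{\lvert \mathbf{r} - \mathbf{r}' \rvert^{3}} \, \mathrm{d}V' .
\]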
NASA Technical Reports Server (NTRS)
Feng, C.; Sun, X.; Shen, Y. N.; Lombardi, Fabrizio
1992-01-01
This paper covers the verification and protocol validation for distributed computer and communication systems using a computer aided testing approach. Validation and verification make up the so-called process of conformance testing. Protocol applications which pass conformance testing are then checked to see whether they can operate together. This is referred to as interoperability testing. A new comprehensive approach to protocol testing is presented which addresses: (1) modeling for inter-layer representation for compatibility between conformance and interoperability testing; (2) computational improvement to current testing methods by using the proposed model, inclusive of the formulation of new qualitative and quantitative measures and time-dependent behavior; (3) analysis and evaluation of protocol behavior for interactive testing without extensive simulation.
A neural network based reputation bootstrapping approach for service selection
NASA Astrophysics Data System (ADS)
Wu, Quanwang; Zhu, Qingsheng; Li, Peng
2015-10-01
With the concept of service-oriented computing becoming widely accepted in enterprise application integration, more and more computing resources are encapsulated as services and published online. Reputation mechanisms have been studied to establish trust in previously unknown services. One of the limitations of current reputation mechanisms is that they cannot assess the reputation of newly deployed services, as no record of their previous behaviours exists. Most current bootstrapping approaches merely assign default reputation values to newcomers; however, with such methods, either newcomers or existing services will be favoured. In this paper, we present a novel reputation bootstrapping approach, where correlations between the features and performance of existing services are learned through an artificial neural network (ANN) and then generalised to establish a tentative reputation when evaluating new and unknown services. Reputations of services published previously by the same provider are also incorporated for reputation bootstrapping if available. The proposed reputation bootstrapping approach is seamlessly embedded into an existing reputation model and implemented in the extended service-oriented architecture. Empirical studies of the proposed approach are presented at the end.
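A minimal sketch of the bootstrapping idea, assuming scikit-learn is available: an ANN regressor is fitted on the features and observed reputations of existing services and then queried for a newcomer. The feature set and training data below are synthetic stand-ins, not the paper's actual model or data.

    # Illustrative sketch: fit an ANN on features and reputations of existing
    # services, then predict a tentative reputation for a newcomer.  The
    # features and training data below are synthetic stand-ins.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X_known = rng.random((200, 4))           # features of existing services
    y_known = np.clip(0.3 + 0.5 * X_known[:, 0] - 0.2 * X_known[:, 1]
                      + 0.05 * rng.standard_normal(200), 0.0, 1.0)

    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    ann.fit(X_known, y_known)

    new_service = rng.random((1, 4))         # a newly published service
    print("bootstrapped reputation:", ann.predict(new_service)[0])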
NASA Astrophysics Data System (ADS)
Marque, J. P.; Issac, F.; Parmantier, J. P.; Bertuol, S.
1998-11-01
The ESD-induced electromagnetic field on a S/C is computed using the 3D PIC code GEODE. Typical E-field waveforms are deduced and a new susceptibility test at equipment level is proposed as an alternative to the plane wave approach.
Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner
2017-11-01
Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
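To make the ABC idea concrete, here is a generic rejection-sampling sketch in Python; the toy simulator, prior, and summary statistic are placeholders unrelated to the actual LES closure terms, so this shows only the replace-optimization-with-simulation pattern the abstract refers to.

    # Generic ABC rejection sampling: draw parameters from the prior, simulate,
    # and keep draws whose summary statistic is close to the observed one.
    # The toy simulator and summary are placeholders, not the LES closure.
    import numpy as np

    rng = np.random.default_rng(1)
    observed_summary = 2.0                   # summary statistic of the "data"

    def simulate_summary(theta, n=100):
        return rng.normal(theta, 1.0, size=n).mean()

    def abc_rejection(n_draws=20000, tolerance=0.05):
        accepted = []
        for _ in range(n_draws):
            theta = rng.uniform(0.0, 5.0)    # draw from the prior
            if abs(simulate_summary(theta) - observed_summary) < tolerance:
                accepted.append(theta)
        return np.array(accepted)

    samples = abc_rejection()
    print(samples.mean(), samples.std())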
Sputnik: ad hoc distributed computation.
Völkel, Gunnar; Lausser, Ludwig; Schmid, Florian; Kraus, Johann M; Kestler, Hans A
2015-04-15
In bioinformatic applications, computationally demanding algorithms are often parallelized to speed up computation. Nevertheless, setting up computational environments for distributed computation is often tedious. The aim of this project was a lightweight, ad hoc setup and fault-tolerant computation requiring only a Java runtime and no administrator rights, while utilizing all CPU cores most effectively. The Sputnik framework provides ad hoc distributed computation on the Java Virtual Machine which uses all supplied CPU cores fully. It provides a graphical user interface for deployment setup and a web user interface displaying the status of current computation jobs. Neither a permanent setup nor administrator privileges are required. We demonstrate the utility of our approach on feature selection of microarray data. The Sputnik framework is available on GitHub http://github.com/sysbio-bioinf/sputnik under the Eclipse Public License. Contact: hkestler@fli-leibniz.de or hans.kestler@uni-ulm.de. Supplementary data are available at Bioinformatics online.
Assessment of the magnetic field exposure due to the battery current of digital mobile phones.
Jokela, Kari; Puranen, Lauri; Sihvonen, Ari-Pekka
2004-01-01
Hand-held digital mobile phones generate pulsed magnetic fields associated with the battery current. The peak value and the waveform of the battery current were measured for seven different models of digital mobile phones, and the results were applied to compute approximately the magnetic flux density and induced currents in the phone-user's head. A simple circular loop model was used for the magnetic field source and a homogeneous sphere consisting of average brain tissue equivalent material simulated the head. The broadband magnetic flux density and the maximal induced current density were compared with the guidelines of ICNIRP using two different approaches. In the first approach the relative exposure was determined separately at each frequency and the exposure ratios were summed to obtain the total exposure (multiple-frequency rule). In the second approach the waveform was weighted in the time domain with a simple low-pass RC filter and the peak value was divided by a peak limit, both derived from the guidelines (weighted peak approach). With the maximum transmitting power (2 W) the measured peak current varied from 1 to 2.7 A. The ICNIRP exposure ratio based on the current density varied from 0.04 to 0.14 for the weighted peak approach and from 0.08 to 0.27 for the multiple-frequency rule. The latter values are considerably greater than the corresponding exposure ratios 0.005 (min) to 0.013 (max) obtained by applying the evaluation based on frequency components presented by the new IEEE standard. Hence, the exposure does not seem to exceed the guidelines. The computed peak magnetic flux density exceeded substantially the derived peak reference level of ICNIRP, but it should be noted that in a near-field exposure the external field strengths are not valid indicators of exposure. Currently, no biological data exist to give a reason for concern about the health effects of magnetic field pulses from mobile phones.
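For reference, the multiple-frequency rule used in the first approach amounts to the standard summation of per-frequency exposure ratios (stated here in a generic ICNIRP-style form, not copied from the paper): compliance requires

\[
\sum_{j} \frac{J_j}{J_{L,j}} \le 1 ,
\]

where \(J_j\) is the induced current density component at frequency \(j\) and \(J_{L,j}\) is the corresponding guideline limit.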
Canino-Rodríguez, José M; García-Herrero, Jesús; Besada-Portas, Juan; Ravelo-García, Antonio G; Travieso-González, Carlos; Alonso-Hernández, Jesús B
2015-03-04
The limited efficiency of current air traffic systems will require a next generation of Smart Air Traffic System (SATS) that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However, efforts to develop such tools need to be informed by a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper is focused on airborne HCI within SATS, where cockpit inputs come from aircraft navigation systems, the surrounding traffic situation, controllers' indications, etc. The HCI is thus intended to enhance situation awareness and decision-making through the pilot cockpit. Our approach considers SATS as a system distributed on a large scale with uncertainty in a dynamic environment; therefore, a multi-agent systems based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications.
Approaches for scalable modeling and emulation of cyber systems : LDRD final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.
2009-09-01
The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of approximately 10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with approximately 10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.
A Gaussian Approximation Approach for Value of Information Analysis.
Jalal, Hawre; Alarid-Escudero, Fernando
2018-02-01
Most decisions are associated with uncertainty. Value of information (VOI) analysis quantifies the opportunity loss associated with choosing a suboptimal intervention based on current imperfect information. VOI can inform the value of collecting additional information, resource allocation, research prioritization, and future research designs. However, in practice, VOI remains underused due to many conceptual and computational challenges associated with its application. Expected value of sample information (EVSI) is rooted in Bayesian statistical decision theory and measures the value of information from a finite sample. The past few years have witnessed a dramatic growth in computationally efficient methods to calculate EVSI, including metamodeling. However, little research has been done to simplify the experimental data collection step inherent to all EVSI computations, especially for correlated model parameters. This article proposes a general Gaussian approximation (GA) of the traditional Bayesian updating approach based on the original work by Raiffa and Schlaifer to compute EVSI. The proposed approach uses a single probabilistic sensitivity analysis (PSA) data set and involves 2 steps: 1) a linear metamodel step to compute the EVSI on the preposterior distributions and 2) a GA step to compute the preposterior distribution of the parameters of interest. The proposed approach is efficient and can be applied for a wide range of data collection designs involving multiple non-Gaussian parameters and unbalanced study designs. Our approach is particularly useful when the parameters of an economic evaluation are correlated or interact.
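As a point of reference for the Gaussian step, the conjugate-normal case gives closed forms for the posterior and preposterior variances (a textbook result, not specific to this article): with prior variance \(\sigma_0^2\) and a planned sample of \(n\) observations with sampling variance \(\sigma^2\),

\[
\operatorname{Var}(\theta \mid X) = \left( \frac{1}{\sigma_0^{2}} + \frac{n}{\sigma^{2}} \right)^{-1},
\qquad
\operatorname{Var}\!\big( \mathrm{E}[\theta \mid X] \big) = \sigma_0^{2} - \operatorname{Var}(\theta \mid X),
\]

which is the sense in which a planned study shrinks uncertainty about the parameter of interest before any data are actually collected.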
The Layer-Oriented Approach to Declarative Languages for Biological Modeling
Raikov, Ivan; De Schutter, Erik
2012-01-01
We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language. PMID:22615554
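The mapping from a declarative description to simulation code can be illustrated with a toy Python sketch; the dictionary fields and the Hodgkin-Huxley-style form I = g_max * (gate variables) * (V - E_rev) are generic assumptions for the example and do not reproduce the paper's actual language.

    # Toy illustration of mapping a declarative ionic-current description to
    # simulation code.  The dictionary fields and the Hodgkin-Huxley-style form
    # are generic, not the paper's actual language.
    ionic_current_spec = {
        "name": "generic_K_current",
        "g_max": 36.0,          # maximal conductance (mS/cm^2)
        "e_rev": -77.0,         # reversal potential (mV)
        "gates": [("n", 4)],    # (state variable, exponent) pairs
    }

    def make_current_function(spec):
        def current(v, state):
            gate_product = 1.0
            for var, power in spec["gates"]:
                gate_product *= state[var] ** power
            return spec["g_max"] * gate_product * (v - spec["e_rev"])
        return current

    i_k = make_current_function(ionic_current_spec)
    print(i_k(-65.0, {"n": 0.32}))           # current at rest-like conditions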
Computer-Assisted Diagnosis of the Sleep Apnea-Hypopnea Syndrome: A Review
Alvarez-Estevez, Diego; Moret-Bonillo, Vicente
2015-01-01
Automatic diagnosis of the Sleep Apnea-Hypopnea Syndrome (SAHS) has become an important area of research due to the growing interest in the field of sleep medicine and the costs associated with its manual diagnosis. The increment and heterogeneity of the different techniques, however, make it somewhat difficult to adequately follow the recent developments. A literature review within the area of computer-assisted diagnosis of SAHS has been performed comprising the last 15 years of research in the field. Screening approaches, methods for the detection and classification of respiratory events, comprehensive diagnostic systems, and an outline of current commercial approaches are reviewed. An overview of the different methods is presented together with validation analysis and critical discussion of the current state of the art. PMID:26266052
Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.
2015-01-01
The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed. ©RSNA, 2015 PMID:26172351
Broadening the interface bandwidth in simulation based training
NASA Technical Reports Server (NTRS)
Somers, Larry E.
1989-01-01
Currently most computer based simulations rely exclusively on computer generated graphics to create the simulation. When training is involved, the method almost exclusively used to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer based graphics and text. Researchers are currently involved in the development of several graphics based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.
Multigrid techniques for unstructured meshes
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1995-01-01
An overview of current multigrid techniques for unstructured meshes is given. The basic principles of the multigrid approach are first outlined. Application of these principles to unstructured mesh problems is then described, illustrating various different approaches, and giving examples of practical applications. Advanced multigrid topics, such as the use of algebraic multigrid methods, and the combination of multigrid techniques with adaptive meshing strategies are dealt with in subsequent sections. These represent current areas of research, and the unresolved issues are discussed. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics, wishing to learn more about current unstructured mesh techniques.
Discrete square root filtering - A survey of current techniques.
NASA Technical Reports Server (NTRS)
Kaminski, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.
1971-01-01
Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested, and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed the conventional by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.
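The precision argument rests on propagating a square-root factor of the covariance rather than the covariance itself; in the standard formulation,

\[
P = S S^{\mathsf{T}}, \qquad \kappa_2(S) = \sqrt{\kappa_2(P)},
\]

so the factor \(S\) is far better conditioned than \(P\), which is the sense in which the square root approach can roughly double the effective numerical precision in ill-conditioned problems.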
Pattern perception and computational complexity: introduction to the special issue
Fitch, W. Tecumseh; Friederici, Angela D.; Hagoort, Peter
2012-01-01
Research on pattern perception and rule learning, grounded in formal language theory (FLT) and using artificial grammar learning paradigms, has exploded in the last decade. This approach marries empirical research conducted by neuroscientists, psychologists and ethologists with the theory of computation and FLT, developed by mathematicians, linguists and computer scientists over the last century. Of particular current interest are comparative extensions of this work to non-human animals, and neuroscientific investigations using brain imaging techniques. We provide a short introduction to the history of these fields, and to some of the dominant hypotheses, to help contextualize these ongoing research programmes, and finally briefly introduce the papers in the current issue. PMID:22688630
Helicopter noise prediction - The current status and future direction
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Farassat, F.
1992-01-01
The paper takes stock of the progress, assesses the current prediction capabilities, and forecasts the direction of future helicopter noise prediction research. The acoustic analogy approach, specifically, theories based on the Ffowcs Williams-Hawkings equations, are the most widely used for deterministic noise sources. Thickness and loading noise can be routinely predicted given good plane motion and blade loading inputs. Blade-vortex interaction noise can also be predicted well with measured input data, but prediction of airloads with the high spatial and temporal resolution required for BVI is still difficult. Current semiempirical broadband noise predictions are useful and reasonably accurate. New prediction methods based on a Kirchhoff formula and direct computation appear to be very promising, but are currently very demanding computationally.
Managing geometric information with a data base management system
NASA Technical Reports Server (NTRS)
Dube, R. P.
1984-01-01
The strategies for managing computer based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. The research on integrated programs for aerospace-vehicle design (IPAD) focuses on the use of data base management system (DBMS) technology to manage engineering/manufacturing data. The objective of IPAD is to develop a computer based engineering complex which automates the storage, management, protection, and retrieval of engineering data. In particular, this facility must manage geometry information as well as associated data. The approach taken on the IPAD project to achieve this objective is discussed. Geometry management in current systems and the approach taken in the early IPAD prototypes are examined.
NASA Astrophysics Data System (ADS)
Shatravin, V.; Shashev, D. V.
2018-05-01
Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. The results of research worldwide demonstrate the efficiency of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available in the robotic device. The paper describes the results of applying an original approach for image processing on reconfigurable computing environments, using the example of morphological operations over grayscale images. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
NASA Astrophysics Data System (ADS)
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.
2017-02-01
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show that up to 8.5x improvement at the selected kernel level was achieved with the first approach, and up to 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.
NASA Astrophysics Data System (ADS)
Bocharov, A. N.; Bityurin, V. A.; Golovin, N. N.; Evstigneev, N. M.; Petrovskiy, V. P.; Ryabkov, O. I.; Teplyakov, I. O.; Shustov, A. A.; Solomonov, Yu S.; Fortov, V. E.
2016-11-01
In this paper, an approach to solving conjugate heat- and mass-transfer problems is considered for application to hypersonic vehicle surfaces of arbitrary shape. The approach under development should satisfy the following demands. (i) The surface of the body of interest may have an arbitrary geometrical shape. (ii) The shape of the body can change during the calculation. (iii) The flight characteristics may vary in a wide range, specifically flight altitude, free-stream Mach number, angle-of-attack, etc. (iv) The approach should be realized using high-performance-computing (HPC) technologies. The approach is based on the coupled solution of the 3D unsteady hypersonic flow equations and the 3D unsteady heat conductance problem for the thick wall. An iterative process is applied to account for ablation of the wall material and, consequently, mass injection from the surface and changes in the surface shape. During the iterations, unstructured computational grids both in the flow region and within the wall interior are adapted to the current geometry and flow conditions. The flow computations are done on an HPC platform and are the most time-consuming part of the whole problem, while the heat conductance problem can be solved on many kinds of computers.
Advances in computer-aided well-test interpretation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horne, R.N.
1994-07-01
Despite the feeling expressed several times over the past 40 years that well-test analysis had reached its peak development, an examination of recent advances shows continuous expansion in capability, with future improvement likely. The expansion in interpretation capability over the past decade arose mainly from the development of computer-aided techniques, which, although introduced 20 years ago, have come into use only recently. The broad application of computer-aided interpretation originated with the improvement of the methodologies and continued with the expansion in computer access and capability that accompanied the explosive development of the microcomputer industry. This paper focuses on the different pieces of the methodology that combine to constitute a computer-aided interpretation and attempts to compare some of the approaches currently used. Future directions of the approach are also discussed. The separate areas discussed are deconvolution, pressure derivatives, model recognition, nonlinear regression, and confidence intervals.
Computational analysis of conserved RNA secondary structure in transcriptomes and genomes.
Eddy, Sean R
2014-01-01
Transcriptomics experiments and computational predictions both enable systematic discovery of new functional RNAs. However, many putative noncoding transcripts arise instead from artifacts and biological noise, and current computational prediction methods have high false positive rates. I discuss prospects for improving computational methods for analyzing and identifying functional RNAs, with a focus on detecting signatures of conserved RNA secondary structure. An interesting new front is the application of chemical and enzymatic experiments that probe RNA structure on a transcriptome-wide scale. I review several proposed approaches for incorporating structure probing data into the computational prediction of RNA secondary structure. Using probabilistic inference formalisms, I show how all these approaches can be unified in a well-principled framework, which in turn allows RNA probing data to be easily integrated into a wide range of analyses that depend on RNA secondary structure inference. Such analyses include homology search and genome-wide detection of new structural RNAs.
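One generic way to phrase the unification the author describes is as a Bayes-rule statement (given here as an illustration, not as the paper's specific formalism): the probability of a secondary structure \(\pi\) for sequence \(x\) given probing data \(D\) is

\[
P(\pi \mid x, D) \propto P(D \mid \pi)\, P(\pi \mid x),
\]

so probing data enter as a likelihood term layered on top of an existing probabilistic model of structure given sequence.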
ERIC Educational Resources Information Center
Showanasai, Parinya; Lu, Jiafang; Hallinger, Philip
2013-01-01
Purpose: The extant literature on school leadership development is dominated by conceptual analysis, descriptive studies of current practice, critiques of current practice, and prescriptions for better ways to approach practice. Relatively few studies have examined impact of leadership development using experimental methods, among which even fewer…
Interferometer for Space Station Windows
NASA Technical Reports Server (NTRS)
Hall, Gregory
2003-01-01
Inspection of space station windows for micrometeorite damage would be a difficult task to perform in situ using current inspection techniques. Commercially available optical profilometers and inspection systems are relatively large, about the size of a desktop computer tower, and require a stable platform to inspect the test object. Also, many devices currently available are designed for laboratory or controlled environments requiring external computer control. This paper presents an approach using a highly developed optical interferometer to inspect the windows from inside the space station itself using a self-contained hand held device. The interferometer would be capable, at a minimum, of detecting damage as small as one ten-thousandth of an inch in diameter and depth while interrogating a relatively large area. The current development of this device is still at the proof-of-concept stage. The background section of this paper discusses the current state of the art of profilometers as well as the desired configuration of the self-contained, hand held device. A discussion of the developments and findings that will allow the configuration change, with suggested approaches, appears in the proof-of-concept section.
NASA Astrophysics Data System (ADS)
Sizov, Gennadi Y.
In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
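As a minimal sketch of the optimization layer only (the electromagnetic finite-element evaluation is replaced here by a hypothetical analytic surrogate, and the design variables, bounds, and weights are illustrative assumptions), differential evolution can be driven as follows in Python with SciPy:

    # Sketch of the optimization layer only: the objective is a hypothetical
    # analytic surrogate standing in for the finite-element evaluation of
    # losses and torque ripple; variables and bounds are illustrative.
    from scipy.optimize import differential_evolution

    def design_cost(x):
        magnet_thickness, slot_width, air_gap = x
        loss_proxy = (magnet_thickness - 4.0) ** 2 + 0.5 * (slot_width - 8.0) ** 2
        ripple_proxy = 2.0 * (air_gap - 0.8) ** 2
        return loss_proxy + ripple_proxy     # weighted single-objective form

    bounds = [(2.0, 8.0),     # magnet thickness (mm)
              (4.0, 12.0),    # slot width (mm)
              (0.5, 1.5)]     # air gap (mm)

    result = differential_evolution(design_cost, bounds, seed=0, maxiter=200)
    print(result.x, result.fun)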
Agent-Based Simulations for Project Management
NASA Technical Reports Server (NTRS)
White, J. Chris; Sholtes, Robert M.
2011-01-01
Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.
Fan Noise Prediction with Applications to Aircraft System Noise Assessment
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Envia, Edmane; Burley, Casey L.
2009-01-01
This paper describes an assessment of current fan noise prediction tools by comparing measured and predicted sideline acoustic levels from a benchmark fan noise wind tunnel test. Specifically, an empirical method and newly developed coupled computational approach are utilized to predict aft fan noise for a benchmark test configuration. Comparisons with sideline noise measurements are performed to assess the relative merits of the two approaches. The study identifies issues entailed in coupling the source and propagation codes, as well as provides insight into the capabilities of the tools in predicting the fan noise source and subsequent propagation and radiation. In contrast to the empirical method, the new coupled computational approach provides the ability to investigate acoustic near-field effects. The potential benefits/costs of these new methods are also compared with the existing capabilities in a current aircraft noise system prediction tool. The knowledge gained in this work provides a basis for improved fan source specification in overall aircraft system noise studies.
Computer-Aided Air-Traffic Control In The Terminal Area
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
1995-01-01
Developmental computer-aided system for automated management and control of arrival traffic at large airports includes three integrated subsystems: the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. A database that includes current wind measurements and mathematical models of the performance of different aircraft types contributes to the effective operation of the system.
PNNLs Data Intensive Computing research battles Homeland Security threats
David Thurman; Joe Kielman; Katherine Wolf; David Atkinson
2018-05-11
The Pacific Northwest National Laboratory's (PNNL's) approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architecture, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.
PNNL pushing scientific discovery through data intensive computing breakthroughs
Deborah Gracio; David Koppenaal; Ruby Leung
2018-05-18
The Pacific Northwest National Laboratory's approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes can provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: (1) time-accurate local time stepping and (2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
From computers to cultivation: reconceptualizing evolutionary psychology.
Barrett, Louise; Pollet, Thomas V; Stulp, Gert
2014-01-01
Does evolutionary theorizing have a role in psychology? This is a more contentious issue than one might imagine, given that, as evolved creatures, the answer must surely be yes. The contested nature of evolutionary psychology lies not in our status as evolved beings, but in the extent to which evolutionary ideas add value to studies of human behavior, and the rigor with which these ideas are tested. This, in turn, is linked to the framework in which particular evolutionary ideas are situated. While the framing of the current research topic places the brain-as-computer metaphor in opposition to evolutionary psychology, the most prominent school of thought in this field (born out of cognitive psychology, and often known as the Santa Barbara school) is entirely wedded to the computational theory of mind as an explanatory framework. Its unique aspect is to argue that the mind consists of a large number of functionally specialized (i.e., domain-specific) computational mechanisms, or modules (the massive modularity hypothesis). Far from offering an alternative to, or an improvement on, the current perspective, we argue that evolutionary psychology is a mainstream computational theory, and that its arguments for domain-specificity often rest on shaky premises. We then go on to suggest that the various forms of e-cognition (i.e., embodied, embedded, enactive) represent a true alternative to standard computational approaches, with an emphasis on "cognitive integration" or the "extended mind hypothesis" in particular. We feel this offers the most promise for human psychology because it incorporates the social and historical processes that are crucial to human "mind-making" within an evolutionarily informed framework. In addition to linking to other research areas in psychology, this approach is more likely to form productive links to other disciplines within the social sciences, not least by encouraging a healthy pluralism in approach.
Delayed flap approach procedures for noise abatement and fuel conservation
NASA Technical Reports Server (NTRS)
Edwards, F. G.; Bull, J. S.; Foster, J. D.; Hegarty, D. M.; Drinkwater, F. J., III
1976-01-01
The NASA/Ames Research Center is currently investigating the delayed flap approach during which pilot actions are determined and prescribed by an onboard digital computer. The onboard digital computer determines the proper timing for the deployment of the landing gear and flaps based on the existing winds and airplane gross weight. Advisory commands are displayed to the pilot. The approach is flown along the conventional ILS glide slope but is initiated at a higher airspeed and in a clean aircraft configuration that allows for low thrust and results in reduced noise and fuel consumption. Topics discussed include operational procedures, pilot acceptability of these procedures, and fuel/noise benefits resulting from flight tests and simulation.
Computational Approaches to Nucleic Acid Origami.
Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo
2015-10-12
Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. Despite these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.
Wells, I G; Cartwright, R Y; Farnan, L P
1993-12-15
The computing strategy in our laboratories evolved from research in Artificial Intelligence, and is based on powerful software tools running on high performance desktop computers with a graphical user interface. This allows most tasks to be regarded as design problems rather than implementation projects, and both rapid prototyping and an object-oriented approach to be employed during the in-house development and enhancement of the laboratory information systems. The practical application of this strategy is discussed, with particular reference to the system designer, the laboratory user and the laboratory customer. Routine operation covers five departments, and the systems are stable, flexible and well accepted by the users. Client-server computing, currently undergoing final trials, is seen as the key to further development, and this approach to Pathology computing has considerable potential for the future.
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1991-01-01
Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Revil, A.
2018-04-01
Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the observed secondary voltage data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. Our approach is successfully able to reveal the presence of the anomaly and to invert its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.
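For reference, a common form of the Cole-Cole model used in induced polarization studies (standard notation, not quoted from the paper's own equations) writes the complex resistivity as

\rho^{*}(\omega) = \rho_{0}\left[1 - m\left(1 - \frac{1}{1 + (i\omega\tau)^{c}}\right)\right],

where \rho_{0} is the DC resistivity, m the intrinsic chargeability, \tau the relaxation time, and c the Cole-Cole exponent; the time-domain chargeability decay fitted in each cell follows from the inverse transform of this expression.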
Massively parallel multicanonical simulations
NASA Astrophysics Data System (ADS)
Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard
2018-03-01
Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
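As a reminder of the underlying iteration (standard multicanonical notation, not taken verbatim from the paper), the sampling weights are refined from the pooled energy histogram of all walkers,

W^{(n+1)}(E) \propto \frac{W^{(n)}(E)}{H^{(n)}(E)},

where H^{(n)}(E) is the histogram accumulated during iteration n, summed over the independent walkers in the parallel scheme, and the iteration continues until the combined histogram is approximately flat.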
Efficient method for computing the electronic transport properties of a multiterminal system
NASA Astrophysics Data System (ADS)
Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio
2018-04-01
We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.
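In the Landauer-Büttiker framework referenced above, the quantity obtained from the recursive Green's functions is the transmission between probes; in standard notation (not specific to this paper),

T_{pq}(E) = \mathrm{Tr}\left[\Gamma_{p}(E)\, G^{r}(E)\, \Gamma_{q}(E)\, G^{a}(E)\right], \qquad I_{p} = \frac{2e}{h}\sum_{q}\int dE\; T_{pq}(E)\left[f_{p}(E) - f_{q}(E)\right],

where G^{r,a} are the retarded and advanced Green's functions of the device region, \Gamma_{p} is the broadening matrix of lead p, and f_{p} its Fermi function; longitudinal and Hall resistances follow from solving the resulting multiprobe current-voltage relations.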
ERIC Educational Resources Information Center
Burk, Robin K.
2010-01-01
Computational natural language understanding and generation have been a goal of artificial intelligence since McCarthy, Minsky, Rochester and Shannon first proposed to spend the summer of 1956 studying this and related problems. Although statistical approaches dominate current natural language applications, two current research trends bring…
Efficiency and Accuracy in Thermal Simulation of Powder Bed Fusion of Bulk Metallic Glass
NASA Astrophysics Data System (ADS)
Lindwall, J.; Malmelöv, A.; Lundbäck, A.; Lindgren, L.-E.
2018-05-01
Additive manufacturing by powder bed fusion processes can be utilized to create bulk metallic glass as the process yields very high cooling rates. However, there is a risk that reheated material in previously deposited layers may become devitrified, i.e., crystallize. Therefore, it is advantageous to simulate the process to fully comprehend it and design it to avoid the aforementioned risk. However, a detailed simulation is computationally demanding. It is necessary to increase the computational speed while maintaining accuracy of the computed temperature field in critical regions. The current study evaluates a few approaches based on temporal reduction to achieve this. It is found that the evaluated approaches yield substantial time savings and accurately predict the temperature history.
NASA Astrophysics Data System (ADS)
Agudelo-Toro, Andres; Neef, Andreas
2013-04-01
Objective. We present a computational method that implements a reduced set of Maxwell's equations to allow simulation of cells under realistic conditions: sub-micron cell morphology, a conductive non-homogeneous space and various ion channel properties and distributions. Approach. While a reduced set of Maxwell's equations can be used to couple membrane currents to extra- and intracellular potentials, this approach is rarely taken, most likely because adequate computational tools are missing. By using these equations, and introducing an implicit solver, numerical stability is attained even with large time steps. The time steps are limited only by the time development of the membrane potentials. Main results. This method allows simulation times of tens of minutes instead of weeks, even for complex problems. The extracellular fields are accurately represented, including secondary fields, which originate at inhomogeneities of the extracellular space and can reach several millivolts. We present a set of instructive examples that show how this method can be used to obtain reference solutions for problems, which might not be accurately captured by the traditional approaches. This includes the simulation of realistic magnitudes of extracellular action potential signals in restricted extracellular space. Significance. The electric activity of neurons creates extracellular potentials. Recent findings show that these endogenous fields act back onto the neurons, contributing to the synchronization of population activity. The influence of endogenous fields is also relevant for understanding therapeutic approaches such as transcranial direct current, transcranial magnetic and deep brain stimulation. The mutual interaction between fields and membrane currents is not captured by today's concepts of cellular electrophysiology, including the commonly used activation function, as those concepts are based on isolated membranes in an infinite, isopotential extracellular space. The presented tool makes simulations with detailed morphology and implicit interactions of currents and fields available to the electrophysiology community.
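A common quasi-static volume-conductor formulation consistent with this description (generic notation; the paper's exact equations and boundary conventions may differ) is

\nabla \cdot (\sigma_{i}\nabla\phi_{i}) = 0 \text{ in } \Omega_{i}, \qquad \nabla \cdot (\sigma_{e}\nabla\phi_{e}) = 0 \text{ in } \Omega_{e},

-\sigma_{i}\frac{\partial \phi_{i}}{\partial n} = -\sigma_{e}\frac{\partial \phi_{e}}{\partial n} = c_{m}\frac{\partial V_{m}}{\partial t} + I_{\mathrm{ion}}(V_{m}), \qquad V_{m} = \phi_{i} - \phi_{e} \text{ on the membrane},

with n the outward normal of the intracellular domain (signs depend on this convention); because only V_{m} evolves in time, an implicit treatment of this coupling lets the time step follow the membrane dynamics rather than the stiff extracellular problem.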
Evaluations of Technology-Assisted Small-Group Tutoring for Struggling Readers
ERIC Educational Resources Information Center
Madden, Nancy A.; Slavin, Robert E.
2017-01-01
This article reports on 2 experiments in inner-city Baltimore evaluating a computer-assisted tutoring approach, Tutoring With Alphie (TWA), in which 1 paraprofessional can work with up to 6 children at a time. In Study 1, we randomly assigned 14 schools to receive TWA or to continue with whatever approaches they were currently using. Each…
ERIC Educational Resources Information Center
Stroup, Walter M.; Hills, Thomas; Carmona, Guadalupe
2011-01-01
This paper summarizes an approach to helping future educators to engage with key issues related to the application of measurement-related statistics to learning and teaching, especially in the contexts of science, mathematics, technology and engineering (STEM) education. The approach we outline has two major elements. First, students are asked to…
Advanced computer techniques for inverse modeling of electric current in cardiac tissue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.A.; Romero, L.A.; Diegert, C.F.
1996-08-01
For many years, ECGs and vector cardiograms have been the tools of choice for non-invasive diagnosis of cardiac conduction problems, such as found in reentrant tachycardia or Wolff-Parkinson-White (WPW) syndrome. Through skillful analysis of these skin-surface measurements of cardiac-generated electric currents, a physician can deduce the general location of heart conduction irregularities. Using a combination of high-fidelity geometry modeling, advanced mathematical algorithms and massively parallel computing, Sandia's approach would provide much more accurate information and thus allow the physician to pinpoint the source of an arrhythmia or abnormal conduction pathway.
Coupled circuit numerical analysis of eddy currents in an open MRI system.
Akram, Md Shahadat Hossain; Terada, Yasuhiko; Keiichiro, Ishi; Kose, Katsumi
2014-08-01
We performed a new coupled circuit numerical simulation of eddy currents in an open compact magnetic resonance imaging (MRI) system. Following the coupled circuit approach, the conducting structures were divided into subdomains along the length (or width) and the thickness, and by implementing coupled circuit concepts we have simulated transient responses of eddy currents for subdomains in different locations. We implemented the Eigen matrix technique to solve the network of coupled differential equations to speed up our simulation program. On the other hand, to compute the coupling relations between the biplanar gradient coil and any other conducting structure, we implemented the solid angle form of Ampere's law. We have also calculated the solid angle for three dimensions to compute inductive couplings in any subdomain of the conducting structures. Details of the temporal and spatial distribution of the eddy currents were then implemented in the secondary magnetic field calculation by the Biot-Savart law. On a desktop computer (Programming platform: Wolfram Mathematica 8.0®, Processor: Intel(R) Core(TM)2 Duo E7500 @ 2.93 GHz; OS: Windows 7 Professional; Memory (RAM): 4.00 GB), it took less than 3 min to simulate the entire calculation of eddy currents and fields, and approximately 6 min for the X-gradient coil. The results are given in the time-space domain for both the direct and the cross-terms of the eddy current magnetic fields generated by the Z-gradient coil. We have also conducted free induction decay (FID) experiments of eddy fields using a nuclear magnetic resonance (NMR) probe to verify our simulation results. The simulation results were found to be in good agreement with the experimental results. In this study we have also conducted simulations of the transient and spatial responses of the secondary magnetic field induced by the X-gradient coil. Our approach is fast and has much less computational complexity than the conventional electromagnetic numerical simulation methods.
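The secondary field computation mentioned above uses the Biot-Savart law; in its standard continuous form (general notation, not the paper's discretized expression),

\mathbf{B}(\mathbf{r}) = \frac{\mu_{0}}{4\pi}\int \frac{\mathbf{J}(\mathbf{r}') \times (\mathbf{r} - \mathbf{r}')}{\lvert \mathbf{r} - \mathbf{r}' \rvert^{3}}\, d^{3}r',

where \mathbf{J} is the eddy current density obtained from the coupled circuit solution and the integral is evaluated over the discretized conducting subdomains.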
A Computational Approach to Estimate Interorgan Metabolic Transport in a Mammal
Cui, Xiao; Geffers, Lars; Eichele, Gregor; Yan, Jun
2014-01-01
In multicellular organisms metabolism is distributed across different organs, each of which has specific requirements to perform its own specialized task. But different organs also have to support the metabolic homeostasis of the organism as a whole by interorgan metabolite transport. Recent studies have successfully reconstructed global metabolic networks in tissues and cell types and attempts have been made to connect organs with interorgan metabolite transport. Instead of these complicated approaches to reconstruct global metabolic networks, we proposed in this study a novel approach to study interorgan metabolite transport focusing on transport processes mediated by solute carrier (Slc) transporters and their couplings to cognate enzymatic reactions. We developed a computational approach to identify and score potential interorgan metabolite transports based on the integration of metabolism and transports in different organs in the adult mouse from quantitative gene expression data. This allowed us to computationally estimate the connectivity between 17 mouse organs via metabolite transport. Finally, by applying our method to circadian metabolism, we showed that our approach can shed new light on the current understanding of interorgan metabolite transport at a whole-body level in mammals. PMID:24971892
Computation of dark frames in digital imagers
NASA Astrophysics Data System (ADS)
Widenhorn, Ralf; Rest, Armin; Blouke, Morley M.; Berry, Richard L.; Bodegom, Erik
2007-02-01
Dark current is caused by electrons that are thermally excited into the conduction band. These electrons are collected by the well of the CCD and add a false signal to the chip. We will present an algorithm that automatically corrects for dark current. It uses a calibration protocol to characterize the image sensor for different temperatures. For a given exposure time, the dark current of every pixel is characteristic of a specific temperature. The dark current of every pixel can therefore be used as an indicator of the temperature. Hot pixels have the highest signal-to-noise ratio and are the best temperature sensors. We use the dark current of several hundred hot pixels to sense the chip temperature and predict the dark current of all pixels on the chip. Dark current computation is not a new concept, but our approach is unique. Some advantages of our method include applicability for poorly temperature-controlled camera systems and the possibility of ex post facto dark current correction.
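A minimal sketch of this idea, assuming a simple global scaling of a reference dark frame rather than the full per-pixel calibration protocol described in the paper (the quantile threshold and scaling rule are illustrative assumptions, and the hot pixels of the raw frame are assumed to be dominated by dark signal):

import numpy as np

def predict_dark_frame(raw_frame, dark_ref, hot_quantile=0.999):
    # dark_ref: calibration dark frame at a reference temperature, same exposure time as raw_frame
    hot = dark_ref > np.quantile(dark_ref, hot_quantile)   # hot pixels act as temperature sensors
    # estimate a scale factor for the current chip temperature from the hot pixels of the raw frame
    scale = np.median(raw_frame[hot] / dark_ref[hot])
    return scale * dark_ref                                 # predicted dark frame for every pixel

# usage: corrected = raw_image - predict_dark_frame(raw_image, dark_ref)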
Current CFD Practices in Launch Vehicle Applications
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin
2012-01-01
The quest for sustained space exploration will require the development of advanced launch vehicles, and efficient and reliable operating systems. Development of launch vehicles via a test-fail-fix approach is very expensive and time-consuming. For decision making, modeling and simulation (M&S) has played increasingly important roles in many aspects of launch vehicle development. It is therefore essential to develop and maintain the most advanced M&S capability. More specifically, computational fluid dynamics (CFD) has been providing critical data for developing launch vehicles, complementing expensive testing. During the past three decades, CFD capability has increased remarkably along with advances in computer hardware and computing technology. However, most of the fundamental CFD capability in launch vehicle applications is derived from the past advances. Specific gaps in the solution procedures are being filled primarily through "piggy-backed" efforts on various projects while solving today's problems. Therefore, some of the advanced capabilities are not readily available for various new tasks, and mission-support problems are often analyzed using ad hoc approaches. The current report is intended to present our view on the state-of-the-art (SOA) in CFD and its shortcomings in support of space transport vehicle development. Best practices in solving current issues will be discussed using examples from ascending launch vehicles. Some of the pacing will be discussed in conjunction with these examples.
Quantum market games: implementing tactics via measurements
NASA Astrophysics Data System (ADS)
Pakula, I.; Piotrowski, E. W.; Sladkowski, J.
2006-02-01
A major development in applying quantum mechanical formalism to various fields has been made during the last few years. Quantum counterparts of Game Theory and Economics, as well as diverse approaches to Quantum Information Theory, have been found and are currently being explored. Using connections between Quantum Game Theory and Quantum Computation, an application of the universality of measurement-based computation in Quantum Market Theory is presented.
ERIC Educational Resources Information Center
Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine
2009-01-01
We built a natural language processing (NLP) system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary…
Efficient integration method for fictitious domain approaches
NASA Astrophysics Data System (ADS)
Duczek, Sascha; Gabbert, Ulrich
2015-10-01
In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation, this method accounts for most of the computational effort. To reduce the computational costs of computing the system matrices, an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem, the dimension of the integral is reduced by one, i.e., instead of solving the integral over the whole domain, only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results for several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
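The underlying identity is the divergence (Gauß-Ostrogradsky) theorem; writing the system-matrix integrand as the divergence of an auxiliary field (generic notation, not the paper's specific construction),

\int_{\Omega} \nabla \cdot \mathbf{F}\, d\Omega = \oint_{\partial\Omega} \mathbf{F} \cdot \mathbf{n}\, d\Gamma,

so a d-dimensional cell integral is replaced by an integral over the (d-1)-dimensional cell boundary, which is what reduces the quadrature effort for cells cut by the physical domain.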
Step-by-step magic state encoding for efficient fault-tolerant quantum computation
Goto, Hayato
2014-01-01
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387
Compiling probabilistic, bio-inspired circuits on a field programmable analog array
Marr, Bo; Hasler, Jennifer
2014-01-01
A field programmable analog array (FPAA) is presented as an energy and computational efficiency engine: a mixed mode processor for which functions can be compiled at significantly lower energy cost using probabilistic computing circuits. More specifically, it will be shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than a digital implementation. A stochastic system that is dynamically controllable via voltage controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. From Bernoulli variables, it is shown that exponentially distributed random variables, and random variables of an arbitrary distribution, can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach is extended to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
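For context, a minimal software version of the Gillespie stochastic simulation algorithm referenced above is sketched below (the two-reaction system is an invented toy example, and the FPAA hardware would replace the software generation of the random variates):

import numpy as np

def gillespie(x, rates, stoich, t_end):
    # x: species counts, rates(x): reaction propensities, stoich: state change per reaction
    t, traj = 0.0, [(0.0, x.copy())]
    while t < t_end:
        a = rates(x)
        a0 = a.sum()
        if a0 == 0:
            break
        t += np.random.exponential(1.0 / a0)        # exponentially distributed waiting time
        r = np.random.choice(len(a), p=a / a0)      # which reaction fires
        x = x + stoich[r]
        traj.append((t, x.copy()))
    return traj

# toy example: A -> B with rate 0.1*A, B -> A with rate 0.05*B
traj = gillespie(np.array([100, 0]),
                 lambda x: np.array([0.1 * x[0], 0.05 * x[1]]),
                 np.array([[-1, 1], [1, -1]]), t_end=50.0)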
NASA Astrophysics Data System (ADS)
Jezzine, Karim; Imperiale, Alexandre; Demaldent, Edouard; Le Bourdais, Florian; Calmon, Pierre; Dominguez, Nicolas
2018-04-01
Models for the simulation of ultrasonic inspections of flat and curved plate-like composite structures, as well as stiffeners, are available in the CIVA-COMPOSITE module released in 2016. A first modelling approach using a ray-based model is able to predict the ultrasonic propagation in an anisotropic effective medium obtained after having homogenized the composite laminate. Fast 3D computations can be performed on configurations featuring delaminations, flat bottom holes or inclusions, for example. In addition, computations on ply waviness using this model will be available in CIVA 2017. Another approach is proposed in the CIVA-COMPOSITE module. It is based on the coupling of the CIVA ray-based model and a finite difference scheme in time domain (FDTD) developed by AIRBUS. The ray model handles the ultrasonic propagation between the transducer and the FDTD computation zone that surrounds the composite part. In this way, the computational efficiency is preserved and the ultrasound scattering by the composite structure can be predicted. Alternatively, a high-order finite element approach is currently being developed at CEA but not yet integrated in CIVA. The advantages of this approach will be discussed and first simulation results on Carbon Fiber Reinforced Polymers (CFRP) will be shown. Finally, the application of these modelling tools to the construction of metamodels is discussed.
Software Systems for High-performance Quantum Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; Britt, Keith A
Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for the quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...
2017-03-20
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Here, preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.
NASA Technical Reports Server (NTRS)
1979-01-01
The objective of the current program was to modify a discrete vortex wake method to efficiently compute the aerodynamic forces and moments on high-fineness-ratio bodies (f approximately 10.0). The approach is to increase computational efficiency by structuring the program to take advantage of new computer vector software and by developing new algorithms when vector software cannot be used efficiently. An efficient program was written and substantial savings were achieved. Several test cases were run for fineness ratios up to f = 16.0 and angles of attack up to 50 degrees.
Computational Modeling in Liver Surgery
Christ, Bruno; Dahmen, Uta; Herrmann, Karl-Heinz; König, Matthias; Reichenbach, Jürgen R.; Ricken, Tim; Schleicher, Jana; Ole Schwen, Lars; Vlaic, Sebastian; Waschinsky, Navina
2017-01-01
The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery. PMID:29249974
Low cost paths to binary optics
NASA Technical Reports Server (NTRS)
Nelson, Arthur; Domash, Lawrence
1993-01-01
Application of binary optics has been limited to a few major laboratories because of the limited availability of fabrication facilities such as e-beam machines and the lack of standardized design software. Foster-Miller has attempted to identify low cost approaches to medium-resolution binary optics using readily available computer and fabrication tools, primarily for the use of students and experimenters in optical computing. An early version of our system, MacBEEP, made use of an optimized laser film recorder from the commercial typesetting industry with 10 micron resolution. This report is an update on our current efforts to design and build a second-generation MacBEEP, which aims at 1 micron resolution and multiple phase levels. Trials included a low cost scanning electron microscope in microlithography mode, and alternative laser inscribers or photomask generators. Our current software approach is based on Mathematica and PostScript compatibility.
User-Defined Data Distributions in High-Level Programming Languages
NASA Technical Reports Server (NTRS)
Diaconescu, Roxana E.; Zima, Hans P.
2006-01-01
One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
Navier-Stokes predictions of pitch damping for axisymmetric shell using steady coning motion
NASA Technical Reports Server (NTRS)
Weinacht, Paul; Sturek, Walter B.; Schiff, Lewis B.
1991-01-01
Previous theoretical investigations have proposed that the side force and moment acting on a body of revolution in steady coning motion could be related to the pitch-damping force and moment. In the current research effort, this approach is applied to produce predictions of the pitch damping for axisymmetric shell. The flow fields about these projectiles undergoing steady coning motion are successfully computed using a parabolized Navier-Stokes computational approach which makes use of a rotating coordinate frame. The governing equations are modified to include the centrifugal and Coriolis force terms due to the rotating coordinate frame. From the computed flow field, the side moments due to coning motion, spinning motion, and combined spinning and coning motion are used to determine the pitch-damping coefficients. Computations are performed for two generic shell configurations, a secant-ogive-cylinder and a secant-ogive-cylinder-boattail.
A New Formulation for Hybrid LES-RANS Computations
NASA Technical Reports Server (NTRS)
Woodruff, Stephen L.
2013-01-01
Ideally, a hybrid LES-RANS computation would employ LES only where necessary to make up for the failure of the RANS model to provide sufficient accuracy or to provide time-dependent information. Current approaches are fairly restrictive in the placement of LES and RANS regions; an LES-RANS transition in a boundary layer, for example, yields an unphysical log-layer shift. A hybrid computation is formulated here to allow greater control over the placement of LES and RANS regions and the transitions between them. The concept of model invariance is introduced, which provides a basis for interpreting hybrid results within an LES-RANS transition zone. Consequences of imposing model invariance include the addition of terms to the governing equations that compensate for unphysical gradients created as the model changes between RANS and LES. Computational results illustrate the increased accuracy of the approach and its insensitivity to the location of the transition and to the blending function employed.
Whole-genome CNV analysis: advances in computational approaches.
Pirooznia, Mehdi; Goes, Fernando S; Zandi, Peter P
2015-01-01
Accumulating evidence indicates that DNA copy number variation (CNV) is likely to make a significant contribution to human diversity and also play an important role in disease susceptibility. Recent advances in genome sequencing technologies have enabled the characterization of a variety of genomic features, including CNVs. This has led to the development of several bioinformatics approaches to detect CNVs from next-generation sequencing data. Here, we review recent advances in CNV detection from whole genome sequencing. We discuss the informatics approaches and current computational tools that have been developed as well as their strengths and limitations. This review will assist researchers and analysts in choosing the most suitable tools for CNV analysis as well as provide suggestions for new directions in future development.
In silico polypharmacology of natural products.
Fang, Jiansong; Liu, Chuang; Wang, Qi; Lin, Ping; Cheng, Feixiong
2017-04-27
Natural products with polypharmacological profiles have demonstrated promise as novel therapeutics for various complex diseases, including cancer. Currently, many gaps exist in our knowledge of which compounds interact with which targets, and experimentally testing all possible interactions is infeasible. Recent advances and developments of systems pharmacology and computational (in silico) approaches provide powerful tools for exploring the polypharmacological profiles of natural products. In this review, we introduce recent progress and advances in computational tools and systems pharmacology approaches for identifying drug targets of natural products by focusing on the development of targeted cancer therapy. We survey the polypharmacological and systems immunology profiles of five representative natural products that are being considered as cancer therapies. We summarize various chemoinformatics, bioinformatics and systems biology resources for reconstructing drug-target networks of natural products. We then review currently available computational approaches and tools for prediction of drug-target interactions by focusing on five domains: target-based, ligand-based, chemogenomics-based, network-based and omics-based systems biology approaches. In addition, we describe a practical example of the application of systems pharmacology approaches by integrating the polypharmacology of natural products and large-scale cancer genomics data for the development of precision oncology under the systems biology framework. Finally, we highlight the promise of cancer immunotherapies and combination therapies that target tumor ecosystems (e.g. clones or 'selfish' sub-clones) via exploiting the immunological and inflammatory 'side' effects of natural products in the cancer post-genomics era.
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
NASA Astrophysics Data System (ADS)
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
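A minimal sketch of a Monte Carlo collision probability estimator is shown below; it is a simplified serial version that samples the relative position near closest approach directly from a combined Gaussian, whereas the paper propagates each sample through a time window and parallelizes the loop on the GPU. All numbers are illustrative placeholders.

import numpy as np

def collision_probability_mc(mean_rel, cov_rel, combined_radius, n_samples=1_000_000):
    # mean_rel, cov_rel: mean and covariance of the relative position (km) near closest approach
    samples = np.random.multivariate_normal(mean_rel, cov_rel, size=n_samples)
    miss = np.linalg.norm(samples, axis=1)
    hits = np.count_nonzero(miss < combined_radius)
    p = hits / n_samples
    sigma = np.sqrt(p * (1 - p) / n_samples)   # binomial standard error of the estimate
    return p, sigma

p, sigma = collision_probability_mc(np.array([0.2, 0.0, 0.1]),
                                    np.diag([0.04, 0.09, 0.01]),
                                    combined_radius=0.02)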
Memory Overview - Technologies and Needs
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.
2010-01-01
As NASA has evolved its usage of spaceflight computing, memory applications have followed as well. In this talk, we will discuss the history of NASA's memories from magnetic core and tape recorders to current semiconductor approaches. We will briefly describe current functional memory usage in NASA space systems, followed by a description of potential radiation-induced failure modes along with considerations for reliable system design.
NASA Astrophysics Data System (ADS)
Ren, Xiaotao; Corcolle, Romain; Daniel, Laurent
2016-02-01
The use of soft magnetic composites (SMCs) in electrical engineering applications is growing. SMCs provide an effective alternative to laminated steels because they exhibit a high permeability with low eddy current losses. Losses are a critical feature in the design of electrical machines, and it is necessary to evaluate the role of microstructure and constitutive properties of SMCs during the predesign stage. In this paper we propose a simplified finite element approach to compute eddy current losses in these materials. The computations make it possible to quantify the role of the exciting source and material properties in eddy current losses. This analysis can later be used in the development of homogenization models for SMCs. Contribution to the topical issue "Numelec 2015 - Elected submissions", edited by Adel Razek
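As a point of reference (a classical thin-lamination estimate under sinusoidal excitation, not the finite element result of the paper), the eddy current loss density scales as

p_{\mathrm{eddy}} \approx \frac{\pi^{2} f^{2} B_{p}^{2} d^{2}}{6\rho},

with f the frequency, B_{p} the peak flux density, d the lamination (or particle) thickness, and \rho the electrical resistivity; the finite element computations quantify how the SMC microstructure departs from this idealized scaling.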
NASA Astrophysics Data System (ADS)
Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun
2018-02-01
Quantum computing is a significant computing capability which is superior to classical computing because of its superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics has proved that perfect outcomes cannot be achieved by measurement, forcing repeated measurements. Hence, it is important to determine the optimum measurement method, which requires fewer repetitions and yields a lower error rate. However, extending current measurement approaches, mainly aimed at quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as conducting global operations, which has considerable costs in the experimental realm. Therefore, in this study, we have proposed an optimum subsystem method to avoid these difficulties. We have provided an analysis of the comparison between the reduced subsystem method and the global minimum error method for two-qubit problems; the conclusions have been verified experimentally. The results showed that the subsystem method could effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process was significantly reduced, in most circumstances, with an acceptable error rate. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.
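For two candidate states, the benchmark is the Helstrom bound for minimum-error discrimination (a standard result quoted here for context, not taken from the paper):

P_{\mathrm{err}}^{\min} = \frac{1}{2}\left(1 - \left\lVert p_{1}\rho_{1} - p_{2}\rho_{2} \right\rVert_{1}\right),

where \rho_{1,2} are the two density operators with prior probabilities p_{1,2} and \lVert\cdot\rVert_{1} is the trace norm; a reduced-subsystem measurement trades some of this optimal error rate for local operations that are cheaper to realize experimentally.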
Large-scale neuromorphic computing systems
NASA Astrophysics Data System (ADS)
Furber, Steve
2016-10-01
Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.
Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A
2018-06-01
Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Current surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experience and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a successful surgical pathway used on one patient is not necessarily the best approach on another patient. The question is how to define and obtain optimized patient-specific surgical approach pathways. In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches, a weighted-sum approach and Pareto optimality, were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, the surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In one case where the actual surgical approach was different, revision surgery was required and was performed utilizing the computationally derived approach pathway.
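In generic form (standard multi-objective notation, not the paper's specific cost terms), the weighted-sum approach solves

\min_{x \in X} \sum_{i=1}^{k} w_{i} f_{i}(x), \qquad w_{i} \ge 0, \quad \sum_{i=1}^{k} w_{i} = 1,

with the surgeon's preferences encoded in the weights w_{i} before the search, whereas the Pareto approach first computes the set of pathways x^{*} for which no feasible x satisfies f_{i}(x) \le f_{i}(x^{*}) for all objectives with strict inequality for at least one, and lets the surgeon choose from that set afterwards.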
Zevin, Jason D; Miller, Brett
Reading research is increasingly a multi-disciplinary endeavor involving more complex, team-based science approaches. These approaches offer the potential of capturing the complexity of reading development, the emergence of individual differences in reading performance over time, how these differences relate to the development of reading difficulties and disability, and more fully understanding the nature of skilled reading in adults. This special issue focuses on the potential opportunities and insights that early and richly integrated advanced statistical and computational modeling approaches can provide to our foundational (and translational) understanding of reading. The issue explores how computational and statistical modeling, using both observed and simulated data, can serve as a contact point among research domains and topics, complement other data sources and critically provide analytic advantages over current approaches.
A Service-oriented Approach towards Context-aware Mobile Learning Management Systems
2010-07-01
towards a pervasive university. Keywords: context-aware computing, service-oriented architecture, mobile computing, eLearning, learning management system. I. ... usage of device-specific features provide support for various ubiquitous and pervasive eLearning scenarios [2][3]. By knowing where the user currently... data from the mobile device towards a context-aware mobile LMS. II. BASIC CONCEPTS: For a better understanding of the presented eLearning scenarios
Simulation of electron transport across charged grain boundaries
NASA Astrophysics Data System (ADS)
Srikant, V.; Clarke, D. R.; Evans, P. V.
1996-09-01
The I-V (current density-electric field) characteristics of low-angle grain boundaries consisting of periodic arrays of charged dislocations are computed using a quasiclassical molecular dynamics approach. Below a critical value of the grain boundary misorientation, the computed I-V characteristics are linear, whereas above it they are nonlinear. The degree of nonlinearity and the voltage onset of nonlinearity are found to be dependent on the grain boundary misorientation.
Status of the Electroforming Shield Design (ESD) project
NASA Technical Reports Server (NTRS)
Fletcher, R. E.
1977-01-01
The utilization of a digital computer to augment electrodeposition/electroforming processes in which nonconducting shielding controls local cathodic current distribution is reported. The underlying physics of electrodeposition is presented, along with the technical approach taken to analytically simulate electrolytic tank variables. A FORTRAN computer program has been developed and implemented; it utilizes finite element techniques and electrostatic theory to simulate electropotential fields and ionic transport.
An assessment and application of turbulence models for hypersonic flows
NASA Technical Reports Server (NTRS)
Coakley, T. J.; Viegas, J. R.; Huang, P. G.; Rubesin, M. W.
1990-01-01
The current approach to the accurate computation of complex high-speed flows is to solve the Reynolds-averaged Navier-Stokes equations using finite difference methods. An integral part of this approach consists of the development and application of mathematical turbulence models, which are necessary in predicting the aerothermodynamic loads on the vehicle and the performance of the propulsion plant. Computations of several high-speed turbulent flows using various turbulence models are described, and the models are evaluated by comparing computations with the results of experimental measurements. The cases investigated include flows over insulated and cooled flat plates with Mach numbers ranging from 2 to 8 and wall temperature ratios ranging from 0.2 to 1.0. The turbulence models investigated include zero-equation, two-equation, and Reynolds-stress transport models.
ERIC Educational Resources Information Center
Downie, J. Stephen
2003-01-01
Identifies MIR (Music Information Retrieval) computer system problems, historic influences, current state-of-the-art, and future MIR solutions through an examination of the multidisciplinary approach to MIR. Highlights include pitch; temporal factors; harmonics; tone; editorial, textual, and bibliographic facets; multicultural factors; locating…
Computational Approaches for Identifying Adverse Outcome Pathways
Adverse Outcome Pathways (AOPs) provide a framework for organizing toxicity information to improve predictions of the potential adverse impact of environmental stressors on humans or wildlife populations, but these benefits are currently limited by the small number of AOPs currentl...
Analysis of Compression Pad Cavities for the Orion Heatshield
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lessard, Victor R.; Jentink, Thomas N.; Zoby, Ernest V.
2009-01-01
Current results of a program for analysis of the compression pad cavities on the Orion heatshield are reviewed. The program was supported by experimental tests, engineering modeling, and applied computations, with an emphasis on the latter presented in this paper. The computational tools and approach are described along with calculated results for wind tunnel and flight conditions. Correlations of the computed results are shown that can produce a credible prediction of heating augmentation due to cavity disturbances. The models developed for use in preliminary design of the Orion heatshield are presented.
Ho, Hsiang; Milenković, Tijana; Memisević, Vesna; Aruri, Jayavani; Przulj, Natasa; Ganesan, Anand K
2010-06-15
RNA-mediated interference (RNAi)-based functional genomics is a systems-level approach to identify novel genes that control biological phenotypes. Existing computational approaches can identify individual genes from RNAi datasets that regulate a given biological process. However, currently available methods cannot identify which RNAi screen "hits" are novel components of well-characterized biological pathways known to regulate the interrogated phenotype. In this study, we describe a method to identify genes from RNAi datasets that are novel components of known biological pathways. We experimentally validate our approach in the context of a recently completed RNAi screen to identify novel regulators of melanogenesis. In this study, we utilize a PPI network topology-based approach to identify targets within our RNAi dataset that may be components of known melanogenesis regulatory pathways. Our computational approach identifies a set of screen targets that cluster topologically in a human PPI network with the known pigment regulator Endothelin receptor type B (EDNRB). Validation studies reveal that these genes impact pigment production and EDNRB signaling in pigmented melanoma cells (MNT-1) and normal melanocytes. We present an approach that identifies novel components of well-characterized biological pathways from functional genomics datasets that could not have been identified by existing statistical and computational approaches.
NASA Technical Reports Server (NTRS)
Murray, N. D.
1985-01-01
Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.
Convolutional neural networks and face recognition task
NASA Astrophysics Data System (ADS)
Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.
2017-09-01
Computer vision tasks have remained very important over the last couple of years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of different approaches to solve this task, but there is still no universal solution that gives adequate results in all cases. The current paper presents the following approach. First, we extract the area containing the face; then we apply a Canny edge detector. At the next stage we use convolutional neural networks (CNN) to finally solve the face recognition and person identification task.
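A minimal sketch of the pipeline described above (face extraction, Canny edge map, CNN classifier) is given below. The Haar-cascade detector, the image size, the Canny thresholds, and the network architecture are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

# 1) Detect and crop the face region (Haar cascade used as a stand-in detector).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(gray_image):
    """Crop the first detected face, resize it, and build a Canny edge map."""
    faces = detector.detectMultiScale(gray_image, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray_image[y:y + h, x:x + w], (64, 64))
    edges = cv2.Canny(face, 100, 200).astype(np.float32) / 255.0
    return torch.from_numpy(edges)[None, None]   # shape (1, 1, 64, 64)

# 2) A small CNN classifier over identities; the architecture is illustrative.
class FaceCNN(nn.Module):
    def __init__(self, n_identities):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_identities)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```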
NASA Astrophysics Data System (ADS)
Chatziantonaki, Ioanna; Tsironis, Christos; Isliker, Heinz; Vlahos, Loukas
2013-11-01
The most promising technique for the control of neoclassical tearing modes in tokamak experiments is the compensation of the missing bootstrap current with an electron-cyclotron current drive (ECCD). In this framework, the dynamics of magnetic islands have been studied extensively in terms of the modified Rutherford equation (MRE), including the presence of a current drive, either analytically described or computed by numerical methods. In this article, a self-consistent model for the dynamic evolution of the magnetic island and the driven current is derived, which takes into account the island's magnetic topology and its effect on the current drive. The model combines the MRE with a ray-tracing approach to electron-cyclotron wave propagation and absorption. Numerical results exhibit a decrease in the time required for complete stabilization with respect to the conventional computation (which does not take the island geometry into account); this decrease grows with increasing initial island size and radial misalignment of the deposition.
NASA Astrophysics Data System (ADS)
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time-dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called the Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step-function boundary value problem. This approach enables modelling the concentration distribution in the granular particles for arbitrary time-dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and the Padé approximation at the same accuracy of results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current density variations typical for mobile/automotive applications. The proposed approach can thus be considered as one of the key innovative methods enabling real-time capability of multi-particle electrochemical battery models featuring spatially and temporally resolved particle concentration profiles.
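The core idea, superposing analytical step responses so that an arbitrary, a-priori-unknown flux history can be handled, can be sketched as follows. The exponential step response used here is a placeholder purely to make the sketch runnable; the actual DTC method uses the analytical solution of the diffusion equation in a spherical granule, and the time constant, gain, and flux profile below are assumptions.

```python
import numpy as np

def step_response(t, tau=30.0, gain=1.0):
    """Placeholder step response of surface concentration to a unit flux step.
    In the actual DTC method this would be the analytical solution of the
    diffusion equation in a spherical granule; the exponential relaxation
    used here is purely illustrative."""
    return gain * (1.0 - np.exp(-np.asarray(t, dtype=float) / tau))

def dtc_concentration(flux_history, dt, step=step_response):
    """Discrete temporal convolution: superpose step responses weighted by the
    increments of an arbitrary flux history that need not be known in advance
    (each new time sample only appends one term to the sum)."""
    flux = np.asarray(flux_history, dtype=float)
    increments = np.diff(flux, prepend=0.0)      # flux change at each step
    t_grid = dt * np.arange(len(flux))
    g = step(t_grid)
    # c[n] = sum_{m <= n} dflux[m] * g[(n - m) * dt]  (causal convolution)
    return np.convolve(increments, g)[: len(flux)]

# Example: a square-wave flux profile loosely mimicking a drive cycle.
flux = np.r_[np.ones(50), np.zeros(50), 0.5 * np.ones(50)]
surface_conc = dtc_concentration(flux, dt=1.0)
```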
An alternative approach for computing seismic response with accidental eccentricity
NASA Astrophysics Data System (ADS)
Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu
2014-09-01
Accidental eccentricity is a non-standard assumption for the seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which requires either time-consuming computation of the natural vibration of eccentric structures or finding a static displacement solution by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called the Rayleigh Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) a RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, i.e., the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA, and is much better than ETM-MRSA, but is also more economical. Thus, RRP-MRSA can be used in place of current accidental eccentricity computations in seismic design.
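The Rayleigh-Ritz projection step can be illustrated with a small sketch: the modes of the nominal (concentric) structure are reused as a reduced basis, so each eccentric case only requires a small projected eigenproblem instead of a full modal analysis. The 3-DOF stiffness and mass matrices and the 5% mass perturbation are hypothetical, and the sketch does not reproduce the paper's mass-matrix assembly strategy.

```python
import numpy as np
from scipy.linalg import eigh

def nominal_modes(K, M, n_modes):
    """Modes of the nominal (concentric) structure, used as a Ritz basis."""
    vals, vecs = eigh(K, M)
    return vals[:n_modes], vecs[:, :n_modes]

def rrp_modes(K, M_ecc, basis):
    """Rayleigh-Ritz projection: approximate the modes of an eccentric case by
    solving a small projected eigenproblem in the nominal mode basis instead
    of re-solving the full problem for every eccentric configuration."""
    Kr = basis.T @ K @ basis
    Mr = basis.T @ M_ecc @ basis
    vals, q = eigh(Kr, Mr)
    return vals, basis @ q

# Illustrative 3-DOF shear-building example with a perturbed (eccentric) mass.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]]) * 1e6
M = np.diag([1.0, 1.0, 1.0]) * 1e3
M_ecc = M.copy()
M_ecc[2, 2] *= 1.05                      # hypothetical eccentric case
_, basis = nominal_modes(K, M, n_modes=2)
approx_vals, approx_modes = rrp_modes(K, M_ecc, basis)
```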
Visualization of Electrical Field of Electrode Using Voltage-Controlled Fluorescence Release
Jia, Wenyan; Wu, Jiamin; Gao, Di; Wang, Hao; Sun, Mingui
2016-01-01
In this study we propose an approach to directly visualize electrical current distribution at the electrode-electrolyte interface of a biopotential electrode. High-speed fluorescent microscopic images are acquired when an electric potential is applied across the interface to trigger the release of fluorescent material from the surface of the electrode. These images are analyzed computationally to obtain the distribution of the electric field from the fluorescent intensity of each pixel. Our approach allows direct observation of microscopic electrical current distribution around the electrode. Experiments are conducted to validate the feasibility of the fluorescent imaging method. PMID:27253615
Phytoplankton as Particles - A New Approach to Modeling Algal Blooms
2013-07-01
Figure 69 (caption fragment): Amplitudes of lunar semi-diurnal and diurnal harmonics of observed and computed... particle behavior when the trajectory takes a particle outside the model domain. The rules associated with the present particle-tracking algorithms are... landward, although occasional reversals occurred. The amplitude of the current fluctuations was approximately 20 cm/s. Model residual currents for one year were
Computational pan-genomics: status, promises and challenges.
2018-01-01
Many disciplines, from human genetics and oncology to plant breeding, microbiology and virology, commonly face the challenge of analyzing rapidly increasing numbers of genomes. In the case of Homo sapiens, the number of sequenced genomes will approach hundreds of thousands in the next few years. Simply scaling up established bioinformatics pipelines will not be sufficient for leveraging the full potential of such rich genomic data sets. Instead, novel, qualitatively different computational methods and paradigms are needed. We will witness the rapid extension of computational pan-genomics, a new sub-area of research in computational biology. In this article, we generalize existing definitions and understand a pan-genome as any collection of genomic sequences to be analyzed jointly or to be used as a reference. We examine already available approaches to construct and use pan-genomes, discuss the potential benefits of future technologies and methodologies and review open challenges from the vantage point of the above-mentioned biological disciplines. As a prominent example for a computational paradigm shift, we particularly highlight the transition from the representation of reference genomes as strings to representations as graphs. We outline how this and other challenges from different application domains translate into common computational problems, point out relevant bioinformatics techniques and identify open problems in computer science. With this review, we aim to increase awareness that a joint approach to computational pan-genomics can help address many of the problems currently faced in various domains. © The Author 2016. Published by Oxford University Press.
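The string-to-graph transition highlighted in the review can be illustrated with a toy example: nodes hold sequence segments, edges connect consecutive segments, and each genome is a path through the graph. The node sequences, genome names, and the notion of "core" nodes below are illustrative assumptions, not any specific tool's data model.

```python
# A toy "variation graph" view of a pan-genome: nodes 2 and 3 are
# alternative alleles between shared flanking segments 1 and 4.
nodes = {1: "ACGT", 2: "T", 3: "C", 4: "GGA"}
edges = {(1, 2), (1, 3), (2, 4), (3, 4)}
paths = {
    "genomeA": [1, 2, 4],   # reference-like haplotype
    "genomeB": [1, 3, 4],   # carries the alternative allele
}

def reconstruct(path_name):
    """Spell out a single genome from its path through the graph."""
    return "".join(nodes[n] for n in paths[path_name])

def core_nodes():
    """Nodes shared by every genome (the 'core' of the pan-genome)."""
    node_sets = [set(p) for p in paths.values()]
    return set.intersection(*node_sets)

print(reconstruct("genomeA"), reconstruct("genomeB"), core_nodes())
```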
Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2012-01-01
We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. Testing the method using two 12,000-case Monte Carlo simulations, we found the method achieved a missed detection rate of 0.1%, and a false alarm rate of 2%.
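The Wald sequential probability ratio test that sits on top of the filter bank can be sketched compactly. The thresholds follow the classical Wald approximations derived from the stated false-alarm and missed-detection criteria (the 2% and 0.1% defaults mirror the rates quoted above); the per-update innovation log-likelihoods are assumed to be supplied by the two constrained filters and are not modeled here.

```python
import numpy as np

def wald_thresholds(p_false_alarm, p_missed_detection):
    """Classical Wald SPRT decision thresholds in log form."""
    A = np.log((1.0 - p_missed_detection) / p_false_alarm)
    B = np.log(p_missed_detection / (1.0 - p_false_alarm))
    return A, B

def sprt(loglike_h1, loglike_h0, p_fa=0.02, p_md=0.001):
    """Accumulate the log-likelihood ratio of the two constrained filters'
    innovations and decide as soon as a threshold is crossed. Inputs are
    per-update innovation log-likelihoods produced by the filters."""
    A, B = wald_thresholds(p_fa, p_md)
    llr = 0.0
    for k, (l1, l0) in enumerate(zip(loglike_h1, loglike_h0)):
        llr += l1 - l0
        if llr >= A:
            return "accept H1 (inside hard-body radius): plan maneuver", k
        if llr <= B:
            return "accept H0 (miss): no action", k
    return "continue sampling", len(loglike_h1) - 1
```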
The application of quantum mechanics in structure-based drug design.
Mucs, Daniel; Bryce, Richard A
2013-03-01
Computational chemistry has become an established and valuable component in structure-based drug design. However, the chemical complexity of many ligands and active sites challenges the accuracy of the empirical potentials commonly used to describe these systems. Consequently, there is a growing interest in utilizing electronic structure methods for addressing problems in protein-ligand recognition. In this review, the authors discuss recent progress in the development and application of quantum chemical approaches to modeling protein-ligand interactions. The authors specifically consider the development of quantum mechanics (QM) approaches for studying large molecular systems pertinent to biology, focusing on protein-ligand docking, protein-ligand binding affinities and ligand strain on binding. Although computation of binding energies remains a challenging and evolving area, current QM methods can underpin improved docking approaches and offer detailed insights into ligand strain and into the nature and relative strengths of complex active site interactions. The authors envisage that QM will become an increasingly routine and valued tool of the computational medicinal chemist.
Seo, Jung Hee; Mittal, Rajat
2010-01-01
A new sharp-interface immersed boundary method based approach for the computation of low-Mach number flow-induced sound around complex geometries is described. The underlying approach is based on a hydrodynamic/acoustic splitting technique where the incompressible flow is first computed using a second-order accurate immersed boundary solver. This is followed by the computation of sound using the linearized perturbed compressible equations (LPCE). The primary contribution of the current work is the development of a versatile, high-order accurate immersed boundary method for solving the LPCE in complex domains. This new method applies the boundary condition on the immersed boundary to a high-order by combining the ghost-cell approach with a weighted least-squares error method based on a high-order approximating polynomial. The method is validated for canonical acoustic wave scattering and flow-induced noise problems. Applications of this technique to relatively complex cases of practical interest are also presented. PMID:21318129
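The high-order boundary treatment described above, combining a ghost-cell formulation with a weighted least-squares polynomial fit, can be sketched in two dimensions as follows. The quadratic basis, the inclusion of the body-intercept boundary value as one fitted datum, and the weighting scheme are assumptions made for illustration; they follow the general ghost-cell/weighted-least-squares idea rather than the paper's exact high-order formulation, and the caller must supply enough neighbouring fluid-cell values for the fit to be well posed.

```python
import numpy as np

def ghost_value(neighbor_xy, neighbor_vals, bi_xy, bc_value, ghost_xy,
                weights=None):
    """Fit a quadratic polynomial to nearby fluid-cell values plus the
    boundary condition at the body-intercept point, then evaluate the
    polynomial at the ghost node."""
    def basis(p):
        x, y = p
        return np.array([1.0, x, y, x * x, x * y, y * y])

    pts = list(neighbor_xy) + [bi_xy]
    vals = np.asarray(list(neighbor_vals) + [bc_value], dtype=float)
    if weights is None:
        weights = np.ones(len(pts))
    V = np.array([basis(p) for p in pts])
    w_sqrt = np.sqrt(np.asarray(weights, dtype=float))
    # Weighted least squares: minimize || W^(1/2) (V c - vals) ||^2
    coeffs, *_ = np.linalg.lstsq(w_sqrt[:, None] * V, w_sqrt * vals,
                                 rcond=None)
    return basis(ghost_xy) @ coeffs
```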
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...
2015-07-14
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
Critical Thinking Traits of Top-Tier Experts and Implications for Computer Science Education
2007-08-01
field of cognitive theory," [Papert 1999] used his work while developing the Logo programming language. Although other researchers had developed ... of computer expert systems influenced the development of current theories dealing with cognitive abilities. One of the most important initiatives by ... multitude of factors involved. He also builds on the cognitive development work of Piaget and is not ready to abandon the generalist approach. Instead, he
Integral processing in beyond-Hartree-Fock calculations
NASA Technical Reports Server (NTRS)
Taylor, P. R.
1986-01-01
The increasing rate at which improvements in processing capacity outstrip improvements in input/output performance of large computers has led to recent attempts to bypass generation of a disk-based integral file. The direct self-consistent field (SCF) method of Almlof and co-workers represents a very successful implementation of this approach. This paper is concerned with the extension of this general approach to configuration interaction (CI) and multiconfiguration self-consistent field (MCSCF) calculations. After a discussion of the particular types of molecular orbital (MO) integrals for which, at least for most current generation machines, disk-based storage seems unavoidable, it is shown how all the necessary integrals can be obtained as matrix elements of Coulomb and exchange operators that can be calculated using a direct approach. Computational implementations of such a scheme are discussed.
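The central contraction, forming Coulomb and exchange matrices from two-electron integrals and a density-like matrix, is sketched below. A dense integral array is used only to keep the example short; in a genuinely integral-direct scheme the integrals would never be stored but generated in batches and contracted on the fly, and the chemists'-notation convention assumed here is an illustration rather than the paper's exact formulation.

```python
import numpy as np

def coulomb_exchange(eri, D):
    """Build Coulomb (J) and exchange (K) matrices from two-electron integrals
    in chemists' notation, eri[p, q, r, s] = (pq|rs), and a density matrix D:
        J_pq = sum_rs (pq|rs) D_rs
        K_pq = sum_rs (pr|qs) D_rs
    """
    J = np.einsum("pqrs,rs->pq", eri, D)
    K = np.einsum("prqs,rs->pq", eri, D)
    return J, K

# Tiny illustrative example with random integrals and an identity "density".
n = 4
rng = np.random.default_rng(0)
eri = rng.random((n, n, n, n))
D = np.eye(n)
J, K = coulomb_exchange(eri, D)
```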
Representation-Independent Iteration of Sparse Data Arrays
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
An approach is defined for iterating over massively large arrays containing sparse data in a manner that is independent of how the contents of the sparse arrays are laid out in memory. What is unique and important here is the decoupling of the iteration over the sparse set of array elements from how they are internally represented in memory. This makes the approach backward compatible with existing schemes for representing sparse arrays as well as with new approaches. The novelty is an approach for efficiently iterating over sparse arrays that is independent of the underlying memory layout representation of the array. A functional interface is defined for implementing sparse arrays in any modern programming language, with a particular focus on the Chapel programming language. Examples are provided that show the translation of a loop that computes a matrix-vector product into this representation for both the distributed and non-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program that JPL and our current program are engaged in. The goal of this program is to create powerful, scalable, and economically viable high-powered computer systems suitable for use in national security and industry by 2010. This is important to NASA for its computationally intensive requirements for analyzing and understanding the volumes of science data from our returned missions.
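While the abstract targets Chapel, the decoupling it describes can be sketched in Python: consumers iterate over stored (row, column, value) triples through a small functional interface, and the same matrix-vector product works unchanged for two very different memory layouts. The class names and the dict-of-keys and CSR-like layouts are illustrative assumptions.

```python
from abc import ABC, abstractmethod

class SparseMatrix(ABC):
    """Functional interface: iteration over stored (row, col, value) triples
    is decoupled from how the entries are laid out in memory."""
    @abstractmethod
    def nonzeros(self):
        """Yield (row, col, value) for every stored element."""

class DictOfKeys(SparseMatrix):
    def __init__(self, entries):                  # {(i, j): value}
        self.entries = dict(entries)
    def nonzeros(self):
        yield from ((i, j, v) for (i, j), v in self.entries.items())

class CSRLike(SparseMatrix):
    def __init__(self, indptr, indices, data):
        self.indptr, self.indices, self.data = indptr, indices, data
    def nonzeros(self):
        for i in range(len(self.indptr) - 1):
            for k in range(self.indptr[i], self.indptr[i + 1]):
                yield i, self.indices[k], self.data[k]

def matvec(A: SparseMatrix, x, n_rows):
    """Matrix-vector product written once, against the interface only."""
    y = [0.0] * n_rows
    for i, j, v in A.nonzeros():
        y[i] += v * x[j]
    return y

# The same loop body works for either representation.
A1 = DictOfKeys({(0, 0): 2.0, (1, 2): 3.0})
A2 = CSRLike([0, 1, 2], [0, 2], [2.0, 3.0])
assert matvec(A1, [1, 1, 1], 2) == matvec(A2, [1, 1, 1], 2)
```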
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacko, M; Aldoohan, S
Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. The current approach to quantifying LCD uses vendor-specific methods and phantoms, typically by subjectively observing the smallest-size object at a contrast level above the phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance, but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD using a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area, and uses statistical analysis based on Student's t-distribution to compute the LCD as the minimal Hounsfield units that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and a tissue-mimicking phantom. The contributions of beam quality and scattered radiation to the computed LCD were quantified by using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and tissue/organ type strongly influenced the background noise characteristics and, therefore, the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze any scanner's LCD performance. Furthermore, the method employed can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients' clinical CT images.
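Although the cited implementation is in MATLAB, the statistical idea, random sampling of virtual objects in a uniform image combined with a Student's t based confidence bound, can be sketched in Python as follows. The square object shape, the use of the spread of object means as the noise estimate, and the synthetic noise level are assumptions made for illustration and may differ from the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def lcd_hu(image, object_px, n_samples=500, confidence=0.95, seed=None):
    """Statistical LCD sketch: randomly place square virtual objects of a
    given pixel size in a uniform phantom image, take the mean HU inside
    each, and report the contrast separable from background at the requested
    confidence using Student's t-distribution."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    means = np.empty(n_samples)
    for k in range(n_samples):
        r = rng.integers(0, h - object_px)
        c = rng.integers(0, w - object_px)
        means[k] = image[r:r + object_px, c:c + object_px].mean()
    spread = means.std(ddof=1)                       # spread of object means
    t_crit = stats.t.ppf(confidence, df=n_samples - 1)
    return t_crit * spread

# Example: a synthetic uniform phantom with Gaussian noise (sigma = 10 HU).
phantom = np.random.default_rng(0).normal(0.0, 10.0, size=(512, 512))
print(lcd_hu(phantom, object_px=8, seed=1))
```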
From computers to cultivation: reconceptualizing evolutionary psychology
Barrett, Louise; Pollet, Thomas V.; Stulp, Gert
2014-01-01
Does evolutionary theorizing have a role in psychology? This is a more contentious issue than one might imagine, given that, as evolved creatures, the answer must surely be yes. The contested nature of evolutionary psychology lies not in our status as evolved beings, but in the extent to which evolutionary ideas add value to studies of human behavior, and the rigor with which these ideas are tested. This, in turn, is linked to the framework in which particular evolutionary ideas are situated. While the framing of the current research topic places the brain-as-computer metaphor in opposition to evolutionary psychology, the most prominent school of thought in this field (born out of cognitive psychology, and often known as the Santa Barbara school) is entirely wedded to the computational theory of mind as an explanatory framework. Its unique aspect is to argue that the mind consists of a large number of functionally specialized (i.e., domain-specific) computational mechanisms, or modules (the massive modularity hypothesis). Far from offering an alternative to, or an improvement on, the current perspective, we argue that evolutionary psychology is a mainstream computational theory, and that its arguments for domain-specificity often rest on shaky premises. We then go on to suggest that the various forms of e-cognition (i.e., embodied, embedded, enactive) represent a true alternative to standard computational approaches, with an emphasis on “cognitive integration” or the “extended mind hypothesis” in particular. We feel this offers the most promise for human psychology because it incorporates the social and historical processes that are crucial to human “mind-making” within an evolutionarily informed framework. In addition to linking to other research areas in psychology, this approach is more likely to form productive links to other disciplines within the social sciences, not least by encouraging a healthy pluralism in approach. PMID:25161633
Pursiainen, S; Vorwerk, J; Wolters, C H
2016-12-21
The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents in the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence-conforming H(div) basis functions. Both linear and quadratic functions are used, while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. These results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approaches, which utilize monopolar loads instead of dipolar currents.
Exploiting multicore compute resources in the CMS experiment
NASA Astrophysics Data System (ADS)
Ramírez, J. E.; Pérez-Calero Yzquierdo, A.; Hernández, J. M.; CMS Collaboration
2016-10-01
CMS has developed a strategy to efficiently exploit the multicore architecture of the compute resources accessible to the experiment. A coherent use of the multiple cores available in a compute node yields substantial gains in terms of resource utilization. The implemented approach makes use of the multithreading support of the event processing framework and the multicore scheduling capabilities of the resource provisioning system. Multicore slots are acquired and provisioned by means of multicore pilot agents which internally schedule and execute single and multicore payloads. Multicore scheduling and multithreaded processing are currently used in production for online event selection and prompt data reconstruction. More workflows are being adapted to run in multicore mode. This paper presents a review of the experience gained in the deployment and operation of the multicore scheduling and processing system, the current status and future plans.
NASA Astrophysics Data System (ADS)
Larour, Eric; Utke, Jean; Bovin, Anton; Morlighem, Mathieu; Perez, Gilberto
2016-11-01
Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar, gravity, and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model (ISSM), written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written, but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of the ISSM. We present a comprehensive approach to (1) carry out type changing through the ISSM, hence facilitating operator overloading, (2) bind to external solvers such as MUMPS and GSL-LU, and (3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the northeastern Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica.
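The operator-overloading strategy at the heart of this adjoint capability can be illustrated with a deliberately tiny reverse-mode sketch: each overloaded arithmetic operation records its local partial derivatives so that the adjoint (sensitivity) of a scalar misfit with respect to every input can be propagated backwards. Real tools of this kind, such as those coupled to ISSM and the AdjoinableMPI library, additionally handle MPI communication, external solvers, and checkpointing; the class below is only a minimal illustration and is not ISSM code.

```python
class Var:
    """Minimal reverse-mode operator-overloading sketch."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.adjoint = value, parents, 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        # local partial derivatives of (a + b) are 1 w.r.t. each operand
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        # local partial derivatives of (a * b) are b w.r.t. a and a w.r.t. b
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        """Propagate the adjoint of the output back to every input."""
        self.adjoint += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

# Sensitivities of a toy "misfit" = x*y + x with respect to x and y.
x, y = Var(3.0), Var(4.0)
misfit = x * y + x
misfit.backward()
print(x.adjoint, y.adjoint)    # 5.0 and 3.0
```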
NASA Astrophysics Data System (ADS)
Perez, G. L.; Larour, E. Y.; Morlighem, M.
2016-12-01
Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model, written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of ISSM. We present a comprehensive approach to 1) carry out type changing through ISSM, hence facilitating operator overloading, 2) bind to external solvers such as MUMPS and GSL-LU and 3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the North-East Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, Central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential of enabling a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica, such as surface altimetry, surface velocities, and/or gravity measurements.
Computing the nucleon charge and axial radii directly at Q2=0 in lattice QCD
NASA Astrophysics Data System (ADS)
Hasan, Nesreen; Green, Jeremy; Meinel, Stefan; Engelhardt, Michael; Krieg, Stefan; Negele, John; Pochinsky, Andrew; Syritsyn, Sergey
2018-02-01
We describe a procedure for extracting momentum derivatives of nucleon matrix elements on the lattice directly at Q2=0. This is based on the Rome method for computing momentum derivatives of quark propagators. We apply this procedure to extract the nucleon isovector magnetic moment and charge radius as well as the isovector induced pseudoscalar form factor at Q2=0 and the axial radius. For comparison, we also determine these quantities with the traditional approach of computing the corresponding form factors, i.e. GEv(Q2) and GMv(Q2) for the case of the vector current and GPv(Q2) and GAv(Q2) for the axial current, at multiple Q2 values followed by z-expansion fits. We perform our calculations at the physical pion mass using a 2HEX-smeared Wilson-clover action. To control the effects of excited-state contamination, the calculations were done at three source-sink separations and the summation method was used. The derivative method produces results consistent with those from the traditional approach but with larger statistical uncertainties, especially for the isovector charge and axial radii.
An approach to the origin of self-replicating system. I - Intermolecular interactions
NASA Technical Reports Server (NTRS)
Macelroy, R. D.; Coeckelenbergh, Y.; Rein, R.
1978-01-01
The present paper deals with the characteristics and potentialities of a recently developed computer-based molecular modeling system. Some characteristics of current coding systems are examined and are extrapolated to the apparent requirements of primitive prebiological coding systems.
Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis
2015-01-01
Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
Enhancements in medicine by integrating content based image retrieval in computer-aided diagnosis
NASA Astrophysics Data System (ADS)
Aggarwal, Preeti; Sardana, H. K.
2010-02-01
Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. With CAD, radiologists use the computer output as a "second opinion" and make the final decisions. Image retrieval is a useful tool to help radiologists check medical images and diagnoses. The impact of content-based access to medical images is frequently reported, but existing systems are designed for only a particular context of diagnosis. The challenge in medical informatics is to develop tools for analyzing the content of medical images and to represent them in a way that can be efficiently searched and compared by physicians. CAD is a concept established by taking into account equally the roles of physicians and computers. To build a successful computer-aided diagnostic system, all the relevant technologies, especially retrieval, need to be integrated in such a manner that they provide effective and efficient pre-diagnosed cases with proven pathology for the current case at the right time. In this paper, it is suggested that integration of content-based image retrieval (CBIR) in CAD can bring substantial benefits in medicine, especially in diagnosis. This approach is also compared with other approaches by highlighting its advantages over them.
Computationally optimized ECoG stimulation with local safety constraints.
Guler, Seyhmus; Dannhauer, Moritz; Roig-Solvas, Biel; Gkogkidis, Alexis; Macleod, Rob; Ball, Tonio; Ojemann, Jeffrey G; Brooks, Dana H
2018-06-01
Direct stimulation of the cortical surface is used clinically for cortical mapping and modulation of local activity. Future applications of cortical modulation and brain-computer interfaces may also use cortical stimulation methods. One common method to deliver current is through electrocorticography (ECoG) stimulation in which a dense array of electrodes are placed subdurally or epidurally to stimulate the cortex. However, proximity to cortical tissue limits the amount of current that can be delivered safely. It may be desirable to deliver higher current to a specific local region of interest (ROI) while limiting current to other local areas more stringently than is guaranteed by global safety limits. Two commonly used global safety constraints bound the total injected current and individual electrode currents. However, these two sets of constraints may not be sufficient to prevent high current density locally (hot-spots). In this work, we propose an efficient approach that prevents current density hot-spots in the entire brain while optimizing ECoG stimulus patterns for targeted stimulation. Specifically, we maximize the current along a particular desired directional field in the ROI while respecting three safety constraints: one on the total injected current, one on individual electrode currents, and the third on the local current density magnitude in the brain. This third set of constraints creates a computational barrier due to the huge number of constraints needed to bound the current density at every point in the entire brain. We overcome this barrier by adopting an efficient two-step approach. In the first step, the proposed method identifies the safe brain region, which cannot contain any hot-spots solely based on the global bounds on total injected current and individual electrode currents. In the second step, the proposed algorithm iteratively adjusts the stimulus pattern to arrive at a solution that exhibits no hot-spots in the remaining brain. We report on simulations on a realistic finite element (FE) head model with five anatomical ROIs and two desired directional fields. We also report on the effect of ROI depth and desired directional field on the focality of the stimulation. Finally, we provide an analysis of optimization runtime as a function of different safety and modeling parameters. Our results suggest that optimized stimulus patterns tend to differ from those used in clinical practice. Copyright © 2018 Elsevier Inc. All rights reserved.
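The optimization problem described above, maximizing current along a desired direction in the ROI subject to a total-current bound, per-electrode bounds, and local current-density bounds, can be sketched as a convex program. The transfer matrices, bound values, and the use of a scalar (linearized) current-density stand-in per brain node are hypothetical placeholders for quantities that would come from a finite element head model, and the sketch applies the density bound everywhere rather than reproducing the paper's two-step hot-spot screening.

```python
import numpy as np
import cvxpy as cp

# Hypothetical linear models mapping electrode currents to fields: rows of
# A_roi give the current-density component along the desired direction at
# ROI nodes; rows of A_brain give a linearized current-density measure at
# brain nodes. In practice these come from an FE head model.
n_elec, n_roi, n_brain = 16, 20, 200
rng = np.random.default_rng(1)
A_roi = rng.normal(size=(n_roi, n_elec))
A_brain = rng.normal(size=(n_brain, n_elec))

I = cp.Variable(n_elec)
objective = cp.Maximize(cp.sum(A_roi @ I))      # current along desired field in ROI
constraints = [
    cp.sum(I) == 0,                             # charge conservation
    cp.norm1(I) <= 4.0,                         # total injected current bound (illustrative)
    cp.abs(I) <= 1.0,                           # per-electrode bound (illustrative)
    cp.abs(A_brain @ I) <= 0.5,                 # local current-density bound (hot-spot guard)
]
cp.Problem(objective, constraints).solve()
print("optimized stimulus pattern:", np.round(I.value, 3))
```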
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
Reciprocity in computer-human interaction: source-based, norm-based, and affect-based explanations.
Lee, Seungcheol Austin; Liang, Yuhua Jake
2015-04-01
Individuals often apply social rules when they interact with computers, and this is known as the Computers Are Social Actors (CASA) effect. Following previous work, one approach to understand the mechanism responsible for CASA is to utilize computer agents and have the agents attempt to gain human compliance (e.g., completing a pattern recognition task). The current study focuses on three key factors frequently cited to influence traditional notions of compliance: evaluations toward the source (competence and warmth), normative influence (reciprocity), and affective influence (mood). Structural equation modeling assessed the effects of these factors on human compliance with computer request. The final model shows that norm-based influence (reciprocity) increased the likelihood of compliance, while evaluations toward the computer agent did not significantly influence compliance.
One-way quantum computing in superconducting circuits
NASA Astrophysics Data System (ADS)
Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.
2018-03-01
We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with above 20 qubits may be implemented in the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.
Tong, Wing-Chiu; Ghouri, Iffath; Taggart, Michael J
2014-01-01
The uterus and heart share the important physiological feature whereby contractile activation of the muscle tissue is regulated by the generation of periodic, spontaneous electrical action potentials (APs). Preterm birth arising from premature uterine contractions is a major complication of pregnancy, and there remains a need to pursue avenues of research that facilitate the use of drugs, tocolytics, to limit these inappropriate contractions without deleterious actions on cardiac electrical excitation. A novel approach is to make use of mathematical models of uterine and cardiac APs, which incorporate many ionic currents contributing to the AP forms, and test the cell-specific responses to interventions. We have used three such models, of uterine smooth muscle cells (USMC), cardiac sinoatrial node cells (SAN), and ventricular cells, to investigate the relative effects of reducing two important voltage-gated Ca currents: the L-type (ICaL) and T-type (ICaT) Ca currents. Reduction of ICaL (10%) alone, or ICaT (40%) alone, blunted USMC APs with little effect on ventricular APs and only mild effects on SAN activity. Larger reductions in either current further attenuated the USMC APs but also had greater effects on SAN APs. Encouragingly, a combination of ICaL and ICaT reduction did blunt USMC APs as intended with little detriment to APs of either cardiac cell type. Subsequent overlapping maps of ICaL and ICaT inhibition profiles from each model revealed a range of combined reductions of ICaL and ICaT over which an appreciable diminution of USMC APs could be achieved with no deleterious action on cardiac SAN or ventricular APs. This novel approach illustrates the potential for computational biology to inform us of possible uterine and cardiac cell-specific mechanisms. Incorporating such computational approaches in future studies directed at designing new, or repurposing existing, tocolytics will be beneficial for establishing a desired uterine specificity of action.
Alfredsson, Jayne; Plichart, Patrick; Zary, Nabil
2012-01-01
Research on computer-supported scoring of assessments in health care education has mainly focused on automated scoring. Little attention has been given to how informatics can support the currently predominant human-based grading approach. This paper reports steps taken to develop a model for a computer-supported scoring process that focuses on optimizing a task that was previously undertaken without computer support. The model was also implemented in the open source assessment platform TAO in order to study its benefits. The ability to score test takers anonymously, analytics on grader reliability, and a more time-efficient process are examples of observed benefits. Computer-supported scoring will increase the quality of the assessment results.
NASA Astrophysics Data System (ADS)
Perry, Alexander R.
2002-06-01
Low Frequency Eddy Current (EC) probes are capable of measurement from 5 MHz down to DC through the use of Magnetoresistive (MR) sensors. Choosing components with appropriate electrical specifications allows them to be matched to the power and impedance characteristics of standard computer connectors. This permits direct attachment of the probe to inexpensive computers, thereby eliminating external power supplies, amplifiers and modulators that have heretofore precluded very low system purchase prices. Such price reduction is key to increased market penetration in General Aviation maintenance and consequent reduction in recurring costs. This paper examines our computer software CANDETECT, which implements this approach and permits effective probe operation. Results are presented to show the intrinsic sensitivity of the software and demonstrate its practical performance when seeking cracks in the underside of a thick aluminum multilayer structure. The majority of the General Aviation light aircraft fleet uses rivets and screws to attach sheet aluminum skin to the airframe, resulting in similar multilayer lap joints.
Radiogenomics and radiotherapy response modeling
NASA Astrophysics Data System (ADS)
El Naqa, Issam; Kerns, Sarah L.; Coates, James; Luo, Yi; Speers, Corey; West, Catharine M. L.; Rosenstein, Barry S.; Ten Haken, Randall K.
2017-08-01
Advances in patient-specific information and biotechnology have contributed to a new era of computational medicine. Radiogenomics has emerged as a new field that investigates the role of genetics in treatment response to radiation therapy. Radiation oncology is currently attempting to embrace these recent advances and add to its rich history by maintaining its prominent role as a quantitative leader in oncologic response modeling. Here, we provide an overview of radiogenomics starting with genotyping, data aggregation, and application of different modeling approaches based on modifying traditional radiobiological methods or application of advanced machine learning techniques. We highlight the current status and potential for this new field to reshape the landscape of outcome modeling in radiotherapy and drive future advances in computational oncology.
Computational modeling of cardiac hemodynamics: Current status and future outlook
NASA Astrophysics Data System (ADS)
Mittal, Rajat; Seo, Jung Hee; Vedula, Vijay; Choi, Young J.; Liu, Hang; Huang, H. Howie; Jain, Saurabh; Younes, Laurent; Abraham, Theodore; George, Richard T.
2016-01-01
The proliferation of four-dimensional imaging technologies, increasing computational speeds, improved simulation algorithms, and the widespread availability of powerful computing platforms are enabling simulations of cardiac hemodynamics with unprecedented speed and fidelity. Since cardiovascular disease is intimately linked to cardiovascular hemodynamics, accurate assessment of the patient's hemodynamic state is critical for the diagnosis and treatment of heart disease. Unfortunately, while a variety of invasive and non-invasive approaches for measuring cardiac hemodynamics are in widespread use, they still only provide an incomplete picture of the hemodynamic state of a patient. In this context, computational modeling of cardiac hemodynamics presents as a powerful non-invasive modality that can fill this information gap, and significantly impact the diagnosis as well as the treatment of cardiac disease. This article reviews the current status of this field as well as the emerging trends and challenges in cardiovascular health, computing, modeling and simulation that are expected to play a key role in its future development. Some recent advances in modeling and simulations of cardiac flow are described by using examples from our own work as well as the research of other groups.
2015-12-04
from back-office big-data analytics to fieldable hot-spot systems providing storage-processing-communication services for off-grid sensors. Speed...and power efficiency are the key metrics. Current state-of-the-art approaches for big-data aim toward scaling out to many computers to meet...pursued within Lincoln Laboratory as well as external sponsors. Our vision is to bring new capabilities in big-data and internet-of-things applications
NASA Technical Reports Server (NTRS)
Barber, Bryan; Kahn, Laura; Wong, David
1990-01-01
Offshore operations such as oil drilling and radar monitoring require semisubmersible platforms to remain stationary at specific locations in the Gulf of Mexico. Ocean currents, wind, and waves in the Gulf of Mexico tend to move platforms away from their desired locations. A computer model was created to predict the station keeping requirements of a platform. The computer simulation uses remote sensing data from satellites and buoys as input. A background of the project, alternate approaches to the project, and the details of the simulation are presented.
Machine learning for epigenetics and future medical applications.
Holder, Lawrence B; Haque, M Muksitul; Skinner, Michael K
2017-07-03
Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML to develop a more efficient feature selection process and address the imbalance problem in all genomic data sets. The power of this novel ML approach and our ability to predict epigenetic phenomena and associated disease are suggested. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of molecular epigenetic data in advanced machine learning analysis to medicine is the focus of this review.
Yu, Jun; Shen, Zhengxiang; Sheng, Pengfeng; Wang, Xiaoqiang; Hailey, Charles J; Wang, Zhanshan
2018-03-01
The nested grazing incidence telescope can achieve a large collecting area in x-ray astronomy, with a large number of closely packed, thin conical mirrors. Exploiting the surface metrological data, the ray tracing method used to reconstruct the shell surface topography and evaluate the imaging performance is a powerful tool to assist iterative improvement in the fabrication process. However, current two-dimensional (2D) ray tracing codes, especially when utilized with densely sampled surface shape data, may not provide sufficient accuracy of reconstruction and are computationally cumbersome. In particular, the 2D ray tracing currently employed considers coplanar rays and thus simulates only rays along the meridional plane. This captures axial figure errors but leaves other important errors, such as roundness errors, unaccounted for. We introduce a semianalytic, three-dimensional (3D) ray tracing approach for x-ray optics that overcomes these shortcomings and is both computationally fast and accurate. We first introduce the principles and the computational details of this 3D ray tracing method. Computer simulations of this approach are then compared against 2D ray tracing, using an ideal conic Wolter-I telescope for benchmarking. Finally, the present 3D ray tracing is used to evaluate the performance of a prototype x-ray telescope fabricated for the enhanced x-ray timing and polarization mission.
Goudey, Benjamin; Abedini, Mani; Hopper, John L; Inouye, Michael; Makalic, Enes; Schmidt, Daniel F; Wagner, John; Zhou, Zeyu; Zobel, Justin; Reumann, Matthias
2015-01-01
Genome-wide association studies (GWAS) are a common approach for systematic discovery of single nucleotide polymorphisms (SNPs) which are associated with a given disease. Univariate analysis approaches commonly employed may miss important SNP associations that only appear through multivariate analysis in complex diseases. However, multivariate SNP analysis is currently limited by its inherent computational complexity. In this work, we present a computational framework that harnesses supercomputers. Based on our results, we estimate that a three-way interaction analysis of 1.1 million SNP GWAS data would require over 5.8 years on the full "Avoca" IBM Blue Gene/Q installation at the Victorian Life Sciences Computation Initiative. This is hundreds of times faster than estimates for other CPU-based methods and four times faster than runtimes estimated for GPU methods, indicating how the improvement in the level of hardware applied to interaction analysis may alter the types of analysis that can be performed. Furthermore, the same analysis would take under 3 months on the currently largest IBM Blue Gene/Q supercomputer "Sequoia" at the Lawrence Livermore National Laboratory, assuming linear scaling is maintained as our results suggest. Given that the implementation used in this study can be further optimised, this runtime means it is becoming feasible to carry out exhaustive analysis of higher order interaction studies on large modern GWAS.
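As a rough illustration of why such hardware matters, the sketch below runs a naive exhaustive pairwise-interaction scan with chi-square tests on random toy data; scaling the same loop to about a million SNPs, and to three-way combinations, is what drives the runtimes quoted above. All data, sizes, and labels here are made up.

    import numpy as np
    from itertools import combinations
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(0)
    genotypes = rng.integers(0, 3, size=(200, 50))   # toy data: 200 subjects, 50 SNPs coded 0/1/2
    status = rng.integers(0, 2, size=200)            # toy case/control labels

    results = []
    for i, j in combinations(range(genotypes.shape[1]), 2):
        joint = genotypes[:, i] * 3 + genotypes[:, j]        # 9 joint genotype classes
        table = np.zeros((9, 2))
        for cls, lab in zip(joint, status):
            table[cls, lab] += 1
        table = table[table.sum(axis=1) > 0]                 # drop unobserved classes
        chi2, p, _, _ = chi2_contingency(table)
        results.append((p, i, j))
    print(sorted(results)[:5])                               # strongest pairwise signals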
Mathematical and Computational Modeling in Complex Biological Systems.
Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang
2017-01-01
The biological process and molecular functions involved in the cancer progression remain difficult to understand for biologists and clinical doctors. Recent developments in high-throughput technologies urge the systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the developments of high-throughput technologies and systemic modeling of the biological process in cancer research. In this review, we firstly studied several typical mathematical modeling approaches of biological systems in different scales and deeply analyzed their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling were summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology.
Use of agents to implement an integrated computing environment
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that they be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
Computational attributes of the integral form of the equation of transfer
NASA Technical Reports Server (NTRS)
Frankel, J. I.
1991-01-01
Difficulties can arise in radiative and neutron transport calculations when a highly anisotropic scattering phase function is present. In the presence of anisotropy, currently used numerical solutions are based on the integro-differential form of the linearized Boltzmann transport equation. This paper departs from classical thought and presents an alternative numerical approach based on application of the integral form of the transport equation. Use of the integral formalism facilitates the following steps: a reduction in dimensionality of the system prior to discretization, the use of symbolic manipulation to augment the computational procedure, and the direct determination of key physical quantities which are derivable through the various Legendre moments of the intensity. The approach is developed in the context of radiative heat transfer in a plane-parallel geometry, and results are presented and compared with existing benchmark solutions. Encouraging results are presented to illustrate the potential of the integral formalism for computation. The integral formalism appears to possess several computational attributes which are well-suited to radiative and neutron transport calculations.
Partitioning a macroscopic system into independent subsystems
NASA Astrophysics Data System (ADS)
Delle Site, Luigi; Ciccotti, Giovanni; Hartmann, Carsten
2017-08-01
We discuss the problem of partitioning a macroscopic system into a collection of independent subsystems. The partitioning of a system into replica-like subsystems is nowadays a subject of major interest in several fields of theoretical and applied physics. The thermodynamic approach currently favoured by practitioners is based on a phenomenological definition of an interface energy associated with the partition, due to a lack of easily computable expressions for a microscopic (i.e. particle-based) interface energy. In this article, we outline a general approach to derive sharp and computable bounds for the interface free energy in terms of microscopic statistical quantities. We discuss potential applications in nanothermodynamics and outline possible future directions.
On propagation of energy flux in de Sitter spacetime
NASA Astrophysics Data System (ADS)
Hoque, Sk Jahanur; Virmani, Amitabh
2018-04-01
In this paper, we explore propagation of energy flux in the future Poincaré patch of de Sitter spacetime. We present two results. First, we compute the flux integral of energy using the symplectic current density of the covariant phase space approach on hypersurfaces of constant radial physical distance. Using this computation we show that in the tt-projection, the integrand in the energy flux expression on the cosmological horizon is same as that on the future null infinity. This suggests that propagation of energy flux in de Sitter spacetime is sharp. Second, we relate our energy flux expression in tt-projection to a previously obtained expression using the Isaacson stress-tensor approach.
High resolution flow field prediction for tail rotor aeroacoustics
NASA Technical Reports Server (NTRS)
Quackenbush, Todd R.; Bliss, Donald B.
1989-01-01
The prediction of tail rotor noise due to the impingement of the main rotor wake poses a significant challenge to current analysis methods in rotorcraft aeroacoustics. This paper describes the development of a new treatment of the tail rotor aerodynamic environment that permits highly accurate resolution of the incident flow field with modest computational effort relative to alternative models. The new approach incorporates an advanced full-span free wake model of the main rotor in a scheme which reconstructs high-resolution flow solutions from preliminary, computationally inexpensive simulations with coarse resolution. The heart of the approach is a novel method for using local velocity correction terms to capture the steep velocity gradients characteristic of the vortex-dominated incident flow. Sample calculations have been undertaken to examine the principal types of interactions between the tail rotor and the main rotor wake and to examine the performance of the new method. The results of these sample problems confirm the success of this approach in capturing the high-resolution flows necessary for analysis of rotor-wake/rotor interactions with dramatically reduced computational cost. Computations of radiated sound are also carried out that explore the role of various portions of the main rotor wake in generating tail rotor noise.
Memristor-Based Analog Computation and Neural Network Classification with a Dot Product Engine.
Hu, Miao; Graves, Catherine E; Li, Can; Li, Yunning; Ge, Ning; Montgomery, Eric; Davila, Noraica; Jiang, Hao; Williams, R Stanley; Yang, J Joshua; Xia, Qiangfei; Strachan, John Paul
2018-03-01
Using memristor crossbar arrays to accelerate computations is a promising approach to efficiently implement algorithms in deep neural networks. Early demonstrations, however, are limited to simulations or small-scale problems primarily due to materials and device challenges that limit the size of the memristor crossbar arrays that can be reliably programmed to stable and analog values, which is the focus of the current work. High-precision analog tuning and control of memristor cells across a 128 × 64 array is demonstrated, and the resulting vector matrix multiplication (VMM) computing precision is evaluated. Single-layer neural network inference is performed in these arrays, and the performance compared to a digital approach is assessed. The memristor computing system used here reaches a VMM accuracy equivalent to 6 bits, and an 89.9% recognition accuracy is achieved for the 10k MNIST handwritten digit test set. Forecasts show that with integrated (on chip) and scaled memristors, a computational efficiency greater than 100 trillion operations per second per Watt is possible.
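A back-of-the-envelope software analogue of this analog vector-matrix multiply is sketched below: weights are quantized to a limited number of conductance levels and the column outputs follow from Ohm's and Kirchhoff's laws. The array size and the 6-bit figure echo the abstract; the weight values, voltages, and quantization scheme are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.uniform(-1, 1, size=(128, 64))             # target weights (hypothetical)
    levels = 2 ** 6                                     # roughly 6-bit-equivalent analog precision
    W_quant = np.round((W + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1   # snap to discrete levels
    v = rng.uniform(0.0, 0.2, size=128)                 # read voltages applied to the rows
    i_out = W_quant.T @ v                               # column outputs = analog VMM result
    print(np.max(np.abs(i_out - W.T @ v)))              # error due to limited programming precision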
The quantum computer game: citizen science
NASA Astrophysics Data System (ADS)
Damgaard, Sidse; Mølmer, Klaus; Sherson, Jacob
2013-05-01
Progress in the field of quantum computation is hampered by daunting technical challenges. Here we present an alternative approach to solving these by enlisting the aid of computer players around the world. We have previously examined a quantum computation architecture involving ultracold atoms in optical lattices and strongly focused tweezers of light. In The Quantum Computer Game (see http://www.scienceathome.org/), we have encapsulated the time-dependent Schrödinger equation for the problem in a graphical user interface allowing for easy user input. Players can then search the parameter space with real-time graphical feedback in a game context with a global high-score that rewards short gate times and robustness to experimental errors. The game, which is still in a demo version, has so far been tried by several hundred players. Extensions of the approach to other models such as Gross-Pitaevskii and Bose-Hubbard are currently under development. The game has also been incorporated into science education at high-school and university level as an alternative method for teaching quantum mechanics. Initial quantitative evaluation results are very positive. AU Ideas Center for Community Driven Research, CODER.
An improved current potential method for fast computation of stellarator coil shapes
NASA Astrophysics Data System (ADS)
Landreman, Matt
2017-04-01
Several fast methods for computing stellarator coil shapes are compared, including the classical NESCOIL procedure (Merkel 1987 Nucl. Fusion 27 867), its generalization using truncated singular value decomposition, and a Tikhonov regularization approach we call REGCOIL in which the squared current density is included in the objective function. Considering W7-X and NCSX geometries, and for any desired level of regularization, we find the REGCOIL approach simultaneously achieves lower surface-averaged and maximum values of both current density (on the coil winding surface) and normal magnetic field (on the desired plasma surface). This approach therefore can simultaneously improve the free-boundary reconstruction of the target plasma shape while substantially increasing the minimum distances between coils, preventing collisions between coils while improving access for ports and maintenance. The REGCOIL method also allows finer control over the level of regularization, it preserves convexity to ensure the local optimum found is the global optimum, and it eliminates two pathologies of NESCOIL: the resulting coil shapes become independent of the arbitrary choice of angles used to parameterize the coil surface, and the resulting coil shapes converge rather than diverge as Fourier resolution is increased. We therefore contend that REGCOIL should be used instead of NESCOIL for applications in which a fast and robust method for coil calculation is needed, such as when targeting coil complexity in fixed-boundary plasma optimization, or for scoping new stellarator geometries.
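The regularization idea can be pictured with a few lines of Python: a Tikhonov-regularized least-squares solve in which one matrix maps the current-potential parameters to the normal field on the plasma surface and a second term penalizes current density. This is only a schematic stand-in, not the REGCOIL code; all matrix names are hypothetical.

    import numpy as np

    def regularized_current_potential(A, b, K, lam):
        """Solve min_x ||A x - b||^2 + lam * ||K x||^2 via the normal equations."""
        # A: normal-field response matrix, b: target normal field, K: current-density operator
        lhs = A.T @ A + lam * (K.T @ K)
        rhs = A.T @ b
        return np.linalg.solve(lhs, rhs)

    # Scanning lam trades off normal-field error against current-density complexity,
    # which is the finer control over regularization described above.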
Mobile Applications for Participatory Science
ERIC Educational Resources Information Center
Drill, Sabrina L.
2013-01-01
Citizen science, participatory research, and volunteer monitoring all describe research where data are collected by non-professional collaborators. These approaches can allow for research to be conducted at spatial and temporal scales unfeasible for professionals, especially in current budget climates. Mobile computing apps for data collection,…
Current testing is limited by traditional testing models and regulatory systems. An overview is given of high throughput screening approaches to provide broader chemical and biological coverage, toxicokinetics and molecular pathway data and tools to facilitate utilization for reg...
The Required Business Computing Course: Peer-Group Learning with a Managerial Emphasis
ERIC Educational Resources Information Center
Schonberger, Richard J.; Franz, Lori
1978-01-01
This paper describes course design aimed at catching up with current business practice. Discussion concerns differences between old and new pedagogies, as well as obstacles in the way of adopting the new approach, particularly in the introductory course. (Author/IRT)
Three-Dimensional Printed Prosthesis for Repair of Superior Canal Dehiscence.
Kozin, Elliott D; Remenschneider, Aaron K; Cheng, Song; Nakajima, Hideko Heidi; Lee, Daniel J
2015-10-01
Outcomes following repair of superior canal dehiscence (SCD) are variable, and surgery carries a risk of persistent or recurrent SCD symptoms, as well as a risk of hearing loss and vestibulopathy. Poor outcomes may occur from inadequate repair of the SCD or mechanical insult to the membranous labyrinth. Repair of SCD using a customized, fixed-length prosthesis may address current operative limitations and improve surgical outcomes. We aim to 3-dimensionally print customized prostheses to resurface or occlude bony SCD defects. Dehiscences were created along the arcuate eminence of superior semicircular canals in cadaveric temporal bones. Prostheses were designed and created using computed tomography and a 3-dimensional printer. The prostheses occupied the superior semicircular canal defect, reflected in postrepair computed tomography scans. This novel approach to SCD repair could have advantages over current techniques. Refinement of prosthesis design and materials will be important if this approach is translated into clinical use.
Advanced Multigrid Solvers for Fluid Dynamics
NASA Technical Reports Server (NTRS)
Brandt, Achi
1999-01-01
The main objective of this project has been to support the development of multigrid techniques in computational fluid dynamics that can achieve "textbook multigrid efficiency" (TME), which is several orders of magnitude faster than current industrial CFD solvers. Toward that goal we have assembled a detailed table which lists every foreseen kind of computational difficulty for achieving it, together with the possible ways for resolving the difficulty, their current state of development, and references. We have developed several codes to test and demonstrate, in the framework of simple model problems, several approaches for overcoming the most important of the listed difficulties that had not been resolved before. In particular, TME has been demonstrated for incompressible flows on one hand, and for near-sonic flows on the other hand. General approaches were advanced for the relaxation of stagnation points and boundary conditions under various situations. Also, new algebraic multigrid techniques were formed for treating unstructured grid formulations. More details on all these are given below.
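For readers unfamiliar with the method, a toy one-dimensional V-cycle is sketched below; it only illustrates the smooth-restrict-correct-prolongate structure behind textbook multigrid efficiency and has nothing to do with the project's actual CFD solvers. It assumes a grid of 2^k + 1 points and simple Dirichlet end conditions.

    import numpy as np

    def v_cycle(u, f, h, nu=3):
        """One V-cycle for u'' = f on a 1D grid with fixed end values (toy example)."""
        n = len(f)
        if n <= 3:
            nu = 50                                   # coarsest grid: just relax a lot
        for _ in range(nu):                           # pre-smoothing (weighted Jacobi)
            u[1:-1] += 0.6 * (0.5 * (u[:-2] + u[2:] - h * h * f[1:-1]) - u[1:-1])
        if n > 3:
            r = np.zeros_like(f)
            r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)    # residual
            ec = v_cycle(np.zeros(len(r[::2])), r[::2].copy(), 2 * h)       # coarse-grid correction
            u += np.interp(np.arange(n), np.arange(0, n, 2), ec)            # prolongate and correct
            for _ in range(nu):                       # post-smoothing
                u[1:-1] += 0.6 * (0.5 * (u[:-2] + u[2:] - h * h * f[1:-1]) - u[1:-1])
        return u

    # Example: solve u'' = -1 on [0, 1] with zero boundary values by repeated V-cycles
    n = 65
    x = np.linspace(0.0, 1.0, n)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, -np.ones(n), x[1] - x[0])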
Evaluation of bending modulus of lipid bilayers using undulation and orientation analysis
NASA Astrophysics Data System (ADS)
Chaurasia, Adarsh K.; Rukangu, Andrew M.; Philen, Michael K.; Seidel, Gary D.; Freeman, Eric C.
2018-03-01
In the current paper, phospholipid bilayers are modeled using coarse-grained molecular dynamics simulations with the MARTINI force field. The extracted molecular trajectories are analyzed using Fourier analysis of the undulations and orientation vectors to establish the differences between the two approaches for evaluating the bending modulus. The current work evaluates and extends the implementation of the Fourier analysis for molecular trajectories using a weighted horizon-based averaging approach. The effect of numerical parameters in the analysis of these trajectories is explored by conducting parametric studies. Computational modeling results are validated against experimentally characterized bending modulus of lipid membranes using a shape fluctuation analysis. The computational framework is then used to estimate the bending moduli for different types of lipids (phosphocholine, phosphoethanolamine, and phosphoglycerol). This work provides greater insight into the numerical aspects of evaluating the bilayer bending modulus, provides validation for the orientation analysis technique, and explores differences in bending moduli based on differences in the lipid nanostructures.
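The undulation route to the bending modulus can be summarized in a few lines: Fourier transform the membrane height field and fit the long-wavelength modes to the Helfrich prediction that the height spectrum scales as 1/(kappa*q^4). The sketch below is generic and hedged: the height array, wavenumber cutoff, and Fourier normalization convention are assumptions and differ in detail from the weighted horizon-based averaging used in the paper.

    import numpy as np

    def bending_modulus_from_heights(h, box_length, kT=2.494):   # kT in kJ/mol near 300 K
        """Estimate kappa from <|h_q|^2> ~ kT / (A * kappa * q^4) (one normalization convention)."""
        n = h.shape[0]
        area = box_length ** 2
        hq = np.fft.fft2(h) * (box_length / n) ** 2              # approximate continuum transform
        spectrum = np.abs(hq) ** 2 / area
        q = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)
        qx, qy = np.meshgrid(q, q, indexing="ij")
        qmag = np.sqrt(qx ** 2 + qy ** 2)
        keep = (qmag > 0) & (qmag < 1.0)                         # long-wavelength (undulation) modes only
        return np.mean(kT / (area * spectrum[keep] * qmag[keep] ** 4))

    # In practice h would be the head-group height field extracted frame by frame from the
    # coarse-grained trajectory, and the estimate would be averaged over frames.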
Bioengineering solutions for neural repair and recovery in stroke.
Modo, Michel; Ambrosio, Fabrisia; Friedlander, Robert M; Badylak, Stephen F; Wechsler, Lawrence R
2013-12-01
This review discusses emerging bioengineering opportunities for the treatment of stroke and their potential to build on current rehabilitation protocols. Bioengineering is a vast field that ranges from biomaterials to brain-computer interfaces. Biomaterials find application in the delivery of pharmacotherapies, as well as the emerging field of tissue engineering. For the treatment of stroke, these approaches have to be seen in the context of physical therapy in order to maximize functional outcomes. There is also an emergence of rehabilitation that engages engineering solutions, such as robot-assisted training, as well as brain-computer interfaces that can potentially assist in the case of paralysis. Stroke remains the main cause of adult disability with rehabilitation therapy being the focus for chronic impairments. Bioengineering is offering new opportunities to both support and synergize with currently available treatment options, and also promises to potentially dramatically improve available approaches. See the Video Supplementary Digital Content 1 (http://links.lww.com/CONR/A21).
New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation
NASA Astrophysics Data System (ADS)
Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.
2015-02-01
In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.
Extending Clause Learning of SAT Solvers with Boolean Gröbner Bases
NASA Astrophysics Data System (ADS)
Zengler, Christoph; Küchlin, Wolfgang
We extend clause learning as performed by most modern SAT Solvers by integrating the computation of Boolean Gröbner bases into the conflict learning process. Instead of learning only one clause per conflict, we compute and learn additional binary clauses from a Gröbner basis of the current conflict. We used the Gröbner basis engine of the logic package Redlog contained in the computer algebra system Reduce to extend the SAT solver MiniSAT with Gröbner basis learning. Our approach shows a significant reduction of conflicts and a reduction of restarts and computation time on many hard problems from the SAT 2009 competition.
Modeling thrombin generation: plasma composition based approach.
Brummel-Ziedins, Kathleen E; Everse, Stephen J; Mann, Kenneth G; Orfeo, Thomas
2014-01-01
Thrombin has multiple functions in blood coagulation and its regulation is central to maintaining the balance between hemorrhage and thrombosis. Empirical and computational methods that capture thrombin generation can provide advancements to current clinical screening of the hemostatic balance at the level of the individual. In any individual, procoagulant and anticoagulant factor levels together act to generate a unique coagulation phenotype (net balance) that is reflective of the sum of its developmental, environmental, genetic, nutritional and pharmacological influences. Defining such thrombin phenotypes may provide a means to track disease progression pre-crisis. In this review we briefly describe thrombin function, methods for assessing thrombin dynamics as a phenotypic marker, computationally derived thrombin phenotypes versus determined clinical phenotypes, the boundaries of normal range thrombin generation using plasma composition based approaches and the feasibility of these approaches for predicting risk.
Computations of Combustion-Powered Actuation for Dynamic Stall Suppression
NASA Technical Reports Server (NTRS)
Jee, Solkeun; Bowles, Patrick O.; Matalanis, Claude G.; Min, Byung-Young; Wake, Brian E.; Crittenden, Tom; Glezer, Ari
2016-01-01
A computational framework for the simulation of dynamic stall suppression with combustion-powered actuation (COMPACT) is validated against wind tunnel experimental results on a VR-12 airfoil. COMPACT slots are located at 10% chord from the leading edge of the airfoil and directed tangentially along the suction-side surface. Helicopter rotor-relevant flow conditions are used in the study. A computationally efficient two-dimensional approach, based on unsteady Reynolds-averaged Navier-Stokes (RANS), is compared in detail against the baseline and the modified airfoil with COMPACT, using aerodynamic forces, pressure profiles, and flow-field data. The two-dimensional RANS approach predicts baseline static and dynamic stall very well. Most of the differences between the computational and experimental results are within two standard deviations of the experimental data. The current framework demonstrates an ability to predict COMPACT efficacy across the experimental dataset. Enhanced aerodynamic lift on the downstroke of the pitching cycle due to COMPACT is well predicted, and the cycle-averaged lift enhancement computed is within 3% of the test data. Differences with experimental data are discussed with a focus on three-dimensional features not included in the simulations and the limited computational model for COMPACT.
NASA Technical Reports Server (NTRS)
Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.
2016-01-01
An immersed boundary method for the compressible Navier-Stokes equation and the additional infrastructure that is needed to solve moving boundary problems and fully coupled fluid-structure interaction is described. All the methods described in this paper were implemented in NASA's LAVA solver framework. The underlying immersed boundary method is based on the locally stabilized immersed boundary method that was previously introduced by the authors. In the present paper this method is extended to account for all aspects that are involved for fluid structure interaction simulations, such as fast geometry queries and stencil computations, the treatment of freshly cleared cells, and the coupling of the computational fluid dynamics solver with a linear structural finite element method. The current approach is validated for moving boundary problems with prescribed body motion and fully coupled fluid structure interaction problems in 2D and 3D. As part of the validation procedure, results from the second AIAA aeroelastic prediction workshop are also presented. The current paper is regarded as a proof of concept study, while more advanced methods for fluid structure interaction are currently being investigated, such as geometric and material nonlinearities, and advanced coupling approaches.
Development and verification of local/global analysis techniques for laminated composites
NASA Technical Reports Server (NTRS)
Griffin, O. Hayden, Jr.
1989-01-01
Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years, and are currently well developed. In performing the detailed three-dimensional analyses which are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, models which are too large to run, either from a resource or time standpoint, are often required. There are several approaches which can permit such analyses, including substructuring, use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique to global/local analysis, where a global analysis is run, with the results of that analysis applied to a smaller region as boundary conditions, in as many iterations as is required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement formulation elements which have well known behavior when used for analysis of laminated composites.
A Quasi-Dynamic Approach to modelling Hydrodynamic Focusing
NASA Astrophysics Data System (ADS)
Kommajosula, Aditya; Xu, Songzhe; Wu, Chueh-Yu; di Carlo, Dino; Ganapathysubramanian, Baskar; ComPM Lab Team; Di Carlo Lab Collaboration
2016-11-01
We examine a particle's tendency at different spatial locations to shift/rotate towards the equilibrium location, by constrained simulation. Although studies in the past have used this procedure in conjunction with FSI methods to great effect, the current work in 2D explores an alternative approach by utilizing a modified trust-region-based root-finding algorithm to solve for particle position and velocities at equilibrium, using "snapshots" of finite-element solutions to the steady-state Navier-Stokes equations iteratively over a computational domain attached to the particle reference frame. Through an assortment of test cases comprising circular and non-circular particle geometries, an incorporation of stability theory as applicable to dynamical systems is demonstrated, to locate the final focusing location and velocities. The results are compared with previous experimental/numerical reports, and found to be in close agreement. A thousand-fold increase is observed in computational time for the current workflow from its transient counterpart, for an illustrative case. The current framework is formulated in 2D for 3 Degrees-of-Freedom, and will be extended to 3D. This framework potentially allows for quick, high-throughput parametric space studies of equilibrium scaling laws.
Computational Physics? Some perspectives and responses of the undergraduate physics community
NASA Astrophysics Data System (ADS)
Chonacky, Norman
2011-03-01
Any of the many answers possible to the evocative question ``What is ...'' will likely be heavily shaded by the experience of the respondent. This is partly due to the absence of a canon of practice in this still immature, hence dynamic and exciting, method of physics. The diversity of responses is even more apparent in the area of physics education, and more disruptive because an undergraduate educational canon uniformly accepted across institutions for decades already exists. I will present evidence of this educational community's lagging response to the challenge of the current dynamic and diverse practice of computational physics in research. I will also summarize current measures that attempt to respond to this lag, discuss a research-based approach for moving beyond these early measures, and suggest how DCOMP might help. I hope this will generate criticisms and concurrences from the floor. Research support for material in this talk was from: IEEE-Computer Society; Shodor Foundation; Teragrid Project.
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. Georgia Tech has proposed the development of an Integrated Design Engineering Simulator that will merge Integrated Product and Process Development with interdisciplinary analysis techniques and state-of-the-art computational technologies. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. The current status of development is given and future directions are outlined.
Computational studies of physical properties of Nb-Si based alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, Lizhi
2015-04-16
The overall goal is to provide physical properties data supplementing experiments for thermodynamic modeling and other simulations such as phase field simulation for microstructure and continuum simulations for mechanical properties. These predictive computational modeling and simulations may yield insights that can be used to guide materials design, processing, and manufacture. Ultimately, they may lead to usable Nb-Si based alloys, which could play an important role in the current push towards greener energy. The main objectives of the proposed projects are: (1) developing a first-principles-based supercell approach for calculating thermodynamic and mechanical properties of ordered crystals and disordered lattices, including solid solution; (2) application of the supercell approach to Nb-Si base alloys to compute physical properties data that can be used for thermodynamic modeling and other simulations to guide the optimal design of Nb-Si based alloys.
Robbins, L G
2000-01-01
Graduate school programs in genetics have become so full that courses in statistics have often been eliminated. In addition, typical introductory statistics courses for the "statistics user" rather than the nascent statistician are laden with methods for analysis of measured variables while genetic data are most often discrete numbers. These courses are often seen by students and genetics professors alike as largely irrelevant cookbook courses. The powerful methods of likelihood analysis, although commonly employed in human genetics, are much less often used in other areas of genetics, even though current computational tools make this approach readily accessible. This article introduces the MLIKELY.PAS computer program and the logic of do-it-yourself maximum-likelihood statistics. The program itself, course materials, and expanded discussions of some examples that are only summarized here are available at http://www.unisi.it/ricerca/dip/bio_evol/sitomlikely/mlikely.html. PMID:10628965
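For readers who want a concrete taste of the do-it-yourself likelihood approach (this is generic Python, not MLIKELY.PAS), the sketch below estimates a recombination fraction from made-up backcross counts by maximizing a multinomial log-likelihood.

    import numpy as np
    from scipy.optimize import minimize_scalar

    counts = np.array([450, 55, 48, 447])   # toy counts: parental, recombinant, recombinant, parental

    def neg_log_lik(theta):
        p = np.array([(1 - theta) / 2, theta / 2, theta / 2, (1 - theta) / 2])
        return -np.sum(counts * np.log(p))

    fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 0.5), method="bounded")
    print(fit.x)   # maximum-likelihood estimate of the recombination fraction
    # 2 * (neg_log_lik(0.5) - neg_log_lik(fit.x)) gives a likelihood-ratio statistic
    # for testing linkage against free recombination (theta = 0.5).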
Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.
Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron
2017-10-21
The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
Using machine learning to accelerate sampling-based inversion
NASA Astrophysics Data System (ADS)
Valentine, A. P.; Sambridge, M.
2017-12-01
In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally-cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and bridges the gap between prior- and posterior-sampling frameworks.
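A minimal sketch of the surrogate idea is given below, using scikit-learn's Gaussian Process regressor to stand in for the machine-learning approximation: the expensive forward operator is only called when the surrogate's predictive uncertainty is too large, and each such call refines the surrogate. The forward function, tolerance, and dimensions are illustrative assumptions, not the method presented.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_forward(m):                 # stand-in for a full synthetic-seismogram computation
        return float(np.sum(np.sin(m)))

    class SurrogateForward:
        """Use the GP prediction when it is confident; otherwise run (and learn from) the true solver."""
        def __init__(self, models, tol=0.05):
            self.X = np.atleast_2d(models)
            self.y = np.array([expensive_forward(m) for m in self.X])
            self.tol = tol
            self.gp = GaussianProcessRegressor().fit(self.X, self.y)

        def __call__(self, m):
            mean, std = self.gp.predict(np.atleast_2d(m), return_std=True)
            if std[0] < self.tol:
                return mean[0]                # cheap approximation is trusted
            y = expensive_forward(m)          # fall back to the accurate solution
            self.X = np.vstack([self.X, m])
            self.y = np.append(self.y, y)
            self.gp = GaussianProcessRegressor().fit(self.X, self.y)   # refine as inversion proceeds
            return y

    rng = np.random.default_rng(0)
    forward = SurrogateForward(rng.uniform(-1, 1, size=(20, 3)))
    print(forward(rng.uniform(-1, 1, size=3)))   # called inside the sampler in place of the true solver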
Computer Generated View of Earth as seen from the Asteroid Toutatis
1996-11-27
This computer generated image depicts a view of Earth as seen from the surface of the asteroid Toutatis on Nov 29th 1996. A 2.5 degree field-of-view synthetic computer camera was used for this simulation. Toutatis is visible on this date as a twelfth magnitude object in the night sky in the constellation of Virgo and could be viewed with a medium sized telescope. Toutatis currently approaches Earth once every four years and, on Nov. 29th, 1996 will be 5.2 million kilometers away (approx. 3.3 million miles). In approximately 8 years, on Sept. 29th, 2004, it will be less than 1.6 million kilometers from Earth. This is only 4 times the distance to the moon, and is the closest approach predicted for any known asteroid or comet during the next 60 years. http://photojournal.jpl.nasa.gov/catalog/PIA00515
Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola
2016-01-01
Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
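For orientation, the snippet below shows the kind of task dependency that workflow managers in the Luigi family express; it uses plain Luigi rather than SciLuigi's own port-based API, which is not reproduced here. The task names and file targets are invented.

    import luigi

    class TrainModel(luigi.Task):
        def output(self):
            return luigi.LocalTarget("model.txt")           # hypothetical artifact
        def run(self):
            with self.output().open("w") as f:
                f.write("trained-model-stub")

    class EvaluateModel(luigi.Task):
        def requires(self):
            return TrainModel()                             # implicit dependency wiring
        def output(self):
            return luigi.LocalTarget("metrics.txt")
        def run(self):
            with self.input().open() as fin, self.output().open("w") as fout:
                fout.write("evaluated: " + fin.read())

    # luigi.build([EvaluateModel()], local_scheduler=True)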
Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.
Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona
2016-05-31
Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are to be presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms.
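A deliberately simplified, non-robust weighted-centroid baseline is sketched below to fix ideas: heard AP heights are averaged with RSS-derived weights and the result is snapped to a floor index. The AP coordinates, weighting exponent, and floor height are assumptions; the paper's robustified estimators differ in how the weights and residuals are handled.

    import numpy as np

    def weighted_centroid_height(ap_heights, rss_dbm, alpha=2.0):
        """Estimate the MS height as an RSS-weighted centroid of heard AP heights (toy version)."""
        ap_heights = np.asarray(ap_heights, float)
        weights = (10.0 ** (np.asarray(rss_dbm, float) / 10.0)) ** alpha   # emphasized linear power
        return np.sum(weights * ap_heights) / np.sum(weights)

    def detect_floor(height_estimate, floor_height=3.0):
        return int(round(height_estimate / floor_height))

    # Example: APs on floors at 3 m and 6 m, with stronger signals near the 6 m floor
    print(detect_floor(weighted_centroid_height([3.0, 6.0, 6.0], [-70, -55, -60])))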
Assessing Metacognitive Knowledge in Web-Based Call: A Neural Network Approach
ERIC Educational Resources Information Center
Yeh, Siou-Wen; Lo, Jia-Jiunn
2005-01-01
The assessment of learners' metacognitive knowledge level is crucial when developing computer-assisted language learning systems. Currently, many systems assess learners' metacognitive knowledge level with pre-instructional questionnaires or metacognitive interviews. However, learners with limited language proficiency may be at a disadvantage in…
The Bayesian Revolution Approaches Psychological Development
ERIC Educational Resources Information Center
Shultz, Thomas R.
2007-01-01
This commentary reviews five articles that apply Bayesian ideas to psychological development, some with psychology experiments, some with computational modeling, and some with both experiments and modeling. The reviewed work extends the current Bayesian revolution into tasks often studied in children, such as causal learning and word learning, and…
Essential Use Cases for Pedagogical Patterns
ERIC Educational Resources Information Center
Derntl, Michael; Botturi, Luca
2006-01-01
Coming from architecture, through computer science, pattern-based design spread into other disciplines and is nowadays recognized as a powerful way of capturing and reusing effective design practice. However, current pedagogical pattern approaches lack widespread adoption, both by users and authors, and are still limited to individual initiatives.…
The Current Status of Research and Theory in Human Problem Solving.
ERIC Educational Resources Information Center
Davis, Gary A.
Problem-solving theories in three areas - traditional (stimulus-response) learning, cognitive-Gestalt approaches, and computer and mathematical models - were summarized. Recent empirical studies (1960-65) on problem solving were categorized according to type of behavior elicited by particular problem-solving tasks. Anagram,…
Applications of computer-aided approaches in the development of hepatitis C antiviral agents.
Ganesan, Aravindhan; Barakat, Khaled
2017-04-01
Hepatitis C virus (HCV) is a global health problem that causes several chronic life-threatening liver diseases. The numbers of people affected by HCV are rising annually. Since 2011, the FDA has approved several anti-HCV drugs; while many other promising HCV drugs are currently in late clinical trials. Areas covered: This review discusses the applications of different computational approaches in HCV drug design. Expert opinion: Molecular docking and virtual screening approaches have emerged as a low-cost tool to screen large databases and identify potential small-molecule hits against HCV targets. Ligand-based approaches are useful for filtering-out compounds with rich physicochemical properties to inhibit HCV targets. Molecular dynamics (MD) remains a useful tool in optimizing the ligand-protein complexes and understand the ligand binding modes and drug resistance mechanisms in HCV. Despite their varied roles, the application of in-silico approaches in HCV drug design is still in its infancy. A more mature application should aim at modelling the whole HCV replicon in its active form and help to identify new effective druggable sites within the replicon system. With more technological advancements, the roles of computer-aided methods are only going to increase several folds in the development of next-generation HCV drugs.
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.; Ladbury, Ray; Oldhamm, Timothy
2010-01-01
As NASA has evolved its usage of spaceflight computing, memory applications have followed as well. In this slide presentation, the history of NASA's memories from magnetic core and tape recorders to current semiconductor approaches is discussed. There is a brief description of current functional memory usage in NASA space systems followed by a description of potential radiation-induced failure modes along with considerations for reliable system design.
Limited-memory trust-region methods for sparse relaxation
NASA Astrophysics Data System (ADS)
Adhikari, Lasith; DeGuchy, Omar; Erway, Jennifer B.; Lockhart, Shelby; Marcia, Roummel F.
2017-08-01
In this paper, we solve the l2-l1 sparse recovery problem by transforming the objective function of this problem into an unconstrained differentiable function and applying a limited-memory trust-region method. Unlike gradient projection-type methods, which use only the current gradient, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that our proposed approach eliminates spurious solutions more effectively while improving computational time.
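As a point of reference (and not the limited-memory trust-region method of the paper), the few lines below implement a plain proximal-gradient (ISTA) iteration for the same l2-l1 objective; comparing recovered supports against such a baseline is one way to see the spurious solutions mentioned above. The problem data are placeholders.

    import numpy as np

    def ista(A, b, lam, steps=500):
        """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient (soft-thresholding)."""
        x = np.zeros(A.shape[1])
        L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part
        for _ in range(steps):
            z = x - A.T @ (A @ x - b) / L             # gradient step on the l2 term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # l1 proximal step
        return x

    # Tiny example: recover a 2-sparse signal from a random 30 x 60 system
    rng = np.random.default_rng(0)
    A = rng.normal(size=(30, 60))
    x_true = np.zeros(60); x_true[[5, 17]] = [1.0, -2.0]
    print(np.nonzero(np.round(ista(A, A @ x_true, lam=0.1), 2))[0])   # indices of larger recovered coefficients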
NASA Astrophysics Data System (ADS)
Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh
2015-07-01
This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for current class of multicore architecture. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism however, when it comes to 3D data migration, as the data size increases the resource requirement of the algorithm also increases. This challenges its practical implementation even on current generation high performance computing systems. Therefore a smart parallelization approach is essential to handle 3D data for migration. The most compute intensive part of Kirchhoff depth migration algorithm is the calculation of traveltime tables due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for post and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of data to be migrated during runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over the conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM series of supercomputers. Optimization, performance and scalability experiment results along with the migration outcome show the effectiveness of the parallel algorithm.
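The kernel being parallelized can be pictured as a diffraction-stack sum over traces and image points; the serial sketch below is schematic (one shot, nearest-sample interpolation, placeholder traveltime tables) and omits the MPI+OpenMP decomposition and the flexi-depth blocking described above.

    import numpy as np

    def kirchhoff_migrate(data, t_src, t_rcv, dt):
        """data[trace, sample]; t_src/t_rcv[trace, z, x] are one-way traveltime tables in seconds."""
        ntrace, nsamp = data.shape
        image = np.zeros(t_src.shape[1:])
        for itr in range(ntrace):                    # in the parallel code, traces are split across ranks
            t_total = t_src[itr] + t_rcv[itr]        # two-way time: source -> image point -> receiver
            isamp = np.clip((t_total / dt).astype(int), 0, nsamp - 1)
            image += data[itr, isamp]                # diffraction-stack (Kirchhoff) summation
        return image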
A compendium of computational fluid dynamics at the Langley Research Center
NASA Technical Reports Server (NTRS)
1980-01-01
Through numerous summary examples, the scope and general nature of the computational fluid dynamics (CFD) effort at Langley is identified. These summaries will help inform researchers in CFD and line management at Langley of the overall effort. In addition to the inhouse efforts, out of house CFD work supported by Langley through industrial contracts and university grants are included. Researchers were encouraged to include summaries of work in preliminary and tentative states of development as well as current research approaching definitive results.
2017-11-13
behavior. The International Journal of Human-Computer Studies, 108, 105-121. https://doi.org/10.1016/j.ijhcs.2017.06.006 A second journal article...documenting the erroneous behavior generation approach and the case study analyses is currently being written. Planned submission is Spring 2017. RPPR...Belvoir, 2010. [3] A task-based taxonomy of erroneous human behavior. International Journal of Human-Computer Studies, 108:105–121, 2017. [4] M. L
Bryce, Richard A
2011-04-01
The ability to accurately predict the interaction of a ligand with its receptor is a key limitation in computer-aided drug design approaches such as virtual screening and de novo design. In this article, we examine current strategies for a physics-based approach to scoring of protein-ligand affinity, as well as outlining recent developments in force fields and quantum chemical techniques. We also consider advances in the development and application of simulation-based free energy methods to study protein-ligand interactions. Fuelled by recent advances in computational algorithms and hardware, there is the opportunity for increased integration of physics-based scoring approaches at earlier stages in computationally guided drug discovery. Specifically, we envisage increased use of implicit solvent models and simulation-based scoring methods as tools for computing the affinities of large virtual ligand libraries. Approaches based on end point simulations and reference potentials allow the application of more advanced potential energy functions to prediction of protein-ligand binding affinities. Comprehensive evaluation of polarizable force fields and quantum mechanical (QM)/molecular mechanical and QM methods in scoring of protein-ligand interactions is required, particularly in their ability to address challenging targets such as metalloproteins and other proteins that make highly polar interactions. Finally, we anticipate increasingly quantitative free energy perturbation and thermodynamic integration methods that are practical for optimization of hits obtained from screened ligand libraries.
NASA Astrophysics Data System (ADS)
Broccard, Frédéric D.; Joshi, Siddharth; Wang, Jun; Cauwenberghs, Gert
2017-08-01
Objective. Computation in nervous systems operates with different computational primitives, and on different hardware, than traditional digital computation and is thus subjected to different constraints from its digital counterpart regarding the use of physical resources such as time, space and energy. In an effort to better understand neural computation on a physical medium with similar spatiotemporal and energetic constraints, the field of neuromorphic engineering aims to design and implement electronic systems that emulate in very large-scale integration (VLSI) hardware the organization and functions of neural systems at multiple levels of biological organization, from individual neurons up to large circuits and networks. Mixed analog/digital neuromorphic VLSI systems are compact, consume little power and operate in real time independently of the size and complexity of the model. Approach. This article highlights the current efforts to interface neuromorphic systems with neural systems at multiple levels of biological organization, from the synaptic to the system level, and discusses the prospects for future biohybrid systems with neuromorphic circuits of greater complexity. Main results. Single silicon neurons have been interfaced successfully with invertebrate and vertebrate neural networks. This approach allowed the investigation of neural properties that are inaccessible with traditional techniques while providing a realistic biological context not achievable with traditional numerical modeling methods. At the network level, populations of neurons are envisioned to communicate bidirectionally with neuromorphic processors of hundreds or thousands of silicon neurons. Recent work on brain-machine interfaces suggests that this is feasible with current neuromorphic technology. Significance. Biohybrid interfaces between biological neurons and VLSI neuromorphic systems of varying complexity have started to emerge in the literature. Primarily intended as a computational tool for investigating fundamental questions related to neural dynamics, the sophistication of current neuromorphic systems now allows direct interfaces with large neuronal networks and circuits, resulting in potentially interesting clinical applications for neuroengineering systems, neuroprosthetics and neurorehabilitation.
An "Intelligent" Optical Design Program
NASA Astrophysics Data System (ADS)
Bohachevsky, I. O.; Viswanathan, V. K.; Woodfin, G.
1984-06-01
Described is a general approach to the development of computer programs capable of designing image-forming optical systems without human intervention and of improving their performance with repeated attempts. The approach utilizes two ideas: 1) interpretation of technical design as a mapping in the configuration space of technical characteristics and 2) development of an "intelligent" routine that recognizes global optima. Examples of lens systems designed and used in the development of the general approach are presented, the current status of the project is summarized, and plans for future efforts are indicated.
Use of cloud computing in biomedicine.
Sobeslav, Vladimir; Maresova, Petra; Krejcar, Ondrej; Franca, Tanos C C; Kuca, Kamil
2016-12-01
Nowadays, biomedicine is characterised by a growing need for processing of large amounts of data in real time. This leads to new requirements for information and communication technologies (ICT). Cloud computing offers a solution to these requirements and provides many advantages, such as cost savings, elasticity and scalability of using ICT. The aim of this paper is to explore the concept of cloud computing and the related use of this concept in the area of biomedicine. The authors offer a comprehensive analysis of the implementation of the cloud computing approach in biomedical research, decomposed into infrastructure, platform and service layers, and a recommendation for processing large amounts of data in biomedicine. Firstly, the paper describes the appropriate forms and technological solutions of cloud computing. Secondly, aspects of cloud computing as a high-end computing paradigm are analysed. Finally, the potential and current use of this technology's applications in biomedical scientific research is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Han-Wei; Rode, Johann C.; Choudhary, Prateek
2014-01-21
The DC current gain in In0.53Ga0.47As/InP double-heterojunction bipolar transistors is computed based on a drift-diffusion model, and is compared with experimental data. Even in the absence of other scaling effects, lateral diffusion of electrons to the base Ohmic contacts causes a rapid reduction in DC current gain as the emitter junction width and emitter-base contact spacing are reduced. The simulation and experimental data are compared in order to examine the effect of carrier lateral diffusion on current gain. The impact on current gain due to device scaling and approaches to increase current gain are discussed.
Meher, Prabina K.; Sahu, Tanmaya K.; Gahoi, Shachi; Rao, Atmakuri R.
2018-01-01
Heat shock proteins (HSPs) play a pivotal role in cell growth and viability. Since conventional approaches are expensive and voluminous protein sequence information is available in the post-genomic era, development of an automated and accurate computational tool is highly desirable for prediction of HSPs, their families and sub-types. Thus, we propose a computational approach for reliable prediction of all these components in a single framework and with higher accuracy as well. The proposed approach achieved an overall accuracy of ~84% in predicting HSPs, ~97% in predicting six different families of HSPs, and ~94% in predicting four types of DnaJ proteins, with benchmark datasets. The developed approach also achieved higher accuracy as compared to most of the existing approaches. For easy prediction of HSPs by experimental scientists, a user-friendly web server ir-HSP is made freely accessible at http://cabgrid.res.in:8080/ir-hsp. The ir-HSP was further evaluated for proteome-wide identification of HSPs by using proteome datasets of eight different species, and ~50% of the predicted HSPs in each species were found to be annotated with InterPro HSP families/domains. Thus, the developed computational method is expected to supplement the currently available approaches for prediction of HSPs, to the extent of their families and sub-types. PMID:29379521
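The specific feature set and classifier used by ir-HSP are not given here; as a minimal sketch of the general idea of sequence-based HSP prediction, one could use amino-acid composition features with an off-the-shelf classifier (sequences and labels below are placeholders):

```python
# Minimal sketch: amino-acid composition features plus a random forest classifier.
# Placeholder data; the real ir-HSP features, model, and training sets differ.
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """Return the 20-dimensional amino-acid composition of a protein sequence."""
    seq = seq.upper()
    return [seq.count(a) / max(len(seq), 1) for a in AMINO_ACIDS]

# Toy training data: (sequence, is_HSP) pairs (purely illustrative).
train = [("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 1),
         ("MSDNGPQNQRNAPRITFGGPSDSTGSNQNGERS", 0)]
X = [aa_composition(s) for s, _ in train]
y = [label for _, label in train]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([aa_composition("MKKLLIAAALSTAMA")]))
```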
Lu, T Z; Kostelecki, W; Sun, C L F; Dong, N; Pérez Velázquez, J L; Feng, Z-P
2016-12-01
The spontaneous rhythmic firing of action potentials in pacemaker neurons depends on the biophysical properties of voltage-gated ion channels and background leak currents. The background leak current includes a large K+ and a small Na+ component. We previously reported that a Na+ leak current via U-type channels is required to generate spontaneous action potential firing in the identified respiratory pacemaker neuron, RPeD1, in the freshwater pond snail Lymnaea stagnalis. We further investigated the functional significance of the background Na+ current in rhythmic spiking of RPeD1 neurons. Whole-cell patch-clamp recording and computational modeling approaches were carried out in isolated RPeD1 neurons. The whole-cell currents of the major ion channel components in RPeD1 neurons were characterized, and a conductance-based computational model of the rhythmic pacemaker activity was simulated with the experimental measurements. We found that the spiking rate is more sensitive to changes in the Na+ leak current as compared to the K+ leak current, suggesting a robust function of the Na+ leak current in regulating spontaneous neuronal firing activity. Our study provides new insight into our current understanding of the role of the Na+ leak current in intrinsic properties of pacemaker neurons.
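As a much simplified illustration of how the two leak components enter a conductance-based model (parameter values below are placeholders, not the fitted RPeD1 model), the resting potential can be computed by integrating the leak currents alone:

```python
# Illustrative leak-current model: resting potential from K+ and Na+ leak conductances.
# All values are placeholders; the fitted RPeD1 model includes many more currents.
C = 1.0                   # membrane capacitance (nF)
E_K, E_Na = -80.0, 55.0   # reversal potentials (mV)
g_K_leak = 0.5            # K+ leak conductance (uS)

def simulate_rest(g_Na_leak, dt=0.01, t_end=200.0):
    """Forward-Euler integration of C dV/dt = -gK(V-EK) - gNa(V-ENa)."""
    V = -65.0
    for _ in range(int(t_end / dt)):
        I_leak = g_K_leak * (V - E_K) + g_Na_leak * (V - E_Na)
        V += dt * (-I_leak / C)
    return V

for g in (0.01, 0.05, 0.1):   # increasing the Na+ leak depolarizes the cell
    print(f"g_Na_leak={g:.2f} uS -> resting V = {simulate_rest(g):.1f} mV")
```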
Constructing Precisely Computing Networks with Biophysical Spiking Neurons.
Schwemmer, Michael A; Fairhall, Adrienne L; Denève, Sophie; Shea-Brown, Eric T
2015-07-15
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denève and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denève, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks including irregular, Poisson-like spike times, and a tight balance between excitation and inhibition. These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation.
Progress and challenges in bioinformatics approaches for enhancer identification
Kleftogiannis, Dimitrios; Kalnis, Panos
2016-01-01
Enhancers are cis-acting DNA elements that play critical roles in distal regulation of gene expression. Identifying enhancers is an important step for understanding distinct gene expression programs that may reflect normal and pathogenic cellular conditions. Experimental identification of enhancers is constrained by the set of conditions used in the experiment. This requires multiple experiments to identify enhancers, as they can be active under specific cellular conditions but not in different cell types/tissues or cellular states. This has opened prospects for computational prediction methods that can be used for high-throughput identification of putative enhancers to complement experimental approaches. Potential functions and properties of predicted enhancers have been catalogued and summarized in several enhancer-oriented databases. Because the current methods for the computational prediction of enhancers produce significantly different enhancer predictions, it will be beneficial for the research community to have an overview of the strategies and solutions developed in this field. In this review, we focus on the identification and analysis of enhancers by bioinformatics approaches. First, we describe a general framework for computational identification of enhancers, present relevant data types and discuss possible computational solutions. Next, we cover over 30 existing computational enhancer identification methods that were developed since 2000. Our review highlights advantages, limitations and potentials, while suggesting pragmatic guidelines for development of more efficient computational enhancer prediction methods. Finally, we discuss challenges and open problems of this topic, which require further consideration. PMID:26634919
ERIC Educational Resources Information Center
Al-Bataineh, Adel; Brooks, Leanne
2003-01-01
Presents a twenty-year history of computer-based technology integration, focusing on print automation, learner-centered approaches, and virtual learning via the Internet. Discusses integration strategies applicable in the present-day classroom, the hierarchy of teacher effectiveness, and current challenges and trends. Asserts that the ultimate…
NASA's Use of Human Behavior Models for Concept Development and Evaluation
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2012-01-01
Overview of NASA's use of computational approaches and methods, particularly human performance models, to support research goals, with a focus on examples of the methods used in Code TH and TI at NASA Ames, followed by an in-depth review of MIDAS' current FAA work.
A Software Development Approach for Computer Assisted Language Learning
ERIC Educational Resources Information Center
Cushion, Steve
2005-01-01
Over the last 5 years we have developed, produced, tested, and evaluated an authoring software package to produce web-based, interactive, audio-enhanced language-learning material. That authoring package has been used to produce language-learning material in French, Spanish, German, Arabic, and Tamil. We are currently working on increasing…
Build MyTune: Children's Reflective Practice during Music Creativity Processes
ERIC Educational Resources Information Center
Hsu, Chia-Pao
2015-01-01
The current study examined how components of reflective practice interplay with children's music-making and sharing processes. This study employed a qualitative approach with 11 children who played classroom instruments and researcher-designed computer programs ("Build MyTune I" and "Build MyTune II") while attending music…
ERIC Educational Resources Information Center
Khribi, Mohamed Koutheair; Jemni, Mohamed; Nasraoui, Olfa
2009-01-01
In this paper, we describe an automatic personalization approach aiming to provide online automatic recommendations for active learners without requiring their explicit feedback. Recommended learning resources are computed based on the current learner's recent navigation history, as well as exploiting similarities and dissimilarities among…
Current Density and Continuity in Discretized Models
ERIC Educational Resources Information Center
Boykin, Timothy B.; Luisier, Mathieu; Klimeck, Gerhard
2010-01-01
Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schrödinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying…
New Approaches to Technology in HE Management
ERIC Educational Resources Information Center
Cobb, Chris
2012-01-01
Most UK universities can trace their current management information systems back to significant investments made in the 1990s, largely fuelled by concerns about the millennium bug and a change from character interfaces to graphical user interfaces following the introduction of the personal computer. It was during this period that institutions also…
Novel Features for Brain-Computer Interfaces
Woon, W. L.; Cichocki, A.
2007-01-01
While conventional approaches of BCI feature extraction are based on the power spectrum, we have tried using nonlinear features for classifying BCI data. In this paper, we report our test results and findings, which indicate that the proposed method is a potentially useful addition to current feature extraction techniques. PMID:18364991
Unsteady Aerodynamic Force Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2016-01-01
A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using the autoregressive moving average model, on-line parameter estimator, low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm. A cantilevered rectangular wing built and tested at the NASA Langley Research Center (Hampton, Virginia, USA) in 1959 is used to validate the simple approach. Unsteady aerodynamic forces as well as wing deflections, velocities, accelerations, and strains are computed using the CFL3D computational fluid dynamics (CFD) code and an MSC/NASTRAN code (MSC Software Corporation, Newport Beach, California, USA), and these CFL3D-based results are assumed as measured quantities. Based on the measured strains, wing deflections, velocities, accelerations, and aerodynamic forces are computed using the proposed approach. These computed deflections, velocities, accelerations, and unsteady aerodynamic forces are compared with the CFL3D/NASTRAN-based results. In general, computed aerodynamic forces based on the lifting surface theory in subsonic speeds are in good agreement with the target aerodynamic forces generated using CFL3D code with the Euler equation. Excellent aeroelastic responses are obtained even with unsteady strain data under the signal to noise ratio of -9.8dB. The deflections, velocities, and accelerations at each sensor location are independent of structural and aerodynamic models. Therefore, the distributed strain data together with the current proposed approaches can be used as distributed deflection, velocity, and acceleration sensors. This research demonstrates the feasibility of obtaining induced drag and lift forces through the use of distributed sensor technology with measured strain data. An active induced drag control system thus can be designed using the two computed aerodynamic forces, induced drag and lift, to improve the fuel efficiency of an aircraft. Interpolation elements between structural finite element grids and the CFD grids and centroids are successfully incorporated with the unsteady aeroelastic computation scheme. The most critical technology for the success of the proposed approach is the robust on-line parameter estimator, since the least-squares curve fitting method depends heavily on aeroelastic system frequencies and damping factors.
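The pipeline above relies on an ARMA model and an on-line parameter estimator for the differentiation step; as a much simpler stand-in that conveys the idea, sampled deflections can be smoothed and differentiated numerically with a Savitzky-Golay filter (illustrative only, not the method used in the study):

```python
# Illustrative only: estimate velocity and acceleration from sampled deflections
# by Savitzky-Golay smoothing/differentiation (a stand-in for the ARMA-based estimator).
import numpy as np
from scipy.signal import savgol_filter

dt = 0.001                                       # sample interval (s), placeholder
t = np.arange(0.0, 1.0, dt)
deflection = 0.01 * np.sin(2 * np.pi * 5 * t)    # synthetic wing deflection (m)

velocity = savgol_filter(deflection, window_length=31, polyorder=3,
                         deriv=1, delta=dt)
acceleration = savgol_filter(deflection, window_length=31, polyorder=3,
                             deriv=2, delta=dt)
print(velocity[:3], acceleration[:3])
```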
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zi-Kui; Gleeson, Brian; Shang, Shunli
This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram), and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stabilities. The developed description included composition ranges typical for coating alloys and, hence, allowed for prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fraction, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally-protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad for more rapid discovery and development of new materials.
A spline-based approach for computing spatial impulse responses.
Ellis, Michael A; Guenther, Drake; Walker, William F
2007-05-01
Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.
The Prediction of Drug-Disease Correlation Based on Gene Expression Data.
Cui, Hui; Zhang, Menghuan; Yang, Qingmin; Li, Xiangyi; Liebman, Michael; Yu, Ying; Xie, Lu
2018-01-01
The explosive growth of high-throughput experimental methods and resulting data yields both opportunity and challenge for selecting the correct drug to treat both a specific patient and their individual disease. Ideally, it would be useful and efficient if computational approaches could be applied to help achieve optimal drug-patient-disease matching, but current efforts have met with limited success. Current approaches have primarily utilized the measurable effect of a specific drug on target tissue or cell lines to identify the potential biological effect of such treatment. While these efforts have met with some level of success, there exists much opportunity for improvement. This specifically follows the observation that, for many diseases in light of actual patient response, there is increasing need for treatment with combinations of drugs rather than single-drug therapies. Only a few previous studies have yielded computational approaches for predicting the synergy of drug combinations by analyzing high-throughput molecular datasets. However, these computational approaches focused on the characteristics of the drug itself, without fully accounting for disease factors. Here, we propose an algorithm to specifically predict synergistic effects of drug combinations on various diseases, by integrating the data characteristics of disease-related gene expression profiles with drug-treated gene expression profiles. We have demonstrated utility through its application to transcriptome data, including microarray and RNASeq data, and the drug-disease prediction results were validated using existing publications and drug databases. It is also applicable to other quantitative profiling data such as proteomics data. We also provide an interactive web interface to allow our Prediction of Drug-Disease method to be readily applied to user data. While our studies represent a preliminary exploration of this critical problem, we believe that the algorithm can provide the basis for further refinement towards addressing a large clinical need.
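One common, much simplified way to relate a drug-treated profile to a disease profile (not the integrated algorithm described above) is to score the (anti)correlation of the two expression signatures over shared genes; a strongly negative correlation suggests the drug may reverse the disease signature:

```python
# Sketch: score a drug against a disease via the correlation of their expression
# signatures over shared genes. Data are synthetic placeholders.
from scipy.stats import spearmanr

genes = ["G1", "G2", "G3", "G4", "G5"]
disease_logfc = {"G1": 2.1, "G2": -1.3, "G3": 0.8, "G4": -0.2, "G5": 1.5}
drug_logfc    = {"G1": -1.8, "G2": 1.1, "G3": -0.5, "G4": 0.3, "G5": -1.2}

shared = [g for g in genes if g in disease_logfc and g in drug_logfc]
rho, pval = spearmanr([disease_logfc[g] for g in shared],
                      [drug_logfc[g] for g in shared])
# A strongly negative rho hints that the drug signature opposes the disease signature.
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```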
Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques
Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.
2011-01-01
The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering. PMID:21949695
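The SimOptStrain and BiMOMA formulations are bi-level programs and are not reproduced here; purely to illustrate the binary knockout variables that appear in such MIP-based strain design, a deliberately simplified single-level toy model with the PuLP modeller (entirely hypothetical reactions and coefficients) could look like:

```python
# Toy single-level MILP with binary knockout variables (hypothetical data).
# The actual SimOptStrain/BiMOMA formulations are bi-level and far richer.
import pulp

reactions = ["R1", "R2", "R3", "R4"]
product_contribution = {"R1": 0.0, "R2": 3.0, "R3": 1.5, "R4": -2.0}
max_knockouts = 2

prob = pulp.LpProblem("toy_strain_design", pulp.LpMaximize)
keep = {r: pulp.LpVariable(f"keep_{r}", cat="Binary") for r in reactions}

# Objective: maximize a (toy) linear proxy for product formation.
prob += pulp.lpSum(product_contribution[r] * keep[r] for r in reactions)
# Allow at most `max_knockouts` deletions.
prob += pulp.lpSum(1 - keep[r] for r in reactions) <= max_knockouts

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({r: int(keep[r].value()) for r in reactions})
```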
Machine learning action parameters in lattice quantum chromodynamics
NASA Astrophysics Data System (ADS)
Shanahan, Phiala E.; Trewartha, Daniel; Detmold, William
2018-05-01
Numerical lattice quantum chromodynamics studies of the strong interaction are important in many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. The high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
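The paper introduces custom, symmetry-aware network layers that are not reproduced here; as a generic illustration of the underlying parametric regression task (mapping observables measured on generated configurations to action parameters), a small feed-forward regressor on synthetic data might be set up as follows:

```python
# Generic regression sketch: map stand-in lattice observables to a stand-in coupling.
# Synthetic data only; the paper uses custom layers respecting lattice symmetries.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                                 # observables (placeholder)
beta = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)    # action parameter (placeholder)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X[:400], beta[:400])
print("held-out R^2:", model.score(X[400:], beta[400:]))
```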
The transition of GTDS to the Unix workstation environment
NASA Technical Reports Server (NTRS)
Carter, D.; Metzinger, R.; Proulx, R.; Cefola, P.
1995-01-01
Future Flight Dynamics systems should take advantage of the possibilities provided by current and future generations of low-cost, high performance workstation computing environments with Graphical User Interface. The port of the existing mainframe Flight Dynamics systems to the workstation environment offers an economic approach for combining the tremendous engineering heritage that has been encapsulated in these systems with the advantages of the new computing environments. This paper will describe the successful transition of the Draper Laboratory R&D version of GTDS (Goddard Trajectory Determination System) from the IBM Mainframe to the Unix workstation environment. The approach will be a mix of historical timeline notes, descriptions of the technical problems overcome, and descriptions of associated SQA (software quality assurance) issues.
Vehicle Maneuver Detection with Accelerometer-Based Classification.
Cervantes-Villanueva, Javier; Carrillo-Zapata, Daniel; Terroso-Saenz, Fernando; Valdes-Vela, Mercedes; Skarmeta, Antonio F
2016-09-29
In the mobile computing era, smartphones have become instrumental tools to develop innovative mobile context-aware systems. In that sense, their usage in the vehicular domain eases the development of novel and personal transportation solutions. In this frame, the present work introduces an innovative mechanism to perceive the current kinematic state of a vehicle on the basis of the accelerometer data from a smartphone mounted in the vehicle. Unlike previous proposals, the introduced architecture targets the computational limitations of such devices to carry out the detection process following an incremental approach. For its realization, we have evaluated different classification algorithms to act as agents within the architecture. Finally, our approach has been tested with a real-world dataset collected by means of the ad hoc mobile application developed.
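A minimal sketch of window-based accelerometer classification in the spirit described above (the features, labels and classifier below are placeholders, not the incremental on-device agents evaluated in the paper):

```python
# Sketch: window accelerometer samples, extract simple features, classify maneuvers.
# Synthetic data and labels; the real system uses incremental, on-device classifiers.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def window_features(acc_xyz):
    """Mean and standard deviation per axis for one window of samples."""
    return np.concatenate([acc_xyz.mean(axis=0), acc_xyz.std(axis=0)])

# 200 synthetic windows of 50 samples x 3 axes; labels 0 = cruise, 1 = braking.
windows = rng.normal(size=(200, 50, 3))
labels = rng.integers(0, 2, size=200)
X = np.array([window_features(w) for w in windows])

clf = DecisionTreeClassifier(max_depth=4).fit(X[:150], labels[:150])
print("toy accuracy:", clf.score(X[150:], labels[150:]))
```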
PIXIE3D: A Parallel, Implicit, eXtended MHD 3D Code.
NASA Astrophysics Data System (ADS)
Chacon, L.; Knoll, D. A.
2004-11-01
We report on the development of PIXIE3D, a 3D parallel, fully implicit Newton-Krylov extended primitive-variable MHD code in general curvilinear geometry. PIXIE3D employs a second-order, finite-volume-based spatial discretization that satisfies remarkable properties such as being conservative, solenoidal in the magnetic field, non-dissipative, and stable in the absence of physical dissipation (L. Chacón, Comput. Phys. Comm., submitted, 2004). PIXIE3D employs fully-implicit Newton-Krylov methods for the time advance. Currently, first and second-order implicit schemes are available, although higher-order temporal implicit schemes can be effortlessly implemented within the Newton-Krylov framework. A successful, scalable, MG physics-based preconditioning strategy, similar in concept to previous 2D MHD efforts (L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); J. Comput. Phys. 188 (2), 573-592 (2003)), has been developed. We are currently in the process of parallelizing the code using the PETSc library, and a Newton-Krylov-Schwarz approach for the parallel treatment of the preconditioner. In this poster, we will report on both the serial and parallel performance of PIXIE3D, focusing primarily on scalability and CPU speedup vs. an explicit approach.
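PIXIE3D couples its implicit time advance with physics-based multigrid preconditioning, which is not sketched here; the basic Newton-Krylov machinery for solving a nonlinear residual F(x) = 0, as a stand-in for one implicit step, is available generically in SciPy:

```python
# Generic Newton-Krylov illustration (no physics-based preconditioner),
# solving a small nonlinear system F(x) = 0 as a stand-in for an implicit step.
import numpy as np
from scipy.optimize import newton_krylov

def residual(x):
    """Toy nonlinear residual; an implicit MHD step would evaluate the full PDE residual."""
    return np.array([x[0] + 0.5 * x[1] ** 2 - 1.0,
                     x[1] - np.cos(x[0])])

solution = newton_krylov(residual, np.zeros(2), f_tol=1e-8)
print(solution, residual(solution))
```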
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, P.; Pham, K.
1995-12-31
Under emergency conditions, a bare overhead conductor can carry an increased amount of current that is well in excess of its normal rating. When there is this increase in current flow on a bare overhead conductor, the temperature does not rise instantaneously, but increases along a curve determined by the current, the conductor properties and the ambient conditions. The conductor temperature at the end of a short-time overload period must be restricted to its maximum design value. This paper presents a simplified approach to analyzing the dynamic performance of bare overhead conductors during short-time overload conditions. A computer program was developed to calculate the short-time ratings for bare overhead conductors. The following parameters were considered: current-induced heating, solar load, convective/conductive cooling, radiative cooling, altitude, wind velocity and ampacity of the bare conductor. Several sample graphical output plots are included with the paper.
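A heavily simplified sketch of the kind of dynamic heat-balance calculation such a program performs (all coefficients below are illustrative placeholders, not the paper's conductor data):

```python
# Simplified transient heat balance for a bare overhead conductor:
#   m*c * dT/dt = I^2 * R(T) + q_solar - q_convection - q_radiation
# All coefficients are illustrative placeholders, not the paper's data.
m_c = 1300.0                 # heat capacity per unit length (J/m/K)
R20, alpha = 7e-5, 0.004     # resistance at 20 C (ohm/m) and temperature coefficient
q_solar = 15.0               # absorbed solar load (W/m)
h_conv = 2.0                 # effective convective coefficient per metre (W/m/K)
eps_sigma_area = 0.5 * 5.67e-8 * 0.094   # emissivity * Stefan-Boltzmann * surface per metre
T_amb = 40.0                 # ambient temperature (C)

def temperature_rise(current_a, T0=60.0, dt=1.0, t_end=600.0):
    """Forward-Euler integration of conductor temperature during an overload."""
    T = T0
    for _ in range(int(t_end / dt)):
        R = R20 * (1 + alpha * (T - 20.0))
        q_in = current_a ** 2 * R + q_solar
        q_out = (h_conv * (T - T_amb)
                 + eps_sigma_area * ((T + 273.0) ** 4 - (T_amb + 273.0) ** 4))
        T += dt * (q_in - q_out) / m_c
    return T

print(f"Temperature after 10 min at 1500 A: {temperature_rise(1500.0):.1f} C")
```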
A biologically consistent hierarchical framework for self-referencing survivalist computation
NASA Astrophysics Data System (ADS)
Cottam, Ron; Ranson, Willy; Vounckx, Roger
2000-05-01
Extensively scaled formally rational hardware and software are indirectly fallible, at the very least through temporal restrictions on the evaluation of their correctness. In addition, the apparent inability of formal rationality to successfully describe living systems as anything other than inanimate structures suggests that the development of self-referencing computational machines will require a different approach. There is currently a strong movement towards the adoption of semiotics as a descriptive medium in theoretical biology. We present a related computational semiosic construction (1, 2) consistent with evolutionary hierarchical emergence (3), which may serve as a framework for implementing anticipatory-oriented survivalist processing in real environments.
Single Cell Genomics: Approaches and Utility in Immunology
Neu, Karlynn E; Tang, Qingming; Wilson, Patrick C; Khan, Aly A
2017-01-01
Single cell genomics offers powerful tools for studying lymphocytes, which make it possible to observe rare and intermediate cell states that cannot be resolved at the population-level. Advances in computer science and single cell sequencing technology have created a data-driven revolution in immunology. The challenge for immunologists is to harness computing and turn an avalanche of quantitative data into meaningful discovery of immunological principles, predictive models, and strategies for therapeutics. Here, we review the current literature on computational analysis of single cell RNA-seq data and discuss underlying assumptions, methods, and applications in immunology, and highlight important directions for future research. PMID:28094102
A survey of an introduction to fault diagnosis algorithms
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
This report surveys the field of diagnosis and introduces some of the key algorithms and heuristics currently in use. Fault diagnosis is an important and rapidly growing discipline. This is important in the design of self-repairable computers because the present diagnosis resolution of its fault-tolerant computer is limited to a functional unit or processor. Better resolution is necessary before failed units can become partially reusable. The approach that holds the greatest promise is that of resident microdiagnostics; however, that presupposes a microprogrammable architecture for the computer being self-diagnosed. The presentation is tutorial and contains examples. An extensive bibliography of some 220 entries is included.
NASA Astrophysics Data System (ADS)
Oztekin, Halit; Temurtas, Feyzullah; Gulbag, Ali
Arithmetic and Logic Unit (ALU) design is one of the important topics in Computer Architecture and Organization courses in Computer and Electrical Engineering departments. Existing ALU designs used as educational tools are non-modular in nature. As programmable logic technology has developed rapidly, it has become feasible to implement ALU designs based on Field Programmable Gate Arrays (FPGAs) in this course. In this paper, we adopt a modular approach to FPGA-based ALU design. All the modules in the ALU design are realized as schematics on Altera's Cyclone II development board. Under this model, the ALU is divided into four distinct modules: an arithmetic unit (excluding multiplication and division), a logic unit, a multiplication unit and a division unit. Users can easily design an ALU of any size, since the approach is modular. This approach was then applied to the microcomputer architecture design named BZK.SAU.FPGA10.0, replacing its current ALU unit.
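The design itself is entered as schematics on the FPGA board; purely as a behavioural illustration of the same four-module dispatch (opcodes and word width below are hypothetical), the decomposition can be sketched as:

```python
# Behavioural sketch of a modular ALU dispatch (hypothetical opcodes and width);
# the actual design is entered as schematics on a Cyclone II board.
WIDTH = 8
MASK = (1 << WIDTH) - 1

def arithmetic_unit(op, a, b):        # add/subtract only, as in the modular split
    return (a + b) & MASK if op == "ADD" else (a - b) & MASK

def logic_unit(op, a, b):
    return {"AND": a & b, "OR": a | b, "XOR": a ^ b, "NOT": ~a & MASK}[op]

def multiplication_unit(a, b):
    return (a * b) & MASK

def division_unit(a, b):
    return (a // b) & MASK if b else 0   # real hardware would flag divide-by-zero

def alu(op, a, b=0):
    if op in ("ADD", "SUB"):
        return arithmetic_unit(op, a, b)
    if op in ("AND", "OR", "XOR", "NOT"):
        return logic_unit(op, a, b)
    if op == "MUL":
        return multiplication_unit(a, b)
    if op == "DIV":
        return division_unit(a, b)
    raise ValueError(f"unknown opcode {op}")

print(alu("ADD", 200, 100), alu("XOR", 0b1010, 0b0110))
```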
Boundary methods for mode estimation
NASA Astrophysics Data System (ADS)
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computations to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. Also, this paper briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to other mode estimation techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
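The MOG comparison baseline can be reproduced in outline by fitting Gaussian mixtures of increasing order and keeping the order with the lowest AIC (a generic sketch of that baseline, not the BM algorithm itself):

```python
# Generic mixture-of-Gaussians mode estimation with AIC model selection;
# this sketches the comparison baseline, not the Boundary Method itself.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-3.0, 1.0, 300),
                       rng.normal(4.0, 0.8, 200)]).reshape(-1, 1)

aic_scores = {}
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)
    aic_scores[k] = gmm.aic(data)

best_k = min(aic_scores, key=aic_scores.get)
print("estimated number of modes:", best_k)
```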
Efficient anharmonic vibrational spectroscopy for large molecules using local-mode coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Xiaolu; Steele, Ryan P., E-mail: ryan.steele@utah.edu
This article presents a general computational approach for efficient simulations of anharmonic vibrational spectra in chemical systems. An automated local-mode vibrational approach is presented, which borrows techniques from localized molecular orbitals in electronic structure theory. This approach generates spatially localized vibrational modes, in contrast to the delocalization exhibited by canonical normal modes. The method is rigorously tested across a series of chemical systems, ranging from small molecules to large water clusters and a protonated dipeptide. It is interfaced with exact, grid-based approaches, as well as vibrational self-consistent field methods. Most significantly, this new set of reference coordinates exhibits a well-behaved spatial decay of mode couplings, which allows for a systematic, a priori truncation of mode couplings and increased computational efficiency. Convergence can typically be reached by including modes within only about 4 Å. The local nature of this truncation suggests particular promise for the ab initio simulation of anharmonic vibrational motion in large systems, where connection to experimental spectra is currently most challenging.
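A hedged sketch of the distance-based coupling truncation described above; the notion of a mode "centre" and the 4 Å cutoff are used purely to illustrate the idea, not the paper's actual criterion:

```python
# Sketch: retain only pairwise mode couplings whose localized modes lie within a cutoff.
# Mode "centres" and the cutoff are illustrative of the truncation idea only.
import numpy as np

rng = np.random.default_rng(3)
mode_centres = rng.uniform(0.0, 12.0, size=(20, 3))   # Angstrom, placeholder positions
CUTOFF = 4.0

retained_pairs = []
for i in range(len(mode_centres)):
    for j in range(i + 1, len(mode_centres)):
        if np.linalg.norm(mode_centres[i] - mode_centres[j]) <= CUTOFF:
            retained_pairs.append((i, j))

total_pairs = len(mode_centres) * (len(mode_centres) - 1) // 2
print(f"retained {len(retained_pairs)} of {total_pairs} pairwise couplings")
```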
An Integrative Bioinformatics Approach for Knowledge Discovery
NASA Astrophysics Data System (ADS)
Peña-Castillo, Lourdes; Phan, Sieu; Famili, Fazel
The vast amount of data being generated by large scale omics projects and the computational approaches developed to deal with this data have the potential to accelerate the advancement of our understanding of the molecular basis of genetic diseases. This better understanding may have profound clinical implications and transform the medical practice; for instance, therapeutic management could be prescribed based on the patient’s genetic profile instead of being based on aggregate data. Current efforts have established the feasibility and utility of integrating and analysing heterogeneous genomic data to identify molecular associations to pathogenesis. However, since these initiatives are data-centric, they either restrict the research community to specific data sets or to a certain application domain, or force researchers to develop their own analysis tools. To fully exploit the potential of omics technologies, robust computational approaches need to be developed and made available to the community. This research addresses such challenge and proposes an integrative approach to facilitate knowledge discovery from diverse datasets and contribute to the advancement of genomic medicine.
Exploiting Self-organization in Bioengineered Systems: A Computational Approach.
Davis, Delin; Doloman, Anna; Podgorski, Gregory J; Vargis, Elizabeth; Flann, Nicholas S
2017-01-01
The productivity of bioengineered cell factories is limited by inefficiencies in nutrient delivery and waste and product removal. Current solution approaches explore changes in the physical configurations of the bioreactors. This work investigates the possibilities of exploiting self-organizing vascular networks to support producer cells within the factory. A computational model simulates de novo vascular development of endothelial-like cells and the resultant network functioning to deliver nutrients and extract product and waste from the cell culture. Microbial factories with vascular networks are evaluated for their scalability, robustness, and productivity compared to the cell factories without a vascular network. Initial studies demonstrate that at least an order of magnitude increase in production is possible, the system can be scaled up, and the self-organization of an efficient vascular network is robust. The work suggests that bioengineered multicellularity may offer efficiency improvements difficult to achieve with physical engineering approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Caldwell, Blake A.; Hicks, Susan Elaine
High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system even when processing data at a lower security level and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details current state-of-the-art in virtualization, reconfigurable network enclaving via Software Defined Networking (SDN), and storage architectures and bridging techniques for creating secure enclaves in HPC environments.
Computational Flow Modeling of Human Upper Airway Breathing
NASA Astrophysics Data System (ADS)
Mylavarapu, Goutham
Computational modeling of biological systems has gained a lot of interest in biomedical research in the recent past. This thesis focuses on the application of computational simulations to study airflow dynamics in the human upper respiratory tract. With advancements in medical imaging, patient-specific geometries of anatomically accurate respiratory tracts can now be reconstructed from Magnetic Resonance Images (MRI) or Computed Tomography (CT) scans, with better and more accurate detail than traditional cadaver cast models. Computational studies using these individualized geometrical models have advantages of non-invasiveness, ease, minimal patient interaction, and improved accuracy over experimental and clinical studies. Numerical simulations can provide detailed flow fields including velocities, flow rates, airway wall pressure, shear stresses, and turbulence in an airway. Interpretation of these physical quantities will enable the development of efficient treatment procedures, medical devices, targeted drug delivery, etc. The hypothesis for this research is that computational modeling can predict the outcomes of a surgical intervention or a treatment plan prior to its application and will guide the physician in providing better treatment to the patients. In the current work, three different computational approaches were used to investigate flow in airway geometries: Computational Fluid Dynamics (CFD), Flow-Structure Interaction (FSI) and particle-flow simulations. The CFD approach assumes the airway wall to be rigid and is relatively easy to simulate, compared to the more challenging FSI approach, where interactions of airway wall deformations with the flow are also accounted for. The CFD methodology using different turbulence models is validated against experimental measurements in an airway phantom. Two case studies using CFD are demonstrated: one quantifying a pre- and post-operative airway, and another performing virtual surgery to determine the best possible surgery in a constricted airway. The unsteady Large Eddy Simulation (LES) and steady Reynolds-Averaged Navier-Stokes (RANS) approaches in CFD modeling are discussed. The more challenging FSI approach is modeled first in a simple two-dimensional anatomical geometry, then extended to a simplified three-dimensional geometry, and finally to three-dimensionally accurate geometries. The concepts of virtual surgery and the differences from CFD are discussed. Finally, the influence of various drug delivery parameters on particle deposition efficiency in the airway anatomy is investigated through particle-flow simulations in a nasal airway model.
Computational modeling in melanoma for novel drug discovery.
Pennisi, Marzio; Russo, Giulia; Di Salvatore, Valentina; Candido, Saverio; Libra, Massimo; Pappalardo, Francesco
2016-06-01
There is a growing body of evidence highlighting the applications of computational modeling in the field of biomedicine. It has recently been applied to the in silico analysis of cancer dynamics. In the era of precision medicine, this analysis may allow the discovery of new molecular targets useful for the design of novel therapies and for overcoming resistance to anticancer drugs. According to its molecular behavior, melanoma represents an interesting tumor model in which computational modeling can be applied. Melanoma is an aggressive tumor of the skin with a poor prognosis for patients with advanced disease as it is resistant to current therapeutic approaches. This review discusses the basics of computational modeling in melanoma drug discovery and development. Discussion includes the in silico discovery of novel molecular drug targets, the optimization of immunotherapies and personalized medicine trials. Mathematical and computational models are gradually being used to help understand biomedical data produced by high-throughput analysis. The use of advanced computer models allowing the simulation of complex biological processes provides hypotheses and supports experimental design. The research in fighting aggressive cancers, such as melanoma, is making great strides. Computational models represent the key component to complement these efforts. Due to the combinatorial complexity of new drug discovery, a systematic approach based only on experimentation is not possible. Computational and mathematical models are necessary for bringing cancer drug discovery into the era of omics, big data and personalized medicine.
The evolution of eLearning: background, blends and blackboard....
Sleator, Roy D
2010-01-01
This review of eLearning is divided into three sections: the first charts the evolution of eLearning from early correspondence courses to the current computer mediated approaches to distributed learning. The second section deals with the concept of blended learning; combining best practice in face-to-face and online learning. The final section focuses on current platform technologies in eLearning and outlines the strengths and weaknesses of learning management systems such as Blackboard.
Real-time computer treatment of THz passive device images with the high image quality
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2012-06-01
We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device, as well as to active THz imaging systems. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows the number of pixels in processed images to be increased without noticeable reduction of image quality. The performance of the computer code can be increased many times using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise we develop an approach which suppresses the noise after computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.
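The paper's own spatial filters are not specified here; as a generic illustration of filter-based enhancement of a noisy, low-resolution image, median denoising followed by unsharp masking can be applied (synthetic image and parameters are placeholders):

```python
# Generic illustration of spatial filtering for a noisy THz-like image
# (median denoising followed by unsharp masking); not the paper's own filters.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
image = rng.normal(0.0, 0.3, size=(64, 96))
image[20:40, 30:50] += 1.0                 # synthetic "hidden object"

denoised = ndimage.median_filter(image, size=3)
blurred = ndimage.gaussian_filter(denoised, sigma=2.0)
sharpened = denoised + 1.5 * (denoised - blurred)   # unsharp masking

print(sharpened.shape, float(sharpened.max()))
```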
Development of a Renormalization Group Approach to Multi-Scale Plasma Physics Computation
2012-03-28
with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS. 1...NUMBER(S) 12. DISTRIBUTION/AVAILABILITY STATEMENT 13. SUPPLEMENTARY NOTES 14. ABSTRACT 15. SUBJECT TERMS 16. SECURITY CLASSIFICATION OF: a . REPORT...code) 29-12-2008 Final Technical Report From 29-12-2008 To 16-95-2011 (STTR PHASE II) DEVELOPMENT OF A RENORMALIZATION GROUP APPROACH TO MULTI-SCALE
Optical quantum memory based on electromagnetically induced transparency
Ma, Lijun; Slattery, Oliver; Tang, Xiao
2017-01-01
Electromagnetically induced transparency (EIT) is a promising approach to implement quantum memory in quantum communication and quantum computing applications. In this paper, following a brief overview of the main approaches to quantum memory, we provide details of the physical principle and theory of quantum memory based specifically on EIT. We discuss the key technologies for implementing quantum memory based on EIT and review important milestones, from the first experimental demonstration to current applications in quantum information systems. PMID:28828172
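In the standard three-level Lambda description of EIT (textbook form, not specific to any implementation reviewed in the paper), the medium is driven into a dark superposition of the two ground states, with probe and control Rabi frequencies Omega_p and Omega_c:

```latex
% Dark state of a three-level Lambda system (standard textbook form)
|D\rangle = \frac{\Omega_c\,|g\rangle - \Omega_p\,|s\rangle}{\sqrt{\Omega_p^{2} + \Omega_c^{2}}}
```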
Stone, David E; Haswell, Elizabeth S; Sztul, Elizabeth
2017-01-01
In classical Cell Biology, fundamental cellular processes are revealed empirically, one experiment at a time. While this approach has been enormously fruitful, our understanding of cells is far from complete. In fact, the more we know, the more keenly we perceive our ignorance of the profoundly complex and dynamic molecular systems that underlie cell structure and function. Thus, it has become apparent to many cell biologists that experimentation alone is unlikely to yield major new paradigms, and that empiricism must be combined with theory and computational approaches to yield major new discoveries. To facilitate those discoveries, three workshops will convene annually for one day in three successive summers (2017-2019) to promote the use of computational modeling by cell biologists currently unconvinced of its utility or unsure how to apply it. The first of these workshops was held at the University of Illinois, Chicago in July 2017. Organized to facilitate interactions between traditional cell biologists and computational modelers, it provided a unique educational opportunity: a primer on how cell biologists with little or no relevant experience can incorporate computational modeling into their research. Here, we report on the workshop and describe how it addressed key issues that cell biologists face when considering modeling including: (1) Is my project appropriate for modeling? (2) What kind of data do I need to model my process? (3) How do I find a modeler to help me in integrating modeling approaches into my work? And, perhaps most importantly, (4) why should I bother?
Physics-of-Failure Approach to Prognostics
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.
2017-01-01
As more and more electric vehicles progressively enter daily operation, a critical challenge lies in accurate prediction of the behavior of the electrical components present in the system. In the case of electric vehicles, computing the remaining battery charge is safety-critical. In order to tackle the prediction problem, it is essential to be aware of the current state and health of the system, especially since it is necessary to perform condition-based predictions. To predict the future state of the system, knowledge of the current and future operations of the vehicle is also required. In this presentation, our approach to developing a system-level health-monitoring safety indicator for different electronic components is presented; it runs estimation and prediction algorithms to determine state of charge and estimate the remaining useful life of the respective components. Given models of the current and future system behavior, the general approach of model-based prognostics can be employed as a solution to the prediction problem and, further, for decision making.
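A deliberately simple sketch of an estimation-plus-prediction loop of the kind described above (Coulomb counting with a constant assumed future load; the actual work uses model-based estimation and prediction algorithms, and all values below are placeholders):

```python
# Simplified sketch: Coulomb-counting state-of-charge estimate plus a
# remaining-time prediction under an assumed future load. Illustrative only.
def update_soc(soc, current_a, dt_s, capacity_ah):
    """Integrate measured current to update state of charge (0..1)."""
    return soc - (current_a * dt_s / 3600.0) / capacity_ah

def predict_remaining_time(soc, assumed_load_a, capacity_ah, cutoff_soc=0.2):
    """Hours until the cutoff SOC is reached under a constant assumed load."""
    usable_ah = max(soc - cutoff_soc, 0.0) * capacity_ah
    return usable_ah / assumed_load_a if assumed_load_a > 0 else float("inf")

soc, capacity = 0.9, 40.0                        # 40 Ah pack, placeholder values
for measured_current in [25.0, 30.0, 28.0]:      # amps over 1-minute steps
    soc = update_soc(soc, measured_current, 60.0, capacity)

print(f"SOC = {soc:.3f}, predicted time to cutoff = "
      f"{predict_remaining_time(soc, assumed_load_a=30.0, capacity_ah=capacity):.2f} h")
```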
Technological advances in real-time tracking of cell death
Skommer, Joanna; Darzynkiewicz, Zbigniew; Wlodkowic, Donald
2010-01-01
Cell population can be viewed as a quantum system, which like Schrödinger’s cat exists as a combination of survival- and death-allowing states. Tracking and understanding cell-to-cell variability in processes of high spatio-temporal complexity such as cell death is at the core of current systems biology approaches. As probabilistic modeling tools attempt to impute information inaccessible by current experimental approaches, advances in technologies for single-cell imaging and omics (proteomics, genomics, metabolomics) should go hand in hand with the computational efforts. Over the last few years we have made exciting technological advances that allow studies of cell death dynamically in real-time and with the unprecedented accuracy. These approaches are based on innovative fluorescent assays and recombinant proteins, bioelectrical properties of cells, and more recently also on state-of-the-art optical spectroscopy. Here, we review current status of the most innovative analytical technologies for dynamic tracking of cell death, and address the interdisciplinary promises and future challenges of these methods. PMID:20519963
Computing rank-revealing QR factorizations of dense matrices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science
1998-06-01
We develop algorithms and implementations for computing rank-revealing QR (RRQR) factorizations of dense matrices. First, we develop an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, aided by incremental condition estimation. Second, we develop efficiently implementable variants of guaranteed reliable RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang. We suggest algorithmic improvements with respect to condition estimation, termination criteria, and Givens updating. By combining the block algorithm with one of the triangular postprocessing steps, we arrive at an efficient and reliable algorithm for computing an RRQR factorization of a dense matrix. Experimental results on IBM RS/6000 and SGI R8000 platforms show that this approach performs up to three times faster than the less reliable QR factorization with column pivoting as it is currently implemented in LAPACK, and comes within 15% of the performance of the LAPACK block algorithm for computing a QR factorization without any column exchanges. Thus, we expect this routine to be useful in many circumstances where numerical rank deficiency cannot be ruled out, but currently has been ignored because of the computational cost of dealing with it.
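The windowed and block RRQR algorithms themselves are not reproduced here; the basic idea of revealing numerical rank from a column-pivoted QR factorization can be illustrated with SciPy's pivoted QR:

```python
# Illustration of rank revelation via column-pivoted QR; the paper's windowed
# and block RRQR algorithms with condition estimation are more elaborate.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(5)
A = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 40))   # rank-8 matrix, 100 x 40

Q, R, piv = qr(A, mode="economic", pivoting=True)
diag = np.abs(np.diag(R))
numerical_rank = int(np.sum(diag > 1e-10 * diag[0]))
print("estimated numerical rank:", numerical_rank)          # expect 8
```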
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
Stamatakis, Alexandros; Ott, Michael
2008-12-27
The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAXML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.
Allmer, Jens; Yousef, Malik
2016-12-01
Editorial The term MicroRNA or its contraction miRNA currently appears in 21,215 titles of abstracts, published between 1997 and now, available on Pubmed (2016-21-22:12:59 EET). 4,108 of these were published in 2016 alone, which signifies the importance of miRNA-related research. MicroRNAs can be detected experimentally using various techniques, such as directional cloning of endogenous small RNAs, but these are time-consuming [1]. Additionally, it is necessary for the miRNA and its mRNA target(s) to be co-expressed to infer a functional relationship, which is difficult, if not impossible, to achieve [2]. Since experimental approaches face such difficulties, they have been complemented by computational approaches [3], thereby defining the field of computational miRNomics. Due to the rapid development of the discipline, it is important to assess the state-of-the-art. In this special issue, several areas of the field are investigated, ranging from pre-miRNA detection via machine learning to the application of differential expression analysis in plants. First, Saçar Demirci et al. discuss an approach to virus pre-miRNA detection using machine learning [4]. Such approaches are based on parameterization of miRNAs, and Yousef et al. discuss how to select among such features [5]. A different computational perspective is provided by Kotipalli et al., who model the kinetics of miRNA genesis and targeting [6]. To fuel more refined future models for genesis and targeting, it is important to establish miRNA and target expression under varying conditions. Zhang et al. [7] and Kanke et al. [8] discuss two approaches to quantify miRNAs and other non-coding short RNAs. Diler et al., finally, discuss actual biological implications of differentially expressed miRNAs [9]. This special issue on computational miRNomics thus provides a trajectory from the detection of pre-miRNAs to the biological implications of differentially expressed miRNAs. Additional topics will be covered in the upcoming second volume of the special issue on computational miRNomics.
NASA Astrophysics Data System (ADS)
Noor, F. A.; Nabila, E.; Mardianti, H.; Ariani, T. I.; Khairurrijal
2018-04-01
The transmittance and tunneling current in heterostructures were studied under spin polarization consideration by employing a zinc-blende structure for the heterostructures. An electron tunnels through a potential barrier when a bias voltage is applied to the barrier, which tilts it into a trapezoidal potential barrier. An Airy wave function approach was employed to obtain the transmittance, which was then utilized to compute the tunneling current by using a Gauss quadrature method. It was shown that the transmittances were asymmetric with respect to the incident angle of the electron. It was also shown that the tunneling currents increased as the bias voltage increased.
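The quadrature step can be illustrated with a small sketch: Gauss-Legendre integration of a transmittance over incident energy, one ingredient of a tunneling-current calculation. The rectangular-barrier transmittance used here is only a placeholder, not the Airy-function result for a biased trapezoidal barrier.

    # Sketch: Gauss-Legendre quadrature of a (toy) transmittance over energy.
    import numpy as np

    def transmittance(E, V0=0.3, width_nm=2.0):
        """Toy WKB-like transmission through a rectangular barrier (E, V0 in eV)."""
        m = 9.109e-31; hbar = 1.055e-34; q = 1.602e-19
        if E >= V0:
            return 1.0
        kappa = np.sqrt(2.0 * m * q * (V0 - E)) / hbar
        return float(np.exp(-2.0 * kappa * width_nm * 1e-9))

    def integrate_gauss(f, a, b, n=64):
        """Gauss-Legendre quadrature of f on [a, b]."""
        x, w = np.polynomial.legendre.leggauss(n)
        t = 0.5 * (b - a) * x + 0.5 * (b + a)   # map nodes from [-1, 1] to [a, b]
        return 0.5 * (b - a) * np.sum(w * np.array([f(ti) for ti in t]))

    # Energy integral of the transmittance from 0 to 0.5 eV (supply weight taken as 1).
    print(integrate_gauss(transmittance, 0.0, 0.5))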
Using Cellular Automata for Parking Recommendations in Smart Environments
Horng, Gwo-Jiun
2014-01-01
In this work, we propose an innovative adaptive recommendation mechanism for smart parking. The cognitive RF module will transmit the vehicle location information and the parking space requirements to the parking congestion computing center (PCCC) when the driver must find a parking space. Moreover, for the parking spaces, we use a cellular automata (CA) model that can adapt to both full and non-full parking lot situations. Here, the PCCC can compute the nearest parking lot, the parking lot status and the current or opposite driving direction with the vehicle location information. By considering the driving direction, we can determine when the vehicles must turn around and thus reduce road congestion and speed up finding a parking space. The recommendation will be sent to the drivers through a wireless communication cognitive radio (CR) model after the computation and analysis by the PCCC. The current study evaluates the performance of this approach by conducting computer simulations. The simulation results show the strengths of the proposed smart parking mechanism in terms of avoiding increased congestion and decreasing the time to find a parking space. PMID:25153671
The impact of countermeasure propagation on the prevalence of computer viruses.
Chen, Li-Chiou; Carley, Kathleen M
2004-04-01
Countermeasures such as software patches or warnings can be effective in helping organizations avert virus infection problems. However, current strategies for disseminating such countermeasures have limited their effectiveness. We propose a new approach, called the Countermeasure Competing (CMC) strategy, and use computer simulation to formally compare its relative effectiveness with three antivirus strategies currently under consideration. CMC is based on the idea that computer viruses and countermeasures spread through two separate but interlinked complex networks-the virus-spreading network and the countermeasure-propagation network, in which a countermeasure acts as a competing species against the computer virus. Our results show that CMC is more effective than other strategies based on the empirical virus data. The proposed CMC reduces the size of virus infection significantly when the countermeasure-propagation network has properties that favor countermeasures over viruses, or when the countermeasure-propagation rate is higher than the virus-spreading rate. In addition, our work reveals that CMC can be flexibly adapted to different uncertainties in the real world, enabling it to be tuned to a greater variety of situations than other strategies.
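A toy version of the competing-spread idea can be simulated in a few lines: a virus and a countermeasure propagate over two different random graphs defined on the same nodes, and a protected node can no longer be infected. The graph models, seeds and rates below are illustrative assumptions, not the empirical networks or parameters of the study.

    # Toy discrete-time simulation in the spirit of a "competing species" strategy.
    import random
    import networkx as nx

    random.seed(1)
    n = 1000
    virus_net = nx.erdos_renyi_graph(n, 0.004, seed=1)     # virus-spreading network
    counter_net = nx.erdos_renyi_graph(n, 0.004, seed=2)   # countermeasure network

    state = {v: 'S' for v in range(n)}    # S: susceptible, I: infected, P: protected
    state[0] = 'I'                        # seed the virus
    state[1] = 'P'                        # seed the countermeasure
    beta, gamma = 0.10, 0.15              # virus vs. countermeasure spread rates

    for _ in range(50):
        new_state = dict(state)
        for v in range(n):
            if state[v] == 'S':
                if any(state[u] == 'P' for u in counter_net[v]) and random.random() < gamma:
                    new_state[v] = 'P'
                elif any(state[u] == 'I' for u in virus_net[v]) and random.random() < beta:
                    new_state[v] = 'I'
            elif state[v] == 'I':
                # the countermeasure can also disinfect an already infected machine
                if any(state[u] == 'P' for u in counter_net[v]) and random.random() < gamma:
                    new_state[v] = 'P'
        state = new_state

    print(sum(s == 'I' for s in state.values()), "infected,",
          sum(s == 'P' for s in state.values()), "protected")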
Machine learning for epigenetics and future medical applications
Holder, Lawrence B.; Haque, M. Muksitul; Skinner, Michael K.
2017-01-01
Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML, to develop a more efficient feature selection process, and to address the imbalance problem in all genomic data sets. These studies suggest the power of this novel ML approach and our ability to predict epigenetic phenomena and associated disease. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of molecular epigenetic data in advanced machine learning analysis to medicine is the focus of this review. PMID:28524769
Computational Fragment-Based Drug Design: Current Trends, Strategies, and Applications.
Bian, Yuemin; Xie, Xiang-Qun Sean
2018-04-09
Fragment-based drug design (FBDD) has been an effective methodology for drug development for decades. Successful applications of this strategy have brought both opportunities and challenges to the field of pharmaceutical science. Recent progress in computational fragment-based drug design provides an additional approach for future research in a time- and labor-efficient manner. Combining multiple in silico methodologies, computational FBDD offers flexibility in fragment library selection, protein model generation, and prediction of fragment/compound docking modes. These characteristics give computational FBDD an advantage in designing novel and potential compounds for a given target. The purpose of this review is to discuss the latest advances, ranging from commonly used strategies to novel concepts and technologies in computational fragment-based drug design. In particular, specifications and advantages are compared between experimental and computational FBDD, and limitations and future prospects are discussed and emphasized.
Robust tuning of robot control systems
NASA Technical Reports Server (NTRS)
Minis, I.; Uebel, M.
1992-01-01
The computed torque control problem is examined for a robot arm with flexible, geared, joint drive systems which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach to computed torque control combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel torque control system based on model-following techniques. Standard tasks and performance indices are used to evaluate the performance of the controllers. Both numerical simulations and experiments are used in the evaluation. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
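The generic computed-torque law at the heart of such schemes can be sketched as follows; the inertia and Coriolis/gravity terms are placeholder models of a two-link arm, purely for illustration, and not the flexible, geared joint dynamics treated in the paper.

    # Generic computed-torque control law (a sketch, not the paper's joint-torque variant):
    # tau = M(q) (qdd_des + Kd*(qd_des - qd) + Kp*(q_des - q)) + h(q, qd),
    # where h collects Coriolis, centrifugal and gravity terms.
    import numpy as np

    def M(q):                      # placeholder inertia matrix of a 2-link arm
        return np.array([[2.0 + np.cos(q[1]), 0.5], [0.5, 1.0]])

    def h(q, qd):                  # placeholder Coriolis/centrifugal + gravity vector
        return np.array([0.1 * qd[0] * qd[1], 9.81 * 0.5 * np.sin(q[1])])

    def computed_torque(q, qd, q_des, qd_des, qdd_des, Kp, Kd):
        e, ed = q_des - q, qd_des - qd
        return M(q) @ (qdd_des + Kd * ed + Kp * e) + h(q, qd)

    tau = computed_torque(q=np.array([0.1, 0.2]), qd=np.zeros(2),
                          q_des=np.array([0.5, 0.0]), qd_des=np.zeros(2),
                          qdd_des=np.zeros(2), Kp=25.0, Kd=10.0)
    print(tau)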
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
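For reference, the baseline full-history Grünwald-Letnikov sum that the adaptive memory method accelerates can be written compactly; the sketch below implements only this baseline, not the adaptive time step memory scheme itself.

    # Baseline full-history Gruenwald-Letnikov (GL) fractional derivative of a sampled
    # function. Every previous time point contributes to the value at the current time.
    import numpy as np

    def gl_weights(alpha, n):
        """GL binomial weights w_k = (-1)^k C(alpha, k), via the usual recurrence."""
        w = np.empty(n + 1)
        w[0] = 1.0
        for k in range(1, n + 1):
            w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
        return w

    def gl_derivative(f_samples, alpha, dt):
        """GL fractional derivative of order alpha at the last sample time."""
        n = len(f_samples) - 1
        w = gl_weights(alpha, n)
        # sum_k w_k * f(t - k*dt): the reversed sample array aligns f(t - k*dt) with w_k
        return np.sum(w * f_samples[::-1]) / dt**alpha

    t = np.linspace(0.0, 1.0, 201)
    f = t**2
    # Half-derivative of t^2 at t=1 is Gamma(3)/Gamma(2.5), approximately 1.5045
    print(gl_derivative(f, alpha=0.5, dt=t[1] - t[0]))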
A method for the computational modeling of the physics of heart murmurs
NASA Astrophysics Data System (ADS)
Seo, Jung Hee; Bakhshaee, Hani; Garreau, Guillaume; Zhu, Chi; Andreou, Andreas; Thompson, William R.; Mittal, Rajat
2017-05-01
A computational method for direct simulation of the generation and propagation of blood flow induced sounds is proposed. This computational hemoacoustic method is based on the immersed boundary approach and employs high-order finite difference methods to resolve wave propagation and scattering accurately. The current method employs a two-step, one-way coupled approach for the sound generation and its propagation through the tissue. The blood flow is simulated by solving the incompressible Navier-Stokes equations using the sharp-interface immersed boundary method, and the equations corresponding to the generation and propagation of the three-dimensional elastic wave corresponding to the murmur are resolved with a high-order, immersed boundary based, finite-difference methods in the time-domain. The proposed method is applied to a model problem of aortic stenosis murmur and the simulation results are verified and validated by comparing with known solutions as well as experimental measurements. The murmur propagation in a realistic model of a human thorax is also simulated by using the computational method. The roles of hemodynamics and elastic wave propagation on the murmur are discussed based on the simulation results.
Siekmeier, Peter J
2015-10-01
A good deal of recent research has centered on the identification of biomarkers and endophenotypic measures of psychiatric illnesses using in vivo and in vitro studies. This is understandable, as these measures-as opposed to complex clinical phenotypes-may be more closely related to neurobiological and genetic vulnerabilities. However, instantiation of such biomarkers using computational models-in silico studies-has received less attention. This approach could become increasingly important, given the wealth of detailed information produced by recent basic neuroscience research, and increasing availability of high capacity computing platforms. The purpose of this review is to survey the current state of the art of research in this area. We discuss computational approaches to schizophrenia, bipolar disorder, Alzheimer's disease, fragile X syndrome and autism, and argue that it represents a promising and underappreciated research modality. In conclusion, we outline specific avenues for future research; also, potential uses of in silico models to conduct "virtual experiments" and to generate novel hypotheses, and as an aid in neuropsychiatric drug development are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Computational chemistry in structure-based drug design].
Cao, Ran; Li, Wei; Sun, Han-Zi; Zhou, Yu; Huang, Niu
2013-07-01
Today, the understanding of the sequence and structure of biologically relevant targets is growing rapidly and researchers from many disciplines, physics and computational science in particular, are making significant contributions to modern biology and drug discovery. However, it remains challenging to rationally design small molecular ligands with desired biological characteristics based on the structural information of the drug targets, which demands more accurate calculation of ligand binding free-energy. With the rapid advances in computer power and extensive efforts in algorithm development, physics-based computational chemistry approaches have played more important roles in structure-based drug design. Here we reviewed the newly developed computational chemistry methods in structure-based drug design as well as the elegant applications, including binding-site druggability assessment, large scale virtual screening of chemical database, and lead compound optimization. Importantly, here we address the current bottlenecks and propose practical solutions.
A Parallel Processing Algorithm for Remote Sensing Classification
NASA Technical Reports Server (NTRS)
Gualtieri, J. Anthony
2005-01-01
A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level computers using the Linux operating system. For example, on the Medusa cluster at NASA/GSFC, this provides supercomputing performance, 130 Gflops (Linpack benchmark), at moderate cost, $370K. However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering Earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier. In particular I will use the Support Vector Machine (SVM) approach as applied to hyperspectral imagery. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) will be described and then details specific to the implementation will be given. Then timing results will be reported to show what speedups are possible using parallel computation. The paper will close with a discussion of the results.
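The embarrassingly parallel task structure can be illustrated with a small sketch in which chunks of synthetic "pixels" are classified by a trained SVM in separate worker processes; this is a stand-in for, and much simpler than, the cluster implementation described in the paper, and the data are entirely synthetic.

    # Sketch of the embarrassingly parallel decomposition: independent chunks of
    # synthetic "pixels" are classified by a trained SVM in separate processes.
    import numpy as np
    from multiprocessing import Pool
    from sklearn.svm import SVC

    def classify_chunk(args):
        model, chunk = args
        return model.predict(chunk)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X_train = rng.standard_normal((500, 30))          # 500 labeled "pixels", 30 bands
        y_train = (X_train[:, 0] > 0).astype(int)         # toy ground truth
        model = SVC(kernel="rbf").fit(X_train, y_train)

        scene = rng.standard_normal((100_000, 30))        # unlabeled scene to classify
        chunks = np.array_split(scene, 8)                 # independent tasks
        with Pool(processes=8) as pool:
            parts = pool.map(classify_chunk, [(model, c) for c in chunks])
        labels = np.concatenate(parts)
        print(labels.shape)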
Alexander Meets Michotte: A Simulation Tool Based on Pattern Programming and Phenomenology
ERIC Educational Resources Information Center
Basawapatna, Ashok
2016-01-01
Simulation and modeling activities, a key point of computational thinking, are currently not being integrated into the science classroom. This paper describes a new visual programming tool entitled the Simulation Creation Toolkit. The Simulation Creation Toolkit is a high level pattern-based phenomenological approach to bringing rapid simulation…
Studying Students' Attitudes on Using Examples of Game Source Code for Learning Programming
ERIC Educational Resources Information Center
Theodoraki, Aristea; Xinogalos, Stelios
2014-01-01
Games for learning are currently used in several disciplines for motivating students and enhancing their learning experience. This new approach of technology-enhanced learning has attracted researchers' and instructors' attention in the area of programming that is one of the most cognitively demanding fields in Computer Science. Several…
An Effective Approach to Biomedical Information Extraction with Limited Training Data
ERIC Educational Resources Information Center
Jonnalagadda, Siddhartha
2011-01-01
In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…
Backtrack Programming: A Computer-Based Approach to Group Problem Solving.
ERIC Educational Resources Information Center
Scott, Michael D.; Bodaken, Edward M.
Backtrack problem-solving appears to be a viable alternative to current problem-solving methodologies. It appears to have considerable heuristic potential as a conceptual and operational framework for small group communication research, as well as functional utility for the student group in the small group class or the management team in the…
Evaluating Cognitive Theory: A Joint Modeling Approach Using Responses and Response Times
ERIC Educational Resources Information Center
Klein Entink, Rinke H.; Kuhn, Jorg-Tobias; Hornke, Lutz F.; Fox, Jean-Paul
2009-01-01
In current psychological research, the analysis of data from computer-based assessments or experiments is often confined to accuracy scores. Response times, although being an important source of additional information, are either neglected or analyzed separately. In this article, a new model is developed that allows the simultaneous analysis of…
2005-06-01
of current military C2 organizations. The unit of analysis for organizational diagnosis is the Joint Task Force (JTF). It represents a multi-Service...Strategy as Structured Chaos Boston, MA: Harvard Business School Press (1998). [5] Burton, R.M. and Obel, B., Strategic Organizational Diagnosis and
Aircraft Survivability. Susceptibility Reduction. Fall 2010
2010-01-01
limits flexibility when issues are encountered during development. Once a program enters Engineering, Manufacturing, and Development (EMD), the...using a flexible, efficient computational environment based on a credible set of components. Unfortunately, current survivability codes contain many...
So Many Chemicals, So Little Time... Evolution of ...
Current testing is limited by traditional testing models and regulatory systems. An overview is given of high throughput screening approaches to provide broader chemical and biological coverage, toxicokinetics and molecular pathway data and tools to facilitate utilization for regulatory application. Presentation at the NCSU Toxicology lecture series on the Evolution of Computational Toxicology
Learner-Content-Interface as an Approach for Self-Reliant and Student-Centered Learning
ERIC Educational Resources Information Center
Nicolay, Robin; Schwennigcke, Bastian; Sahl, Sarah; Martens, Alke
2015-01-01
Conceptualization and implementation of computer-supported teaching and training are currently not tailored to the paradigm of learner centration. Many technical solutions lack transparency and consistency regarding the supported learner activities. An insight into learners' activities correlated with learning tasks is needed. In this paper we outline…
Gartner, Daniel; Zhang, Yiye; Padman, Rema
2018-06-01
Order sets are a critical component in hospital information systems that are expected to substantially reduce physicians' physical and cognitive workload and improve patient safety. Order sets represent time interval-clustered order items, such as medications prescribed at hospital admission, that are administered to patients during their hospital stay. In this paper, we develop a mathematical programming model and an exact and a heuristic solution procedure with the objective of minimizing physicians' cognitive workload associated with prescribing order sets. Furthermore, we provide structural insights into the problem which lead us to a valid lower bound on the order set size. In a case study using order data on Asthma patients with moderate complexity from a major pediatric hospital, we compare the hospital's current solution with the exact and heuristic solutions on a variety of performance metrics. Our computational results confirm our lower bound and reveal that using a time interval decomposition approach substantially reduces computation times for the mathematical program, as does a K-means clustering based decomposition approach which, however, does not guarantee optimality because it violates the lower bound. The results of comparing the mathematical program with the current order set configuration in the hospital indicate that cognitive workload can be reduced by about 20.2% by allowing 1 to 5 order sets, respectively. The comparison of the K-means based decomposition with the hospital's current configuration reveals a cognitive workload reduction of about 19.5%, also by allowing 1 to 5 order sets, respectively. We finally provide a decision support system to help practitioners analyze the current order set configuration, the results of the mathematical program and the heuristic approach.
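The flavor of the K-means based decomposition can be conveyed with a toy sketch: admissions are represented as binary vectors over order items, and clusters suggest candidate order sets. The data, the cluster count, and the inclusion threshold are illustrative assumptions; the paper's mathematical program and cognitive-workload objective are not reproduced here.

    # Toy illustration of a K-means based grouping of order items into order sets.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_admissions, n_items = 200, 12
    orders = (rng.random((n_admissions, n_items)) < 0.3).astype(int)  # synthetic order data

    k = 3  # number of order sets allowed
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(orders)

    # An item joins a cluster's order set if most admissions in that cluster use it.
    for c in range(k):
        members = orders[km.labels_ == c]
        order_set = np.flatnonzero(members.mean(axis=0) >= 0.5)
        print(f"order set {c}: items {order_set.tolist()}")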
LBM-EP: Lattice-Boltzmann method for fast cardiac electrophysiology simulation from 3D images.
Rapaka, S; Mansi, T; Georgescu, B; Pop, M; Wright, G A; Kamen, A; Comaniciu, Dorin
2012-01-01
Current treatments of heart rhythm disorders require careful planning and guidance for optimal outcomes. Computational models of cardiac electrophysiology are being proposed for therapy planning but current approaches are either too simplified or too computationally intensive for patient-specific simulations in clinical practice. This paper presents a novel approach, LBM-EP, to solve any type of mono-domain cardiac electrophysiology model at near real-time that is especially tailored for patient-specific simulations. The domain is discretized on a Cartesian grid with a level-set representation of the patient's heart geometry, previously estimated from images automatically. The cell model is calculated node-wise, while the transmembrane potential is diffused using the Lattice-Boltzmann method within the domain defined by the level-set. Experiments on synthetic cases, on a data set from CESC'10 and on one patient with myocardium scar showed that LBM-EP provides results comparable to an FEM implementation, while being 10-45 times faster. Fast, accurate, scalable and requiring no specific meshing, LBM-EP paves the way to efficient and detailed models of cardiac electrophysiology for therapy planning.
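The stream-and-collide structure of a lattice-Boltzmann solver can be shown with a minimal one-dimensional diffusion example for a scalar field standing in for the transmembrane potential; it omits the cell model, the 3D Cartesian grid and the level-set geometry of LBM-EP, and the parameters are illustrative.

    # Minimal 1-D (D1Q3) lattice-Boltzmann diffusion of a scalar field.
    import numpy as np

    nx, steps = 200, 500
    tau = 0.8                                  # relaxation time; D = (tau - 0.5) / 3
    w = np.array([2/3, 1/6, 1/6])              # D1Q3 weights for velocities {0, +1, -1}

    C = np.zeros(nx); C[nx // 2] = 1.0         # initial concentration spike
    f = w[:, None] * C[None, :]                # initialize populations at equilibrium

    for _ in range(steps):
        C = f.sum(axis=0)                      # macroscopic field
        feq = w[:, None] * C[None, :]          # diffusion equilibrium f_i^eq = w_i * C
        f += (feq - f) / tau                   # BGK collision
        f[1] = np.roll(f[1], 1)                # stream right-moving population
        f[2] = np.roll(f[2], -1)               # stream left-moving population

    print(C.sum(), C.max())                    # mass conserved, peak spread out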
Simulations of Solar Wind Turbulence
NASA Technical Reports Server (NTRS)
Goldstein, Melvyn L.; Usmanov, A. V.; Roberts, D. A.
2008-01-01
Recently we have restructured our approach to simulating magnetohydrodynamic (MHD) turbulence in the solar wind. Previously, we had defined a 'virtual' heliosphere that contained, for example, a tilted rotating current sheet, microstreams, quasi-two-dimensional fluctuations as well as Alfven waves. In this new version of the code, we use the global, time-stationary, WKB Alfven wave-driven solar wind model developed by Usmanov and described in Usmanov and Goldstein [2003] to define the initial state of the system. Consequently, current sheets, and fast and slow streams are computed self-consistently from an inner, photospheric, boundary. To this steady-state configuration, we add fluctuations close to, but above, the surface where the flow becomes super-Alfvenic. The time-dependent MHD equations are then solved using a semi-discrete third-order Central Weighted Essentially Non-Oscillatory (CWENO) numerical scheme. The computational domain now includes the entire sphere; the geometrical singularity at the poles is removed using the multiple grid approach described in Usmanov [1996]. Wave packets are introduced at the inner boundary such as to satisfy Faraday's Law [Yeh and Dryer, 1985] and their nonlinear evolution is followed in time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potash, Peter J.; Bell, Eric B.; Harrison, Joshua J.
Predictive models for tweet deletion have been a relatively unexplored area of Twitter-related computational research. We first approach the deletion of tweets as a spam detection problem, applying a small set of handcrafted features to improve upon the current state-of-the-art in predicting deleted tweets. Next, we apply our approach to a dataset of deleted tweets that better reflects the current deletion rate. Since tweets are deleted for reasons beyond just the presence of spam, we apply topic modeling and text embeddings in order to capture the semantic content of tweets that can lead to tweet deletion. Our goal is to create an effective model that has a low-dimensional feature space and is also language-independent. A lean model would be computationally advantageous when processing high volumes of Twitter data, which can reach 9,885 tweets per second. Our results show that a small set of spam-related features combined with word topics and character-level text embeddings provide the best F1 score when trained with a random forest model. The highest precision of the deleted tweet class is achieved by a modification of paragraph2vec to capture author identity.
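A rough sketch of the feature-combination idea is shown below: a few handcrafted spam-style features are stacked with a low-dimensional text representation and fed to a random forest. The features, the hashing-based representation and the toy labels are stand-ins, not the paper's feature set, topic model or paragraph2vec variant.

    # Combining handcrafted features with a dense text representation in a random forest.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import HashingVectorizer

    tweets = ["Free followers!!! click http://spam.example", "had a great day with friends"]
    labels = np.array([1, 0])                      # 1 = deleted, 0 = kept (toy labels)

    def handcrafted(text):
        # simple spam-style signals: links, punctuation, mentions, shouting, length
        return [text.count("http"), text.count("!"), text.count("@"),
                sum(w.isupper() for w in text.split()), len(text)]

    hand = np.array([handcrafted(t) for t in tweets])
    dense = HashingVectorizer(n_features=64).transform(tweets).toarray()  # low-dim text rep
    X = np.hstack([hand, dense])

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    print(clf.predict(X))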
Towards inverse modeling of turbidity currents: The inverse lock-exchange problem
NASA Astrophysics Data System (ADS)
Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison
2011-04-01
A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
Lexical is as lexical does: computational approaches to lexical representation
Woollams, Anna M.
2015-01-01
In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lanza, Mathieu; Lique, François, E-mail: francois.lique@univ-lehavre.fr
The determination of hyperfine structure resolved excitation cross sections and rate coefficients due to H₂ collisions is required to interpret astronomical spectra. In this paper, we present several theoretical approaches to compute these data. An almost exact recoupling approach and approximate sudden methods are presented. We apply these different approaches to the HCl–H₂ collisional system in order to evaluate their respective accuracy. HCl–H₂ hyperfine structure resolved cross sections and rate coefficients are then computed using recoupling and approximate sudden methods. As expected, the approximate sudden approaches are more accurate when the collision energy increases, and the results suggest that these approaches work better for the para-H₂ than for the ortho-H₂ colliding partner. For the first time, we present HCl–H₂ hyperfine structure resolved rate coefficients, computed here for temperatures ranging from 5 to 300 K. The usual Δj₁ = ΔF₁ propensity rules are observed for the hyperfine transitions. The new rate coefficients will significantly help the interpretation of interstellar HCl emission lines observed with current and future telescopes. We expect that these new data will allow a better determination of the HCl abundance in the interstellar medium, which is crucial for understanding interstellar chlorine chemistry.
The comparison of various approach to evaluation erosion risks and design control erosion measures
NASA Astrophysics Data System (ADS)
Kapicka, Jiri
2015-04-01
At present there is one methodology in the Czech Republic for computing and comparing erosion risks. This methodology also contains a method for designing erosion control measures. It is based on the Universal Soil Loss Equation (USLE) and its result, the long-term average annual rate of soil loss (G), and it is used by landscape planners. Data and statistics from the database of erosion events in the Czech Republic show that many problems and damages arise from local erosion episodes. The extent of these events and their impact depend on local precipitation, the current plant growth phase, and soil conditions. Such erosion events can cause damage to agricultural land, municipal property, and hydraulic structures even at locations that are in good condition from the point of view of the long-term average annual rate of erosion. Another way to compute and compare erosion risks is an episode-based approach. This paper presents a comparison of various approaches to computing erosion risks. The comparison was carried out for a locality from the database of erosion events on agricultural land in the Czech Republic where two erosion events have been recorded. The study area is a simple agricultural plot without any barriers that could strongly influence water flow and soil sediment transport. The computation of erosion risks (for all methodologies) was based on laboratory analyses of soil samples taken in the study area. The results of the USLE and MUSLE methodologies and of the mathematical model Erosion 3D were compared. Variances in the spatial distribution of the places with the highest soil erosion were compared and discussed. Another part presents differences in the designed erosion control measures when the design is based on the different methodologies. The results show variances in the computed erosion risks obtained with the different methodologies. These variances can open a discussion about how to compute and evaluate erosion risks in areas of different importance.
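The USLE underlying the current methodology is a simple product of factors, which the sketch below evaluates; the factor values are placeholders rather than data from the study area.

    # Universal Soil Loss Equation: A = R * K * LS * C * P (long-term average annual soil loss).
    def usle_soil_loss(R, K, LS, C, P):
        """A [t/ha/yr] = rainfall erosivity * soil erodibility * slope length-steepness
        * cover-management * support practice."""
        return R * K * LS * C * P

    # Placeholder factor values, for illustration only.
    print(usle_soil_loss(R=40.0, K=0.35, LS=1.2, C=0.25, P=1.0))  # 4.2 t/ha/yr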
Calculating orthologs in bacteria and Archaea: a divide and conquer approach.
Halachev, Mihail R; Loman, Nicholas J; Pallen, Mark J
2011-01-01
Among proteins, orthologs are defined as those that are derived by vertical descent from a single progenitor in the last common ancestor of their host organisms. Our goal is to compute a complete set of protein orthologs derived from all currently available complete bacterial and archaeal genomes. Traditional approaches typically rely on all-against-all BLAST searching which is prohibitively expensive in terms of hardware requirements or computational time (requiring an estimated 18 months or more on a typical server). Here, we present xBASE-Orth, a system for ongoing ortholog annotation, which applies a "divide and conquer" approach and adopts a pragmatic scheme that trades accuracy for speed. Starting at species level, xBASE-Orth carefully constructs and uses pan-genomes as proxies for the full collections of coding sequences at each level as it progressively climbs the taxonomic tree using the previously computed data. This leads to a significant decrease in the number of alignments that need to be performed, which translates into faster computation, making ortholog computation possible on a global scale. Using xBASE-Orth, we analyzed an NCBI collection of 1,288 bacterial and 94 archaeal complete genomes with more than 4 million coding sequences in 5 weeks and predicted more than 700 million ortholog pairs, clustered in 175,531 orthologous groups. We have also identified sets of highly conserved bacterial and archaeal orthologs and in so doing have highlighted anomalies in genome annotation and in the proposed composition of the minimal bacterial genome. In summary, our approach allows for scalable and efficient computation of the bacterial and archaeal ortholog annotations. In addition, due to its hierarchical nature, it is suitable for incorporating novel complete genomes and alternative genome annotations. The computed ortholog data and a continuously evolving set of applications based on it are integrated in the xBASE database, available at http://www.xbase.ac.uk/.
Hawking radiation, covariant boundary conditions, and vacuum states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Rabin; Kulkarni, Shailesh
2009-04-15
The basic characteristics of the covariant chiral current
NASA Technical Reports Server (NTRS)
Wood, M. E.
1980-01-01
Four-wire Wye-connected ac power systems exhibit peculiar steady-state fault characteristics when the fourth wire of three-phase induction motors is connected. The loss of one phase of the power source due to a series or shunt fault results in currents higher than anticipated on the remaining two phases. A theoretical approach to computing the fault currents and voltages is developed. A FORTRAN program is included in the appendix.
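One standard ingredient of such unbalanced-fault analysis is the symmetrical-components transform, sketched below with arbitrary example currents; this is only an illustration of the transform itself, not the report's theoretical development or its FORTRAN program.

    # Symmetrical-components decomposition of unbalanced phase currents.
    import numpy as np

    a = np.exp(2j * np.pi / 3)                            # 120-degree rotation operator
    A = np.array([[1, 1, 1], [1, a**2, a], [1, a, a**2]])  # [Ia, Ib, Ic] = A @ [I0, I1, I2]

    def sequence_components(Ia, Ib, Ic):
        """Return zero-, positive- and negative-sequence currents."""
        return np.linalg.solve(A, np.array([Ia, Ib, Ic]))

    # Phase A lost (e.g., one source phase open), B and C still carrying current:
    I0, I1, I2 = sequence_components(0.0, 10 * a**2, 10 * a)
    print(abs(I0), abs(I1), abs(I2))   # nonzero I0 flows in the fourth (neutral) wire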
SGFSC: speeding the gene functional similarity calculation based on hash tables.
Tian, Zhen; Wang, Chunyu; Guo, Maozu; Liu, Xiaoyan; Teng, Zhixia
2016-11-04
In recent years, many measures of gene functional similarity have been proposed and widely used in all kinds of essential research. These methods are mainly divided into two categories: pairwise approaches and group-wise approaches. However, a common problem with these methods is their time consumption, especially when measuring the gene functional similarities of a large number of gene pairs. The problem of computational efficiency for pairwise approaches is even more prominent because they depend on combining semantic similarities. Therefore, the efficient measurement of gene functional similarity remains a challenging problem. To speed up current gene functional similarity calculation methods, a novel two-step computing strategy is proposed: (1) establish a hash table for each method to store essential information obtained from the Gene Ontology (GO) graph and (2) measure gene functional similarity based on the corresponding hash table. There is no need to traverse the GO graph repeatedly for each method with the help of the hash table. The analysis of time complexity shows that the computational efficiency of these methods is significantly improved. We also implement a novel Speeding Gene Functional Similarity Calculation tool, namely SGFSC, which is bundled with seven typical measures using our proposed strategy. Further experiments show the great advantage of SGFSC in measuring gene functional similarity on the whole genomic scale. The proposed strategy is successful in speeding up current gene functional similarity calculation methods. SGFSC is an efficient tool that is freely available at http://nclab.hit.edu.cn/SGFSC . The source code of SGFSC can be downloaded from http://pan.baidu.com/s/1dFFmvpZ .
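The two-step strategy amounts to a classic memoization pattern: traverse the GO graph once to fill a hash table, then answer many pairwise queries from the table without re-traversing the graph. The tiny ontology and the Jaccard-style similarity in the sketch below are illustrative and are not SGFSC's actual measures or tables.

    # Hash-table memoization of per-term ancestor sets for fast pairwise similarity.
    parents = {            # toy GO-like DAG: child -> parents
        "GO:5": ["GO:3"], "GO:4": ["GO:2", "GO:3"],
        "GO:3": ["GO:1"], "GO:2": ["GO:1"], "GO:1": [],
    }

    ancestor_table = {}     # hash table filled once, step (1)

    def ancestors(term):
        if term not in ancestor_table:
            result = {term}
            for p in parents[term]:
                result |= ancestors(p)
            ancestor_table[term] = result
        return ancestor_table[term]

    def term_similarity(t1, t2):   # step (2): fast lookups, no repeated graph traversal
        a1, a2 = ancestors(t1), ancestors(t2)
        return len(a1 & a2) / len(a1 | a2)

    print(term_similarity("GO:5", "GO:4"))  # 2 shared ancestors of 5 total -> 0.4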
Automated eddy current analysis of materials
NASA Technical Reports Server (NTRS)
Workman, Gary L.
1990-01-01
This research effort focused on the use of eddy current techniques for characterizing flaws in graphite-based filament-wound cylindrical structures. A major emphasis was on incorporating artificial intelligence techniques into the signal analysis portion of the inspection process. Developing an eddy current scanning system using a commercial robot for inspecting graphite structures (and others) has been a goal in the overall concept and is essential for the final implementation for expert system interpretation. Manual scans, as performed in the preliminary work here, do not provide sufficiently reproducible eddy current signatures to be easily built into a real time expert system. The expert systems approach to eddy current signal analysis requires that a suitable knowledge base exist in which correct decisions as to the nature of the flaw can be performed. In eddy current or any other expert systems used to analyze signals in real time in a production environment, it is important to simplify computational procedures as much as possible. For that reason, we have chosen to use the measured resistance and reactance values for the preliminary aspects of this work. A simple computation, such as phase angle of the signal, is certainly within the real time processing capability of the computer system. In the work described here, there is a balance between physical measurements and finite element calculations of those measurements. The goal is to evolve into the most cost effective procedures for maintaining the correctness of the knowledge base.
NASA Astrophysics Data System (ADS)
Buongiorno Nardelli, Marco
High-Throughput Quantum-Mechanics computation of materials properties by ab initio methods has become the foundation of an effective approach to materials design, discovery and characterization. This data-driven approach to materials science currently presents the most promising path to the development of advanced technological materials that could solve or mitigate important social and economic challenges of the 21st century. In particular, the rapid proliferation of computational data on materials properties presents the possibility to complement and extend materials property databases where the experimental data is lacking and difficult to obtain. Enhanced repositories such as AFLOWLIB open novel opportunities for structure discovery and optimization, including uncovering of unsuspected compounds, metastable structures and correlations between various properties. The practical realization of these opportunities depends almost exclusively on the design of efficient algorithms for electronic structure simulations of realistic material systems beyond the limitations of the current standard theories. In this talk, I will review recent progress in theoretical and computational tools, and in particular, discuss the development and validation of novel functionals within Density Functional Theory and of local basis representations for effective ab-initio tight-binding schemes. Marco Buongiorno Nardelli is a pioneer in the development of computational platforms for theory/data/applications integration rooted in his profound and extensive expertise in the design of electronic structure codes and in his vision for sustainable and innovative software development for high-performance materials simulations. His research activities range from the design and discovery of novel materials for 21st century applications in renewable energy, environment, nano-electronics and devices, the development of advanced electronic structure theories and high-throughput techniques in materials genomics and computational materials design, to an active role as community scientific software developer (QUANTUM ESPRESSO, WanT, AFLOWpi).
Methods and compositions for protection of cells and tissues from computed tomography radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grdina, David J.
Described are methods for preventing or inhibiting genomic instability in cells affected by diagnostic radiology procedures employing ionizing radiation. Embodiments include methods of preventing or inhibiting genomic instability in cells affected by computed tomography (CT) radiation. Subjects receiving ionizing radiation may be those persons suspected of having cancer, cancer patients having received or currently receiving cancer therapy, and/or those patients having received previous ionizing radiation, including those who are approaching or have exceeded the recommended total radiation dose for a person.
Development of STOLAND, a versatile navigation, guidance and control system
NASA Technical Reports Server (NTRS)
Young, L. S.; Hansen, Q. M.; Rouse, W. E.; Osder, S. S.
1972-01-01
STOLAND has been developed to perform navigation, guidance, control, and flight management experiments in advanced V/STOL aircraft. The experiments have broad requirements and have dictated that STOLAND be capable of providing performance that would be realistic and equivalent to a wide range of current and future avionics systems. An integrated digital concept using modern avionics components was selected as the simplest approach to maximizing versatility and growth potential. Unique flexibility has been obtained by use of a single, general-purpose digital computer for all navigation, guidance, control, and displays computation.
An Application Development Platform for Neuromorphic Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dean, Mark; Chan, Jason; Daffron, Christopher
2016-01-01
Dynamic Adaptive Neural Network Arrays (DANNAs) are neuromorphic computing systems developed as a hardware based approach to the implementation of neural networks. They feature highly adaptive and programmable structural elements, which model artificial neural networks with spiking behavior. We design them to solve problems using evolutionary optimization. In this paper, we highlight the current hardware and software implementations of DANNA, including their features, functionalities and performance. We then describe the development of an Application Development Platform (ADP) to support efficient application implementation and testing of DANNA based solutions. We conclude with future directions.
NASA Technical Reports Server (NTRS)
Bareiss, L. E.
1978-01-01
The paper presents a compilation of the results of a systems level Shuttle/payload contamination analysis and related computer modeling activities. The current technical assessment of the contamination problems anticipated during the Spacelab program are discussed and recommendations are presented on contamination abatement designs and operational procedures based on experience gained in the field of contamination analysis and assessment, dating back to the pre-Skylab era. The ultimate test of the Shuttle/Payload Contamination Evaluation program will be through comparison of predictions with measured levels of contamination during actual flight.
Molecular Nanotechnology and Designs of Future
NASA Technical Reports Server (NTRS)
Srivastava, Deepak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
Reviewing the status of current approaches and future projections, as already published in the scientific journals and books, the talk will summarize the direction in which computational and experimental molecular nanotechnologies are progressing. Examples of the nanotechnological approach to the concepts of design and simulation of atomically precise materials in a variety of interdisciplinary areas will be presented. The concepts of hypothetical molecular machines and assemblers as explained in Drexler's and Merkle's already published work and Han et al.'s WWW distributed molecular gears will be explained.
2005-03-01
International Conference On Computers Communications and Networks, 153-161, Lafayette, L.A. Deitel, H.M. and P.J. Deitel. 2003. C++ How to Program ...of this study is to provide an additional performance evaluation technique for the TNT program of Naval Postgraduate School. The current approach...case are the PAMAS and DBTMA protocols. Toh (2002) illustrates how these approaches succeed in solving the problem. In order to address all the
Biomechanics of metastatic disease in the vertebral column.
Whyne, Cari M
2014-06-01
Metastatic disease in the vertebral column compromises the structural stability of the spine leading to increased risk of fracture. The complex patterns of osteolytic and osteoblastic disease within the bony spine have motivated a multimodal approach to better characterize the biomechanics of tumor-involved bone. This review presents our current understanding of the biomechanical behavior of metastatically involved vertebrae, and experimental and computational image-based approaches that have been employed to quantify structural integrity in preclinical models with translation to clinical data sets.
Modeling of a carbon nanotube ultracapacitor.
Orphanou, Antonis; Yamada, Toshishige; Yang, Cary Y
2012-03-09
The modeling of carbon nanotube ultracapacitor (CNU) performance based on the simulation of electrolyte ion motion between the cathode and the anode is described. Using a molecular dynamics (MD) approach, the equilibrium positions of the electrode charges interacting through the Coulomb potential are determined, which in turn yield the equipotential surface and electric field associated with the capacitor. With an applied ac voltage, the current is computed based on the nanotube and electrolyte particle distribution and interaction, resulting in the frequency-dependent impedance Z(ω). From the current and impedance profiles, the Nyquist and cyclic voltammetry (CV) plots are then extracted. The results of these calculations compare well with existing experimental data. A lumped-element equivalent circuit for the CNU is proposed and the impedance computed from this circuit correlates well with the simulated and measured impedances.
High-Fidelity 3D-Nanoprinting via Focused Electron Beams: Computer-Aided Design (3BID)
Fowlkes, Jason D.; Winkler, Robert; Lewis, Brett B.; ...
2018-02-14
Currently, there are few techniques that allow true 3D-printing on the nanoscale. The most promising candidate to fill this void is focused electron-beam-induced deposition (FEBID), a resist-free, nanofabrication compatible, direct-write method. The basic working principles of a computer-aided design (CAD) program (3BID) enabling 3D-FEBID are presented, and the program is simultaneously released for download. The 3BID capability significantly expands the currently limited toolbox for 3D-nanoprinting, providing access to geometries for optoelectronic, plasmonic, and nanomagnetic applications that were previously unattainable due to the lack of a suitable method for synthesis. In conclusion, the CAD approach supplants trial and error toward more precise/accurate FEBID required for real applications/device prototyping.
The Scalable Checkpoint/Restart Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.
The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1,094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment
NASA Astrophysics Data System (ADS)
Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.
2017-12-01
Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have been carried out on this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to present a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparing to the size-dependent mobilities predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is the critical particle size of incipient motion modified to account for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
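The probabilistic split can be sketched directly from a lognormal distribution object: the threshold particle size divides the pre-entrainment PSD into an entrained fraction and a post-entrainment bed. The distribution parameters and the threshold below are illustrative, not the calibrated values of the study.

    # Lognormal pre-entrainment PSD split at a threshold particle size.
    import numpy as np
    from scipy import stats

    median_d_mm, sigma_ln = 0.5, 0.8              # assumed lognormal PSD of bed sediment
    psd = stats.lognorm(s=sigma_ln, scale=median_d_mm)

    d_threshold_mm = 0.7                          # threshold size of incipient motion
    entrained_fraction = psd.cdf(d_threshold_mm)  # fraction finer than the threshold
    print(f"fraction of bed finer than threshold (entrained): {entrained_fraction:.2f}")

    # PSD of the entrained sediment = pre-entrainment PDF truncated below the threshold.
    d = np.linspace(0.01, 2.0, 400)
    entrained_pdf = np.where(d < d_threshold_mm, psd.pdf(d) / entrained_fraction, 0.0)
    print(entrained_pdf.sum() * (d[1] - d[0]))    # approximately 1, i.e. a normalized PSD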
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state-of-the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
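The decomposition idea can be illustrated by performing a two-dimensional resize as two independent one-dimensional passes; plain linear interpolation stands in here for the modified registration-based one-dimensional control grid interpolator of the paper.

    # A 2-D resize decomposed into independent 1-D interpolation passes.
    import numpy as np

    def resize_separable(img, new_h, new_w):
        h, w = img.shape
        # Pass 1: 1-D interpolation of every row to the new width
        x_old, x_new = np.linspace(0, 1, w), np.linspace(0, 1, new_w)
        rows = np.array([np.interp(x_new, x_old, img[i]) for i in range(h)])
        # Pass 2: 1-D interpolation of every column to the new height
        y_old, y_new = np.linspace(0, 1, h), np.linspace(0, 1, new_h)
        return np.array([np.interp(y_new, y_old, rows[:, j]) for j in range(new_w)]).T

    img = np.arange(16.0).reshape(4, 4)
    print(resize_separable(img, 8, 8).shape)   # (8, 8)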
Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A
2012-01-01
Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
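The BEAGLE API itself is not reproduced here; as orientation, the sketch below shows the kind of per-site likelihood kernel such a library accelerates, for the simplest possible case of two aligned sequences under the Jukes-Cantor model. The sequences and branch length are illustrative.

```python
# Toy illustration of the phylogenetic likelihood kernel that libraries such
# as BEAGLE accelerate: per-site likelihood of two aligned sequences under
# the Jukes-Cantor substitution model, for a given branch length t.
import math

def jc_prob(same, t):
    """Jukes-Cantor transition probability after branch length t."""
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if same else 0.25 - 0.25 * e

def log_likelihood(seq_a, seq_b, t):
    # Stationary base frequency 1/4 times the transition probability per site
    return sum(math.log(0.25 * jc_prob(a == b, t)) for a, b in zip(seq_a, seq_b))

print(log_likelihood("ACGTACGT", "ACGTACCT", t=0.1))
```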
The Applicability of Emerging Quantum Computing Capabilities to Exo-Planet Research
NASA Astrophysics Data System (ADS)
Correll, Randall; Worden, S.
2014-01-01
In conjunction with the Universities Space Research Association and Google, Inc., NASA Ames has acquired a quantum computing device built by DWAVE Systems with approximately 512 “qubits.” Quantum computers have the feature that their capabilities to find solutions to problems with large numbers of variables scale linearly with the number of variables rather than exponentially with that number. These devices may have significant applicability to detection of exoplanet signals in noisy data. We have therefore explored the application of quantum computing to analyse stellar transiting exoplanet data from NASA’s Kepler Mission. The analysis of the case studies was done using DWAVE Systems’ BlackBox compiler software emulator, although one dataset was run successfully on DWAVE Systems’ 512-qubit Vesuvius machine. The approach first extracts a list of candidate transits from the photometric lightcurve of a given Kepler target, and then applies a quantum annealing algorithm to find periodicity matches between subsets of the candidate transit list. We examined twelve case studies and were successful in reproducing the results of the Kepler science pipeline in finding validated exoplanets, and matched the results for a pair of candidate exoplanets. We conclude that the current implementation of the algorithm is not sufficiently challenging to require a quantum computer as opposed to a conventional computer. We are developing more robust algorithms better tailored to the quantum computer, and we believe that our approach has the potential to extract exoplanet transits from Kepler data in some cases where a conventional approach would not. Additionally, we believe the new quantum capabilities may have even greater relevance for new exoplanet data sets such as that contemplated for NASA’s Transiting Exoplanet Survey Satellite (TESS) and other astrophysics data sets.
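A classical sketch of the periodicity-matching step, assuming candidate transit times are scored by how tightly they fold onto a trial period; this is not the QUBO/annealing formulation or the BlackBox interface, and the times and period grid are illustrative.

```python
# Classical sketch of the periodicity-matching idea: score how tightly a set
# of candidate transit times folds onto a trial period, then scan periods.
# Times and periods are illustrative; sub-harmonics can alias in this toy scan.
import numpy as np

def fold_score(transit_times, period):
    phases = np.mod(transit_times, period) / period      # phase in [0, 1)
    # Circular concentration of the phases (1 = perfectly periodic)
    return np.abs(np.mean(np.exp(2j * np.pi * phases)))

candidate_times = np.array([3.1, 13.0, 22.9, 33.1, 42.8])   # days, illustrative
trial_periods = np.linspace(2.0, 20.0, 2000)
scores = [fold_score(candidate_times, p) for p in trial_periods]
best = trial_periods[int(np.argmax(scores))]
print(f"best-matching period ~ {best:.2f} d")
```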
Machine learning action parameters in lattice quantum chromodynamics
Shanahan, Phiala; Trewartha, Daneil; Detmold, William
2018-05-16
Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
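A hedged sketch of the parametric-regression task with a small neural network, on synthetic data; the features, targets and network size are placeholders and do not reflect the custom symmetry-aware layers described above.

```python
# Minimal sketch of parametric regression with a small neural network, as a
# stand-in for regressing action parameters from lattice-derived features.
# The synthetic data, features and network size are purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))                  # e.g. summary observables per configuration
beta = rng.normal(size=16)
y = np.tanh(X @ beta) + 0.05 * rng.normal(size=2000)   # toy "action parameter"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("held-out R^2:", round(net.score(X_te, y_te), 3))
```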
Tong, Wing-Chiu; Ghouri, Iffath; Taggart, Michael J.
2014-01-01
The uterus and heart share the important physiological feature whereby contractile activation of the muscle tissue is regulated by the generation of periodic, spontaneous electrical action potentials (APs). Preterm birth arising from premature uterine contractions is a major complication of pregnancy and there remains a need to pursue avenues of research that facilitate the use of drugs, tocolytics, to limit these inappropriate contractions without deleterious actions on cardiac electrical excitation. A novel approach is to make use of mathematical models of uterine and cardiac APs, which incorporate many ionic currents contributing to the AP forms, and test the cell-specific responses to interventions. We have used three such models—of uterine smooth muscle cells (USMC), cardiac sinoatrial node cells (SAN), and ventricular cells—to investigate the relative effects of reducing two important voltage-gated Ca currents—the L-type (ICaL) and T-type (ICaT) Ca currents. Reduction of ICaL (10%) alone, or ICaT (40%) alone, blunted USMC APs with little effect on ventricular APs and only mild effects on SAN activity. Larger reductions in either current further attenuated the USMC APs but with also greater effects on SAN APs. Encouragingly, a combination of ICaL and ICaT reduction did blunt USMC APs as intended with little detriment to APs of either cardiac cell type. Subsequent overlapping maps of ICaL and ICaT inhibition profiles from each model revealed a range of combined reductions of ICaL and ICaT over which an appreciable diminution of USMC APs could be achieved with no deleterious action on cardiac SAN or ventricular APs. This novel approach illustrates the potential for computational biology to inform us of possible uterine and cardiac cell-specific mechanisms. Incorporating such computational approaches in future studies directed at designing new, or repurposing existing, tocolytics will be beneficial for establishing a desired uterine specificity of action. PMID:25360118
Prospects for improving the representation of coastal and shelf seas in global ocean models
NASA Astrophysics Data System (ADS)
Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard
2017-02-01
Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ~8 % of regions < 500 m deep, but this increases to ~70 % for a 1/72° model, so resolving scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic nucleus for a European model of the ocean (NEMO) simulations; the latter includes tides and a k-ε vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models. The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a comparable computational cost to a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ~1.5 km would be routinely practical in about a decade given substantial effort on numerical and computational development. For complex Earth system models, this extends to about 2 decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
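A back-of-envelope sketch of this kind of cost-scaling estimate, assuming cost grows with the cube of horizontal refinement and that available compute doubles every year; both assumptions are illustrative and are not the paper's calibrated cost model.

```python
# Back-of-envelope cost-scaling sketch: given a reference model cost and an
# assumed annual growth of available compute, estimate when a finer-resolution
# configuration uses a comparable share of the machine. The cubic scaling with
# refinement and the doubling-every-year growth rate are assumptions for
# illustration only.
def years_until_affordable(refinement, growth_per_year=2.0):
    cost_ratio = refinement ** 3          # 2 horizontal dims + shorter time step
    years, capacity = 0, 1.0
    while capacity < cost_ratio:
        capacity *= growth_per_year
        years += 1
    return years

# Refinement factors relative to a 1/4 degree reference configuration
for label, factor in [("1/12 degree", 3), ("1/72 degree", 18)]:
    print(label, "-> ~", years_until_affordable(factor), "years after the reference")
```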
NASA Astrophysics Data System (ADS)
Vasco, D. W.
2018-04-01
Following an approach used in quantum dynamics, an exponential representation of the hydraulic head transforms the diffusion equation governing pressure propagation into an equivalent set of ordinary differential equations. Using a reservoir simulator to determine one set of dependent variables leaves a reduced set of equations for the path of a pressure transient. Unlike the current approach for computing the path of a transient, based on a high-frequency asymptotic solution, the trajectories resulting from this new formulation are valid for arbitrary spatial variations in aquifer properties. For a medium containing interfaces and layers with sharp boundaries, the trajectory mechanics approach produces paths that are compatible with travel time fields produced by a numerical simulator, while the asymptotic solution produces paths that bend too strongly into high permeability regions. The breakdown of the conventional asymptotic solution, due to the presence of sharp boundaries, has implications for model parameter sensitivity calculations and the solution of the inverse problem. For example, near an abrupt boundary, trajectories based on the asymptotic approach deviate significantly from regions of high sensitivity observed in numerical computations. In contrast, paths based on the new trajectory mechanics approach coincide with regions of maximum sensitivity to permeability changes.
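For orientation, the conventional high-frequency asymptotic solution referred to above can be written as follows (with an e^{-iωt} time convention); this is background, not the new trajectory-mechanics formulation developed in the paper.

```latex
% Conventional high-frequency asymptotic form referred to in the abstract
% (background only; the trajectory-mechanics formulation itself is not shown).
% Frequency-domain diffusion equation for the pressure p(x, omega), with
% conductivity K, storage S and time convention e^{-i omega t}:
\nabla \cdot \bigl( K(\mathbf{x}) \nabla p \bigr) + i \omega \, S(\mathbf{x})\, p = 0 ,
\qquad
p(\mathbf{x},\omega) \approx A(\mathbf{x}) \, e^{-\sqrt{-i\omega}\,\sigma(\mathbf{x})} .
% Keeping the highest-order terms in omega gives an eikonal equation for the
% pseudo-phase sigma, whose characteristics are the conventional transient
% paths that bend toward high-diffusivity regions:
\left| \nabla \sigma \right|^{2} = \frac{S(\mathbf{x})}{K(\mathbf{x})} .
```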
The AIST Managed Cloud Environment
NASA Astrophysics Data System (ADS)
Cook, S.
2016-12-01
ESTO is currently developing and implementing the AIST Managed Cloud Environment (AMCE) to offer cloud computing services to ESTO-funded PIs to conduct their project research. AIST will provide projects access to a cloud computing framework that incorporates NASA security, technical, and financial standards, on which projects can freely store, run, and process data. Currently, many projects led by research groups outside of NASA are not aware of the requirements, or lack the resources, to implement NASA standards in their research, which limits the likelihood of infusing the work into NASA applications. Offering this environment to PIs will allow them to conduct their project research using the many benefits of cloud computing. In addition to the well-known cost and time savings that it allows, it also provides scalability and flexibility. The AMCE will facilitate infusion and end user access by ensuring standardization and security. This approach will ultimately benefit ESTO, the science community, and the research, allowing the technology developments to have quicker and broader applications.
Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.
2013-01-01
There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied by a specified tolerance or bias error, and the difference in the results is used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data are unavailable.
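A minimal sketch of forming a confidence interval on a CFD output from a handful of perturbed runs using the Student-t distribution; the run values and confidence level are illustrative, and the sketch omits the input-ranking step.

```python
# Sketch: estimate an uncertainty band on a CFD output (e.g. a heat transfer
# coefficient) from a small set of runs with perturbed inputs, using the
# Student-t distribution. The run values and confidence level are illustrative.
import numpy as np
from scipy import stats

h_runs = np.array([48.2, 51.0, 49.5, 50.3, 47.8])   # W/(m^2 K), toy values
n = len(h_runs)
mean = h_runs.mean()
sem = h_runs.std(ddof=1) / np.sqrt(n)                # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)                # 95% two-sided interval

print(f"h = {mean:.1f} +/- {t_crit * sem:.1f} W/(m^2 K) (95% CI, n={n})")
```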
Challenges of Future High-End Computing
NASA Technical Reports Server (NTRS)
Bailey, David; Kutler, Paul (Technical Monitor)
1998-01-01
The next major milestone in high performance computing is a sustained rate of one Pflop/s (also written one petaflops, or 10^15 floating-point operations per second). In addition to prodigiously high computational performance, such systems must of necessity feature very large main memories, as well as comparably high I/O bandwidth and huge mass storage facilities. The current consensus of scientists who have studied these issues is that "affordable" petaflops systems may be feasible by the year 2010, assuming that certain key technologies continue to progress at current rates. One important question is whether applications can be structured to perform efficiently on such systems, which are expected to incorporate many thousands of processors and deeply hierarchical memory systems. To answer these questions, advanced performance modeling techniques, including simulation of future architectures and applications, may be required. It may also be necessary to formulate "latency tolerant algorithms" and other completely new algorithmic approaches for certain applications. This talk will give an overview of these challenges.
Some Comments on Topological Approaches to the π-Electron Currents in Conjugated Systems.
Dickens, Timothy K; Gomes, José A N F; Mallion, Roger B
2011-11-08
Within the past two years, three sets of independent authors (Mandado, Ciesielski et al., and Randić) have proposed methods in which π-electron currents in conjugated systems are estimated by invoking the concept of circuits of conjugation. These methods are here compared with ostensibly similar approaches published more than 30 years ago by two of the present authors (Gomes and Mallion) and (likewise independently) by Gayoso. Patterns of bond currents and ring currents computed by these methods for the nonalternant isomer of coronene that was studied by Randić are also systematically compared with those calculated by the Hückel-London-Pople-McWeeny (HLPM) "topological" approach and with the ab initio, "ipso-centric" current-density maps of Balaban et al. These all agree that a substantial diamagnetic π-electron current flows around the periphery of the selected structure (which could be thought of as a "perturbed" [18]-annulene), and consideration is given to the differing trends predicted by these several methods for the π-electron currents around its central six-membered ring and in its internal bonds. It is observed that, for any method in which calculated π-electron currents respect Kirchhoff's Laws of current conservation at a junction, consideration of bond currents, as an alternative to the more traditional ring currents, can give a different insight into the magnetic properties of conjugated systems. However, provided that charge/current conservation is guaranteed, or that Kirchhoff's First Law holds for bond currents instead of the more general current densities, ring currents represent a more efficient way of describing the molecular response to the external magnetic field: ring currents are independent quantities, while bond currents are not.
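The bond-current/ring-current relationship mentioned in the closing sentences can be illustrated with a two-ring example: with a consistent circulation sense, each bond current is the adjacent ring current, or the difference of the two ring currents for a fusion bond, so Kirchhoff's first law holds at every junction by construction. The ring-current values below are illustrative.

```python
# Sketch of the bond-current / ring-current relation discussed above for two
# fused rings (e.g. a naphthalene-like circuit). Ring currents are illustrative
# dimensionless values; bond currents follow from them by construction.
ring_currents = {"A": 1.00, "B": 0.85}

J_A, J_B = ring_currents["A"], ring_currents["B"]
peripheral_A = J_A                 # bond bordered only by ring A
peripheral_B = J_B                 # bond bordered only by ring B
shared_AB = J_A - J_B              # fusion bond shared by rings A and B

# Current conservation at a fusion atom (one bond of each kind meets there)
assert abs(peripheral_A - (peripheral_B + shared_AB)) < 1e-12
print(f"bond currents: A-edge {peripheral_A}, B-edge {peripheral_B}, shared {shared_AB:.2f}")
```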
Computational scheme for the prediction of metal ion binding by a soil fulvic acid
Marinsky, J.A.; Reddy, M.M.; Ephraim, J.H.; Mathuthu, A.S.
1995-01-01
The dissociation and metal ion binding properties of a soil fulvic acid have been characterized. Information thus gained was used to compensate for salt and site heterogeneity effects in metal ion complexation by the fulvic acid. An earlier computational scheme has been modified by incorporating an additional step which improves the accuracy of metal ion speciation estimates. An algorithm is employed for the prediction of metal ion binding by organic acid constituents of natural waters (once the organic acid is characterized in terms of functional group identity and abundance). The approach discussed here, currently used with a spreadsheet program on a personal computer, is conceptually envisaged to be compatible with computer programs available for ion binding by inorganic ligands in natural waters.
Computer assisted thermal-vacuum testing
NASA Technical Reports Server (NTRS)
Petrie, W.; Mikk, G.
1977-01-01
In testing complex systems and components under dynamic thermal-vacuum environments, it is desirable to optimize the environment control sequence in order to reduce test duration and cost. This paper describes an approach in which a computer is utilized as part of the test control operation. Real-time test data are made available to the computer through time-sharing terminals at appropriate time intervals. A mathematical model of the test article and environmental control equipment is then driven by the real-time data to yield current thermal status, temperature analysis, trend prediction, and recommended thermal control setting changes to arrive at the required thermal condition. The data acquisition interface and the time-sharing hook-up to an IBM-370 computer are described, along with a typical control program and data demonstrating its use.
Solid-liquid phase coexistence of alkali nitrates from molecular dynamics simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayaraman, Saivenkataraman
2010-03-01
Alkali nitrate eutectic mixtures are finding application as industrial heat transfer fluids in concentrated solar power generation systems. An important property for such applications is the melting point, or phase coexistence temperature. We have computed melting points for lithium, sodium and potassium nitrate from molecular dynamics simulations using a recently developed method, which uses thermodynamic integration to compute the free energy difference between the solid and liquid phases. The computed melting point for NaNO3 was within 15 K of its experimental value, while for LiNO3 and KNO3, the computed melting points were within 100 K of the experimental values [4]. We are currently extending the approach to calculate melting temperatures for binary mixtures of lithium and sodium nitrate.
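A generic sketch of the thermodynamic-integration step, in which the free-energy difference is the integral of the ensemble average of dU/dλ over the coupling parameter; the average values below are synthetic stand-ins for simulation output, and this is not the specific solid-liquid coexistence workflow of the report.

```python
# Generic thermodynamic-integration sketch: the free-energy difference between
# two states coupled by a parameter lambda is the integral of <dU/dlambda>.
# The ensemble averages below are synthetic stand-ins for MD output.
import numpy as np

lambdas = np.linspace(0.0, 1.0, 11)
dU_dlambda = np.array([12.4, 11.8, 10.9, 9.7, 8.2, 6.6, 5.1, 3.9, 2.8, 2.1, 1.7])  # kJ/mol, toy

# Trapezoidal quadrature over lambda
delta_F = float(np.sum((dU_dlambda[1:] + dU_dlambda[:-1]) / 2 * np.diff(lambdas)))
print(f"Delta F ~ {delta_F:.1f} kJ/mol")
```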
Single-Cell Genomics: Approaches and Utility in Immunology.
Neu, Karlynn E; Tang, Qingming; Wilson, Patrick C; Khan, Aly A
2017-02-01
Single-cell genomics offers powerful tools for studying immune cells, which make it possible to observe rare and intermediate cell states that cannot be resolved at the population level. Advances in computer science and single-cell sequencing technology have created a data-driven revolution in immunology. The challenge for immunologists is to harness computing and turn an avalanche of quantitative data into meaningful discovery of immunological principles, predictive models, and strategies for therapeutics. Here, we review the current literature on computational analysis of single-cell RNA-sequencing data and discuss underlying assumptions, methods, and applications in immunology, and highlight important directions for future research.
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1990-01-01
Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single-processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.
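A loose modern analogue of the one-grid-per-processor idea, with each overlapped grid block advanced by its own worker process; the "solver" is a trivial stand-in, and this is not the Cray multitasking mechanism described above.

```python
# Loose modern analogue of the one-grid-per-processor idea: each overlapping
# grid block is advanced by its own worker process, then results are gathered.
# The "solver" here is a trivial smoothing step, not a Navier-Stokes sweep.
import numpy as np
from multiprocessing import Pool

def advance_block(block):
    # placeholder smoothing step standing in for one implicit solver sweep
    return 0.25 * (np.roll(block, 1, 0) + np.roll(block, -1, 0)
                   + np.roll(block, 1, 1) + np.roll(block, -1, 1))

if __name__ == "__main__":
    blocks = [np.random.rand(64, 64) for _ in range(4)]   # four overlapped grids
    with Pool(processes=4) as pool:
        new_blocks = pool.map(advance_block, blocks)
    print(len(new_blocks), new_blocks[0].shape)
```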
Drug repurposing: translational pharmacology, chemistry, computers and the clinic.
Issa, Naiem T; Byers, Stephen W; Dakshanamurthy, Sivanesan
2013-01-01
The process of discovering a pharmacological compound that elicits a desired clinical effect with minimal side effects is a challenge. Prior to the advent of high-performance computing and large-scale screening technologies, drug discovery was largely a serendipitous endeavor, as in the case of thalidomide for erythema nodosum leprosum or cancer drugs in general derived from flora located in far-reaching geographic locations. More recently, de novo drug discovery has become a more rationalized process where drug-target-effect hypotheses are formulated on the basis of already known compounds/protein targets and their structures. Although this approach is hypothesis-driven, the actual success has been very low, contributing to the soaring costs of research and development as well as the diminished pharmaceutical pipeline in the United States. In this review, we discuss the evolution in computational pharmacology as the next generation of successful drug discovery and implementation in the clinic where high-performance computing (HPC) is used to generate and validate drug-target-effect hypotheses completely in silico. The use of HPC would decrease development time and errors while increasing productivity prior to in vitro, animal and human testing. We highlight approaches in chemoinformatics, bioinformatics as well as network biopharmacology to illustrate potential avenues from which to design clinically efficacious drugs. We further discuss the implications of combining these approaches into an integrative methodology for high-accuracy computational predictions within the context of drug repositioning for the efficient streamlining of currently approved drugs back into clinical trials for possible new indications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, M.A.; Craig, J.I.
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both systems engineering and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that they be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
Development of computer-based analytical tool for assessing physical protection system
NASA Astrophysics Data System (ADS)
Mardhi, Alim; Pengvanich, Phongphaeth
2016-01-01
Assessment of physical protection system effectiveness is a priority for ensuring optimum protection against unlawful acts directed at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool offers a practical way to evaluate likely threat scenarios. Several tools, such as EASI and SAPE, are currently available for immediate use; however, for our research purposes it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool that uses a network methodological approach to model adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response. The tool can identify the most critical path and quantify the probability of effectiveness of the system as a performance measure.
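A simplified sketch of the network idea: each adversary path is a sequence of protection elements, a path's probability of interruption counts only detections that occur while the remaining delay still exceeds the response time, and the most critical path is the one with the lowest score. The graph, probabilities, delays and response time are illustrative assumptions.

```python
# Simplified sketch of adversary-path analysis on a small network of paths.
# A path's probability of interruption counts only detections that occur while
# the remaining downstream delay still exceeds the guard-force response time.
# All numbers and path definitions are illustrative.
RESPONSE_TIME = 90.0   # seconds

# (detection probability, delay in seconds) per element, in traversal order
paths = {
    "fence-door-vault": [(0.5, 30), (0.7, 60), (0.9, 120)],
    "roof-hatch-vault": [(0.2, 20), (0.4, 40), (0.9, 120)],
}

def prob_interruption(elements):
    p_no_interrupt = 1.0
    for i, (p_det, _) in enumerate(elements):
        remaining_delay = sum(d for _, d in elements[i:])
        if remaining_delay > RESPONSE_TIME:          # detection here is timely
            p_no_interrupt *= (1.0 - p_det)
    return 1.0 - p_no_interrupt

scores = {name: prob_interruption(e) for name, e in paths.items()}
critical = min(scores, key=scores.get)
print(scores, "-> most critical path:", critical)
```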
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Conrad D.; Schiess, Adrian B.; Howell, Jamie
2013-10-01
The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
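A quick check, using only the numbers given in the paragraph, shows that the ratio of operations per second per watt per cubic centimetre is indeed of order 10^12.

```python
# Quick check of the 10^12 figure quoted above: ratio of operations per second
# per watt per cubic centimetre for brain vs. supercomputer, using the numbers
# given in the paragraph.
brain = 1e16 / (20 * 1200)            # ops/s / (W * cm^3)
super_ = 1e15 / (3e6 * 1.5e9)         # 3 MW, 1500 m^3 = 1.5e9 cm^3
print(f"advantage ~ {brain / super_:.1e}")   # ~2e12, i.e. of order 10^12
```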
Quaglio, Pietro; Yegenoglu, Alper; Torre, Emiliano; Endres, Dominik M; Grün, Sonja
2017-01-01
Repeated, precise sequences of spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known under the name of spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel spike train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel spike trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically significant patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis.
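A minimal sketch of the surrogate-based significance idea: count occurrences of a candidate spatio-temporal pattern in the data, then in spike-time-jittered surrogates, and report an empirical p-value. This illustrates the concept only and is not the SPADE mining or statistics pipeline itself.

```python
# Minimal sketch of surrogate-based significance testing for a candidate
# spatio-temporal spike pattern: count its occurrences in the data, then in
# spike-time-jittered surrogates, and report an empirical p-value.
import numpy as np

rng = np.random.default_rng(1)

def count_pattern(spikes, lags, tol=2.0):
    """Count spikes of neuron 0 that each neuron i follows after lags[i] +/- tol (ms)."""
    count = 0
    for t0 in spikes[0]:
        if all(np.any(np.abs(spikes[i] - (t0 + lag)) <= tol)
               for i, lag in enumerate(lags[1:], start=1)):
            count += 1
    return count

# Toy data: 3 neurons, 1 s of spiking, with an embedded (0, 10, 20) ms pattern
spikes = [np.sort(rng.uniform(0, 1000, 50)) for _ in range(3)]
for t in (100, 400, 700):
    spikes[0] = np.append(spikes[0], t)
    spikes[1] = np.append(spikes[1], t + 10)
    spikes[2] = np.append(spikes[2], t + 20)

lags = [0.0, 10.0, 20.0]
observed = count_pattern(spikes, lags)
surrogate_counts = []
for _ in range(200):                       # dithering destroys fine temporal structure
    surrogate = [s + rng.uniform(-25, 25, s.size) for s in spikes]
    surrogate_counts.append(count_pattern(surrogate, lags))
p_value = np.mean(np.array(surrogate_counts) >= observed)
print(observed, p_value)
```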
Computation of breast ptosis from 3D surface scans of the female torso
Li, Danni; Cheong, Audrey; Reece, Gregory P.; Crosby, Melissa A.; Fingeret, Michelle C.; Merchant, Fatima A.
2016-01-01
Stereophotography is now finding a niche in clinical breast surgery, and several methods for quantitatively measuring breast morphology from 3D surface images have been developed. Breast ptosis (sagging of the breast), which refers to the extent by which the nipple is lower than the inframammary fold (the contour along which the inferior part of the breast attaches to the chest wall), is an important morphological parameter that is frequently used for assessing the outcome of breast surgery. This study presents a novel algorithm that utilizes three-dimensional (3D) features such as surface curvature and orientation for the assessment of breast ptosis from 3D scans of the female torso. The performance of the computational approach proposed was compared against the consensus of manual ptosis ratings by nine plastic surgeons, and that of current 2D photogrammetric methods. Compared to the 2D methods, the average accuracy for 3D features was ~13% higher, with an increase in precision, recall, and F-score of 37%, 29%, and 33%, respectively. The computational approach proposed provides an improved and unbiased objective method for rating ptosis when compared to qualitative visualization by observers, and distance based 2D photogrammetry approaches. PMID:27643463
Computational approaches to predict bacteriophage–host relationships
Edwards, Robert A.; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E.
2015-01-01
Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus–host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage–host signals. Sequence homology approaches are the most effective at identifying known phage–host pairs. Compositional and abundance-based methods contain significant signal for phage–host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage–host relationships, with potential relevance for medical and industrial applications. PMID:26657537
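A sketch of the compositional signal mentioned above: compare tetranucleotide frequency profiles of a phage sequence and candidate host sequences and rank hosts by distance. The sequences below are random placeholders for real genomes.

```python
# Sketch of the compositional phage-host signal: compare tetranucleotide
# frequency profiles of a phage sequence and candidate host sequences and rank
# hosts by distance. Random sequences stand in for real genomes.
from collections import Counter
from itertools import product
import random

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def kmer_profile(seq, k=4):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts[m] for m in KMERS) or 1
    return [counts[m] / total for m in KMERS]

def distance(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

random.seed(0)
phage = "".join(random.choices("ACGT", k=20000))
hosts = {name: "".join(random.choices("ACGT", k=50000)) for name in ("hostA", "hostB")}

phage_profile = kmer_profile(phage)
ranking = sorted(hosts, key=lambda h: distance(phage_profile, kmer_profile(hosts[h])))
print("predicted host (closest composition):", ranking[0])
```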
Struct2Net: a web service to predict protein–protein interactions using a structure-based approach
Singh, Rohit; Park, Daniel; Xu, Jinbo; Hosur, Raghavendra; Berger, Bonnie
2010-01-01
Struct2Net is a web server for predicting interactions between arbitrary protein pairs using a structure-based approach. Prediction of protein–protein interactions (PPIs) is a central area of interest and successful prediction would provide leads for experiments and drug design; however, the experimental coverage of the PPI interactome remains inadequate. We believe that Struct2Net is the first community-wide resource to provide structure-based PPI predictions that go beyond homology modeling. Also, most web-resources for predicting PPIs currently rely on functional genomic data (e.g. GO annotation, gene expression, cellular localization, etc.). Our structure-based approach is independent of such methods and only requires the sequence information of the proteins being queried. The web service allows multiple querying options, aimed at maximizing flexibility. For the most commonly studied organisms (fly, human and yeast), predictions have been pre-computed and can be retrieved almost instantaneously. For proteins from other species, users have the option of getting a quick-but-approximate result (using orthology over pre-computed results) or having a full-blown computation performed. The web service is freely available at http://struct2net.csail.mit.edu. PMID:20513650
GeoBrain Computational Cyber-laboratory for Earth Science Studies
NASA Astrophysics Data System (ADS)
Deng, M.; di, L.
2009-12-01
Computational approaches (e.g., computer-based data visualization, analysis and modeling) are critical for conducting increasingly data-intensive Earth science (ES) studies to understand functions and changes of the Earth system. However, currently Earth scientists, educators, and students face two major barriers that prevent them from effectively using computational approaches in their learning, research and application activities. The two barriers are: 1) difficulties in finding, obtaining, and using multi-source ES data; and 2) lack of analytic functions and computing resources (e.g., analysis software, computing models, and high performance computing systems) to analyze the data. Taking advantage of recent advances in cyberinfrastructure, Web service, and geospatial interoperability technologies, GeoBrain, a project funded by NASA, has developed a prototype computational cyber-laboratory to effectively remove the two barriers. The cyber-laboratory makes ES data and computational resources at large organizations in distributed locations available to and easily usable by the Earth science community through 1) enabling seamless discovery, access and retrieval of distributed data, 2) federating and enhancing data discovery with a catalogue federation service and a semantically-augmented catalogue service, 3) customizing data access and retrieval at user request with interoperable, personalized, and on-demand data access and services, 4) automating or semi-automating multi-source geospatial data integration, 5) developing a large number of analytic functions as value-added, interoperable, and dynamically chainable geospatial Web services and deploying them in high-performance computing facilities, 6) enabling online geospatial process modeling and execution, and 7) building a user-friendly extensible web portal for users to access the cyber-laboratory resources. Users can interactively discover the needed data and perform on-demand data analysis and modeling through the web portal. The GeoBrain cyber-laboratory provides solutions to meet common needs of ES research and education, such as distributed data access and analysis services, easy access to and use of ES data, and enhanced geoprocessing and geospatial modeling capability. It greatly facilitates ES research, education, and applications. The development of the cyber-laboratory provides insights, lessons learned, and technology readiness to build a more capable computing infrastructure for ES studies, which can meet the wide-ranging needs of current and future generations of scientists, researchers, educators, and students for their formal or informal educational training, research projects, career development, and lifelong learning.
Advanced EUV mask and imaging modeling
NASA Astrophysics Data System (ADS)
Evanschitzky, Peter; Erdmann, Andreas
2017-10-01
The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.
NASA Astrophysics Data System (ADS)
Salman Shahid, Syed; Bikson, Marom; Salman, Humaira; Wen, Peng; Ahfock, Tony
2014-06-01
Objectives. Computational methods are increasingly used to optimize transcranial direct current stimulation (tDCS) dose strategies and yet complexities of existing approaches limit their clinical access. Since predictive modelling indicates the relevance of subject/pathology based data and hence the need for subject specific modelling, the incremental clinical value of increasingly complex modelling methods must be balanced against the computational and clinical time and costs. For example, the incorporation of multiple tissue layers and measured diffusion tensor (DTI) based conductivity estimates increase model precision but at the cost of clinical and computational resources. Costs related to such complexities aggregate when considering individual optimization and the myriad of potential montages. Here, rather than considering if additional details change current-flow prediction, we consider when added complexities influence clinical decisions. Approach. Towards developing quantitative and qualitative metrics of value/cost associated with computational model complexity, we considered field distributions generated by two 4 × 1 high-definition montages (m1 = 4 × 1 HD montage with anode at C3 and m2 = 4 × 1 HD montage with anode at C1) and a single conventional (m3 = C3-Fp2) tDCS electrode montage. We evaluated statistical methods, including residual error (RE) and relative difference measure (RDM), to consider the clinical impact and utility of increased complexities, namely the influence of skull, muscle and brain anisotropic conductivities in a volume conductor model. Main results. Anisotropy modulated current-flow in a montage and region dependent manner. However, significant statistical changes, produced within montage by anisotropy, did not change qualitative peak and topographic comparisons across montages. Thus for the examples analysed, clinical decision on which dose to select would not be altered by the omission of anisotropic brain conductivity. Significance. Results illustrate the need to rationally balance the role of model complexity, such as anisotropy in detailed current flow analysis versus value in clinical dose design. However, when extending our analysis to include axonal polarization, the results provide presumably clinically meaningful information. Hence the importance of model complexity may be more relevant with cellular level predictions of neuromodulation.
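The two comparison metrics named above have commonly used forms, sketched below for two field vectors; the exact normalisation used in the paper may differ, and the vectors are illustrative stand-ins for simulated field distributions.

```python
# Commonly used forms of the comparison metrics named above (relative
# difference measure, RDM, and relative error, RE) applied to two field
# vectors; the vectors here are illustrative stand-ins for simulated fields.
import numpy as np

def rdm(e1, e2):
    return np.linalg.norm(e1 / np.linalg.norm(e1) - e2 / np.linalg.norm(e2))

def relative_error(e1, e2):
    return np.linalg.norm(e1 - e2) / np.linalg.norm(e1)

rng = np.random.default_rng(0)
field_iso = rng.normal(size=10000)                        # isotropic-conductivity solution (toy)
field_aniso = field_iso + 0.1 * rng.normal(size=10000)    # anisotropic solution (toy)
print(f"RDM = {rdm(field_iso, field_aniso):.3f}, RE = {relative_error(field_iso, field_aniso):.3f}")
```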
Giga-voxel computational morphogenesis for structural design
NASA Astrophysics Data System (ADS)
Aage, Niels; Andreassen, Erik; Lazarov, Boyan S.; Sigmund, Ole
2017-10-01
In the design of industrial products ranging from hearing aids to automobiles and aeroplanes, material is distributed so as to maximize the performance and minimize the cost. Historically, human intuition and insight have driven the evolution of mechanical design, recently assisted by computer-aided design approaches. The computer-aided approach known as topology optimization enables unrestricted design freedom and shows great promise with regard to weight savings, but its applicability has so far been limited to the design of single components or simple structures, owing to the resolution limits of current optimization methods. Here we report a computational morphogenesis tool, implemented on a supercomputer, that produces designs with giga-voxel resolution—more than two orders of magnitude higher than previously reported. Such resolution provides insights into the optimal distribution of material within a structure that were hitherto unachievable owing to the challenges of scaling up existing modelling and optimization frameworks. As an example, we apply the tool to the design of the internal structure of a full-scale aeroplane wing. The optimized full-wing design has unprecedented structural detail at length scales ranging from tens of metres to millimetres and, intriguingly, shows remarkable similarity to naturally occurring bone structures in, for example, bird beaks. We estimate that our optimized design corresponds to a reduction in mass of 2-5 per cent compared to currently used aeroplane wing designs, which translates into a reduction in fuel consumption of about 40-200 tonnes per year per aeroplane. Our morphogenesis process is generally applicable, not only to mechanical design, but also to flow systems, antennas, nano-optics and micro-systems.
NASA Astrophysics Data System (ADS)
Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang
2010-05-01
CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate dependence on uncertain parameters (porosity, permeability etc.) and design parameters (injection rate, depth etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al. Computational Geosciences 13, 2009). A reasonable compromise between computational effort and precision was already reached with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification for modeling CO2 injection, and the consequences can be stronger than when neglecting several physical phenomena (e.g. phase transition, convective mixing, capillary forces etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
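A minimal sketch of the polynomial-surrogate idea behind probabilistic collocation: sample a model at a modest number of points in the uncertain parameters, fit a second-order polynomial response surface, then run cheap Monte Carlo on the surrogate. The toy model, parameter names and basis are illustrative, not the CO2 simulator or the paper's collocation rule.

```python
# Minimal sketch of a second-order polynomial response surface used as a cheap
# surrogate for an expensive model, followed by Monte Carlo on the surrogate.
# The "model", parameters and basis are purely illustrative.
import numpy as np

def expensive_model(perm, poro):                  # stand-in for a reservoir simulator
    return np.exp(0.8 * perm) / (1.0 + poro ** 2)

rng = np.random.default_rng(0)
n_fit = 30
perm, poro = rng.normal(size=n_fit), rng.normal(size=n_fit)   # standardized uncertain inputs
y = expensive_model(perm, poro)

# Second-order polynomial basis in the two uncertain parameters
def basis(p, q):
    return np.column_stack([np.ones_like(p), p, q, p * q, p ** 2, q ** 2])

coeffs, *_ = np.linalg.lstsq(basis(perm, poro), y, rcond=None)

# Cheap Monte Carlo on the surrogate (e.g. probability of exceeding a threshold)
P, Q = rng.normal(size=100000), rng.normal(size=100000)
leakage = basis(P, Q) @ coeffs
print("P(leakage > 3) ~", np.mean(leakage > 3.0))
```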
Firmware Development Improves System Efficiency
NASA Technical Reports Server (NTRS)
Chern, E. James; Butler, David W.
1993-01-01
Most manufacturing processes require physical pointwise positioning of the components or tools from one location to another. Typical mechanical systems utilize either stop-and-go or fixed feed-rate procession to accomplish the task. The first approach achieves positional accuracy but prolongs overall time and increases wear on the mechanical system. The second approach sustains the throughput but compromises positional accuracy. A computer firmware approach has been developed to optimize this pointwise mechanism by utilizing programmable interrupt controls to synchronize engineering processes 'on the fly'. This principle has been implemented in an eddy current imaging system to demonstrate the improvement. Software programs were developed that enable a mechanical controller card to transmit interrupts to a system controller as a trigger signal to initiate an eddy current data acquisition routine. The advantages are: (1) optimized manufacturing processes, (2) increased throughput of the system, (3) improved positional accuracy, and (4) reduced wear and tear on the mechanical system.
Knowledge-based approach to video content classification
NASA Astrophysics Data System (ADS)
Chen, Yu; Wong, Edward K.
2001-01-01
A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, demonstrating the validity of the proposed approach.
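As an illustration of how rule firings and MYCIN-style certainty factors can be combined, here is a minimal sketch; the features, rules, and certainty values are hypothetical and are not taken from the paper's rule base.

```python
# Minimal sketch of rule-based classification with MYCIN-style certainty factors.
def combine_cf(cf1, cf2):
    """MYCIN combination rule for two non-negative certainty factors."""
    return cf1 + cf2 * (1.0 - cf1)

# Hypothetical low/high-level features extracted from a video clip.
features = {"motion_energy": 0.12, "dominant_color_green": 0.85,
            "caption_text_present": 1.0}

# Each rule: (predicate over features, target class, rule certainty factor).
rules = [
    (lambda f: f["motion_energy"] < 0.2, "news", 0.6),
    (lambda f: f["caption_text_present"] > 0.5, "news", 0.4),
    (lambda f: f["dominant_color_green"] > 0.7, "football", 0.7),
]

scores = {}
for predicate, label, cf in rules:
    if predicate(features):                       # rule fires on this clip
        scores[label] = combine_cf(scores.get(label, 0.0), cf)

print(max(scores, key=scores.get), scores)
```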
Development of a new model for short period ocean tidal variations of Earth rotation
NASA Astrophysics Data System (ADS)
Schuh, Harald
2015-08-01
Within project SPOT (Short Period Ocean Tidal variations in Earth rotation) we develop a new high frequency Earth rotation model based on empirical ocean tide models. The main purpose of the SPOT model is its application to space geodetic observations such as GNSS and VLBI. We consider an empirical ocean tide model, which does not require hydrodynamic ocean modeling to determine ocean tidal angular momentum. We use here the EOT11a model of Savcenko & Bosch (2012), which is extended for some additional minor tides (e.g. M1, J1, T2). As empirical tidal models do not provide ocean tidal currents, which are required for the computation of oceanic relative angular momentum, we implement an approach first published by Ray (2001) to estimate ocean tidal current velocities for all tides considered in the extended EOT11a model. The approach itself is tested by application to tidal heights from hydrodynamic ocean tide models, which also provide tidal current velocities. Based on the tidal heights and the associated current velocities the oceanic tidal angular momentum (OTAM) is calculated. For the computation of the related short period variation of Earth rotation, we have re-examined the Euler-Liouville equation for an elastic Earth model with a liquid core. The focus here is on the consistent calculation of the elastic Love numbers and associated Earth model parameters, which are considered in the Euler-Liouville equation for diurnal and sub-diurnal periods in the frequency domain.
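For orientation, a generic statement of the angular-momentum balance that such Earth rotation models linearize is sketched below; this is standard textbook notation and is not taken from the SPOT publications.

```latex
% Liouville equation in the rotating terrestrial frame (no external torques):
% I is the time-variable inertia tensor, \omega the Earth rotation vector,
% and h the relative angular momentum carried by the ocean tidal currents.
\frac{\mathrm{d}}{\mathrm{d}t}\left(\mathbf{I}\,\boldsymbol{\omega} + \mathbf{h}\right)
  + \boldsymbol{\omega} \times \left(\mathbf{I}\,\boldsymbol{\omega} + \mathbf{h}\right) = \mathbf{0}
% Linearization about uniform rotation yields the familiar excitation form for
% polar motion p(t) = x_p(t) - i\,y_p(t) with excitation function \chi(t):
p(t) + \frac{i}{\sigma_c}\,\dot{p}(t) = \chi(t)
% where \sigma_c is the (complex) Chandler frequency; an analogous scalar
% equation for the axial component gives the UT1 / length-of-day variations.
```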
How Do Tides and Tsunamis Interact in a Highly Energetic Channel? The Case of Canal Chacao, Chile
NASA Astrophysics Data System (ADS)
Winckler, Patricio; Sepúlveda, Ignacio; Aron, Felipe; Contreras-López, Manuel
2017-12-01
This study aims at understanding the role of tidal level, speed, and direction in tsunami propagation in highly energetic tidal channels. The main goal is to comprehend whether tide-tsunami interactions enhance/reduce elevation, current speeds, and arrival times, when compared to pure tsunami models and to simulations in which tides and tsunamis are linearly superimposed. We designed various numerical experiments to compute the tsunami propagation along Canal Chacao, a highly energetic channel in the Chilean Patagonia lying on a subduction margin prone to megathrust earthquakes. Three modeling approaches were implemented under the same seismic scenario: a tsunami model with a constant tide level, a series of six composite models in which independent tide and tsunami simulations are linearly superimposed, and a series of six tide-tsunami nonlinear interaction models (full models). We found that hydrodynamic patterns differ significantly among approaches, with the composite and full models being sensitive to both the tidal phase at which the tsunami is triggered and the local depth of the channel. When compared to full models, composite models adequately predicted the maximum surface elevation, but largely overestimated currents. The amplitude and arrival time of the tsunami-leading wave computed with the full model was found to be strongly dependent on the direction of the tidal current and less responsive to the tide level and the tidal current speed. These outcomes emphasize the importance of addressing more carefully the interactions of tides and tsunamis in hazard assessment studies.
Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems
Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.
2014-01-01
The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
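A minimal sketch of the kind of performance-aware co-scheduling policy described above is shown below: each operation is placed on the device where it is comparatively fastest while both device queues are kept busy. The operation names and timing estimates are hypothetical, not measurements from the paper.

```python
import heapq

# (operation, estimated CPU time in s, estimated GPU time in s) -- toy values.
tasks = [("color_deconvolution", 4.0, 0.5), ("morphological_open", 2.0, 1.5),
         ("feature_texture", 6.0, 0.8), ("label_connected", 1.0, 0.9)] * 25

# Each device is a (next_free_time, name) entry in a priority queue.
devices = [(0.0, "cpu"), (0.0, "gpu")]
heapq.heapify(devices)

# Schedule the most GPU-favorable tasks first so the GPU is never starved,
# then greedily give each task to whichever device frees up first.
for op, t_cpu, t_gpu in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
    free_at, name = heapq.heappop(devices)
    runtime = t_gpu if name == "gpu" else t_cpu
    heapq.heappush(devices, (free_at + runtime, name))

print("makespan:", max(t for t, _ in devices))
```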
Experimental scattershot boson sampling
Bentivegna, Marco; Spagnolo, Nicolò; Vitelli, Chiara; Flamini, Fulvio; Viggianiello, Niko; Latmiral, Ludovico; Mataloni, Paolo; Brod, Daniel J.; Galvão, Ernesto F.; Crespi, Andrea; Ramponi, Roberta; Osellame, Roberto; Sciarrino, Fabio
2015-01-01
Boson sampling is a computational task strongly believed to be hard for classical computers, but efficiently solvable by orchestrated bosonic interference in a specialized quantum computer. Current experimental schemes, however, are still insufficient for a convincing demonstration of the advantage of quantum over classical computation. A new variation of this task, scattershot boson sampling, leads to an exponential increase in speed of the quantum device, using a larger number of photon sources based on parametric down-conversion. This is achieved by having multiple heralded single photons being sent, shot by shot, into different random input ports of the interferometer. We report the first scattershot boson sampling experiments, where six different photon-pair sources are coupled to integrated photonic circuits. We use recently proposed statistical tools to analyze our experimental data, providing strong evidence that our photonic quantum simulator works as expected. This approach represents an important leap toward a convincing experimental demonstration of the quantum computational supremacy. PMID:26601164
Advanced Architectures for Astrophysical Supercomputing
NASA Astrophysics Data System (ADS)
Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.
2010-12-01
Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.
Wei, Zhenglun Alan; Sonntag, Simon Johannes; Toma, Milan; Singh-Gryzbon, Shelly; Sun, Wei
2018-04-19
The governing international standard for the development of prosthetic heart valves is International Organization for Standardization (ISO) 5840. This standard requires the assessment of the thrombus potential of transcatheter heart valve substitutes using an integrated thrombus evaluation. Besides experimental flow field assessment and ex vivo flow testing, computational fluid dynamics is a critical component of this integrated approach. This position paper is intended to provide and discuss best practices for the setup of a computational model, numerical solving, post-processing, data evaluation and reporting, as it relates to transcatheter heart valve substitutes. This paper is not intended to be a review of current computational technology; instead, it represents the position of the ISO working group consisting of experts from academia and industry with regards to considerations for computational fluid dynamic assessment of transcatheter heart valve substitutes.
ERIC Educational Resources Information Center
Hao, Shuang
2016-01-01
Scaffolding is a type of instructional support that helps students to complete a learning task that exceeds their current ability. Scaffolding plays an important role in augmenting other instructional approaches, such as problem-based learning, and facilitates gradual shifts of responsibility from the more advanced others to the learner (Belland,…
Assessing the Effectiveness of Web-Based Tutorials Using Pre-and Post-Test Measurements
ERIC Educational Resources Information Center
Guy, Retta Sweat; Lownes-Jackson, Millicent
2012-01-01
Computer technology in general and the Internet in particular have facilitated as well as motivated the development of Web-based tutorials (MacKinnon & Williams, 2006). The current research study describes a pedagogical approach that exploits the use of self-paced, Web-based tutorials for assisting students with reviewing grammar and mechanics…
Knowledge-Base Semantic Gap Analysis for the Vulnerability Detection
NASA Astrophysics Data System (ADS)
Wu, Raymond; Seki, Keisuke; Sakamoto, Ryusuke; Hisada, Masayuki
Web security has become a pressing concern in Internet computing. To cope with ever-rising security complexity, semantic analysis is proposed to fill the gap that current approaches fail to address. Conventional methods limit their focus to the physical source code rather than the abstraction of its semantics; as a result, they miss new types of vulnerability and cause tremendous business loss.
ERIC Educational Resources Information Center
Lu, Hui-Chuan; Chu, Yu-Hsin; Chang, Cheng-Yu
2013-01-01
Compared with English learners, Spanish learners have fewer resources for automatic error detection and revision. Following the current integrative Computer Assisted Language Learning (CALL) approach, we combined a corpus-based approach and CALL to create the System of Error Detection and Revision Suggestion (SEDRS) for learning Spanish. Through…
New Metaphors for Organizing Data Could Change the Nature of Computers.
ERIC Educational Resources Information Center
Young, Jeffrey R.
1997-01-01
Based on the idea that the current framework for organizing electronic data does not take advantage of the mind's ability to make connections among disparate pieces of information, several projects at universities around the country are taking new approaches to classification and storage of vast amounts of computerized data. The new systems take…
New directions for Artificial Intelligence (AI) methods in optimum design
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1989-01-01
Developments and applications of artificial intelligence (AI) methods in the design of structural systems are reviewed. Principal shortcomings in the current approach are emphasized, and the need for some degree of formalism in the development environment for such design tools is underscored. Emphasis is placed on efforts to integrate algorithmic computations in expert systems.
Integrating Learning Services in the Cloud: An Approach That Benefits Both Systems and Learning
ERIC Educational Resources Information Center
Gutiérrez-Carreón, Gustavo; Daradoumis, Thanasis; Jorba, Josep
2015-01-01
Currently there is an increasing trend to implement functionalities that allow for the development of applications based on Cloud computing. In education there are high expectations for Learning Management Systems since they can be powerful tools to foster more effective collaboration within a virtual classroom. Tools can also be integrated with…
A Systematic Approach to Improving E-Learning Implementations in High Schools
ERIC Educational Resources Information Center
Pardamean, Bens; Suparyanto, Teddy
2014-01-01
This study was based on the current growing trend of implementing e-learning in high schools. Most endeavors have been inefficient, so the objective here was to determine the initial steps that could be taken to improve these efforts by assessing a student population's computer skill levels and performance in an IT course. Demographic factors were…
Introducing Innovative Approaches to Learning in Fluid Mechanics: A Case Study
ERIC Educational Resources Information Center
Gynnild, Vidar; Myrhaug, Dag; Pettersen, Bjornar
2007-01-01
The purpose of the current article is to examine the impact of laboratory demonstrations and computer visualizations on learning in a third-year fluid mechanics course at Norwegian University of Science and Technology (NTNU). As a first step, on entering the course, students were exposed to a laboratory demonstration focusing on the nature of…
Optical hybrid quantum teleportation and its applications
NASA Astrophysics Data System (ADS)
Takeda, Shuntaro; Okada, Masanori; Furusawa, Akira
2017-08-01
Quantum teleportation, a transfer protocol of quantum states, is the essence of many sophisticated quantum information protocols. There have been two complementary approaches to optical quantum teleportation: discrete variables (DVs) and continuous variables (CVs). However, both approaches have pros and cons. Here we take a "hybrid" approach to overcome the current limitations: CV quantum teleportation of DVs. This approach enabled the first realization of deterministic quantum teleportation of photonic qubits without post-selection. We also applied the hybrid scheme to several experiments, including entanglement swapping between DVs and CVs, conditional CV teleportation of single photons, and CV teleportation of qutrits. We are now aiming at universal, scalable, and fault-tolerant quantum computing based on these hybrid technologies.
Geerts, Hugo; Hofmann-Apitius, Martin; Anastasio, Thomas J
2017-11-01
Neurodegenerative diseases such as Alzheimer's disease (AD) follow a slowly progressing dysfunctional trajectory, with a large presymptomatic component and many comorbidities. Using preclinical models and large-scale omics studies ranging from genetics to imaging, a large number of processes that might be involved in AD pathology at different stages and levels have been identified. The sheer number of putative hypotheses makes it almost impossible to estimate their contribution to the clinical outcome and to develop a comprehensive view on the pathological processes driving the clinical phenotype. Traditionally, bioinformatics approaches have provided correlations and associations between processes and phenotypes. Focusing on causality, a new breed of advanced and more quantitative modeling approaches that use formalized domain expertise offer new opportunities to integrate these different modalities and outline possible paths toward new therapeutic interventions. This article reviews three different computational approaches and their possible complementarities. Process algebras, implemented using declarative programming languages such as Maude, facilitate simulation and analysis of complicated biological processes on a comprehensive but coarse-grained level. A model-driven Integration of Data and Knowledge, based on the OpenBEL platform and using reverse causative reasoning and network jump analysis, can generate mechanistic knowledge and a new, mechanism-based taxonomy of disease. Finally, Quantitative Systems Pharmacology is based on formalized implementation of domain expertise in a more fine-grained, mechanism-driven, quantitative, and predictive humanized computer model. We propose a strategy to combine the strengths of these individual approaches for developing powerful modeling methodologies that can provide actionable knowledge for rational development of preventive and therapeutic interventions. Development of these computational approaches is likely to be required for further progress in understanding and treating AD. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Computational neuroscience approach to biomarkers and treatments for mental disorders.
Yahata, Noriaki; Kasai, Kiyoto; Kawato, Mitsuo
2017-04-01
Psychiatry research has long experienced a stagnation stemming from a lack of understanding of the neurobiological underpinnings of phenomenologically defined mental disorders. Recently, the application of computational neuroscience to psychiatry research has shown great promise in establishing a link between phenomenological and pathophysiological aspects of mental disorders, thereby recasting current nosology in more biologically meaningful dimensions. In this review, we highlight recent investigations into computational neuroscience that have undertaken either theory- or data-driven approaches to quantitatively delineate the mechanisms of mental disorders. The theory-driven approach, including reinforcement learning models, plays an integrative role in this process by enabling correspondence between behavior and disorder-specific alterations at multiple levels of brain organization, ranging from molecules to cells to circuits. Previous studies have explicated a plethora of defining symptoms of mental disorders, including anhedonia, inattention, and poor executive function. The data-driven approach, on the other hand, is an emerging field in computational neuroscience seeking to identify disorder-specific features among high-dimensional big data. Remarkably, various machine-learning techniques have been applied to neuroimaging data, and the extracted disorder-specific features have been used for automatic case-control classification. For many disorders, the reported accuracies have reached 90% or more. However, we note that rigorous tests on independent cohorts are critically required to translate this research into clinical applications. Finally, we discuss the utility of the disorder-specific features found by the data-driven approach to psychiatric therapies, including neurofeedback. Such developments will allow simultaneous diagnosis and treatment of mental disorders using neuroimaging, thereby establishing 'theranostics' for the first time in clinical psychiatry. © 2016 The Authors. Psychiatry and Clinical Neurosciences © 2016 Japanese Society of Psychiatry and Neurology.
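To make the theory-driven side concrete, here is a minimal sketch of a delta-rule (Rescorla-Wagner / Q-learning style) value update of the kind used in computational psychiatry; the learning rate, reward schedule, and the reduced-reward-sensitivity comparison are illustrative and not taken from any particular study.

```python
import numpy as np

# Minimal delta-rule value update linking behavior to computational parameters.
def simulate_learning(n_trials=200, alpha=0.3, reward_sensitivity=1.0, seed=0):
    rng = np.random.default_rng(seed)
    value, values = 0.0, []
    for _ in range(n_trials):
        reward = rng.binomial(1, 0.7)                  # 70% reward schedule (toy)
        prediction_error = reward_sensitivity * reward - value
        value += alpha * prediction_error              # delta-rule update
        values.append(value)
    return np.array(values)

healthy = simulate_learning(reward_sensitivity=1.0)
blunted = simulate_learning(reward_sensitivity=0.5)    # anhedonia-like (illustrative)
print("asymptotic values:", healthy[-20:].mean(), blunted[-20:].mean())
```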
Natural flow and water consumption in the Milk River basin, Montana and Alberta, Canada
Thompson, R.E.
1986-01-01
A study was conducted to determine the differences between natural and nonnatural Milk River streamflow, to delineate and quantify the types and effects of water consumption on streamflow, and to refine the current computation procedure into one which computes and apportions natural flow. Water consumption consists principally of irrigated agriculture, municipal use, and evapotranspiration. Mean daily water consumption by irrigation ranged from 10 cu ft/sec to 26 cu ft/sec in the Canadian part and from 6 cu ft/sec to 41 cu ft/sec in the US part. Two Canadian municipalities consume about 320 acre-ft and one US municipality consumes about 20 acre-ft yearly. Evaporation from the water surface comprises 80% to 90% of the flow reduction in the Milk River attributed to total evapotranspiration. The current water-budget approach for computing natural flow of the Milk River where it reenters the US was refined into an interim procedure which includes allowances for man-induced consumption and a method for apportioning computed natural flow between the US and Canada. The refined procedure is considered interim because further study of flow routing, tributary inflow, and man-induced consumption is needed before a more accurate procedure for computing natural flow can be developed. (Author's abstract)
Kumar, Akhil; Tiwari, Ashish; Sharma, Ashok
2018-03-15
Alzheimer disease (AD) is now considered a multifactorial neurodegenerative disorder that is increasing at an alarming rate and causing a higher death rate. The one-target-one-ligand hypothesis is unable to provide a complete solution for AD due to the multifactorial nature of the disease, and one-target-one-drug approaches seem to fail to provide better treatment against AD. Moreover, currently available treatments are limited and most of the upcoming treatments under clinical trials are based on modulating a single target. So current AD drug discovery research is shifting towards a new approach for a better solution that simultaneously modulates more than one target in the neurodegenerative cascade. This can be achieved by network pharmacology, multi-modal therapies, multifaceted approaches, and/or the more recently proposed "multi-targeted designed drugs". Drug discovery projects are tedious, costly, and long-term. Moreover, multi-target AD drug discovery adds extra challenges, such as good binding affinity of ligands for multiple targets, optimal ADME/T properties, no or fewer off-target side effects, and crossing of the blood-brain barrier. These hurdles may be addressed by in silico methods for an efficient solution in less time and at lower cost, as computational methods have been successfully applied to single-target drug discovery projects. Here we summarize some of the most prominent and computationally explored single targets against AD, and we further discuss successful examples of dual or multiple inhibitors for the same targets. Moreover, we focus on ligand- and structure-based computational approaches to design MTDLs against AD. It is not an easy task to balance dual activity in a single molecule, but computational approaches such as virtual screening, docking, QSAR, simulation, and free-energy calculations are useful in future MTDL drug discovery, alone or in combination with fragment-based methods. Rational and logical implementation of computational drug designing methods is capable of assisting AD drug discovery and plays an important role in optimizing multi-target drug discovery. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
NASA Technical Reports Server (NTRS)
Raju, M. S.
2016-01-01
The Open National Combustion Code (OpenNCC) is developed with the aim of advancing the current multi-dimensional computational tools used in the design of advanced technology combustors. In this paper we provide an overview of the spray module, LSPRAY-V, developed as a part of this effort. The spray solver is mainly designed to predict the flow, thermal, and transport properties of a rapidly evaporating multi-component liquid spray. The modeling approach is applicable over a wide range of evaporating conditions (normal, superheat, and supercritical) and is based on several well-established atomization, vaporization, and wall/droplet impingement models. It facilitates large-scale combustor computations through the use of massively parallel computers, with the ability to perform the computations on either structured or unstructured grids. The spray module has a multi-liquid and multi-injector capability, and can be used in both steady and unsteady computations. We conclude the paper by providing the results for a reacting spray generated by a single injector element with 60° axially swept swirler vanes. It is a configuration based on the next-generation lean-direct injection (LDI) combustor concept. The results include comparisons for both combustor exit temperature and EINOx at three different fuel/air ratios.
NASA Astrophysics Data System (ADS)
Stoilescu, Dorian; Egodawatte, Gunawardena
2010-12-01
Research shows that female and male students in undergraduate computer science programs view computer culture differently. Female students are interested more in the use of computers than in doing programming, whereas male students see computer science mainly as a programming activity. The overall purpose of our research was not to find new definitions for computer science culture but to see how male and female students see themselves involved in computer science practices, how they see computer science as a successful career, and what they like and dislike about current computer science practices. The study took place in a mid-sized university in Ontario. Sixteen students and two instructors were interviewed to get their views. We found that male and female views are different on computer use, programming, and the pattern of student interactions. Female and male students did not have any major issues in using computers. In computer programming, female students were not as involved in computing activities, whereas male students were heavily involved. As for opinions about successful computer science professionals, both female and male students emphasized hard work, detail-oriented approaches, and enjoying playing with computers. The myth of the geek as a typical profile of successful computer science students was not found to be true.
Analysis and Design of Bridgeless Switched Mode Power Supply for Computers
NASA Astrophysics Data System (ADS)
Singh, S.; Bhuvaneswari, G.; Singh, B.
2014-09-01
Switched mode power supplies (SMPSs) used in computers need multiple isolated and stiffly regulated output dc voltages with different current ratings. These isolated multiple output dc voltages are obtained by using a multi-winding high frequency transformer (HFT). A half-bridge dc-dc converter is used here for obtaining different isolated and well regulated dc voltages. In the front end, non-isolated Single Ended Primary Inductance Converters (SEPICs) are added to improve the power quality in terms of low input current harmonics and high power factor (PF). Two non-isolated SEPICs are connected in a way that completely eliminates the need for a single-phase diode-bridge rectifier at the front end. Output dc voltages at both the non-isolated and isolated stages are controlled and regulated separately for power quality improvement. A voltage mode control approach is used in the non-isolated SEPIC stage for simple and effective control, whereas average current control is used in the second, isolated stage.
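As a quick reference for the front-end stage, here is a minimal sketch of the ideal continuous-conduction-mode SEPIC voltage relation; the mains and dc-link values are illustrative and not taken from the paper.

```python
# Ideal (lossless, continuous-conduction) SEPIC relation: Vout = Vin * D / (1 - D).
def sepic_output_voltage(v_in, duty):
    assert 0.0 < duty < 1.0
    return v_in * duty / (1.0 - duty)

def duty_for_output(v_in, v_out):
    return v_out / (v_in + v_out)

v_in_peak = 325.0          # ~230 V rms mains, rectified peak (illustrative)
v_dc_link = 400.0          # regulated intermediate dc bus (illustrative)
d = duty_for_output(v_in_peak, v_dc_link)
print(f"duty = {d:.2f}, check Vout = {sepic_output_voltage(v_in_peak, d):.1f} V")
```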
Truong, Dennis Q; Hüber, Mathias; Xie, Xihe; Datta, Abhishek; Rahman, Asif; Parra, Lucas C; Dmochowski, Jacek P; Bikson, Marom
2014-01-01
Computational models of brain current flow during transcranial electrical stimulation (tES), including transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS), are increasingly used to understand and optimize clinical trials. We propose that broad dissemination requires simple graphical user interface (GUI) software that allows users to explore and design montages in real-time, based on their own clinical/experimental experience and objectives. We introduce two complementary open-source platforms for this purpose: BONSAI and SPHERES. BONSAI is a web (cloud) based application (available at neuralengr.com/bonsai) that can be accessed through any flash-supported browser interface. SPHERES (available at neuralengr.com/spheres) is a stand-alone GUI application that allows consideration of arbitrary montages on a concentric sphere model by leveraging an analytical solution. These open-source tES modeling platforms are designed to be upgraded and enhanced. Trade-offs between open-access approaches that balance ease of access, speed, and flexibility are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
Tong, Wing-Chiu; Choi, Cecilia Y.; Karche, Sanjay; Holden, Arun V.; Zhang, Henggui; Taggart, Michael J.
2011-01-01
Uterine contractions during labor are discretely regulated by rhythmic action potentials (AP) of varying duration and form that serve to determine calcium-dependent force production. We have employed a computational biology approach to develop a fuller understanding of the complexity of excitation-contraction (E-C) coupling of uterine smooth muscle cells (USMC). Our overall aim is to establish a mathematical platform of sufficient biophysical detail to quantitatively describe known uterine E-C coupling parameters and thereby inform future empirical investigations of physiological and pathophysiological mechanisms governing normal and dysfunctional labors. From published and unpublished data we construct mathematical models for fourteen ionic currents of USMCs: Ca2+ currents (L- and T-type), Na+ current, a hyperpolarization-activated current, three voltage-gated K+ currents, two Ca2+-activated K+ currents, a Ca2+-activated Cl- current, a non-specific cation current, the Na+-Ca2+ exchanger, the Na+-K+ pump, and a background current. The magnitudes and kinetics of each current system in a spindle-shaped single cell with a specified surface area:volume ratio are described by differential equations, in terms of maximal conductances, electrochemical gradients, voltage-dependent activation/inactivation gating variables, and temporal changes in intracellular Ca2+ computed from known fluxes. These quantifications are validated by the reconstruction of the individual experimental ionic currents obtained under voltage-clamp. Phasic contraction is modeled in relation to the time constant of the changing intracellular Ca2+. This integrated model is validated by its reconstruction of the different USMC AP configurations (spikes, plateau, and bursts of spikes), of the change from bursting to plateau-type AP produced by estradiol, and of simultaneous experimental recordings of spontaneous AP, intracellular Ca2+, and phasic force. In summary, our advanced mathematical model provides a powerful tool to investigate the physiological ionic mechanisms underlying the genesis of uterine electrical E-C coupling of labor and parturition. This will furnish the evolution of descriptive and predictive quantitative models of myometrial electrogenesis at the whole cell and tissue levels. PMID:21559514
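As an illustration of how such currents are written as differential equations in terms of maximal conductances, gating variables, and driving forces, here is a minimal single-current membrane model; all parameter values are placeholders and are not the USMC model parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hodgkin-Huxley-style sketch: one voltage-gated current plus a leak current,
# coupled to the membrane equation. Units and values are illustrative only.
C_M, G_MAX, E_REV, G_LEAK, E_LEAK = 1.0, 0.6, -20.0, 0.05, -55.0

def m_inf(v):  return 1.0 / (1.0 + np.exp(-(v + 30.0) / 7.0))   # steady-state activation
def tau_m(v):  return 2.0                                        # activation time constant (ms)
def i_stim(t): return 2.0 if 50.0 < t < 60.0 else 0.0            # brief stimulus pulse

def rhs(t, y):
    v, m = y
    i_ion = G_MAX * m * (v - E_REV) + G_LEAK * (v - E_LEAK)      # conductance * driving force
    return [(-i_ion + i_stim(t)) / C_M, (m_inf(v) - m) / tau_m(v)]

sol = solve_ivp(rhs, (0.0, 200.0), [-55.0, 0.0], max_step=0.1)
print("peak membrane potential (mV):", sol.y[0].max())
```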
Boolean and brain-inspired computing using spin-transfer torque devices
NASA Astrophysics Data System (ADS)
Fan, Deliang
Several completely new approaches (such as spintronics, carbon nanotubes, graphene, TFETs, etc.) to information processing and data storage technologies are emerging to address the time frame beyond the current Complementary Metal-Oxide-Semiconductor (CMOS) roadmap. The high-speed magnetization switching of a nano-magnet due to current-induced spin-transfer torque (STT) has been demonstrated in recent experiments. Such STT devices can be explored in compact, low power memory and logic design. In order to truly leverage STT-device-based computing, researchers require a re-think of circuit, architecture, and computing model, since the STT devices are unlikely to be drop-in replacements for CMOS. The potential of STT-device-based computing will be best realized by considering new computing models that are inherently suited to the characteristics of STT devices, and new applications that are enabled by their unique capabilities, thereby attaining performance that CMOS cannot achieve. The goal of this research is to conduct synergistic exploration at the architecture, circuit, and device levels for Boolean and brain-inspired computing using nanoscale STT devices. Specifically, we first show that the non-volatile STT devices can be used in designing configurable Boolean logic blocks. We propose a spin-memristor threshold logic (SMTL) gate design, where a memristive cross-bar array is used to perform current-mode summation of binary inputs and the low-power current-mode spintronic threshold device carries out the energy-efficient threshold operation. Next, for brain-inspired computing, we have exploited different spin-transfer torque device structures that can implement the hard-limiting and soft-limiting artificial neuron transfer functions, respectively. We apply such STT-based neurons (or 'spin-neurons') in various neural network architectures, such as hierarchical temporal memory and feed-forward neural networks, for performing "human-like" cognitive computing, which shows more than two orders of magnitude lower energy consumption compared to state-of-the-art CMOS implementations. Finally, we show that the dynamics of an injection-locked Spin Hall Effect Spin-Torque Oscillator (SHE-STO) cluster can be exploited as a robust multi-dimensional distance metric for associative computing, image/video analysis, etc. Our simulation results show that the proposed system architecture with injection-locked SHE-STOs and the associated CMOS interface circuits can be suitable for robust and energy-efficient associative computing and pattern matching.
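A behavioral sketch of the SMTL gate idea described above is given below: a crossbar-style weighted sum of binary inputs followed by a threshold decision. The conductance values and threshold are hypothetical, not device parameters from the thesis.

```python
import numpy as np

def smtl_gate(inputs, conductances, threshold):
    summed_current = np.dot(inputs, conductances)   # crossbar column (current-mode) sum
    return int(summed_current >= threshold)         # spin-neuron thresholding

# A 3-input majority gate: equal weights, threshold at two active inputs.
g = np.array([1.0, 1.0, 1.0])
for x in [(0, 0, 1), (0, 1, 1), (1, 1, 1)]:
    print(x, "->", smtl_gate(np.array(x), g, threshold=2.0))
```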
NASA Astrophysics Data System (ADS)
Nava, Andrea; Giuliano, Rosa; Campagnano, Gabriele; Giuliano, Domenico
2016-11-01
Using the properties of the transfer matrix of one-dimensional quantum mechanical systems, we derive an exact formula for the persistent current across a quantum mechanical ring pierced by a magnetic flux Φ as a single integral of a known function of the system's parameters. Our approach provides exact results at zero temperature, which can be readily extended to a finite temperature T. We apply our technique to exactly compute the persistent current through p-wave and s-wave superconducting-normal hybrid rings, deriving full plots of the current as a function of the applied flux at various system scales. Doing so, we recover at once a number of effects, such as the crossover in the current periodicity on increasing the size of the ring and the signature of the topological phase transition in the p-wave case. In the limit of a large ring size, resorting to a systematic expansion in inverse powers of the ring length, we derive exact analytic closed-form formulas, applicable to a number of cases of physical interest.
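For context, the standard thermodynamic definition of the persistent current that such calculations evaluate is recalled below in generic notation (not taken from the paper):

```latex
% Persistent current as the flux derivative of the free energy F (or of the
% ground-state energy E_0 at T = 0), periodic in the flux quantum \Phi_0:
I(\Phi) = -\,\frac{\partial F(\Phi, T)}{\partial \Phi},
\qquad
I(\Phi)\big|_{T=0} = -\,\frac{\partial E_0(\Phi)}{\partial \Phi},
\qquad
I(\Phi + \Phi_0) = I(\Phi).
```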
Computational Prediction of Metabolism: Sites, Products, SAR, P450 Enzyme Dynamics, and Mechanisms
2012-01-01
Metabolism of xenobiotics remains a central challenge for the discovery and development of drugs, cosmetics, nutritional supplements, and agrochemicals. Metabolic transformations are frequently related to the incidence of toxic effects that may result from the emergence of reactive species, the systemic accumulation of metabolites, or by induction of metabolic pathways. Experimental investigation of the metabolism of small organic molecules is particularly resource demanding; hence, computational methods are of considerable interest to complement experimental approaches. This review provides a broad overview of structure- and ligand-based computational methods for the prediction of xenobiotic metabolism. Current computational approaches to address xenobiotic metabolism are discussed from three major perspectives: (i) prediction of sites of metabolism (SOMs), (ii) elucidation of potential metabolites and their chemical structures, and (iii) prediction of direct and indirect effects of xenobiotics on metabolizing enzymes, where the focus is on the cytochrome P450 (CYP) superfamily of enzymes, the cardinal xenobiotics metabolizing enzymes. For each of these domains, a variety of approaches and their applications are systematically reviewed, including expert systems, data mining approaches, quantitative structure–activity relationships (QSARs), and machine learning-based methods, pharmacophore-based algorithms, shape-focused techniques, molecular interaction fields (MIFs), reactivity-focused techniques, protein–ligand docking, molecular dynamics (MD) simulations, and combinations of methods. Predictive metabolism is a developing area, and there is still enormous potential for improvement. However, it is clear that the combination of rapidly increasing amounts of available ligand- and structure-related experimental data (in particular, quantitative data) with novel and diverse simulation and modeling approaches is accelerating the development of effective tools for prediction of in vivo metabolism, which is reflected by the diverse and comprehensive data sources and methods for metabolism prediction reviewed here. This review attempts to survey the range and scope of computational methods applied to metabolism prediction and also to compare and contrast their applicability and performance. PMID:22339582
Integration of Titan supercomputer at OLCF with ATLAS Production System
NASA Astrophysics Data System (ADS)
Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It provides for running of standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect on Titan millions of core-hours per month and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Low-cost space-varying FIR filter architecture for computational imaging systems
NASA Astrophysics Data System (ADS)
Feng, Guotong; Shoaib, Mohammed; Schwartz, Edward L.; Dirk Robinson, M.
2010-01-01
Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system requires low-complexity algorithms enabling space-varying sharpening. In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore largely aberrated optical images. Our framework leverages the space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost space-varying FIR filter architecture.
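To illustrate the kind of radius-binned, space-varying FIR filtering described above, here is a minimal sketch; the two kernels and band edges are placeholders, not the designed filter bank from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

# Space-varying FIR sharpening that exploits rotational symmetry of the PSF:
# pixels are binned by radius from the optical axis and each radial band is
# filtered with a kernel tuned to that band (illustrative kernels only).
def radial_filter_bank(image, center, band_edges, kernels):
    ys, xs = np.indices(image.shape)
    radius = np.hypot(ys - center[0], xs - center[1])
    out = np.zeros_like(image, dtype=float)
    for (r_lo, r_hi), kernel in zip(band_edges, kernels):
        band = (radius >= r_lo) & (radius < r_hi)
        out[band] = convolve(image, kernel)[band]      # per-band FIR sharpening
    return out

image = np.random.default_rng(0).random((64, 64))
mild   = np.array([[0, -0.5, 0], [-0.5, 3.0, -0.5], [0, -0.5, 0]])
strong = np.array([[0, -1.0, 0], [-1.0, 5.0, -1.0], [0, -1.0, 0]])
sharpened = radial_filter_bank(image, center=(32, 32),
                               band_edges=[(0, 20), (20, 1e9)],
                               kernels=[mild, strong])
```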
PaPrBaG: A machine learning approach for the detection of novel pathogens from NGS data
NASA Astrophysics Data System (ADS)
Deneke, Carlus; Rentzsch, Robert; Renard, Bernhard Y.
2017-01-01
The reliable detection of novel bacterial pathogens from next-generation sequencing data is a key challenge for microbial diagnostics. Current computational tools usually rely on sequence similarity and often fail to detect novel species when closely related genomes are unavailable or missing from the reference database. Here we present the machine learning based approach PaPrBaG (Pathogenicity Prediction for Bacterial Genomes). PaPrBaG overcomes genetic divergence by training on a wide range of species with known pathogenicity phenotype. To that end we compiled a comprehensive list of pathogenic and non-pathogenic bacteria with human host, using various genome metadata in conjunction with a rule-based protocol. A detailed comparative study reveals that PaPrBaG has several advantages over sequence similarity approaches. Most importantly, it always provides a prediction whereas other approaches discard a large number of sequencing reads with low similarity to currently known reference genomes. Furthermore, PaPrBaG remains reliable even at very low genomic coverages. Combining PaPrBaG with existing approaches further improves prediction results.
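In the same spirit, a minimal sketch of read-level pathogenicity prediction from k-mer frequency features with a random forest is shown below; the reads and labels are randomly generated toy data standing in for a real training set of labeled genomes.

```python
from itertools import product
import numpy as np
from sklearn.ensemble import RandomForestClassifier

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def kmer_features(read):
    """Normalized k-mer frequency vector for one sequencing read."""
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(read) - K + 1):
        counts[read[i:i + K]] += 1
    total = max(sum(counts.values()), 1)
    return np.array([counts[k] / total for k in KMERS])

rng = np.random.default_rng(0)
reads = ["".join(rng.choice(list("ACGT"), 150)) for _ in range(200)]   # toy reads
labels = rng.integers(0, 2, size=200)      # 1 = pathogenic, 0 = non-pathogenic (toy)

X = np.array([kmer_features(r) for r in reads])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("predicted label for first read:", clf.predict(X[:1])[0])
```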
Mirroring co-evolving trees in the light of their topologies.
Hajirasouliha, Iman; Schönhuth, Alexander; de Juan, David; Valencia, Alfonso; Sahinalp, S Cenk
2012-05-01
Determining the interaction partners among protein/domain families poses hard computational problems, in particular in the presence of paralogous proteins. Available approaches aim to identify interaction partners among protein/domain families through maximizing the similarity between trimmed versions of their phylogenetic trees. Since maximization of any natural similarity score is computationally difficult, many approaches employ heuristics to evaluate the distance matrices corresponding to the tree topologies in question. In this article, we devise an efficient deterministic algorithm which directly maximizes the similarity between two leaf-labeled trees with edge lengths, obtaining a score-optimal alignment of the two trees in question. Our algorithm is significantly faster than those methods based on distance matrix comparison: 1 min on a single processor versus 730 h on a supercomputer. Furthermore, we outperform the current state-of-the-art exhaustive search approach in terms of precision, while incurring acceptable losses in recall. A C implementation of the method demonstrated in this article is available at http://compbio.cs.sfu.ca/mirrort.htm
Steffen, Michael; Curtis, Sean; Kirby, Robert M; Ryan, Jennifer K
2008-01-01
Streamline integration of fields produced by computational fluid mechanics simulations is a commonly used tool for the investigation and analysis of fluid flow phenomena. Integration is often accomplished through the application of ordinary differential equation (ODE) integrators--integrators whose error characteristics are predicated on the smoothness of the field through which the streamline is being integrated--smoothness which is not available at the inter-element level of finite volume and finite element data. Adaptive error control techniques are often used to ameliorate the challenge posed by inter-element discontinuities. As the root of the difficulties is the discontinuous nature of the data, we present a complementary approach of applying smoothness-enhancing accuracy-conserving filters to the data prior to streamline integration. We investigate whether such an approach applied to uniform quadrilateral discontinuous Galerkin (high-order finite volume) data can be used to augment current adaptive error control approaches. We discuss and demonstrate through numerical example the computational trade-offs exhibited when one applies such a strategy.
Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.
Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo
2016-07-01
During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On the one hand, producing a visual artefact has a number of advantages: it helps designers to externalise their thoughts and acts as a common language between different stakeholders. On the other hand, if an inappropriate visualisation method is employed it could hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify a) current HCI design methods, and b) approaches for selecting these methods. The resulting design methods are filtered to create a list of just visualisation methods. These are then categorised using the approaches identified in (b). As a result, 23 HCI visualisation methods are identified and categorised into 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cunningham, Sally Jo
The current crop of digital libraries for the computing community are strongly grounded in the conventional library paradigm: they provide indexes to support searching of collections of research papers. As such, these digital libraries are relatively impoverished; the present computing digital libraries omit many of the documents and resources that are currently available to computing researchers, and offer few browsing structures. These computing digital libraries were built 'top down': the resources and collection contents are forced to fit an existing digital library architecture. A 'bottom up' approach to digital library development would begin with an investigation of a community's information needs and available documents, and then design a library to organize those documents in such a way as to fulfill the community's needs. The 'home grown', informal information resources developed by and for the machine learning community are examined as a case study, to determine the types of information and document organizations 'native' to this group of researchers. The insights gained in this type of case study can be used to inform construction of a digital library tailored to this community.
NASA Technical Reports Server (NTRS)
Steinberg, R.
1984-01-01
It is suggested that the very short range forecast problem for aviation is one of data management rather than model development, and the possibility of improving the aviation forecast using current technology is underlined. The MERIT concept, which combines modeling technology with advanced man/computer interactive data management and enhancement techniques to provide a tailored, accurate, and timely forecast for aviation, is outlined. MERIT includes utilization of the Lagrangian approach; extensive use of automated aircraft reports to complement the present data base and provide the most current observations; and the concept that a 2 to 12 hour forecast provided every 3 hr can meet the domestic needs of aviation instead of the present 18 and 24 hr forecasts provided every 12 hr.
NASA Technical Reports Server (NTRS)
Rogers, Ralph V.
1992-01-01
This research project addresses the need to provide an efficient and safe mechanism to investigate the effects and requirements of the tiltrotor aircraft's commercial operations on air transportation infrastructures, particularly air traffic control. The mechanism of choice is computer simulation. Unfortunately, the fundamental paradigms of the current air traffic control simulation models do not directly support the broad range of operational options and environments necessary to study tiltrotor operations. Modification of current air traffic simulation models to meet these requirements does not appear viable given the range and complexity of issues needing resolution. As a result, the investigation of systemic, infrastructure issues surrounding the effects of tiltrotor commercial operations requires new approaches to simulation modeling. These models should be based on perspectives and ideas closer to those associated with tiltrotor air traffic operations.
A Multi-Agent Approach to the Simulation of Robotized Manufacturing Systems
NASA Astrophysics Data System (ADS)
Foit, K.; Gwiazda, A.; Banaś, W.
2016-08-01
The recent years of eventful industry development have brought many competing products addressed to the same market segment. Shortening the development cycle has become a necessity for a company that wants to remain competitive. With the switch to the Intelligent Manufacturing model, industry is searching for new scheduling algorithms, as the traditional ones no longer meet current requirements. The agent-based approach has been considered by many researchers as an important direction in the evolution of modern manufacturing systems. Due to the properties of multi-agent systems, this methodology is very helpful during creation of a model of a production system, allowing both the processing and informational parts to be depicted. The complexity of such an approach makes the analysis impossible without computer assistance. Computer simulation still uses a mathematical model to recreate a real situation, but nowadays 2D or 3D virtual environments or even virtual reality are used for realistic illustration of the considered systems. This paper focuses on a robotized manufacturing system and presents one possible approach to the simulation of such systems. The selection of the multi-agent approach is motivated by the flexibility of this solution, which offers modularity, robustness, and autonomy.
Ding, Xuemei; Bucholc, Magda; Wang, Haiying; Glass, David H; Wang, Hui; Clarke, Dave H; Bjourson, Anthony John; Dowey, Le Roy C; O'Kane, Maurice; Prasad, Girijesh; Maguire, Liam; Wong-Lin, KongFatt
2018-06-27
There is currently a lack of an efficient, objective and systemic approach towards the classification of Alzheimer's disease (AD), due to its complex etiology and pathogenesis. As AD is inherently dynamic, it is also not clear how the relationships among AD indicators vary over time. To address these issues, we propose a hybrid computational approach for AD classification and evaluate it on the heterogeneous longitudinal AIBL dataset. Specifically, using clinical dementia rating as an index of AD severity, the most important indicators (mini-mental state examination, logical memory recall, grey matter and cerebrospinal volumes from MRI and active voxels from PiB-PET brain scans, ApoE, and age) can be automatically identified from parallel data mining algorithms. In this work, Bayesian network modelling across different time points is used to identify and visualize time-varying relationships among the significant features, and importantly, in an efficient way using only coarse-grained data. Crucially, our approach suggests key data features and their appropriate combinations that are relevant for AD severity classification with high accuracy. Overall, our study provides insights into AD developments and demonstrates the potential of our approach in supporting efficient AD diagnosis.
NASA Astrophysics Data System (ADS)
Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce
2015-09-01
The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.
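A minimal sketch of the temporal-redundancy idea is given below: only scene points whose intensity or depth changed between frames are re-propagated, and the hologram is updated incrementally. The simple point-source phase term stands in for the hybrid point-source/wave-field computation, and all dimensions are toy values.

```python
import numpy as np

WAVELENGTH, PITCH, N = 532e-9, 8e-6, 256
k = 2 * np.pi / WAVELENGTH

def point_contribution(x0, y0, z0):
    """Complex wave contributed by one point source on the hologram plane."""
    ys, xs = (np.indices((N, N)) - N / 2) * PITCH
    r = np.sqrt((xs - x0) ** 2 + (ys - y0) ** 2 + z0 ** 2)
    return np.exp(1j * k * r) / r

def update_hologram(hologram, prev_frame, curr_frame):
    # Frames are (N, N, 2) arrays holding (intensity, depth) per scene point.
    changed = np.argwhere(np.any(prev_frame != curr_frame, axis=-1))
    for iy, ix in changed:                       # re-compute only changed points
        x0, y0 = (ix - N / 2) * PITCH, (iy - N / 2) * PITCH
        old_i, old_z = prev_frame[iy, ix]
        new_i, new_z = curr_frame[iy, ix]
        hologram -= old_i * point_contribution(x0, y0, old_z)
        hologram += new_i * point_contribution(x0, y0, new_z)
    return hologram

prev = np.zeros((N, N, 2)); prev[..., 1] = 0.2   # empty scene at 0.2 m depth
curr = prev.copy(); curr[100, 120] = (1.0, 0.25) # a single point appears
holo = update_hologram(np.zeros((N, N), complex), prev, curr)
```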
Inpainting approaches to fill in detector gaps in phase contrast computed tomography
NASA Astrophysics Data System (ADS)
Brun, F.; Delogu, P.; Longo, R.; Dreossi, D.; Rigon, L.
2018-01-01
Photon counting semiconductor detectors in radiation imaging present attractive properties, such as high efficiency, low noise, and energy sensitivity. The very complex electronics limits the sensitive area of current devices to a few square cm. This disadvantage is often compensated for by tiling a larger matrix with an adequate number of detector units, but this usually results in non-negligible insensitive gaps between two adjacent modules. When considering the case of Computed Tomography (CT), these gaps lead to degraded reconstructed images with severe streak and ring artifacts. This work presents two digital image processing solutions to fill in these gaps when considering the specific case of synchrotron radiation x-ray parallel beam phase contrast CT. While not demonstrated here with experimental data, other CT modalities and geometries, such as spectral and cone beam CT, might also benefit from the presented approaches.
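A common baseline for this kind of gap filling is interpolation across the insensitive detector columns of the sinogram before reconstruction. The sketch below shows that generic baseline only; the two specific solutions proposed in the paper are not detailed in the abstract, and the sinogram size and gap position here are invented.

```python
import numpy as np

def fill_gaps_linear(sinogram, gap_cols):
    """Fill insensitive detector columns of a sinogram by linear interpolation
    along the detector axis (a generic inpainting baseline)."""
    filled = sinogram.astype(float).copy()
    cols = np.arange(sinogram.shape[1])
    good = np.setdiff1d(cols, gap_cols)
    for row in filled:                                   # one projection angle at a time
        row[gap_cols] = np.interp(gap_cols, good, row[good])
    return filled

# toy sinogram (angles x detector pixels) with a 4-pixel inter-module gap
rng = np.random.default_rng(6)
sino = rng.uniform(0.5, 1.5, size=(180, 256))
gap = np.arange(126, 130)
sino[:, gap] = 0.0                                       # insensitive region between modules
print(fill_gaps_linear(sino, gap)[0, 124:132])           # gap values are now interpolated
```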
Multiprocessor architectural study
NASA Technical Reports Server (NTRS)
Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.
1972-01-01
An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field are used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance are discussed extensively, with particular attention to the design approach which utilizes a cache memory associated with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of a hit ratio. Memory management is envisioned as a virtual memory system implemented either through segmentation or paging. Addressing is discussed in terms of the various register designs adopted by current computers and those of advanced design.
Hydrodynamic models for slurry bubble column reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gidaspow, D.
1995-12-31
The objective of this investigation is to convert a "learning gas-solid-liquid" fluidization model into a predictive design model. This model is capable of predicting local gas, liquid and solids hold-ups and the basic flow regimes: the uniform bubbling, the industrially practical churn-turbulent (bubble coalescence) and the slugging regimes. Current reactor models incorrectly assume that the gas and the particle hold-ups (volume fractions) are uniform in the reactor. They must be given in terms of empirical correlations determined under conditions that radically differ from reactor operation. In the proposed hydrodynamic approach these hold-ups are computed from separate phase momentum balances. Furthermore, the kinetic theory approach computes the high slurry viscosities from collisions of the catalyst particles. Thus particle rheology is not an input into the model.
Quinn, T. Alexander; Kohl, Peter
2013-01-01
Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypotheses-generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215
Affective assessment of computer users based on processing the pupil diameter signal.
Ren, Peng; Barreto, Armando; Gao, Ying; Adjouadi, Malek
2011-01-01
Detecting affective changes of computer users is a current challenge in human-computer interaction which is being addressed with the help of biomedical engineering concepts. This article presents a new approach to recognize the affective state ("relaxation" vs. "stress") of a computer user from analysis of his/her pupil diameter variations caused by sympathetic activation. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw Pupil Diameter (PD) signal. Then three features are extracted from the preprocessed PD signal for the affective state classification. Finally, a random tree classifier is implemented, achieving an accuracy of 86.78%. In these experiments the Eye Blink Frequency (EBF), is also recorded and used for affective state classification, but the results show that the PD is a more promising physiological signal for affective assessment.
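A hedged sketch of a pipeline in the spirit described above: wavelet denoising (PyWavelets), a simple scalar Kalman smoother, three illustrative features, and a decision tree standing in for the paper's random tree classifier. The wavelet settings, Kalman parameters, feature set, and synthetic data are all assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
import pywt
from sklearn.tree import DecisionTreeClassifier

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def kalman_smooth(x, q=1e-5, r=1e-2):
    """Scalar constant-level Kalman filter to suppress abrupt jumps."""
    est, p, out = x[0], 1.0, []
    for z in x:
        p += q                                              # predict
        k = p / (p + r)                                      # gain
        est, p = est + k * (z - est), (1 - k) * p            # update
        out.append(est)
    return np.array(out)

def features(pd_segment):
    return [pd_segment.mean(), pd_segment.std(), np.ptp(pd_segment)]

# toy example with synthetic "relaxed" (0) vs "stressed" (1) pupil-diameter segments
rng = np.random.default_rng(0)
segs, labels = [], []
for label in (0, 1):
    for _ in range(20):
        base = 3.0 + 0.8 * label                             # assume stressed pupils dilate
        raw = base + 0.1 * rng.standard_normal(256)
        clean = kalman_smooth(wavelet_denoise(raw))
        segs.append(features(clean))
        labels.append(label)
clf = DecisionTreeClassifier(random_state=0).fit(segs, labels)
print(clf.score(segs, labels))
```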
Consolidation of cloud computing in ATLAS
NASA Astrophysics Data System (ADS)
Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration
2017-10-01
Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.
Docking and scoring in virtual screening for drug discovery: methods and applications.
Kitchen, Douglas B; Decornez, Hélène; Furr, John R; Bajorath, Jürgen
2004-11-01
Computational approaches that 'dock' small molecules into the structures of macromolecular targets and 'score' their potential complementarity to binding sites are widely used in hit identification and lead optimization. Indeed, there are now a number of drugs whose development was heavily influenced by or based on structure-based design and screening strategies, such as HIV protease inhibitors. Nevertheless, there remain significant challenges in the application of these approaches, in particular in relation to current scoring schemes. Here, we review key concepts and specific features of small-molecule-protein docking methods, highlight selected applications and discuss recent advances that aim to address the acknowledged limitations of established approaches.
Vote Stuffing Control in IPTV-based Recommender Systems
NASA Astrophysics Data System (ADS)
Bhatt, Rajen
Vote stuffing is a general problem in the functioning of content rating-based recommender systems. Currently, IPTV viewers browse various content based on program ratings. In this paper, we propose a fuzzy clustering-based approach to remove the effects of vote stuffing and consider only the genuine ratings for the programs over multiple genres. The approach requires only one authentic rating, which is generally available from recommendation system administrators or program broadcasters. The entire process is automated using fuzzy c-means clustering. Computational experiments performed over one real-world program rating database show that the proposed approach is very efficient for controlling vote stuffing.
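The core mechanism can be sketched with a minimal fuzzy c-means implementation: cluster the ratings and keep only the cluster whose centroid lies closest to the single authentic rating. The two-cluster setup, rating values, and fuzzifier are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
    """1-D fuzzy c-means; returns centroids and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))               # random initial memberships
    for _ in range(iters):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1)))                     # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# ratings on a 1-5 scale: genuine votes near 4 plus a block of stuffed 1-star votes
ratings = np.array([4.1, 3.9, 4.3, 4.0, 4.2, 1.0, 1.0, 1.1, 0.9, 1.0])
authentic = 4.0                                              # e.g. the broadcaster's own rating
centers, u = fuzzy_cmeans(ratings)
genuine_cluster = np.argmin(np.abs(centers - authentic))
keep = u.argmax(axis=1) == genuine_cluster
print("filtered mean rating:", ratings[keep].mean())
```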
Costa, Valerio; Federico, Antonio; Pollastro, Carla; Ziviello, Carmela; Cataldi, Simona; Formisano, Pietro; Ciccodicola, Alfredo
2016-01-01
Type 2 diabetes (T2D) is one of the most frequent mortality causes in western countries, with rapidly increasing prevalence. Anti-diabetic drugs are the first therapeutic approach, although many patients develop drug resistance. Most drug responsiveness variability can be explained by genetic causes. Inter-individual variability is principally due to single nucleotide polymorphisms, and differential drug responsiveness has been correlated to alteration in genes involved in drug metabolism (CYP2C9) or insulin signaling (IRS1, ABCC8, KCNJ11 and PPARG). However, most genome-wide association studies did not provide clues about the contribution of DNA variations to impaired drug responsiveness. Thus, characterizing T2D drug responsiveness variants is needed to guide clinicians toward tailored therapeutic approaches. Here, we extensively investigated polymorphisms associated with altered drug response in T2D, predicting their effects in silico. Combining different computational approaches, we focused on the expression pattern of genes correlated to drug resistance and inferred evolutionary conservation of polymorphic residues, computationally predicting the biochemical properties of polymorphic proteins. Using RNA-Sequencing followed by targeted validation, we identified and experimentally confirmed that two nucleotide variations in the CAPN10 gene—currently annotated as intronic—fall within two new transcripts in this locus. Additionally, we found that a Single Nucleotide Polymorphism (SNP), currently reported as intergenic, maps to the intron of a new transcript, harboring CAPN10 and GPR35 genes, which undergoes non-sense mediated decay. Finally, we analyzed variants that fall into non-coding regulatory regions of yet underestimated functional significance, predicting that some of them can potentially affect gene expression and/or post-transcriptional regulation of mRNAs affecting the splicing. PMID:27347941
Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong
2017-07-01
Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms-artificial neural networks (ANNs) and support vector regression (SVR)-are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3mm, beyond the general threshold of surgical accuracy and suitable for high fidelity AR systems. The SVR models perform better than the ANN's, with positional errors for SVR models reaching under 0.2mm. The results represent an improvement over existing deformation models for real time applications, providing smaller errors and high patient-specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue. Copyright © 2017 Elsevier B.V. All rights reserved.
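A hedged sketch of the surrogate-model idea: train one SVR per output dimension on precomputed (load → node displacement) pairs, then query the trained model instantly at run time. The `fake_fem` function below is only a stand-in for the pre-computed FEM simulations, and the feature ranges and kernel parameters are assumptions.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# placeholder for pre-computed FEM runs: load features -> displacements of a few nodes
def fake_fem(load):                # load = [magnitude, position, inclination angle]
    mag, pos, ang = load
    return np.array([mag * np.cos(ang) * np.exp(-pos),        # node 1 displacement
                     mag * np.sin(ang) * np.exp(-pos),        # node 2
                     0.5 * mag * np.exp(-2 * pos)])           # node 3

loads = rng.uniform([0.1, 0.0, 0.0], [2.0, 1.0, np.pi / 2], size=(300, 3))
disps = np.array([fake_fem(l) for l in loads])

# one RBF-kernel SVR per node coordinate, wrapped for multi-output regression
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=1e-3)).fit(loads, disps)

query = np.array([[1.0, 0.3, 0.5]])                           # measured intra-operative load
print("predicted displacements:", model.predict(query))
print("reference values       :", fake_fem(query[0]))
```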
A computational workflow for designing silicon donor qubits
Humble, Travis S.; Ericson, M. Nance; Jakowski, Jacek; ...
2016-09-19
Developing devices that can reliably and accurately demonstrate the principles of superposition and entanglement is an on-going challenge for the quantum computing community. Modeling and simulation offer attractive means of testing early device designs and establishing expectations for operational performance. However, the complex integrated material systems required by quantum device designs are not captured by any single existing computational modeling method. We examine the development and analysis of a multi-staged computational workflow that can be used to design and characterize silicon donor qubit systems with modeling and simulation. Our approach integrates quantum chemistry calculations with electrostatic field solvers to perform detailed simulations of a phosphorus dopant in silicon. We show how atomistic details can be synthesized into an operational model for the logical gates that define quantum computation in this particular technology. In conclusion, the resulting computational workflow realizes a design tool for silicon donor qubits that can help verify and validate current and near-term experimental devices.
Computer-Aided Analysis of Patents for Product Technology Maturity Forecasting
NASA Astrophysics Data System (ADS)
Liang, Yanhong; Gan, Dequan; Guo, Yingchun; Zhang, Peng
Product technology maturity forecasting is vital for any enterprise to seize opportunities for innovation and remain competitive in the long term. The Theory of Inventive Problem Solving (TRIZ) is acknowledged both as a systematic methodology for innovation and as a powerful tool for technology forecasting. Based on TRIZ, the state of the art in assessing the technology maturity of products and the limits of its application are discussed. Applying text mining and patent analysis technologies, this paper proposes a computer-aided approach for product technology maturity forecasting. It can overcome the shortcomings of the current methods.
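TRIZ-style maturity assessment is commonly based on fitting an S-curve to cumulative patent counts over time and reading off how far along the curve a technology sits. The sketch below shows that generic step with scipy's curve_fit; the logistic form, the example counts, and the maturity measure are illustrative assumptions rather than the paper's method.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """S-curve: cumulative patents saturating at K."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(2000, 2015)
cum_patents = np.array([3, 5, 9, 15, 26, 41, 60, 82, 104, 122, 135, 144, 149, 152, 154])

p0 = [cum_patents[-1] * 1.2, 0.5, years.mean()]              # rough initial guesses
(K, r, t0), _ = curve_fit(logistic, years, cum_patents, p0=p0, maxfev=10000)

maturity = cum_patents[-1] / K                                # fraction of the S-curve reached
print(f"saturation level K={K:.0f}, inflection year={t0:.1f}, maturity={maturity:.0%}")
```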
Applications of Multi-Agent Technology to Power Systems
NASA Astrophysics Data System (ADS)
Nagata, Takeshi
Currently, agents are the focus of intense interest in many sub-fields of computer science and artificial intelligence. Agents are being used in an increasingly wide variety of applications. Many important computing applications, such as planning, process control, communication networks and concurrent systems, will benefit from using the multi-agent system approach. A multi-agent system is a structure given by an environment together with a set of artificial agents capable of acting on this environment. Multi-agent models are oriented towards interactions, collaborative phenomena, and autonomy. This article presents applications of multi-agent technology to power systems.
1984-01-01
working drawings, lists, and miscellaneous information needed for construction and testing (fig. 4). Detail design and construction includes...still in test and evaluation phases, and is currently operational on a CDC computer. Its approach to management of geometric data is a unique and...been to provide the high degree of engineering user flexibility and yet achieve acceptable response times. In late 1983, a test system which has user
Enhanced Methods for Local Ancestry Assignment in Sequenced Admixed Individuals
Brown, Robert; Pasaniuc, Bogdan
2014-01-01
Inferring the ancestry at each locus in the genome of recently admixed individuals (e.g., Latino Americans) plays a major role in medical and population genetic inferences, ranging from finding disease-risk loci, to inferring recombination rates, to mapping missing contigs in the human genome. Although many methods for local ancestry inference have been proposed, most are designed for use with genotyping arrays and fail to make use of the full spectrum of data available from sequencing. In addition, current haplotype-based approaches are very computationally demanding, requiring large computational time for moderately large sample sizes. Here we present new methods for local ancestry inference that leverage continent-specific variants (CSVs) to attain increased performance over existing approaches in sequenced admixed genomes. A key feature of our approach is that it incorporates the admixed genomes themselves jointly with public datasets, such as 1000 Genomes, to improve the accuracy of CSV calling. We use simulations to show that our approach attains accuracy similar to widely used computationally intensive haplotype-based approaches with large decreases in runtime. Most importantly, we show that our method recovers comparable local ancestries, as the 1000 Genomes consensus local ancestry calls in the real admixed individuals from the 1000 Genomes Project. We extend our approach to account for low-coverage sequencing and show that accurate local ancestry inference can be attained at low sequencing coverage. Finally, we generalize CSVs to sub-continental population-specific variants (sCSVs) and show that in some cases it is possible to determine the sub-continental ancestry for short chromosomal segments on the basis of sCSVs. PMID:24743331
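A highly simplified sketch of the continent-specific-variant (CSV) idea: within windows of consecutive variants, count how many observed alleles match each continent's CSV panel and assign the window to the majority continent. The data layout, window size, and panels below are invented assumptions; the actual method additionally models sequencing coverage and calls CSVs jointly with public reference panels.

```python
import numpy as np

def assign_windows(genotypes, csv_panel, window=50):
    """genotypes: array of variant indices carried by an admixed haplotype.
    csv_panel: dict continent -> set of variant indices private to that continent.
    Returns the majority-vote continent per window of consecutive variants."""
    calls = []
    for start in range(0, len(genotypes), window):
        block = genotypes[start:start + window]
        scores = {pop: sum(v in panel for v in block) for pop, panel in csv_panel.items()}
        calls.append(max(scores, key=scores.get))
        # ties fall back to the first maximum; a real method would leave these unassigned
    return calls

# toy example: first half of the haplotype carries "EUR" CSVs, second half "AFR" CSVs
rng = np.random.default_rng(2)
csv_panel = {"EUR": set(range(0, 1000)), "AFR": set(range(1000, 2000))}
haplotype = list(rng.choice(1000, 100)) + list(rng.choice(1000, 100) + 1000)
print(assign_windows(np.array(haplotype), csv_panel, window=50))
```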
An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon
NASA Technical Reports Server (NTRS)
Rutherford, Brian
2000-01-01
The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided by each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components: one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses, based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small-scale problems. These examples give encouraging results. Directions for further research are indicated.
Laplace Transform Based Radiative Transfer Studies
NASA Astrophysics Data System (ADS)
Hu, Y.; Lin, B.; Ng, T.; Yang, P.; Wiscombe, W.; Herath, J.; Duffy, D.
2006-12-01
Multiple scattering is the major uncertainty for data analysis of space-based lidar measurements. Until now, accurate quantitative lidar data analysis has been limited to very thin objects that are dominated by single scattering, where photons from the laser beam only scatter a single time with particles in the atmosphere before reaching the receiver, and a simple linear relationship between physical properties and the lidar signal exists. In reality, multiple scattering is always a factor in space-based lidar measurement and it dominates space-based lidar returns from clouds, dust aerosols, vegetation canopy and phytoplankton. While multiple scattering returns are clear signals, the lack of a fast-enough lidar multiple scattering computation tool forces us to treat the signal as unwanted "noise" and use simple multiple scattering correction schemes to remove them. Such multiple scattering treatments waste the multiple scattering signals and may cause orders of magnitude errors in retrieved physical properties. Thus the lack of fast and accurate time-dependent radiative transfer tools significantly limits lidar remote sensing capabilities. Analyzing lidar multiple scattering signals requires fast and accurate time-dependent radiative transfer computations. Currently, multiple scattering is computed with Monte Carlo simulations. Monte Carlo simulations take minutes to hours and are too slow for interactive satellite data analysis processes and can only be used to help system/algorithm design and error assessment. We present an innovative physics approach to solve the time-dependent radiative transfer problem. The technique utilizes FPGA-based reconfigurable computing hardware. The approach is as follows. 1. Physics solution: Perform a Laplace transform on the time and spatial dimensions and a Fourier transform on the viewing azimuth dimension, converting the solution of the radiative transfer differential equation into a fast matrix inversion problem. The majority of the radiative transfer computation goes to matrix inversion processes, FFT and inverse Laplace transforms. 2. Hardware solution: Perform the well-defined matrix inversion, FFT and Laplace transforms on highly parallel, reconfigurable computing hardware. This physics-based computational tool leads to accurate quantitative analysis of space-based lidar signals and improves the data quality of current lidar missions such as CALIPSO. This presentation will introduce the basic idea of this approach, preliminary results based on SRC's FPGA-based Mapstation, and how we may apply it to CALIPSO data analysis.
Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; ...
2017-02-11
In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. Furthermore, the use of complete redundancy incurs significant overhead to the application performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.
In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. Furthermore, the use of complete redundancy incurs significant overhead to the application performance.
NASA Astrophysics Data System (ADS)
Jamroz, Benjamin F.; Klöfkorn, Robert
2016-08-01
The scalability of computational applications on current and next-generation supercomputers is increasingly limited by the cost of inter-process communication. We implement non-blocking asynchronous communication in the High-Order Methods Modeling Environment for the time integration of the hydrostatic fluid equations using both the spectral-element and discontinuous Galerkin methods. This allows the overlap of computation with communication, effectively hiding some of the costs of communication. A novel detail of our approach is that it allows some data movement to be performed during the asynchronous communication even in the absence of other computations. This method produces significant performance and scalability gains in large-scale simulations.
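A minimal mpi4py sketch of overlapping communication with computation: post non-blocking sends and receives, compute on interior data while the messages are in flight, then wait before using the boundary data. The ring topology and array sizes are assumptions for illustration and are unrelated to the spectral-element code described above.

```python
# run with, e.g.: mpiexec -n 4 python overlap.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

local = np.full(1000, float(rank))
send_buf = local[-1:].copy()                     # boundary value to share with the neighbour
recv_buf = np.empty(1)

# post non-blocking communication first ...
reqs = [comm.Isend(send_buf, dest=right, tag=0),
        comm.Irecv(recv_buf, source=left, tag=0)]

# ... then overlap it with work on interior data
interior_sum = local[1:-1].sum()

MPI.Request.Waitall(reqs)                        # communication must finish before boundary use
total = interior_sum + local[0] + local[-1] + recv_buf[0]
print(f"rank {rank}: neighbour value {recv_buf[0]}, local total {total}")
```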
Megawatt Electromagnetic Plasma Propulsion
NASA Technical Reports Server (NTRS)
Gilland, James; Lapointe, Michael; Mikellides, Pavlos
2003-01-01
The NASA Glenn Research Center program in megawatt level electric propulsion is centered on electromagnetic acceleration of quasi-neutral plasmas. Specific concepts currently being examined are the Magnetoplasmadynamic (MPD) thruster and the Pulsed Inductive Thruster (PIT). In the case of the MPD thruster, a multifaceted approach of experiments, computational modeling, and systems-level models of self-field MPD thrusters is underway. The MPD thruster experimental research consists of a 1-10 MWe, 2 ms pulse-forming-network, a vacuum chamber with two 32 diffusion pumps, and voltage, current, mass flow rate, and thrust stand diagnostics. Current focus is on obtaining repeatable thrust measurements of a Princeton Benchmark type self-field thruster operating at 0.5-1 g/s of argon. Operation with hydrogen is the ultimate goal to realize the increased efficiency anticipated using the lighter gas. Computational modeling is done using the MACH2 MHD code, which can include real gas effects for propellants of interest to MPD operation. The MACH2 code has been benchmarked against other MPD thruster data, and has been used to create a point design for a 3000 second specific impulse (Isp) MPD thruster. This design is awaiting testing in the experimental facility. For the PIT, a computational investigation using MACH2 has been initiated, with experiments awaiting further funding. Although the calculated results have been found to be sensitive to the initial ionization assumptions, recent results have agreed well with experimental data. Finally, a systems level self-field MPD thruster model has been developed that allows a mission planner or system designer to input Isp and power level into the model equations and obtain values for efficiency, mass flow rate, and input current and voltage. This model emphasizes algebraic simplicity to allow its incorporation into larger trajectory or system optimization codes. The systems level approach will be extended to the pulsed inductive thruster and other electrodeless thrusters at a future date.
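The kind of algebraically simple systems-level relation described can be illustrated with idealized rocket relations: given Isp and input power, an assumed thrust efficiency fixes exhaust velocity, mass flow rate, and thrust via jet power = efficiency × input power. The constant 40% efficiency and the formulas below are textbook idealizations used only for illustration, not the NASA model equations.

```python
G0 = 9.81  # standard gravity, m/s^2

def mpd_point_design(isp_s, power_w, efficiency=0.4):
    """Idealized relations: eta * P = 0.5 * mdot * ve^2, thrust = mdot * ve."""
    ve = isp_s * G0                               # exhaust velocity, m/s
    mdot = 2.0 * efficiency * power_w / ve ** 2   # mass flow rate, kg/s
    thrust = mdot * ve                            # thrust, N
    return {"ve_m_s": ve, "mdot_g_s": mdot * 1e3, "thrust_N": thrust}

# 3000 s Isp thruster at 1 MW input power with an assumed 40% thrust efficiency
print(mpd_point_design(3000.0, 1.0e6))
```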
Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.
NASA Astrophysics Data System (ADS)
Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca
2015-12-01
The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which proved successful and still meets its goals. However, Grid technology has not spread much to other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their existing computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities) which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fairshare-based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack-based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static splitting of the computing resources in the Tier-1 farm, while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition, according to suitable policies for the request and release of computing resources. Nodes requested into a partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as a Worker Node in the batch system farm to acting as a cloud compute node made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation, and its integration with our current batch system, LSF.
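A toy sketch of the dynamic partitioning idea: a policy loop moves hosts between the batch partition and the cloud partition according to pending demand, while respecting a minimum batch allocation. The thresholds, host counts, and rebalancing rule are invented for illustration and do not reflect the CNAF/LSF implementation.

```python
def rebalance(batch_hosts, cloud_hosts, batch_pending, cloud_pending,
              min_batch=10, step=2):
    """Move up to `step` hosts toward the partition with the higher pending load."""
    if cloud_pending > batch_pending and batch_hosts - step >= min_batch:
        return batch_hosts - step, cloud_hosts + step         # drain batch nodes to the cloud
    if batch_pending > cloud_pending and cloud_hosts >= step:
        return batch_hosts + step, cloud_hosts - step         # reclaim nodes for batch work
    return batch_hosts, cloud_hosts

batch, cloud = 100, 0
for batch_q, cloud_q in [(500, 0), (300, 200), (100, 400), (50, 50), (400, 10)]:
    batch, cloud = rebalance(batch, cloud, batch_q, cloud_q)
    print(f"pending batch={batch_q:3d} cloud={cloud_q:3d} -> hosts batch={batch} cloud={cloud}")
```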
A Network Thermodynamic Approach to Compartmental Analysis
Mikulecky, D. C.; Huf, E. G.; Thomas, S. R.
1979-01-01
We introduce a general network thermodynamic method for compartmental analysis which uses a compartmental model of sodium flows through frog skin as an illustrative example (Huf and Howell, 1974a). We use network thermodynamics (Mikulecky et al., 1977b) to formulate the problem, and a circuit simulation program (ASTEC 2, SPICE2, or PCAP) for computation. In this way, the compartment concentrations and net fluxes between compartments are readily obtained for a set of experimental conditions involving a square-wave pulse of labeled sodium at the outer surface of the skin. Qualitative features of the influx at the outer surface correlate very well with those observed for the short circuit current under another similar set of conditions by Morel and LeBlanc (1975). In related work, the compartmental model is used as a basis for simulation of the short circuit current and sodium flows simultaneously using a two-port network (Mikulecky et al., 1977a, and Mikulecky et al., A network thermodynamic model for short circuit current transients in frog skin. Manuscript in preparation; Gary-Bobo et al., 1978). The network approach lends itself to computation of classic compartmental problems in a simple manner using circuit simulation programs (Chua and Lin, 1975), and it further extends the compartmental models to more complicated situations involving coupled flows and non-linearities such as concentration dependencies, chemical reaction kinetics, etc. PMID:262387
Network thermodynamic approach compartmental analysis. Na+ transients in frog skin.
Mikulecky, D C; Huf, E G; Thomas, S R
1979-01-01
We introduce a general network thermodynamic method for compartmental analysis which uses a compartmental model of sodium flows through frog skin as an illustrative example (Huf and Howell, 1974a). We use network thermodynamics (Mikulecky et al., 1977b) to formulate the problem, and a circuit simulation program (ASTEC 2, SPICE2, or PCAP) for computation. In this way, the compartment concentrations and net fluxes between compartments are readily obtained for a set of experimental conditions involving a square-wave pulse of labeled sodium at the outer surface of the skin. Qualitative features of the influx at the outer surface correlate very well with those observed for the short circuit current under another similar set of conditions by Morel and LeBlanc (1975). In related work, the compartmental model is used as a basis for simulation of the short circuit current and sodium flows simultaneously using a two-port network (Mikulecky et al., 1977a, and Mikulecky et al., A network thermodynamic model for short circuit current transients in frog skin. Manuscript in preparation; Gary-Bobo et al., 1978). The network approach lends itself to computation of classic compartmental problems in a simple manner using circuit simulation programs (Chua and Lin, 1975), and it further extends the compartmental models to more complicated situations involving coupled flows and non-linearities such as concentration dependencies, chemical reaction kinetics, etc.
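A generic linear compartmental model driven by a square-wave tracer pulse can be sketched directly as an ODE system; the three-compartment layout, rate constants, and pulse duration below are illustrative assumptions only, not the frog-skin parameters of the paper (which were computed with circuit simulators such as SPICE2).

```python
import numpy as np
from scipy.integrate import solve_ivp

# transfer-rate matrix K (1/min): dC/dt = K @ C + input
K = np.array([[-0.5,  0.0,  0.0],
              [ 0.5, -0.3,  0.1],
              [ 0.0,  0.3, -0.1]])

def square_pulse(t, on=0.0, off=10.0, rate=1.0):
    """Labeled-sodium input applied to the outer compartment for 10 minutes."""
    return rate if on <= t < off else 0.0

def rhs(t, c):
    inflow = np.zeros(3)
    inflow[0] = square_pulse(t)
    return K @ c + inflow

sol = solve_ivp(rhs, (0.0, 60.0), y0=np.zeros(3), max_step=0.1,
                t_eval=np.linspace(0.0, 60.0, 301))

print("final compartment concentrations:", sol.y[:, -1])
print("net flux 1 -> 2 at end of run   :", 0.5 * sol.y[0, -1])
```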
Nanotechnology Review: Molecular Electronics to Molecular Motors
NASA Technical Reports Server (NTRS)
Srivastava, Deepak; Saini, Subhash (Technical Monitor)
1998-01-01
Reviewing the status of current approaches and future projections, as already published in scientific journals and books, the talk will summarize the direction in which computational and experimental nanotechnologies are progressing. Examples of nanotechnological approaches to the design and simulation of carbon nanotube based molecular electronic and mechanical devices will be presented. The concepts of nanotube-based gears and motors will be discussed. This is a non-technical review talk covering long-term, precompetitive basic research in already published material that has been presented to many US scientific meeting audiences.
2015-01-01
Drug development has a high attrition rate, with poor pharmacokinetic and safety properties a significant hurdle. Computational approaches may help minimize these risks. We have developed a novel approach (pkCSM) which uses graph-based signatures to develop predictive models of central ADMET properties for drug development. pkCSM performs as well or better than current methods. A freely accessible web server (http://structure.bioc.cam.ac.uk/pkcsm), which retains no information submitted to it, provides an integrated platform to rapidly evaluate pharmacokinetic and toxicity properties. PMID:25860834
NASA Technical Reports Server (NTRS)
Barghouty, A. F.
2014-01-01
Accurate estimates of electron-capture cross sections at energies relevant to the modeling of the transport, acceleration, and interaction of energetic neutral atoms (ENA) in space (approximately a few MeV per nucleon), and especially for multi-electron ions, must rely on a detailed, but computationally expensive, quantum-mechanical description of the collision process. Kuang's semi-classical approach is an elegant and efficient way to arrive at these estimates. Motivated by ENA modeling efforts for space applications, we shall briefly present this approach along with sample applications and report on current progress.
Investigation of tDCS volume conduction effects in a highly realistic head model
NASA Astrophysics Data System (ADS)
Wagner, S.; Rampersad, S. M.; Aydin, Ü.; Vorwerk, J.; Oostendorp, T. F.; Neuling, T.; Herrmann, C. S.; Stegeman, D. F.; Wolters, C. H.
2014-02-01
Objective. We investigate volume conduction effects in transcranial direct current stimulation (tDCS) and present a guideline for efficient and yet accurate volume conductor modeling in tDCS using our newly-developed finite element (FE) approach. Approach. We developed a new, accurate and fast isoparametric FE approach for high-resolution geometry-adapted hexahedral meshes and tissue anisotropy. To attain a deeper insight into tDCS, we performed computer simulations, starting with a homogenized three-compartment head model and extending this step by step to a six-compartment anisotropic model. Main results. We are able to demonstrate important tDCS effects. First, we find channeling effects of the skin, the skull spongiosa and the cerebrospinal fluid compartments. Second, current vectors tend to be oriented towards the closest higher conducting region. Third, anisotropic WM conductivity causes current flow in directions more parallel to the WM fiber tracts. Fourth, the highest cortical current magnitudes are not only found close to the stimulation sites. Fifth, the median brain current density decreases with increasing distance from the electrodes. Significance. Our results allow us to formulate a guideline for volume conductor modeling in tDCS. We recommend to accurately model the major tissues between the stimulating electrodes and the target areas, while for efficient yet accurate modeling, an exact representation of other tissues is less important. Because for the low-frequency regime in electrophysiology the quasi-static approach is justified, our results should also be valid for at least low-frequency (e.g., below 100 Hz) transcranial alternating current stimulation.
Emergence of Landauer transport from quantum dynamics: A model Hamiltonian approach
NASA Astrophysics Data System (ADS)
Pal, Partha Pratim; Ramakrishna, S.; Seideman, Tamar
2018-04-01
The Landauer expression for computing current-voltage characteristics in nanoscale devices is efficient but not suited to transient phenomena and a time-dependent current because it is applicable only when the charge carriers transition into a steady flux after an external perturbation. In this article, we construct a very general expression for time-dependent current in an electrode-molecule-electrode arrangement. Utilizing a model Hamiltonian (consisting of the subsystem energy levels and their electronic coupling terms), we propagate the Schrödinger wave function equation to numerically compute the time-dependent population in the individual subsystems. The current in each electrode (defined in terms of the rate of change of the corresponding population) has two components, one due to the charges originating from the same electrode and the other due to the charges initially residing at the other electrode. We derive an analytical expression for the first component and illustrate that it agrees reasonably with its numerical counterpart at early times. Exploiting the unitary evolution of a wavefunction, we construct a more general Landauer style formula and illustrate the emergence of Landauer transport from our simulations without the assumption of time-independent charge flow. Our generalized Landauer formula is valid at all times for models beyond the wide-band limit, non-uniform electrode density of states and for time and energy-dependent electronic coupling between the subsystems. Subsequently, we investigate the ingredients in our model that regulate the onset time scale of this steady state. We compare the performance of our general current expression with the Landauer current for time-dependent electronic coupling. Finally, we comment on the applicability of the Landauer formula to compute hot-electron current arising upon plasmon decoherence.
Emergence of Landauer transport from quantum dynamics: A model Hamiltonian approach.
Pal, Partha Pratim; Ramakrishna, S; Seideman, Tamar
2018-04-14
The Landauer expression for computing current-voltage characteristics in nanoscale devices is efficient but not suited to transient phenomena and a time-dependent current because it is applicable only when the charge carriers transition into a steady flux after an external perturbation. In this article, we construct a very general expression for time-dependent current in an electrode-molecule-electrode arrangement. Utilizing a model Hamiltonian (consisting of the subsystem energy levels and their electronic coupling terms), we propagate the Schrödinger wave function equation to numerically compute the time-dependent population in the individual subsystems. The current in each electrode (defined in terms of the rate of change of the corresponding population) has two components, one due to the charges originating from the same electrode and the other due to the charges initially residing at the other electrode. We derive an analytical expression for the first component and illustrate that it agrees reasonably with its numerical counterpart at early times. Exploiting the unitary evolution of a wavefunction, we construct a more general Landauer style formula and illustrate the emergence of Landauer transport from our simulations without the assumption of time-independent charge flow. Our generalized Landauer formula is valid at all times for models beyond the wide-band limit, non-uniform electrode density of states and for time and energy-dependent electronic coupling between the subsystems. Subsequently, we investigate the ingredients in our model that regulate the onset time scale of this steady state. We compare the performance of our general current expression with the Landauer current for time-dependent electronic coupling. Finally, we comment on the applicability of the Landauer formula to compute hot-electron current arising upon plasmon decoherence.
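A hedged numerical sketch of the general idea: a model Hamiltonian with two finite "electrode" level manifolds coupled through a single molecular level, the wavefunction propagated with the exact short-time propagator, and the current in an electrode taken as the rate of change of its population. All level spacings, couplings, the initial state, and time step are invented toy values, not the authors' parameters.

```python
import numpy as np
from scipy.linalg import expm

HBAR = 1.0  # reduced units

# model Hamiltonian: n_l left-electrode levels, 1 molecular level, n_r right levels
n_l = n_r = 20
eps_l = np.linspace(-1.0, 1.0, n_l) - 0.2        # left levels shifted by a bias
eps_r = np.linspace(-1.0, 1.0, n_r) + 0.2
eps_mol, coupling = 0.0, 0.05

dim = n_l + 1 + n_r
H = np.zeros((dim, dim))
np.fill_diagonal(H, np.concatenate([eps_l, [eps_mol], eps_r]))
H[:n_l, n_l] = H[n_l, :n_l] = coupling           # left-electrode / molecule coupling
H[n_l + 1:, n_l] = H[n_l, n_l + 1:] = coupling   # molecule / right-electrode coupling

# initial state: a normalized superposition localized on the left electrode levels
psi = np.zeros(dim, dtype=complex)
psi[:n_l] = 1.0 / np.sqrt(n_l)

dt, steps = 0.05, 400
U = expm(-1j * H * dt / HBAR)                    # exact propagator for one time step
pop_r = []
for _ in range(steps):
    psi = U @ psi
    pop_r.append(np.sum(np.abs(psi[n_l + 1:]) ** 2))

current_r = np.gradient(pop_r, dt)               # dP_R/dt as the right-electrode current
print("right-electrode current settles near:", current_r[-50:].mean())
```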
A Video Lecture and Lab-Based Approach for Learning of Image Processing Concepts
ERIC Educational Resources Information Center
Chiu, Chiung-Fang; Lee, Greg C.
2009-01-01
The current practice of traditional in-class lecture for learning computer science (CS) in the high schools of Taiwan is in need of revamping. Teachers instruct on the use of commercial software instead of teaching CS concepts to students. The lack of more suitable teaching materials and limited classroom time are the main reasons for the…
Advanced Civilian Aeronautical Concepts
NASA Technical Reports Server (NTRS)
Bushnell, Dennis M.
1996-01-01
Paper discusses alternatives to currently deployed systems which could provide revolutionary improvements in metrics applicable to civilian aeronautics. Specific missions addressed include subsonic transports, supersonic transports and personal aircraft. These alternative systems and concepts are enabled by recent and envisaged advancements in electronics, communications, computing and Designer Fluid Mechanics in conjunction with a design approach employing extensive synergistic interactions between propulsion, aerodynamics and structures.
Multivariate Density Estimation and Remote Sensing
NASA Technical Reports Server (NTRS)
Scott, D. W.
1983-01-01
Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. The approach is illustrated with a thunderstorm data analysis.
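A minimal example of nonparametric multivariate density estimation with a Gaussian kernel, using scipy's gaussian_kde (Scott's rule bandwidth by default). The two synthetic "channels" standing in for remote-sensing measurements are invented for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# two synthetic channels, e.g. reflectance in two bands, drawn from a bimodal mixture
a = rng.multivariate_normal([0.2, 0.3], [[0.01, 0.004], [0.004, 0.01]], 500)
b = rng.multivariate_normal([0.6, 0.7], [[0.02, -0.005], [-0.005, 0.02]], 500)
data = np.vstack([a, b]).T                        # gaussian_kde expects shape (d, n)

kde = gaussian_kde(data)                          # bandwidth chosen by Scott's rule by default
grid_x, grid_y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
density = kde(np.vstack([grid_x.ravel(), grid_y.ravel()])).reshape(grid_x.shape)
print("estimated density peak value:", density.max())
```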
Integrating Educational Technologies into the Culinary Classroom and Instructional Kitchen
ERIC Educational Resources Information Center
Glass, Samuel
2005-01-01
The integration of educational technologies has and will continue to change the nature of education. From the advent of the printed word to the current use of computer assisted teaching and learning, the use of technology is an integral part of modern day realities and approaches to education. The purpose of this paper is to review some of the…
Samad, Abdul Fatah A; Nazaruddin, Nazaruddin; Murad, Abdul Munir Abdul; Jani, Jaeyres; Zainal, Zamri; Ismail, Ismanizan
2018-03-01
In the current era, the majority of microRNAs (miRNAs) are being discovered through computational approaches, which are largely confined to model plants. Here, for the first time, we describe the identification and characterization of novel miRNAs in a non-model plant, Persicaria minor (P. minor), using a computational approach. Unannotated sequences from deep sequencing were analyzed based on previously well-established parameters. Around 24 putative novel miRNAs were identified from 6,417,780 reads of the unannotated sequence, representing 11 unique putative miRNA sequences. The PsRobot target prediction tool was deployed to identify the target transcripts of the putative novel miRNAs. Most of the predicted target transcripts (mRNAs) are known to be involved in plant development and stress responses. Gene ontology showed that the majority of the putative novel miRNA targets are involved in cellular components (69.07%), followed by molecular functions (30.08%) and biological processes (0.85%). Out of the 11 unique putative miRNAs, 7 were validated through semi-quantitative PCR. These novel miRNA discoveries in P. minor may extend and update the current public miRNA databases.
Chen, Guangchao; Peijnenburg, Willie; Xiao, Yinlong; Vijver, Martina G
2017-07-12
As listed by the European Chemicals Agency, the three elements in evaluating the hazards of engineered nanomaterials (ENMs) include the integration and evaluation of toxicity data, categorization and labeling of ENMs, and derivation of hazard threshold levels for human health and the environment. Assessing the hazards of ENMs solely based on laboratory tests is time-consuming, resource intensive, and constrained by ethical considerations. The adoption of computational toxicology into this task has recently become a priority. Alternative approaches such as (quantitative) structure-activity relationships ((Q)SAR) and read-across are of significant help in predicting nanotoxicity and filling data gaps, and in classifying the hazards of ENMs to individual species. The species sensitivity distribution (SSD) approach can then serve to establish ENM hazard thresholds that sufficiently protect the ecosystem. This article critically reviews the current knowledge on the development of in silico models in predicting and classifying the hazard of metallic ENMs, and the development of SSDs for metallic ENMs. Further discussion includes the significance of well-curated experimental datasets and the interpretation of toxicity mechanisms of metallic ENMs based on reported models. An outlook is also given on future directions of research in this frontier.
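The SSD step can be sketched by fitting a log-normal distribution to per-species toxicity endpoints and reading off the hazardous concentration affecting 5% of species (HC5). The endpoint values below are invented, and the simple moment fit is only one of several ways an SSD is parameterized in practice.

```python
import numpy as np
from scipy.stats import norm

# per-species EC50 values (mg/L) for a hypothetical metallic ENM
ec50 = np.array([0.8, 1.5, 2.2, 3.9, 5.0, 7.4, 12.0, 20.0, 35.0, 60.0])

log_vals = np.log10(ec50)
mu, sigma = log_vals.mean(), log_vals.std(ddof=1)

hc5 = 10 ** norm.ppf(0.05, loc=mu, scale=sigma)   # concentration nominally protecting 95% of species
print(f"log-normal SSD: mu={mu:.2f}, sigma={sigma:.2f}, HC5={hc5:.2f} mg/L")

# fraction of species predicted to be affected at a given exposure level
exposure = 1.0
print("affected fraction at 1 mg/L:", norm.cdf(np.log10(exposure), loc=mu, scale=sigma))
```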
Large scale analysis of the mutational landscape in HT-SELEX improves aptamer discovery
Hoinka, Jan; Berezhnoy, Alexey; Dao, Phuong; Sauna, Zuben E.; Gilboa, Eli; Przytycka, Teresa M.
2015-01-01
High-Throughput (HT) SELEX combines SELEX (Systematic Evolution of Ligands by EXponential Enrichment), a method for aptamer discovery, with massively parallel sequencing technologies. This emerging technology provides data for a global analysis of the selection process and for simultaneous discovery of a large number of candidates but currently lacks dedicated computational approaches for their analysis. To close this gap, we developed novel in-silico methods to analyze HT-SELEX data and utilized them to study the emergence of polymerase errors during HT-SELEX. Rather than considering these errors as a nuisance, we demonstrated their utility for guiding aptamer discovery. Our approach builds on two main advancements in aptamer analysis: AptaMut—a novel technique allowing for the identification of polymerase errors conferring an improved binding affinity relative to the ‘parent’ sequence and AptaCluster—an aptamer clustering algorithm which is to our best knowledge, the only currently available tool capable of efficiently clustering entire aptamer pools. We applied these methods to an HT-SELEX experiment developing aptamers against Interleukin 10 receptor alpha chain (IL-10RA) and experimentally confirmed our predictions thus validating our computational methods. PMID:25870409
Modeling direct band-to-band tunneling: From bulk to quantum-confined semiconductor devices
NASA Astrophysics Data System (ADS)
Carrillo-Nuñez, H.; Ziegler, A.; Luisier, M.; Schenk, A.
2015-06-01
A rigorous framework to study direct band-to-band tunneling (BTBT) in homo- and hetero-junction semiconductor nanodevices is introduced. An interaction Hamiltonian coupling conduction and valence bands (CVBs) is derived using a multiband envelope method. A general form of the BTBT probability is then obtained from the linear response to the "CVBs interaction" that drives the system out of equilibrium. Simple expressions in terms of the one-electron spectral function are developed to compute the BTBT current in two- and three-dimensional semiconductor structures. Additionally, a two-band envelope equation based on the Flietner model of imaginary dispersion is proposed for the same purpose. In order to characterize their accuracy and differences, both approaches are compared with full-band, atomistic quantum transport simulations of Ge, InAs, and InAs-Si Esaki diodes. As another numerical application, the BTBT current in InAs-Si nanowire tunnel field-effect transistors is computed. It is found that both approaches agree with high accuracy. The first one is considerably easier to conceive and could be implemented straightforwardly in existing quantum transport tools based on the effective mass approximation to account for BTBT in nanodevices.
Roy, Vandana; Shukla, Shailja; Shukla, Piyush Kumar; Rawat, Paresh
2017-01-01
Motion generated while capturing the electroencephalography (EEG) signal leads to artifacts, which may reduce the quality of the obtained information. Existing artifact removal methods use canonical correlation analysis (CCA) along with ensemble empirical mode decomposition (EEMD) and the wavelet transform (WT). A new approach is proposed to further improve the filtering performance and reduce the filter computation time in highly noisy environments. This new CCA approach, designed for EEG motion artifact removal, is based on the Gaussian elimination method, which calculates the correlation coefficients using the backslash operation. Gaussian elimination is used to solve the linear equations for the eigenvalues, which reduces the computational cost of the CCA method. The proposed method is tested against currently available artifact removal techniques based on EEMD-CCA and the wavelet transform. The performance is tested on synthetic and real EEG signal data. The proposed artifact removal technique is evaluated using efficiency metrics such as the del signal-to-noise ratio (DSNR), lambda (λ), root mean square error (RMSE), elapsed time, and ROC parameters. The results indicate the suitability of the proposed algorithm for use as a supplement to algorithms currently in use.
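A hedged sketch of CCA computed through linear solves, in the spirit of the "backslash"-style step described: the regression matrices are obtained with np.linalg.solve (Gaussian elimination under the hood) and the canonical correlations follow from an eigen-decomposition. This is a generic CCA on toy data, not the authors' EEMD/wavelet pipeline, and the regularization constant is an assumption.

```python
import numpy as np

def cca_correlations(X, Y, reg=1e-8):
    """Canonical correlations via linear solves (Gaussian elimination) and an eigenproblem."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # "backslash"-style steps: solve Sxx * A = Sxy and Syy * B = Syx
    A = np.linalg.solve(Sxx, Sxy)
    B = np.linalg.solve(Syy, Sxy.T)
    eigvals = np.linalg.eigvals(A @ B)            # eigenvalues = squared canonical correlations
    return np.sqrt(np.clip(np.sort(eigvals.real)[::-1], 0.0, 1.0))

# toy mixtures: X and Y share one latent source plus noise (a stand-in for EEG vs. reference)
rng = np.random.default_rng(4)
s = rng.standard_normal((500, 1))
X = np.hstack([s + 0.1 * rng.standard_normal((500, 1)), rng.standard_normal((500, 2))])
Y = np.hstack([s + 0.1 * rng.standard_normal((500, 1)), rng.standard_normal((500, 2))])
print("canonical correlations:", np.round(cca_correlations(X, Y), 3))
```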
Watch what you say, your computer might be listening: A review of automated speech recognition
NASA Technical Reports Server (NTRS)
Degennaro, Stephen V.
1991-01-01
Spoken language is the most convenient and natural means by which people interact with each other and is, therefore, a promising candidate for human-machine interactions. Speech also offers an additional channel for hands-busy applications, complementing the use of motor output channels for control. Current speech recognition systems vary considerably across a number of important characteristics, including vocabulary size, speaking mode, training requirements for new speakers, robustness to acoustic environments, and accuracy. Algorithmically, these systems range from rule-based techniques through more probabilistic or self-learning approaches such as hidden Markov modeling and neural networks. This tutorial begins with a brief summary of the relevant features of current speech recognition systems and the strengths and weaknesses of the various algorithmic approaches.
Bondü, Rebecca; Scheithauer, Herbert
2009-01-01
In March and September 2009, the school shootings in Winnenden and Ansbach once again demonstrated the need for preventive approaches to avert further offences in Germany. Due to the low frequency of such offences and the low specificity of the relevant risk factors known so far, prediction and prevention seem difficult, though. Nonetheless, several preventive approaches are currently under discussion. The present article highlights these approaches and their specific advantages and disadvantages. As school shootings are multicausally determined, approaches focusing only on single aspects (e.g., prohibiting violent computer games or further strengthening gun laws) do not meet the requirements. Other measures, such as installing technical safety devices or optimizing the response of police and school attendants, are intended to reduce harm in case of emergency. Instead, scientifically founded and promising preventive approaches focus on secondary prevention and for this purpose employ the threat assessment approach, which is widespread in the USA. In this framework, responsible occupational groups such as teachers, school psychologists and police officers are to be trained in identifying students' warning signs, systematically assessing the danger these students pose to themselves and others, and initiating suitable interventions.
Electric current variations and 3D magnetic configuration of coronal jets
NASA Astrophysics Data System (ADS)
Schmieder, Brigitte; Harra, Louise K.; Aulanier, Guillaume; Guo, Yang; Demoulin, Pascal; Moreno-Insertis, Fernando
Coronal jets (EUV) were observed by SDO/AIA on September 17, 2010. HMI and THEMIS measured the vector magnetic field, from which we derived the magnetic flux, the photospheric velocity and the vertical electric current. The magnetic configuration was computed with a nonlinear force-free approach. The photospheric current patterns of the recurrent jets were associated with the quasi-separatrix layers deduced from the magnetic extrapolation. The large twisted nearby Eiffel-tower-shaped jet was also caused by reconnection in current layers containing a null point. This jet cannot be classified precisely within either the quiescent or the blowout jet types. We will show the importance of the existence of bald patches in the low atmosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, A. J.
In our Exascale Computing Project (ECP) we seek to simulate earthquake ground motions at much higher frequency than is currently possible. Previous simulations in the SFBA were limited to 0.5-1 Hz or lower (Aagaard et al. 2008, 2010), while we have recently simulated the response to 5 Hz. In order to improve confidence in simulated ground motions, we must accurately represent the three-dimensional (3D) sub-surface material properties that govern seismic wave propagation over a broad region. We are currently focusing on the San Francisco Bay Area (SFBA) with a Cartesian domain of size 120 x 80 x 35 km, but this area will be expanded to cover a larger domain. Currently, the United States Geologic Survey (USGS) has a 3D model of the SFBA for seismic simulations. However, this model suffers from two serious shortcomings relative to our application: 1) it does not fit most of the available low frequency (< 1 Hz) seismic waveforms from moderate (magnitude M 3.5-5.0) earthquakes; and 2) it is represented with much lower resolution than necessary for the high frequency simulations (> 5 Hz) we seek to perform. The current model will serve as a starting model for full waveform tomography based on 3D sensitivity kernels. This report serves as the deliverable for our ECP FY2017 Quarter 4 milestone to FY 2018 “Computational approach to developing model updates”. We summarize the current state of 3D seismic simulations in the SFBA and demonstrate the performance of the USGS 3D model for a few selected paths. We show the available open-source waveform data sets for model updates, based on moderate earthquakes recorded in the region. We present a plan for improving the 3D model utilizing the available data and further development of our SW4 application. We project how the model could be improved and present options for further improvements focused on the shallow geotechnical layers using dense passive recordings of ambient and human-induced noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Salvador B.
Smart grids are a crucial component for enabling the nation’s future energy needs, as part of a modernization effort led by the Department of Energy. Smart grids and smart microgrids are being considered in niche applications, and as part of a comprehensive energy strategy to help manage the nation’s growing energy demands, for critical infrastructures, military installations, small rural communities, and large populations with limited water supplies. As part of a far-reaching strategic initiative, Sandia National Laboratories (SNL) presents herein a unique, three-pronged approach to integrate small modular reactors (SMRs) into microgrids, with the goal of providing economically-competitive, reliable, and secure energy to meet the nation’s needs. SNL’s triad methodology involves an innovative blend of smart microgrid technology, high performance computing (HPC), and advanced manufacturing (AM). In this report, Sandia’s current capabilities in those areas are summarized, as well as paths forward that will enable DOE to achieve its energy goals. In the area of smart grid/microgrid technology, Sandia’s current computational capabilities can model the entire grid, including temporal aspects and cyber security issues. Our tools include system development, integration, testing and evaluation, monitoring, and sustainment.
Next Generation Sequence Analysis and Computational Genomics Using Graphical Pipeline Workflows
Torri, Federica; Dinov, Ivo D.; Zamanyan, Alen; Hobel, Sam; Genco, Alex; Petrosyan, Petros; Clark, Andrew P.; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Knowles, James A.; Ames, Joseph; Kesselman, Carl; Toga, Arthur W.; Potkin, Steven G.; Vawter, Marquis P.; Macciardi, Fabio
2012-01-01
Whole-genome and exome sequencing have already proven to be essential and powerful methods to identify genes responsible for simple Mendelian inherited disorders. These methods can be applied to complex disorders as well, and have been adopted as one of the current mainstream approaches in population genetics. These achievements have been made possible by next generation sequencing (NGS) technologies, which require substantial bioinformatics resources to analyze the dense and complex sequence data. The huge analytical burden of data from genome sequencing might be seen as a bottleneck slowing the publication of NGS papers at this time, especially in psychiatric genetics. We review the existing methods for processing NGS data, to place into context the rationale for the design of a computational resource. We describe our method, the Graphical Pipeline for Computational Genomics (GPCG), to perform the computational steps required to analyze NGS data. The GPCG implements flexible workflows for basic sequence alignment, sequence data quality control, single nucleotide polymorphism analysis, copy number variant identification, annotation, and visualization of results. These workflows cover all the analytical steps required for NGS data, from processing the raw reads to variant calling and annotation. The current version of the pipeline is freely available at http://pipeline.loni.ucla.edu. These applications of NGS analysis may gain clinical utility in the near future (e.g., identifying miRNA signatures in diseases) when the bioinformatics approach is made feasible. Taken together, the annotation tools and strategies that have been developed to retrieve information and test hypotheses about the functional role of variants present in the human genome will help to pinpoint the genetic risk factors for psychiatric disorders. PMID:23139896
Computer-Aided Discovery Tools for Volcano Deformation Studies with InSAR and GPS
NASA Astrophysics Data System (ADS)
Pankratius, V.; Pilewskie, J.; Rude, C. M.; Li, J. D.; Gowanlock, M.; Bechor, N.; Herring, T.; Wauthier, C.
2016-12-01
We present a Computer-Aided Discovery approach that facilitates the cloud-scalable fusion of different data sources, such as GPS time series and Interferometric Synthetic Aperture Radar (InSAR), for the purpose of identifying the expansion centers and deformation styles of volcanoes. The tools currently developed at MIT allow the definition of alternatives for data processing pipelines that use various analysis algorithms. The Computer-Aided Discovery system automatically generates algorithmic and parameter variants to help researchers explore multidimensional data processing search spaces efficiently. We present first application examples of this technique using GPS data on volcanoes on the Aleutian Islands and work in progress on combined GPS and InSAR data in Hawaii. In the model search context, we also illustrate work in progress combining time series Principal Component Analysis with InSAR augmentation to constrain the space of possible model explanations on current empirical data sets and achieve a better identification of deformation patterns. This work is supported by NASA AIST-NNX15AG84G and NSF ACI-1442997 (PI: V. Pankratius).
A Multiphysics and Multiscale Software Environment for Modeling Astrophysical Systems
NASA Astrophysics Data System (ADS)
Portegies Zwart, Simon; McMillan, Steve; O'Nualláin, Breanndán; Heggie, Douglas; Lombardi, James; Hut, Piet; Banerjee, Sambaran; Belkus, Houria; Fragos, Tassos; Fregeau, John; Fuji, Michiko; Gaburov, Evghenii; Glebbeek, Evert; Groen, Derek; Harfst, Stefan; Izzard, Rob; Jurić, Mario; Justham, Stephen; Teuben, Peter; van Bever, Joris; Yaron, Ofer; Zemp, Marcel
We present MUSE, a software framework for tying together existing computational tools for different astrophysical domains into a single multiphysics, multiscale workload. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for a generalized stellar systems workload. MUSE has now reached a "Noah's Ark" milestone, with two available numerical solvers for each domain. MUSE can treat small stellar associations, galaxies and everything in between, including planetary systems, dense stellar clusters and galactic nuclei. Here we demonstrate an example calculated with MUSE: the merger of two galaxies. In addition, we demonstrate the operation of MUSE on a distributed computer. The current MUSE code base is publicly available as open source at http://muse.li.
Estimation of Geostrophic Currents in the Bay of Bengal using In-situ Observations.
NASA Astrophysics Data System (ADS)
T, V. R.
2014-12-01
Geostrophic Currents (GCs) can be estimated from temperature and salinity observations. In this study, an attempt has been made to compute GCs using temperature and salinity observations from Expendable Bathythermograph (XBT) and CTD measurements over the Bay of Bengal (BoB). Although Argo observations have become available in recent years, they cover only a limited period at coarse temporal resolution. In the BoB, not enough simultaneous hydrographic temperature and salinity data are available with reasonable spatial resolution (~one degree) over a sufficiently long period. To overcome the limitations of GCs computed from XBT profiles, temperature-salinity relationships derived from simultaneous temperature and salinity observations were used. We have demonstrated that GCs can be computed with an accuracy of better than 8.5 cm/s (root mean square error) at the surface, using temperature from XBT and salinity from the climatological record. This error reduces with increasing depth. Finally, we demonstrated the application of this approach to study the temporal variation of the GCs during 1992 to 2012 along an XBT transect.
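For orientation, the sketch below (not the study's code) illustrates the dynamic-height route from temperature/salinity profiles at two stations to a geostrophic velocity estimate; the linear equation of state, the level of no motion at the deepest common depth, and all function and variable names are illustrative assumptions, and the sign depends on the station orientation.

```python
import numpy as np

G = 9.81          # gravity, m/s^2
OMEGA = 7.292e-5  # Earth's rotation rate, rad/s
RHO0 = 1025.0     # reference density, kg/m^3

def density(T, S):
    """Simplified linear equation of state (illustrative only)."""
    return RHO0 * (1.0 - 2.0e-4 * (T - 10.0) + 7.6e-4 * (S - 35.0))

def geostrophic_velocity(T1, S1, T2, S2, z, lat_deg, dx):
    """Geostrophic velocity (m/s) normal to the section between two stations.

    z  : common depths (m, positive down, increasing)
    dx : station separation (m)
    """
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))          # Coriolis parameter
    # Specific volume anomaly relative to the reference density
    sva1 = 1.0 / density(T1, S1) - 1.0 / RHO0
    sva2 = 1.0 / density(T2, S2) - 1.0 / RHO0
    # Dynamic height anomaly relative to the deepest level (level of no motion)
    dh1 = np.array([np.trapz(sva1[i:], z[i:]) for i in range(len(z))])
    dh2 = np.array([np.trapz(sva2[i:], z[i:]) for i in range(len(z))])
    # Velocity shear from the horizontal dynamic-height gradient (thermal wind)
    return RHO0 * G * (dh2 - dh1) / (f * dx)
```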
An adjoint method for gradient-based optimization of stellarator coil shapes
NASA Astrophysics Data System (ADS)
Paul, E. J.; Landreman, M.; Bader, A.; Dorland, W.
2018-07-01
We present a method for stellarator coil design via gradient-based optimization of the coil-winding surface. The REGCOIL (Landreman 2017 Nucl. Fusion 57 046003) approach is used to obtain the coil shapes on the winding surface using a continuous current potential. We apply the adjoint method to calculate derivatives of the objective function, allowing for efficient computation of analytic gradients while eliminating the numerical noise of approximate derivatives. We are able to improve engineering properties of the coils by targeting the root-mean-squared current density in the objective function. We obtain winding surfaces for W7-X and HSX which simultaneously decrease the normal magnetic field on the plasma surface and increase the surface-averaged distance between the coils and the plasma in comparison with the actual winding surfaces. The coils computed on the optimized surfaces feature a smaller toroidal extent and curvature and increased inter-coil spacing. A technique for computation of the local sensitivity of figures of merit to normal displacements of the winding surface is presented, with potential applications for understanding engineering tolerances.
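As a general aside (written generically, not in REGCOIL's notation), the discrete adjoint identity that makes this gradient computation efficient is

$$\frac{df}{d\mathbf{p}} \;=\; \frac{\partial f}{\partial \mathbf{p}} \;-\; \boldsymbol{\lambda}^{T}\frac{\partial \mathbf{R}}{\partial \mathbf{p}},
\qquad
\left(\frac{\partial \mathbf{R}}{\partial \mathbf{x}}\right)^{T}\boldsymbol{\lambda} \;=\; \left(\frac{\partial f}{\partial \mathbf{x}}\right)^{T},$$

where R(x, p) = 0 is the state equation (here the linear solve for the current potential), x the state, p the winding-surface parameters, and f the figure of merit: one additional linear solve yields the full gradient, independent of the number of parameters.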
Computation of breast ptosis from 3D surface scans of the female torso.
Li, Danni; Cheong, Audrey; Reece, Gregory P; Crosby, Melissa A; Fingeret, Michelle C; Merchant, Fatima A
2016-11-01
Stereophotography is now finding a niche in clinical breast surgery, and several methods for quantitatively measuring breast morphology from 3D surface images have been developed. Breast ptosis (sagging of the breast), which refers to the extent by which the nipple is lower than the inframammary fold (the contour along which the inferior part of the breast attaches to the chest wall), is an important morphological parameter that is frequently used for assessing the outcome of breast surgery. This study presents a novel algorithm that utilizes three-dimensional (3D) features such as surface curvature and orientation for the assessment of breast ptosis from 3D scans of the female torso. The performance of the proposed computational approach was compared against the consensus of manual ptosis ratings by nine plastic surgeons and against current 2D photogrammetric methods. Compared to the 2D methods, the average accuracy for 3D features was ~13% higher, with an increase in precision, recall, and F-score of 37%, 29%, and 33%, respectively. The proposed computational approach provides an improved and unbiased objective method for rating ptosis when compared to qualitative visualization by observers and distance-based 2D photogrammetry approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tensor scale-based fuzzy connectedness image segmentation
NASA Astrophysics Data System (ADS)
Saha, Punam K.; Udupa, Jayaram K.
2003-05-01
Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - in affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of the previous approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows an effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.
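For orientation, a minimal sketch of the generic fuzzy connectedness computation (max-min path strength from a seed, computed Dijkstra-style) is given below; the intensity-only affinity is a placeholder for the tensor-scale-based affinity described above, and all names are illustrative assumptions rather than the authors' implementation.

```python
import heapq
import numpy as np

def affinity(img, a, b, sigma=20.0):
    """Placeholder homogeneity-based affinity between neighboring pixels a and b."""
    return float(np.exp(-((img[a] - img[b]) ** 2) / (2.0 * sigma ** 2)))

def fuzzy_connectedness(img, seed):
    """Max-min path strength from `seed` to every pixel (Dijkstra-like sweep)."""
    conn = np.zeros(img.shape)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        strength, (r, c) = heapq.heappop(heap)
        strength = -strength
        if strength < conn[r, c]:
            continue                        # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < img.shape[0] and 0 <= nb[1] < img.shape[1]:
                cand = min(strength, affinity(img, (r, c), nb))
                if cand > conn[nb]:
                    conn[nb] = cand
                    heapq.heappush(heap, (-cand, nb))
    return conn
```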
NASA Astrophysics Data System (ADS)
Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Carrete, Jesús; Toher, Cormac; de Jong, Maarten; Asta, Mark; Fornari, Marco; Nardelli, Marco Buongiorno; Curtarolo, Stefano
2017-10-01
One of the most accurate approaches for calculating lattice thermal conductivity, κ, is solving the Boltzmann transport equation starting from third-order anharmonic force constants. In addition to the underlying approximations of ab-initio parameterization, two main challenges are associated with this path: high computational costs and lack of automation in the frameworks using this methodology, which affect the discovery rate of novel materials with ad-hoc properties. Here, the Automatic Anharmonic Phonon Library (AAPL) is presented. It efficiently computes interatomic force constants by making effective use of crystal symmetry analysis, it solves the Boltzmann transport equation to obtain κ, and it allows a fully integrated operation with minimum user intervention, a rational addition to the current high-throughput accelerated materials development framework AFLOW. An "experiment vs. theory" study of the approach is shown, comparing accuracy and speed with respect to other available packages, and for materials characterized by strong electron localization and correlation. Combining AAPL with the pseudo-hybrid functional ACBN0, it is possible to improve accuracy without increasing computational requirements.
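For orientation, the single-mode relaxation-time solution of the Boltzmann transport equation, which solvers of this kind start from (and iterate beyond), can be written as

$$\kappa_{\alpha\beta} \;=\; \frac{1}{N\,V}\sum_{\lambda} C_{\lambda}\, v_{\lambda\alpha}\, v_{\lambda\beta}\, \tau_{\lambda},$$

where the sum runs over phonon modes λ, C_λ is the modal heat capacity, v_λ the group velocity, τ_λ the lifetime derived from the third-order force constants, N the number of sampled wave vectors, and V the cell volume.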
Plane Smoothers for Multiblock Grids: Computational Aspects
NASA Technical Reports Server (NTRS)
Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane
1999-01-01
Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched in order to resolve a boundary layer. One of the most efficient approaches to yield robust methods is the combination of standard coarsening with alternating-direction plane relaxation in three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.
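A minimal sketch of the basic building block discussed here, one Gauss-Seidel sweep of x-line (tridiagonal) relaxation for an anisotropic model problem, is shown below; the model equation, grid handling, and names are illustrative assumptions rather than the report's code.

```python
import numpy as np
from scipy.linalg import solve_banded

def x_line_relaxation(u, f, eps, h):
    """One sweep of x-line relaxation for -(eps*u_xx + u_yy) = f on a uniform grid.

    Each grid line in x is solved implicitly (tridiagonal), which is what makes
    line/plane smoothers robust when eps differs strongly from 1.
    """
    ny, nx = u.shape
    n = nx - 2
    for j in range(1, ny - 1):
        ab = np.zeros((3, n))
        ab[0, 1:] = -eps / h**2                     # super-diagonal
        ab[1, :] = 2.0 * eps / h**2 + 2.0 / h**2    # main diagonal
        ab[2, :-1] = -eps / h**2                    # sub-diagonal
        # y-neighbours are taken from the current iterate (Gauss-Seidel style)
        rhs = f[j, 1:-1] + (u[j - 1, 1:-1] + u[j + 1, 1:-1]) / h**2
        rhs[0] += eps / h**2 * u[j, 0]              # Dirichlet boundary values
        rhs[-1] += eps / h**2 * u[j, -1]
        u[j, 1:-1] = solve_banded((1, 1), ab, rhs)
    return u
```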
Network-based approaches to climate knowledge discovery
NASA Astrophysics Data System (ADS)
Budich, Reinhard; Nyberg, Per; Weigel, Tobias
2011-11-01
Climate Knowledge Discovery Workshop; Hamburg, Germany, 30 March to 1 April 2011 Do complex networks combined with semantic Web technologies offer the next generation of solutions in climate science? To address this question, a first Climate Knowledge Discovery (CKD) Workshop, hosted by the German Climate Computing Center (Deutsches Klimarechenzentrum (DKRZ)), brought together climate and computer scientists from major American and European laboratories, data centers, and universities, as well as representatives from industry, the broader academic community, and the semantic Web communities. The participants, representing six countries, were concerned with large-scale Earth system modeling and computational data analysis. The motivation for the meeting was the growing problem that climate scientists generate data faster than it can be interpreted and the need to prepare for further exponential data increases. Current analysis approaches are focused primarily on traditional methods, which are best suited for large-scale phenomena and coarse-resolution data sets. The workshop focused on the open discussion of ideas and technologies to provide the next generation of solutions to cope with the increasing data volumes in climate science.
A machine learning approach to improve contactless heart rate monitoring using a webcam.
Monkaresi, Hamed; Calvo, Rafael A; Yan, Hong
2014-07-01
Unobtrusive, contactless recordings of physiological signals are very important for many health and human-computer interaction applications. Most current systems require sensors which intrusively touch the user's skin. Recent advances in contact-free physiological signals open the door to many new types of applications. This technology promises to measure heart rate (HR) and respiration using video only. The effectiveness of this technology, its limitations, and ways of overcoming them deserves particular attention. In this paper, we evaluate this technique for measuring HR in a controlled situation, in a naturalistic computer interaction session, and in an exercise situation. For comparison, HR was measured simultaneously using an electrocardiography device during all sessions. The results replicated the published results in controlled situations, but show that they cannot yet be considered as a valid measure of HR in naturalistic human-computer interaction. We propose a machine learning approach to improve the accuracy of HR detection in naturalistic measurements. The results demonstrate that the root mean squared error is reduced from 43.76 to 3.64 beats/min using the proposed method.
Jaafar, Haryati; Ibrahim, Salwani; Ramli, Dzati Athiar
2015-01-01
Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition, in which smart phones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation processing algorithms are required for effective touchless and mobile-based recognition. In this project, hand tracking and the region of interest (ROI) extraction method were discussed. A sliding neighborhood operation with local histogram equalization, followed by a local adaptive thresholding or LHEAT approach, was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, improved fuzzy-based k nearest centroid neighbor (IFkNCN), was implemented. By removing outliers and reducing the amount of training data, this classifier exhibited faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%. PMID:26113861
Computer vision based nacre thickness measurement of Tahitian pearls
NASA Astrophysics Data System (ADS)
Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban
2017-03-01
The Tahitian pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros, more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl deemed for exportation has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a custom-developed heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measurement to account for imaging and segmentation imprecisions is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.
CORDIC-based digital signal processing (DSP) element for adaptive signal processing
NASA Astrophysics Data System (ADS)
Bolstad, Gregory D.; Neeld, Kenneth B.
1995-04-01
The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC based application specific DSP element that, when connected in a linear array, can perform extremely high throughput (100s of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach, or more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient, but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible, but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art, and more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
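For orientation, the sketch below shows the CORDIC primitive in vectoring mode, which rotates a vector onto the x-axis with shift-and-add iterations and underlies Givens-rotation-style QR updates; it is a generic illustration, not the HAWC implementation, and the parameter names are assumptions.

```python
import math

def cordic_vectoring(x, y, n_iters=32):
    """Return (magnitude, angle) of (x, y) for x > 0 using shift-and-add CORDIC."""
    angle = 0.0
    for i in range(n_iters):
        d = -1.0 if y > 0 else 1.0                       # rotate toward the x-axis
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        angle -= d * math.atan(2.0 ** -i)                # accumulate the applied rotation
    gain = math.prod(math.sqrt(1.0 + 4.0 ** -i) for i in range(n_iters))
    return x / gain, angle

print(cordic_vectoring(1.0, 1.0))   # approximately (sqrt(2), pi/4)
```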
Mutual coupling effects in antenna arrays, volume 1
NASA Technical Reports Server (NTRS)
Collin, R. E.
1986-01-01
Mutual coupling between rectangular apertures in a finite antenna array, in an infinite ground plane, is analyzed using the vector potential approach. The method of moments is used to solve the equations that result from setting the tangential magnetic fields across each aperture equal. The approximation uses a set of vector potential model functions to solve for equivalent magnetic currents. A computer program was written to carry out this analysis and the resulting currents were used to determine the co- and cross-polarized far zone radiation patterns. Numerical results for various arrays using several modes in the approximation are presented. Results for one and two aperture arrays are compared against published data to check on the agreement of this model with previous work. Computer derived results are also compared against experimental results to test the accuracy of the model. These tests of the accuracy of the program showed that it yields valid data.
Optical character recognition: an illustrated guide to the frontier
NASA Astrophysics Data System (ADS)
Nagy, George; Nartker, Thomas A.; Rice, Stephen V.
1999-12-01
We offer a perspective on the performance of current OCR systems by illustrating and explaining actual OCR errors made by three commercial devices. After discussing briefly the character recognition abilities of humans and computers, we present illustrated examples of recognition errors. The top level of our taxonomy of the causes of errors consists of Imaging Defects, Similar Symbols, Punctuation, and Typography. The analysis of a series of 'snippets' from this perspective provides insight into the strengths and weaknesses of current systems, and perhaps a road map to future progress. The examples were drawn from the large-scale tests conducted by the authors at the Information Science Research Institute of the University of Nevada, Las Vegas. By way of conclusion, we point to possible approaches for improving the accuracy of today's systems. The talk is based on our eponymous monograph, recently published in The Kluwer International Series in Engineering and Computer Science, Kluwer Academic Publishers, 1999.
High-frequency AC/DC converter with unity power factor and minimum harmonic distortion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wernekinch, E.R.
1987-01-01
The power factor is controlled by adjusting the relative position of the fundamental component of an optimized PWM-type voltage with respect to the supply voltage. Current harmonic distortion is minimized by the use of optimized firing angles for the converter at a frequency where GTO's can be used. This feature makes this approach very attractive at power levels of 100 to 600 kW. To obtain the optimized PWM pattern, a steepest descent digital computer algorithm is used. Digital-computer simulations are performed and a low-power model is constructed and tested to verify the concepts and the behavior of the model. Experimental results show that unity power factor is achieved and that the distortion in the phase currents is 10.4% at 90% of full load. This is less than achievable with sinusoidal PWM, harmonic elimination, hysteresis control, and deadbeat control for the same switching frequency.
Distributive, Non-destructive Real-time System and Method for Snowpack Monitoring
NASA Technical Reports Server (NTRS)
Frolik, Jeff (Inventor); Skalka, Christian (Inventor)
2013-01-01
A ground-based system that provides quasi real-time measurement and collection of snow-water equivalent (SWE) data in remote settings is provided. The disclosed invention is significantly less expensive and easier to deploy than current methods and less susceptible to terrain and snow bridging effects. Embodiments of the invention include remote data recovery solutions. Compared to current infrastructure using existing SWE technology, the disclosed invention allows more SWE sites to be installed for similar cost and effort, in a greater variety of terrain; thus, enabling data collection at improved spatial resolutions. The invention integrates a novel computational architecture with new sensor technologies. The invention's computational architecture is based on wireless sensor networks, comprised of programmable, low-cost, low-powered nodes capable of sophisticated sensor control and remote data communication. The invention also includes measuring attenuation of electromagnetic radiation, an approach that is immune to snow bridging and significantly reduces sensor footprints.
Study of high speed complex number algorithms. [for determining antenna for field radiation patterns
NASA Technical Reports Server (NTRS)
Heisler, R.
1981-01-01
A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three-dimensional Fourier transform approach is used to generate a two-dimensional radiation cross-section along a planar cut at any angle phi through the far field pattern. Salient to the method is an algorithm for evaluating a subset of the total three-dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shapes. Numerical results were computed for both gain and phase and are compared with other published work.
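For context, and with an e^{jωt} convention, the physical-optics surface current and the far-field radiation integral being evaluated take the standard form

$$\mathbf{J}_s \;\approx\; 2\,\hat{\mathbf{n}} \times \mathbf{H}_{\mathrm{inc}},
\qquad
\mathbf{A}(\mathbf{r}) \;\approx\; \frac{\mu}{4\pi}\,\frac{e^{-jkr}}{r}\iint_S \mathbf{J}_s(\mathbf{r}')\, e^{\,jk\,\hat{\mathbf{r}}\cdot\mathbf{r}'}\, dS',$$

so that, once the induced currents are known, each pattern cut reduces to evaluating a Fourier-type integral over the reflector surface, which is where the selective DFT evaluation enters.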
Accelerating Full Configuration Interaction Calculations for Nuclear Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chao; Sternberg, Philip; Maris, Pieter
2008-04-14
One of the emerging computational approaches in nuclear physics is the full configuration interaction (FCI) method for solving the many-body nuclear Hamiltonian in a sufficiently large single-particle basis space to obtain exact answers - either directly or by extrapolation. The lowest eigenvalues and corresponding eigenvectors for very large, sparse and unstructured nuclear Hamiltonian matrices are obtained and used to evaluate additional experimental quantities. These matrices pose a significant challenge to the design and implementation of efficient and scalable algorithms for obtaining solutions on massively parallel computer systems. In this paper, we describe the computational strategies employed in a state-of-the-art FCI code MFDn (Many Fermion Dynamics - nuclear) as well as techniques we recently developed to enhance the computational efficiency of MFDn. We will demonstrate the current capability of MFDn and report the latest performance improvement we have achieved. We will also outline our future research directions.
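As a minimal illustration of the core numerical task (not MFDn itself), the snippet below uses a generic Lanczos-type sparse eigensolver to extract the lowest eigenpairs of a symmetric sparse matrix standing in for the nuclear Hamiltonian; the matrix size and density are arbitrary assumptions.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
A = sp.random(n, n, density=1e-3, format='csr', random_state=0)
H = (A + A.T) * 0.5                       # symmetrize to mimic a Hamiltonian

# Lowest 5 eigenvalues ("SA" = smallest algebraic) and corresponding eigenvectors
vals, vecs = eigsh(H, k=5, which='SA')
print(vals)
```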
Psychological Issues in Online Adaptive Task Allocation
NASA Technical Reports Server (NTRS)
Morris, N. M.; Rouse, W. B.; Ward, S. L.; Frey, P. R.
1984-01-01
Adaptive aiding is an idea that offers potential for improvement over many current approaches to aiding in human-computer systems. The expected return of tailoring the system to fit the user could be in the form of improved system performance and/or increased user satisfaction. Issues such as the manner in which information is shared between human and computer, the appropriate division of labor between them, and the level of autonomy of the aid are explored. A simulated visual search task was developed. Subjects are required to identify targets in a moving display while performing a compensatory sub-critical tracking task. By manipulating characteristics of the situation such as imposed task-related workload and effort required to communicate with the computer, it is possible to create conditions in which interaction with the computer would be more or less desirable. The results of preliminary research using this experimental scenario are presented, and future directions for this research effort are discussed.
Defining Computational Thinking for Mathematics and Science Classrooms
NASA Astrophysics Data System (ADS)
Weintrop, David; Beheshti, Elham; Horn, Michael; Orton, Kai; Jona, Kemi; Trouille, Laura; Wilensky, Uri
2016-02-01
Science and mathematics are becoming computational endeavors. This fact is reflected in the recently released Next Generation Science Standards and the decision to include "computational thinking" as a core scientific practice. With this addition, and the increased presence of computation in mathematics and scientific contexts, a new urgency has come to the challenge of defining computational thinking and providing a theoretical grounding for what form it should take in school science and mathematics classrooms. This paper presents a response to this challenge by proposing a definition of computational thinking for mathematics and science in the form of a taxonomy consisting of four main categories: data practices, modeling and simulation practices, computational problem solving practices, and systems thinking practices. In formulating this taxonomy, we draw on the existing computational thinking literature, interviews with mathematicians and scientists, and exemplary computational thinking instructional materials. This work was undertaken as part of a larger effort to infuse computational thinking into high school science and mathematics curricular materials. In this paper, we argue for the approach of embedding computational thinking in mathematics and science contexts, present the taxonomy, and discuss how we envision the taxonomy being used to bring current educational efforts in line with the increasingly computational nature of modern science and mathematics.
System analysis in rotorcraft design: The past decade
NASA Technical Reports Server (NTRS)
Galloway, Thomas L.
1988-01-01
Rapid advances in the technology of electronic digital computers and the need for an integrated synthesis approach in developing future rotorcraft programs have led to increased emphasis on system analysis techniques in rotorcraft design. The task in systems analysis is to deal with complex, interdependent, and conflicting requirements in a structured manner so that rational and objective decisions can be made. Whether the results are wisdom or rubbish depends upon the validity and, sometimes more importantly, the consistency of the inputs, the correctness of the analysis, and a sensible choice of measures of effectiveness to draw conclusions. In rotorcraft design this means combining design requirements, technology assessment, sensitivity analysis, and review techniques currently in use by NASA and Army organizations in developing research programs and vehicle specifications for rotorcraft. These procedures span simple graphical approaches to comprehensive analysis on large mainframe computers. Examples of recent applications to military and civil missions are highlighted.
Emerging approaches in predictive toxicology.
Zhang, Luoping; McHale, Cliona M; Greene, Nigel; Snyder, Ronald D; Rich, Ivan N; Aardema, Marilyn J; Roy, Shambhu; Pfuhler, Stefan; Venkatactahalam, Sundaresan
2014-12-01
Predictive toxicology plays an important role in the assessment of toxicity of chemicals and the drug development process. While there are several well-established in vitro and in vivo assays that are suitable for predictive toxicology, recent advances in high-throughput analytical technologies and model systems are expected to have a major impact on the field of predictive toxicology. This commentary provides an overview of the state of the current science and a brief discussion on future perspectives for the field of predictive toxicology for human toxicity. Computational models for predictive toxicology, needs for further refinement and obstacles to expand computational models to include additional classes of chemical compounds are highlighted. Functional and comparative genomics approaches in predictive toxicology are discussed with an emphasis on successful utilization of recently developed model systems for high-throughput analysis. The advantages of three-dimensional model systems and stem cells and their use in predictive toxicology testing are also described. © 2014 Wiley Periodicals, Inc.
OpenWebGlobe 2: Visualization of Complex 3D-Geodata in the (Mobile) Web Browser
NASA Astrophysics Data System (ADS)
Christen, M.
2016-06-01
Providing worldwide high resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast amount of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large scale, out-of-core, highly scalable 3D scene rendering on a web based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large number of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2" is shown, which displays 3D geodata on nearly every device.
Progress in high-lift aerodynamic calculations
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.
1993-01-01
The current work presents progress in the effort to numerically simulate the flow over high-lift aerodynamic components, namely, multi-element airfoils and wings in either a take-off or a landing configuration. The computational approach utilizes an incompressible flow solver and an overlaid chimera grid approach. A detailed grid resolution study is presented for flow over a three-element airfoil. Two turbulence models, a one-equation Baldwin-Barth model and a two-equation k-omega model, are compared. Excellent agreement with experiment is obtained for the lift coefficient at all angles of attack, including the prediction of maximum lift when using the two-equation model. Results for two other flap riggings are shown. Three-dimensional results are presented for a wing with a square wing-tip as a validation case. Grid generation and topology are discussed for computing the flow over a T-39 Sabreliner wing with flap deployed, and the initial calculations for this geometry are presented.
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F + H2 yields HF + H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.
NASA Technical Reports Server (NTRS)
Savely, Robert T.; Loftin, R. Bowen
1990-01-01
Training is a major endeavor in all modern societies. Common training methods include training manuals, formal classes, procedural computer programs, simulations, and on-the-job training. NASA's training approach has focused primarily on on-the-job training in a simulation environment for both crew and ground-based personnel. NASA must explore new approaches to training for the 1990's and beyond. Specific autonomous training systems based on artificial intelligence technology are described for use by NASA astronauts, flight controllers, and ground-based support personnel; these systems demonstrate an alternative to current training systems. In addition to these specific systems, the evolution of a general architecture for autonomous intelligent training systems that integrates many of the features of traditional training programs with artificial intelligence techniques is presented. These Intelligent Computer Aided Training (ICAT) systems would provide much of the same experience that could be gained from the best on-the-job training.
The Importance of Proving the Null
Gallistel, C. R.
2010-01-01
Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? PMID:19348549
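A minimal sketch of the sensitivity analysis described here is given below; it is an assumed illustration (not Gallistel's code) that computes the odds favoring H0: mu = 0 over a vague alternative mu ~ Uniform(-L, L) as a function of the vagueness limit L, using the sample mean with a plug-in standard deviation.

```python
import numpy as np
from scipy.stats import norm

def bayes_factor_null(x, L):
    """Bayes factor BF01 for H0: mu = 0 against H1: mu ~ Uniform(-L, L)."""
    n, xbar = len(x), np.mean(x)
    se = np.std(x, ddof=1) / np.sqrt(n)                   # plug-in standard error
    like_null = norm.pdf(xbar, loc=0.0, scale=se)
    # Marginal likelihood under H1: average the likelihood over mu in [-L, L]
    mus = np.linspace(-L, L, 2001)
    like_alt = np.trapz(norm.pdf(xbar, loc=mus, scale=se), mus) / (2.0 * L)
    return like_null / like_alt

x = np.random.default_rng(0).normal(0.1, 1.0, size=30)    # toy data
for L in (0.1, 0.5, 1.0, 2.0):
    print(L, bayes_factor_null(x, L))                      # odds on the null vs. vagueness limit
```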
Rigid-Docking Approaches to Explore Protein-Protein Interaction Space.
Matsuzaki, Yuri; Uchikoga, Nobuyuki; Ohue, Masahito; Akiyama, Yutaka
Protein-protein interactions play core roles in living cells, especially in the regulatory systems. As information on proteins has rapidly accumulated on publicly available databases, much effort has been made to obtain a better picture of protein-protein interaction networks using protein tertiary structure data. Predicting relevant interacting partners from their tertiary structure is a challenging task and computer science methods have the potential to assist with this. Protein-protein rigid docking has been utilized by several projects, docking-based approaches having the advantages that they can suggest binding poses of predicted binding partners which would help in understanding the interaction mechanisms and that comparing docking results of both non-binders and binders can lead to understanding the specificity of protein-protein interactions from structural viewpoints. In this review we focus on explaining current computational prediction methods to predict pairwise direct protein-protein interactions that form protein complexes.
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes based on subspace union and covariance matrix similarity do not provide a high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace resembled from original subspaces, and the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a used in BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
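For reference, a minimal sketch of standard two-class CSP is given below (ACPC extends this idea to more than two classes); the array shapes, filter count, and log-variance feature choice are common conventions assumed here, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=3):
    """Spatial filters from two lists of trials shaped (n_channels, n_samples)."""
    def avg_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)   # channel covariance
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w, eigenvalues ascending
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return vecs[:, picks].T                                    # (2*n_filters, n_channels)

def csp_features(trial, W):
    """Log-variance features of one trial after spatial filtering."""
    z = W @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```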
A Distributive, Non-Destructive, Real-Time Approach to Snowpack Monitoring
NASA Technical Reports Server (NTRS)
Frolik, Jeff; Skalka, Christian
2012-01-01
This invention is designed to ascertain the snow water equivalence (SWE) of snowpacks with better spatial and temporal resolutions than present techniques. The approach is ground-based, as opposed to some techniques that are air-based. In addition, the approach is compact, non-destructive, and can be communicated with remotely, and thus can be deployed in areas not possible with current methods. Presently there are two principal ground-based techniques for obtaining SWE measurements. The first is manual snow core measurements of the snowpack. This approach is labor-intensive, destructive, and has poor temporal resolution. The second approach is to deploy a large (e.g., 3x3 m) snow pillow, which requires significant infrastructure, is potentially hazardous [it uses an approximately 200-gallon (approximately 760-L) antifreeze-filled bladder], and requires deployment in a large, flat area. High deployment costs necessitate few installations, thus yielding poor spatial resolution of data. Both approaches have limited usefulness in complex and/or avalanche-prone terrains. This approach is compact, non-destructive to the snowpack, provides high temporal resolution data, and due to potential low cost, can be deployed with high spatial resolution. The invention consists of three primary components: a robust wireless network and computing platform designed for harsh climates, new SWE sensing strategies, and algorithms for smart sampling, data logging, and SWE computation.
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
The state of the art in multidimensional combustor modeling, as evidenced by the level of sophistication employed in terms of modeling and numerical accuracy considerations, is also dictated by the available computer memory and turnaround times afforded by present-day computers. With the aim of advancing the current multi-dimensional computational tools used in the design of advanced technology combustors, a solution procedure is developed that combines the novelty of the coupled CFD/spray/scalar Monte Carlo PDF (Probability Density Function) computations on unstructured grids with the ability to run on parallel architectures. In this approach, the mean gas-phase velocity and turbulence fields are determined from a standard turbulence model, the joint composition of species and enthalpy from the solution of a modeled PDF transport equation, and a Lagrangian-based dilute spray model is used for the liquid-phase representation. The gas-turbine combustor flows are often characterized by a complex interaction between various physical processes associated with the interaction between the liquid and gas phases, droplet vaporization, turbulent mixing, heat release associated with chemical kinetics, radiative heat transfer associated with highly absorbing and radiating species, among others. The rate controlling processes often interact with each other at various disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and liquid phase evaporation in many practical combustion devices.
Aircraft Conflict Analysis and Real-Time Conflict Probing Using Probabilistic Trajectory Modeling
NASA Technical Reports Server (NTRS)
Yang, Lee C.; Kuchar, James K.
2000-01-01
Methods for maintaining separation between aircraft in the current airspace system have been built from a foundation of structured routes and evolved procedures. However, as the airspace becomes more congested and the chance of failures or operational error becomes more problematic, automated conflict alerting systems have been proposed to help provide decision support and to serve as traffic monitoring aids. The problem of conflict detection and resolution has been tackled in a number of different ways, but in this thesis, it is recast as a problem of prediction in the presence of uncertainties. Much of the focus is concentrated on the errors and uncertainties from the working trajectory model used to estimate future aircraft positions. The more accurate the prediction, the more likely an ideal (no false alarms, no missed detections) alerting system can be designed. Additional insights into the problem were brought forth by a review of current operational and developmental approaches found in the literature. An iterative, trial-and-error approach to threshold design was identified. When examined from a probabilistic perspective, the threshold parameters were found to be a surrogate to probabilistic performance measures. To overcome the limitations in the current iterative design method, a new direct approach is presented where the performance measures are directly computed and used to perform the alerting decisions. The methodology is shown to handle complex encounter situations (3-D, multi-aircraft, multi-intent, with uncertainties) with relative ease. Utilizing a Monte Carlo approach, a method was devised to perform the probabilistic computations in near real time. Not only does this greatly increase the method's potential as an analytical tool, but it also opens up the possibility for use as a real-time conflict alerting probe. A prototype alerting logic was developed and has been utilized in several NASA Ames Research Center experimental studies.
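To make the Monte Carlo idea concrete, the sketch below estimates a conflict probability for two aircraft with straight-line nominal trajectories and a simple Gaussian position uncertainty; the uncertainty model, units, thresholds, and names are illustrative assumptions and far simpler than the thesis's trajectory models.

```python
import numpy as np

def conflict_probability(p1, v1, p2, v2, horizon_s=300.0, dt=5.0,
                         sigma_nm=1.5, sep_nm=5.0, n_samples=5000, seed=0):
    """Fraction of sampled futures in which horizontal separation drops below sep_nm.

    p1, p2 : initial positions (NM); v1, v2 : velocities (knots).
    A single position error per sample stands in for richer uncertainty models.
    """
    rng = np.random.default_rng(seed)
    times = np.arange(0.0, horizon_s + dt, dt)
    conflicts = 0
    for _ in range(n_samples):
        e1 = rng.normal(0.0, sigma_nm, size=2)
        e2 = rng.normal(0.0, sigma_nm, size=2)
        for t in times:
            x1 = p1 + v1 * t / 3600.0 + e1        # knots -> NM per second of flight
            x2 = p2 + v2 * t / 3600.0 + e2
            if np.linalg.norm(x1 - x2) < sep_nm:
                conflicts += 1
                break
    return conflicts / n_samples

print(conflict_probability(np.array([0.0, 0.0]), np.array([450.0, 0.0]),
                           np.array([30.0, 10.0]), np.array([-430.0, 0.0])))
```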