Multigrid Methods for Aerodynamic Problems in Complex Geometries
NASA Technical Reports Server (NTRS)
Caughey, David A.
1995-01-01
Work has been directed at the development of efficient multigrid methods for the solution of aerodynamic problems involving complex geometries, including the development of computational methods for the solution of both inviscid and viscous transonic flow problems. The emphasis is on problems of complex, three-dimensional geometry. The methods developed are based upon finite-volume approximations to both the Euler and the Reynolds-Averaged Navier-Stokes equations. The methods are developed for use on multi-block grids using diagonalized implicit multigrid methods to achieve computational efficiency. The work is focused upon aerodynamic problems involving complex geometries, including advanced engine inlets.
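The paper's diagonalized implicit multigrid scheme for the Euler and Navier-Stokes equations is not reproduced in the abstract; as background, here is a minimal two-grid correction cycle for the 1D Poisson problem — a sketch of the smoothing/restriction/prolongation idea only, with the model problem and all names chosen for illustration, not taken from the paper:

```python
import numpy as np

def smooth(u, f, h, sweeps=3):
    """Damped Jacobi sweeps for -u'' = f with zero Dirichlet boundaries."""
    omega = 2.0 / 3.0
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = v
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full weighting of the residual onto the coarse grid (every other point)."""
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    return rc

def prolong(ec, n_fine):
    """Linear interpolation of the coarse-grid correction back to the fine grid."""
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h)                     # pre-smooth high-frequency error
    rc = restrict(residual(u, f, h))        # restrict the smooth residual
    m = rc.size - 1
    A = (np.diag(2 * np.ones(m - 1)) - np.diag(np.ones(m - 2), 1)
         - np.diag(np.ones(m - 2), -1)) / (2 * h) ** 2
    ec = np.zeros(m + 1)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])  # exact coarse-grid solve
    return smooth(u + prolong(ec, u.size), f, h)  # correct, then post-smooth

n = 129
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)          # manufactured solution sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # ~ discretization error
```

A few such cycles drive the algebraic error below the discretization error at a convergence rate independent of the mesh width, which is the property that makes multigrid attractive for flow solvers like the one described above.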
Marr's levels and the minimalist program.
Johnson, Mark
2017-02-01
A simple change to a cognitive system at Marr's computational level may entail complex changes at the other levels of description of the system. The implementational-level complexity of a change, rather than its computational-level complexity, may be more closely related to the plausibility of a discrete evolutionary event causing that change. Thus the formal complexity of a change at the computational level may not be a good guide to the plausibility of an evolutionary event introducing that change. For example, while the Minimalist Program's Merge is a simple formal operation (Berwick & Chomsky, 2016), the computational mechanisms required to implement the language it generates (e.g., to parse the language) may be considerably more complex. This has implications for the theory of grammar: theories of grammar that involve several kinds of syntactic operations may be no less evolutionarily plausible than a theory of grammar that involves only one. A deeper understanding of human language at the algorithmic and implementational levels could strengthen the Minimalist Program's account of the evolution of language.
Capturing, Codifying and Scoring Complex Data for Innovative, Computer-Based Items.
ERIC Educational Resources Information Center
Luecht, Richard M.
The Microsoft Certification Program (MCP) includes many new computer-based item types, based on complex cases involving the Windows 2000 (registered) operating system. This Innovative Item Technology (IIT) has presented challenges beyond traditional psychometric considerations such as capturing and storing the relevant response data from…
ERIC Educational Resources Information Center
Marcovitz, Alan B., Ed.
Described is the use of an analog/hybrid computer installation to study those physical phenomena that can be described through the evaluation of an algebraic function of a complex variable. This is an alternative way to study such phenomena on an interactive graphics terminal. The typical problem used, involving complex variables, is that of…
ASIC For Complex Fixed-Point Arithmetic
NASA Technical Reports Server (NTRS)
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
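The abstract does not specify the chip's number format; a minimal software model of a saturating 24-bit fixed-point complex multiply, assuming a Q1.23 format (an assumption, not documented for this ASIC), might look like:

```python
FRAC_BITS = 23                    # assumed Q1.23: 1 sign bit, 23 fractional bits
MAX_VAL = (1 << 23) - 1           # largest representable raw value
MIN_VAL = -(1 << 23)

def to_fixed(x):
    """Quantize a float in [-1, 1) to a 24-bit fixed-point integer."""
    return max(MIN_VAL, min(MAX_VAL, int(round(x * (1 << FRAC_BITS)))))

def sat(x):
    """Saturate a raw integer result to the 24-bit range."""
    return max(MIN_VAL, min(MAX_VAL, x))

def cmul_fixed(ar, ai, br, bi):
    """(ar + j*ai) * (br + j*bi); 48-bit products shifted back to 24 bits (floor)."""
    re = sat((ar * br - ai * bi) >> FRAC_BITS)
    im = sat((ar * bi + ai * br) >> FRAC_BITS)
    return re, im

# twiddle-factor style usage, as in an FFT butterfly
a = (to_fixed(0.5), to_fixed(-0.25))
w = (to_fixed(0.7071067), to_fixed(-0.7071067))   # ~ e^{-j*pi/4}
print(cmul_fixed(*a, *w))
```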
Use of a Computer Language in Teaching Dynamic Programming. Final Report.
ERIC Educational Resources Information Center
Trimble, C. J.; And Others
Most optimization problems of any degree of complexity must be solved using a computer. In the teaching of dynamic programming courses, it is often desirable to use a computer in problem solution. The solution process involves conceptual formulation and computational solution. Generalized computer codes for dynamic programming problem solution…
ERIC Educational Resources Information Center
Wareham, Todd
2017-01-01
In human problem solving, there is a wide variation between individuals in problem solution time and success rate, regardless of whether or not this problem solving involves insight. In this paper, we apply computational and parameterized analysis to a plausible formalization of extended representation change theory (eRCT), an integration of…
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective on a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
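A hedged toy sketch of ordinal comparison with goal softening, using a synthetic quadratic performance measure in place of the thesis's expensive simulation (all values and the 'true' model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

N, g, s = 1000, 10, 20                     # candidates, good-enough set, selected set
theta = rng.uniform(-2, 2, N)              # design parameters
J_true = (theta - 0.5) ** 2                # expensive 'true' performance (smaller = better)
J_noisy = J_true + rng.normal(0, 0.5, N)   # one cheap, noisy evaluation per design

good_enough = set(np.argsort(J_true)[:g])  # top-g by true value (unknown in practice)
selected = set(np.argsort(J_noisy)[:s])    # top-s by noisy ordinal comparison

# Goal softening: we only require that the selected set intersect the good-enough set.
print("alignment:", len(good_enough & selected))
```

Order is far more robust to noise than value: the probability that the selected set contains good-enough designs rises quickly with s, which is what makes the initial screening cheap.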
Utility Computing: Reality and Beyond
NASA Astrophysics Data System (ADS)
Ivanov, Ivan I.
Utility Computing is not a new concept. It involves organizing and providing a wide range of computing-related services as public utilities. Much like water, gas, electricity and telecommunications, the concept of computing as a public utility was announced in 1955. Utility Computing remained a concept for nearly 50 years. Now some models and forms of Utility Computing are emerging, such as storage and server virtualization, grid computing, and automated provisioning. Recent trends in Utility Computing as a complex technology involve business procedures that could profoundly transform the nature of companies' IT services, organizational IT strategies and technology infrastructure, and business models. In the ultimate Utility Computing models, organizations will be able to acquire as much IT service as they need, whenever and wherever they need it. Based on networked businesses and new secure online applications, Utility Computing would facilitate "agility-integration" of IT resources and services within and between virtual companies. With the application of Utility Computing there could be concealment of the complexity of IT, reduction of operational expenses, and conversion of IT costs into variable 'on-demand' services. How far should technology, business and society go in adopting Utility Computing forms, modes and models?
Undecidability and Irreducibility Conditions for Open-Ended Evolution and Emergence.
Hernández-Orozco, Santiago; Hernández-Quiroz, Francisco; Zenil, Hector
2018-01-01
Is undecidability a requirement for open-ended evolution (OEE)? Using methods derived from algorithmic complexity theory, we propose robust computational definitions of open-ended evolution and the adaptability of computable dynamical systems. Within this framework, we show that decidability imposes absolute limits on the stable growth of complexity in computable dynamical systems. Conversely, systems that exhibit (strong) open-ended evolution must be undecidable, establishing undecidability as a requirement for such systems. Complexity is assessed in terms of three measures: sophistication, coarse sophistication, and busy beaver logical depth. These three complexity measures assign low complexity values to random (incompressible) objects. As time grows, the stated complexity measures allow for the existence of complex states during the evolution of a computable dynamical system. We show, however, that finding these states involves undecidable computations. We conjecture that for similar complexity measures that assign low complexity values, decidability imposes comparable limits on the stable growth of complexity, and that such behavior is necessary for nontrivial evolutionary systems. We show that the undecidability of adapted states imposes novel and unpredictable behavior on the individuals or populations being modeled. Such behavior is irreducible. Finally, we offer an example of a system, first proposed by Chaitin, that exhibits strong OEE.
Reproducible research in vadose zone sciences
USDA-ARS?s Scientific Manuscript database
A significant portion of present-day soil and Earth science research is computational, involving complex data analysis pipelines, advanced mathematical and statistical models, and sophisticated computer codes. Opportunities for scientific progress are greatly diminished if reproducing and building o...
Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A; Robert, Christian P; Marin, Jean-Michel; Balding, David J; Guillemaud, Thomas; Estoup, Arnaud
2008-12-01
Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc.
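DIY ABC itself is a packaged program whose interface is not reproduced here; the core simulate-compare-accept loop of rejection ABC that it builds on can be sketched on a toy model (estimating a normal mean; the prior, tolerance, and summary statistic are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(3.0, 1.0, 50)           # stand-in for field data
s_obs = observed.mean()                        # summary statistic

accepted = []
for _ in range(100_000):
    mu = rng.uniform(-10, 10)                  # draw parameter from the prior
    sim = rng.normal(mu, 1.0, 50)              # simulate data under the scenario
    if abs(sim.mean() - s_obs) < 0.1:          # keep if summaries are close enough
        accepted.append(mu)

print(np.mean(accepted), np.std(accepted))     # approximate posterior mean / sd
```

Real applications such as those handled by DIY ABC replace the one-line simulator with coalescent simulations under user-specified demographic scenarios and compare vectors of summary statistics rather than a single mean.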
Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach
Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...
Synthetic mixed-signal computation in living cells
Rubens, Jacob R.; Selvaggio, Gianluca; Lu, Timothy K.
2016-01-01
Living cells implement complex computations on the continuous environmental signals that they encounter. These computations involve both analogue- and digital-like processing of signals to give rise to complex developmental programs, context-dependent behaviours and homeostatic activities. In contrast to natural biological systems, synthetic biological systems have largely focused on either digital or analogue computation separately. Here we integrate analogue and digital computation to implement complex hybrid synthetic genetic programs in living cells. We present a framework for building comparator gene circuits to digitize analogue inputs based on different thresholds. We then demonstrate that comparators can be predictably composed together to build band-pass filters, ternary logic systems and multi-level analogue-to-digital converters. In addition, we interface these analogue-to-digital circuits with other digital gene circuits to enable concentration-dependent logic. We expect that this hybrid computational paradigm will enable new industrial, diagnostic and therapeutic applications with engineered cells. PMID:27255669
Kobayashi, M; Irino, T; Sweldens, W
2001-10-23
Multiscale computing (MSC) involves the computation, manipulation, and analysis of information at different resolution levels. Widespread use of MSC algorithms and the discovery of important relationships between different approaches to implementation were catalyzed, in part, by the recent interest in wavelets. We present two examples that demonstrate how MSC can help scientists understand complex data. The first is from acoustical signal processing and the second is from computer graphics.
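As a minimal example of the multiscale decompositions referred to above, here is a Haar wavelet analysis splitting a signal into a coarse approximation plus detail bands at successive resolutions — a standard construction, not the authors' specific examples:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar transform: pairwise averages (coarse) and differences (detail)."""
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)
    det = (x[0::2] - x[1::2]) / np.sqrt(2)
    return avg, det

def haar_decompose(x, levels):
    details = []
    for _ in range(levels):
        x, d = haar_step(x)
        details.append(d)
    return x, details            # coarsest approximation + detail bands

signal = np.sin(np.linspace(0, 4 * np.pi, 64))
signal += 0.1 * np.random.default_rng(2).normal(size=64)
coarse, details = haar_decompose(signal, 3)
print(coarse.shape, [d.shape for d in details])   # resolution halves at each level
```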
Division in a Binary Representation for Complex Numbers
ERIC Educational Resources Information Center
Blest, David C.; Jamil, Tariq
2003-01-01
Computer operations involving complex numbers, essential in such applications as Fourier transforms or image processing, are normally performed in a "divide-and-conquer" approach dealing separately with real and imaginary parts. A number of proposals have treated complex numbers as a single unit but all have foundered on the problem of the…
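The article's specific proposal is not reproduced in the abstract; for background, one well-known single-unit binary representation treats Gaussian integers in base (−1+i) with digit set {0, 1}, as in this sketch:

```python
def to_base_minus1_plus_i(a, b):
    """Digits (least significant first) of the Gaussian integer a+bi
    in base (-1+i), using only the digits 0 and 1."""
    digits = []
    while a != 0 or b != 0:
        d = (a + b) % 2          # remainder modulo (-1+i), whose norm is 2
        digits.append(d)
        a -= d                    # subtract the digit, then divide by (-1+i):
        a, b = (b - a) // 2, -(a + b) // 2   # (a+bi)/(-1+i) = ((b-a) - (a+b)i)/2
    return digits or [0]

# every Gaussian integer has such a representation as a single unit
print(to_base_minus1_plus_i(2, 0))   # 2 -> [0, 0, 1, 1], i.e. "1100"
print(to_base_minus1_plus_i(0, 1))   # i -> [1, 1],       i.e. "11"
```

Each loop iteration performs exactly the kind of radix division that makes division the hard operation the article's title refers to.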
Lytton, William W.
2009-01-01
Epilepsy is a complex set of disorders that can involve many areas of cortex as well as underlying deep brain systems. The myriad manifestations of seizures, as varied as déjà vu and olfactory hallucination, can thereby give researchers insights into regional functions and relations. Epilepsy is also complex genetically and pathophysiologically, involving microscopic (ion channels, synaptic proteins), macroscopic (brain trauma and rewiring) and intermediate changes in a complex interplay of causality. It has long been recognized that computer modeling will be required to disentangle causality, to better understand seizure spread and to understand and eventually predict treatment efficacy. Over the past few years, substantial progress has been made modeling epilepsy at levels ranging from the molecular to the socioeconomic. We review these efforts and connect them to the medical goals of understanding and treating this disorder. PMID:18594562
Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A.; Robert, Christian P.; Marin, Jean-Michel; Balding, David J.; Guillemaud, Thomas; Estoup, Arnaud
2008-01-01
Summary: Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. Availability: The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc. Contact: j.cornuet@imperial.ac.uk Supplementary information: Supplementary data are also available at http://www.montpellier.inra.fr/CBGP/diyabc PMID:18842597
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1993-01-01
This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.
ERIC Educational Resources Information Center
Stredney, Donald Larry
An overview of computer animation and the techniques involved in its creation is provided in the introduction to this masters thesis, which focuses on the problems encountered by students in learning the forms and functions of complex anatomical structures and ways in which computer animation can address these problems. The objectives for,…
NASA Technical Reports Server (NTRS)
Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.
1993-01-01
The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.
Fiore, Vincenzo G; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank
2017-01-01
The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features identify central complex circuitry, and especially the ellipsoid body, as a key neural correlate involved in spatial navigation.
NASA Technical Reports Server (NTRS)
Gordon, S.; Mcbride, B. J.
1976-01-01
A detailed description of the equations and computer program for computations involving chemical equilibria in complex systems is given. A free-energy minimization technique is used. The program permits calculations such as (1) chemical equilibrium for assigned thermodynamic states (T,P), (H,P), (S,P), (T,V), (U,V), or (S,V), (2) theoretical rocket performance for both equilibrium and frozen compositions during expansion, (3) incident and reflected shock properties, and (4) Chapman-Jouguet detonation properties. The program considers condensed species as well as gaseous species.
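The program described above implements a specialized free-energy minimization; a generic hedged sketch of the same idea — minimize the Gibbs energy subject to element-balance constraints — is shown below for a hypothetical H2/H system (the dimensionless chemical potentials are illustrative placeholders, not thermodynamic data):

```python
import numpy as np
from scipy.optimize import minimize

# toy system: species H2 and H, balanced on the element H (values illustrative only)
mu0 = np.array([-10.0, -4.0])      # hypothetical g_i/RT for H2 and H
A = np.array([[2.0, 1.0]])         # H atoms per molecule of each species
b = np.array([2.0])                # total H atoms (from 1 mol of H2 feed)

def gibbs(n):
    n = np.maximum(n, 1e-12)       # keep logarithms defined near the boundary
    ntot = n.sum()
    return float(n @ (mu0 + np.log(n / ntot)))   # G/RT for an ideal-gas mixture at 1 bar

res = minimize(gibbs, x0=np.array([0.5, 1.0]),
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(0, None)] * 2, method="SLSQP")
print(res.x)                       # equilibrium moles of H2 and H
```

The production program additionally handles condensed phases, multiple elements, and the assigned-state pairs (T,P), (H,P), (S,P), etc., listed above.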
WE-D-303-00: Computational Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient's anatomy and physiology. Imaging data can be generated from it as if it was a live patient using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: (1) understand the need and requirements of computational phantoms in medical physics research; (2) discuss the developments and applications of computational phantoms; (3) know the promises and limitations of computational phantoms in solving complex problems.
Computer modelling of epilepsy.
Lytton, William W
2008-08-01
Epilepsy is a complex set of disorders that can involve many areas of the cortex, as well as underlying deep-brain systems. The myriad manifestations of seizures, which can be as varied as déjà vu and olfactory hallucination, can therefore give researchers insights into regional functions and relations. Epilepsy is also complex genetically and pathophysiologically: it involves microscopic (on the scale of ion channels and synaptic proteins), macroscopic (on the scale of brain trauma and rewiring) and intermediate changes in a complex interplay of causality. It has long been recognized that computer modelling will be required to disentangle causality, to better understand seizure spread and to understand and eventually predict treatment efficacy. Over the past few years, substantial progress has been made in modelling epilepsy at levels ranging from the molecular to the socioeconomic. We review these efforts and connect them to the medical goals of understanding and treating the disorder.
COMPUTER SIMULATIONS OF LUNG AIRWAY STRUCTURES USING DATA-DRIVEN SURFACE MODELING TECHNIQUES
Knowledge of human lung morphology is a subject critical to many areas of medicine. The visualization of lung structures naturally lends itself to computer graphics modeling due to the large number of airways involved and the complexities of the branching systems...
Computer program determines chemical composition of physical system at equilibrium
NASA Technical Reports Server (NTRS)
Kwong, S. S.
1966-01-01
A FORTRAN IV digital computer program calculates the equilibrium composition of complex, multiphase chemical systems. A free-energy minimization method is used, with the solution of the problem reduced to mathematical operations, without concern for the chemistry involved. Certain thermodynamic properties are also determined as byproducts of the main calculations.
Documentation Driven Development for Complex Real-Time Systems
2004-12-01
This paper presents a novel approach for the development of complex real-time systems, called the documentation-driven development (DDD) approach. This... time systems. DDD will also support automated software generation based on a computational model and some relevant techniques. DDD includes two main... stakeholders to be easily involved in development processes and, therefore, significantly improve the agility of software development for complex real
Laghari, Samreen; Niazi, Muaz A
2016-01-01
Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.
A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering
Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani
2012-01-01
Debye summation, which involves the summation of sinc functions of distances between all pairs of atoms in three-dimensional space, arises in computations performed in crystallography, small/wide angle X-ray scattering (SAXS/WAXS) and small angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties, or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small angle scattering reconstruction (SAS) of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations, unless an error bound derived in this paper is used. Our theoretical and computational results show orders of magnitude improvement in computation complexity over existing methods, while maintaining prescribed accuracy. PMID:22707386
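For reference, the direct quadratic-complexity Debye sum that the paper's hierarchical algorithm accelerates can be written in a few lines (unit form factors and random coordinates are placeholders, not data from the paper):

```python
import numpy as np

def debye_direct(q, coords, f):
    """Direct O(N^2) Debye sum: I(q) = sum_jk f_j * f_k * sinc(q * r_jk)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    qr = q[:, None, None] * d[None, :, :]
    # np.sinc(x) = sin(pi x)/(pi x), so divide the argument by pi
    return np.einsum("j,k,qjk->q", f, f, np.sinc(qr / np.pi))

rng = np.random.default_rng(3)
coords = rng.normal(size=(200, 3)) * 10.0   # toy 'molecule' of 200 atoms
f = np.ones(200)                            # unit form factors for simplicity
q = np.linspace(0.01, 0.5, 64)              # scattering vector magnitudes
print(debye_direct(q, coords, f)[:3])
```

The O(N²) pair loop inside this version is exactly the bottleneck that the FMM-like hierarchical decomposition reduces to linear time.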
Design and Diagnosis Problem Solving with Multifunctional Technical Knowledge Bases
1992-09-29
Design problem solving is a complex activity involving a number of subtasks and a number of alternative methods potentially available…
Liability for Personal Injury Caused by Defective Medical Computer Programs
Brannigan, Vincent M.
1980-01-01
Defective medical computer programs can cause personal injury. Financial responsibility for the injury under tort law will turn on several factors: whether the program is a product or a service, what types of defect exist in the product, and who produced the program. The factors involved in making these decisions are complex, but knowledge of the relevant issues can assist computer personnel in avoiding liability.
Mao, Keya; Xiao, Songhua; Liu, Zhengsheng; Zhang, Yonggang; Zhang, Xuesong; Wang, Zheng; Lu, Ning; Shourong, Zhu; Xifeng, Zhang; Geng, Cui; Baowei, Liu
2010-01-01
Surgical treatment of complex severe spinal deformity, involving a scoliosis Cobb angle of more than 90° and kyphosis or vertebral and rib deformity, is challenging. Preoperative two-dimensional images resulting from plain film radiography, computed tomography (CT) and magnetic resonance imaging provide limited morphometric information. Although three-dimensional (3D) CT reconstruction with special software can display a stereoscopic view and rotate the spinal image on screen, it cannot show the full-scale spine and cannot be used directly on the operating table. This study was conducted to investigate the application of computer-designed polystyrene models in the treatment of complex severe spinal deformity. The study involved 16 cases of complex severe spinal deformity treated in our hospital between 1 May 2004 and 31 December 2007; the mean ± SD preoperative scoliosis Cobb angle was 118° ± 27°. The CT scanning digital imaging and communication in medicine (DICOM) data sets of the affected spinal segments were collected for 3D digital reconstruction and rapid prototyping to prepare computer-designed polystyrene models, which were applied in the treatment of these cases. The computer-designed polystyrene models allowed direct 3D observation and measurement of the deformities, which helped the surgeon to perform morphological assessment and communicate with the patient and colleagues. Furthermore, the models also guided the choice and placement of pedicle screws. Moreover, the models were used to aid in virtual surgery and guide the actual surgical procedure. The mean ± SD postoperative scoliosis Cobb angle was 42° ± 32°, and no serious complications such as spinal cord or major vascular injury occurred. The use of computer-designed polystyrene models could provide more accurate morphometric information and facilitate surgical correction of complex severe spinal deformity. PMID:20213294
Four Ways to Skin a Definite Integral
ERIC Educational Resources Information Center
Dence, Thomas; Dence, Joseph
2010-01-01
The integral of 1/(1 + x[superscript 2]) is standard in elementary calculus, but the related integral 1/(1 + x[superscript 4]) rarely appears. In this article we examine the latter integral, computing its value by four different methods; several that involve standard elementary calculus techniques, and several involving complex integration.
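For the record, one of the standard elementary routes the article alludes to goes through the real factorization of the denominator, and the definite integral over [0, ∞) makes a compact check; the complex-integration route recovers the same value from the residues in the upper half-plane:

```latex
1+x^{4} = \bigl(x^{2}+\sqrt{2}\,x+1\bigr)\bigl(x^{2}-\sqrt{2}\,x+1\bigr),
\qquad
\int_{0}^{\infty}\frac{dx}{1+x^{4}} = \frac{\pi}{2\sqrt{2}} .
% complex-integration check: simple poles at x_0 = e^{i\pi/4}, e^{3i\pi/4},
% each with residue 1/(4x_0^{3}) = -x_0/4, so
\int_{-\infty}^{\infty}\frac{dx}{1+x^{4}}
 = 2\pi i\left(-\frac{e^{i\pi/4}+e^{3i\pi/4}}{4}\right)
 = 2\pi i\left(-\frac{\sqrt{2}\,i}{4}\right)
 = \frac{\pi}{\sqrt{2}} .
```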
Reducing the Complexity of an Agent-Based Local Heroin Market Model
Heard, Daniel; Bobashev, Georgiy V.; Morris, Robert J.
2014-01-01
This project explores techniques for reducing the complexity of an agent-based model (ABM). The analysis involved a model developed from the ethnographic research of Dr. Lee Hoffer in the Larimer area heroin market, which involved drug users, drug sellers, homeless individuals and police. The authors used statistical techniques to create a reduced version of the original model which maintained simulation fidelity while reducing computational complexity. This involved identifying key summary quantities of individual customer behavior as well as overall market activity and replacing some agents with probability distributions and regressions. The model was then extended to allow external market interventions in the form of police busts. Extensions of this research perspective, as well as its strengths and limitations, are discussed. PMID:25025132
Design and Development of a Web-Based Interactive Software Tool for Teaching Operating Systems
ERIC Educational Resources Information Center
Garmpis, Aristogiannis
2011-01-01
Operating Systems (OS) is an important and mandatory discipline in many Computer Science, Information Systems and Computer Engineering curricula. Some of its topics require a careful and detailed explanation from the instructor as they often involve theoretical concepts and somewhat complex mechanisms, demanding a certain degree of abstraction…
Smaller Satellite Operations Near Geostationary Orbit
2007-09-01
At the time, this was considered a very difficult task, due to the complexity involved with creating computer code to autonomously perform... computer systems and even permanently damage equipment. Depending on the solar cycle, solar weather will be properly characterized and modeled to...
Micro-computed tomography of pupal metamorphosis in the solitary bee Megachile rotundata
USDA-ARS?s Scientific Manuscript database
Insect metamorphosis involves a complex change in form and function, but most of these changes are internal and treated as a black box. In this study, we examined development of the solitary bee, Megachile rotundata, using micro-computed tomography (µCT) and digital volume analysis. We describe deve...
Visualizing the Complex Process for Deep Learning with an Authentic Programming Project
ERIC Educational Resources Information Center
Peng, Jun; Wang, Minhong; Sampson, Demetrios
2017-01-01
Project-based learning (PjBL) has been increasingly used to connect abstract knowledge and authentic tasks in educational practice, including computer programming education. Despite its promising effects on improving learning in multiple aspects, PjBL remains a struggle due to its complexity. Completing an authentic programming project involves a…
Computer-Based Assessment of Complex Problem Solving: Concept, Implementation, and Application
ERIC Educational Resources Information Center
Greiff, Samuel; Wustenberg, Sascha; Holt, Daniel V.; Goldhammer, Frank; Funke, Joachim
2013-01-01
Complex Problem Solving (CPS) skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences. The increasing importance of such skills in the 21st century requires appropriate assessment and intervention methods, which in turn rely on adequate…
Collaborative Working Architecture for IoT-Based Applications.
Mora, Higinio; Signes-Pont, María Teresa; Gil, David; Johnsson, Magnus
2018-05-23
The new sensing applications need enhanced computing capabilities to handle the requirements of complex and huge data processing. The Internet of Things (IoT) concept brings processing and communication features to devices. In addition, the Cloud Computing paradigm provides resources and infrastructures for performing the computations and outsourcing the work from the IoT devices. This scenario opens new opportunities for designing advanced IoT-based applications; however, much research remains to be done to properly gear all these systems to work together. This work proposes a collaborative model and an architecture to take advantage of the available computing resources. The resulting architecture involves a novel network design with different levels which combines sensing and processing capabilities based on the Mobile Cloud Computing (MCC) paradigm. An experiment is included to demonstrate that this approach can be used in diverse real applications. The results show the flexibility of the architecture to perform complex computational tasks of advanced applications.
2016-01-01
Background: Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose: It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method: We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results: The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach. PMID:26812235
Connecting the virtual world of computers to the real world of medicinal chemistry.
Glen, Robert C
2011-03-01
Drug discovery involves the simultaneous optimization of chemical and biological properties, usually in a single small molecule, which modulates one of nature's most complex systems: the balance between human health and disease. The increased use of computer-aided methods is having a significant impact on all aspects of the drug-discovery and development process and with improved methods and ever faster computers, computer-aided molecular design will be ever more central to the discovery process.
WE-D-303-01: Development and Application of Digital Human Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segars, P.
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient's anatomy and physiology. Imaging data can be generated from it as if it was a live patient using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: (1) understand the need and requirements of computational phantoms in medical physics research; (2) discuss the developments and applications of computational phantoms; (3) know the promises and limitations of computational phantoms in solving complex problems.
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
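momi's own interface is not reproduced here; as the simplest sanity check on what such software generalizes, the textbook single-population, constant-size expectation E[ξᵢ] = θ/i (Fu 1995) can be computed directly (the sample size and θ below are arbitrary):

```python
import numpy as np

def expected_sfs_constant(n, theta):
    """Expected unfolded SFS under a constant-size population (Fu 1995):
    E[xi_i] = theta / i for derived-allele count i = 1..n-1 in a sample of n."""
    i = np.arange(1, n)
    return theta / i

sfs = expected_sfs_constant(n=10, theta=5.0)
print(sfs / sfs.sum())    # normalized spectrum: singletons dominate
```

The paper's contribution is the multi-population generalization of exactly this quantity, under arbitrary size histories, where no such closed form exists.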
Combining high performance simulation, data acquisition, and graphics display computers
NASA Technical Reports Server (NTRS)
Hickman, Robert J.
1989-01-01
Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems, non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high speed simulation computing and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high performance FORTRAN program processor to support the complex, by generating numerous large files from programs coded in FORTRAN that are required for the real time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system. The generation of the data files on the SCS40 also is performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.
EMILiO: a fast algorithm for genome-scale strain design.
Yang, Laurence; Cluett, William R; Mahadevan, Radhakrishnan
2011-05-01
Systems-level design of cell metabolism is becoming increasingly important for renewable production of fuels, chemicals, and drugs. Computational models are improving in the accuracy and scope of predictions, but are also growing in complexity. Consequently, efficient and scalable algorithms are increasingly important for strain design. Previous algorithms helped to consolidate the utility of computational modeling in this field. To meet intensifying demands for high-performance strains, both the number and variety of genetic manipulations involved in strain construction are increasing. Existing algorithms have experienced combinatorial increases in computational complexity when applied toward the design of such complex strains. Here, we present EMILiO, a new algorithm that increases the scope of strain design to include reactions with individually optimized fluxes. Unlike existing approaches that would experience an explosion in complexity to solve this problem, we efficiently generated numerous alternate strain designs producing succinate, l-glutamate and l-serine. This was enabled by successive linear programming, a technique new to the area of computational strain design.
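EMILiO's exact formulation is not given in the abstract; as a hedged sketch of the inner linear program that successive linear programming re-solves at each iterate, here is flux-balance analysis on a three-reaction toy network (the network, bounds, and objective are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# toy network: R1 uptake -> A, R2: A -> B, R3: B -> secreted product
S = np.array([[1, -1,  0],     # metabolite A balance
              [0,  1, -1]])    # metabolite B balance
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units

# flux balance analysis: maximize product secretion v3 subject to S v = 0
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # optimal flux distribution; v3 = 10 here

# a successive-LP outer loop would now perturb selected reaction bounds
# (the 'genetic manipulations'), re-solve this LP, and iterate until
# the candidate strain design stops improving
```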
ERIC Educational Resources Information Center
Carroll, Susanne E.
1995-01-01
Criticizes the computer modelling experiments conducted by Sokolik and Smith (1992), which involved the learning of French gender attribution using connectionist architecture. The article argues that the experiments greatly oversimplified the complexity of gender learning, in that they were designed in such a way that knowledge that must be…
Games as Artistic Medium: Interfacing Complexity Theory in Game-Based Art Pedagogy
ERIC Educational Resources Information Center
Patton, Ryan Matthew
2011-01-01
Having computer skills, let alone access to a personal computer, has become a necessary component of contemporary Western society and many parts of the world. Digital media literacy involves youth being able to view, participate in, and make creative works with technologies in personal and meaningful ways. Games, defined in this study as…
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2012-01-01
In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
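The note's division-and-substitution method itself is not reproduced in the abstract; for comparison, a computer-algebra one-liner that performs the same decomposition while keeping the irreducible quadratic intact (using SymPy's apart, with example rational functions chosen here for illustration):

```python
from sympy import symbols, apart

x = symbols('x')
# denominators contain the irreducible quadratics x**2 + 1 and x**2 + 2*x + 5
print(apart(1 / (x * (x**2 + 1)), x))
# -> 1/x - x/(x**2 + 1)
print(apart((3*x + 5) / ((x - 1) * (x**2 + 2*x + 5)), x))
# -> 1/(x - 1) - x/(x**2 + 2*x + 5)
```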
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry needs accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. Grid generation can be tedious, and special numerical care is needed to handle skewed cells for conservation purposes. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
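A hedged minimal sketch of the immersed boundary idea the authors turn to: keep the uniform Cartesian grid and impose the body's value on masked cells each time step. The example below applies the simplest direct-forcing variant to plain diffusion (geometry, parameters, and the scalar field are illustrative assumptions; production immersed boundary flow solvers interpolate the forcing at the interface):

```python
import numpy as np

n, nu = 64, 1.0
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
dt = 0.2 * h ** 2 / nu                 # below the explicit limit h^2 / (4 nu)
X, Y = np.meshgrid(x, x, indexing="ij")
solid = (X - 0.5) ** 2 + (Y - 0.5) ** 2 < 0.15 ** 2   # immersed circular body

u = np.zeros((n, n))
u[0, :] = 1.0                          # hot wall at x = 0 drives the field

for _ in range(5000):
    lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
           - 4.0 * u[1:-1, 1:-1]) / h ** 2
    u[1:-1, 1:-1] += dt * nu * lap     # explicit diffusion step on the whole grid
    u[solid] = 0.0                     # direct forcing: impose body value on masked cells

print(u[n // 2, n // 4])               # probe a fluid point outside the masked body
```

No body-fitted grid is ever generated; the geometry enters only through the mask, which is the property that makes the approach attractive for complex terrain.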
NASA Astrophysics Data System (ADS)
Greisch, Jean Francois; Harding, Michael E.; Chmela, Jiri; Klopper, Willem M.; Schooss, Detlef; Kappes, Manfred M.
2016-06-01
The application of lanthanoid complexes ranges from photovoltaics and light-emitting diodes to quantum memories and biological assays. Rationalization of their design requires a thorough understanding of intramolecular processes such as energy transfer, charge transfer, and non-radiative decay involving their subunits. Characterization of the excited states of such complexes considerably benefits from mass spectrometric methods, since the associated optical transitions and processes are strongly affected by stoichiometry, symmetry, and overall charge state. We report herein spectroscopic measurements on ensembles of ions trapped in the gas phase and soft-landed in neon matrices. Their interpretation is considerably facilitated by direct comparison with computations. The combination of energy- and time-resolved measurements on isolated species with density functional as well as ligand-field and Franck-Condon computations enables us to infer structural as well as dynamical information about the species studied. The approach is first illustrated for sets of model lanthanoid complexes whose structure and electronic properties are systematically varied via the substitution of one component (lanthanoid or alkali/alkaline-earth ion): (i) systematic dependence of ligand-centered phosphorescence on the lanthanoid(III) promotion energy and its impact on sensitization, and (ii) structural changes induced by the substitution of alkali or alkaline-earth ions, in relation with structures inferred using ion mobility spectrometry. The temperature dependence of sensitization is briefly discussed. The focus is then shifted to measurements involving europium complexes with doxycycline, an antibiotic of the tetracycline family. Besides discussing the complexes' structural and electronic features, we report on their use to monitor enzymatic processes involving hydrogen peroxide or biologically relevant molecules such as adenosine triphosphate (ATP).
Dávid-Barrett, T.; Dunbar, R. I. M.
2013-01-01
Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses. PMID:23804623
Mathematical and Computational Modeling in Complex Biological Systems.
Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang
2017-01-01
The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinical doctors to understand. Recent developments in high-throughput technologies urge systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first examine several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update on important solutions using computational modeling approaches in systems biology. PMID:28386558
1978-09-12
the population. Only a socialist, planned economy can cope with such problems. However, the increasing complexity of the tasks faced by...the development of systems allowing man-machine dialogue does not decrease, but rather increases the complexity of the systems involved, simply...shifting the complexity to another sphere, where it is invisible to the human utilizing the system. Figures 5; references 3: 2 Russian, 1 Western
Computed intraoperative navigation guidance--a preliminary report on a new technique.
Enislidis, G; Wagner, A; Ploder, O; Ewers, R
1997-08-01
To assess the value of a computer-assisted three-dimensional guidance system (Virtual Patient System) in maxillofacial operations. Laboratory and open clinical study. Teaching Hospital, Austria. 6 patients undergoing various procedures including removal of foreign body (n=3) and biopsy, maxillary advancement, and insertion of implants (n=1 each). Computed tomographic (CT) images were stored on an optical disc, and intraoperative video images were superimposed on them. The resulting display is shown to the surgeon on a micromonitor in his head-up display for guidance during the operations. To improve orientation during complex or minimally invasive maxillofacial procedures and to make such operations easier and less traumatic. Successful transfer of computed navigation technology into an operating room environment and positive evaluation of the method by the surgeons involved. Computer-assisted three-dimensional guidance systems have the potential to make complex or minimally invasive procedures easier to perform, thereby reducing postoperative morbidity.
ERIC Educational Resources Information Center
Hoffman, Gary G.
2015-01-01
A computational laboratory experiment is described, which involves the advanced study of an atomic system. The students use concepts and techniques typically covered in a physical chemistry course but extend those concepts and techniques to more complex situations. The students get a chance to explore the study of atomic states and perform…
Metagram Software - A New Perspective on the Art of Computation.
1981-10-01
Computer Programming Information and Analysis Metagramming Philosophy Intelligence Information Systems Abstraction & Metasystems Metagramming...control would also serve well in the analysis of military and political intelligence, and in other areas where highly abstract methods of thought serve...needed in intelligence because several levels of abstraction are involved in a political or military system, because analysis entails a complex interplay
ERIC Educational Resources Information Center
Patterson, Janice H.; Smith, Marshall S.
This report presents a national agenda for research on the learning of thinking skills via computer technology, developed at a National Academy of Sciences conference on educational, methodological, and practical issues involved in the use of computers to promote complex thought in grades K-12. The discussion of research topics agreed…
Perspective: Quantum mechanical methods in biochemistry and biophysics.
Cui, Qiang
2016-10-14
In this perspective article, I discuss several research topics relevant to quantum mechanical (QM) methods in biophysical and biochemical applications. Due to the immense complexity of biological problems, the key is to develop methods that are able to strike the proper balance of computational efficiency and accuracy for the problem of interest. Therefore, in addition to the development of novel ab initio and density functional theory based QM methods for the study of reactive events that involve complex motifs such as transition metal clusters in metalloenzymes, it is equally important to develop inexpensive QM methods and advanced classical or quantal force fields to describe different physicochemical properties of biomolecules and their behaviors in complex environments. Maintaining a solid connection of these more approximate methods with rigorous QM methods is essential to their transferability and robustness. Comparison to diverse experimental observables helps validate computational models and mechanistic hypotheses as well as driving further development of computational methodologies.
Social network extraction based on Web: 1. Related superficial methods
NASA Astrophysics Data System (ADS)
Khairuddin Matyuso Nasution, Mahyuddin
2018-01-01
Often the nature of a thing shapes the methods used to resolve issues related to it. The same holds for methods of extracting social networks from the Web, which involve differently structured data types. This paper describes several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and the related superficial methods. We derive complexity inequalities between the methods and their computations. In this case, we find that different results from the same tools distinguish the more complex from the simpler: extraction of a social network involving co-occurrences is more complex than extraction using occurrences alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that allows the scheduling to be performed in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. Experimental evaluation shows that the integrated approach takes considerably less computational effort than the previous approach.
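As background to the tournament structure underlying such models, the sketch below uses the classical circle method to build a single round robin in plain Python; it is a hypothetical illustration of the combinatorics only, not the paper's integrated constraint programming model or its symmetry-breaking constraints.

```python
def round_robin(teams):
    """Classical circle method: fix the first team, rotate the rest.

    Returns a list of rounds, each a list of pairings; every pair of
    teams meets exactly once over len(teams)-1 rounds (for even sizes).
    """
    teams = list(teams)
    if len(teams) % 2:
        teams.append(None)  # dummy opponent = bye
    n = len(teams)
    rounds = []
    for _ in range(n - 1):
        pairs = [(teams[i], teams[n - 1 - i]) for i in range(n // 2)
                 if None not in (teams[i], teams[n - 1 - i])]
        rounds.append(pairs)
        teams = [teams[0], teams[-1]] + teams[1:-1]  # rotate all but the first
    return rounds

for rnd, games in enumerate(round_robin(range(1, 7)), start=1):
    print(f"round {rnd}: {games}")
```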
NASA Astrophysics Data System (ADS)
Fritze, Matthew D.
Fluid-structure interaction (FSI) modeling of spacecraft parachutes involves a number of computational challenges. The canopy complexity created by the hundreds of gaps and slits, and the design-related modification of that geometric porosity by removal of some of the sails and panels, are among the formidable challenges. Disreefing from one stage to another when the parachute is used in multiple stages is another formidable challenge. This thesis addresses the computational challenges involved in disreefing of spacecraft parachutes and in the fully-open and reefed stages of parachutes with modified geometric porosity. The special techniques developed to address these challenges are described and the FSI computations are reported. The thesis also addresses the modeling and computational challenges involved in the very early stages, where the sudden separation of a cover jettisoned into the spacecraft wake needs to be modeled. Higher-order temporal representations used in modeling the separation motion are described, and the computed separation and wake-induced forces acting on the cover are reported.
NASA Astrophysics Data System (ADS)
Noble, J. H.; Lubasch, M.; Stevens, J.; Jentschura, U. D.
2017-12-01
We describe a matrix diagonalization algorithm for complex symmetric (not Hermitian) matrices, A = A^T, which is based on a two-step algorithm involving generalized Householder reflections based on the indefinite inner product <u, v>_* = sum_i u_i v_i. This inner product is linear in both arguments and avoids complex conjugation. The complex symmetric input matrix is transformed to tridiagonal form using generalized Householder transformations (first step). An iterative, generalized QL decomposition of the tridiagonal matrix employing an implicit shift converges toward diagonal form (second step). The QL algorithm employs iterative deflation techniques when a machine-precision zero is encountered "prematurely" on the super-/sub-diagonal. The algorithm allows for a reliable and computationally efficient computation of resonance and antiresonance energies which emerge from complex-scaled Hamiltonians, and for the numerical determination of the real energy eigenvalues of pseudo-Hermitian and PT-symmetric Hamilton matrices. Numerical reference values are provided.
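To make the first step concrete, here is a minimal numpy sketch of tridiagonalizing a complex symmetric matrix with Householder-like reflections built on the bilinear product <u, v>_* = sum_i u_i v_i; the tolerances are illustrative, and the sketch omits the paper's implicit-shift QL iteration and deflation logic and does not handle all possible breakdowns.

```python
import numpy as np

def bilinear(u, v):
    # Indefinite inner product <u, v>_* = sum_i u_i v_i (no conjugation).
    return np.sum(u * v)

def tridiagonalize_complex_symmetric(A):
    """Reduce a complex symmetric matrix to tridiagonal form via
    generalized (bilinear-form) Householder reflections."""
    A = np.array(A, dtype=complex)
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k].copy()
        alpha = np.sqrt(bilinear(x, x))          # complex in general
        if abs(alpha) < 1e-13:
            continue                             # (quasi-)null column: breakdown case
        if abs(x[0] - alpha) < abs(x[0] + alpha):
            alpha = -alpha                       # pick the sign avoiding cancellation
        v = x.copy()
        v[0] -= alpha
        denom = bilinear(v, v)
        if abs(denom) < 1e-13:
            continue                             # breakdown; a robust code would recover
        P = np.eye(n - k - 1, dtype=complex) - 2.0 * np.outer(v, v) / denom
        H = np.eye(n, dtype=complex)
        H[k + 1:, k + 1:] = P                    # H equals its own inverse
        A = H @ A @ H                            # similarity transform keeps eigenvalues
    return A

A = np.array([[1 + 1j, 2.0, 0.5],
              [2.0, 3j, 1.0],
              [0.5, 1.0, 2.0]])                  # symmetric, not Hermitian
T = tridiagonalize_complex_symmetric(A)
print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.linalg.eigvals(T))))  # True: spectrum preserved
```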
Development of X-TOOLSS: Preliminary Design of Space Systems Using Evolutionary Computation
NASA Technical Reports Server (NTRS)
Schnell, Andrew R.; Hull, Patrick V.; Turner, Mike L.; Dozier, Gerry; Alverson, Lauren; Garrett, Aaron; Reneau, Jarred
2008-01-01
Evolutionary computation (EC) techniques such as genetic algorithms (GA) have been identified as promising methods to explore the design space of mechanical and electrical systems at the earliest stages of design. In this paper the authors summarize their research in the use of evolutionary computation to develop preliminary designs for various space systems. An evolutionary computational solver developed over the course of the research, X-TOOLSS (Exploration Toolset for the Optimization of Launch and Space Systems), is discussed. With the success of early, low-fidelity example problems, an outline of work involving more computationally complex models is discussed.
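For orientation, a minimal real-coded genetic algorithm is sketched below; the four-parameter objective is a made-up placeholder, and the operators (tournament selection, uniform crossover, Gaussian mutation) are generic choices rather than X-TOOLSS's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(x):
    # Hypothetical placeholder "design objective" to minimize.
    return np.sum((x - 0.3) ** 2)

pop = rng.uniform(0.0, 1.0, size=(40, 4))        # 40 candidate designs, 4 parameters
for generation in range(100):
    fitness = np.apply_along_axis(objective, 1, pop)
    # Binary tournament selection: keep the better of two random individuals.
    pairs = rng.integers(0, len(pop), size=(len(pop), 2))
    winners = np.where(fitness[pairs[:, 0]] < fitness[pairs[:, 1]],
                       pairs[:, 0], pairs[:, 1])
    parents = pop[winners]
    # Uniform crossover with a shuffled set of mates.
    mates = parents[rng.permutation(len(parents))]
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, mates)
    # Gaussian mutation applied to roughly 10% of genes.
    children += rng.normal(0.0, 0.02, size=children.shape) * (rng.random(children.shape) < 0.1)
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmin(np.apply_along_axis(objective, 1, pop))]
print("best design found:", best)                 # should approach [0.3, 0.3, 0.3, 0.3]
```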
Aeropropulsion 1987. Session 2: Aeropropulsion Structures Research
NASA Technical Reports Server (NTRS)
1987-01-01
Aeropropulsion systems present unique problems to the structural engineer. The extremes in operating temperatures, rotational effects, and behaviors of advanced material systems combine into complexities that require advances in many scientific disciplines involved in structural analysis and design procedures. This session provides an overview of the complexities of aeropropulsion structures and the theoretical, computational, and experimental research conducted to achieve the needed advances.
ERIC Educational Resources Information Center
Winkel, Brian
2008-01-01
A complex technology-based problem in visualization and computation for students in calculus is presented. Strategies are shown for its solution and the opportunities for students to put together sequences of concepts and skills to build for success are highlighted. The problem itself involves placing an object under water in order to actually see…
Schurdak, Mark E; Pei, Fen; Lezon, Timothy R; Carlisle, Diane; Friedlander, Robert; Taylor, D Lansing; Stern, Andrew M
2018-01-01
Designing effective therapeutic strategies for complex diseases such as cancer and neurodegeneration that involve tissue context-specific interactions among multiple gene products presents a major challenge for precision medicine. Safe and selective pharmacological modulation of individual molecular entities associated with a disease often fails to provide efficacy in the clinic. Thus, development of optimized therapeutic strategies for individual patients with complex diseases requires a more comprehensive, systems-level understanding of disease progression. Quantitative systems pharmacology (QSP) is an approach to drug discovery that integrates computational and experimental methods to understand the molecular pathogenesis of a disease at the systems level more completely. Described here is the chemogenomic component of QSP for the inference of biological pathways involved in the modulation of the disease phenotype. The approach involves testing sets of compounds of diverse mechanisms of action in a disease-relevant phenotypic assay and using the mechanistic information known for the active compounds to infer pathways and networks associated with the phenotype. The example used here is monogenic Huntington's disease (HD), which due to the pleiotropic nature of the mutant phenotype has a complex pathogenesis. The overall approach, however, is applicable to any complex disease.
2014-12-01
Introduction 1.1 Background In today's world of high-tech warfare, we have developed the ability to deploy virtually any type of ordnance quickly and...TEMPORALLY ADJUSTED COMPLEX AMBIGUITY...this time due to time constraints and the high computational complexity involved in the current implementation of the Moss algorithm. Full maps, with
C-N bond cleavage of anilines by a (salen)ruthenium(VI) nitrido complex.
Man, Wai-Lun; Xie, Jianhui; Pan, Yi; Lam, William W Y; Kwong, Hoi-Ki; Ip, Kwok-Wa; Yiu, Shek-Man; Lau, Kai-Chung; Lau, Tai-Chu
2013-04-17
We report experimental and computational studies of the facile oxidative C-N bond cleavage of anilines by a (salen)ruthenium(VI) nitrido complex. We provide evidence that the initial step involves nucleophilic attack of aniline at the nitrido ligand of the ruthenium complex, which is followed by proton and electron transfer to afford a (salen)ruthenium(II) diazonium intermediate. This intermediate then undergoes unimolecular decomposition to generate benzene and N2.
Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)
NASA Technical Reports Server (NTRS)
Dalton, Shelly D.; Daley, Philip C.
1988-01-01
As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.
Optimized Materials From First Principles Simulations: Are We There Yet?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galli, G; Gygi, F
2005-07-26
In the past thirty years, the use of scientific computing has become pervasive in all disciplines: collection and interpretation of most experimental data is carried out using computers, and physical models in computable form, with various degrees of complexity and sophistication, are utilized in all fields of science. However, full prediction of physical and chemical phenomena based on the basic laws of Nature, using computer simulations, is a revolution still in the making, and it involves some formidable theoretical and computational challenges. We illustrate the progress and successes obtained in recent years in predicting fundamental properties of materials in condensed phases and at the nanoscale, using ab-initio, quantum simulations. We also discuss open issues related to the validation of the approximate, first principles theories used in large scale simulations, and the resulting complex interplay between computation and experiment. Finally, we describe some applications, with focus on nanostructures and liquids, both at ambient and under extreme conditions.
Ab initio atomic recombination reaction energetics on model heat shield surfaces
NASA Technical Reports Server (NTRS)
Senese, Fredrick; Ake, Robert
1992-01-01
Ab initio quantum mechanical calculations on small hydration complexes involving the nitrate anion are reported. The self-consistent field method with accurate basis sets has been applied to compute completely optimized equilibrium geometries, vibrational frequencies, thermochemical parameters, and stable site labilities of complexes involving 1, 2, and 3 waters. The most stable geometries in the first hydration shell involve in-plane waters bridging pairs of nitrate oxygens with two equal and bent hydrogen bonds. A second extremely labile local minimum involves out-of-plane waters with a single hydrogen bond and lies about 2 kcal/mol higher. The potential in the region of the second minimum is extremely flat and qualitatively sensitive to changes in the basis set; it does not correspond to a true equilibrium structure.
The fuzzy cube and causal efficacy: representation of concomitant mechanisms in stroke.
Jobe, Thomas H.; Helgason, Cathy M.
1998-04-01
Twentieth century medical science has embraced nineteenth century Boolean probability theory based upon two-valued Aristotelian logic. With the later addition of bit-based, von Neumann structured computational architectures, an epistemology based on randomness has led to a bivalent epidemiological methodology that dominates medical decision making. In contrast, fuzzy logic, based on twentieth century multi-valued logic, and computational structures that are content addressed and adaptively modified, has advanced a new scientific paradigm for the twenty-first century. Diseases such as stroke involve multiple concomitant causal factors that are difficult to represent using conventional statistical methods. We tested which paradigm best represented this complex multi-causal clinical phenomenon: stroke. We show that the fuzzy logic paradigm better represented clinical complexity in cerebrovascular disease than current probability theory based methodology. We believe this finding is generalizable to all of clinical science, since multiple concomitant causal factors are involved in nearly all known pathological processes.
Exact posterior computation in non-conjugate Gaussian location-scale parameters models
NASA Astrophysics Data System (ADS)
Andrade, J. A. A.; Rathie, P. N.
2017-12-01
In Bayesian analysis the class of conjugate models allows exact posterior distributions to be obtained; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, so approximate methods, such as MCMC algorithms, are required. Although these methods can deal with quite complex structures, some practical problems can make their application very time demanding: with heavy-tailed distributions, for example, convergence may be difficult and the Metropolis-Hastings algorithm can become very slow, in addition to the extra work inevitably required to choose efficient candidate generator distributions. In this work, we draw attention to special functions as tools for Bayesian computation, and we propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in exact form. We use complex integration methods based on the H-function in order to obtain the posterior distribution and some of its posterior quantities in an explicitly computable form. Two examples are provided in order to illustrate the theory.
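For contrast with the exact route via special functions, the sketch below runs the kind of random-walk Metropolis-Hastings sampler the authors seek to avoid, on a toy non-conjugate model (Student-t likelihood with a Gaussian prior on the location); all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-conjugate model: Student-t likelihood, Gaussian prior on location mu.
data = rng.standard_t(df=3, size=50) + 2.0

def log_post(mu, nu=3.0, prior_sd=10.0):
    log_lik = -0.5 * (nu + 1.0) * np.sum(np.log1p((data - mu) ** 2 / nu))
    log_prior = -0.5 * (mu / prior_sd) ** 2
    return log_lik + log_prior

mu, chain = 0.0, []
for _ in range(20_000):
    proposal = mu + 0.5 * rng.standard_normal()   # random-walk proposal
    if np.log(rng.uniform()) < log_post(proposal) - log_post(mu):
        mu = proposal                              # accept
    chain.append(mu)

print("posterior mean of mu:", np.mean(chain[5_000:]))  # burn-in discarded
```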
Chmela, Jiří; Greisch, Jean-François; Harding, Michael E; Klopper, Wim; Kappes, Manfred M; Schooss, Detlef
2018-03-08
The gas-phase laser-induced photoluminescence of cationic mononuclear gadolinium and lutetium complexes involving two 9-oxophenalen-1-one ligands is reported. Performing measurements at a temperature of 83 K enables us to resolve vibronic transitions. Via comparison to Franck-Condon computations, the main vibrational contributions to the ligand-centered phosphorescence are determined to involve rocking, wagging, and stretching of the 9-oxophenalen-1-one-lanthanoid coordination in the low-energy range, intraligand bending, and stretching in the medium- to high-energy range, rocking of the carbonyl and methine groups, and C-H stretching beyond. Whereas Franck-Condon calculations based on density-functional harmonic frequency computations reproduce the main features of the vibrationally resolved emission spectra, the absolute transition energies as determined by density functional theory are off by several thousand wavenumbers. This discrepancy is found to remain at higher computational levels. The relative energy of the Gd(III) and Lu(III) emission bands is only reproduced at the coupled-cluster singles and doubles level and beyond.
Multicore Architecture-aware Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinivasa, Avinash
Modern high performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large-scale physics application computing ab-initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to the changing computational resource availability at run-time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. These capabilities, when included, were found to have a significant impact on the application performance, resulting in average speedups of as much as two to four times.
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
Park, Subok; Clarkson, Eric
2010-01-01
The Bayesian ideal observer is optimal among all observers and sets an absolute upper bound for the performance of any observer in classification tasks [Van Trees, Detection, Estimation, and Modulation Theory, Part I (Academic, 1968)]. Therefore, the ideal observer should be used for objective image quality assessment whenever possible. However, computation of ideal-observer performance is difficult in practice because this observer requires the full description of unknown, statistical properties of high-dimensional, complex data arising in real life problems. Previously, Markov-chain Monte Carlo (MCMC) methods were developed by Kupinski et al. [J. Opt. Soc. Am. A 20, 430 (2003)] and by Park et al. [J. Opt. Soc. Am. A 24, B136 (2007) and IEEE Trans. Med. Imaging 28, 657 (2009)] to estimate the performance of the ideal observer and the channelized ideal observer (CIO), respectively, in classification tasks involving non-Gaussian random backgrounds. However, both algorithms had the disadvantage of long computation times. We propose a fast MCMC for real-time estimation of the likelihood ratio for the CIO. Our simulation results show that our method has the potential to speed up the estimation of ideal-observer performance in tasks involving complex data when efficient channels are used for the CIO. PMID:19884916
2003-08-18
KENNEDY SPACE CENTER, FLA. - Dr. Grant Gilmore, Dynamac Corp., utilizes a laptop computer to explain aspects of the underwater acoustic research under way in the Launch Complex 39 turn basin. Several government agencies, including NASA, NOAA, the Navy, the Coast Guard, and the Florida Fish and Wildlife Commission are involved in the testing. The research involves demonstrations of passive and active sensor technologies, with applications in fields ranging from marine biological research to homeland security. The work is also serving as a pilot project to assess the cooperation between the agencies involved. Equipment under development includes a passive acoustic monitor developed by NASA’s Jet Propulsion Laboratory, and mobile robotic sensors from the Navy’s Mobile Diving and Salvage Unit.
The computational challenges of Earth-system science.
O'Neill, Alan; Steenman-Clark, Lois
2002-06-15
The Earth system--comprising atmosphere, ocean, land, cryosphere and biosphere--is an immensely complex system, involving processes and interactions on a wide range of space- and time-scales. To understand and predict the evolution of the Earth system is one of the greatest challenges of modern science, with success likely to bring enormous societal benefits. High-performance computing, along with the wealth of new observational data, is revolutionizing our ability to simulate the Earth system with computer models that link the different components of the system together. There are, however, considerable scientific and technical challenges to be overcome. This paper will consider four of them: complexity, spatial resolution, inherent uncertainty and time-scales. Meeting these challenges requires a significant increase in the power of high-performance computers. The benefits of being able to make reliable predictions about the evolution of the Earth system should, on their own, amply repay this investment.
Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak
2010-01-01
Propellant loading from the Storage Tank to the External Tank is one of the very important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the very important parameters useful for design purposes are the prediction of pre-chill time, loading time, amount of fuel lost, the maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex: the process is unsteady, there is phase change as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is tedious and time-consuming as well. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of numerical modeling toward the design of such a system. The students first have to become familiar with the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally effective (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.
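As a much-simplified illustration of one such design quantity, the toy lumped-capacitance model below estimates a pre-chill time by explicit Euler stepping; every number is a hypothetical placeholder, and the real problem adds phase change, line hydraulics, and conjugate heat transfer that this sketch ignores.

```python
# Toy lumped-capacitance chilldown model: a warm pipe segment cooled by
# cryogenic flow; explicit Euler time stepping until a pre-chill target.
m_cp = 5.0e4        # pipe thermal mass [J/K] (placeholder)
hA = 150.0          # convective conductance to the cryogen [W/K] (placeholder)
T_fluid = 90.0      # cryogen temperature [K]
T = 300.0           # initial wall temperature [K]
dt, t = 0.5, 0.0    # time step and elapsed time [s]

while T > 110.0:    # pre-chill target temperature [K]
    T += dt * hA * (T_fluid - T) / m_cp   # dT/dt = hA (T_fluid - T) / (m cp)
    t += dt

print(f"estimated pre-chill time: {t / 60:.1f} min")
```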
Ligand Binding: Molecular Mechanics Calculation of the Streptavidin-Biotin Rupture Force
NASA Astrophysics Data System (ADS)
Grubmuller, Helmut; Heymann, Berthold; Tavan, Paul
1996-02-01
The force required to rupture the streptavidin-biotin complex was calculated here by computer simulations. The computed force agrees well with that obtained by recent single molecule atomic force microscope experiments. These simulations suggest a detailed multiple-pathway rupture mechanism involving five major unbinding steps. Binding forces and specificity are attributed to a hydrogen bond network between the biotin ligand and residues within the binding pocket of streptavidin. During rupture, additional water bridges substantially enhance the stability of the complex and even dominate the binding interactions. In contrast, steric restraints do not appear to contribute to the binding forces, although conformational motions were observed.
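A one-dimensional caricature of such a rupture-force computation: overdamped Langevin dynamics of a particle in a Morse well, pulled by a harmonic spring moving at constant speed, with the peak spring force read off as the apparent rupture force. All parameters are arbitrary and the sketch has no quantitative relation to the streptavidin-biotin system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy force-probe experiment: bond = Morse potential, probe = moving spring.
D, a = 5.0, 2.0          # Morse depth and width (arbitrary units)
k, v = 2.0, 0.02         # spring stiffness and pulling speed
dt, gamma, kT = 1e-3, 1.0, 0.6

def bond_force(x):
    # F = -dU/dx for U(x) = D * (1 - exp(-a*x))**2
    e = np.exp(-a * x)
    return -2.0 * D * a * e * (1.0 - e)

x, spring_forces = 0.0, []
for step in range(200_000):
    pull = k * (v * step * dt - x)                     # force from the moving probe
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()
    x += dt / gamma * (bond_force(x) + pull) + noise   # overdamped Langevin step
    spring_forces.append(pull)

print("apparent rupture force:", max(spring_forces))
```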
Glisky, E L; Schacter, D L
1989-01-01
This study explored the limits of learning that could be achieved by an amnesic patient in a complex real-world domain. Using a cuing procedure known as the method of vanishing cues, a severely amnesic encephalitic patient was taught over 250 discrete pieces of new information concerning the rules and procedures for performing a task involving data entry into a computer. Subsequently, she was able to use this acquired knowledge to perform the task accurately and efficiently in the workplace. These results suggest that amnesic patients' preserved learning abilities can be extended well beyond what has been reported previously.
Jami-Alahmadi, Yasaman; Fridgen, Travis D
2016-01-21
M(Pro2-H)(+) complexes were electrosprayed and isolated in an FTICR cell where their unimolecular chemistries and structures were explored using SORI-CID and IRMPD spectroscopy. These experiments were augmented by computational methods such as electronic structure, simulated annealing, and atoms in molecules (AIM) calculations. The unimolecular chemistries of the larger metal cation (Ca(2+), Sr(2+) and Ba(2+)) complexes predominantly involve loss of neutral proline whereas the complexes involving the smaller Mg(2+) and transition metal dications tend to lose small neutral molecules such as water and carbon dioxide. Interestingly, all complexes involving transition metal dications except for Cu(Pro2-H)(+) lose H2 upon collisional or IRMPD activation. IRMPD spectroscopy shows that the intact proline in the transition metal complexes and Cu(Pro2-H)(+) is predominantly canonical (charge solvated) while for the Ca(2+), Sr(2+), and Ba(2+) complexes, proline is in its zwitterionic form. The IRMPD spectra for both Mg(Pro2-H)(+) and Mn(Pro2-H)(+) are concluded to have contributions from both charge-solvated and canonical structures.
Shen, Weifeng; Jiang, Libing; Zhang, Mao; Ma, Yuefeng; Jiang, Guanyu; He, Xiaojun
2014-01-01
To systematically review research methods for mass casualty incidents (MCI) and to introduce the concept and characteristics of complexity science and the ACP (artificial systems, computational experiments, and parallel execution) method. We searched PubMed, Web of Knowledge, China Wanfang and China Biology Medicine (CBM) databases for relevant studies. Searches were performed without year or language restrictions and used combinations of the following key words: "mass casualty incident", "MCI", "research method", "complexity science", "ACP", "approach", "science", "model", "system" and "response". Articles were searched using the above keywords and only those involving research methods for MCI were enrolled. Research methods for MCI have increased markedly over the past few decades. At present, the dominant research methods for MCI are the theory-based approach, the empirical approach, evidence-based science, mathematical modeling and computer simulation, simulation experiments, experimental methods, the scenario approach, and complexity science. This article provides an overview of the development of research methodology for MCI. The progress of routine research approaches and of complexity science is briefly presented in this paper. Furthermore, the authors conclude that the reductionism underlying exact science is not suitable for complex MCI systems, and that the only feasible alternative is complexity science. Finally, this summary is followed by a review arguing that the ACP method, combining artificial systems, computational experiments and parallel execution, provides a new way to address research on complex MCI.
NASA Technical Reports Server (NTRS)
Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.
1986-01-01
Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koniges, A.E.; Craddock, G.G.; Schnack, D.D.
The purpose of the workshop was to assemble workers, both within and outside of the fusion-related computation areas, for discussion regarding the issues of dynamically adaptive gridding. There were three invited talks related to adaptive gridding application experiences in various related fields of computational fluid dynamics (CFD), and nine short talks reporting on the progress of adaptive techniques in the specific areas of scrape-off-layer (SOL) modeling and magnetohydrodynamic (MHD) stability. Adaptive mesh methods have been successful in a number of diverse fields of CFD for over a decade. The method involves dynamic refinement of computed field profiles in a way that disperses uniformly the numerical errors associated with discrete approximations. Because the process optimizes computational effort, adaptive mesh methods can be used to study otherwise intractable physical problems that involve complex boundary shapes or multiple spatial/temporal scales. Recent results indicate that these adaptive techniques will be required for tokamak fluid-based simulations involving diverted tokamak SOL modeling and MHD simulation problems related to the highest-priority ITER-relevant issues. Individual papers are indexed separately on the energy data bases.
An evaluation of superminicomputers for thermal analysis
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Vidal, J. B.; Jones, G. K.
1982-01-01
The use of superminicomputers for solving a series of increasingly complex thermal analysis problems is investigated. The approach involved (1) installation and verification of the SPAR thermal analyzer software on superminicomputers at Langley Research Center and Goddard Space Flight Center, (2) solution of six increasingly complex thermal problems on this equipment, and (3) comparison of solutions (accuracy, CPU time, turnaround time, and cost) with those obtained on large mainframe computers.
NASA Astrophysics Data System (ADS)
Chen, Hudong
2001-06-01
There have been considerable advances in Lattice Boltzmann (LB) based methods in the last decade. By now, the fundamental concept of using the approach as an alternative tool for computational fluid dynamics (CFD) has been substantially appreciated and validated in mainstream scientific research and in industrial engineering communities. Lattice Boltzmann based methods possess several major advantages: a) less numerical dissipation due to the linear Lagrange type advection operator in the Boltzmann equation; b) local dynamic interactions suitable for highly parallel processing; c) physical handling of boundary conditions for complicated geometries and accurate control of fluxes; d) microscopically consistent modeling of thermodynamics and of interface properties in complex multiphase flows. It provides a great opportunity to apply the method to practical engineering problems encountered in a wide range of industries from automotive, aerospace to chemical, biomedical, petroleum, nuclear, and others. One of the key challenges is to extend the applicability of this alternative approach to regimes of highly turbulent flows commonly encountered in practical engineering situations involving high Reynolds numbers. Over the past ten years, significant efforts have been made on this front at Exa Corporation in developing a lattice Boltzmann based commercial CFD software, PowerFLOW. It has become a useful computational tool for the simulation of turbulent aerodynamics in practical engineering problems involving extremely complex geometries and flow situations, such as in new automotive vehicle designs worldwide. In this talk, we present an overall LB-based algorithm concept along with certain key extensions in order to accurately handle turbulent flows involving extremely complex geometries. To demonstrate the accuracy of turbulent flow simulations, we provide a set of validation results for some well known academic benchmarks. These include straight channels, backward-facing steps, flows over a curved hill and typical NACA airfoils at various angles of attack including prediction of stall angle. We further provide numerous engineering cases, ranging from external aerodynamics around various car bodies to internal flows involved in various industrial devices. We conclude with a discussion of certain future extensions for complex fluids.
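The core of the scheme is compact enough to sketch. Below is a minimal D2Q9 BGK lattice Boltzmann step (collision plus streaming) on a fully periodic grid, initialized with a decaying Taylor-Green vortex; it reflects the textbook method only, not PowerFLOW's turbulence modeling, boundary treatment, or parallelization.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
nx = ny = 64
tau = 0.8                      # relaxation time; viscosity nu = (tau - 0.5)/3
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

# Initial condition: Taylor-Green vortex on a periodic box.
X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
ux = 0.05 * np.sin(2*np.pi*X/nx) * np.cos(2*np.pi*Y/ny)
uy = -0.05 * np.cos(2*np.pi*X/nx) * np.sin(2*np.pi*Y/ny)
f = equilibrium(np.ones((nx, ny)), ux, uy)

for step in range(1000):
    rho = f.sum(axis=0)
    ux = (c[:, 0, None, None] * f).sum(axis=0) / rho
    uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
    for i, (cx, cy) in enumerate(c):                   # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)

print("peak speed after viscous decay:", np.hypot(ux, uy).max())  # below initial 0.05
```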
Digitized adiabatic quantum computing with a superconducting circuit.
Barends, R; Shabani, A; Lamata, L; Kelly, J; Mezzacapo, A; Las Heras, U; Babbush, R; Fowler, A G; Campbell, B; Chen, Yu; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Lucero, E; Megrant, A; Mutus, J Y; Neeley, M; Neill, C; O'Malley, P J J; Quintana, C; Roushan, P; Sank, D; Vainsencher, A; Wenner, J; White, T C; Solano, E; Neven, H; Martinis, John M
2016-06-09
Quantum mechanics can help to solve complex problems in physics and chemistry, provided they can be programmed in a physical device. In adiabatic quantum computing, a system is slowly evolved from the ground state of a simple initial Hamiltonian to a final Hamiltonian that encodes a computational problem. The appeal of this approach lies in the combination of simplicity and generality; in principle, any problem can be encoded. In practice, applications are restricted by limited connectivity, available interactions and noise. A complementary approach is digital quantum computing, which enables the construction of arbitrary interactions and is compatible with error correction, but uses quantum circuit algorithms that are problem-specific. Here we combine the advantages of both approaches by implementing digitized adiabatic quantum computing in a superconducting system. We tomographically probe the system during the digitized evolution and explore the scaling of errors with system size. We then let the full system find the solution to random instances of the one-dimensional Ising problem as well as problem Hamiltonians that involve more complex interactions. This digital quantum simulation of the adiabatic algorithm consists of up to nine qubits and up to 1,000 quantum logic gates. The demonstration of digitized adiabatic quantum computing in the solid state opens a path to synthesizing long-range correlations and solving complex computational problems. When combined with fault-tolerance, our approach becomes a general-purpose algorithm that is scalable.
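A classical simulation conveys the idea for a few qubits. The sketch below Trotterizes a linear sweep from a transverse-field driver to a small random Ising problem Hamiltonian using dense matrix exponentials; the chain length, couplings, and schedule are all illustrative, not the experiment's nine-qubit circuit.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

n, steps, dt = 4, 200, 0.1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site):
    # Embed a single-qubit operator at `site` in the n-qubit register.
    return reduce(np.kron, [single if q == site else I2 for q in range(n)])

rng = np.random.default_rng(3)
J = rng.choice([-1.0, 1.0], size=n - 1)               # random Ising couplings
H0 = -sum(op(sx, q) for q in range(n))                # transverse-field driver
H1 = (-sum(J[q] * op(sz, q) @ op(sz, q + 1) for q in range(n - 1))
      - 0.5 * sum(op(sz, q) for q in range(n)))       # small field breaks degeneracy

psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)  # ground state of H0
for step in range(steps):
    s = (step + 1) / steps                            # annealing schedule s: 0 -> 1
    # One digital step: separate pulses for driver and problem Hamiltonian.
    psi = expm(-1j * dt * s * H1) @ (expm(-1j * dt * (1 - s) * H0) @ psi)

evals, evecs = np.linalg.eigh(H1)
overlap = abs(evecs[:, 0].conj() @ psi) ** 2
print("ground-state overlap:", overlap)               # near 1 if the sweep was adiabatic
```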
Oligomerization of G protein-coupled receptors: computational methods.
Selent, J; Kaczor, A A
2011-01-01
Recent research has unveiled the complexity of mechanisms involved in G protein-coupled receptor (GPCR) functioning in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role for predicting accurate GPCR complexes. This review outlines computational approaches focusing on sequence- and structure-based methodologies as well as discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies, but did not yield always consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method especially when guided by experimental constraints. Some disadvantages like limited receptor flexibility and non-consideration of the membrane environment have to be taken into account. Molecular dynamics simulation can overcome these drawbacks giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches combined with experimental efforts may help to understand the role of dimeric/oligomeric GPCR complexes for fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug target for diverse diseases, unveiling molecular determinants of dimerization/oligomerization can provide important implications for drug discovery.
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
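For readers unfamiliar with the technique, the sketch below shows what multigrid convergence acceleration looks like in its simplest setting: a 1D Poisson V-cycle with weighted-Jacobi smoothing and injection restriction. The production methodology applies the idea to implicit finite-volume flow solvers on multiblock grids, which this toy does not attempt.

```python
import numpy as np

def v_cycle(u, f, n_pre=3, n_post=3):
    """One multigrid V-cycle for -u'' = f on [0, 1], homogeneous Dirichlet BCs."""
    n = len(u) - 1
    h2 = (1.0 / n) ** 2

    def smooth(u, sweeps):
        for _ in range(sweeps):  # weighted Jacobi, omega ~ 2/3
            u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h2 * f[1:-1]) - u[1:-1])
        return u

    u = smooth(u, n_pre)
    if n <= 2:
        return u
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h2   # fine-grid residual
    ec = v_cycle(np.zeros(n // 2 + 1), r[::2])                # coarse correction (injection)
    u += np.interp(np.linspace(0, 1, n + 1),
                   np.linspace(0, 1, n // 2 + 1), ec)         # linear prolongation
    return smooth(u, n_post)

n = 128
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)        # exact solution: u = sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```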
Sittig, Dean F; Singh, Hardeep
2010-10-01
Conceptual models have been developed to address challenges inherent in studying health information technology (HIT). This manuscript introduces an eight-dimensional model specifically designed to address the sociotechnical challenges involved in design, development, implementation, use and evaluation of HIT within complex adaptive healthcare systems. The eight dimensions are not independent, sequential or hierarchical, but rather are interdependent and inter-related concepts similar to compositions of other complex adaptive systems. Hardware and software computing infrastructure refers to equipment and software used to power, support and operate clinical applications and devices. Clinical content refers to textual or numeric data and images that constitute the 'language' of clinical applications. The human--computer interface includes all aspects of the computer that users can see, touch or hear as they interact with it. People refers to everyone who interacts in some way with the system, from developer to end user, including potential patient-users. Workflow and communication are the processes or steps involved in ensuring that patient care tasks are carried out effectively. Two additional dimensions of the model are internal organisational features (eg, policies, procedures and culture) and external rules and regulations, both of which may facilitate or constrain many aspects of the preceding dimensions. The final dimension is measurement and monitoring, which refers to the process of measuring and evaluating both intended and unintended consequences of HIT implementation and use. We illustrate how our model has been successfully applied in real-world complex adaptive settings to understand and improve HIT applications at various stages of development and implementation. PMID:20959322
Giannini, Valentina; Bianchi, Veronica; Carabalona, Silvia; Mazzetti, Simone; Maggiorotto, Furio; Kubatzki, Franziska; Regge, Daniele; Ponzone, Riccardo; Martincich, Laura
2017-12-01
To assess the role of a newly developed automatic method, which computes the 3D tumor-NAC distance, in predicting nipple-areola complex (NAC) involvement. Ninety-nine patients scheduled for nipple-sparing mastectomy (NSM) underwent magnetic resonance (MR) examination at 1.5 T, including sagittal T2w and dynamic contrast-enhanced (DCE)-MR imaging. An automatic method was developed to segment the NAC and the tumor and to compute the 3D distance between them. The automatic measurement was compared with manual axial and sagittal 2D measurements. NAC involvement was defined by the presence of invasive ductal or lobular carcinoma and/or ductal carcinoma in situ or ductal intraepithelial neoplasia (DIN1c - DIN3). Tumor-NAC distance was computed on 95/99 patients (25 NAC+), as three tumors were not correctly segmented (sensitivity = 97%), and 1 NAC was not detected (sensitivity = 99%). The automatic 3D distance reached the highest area under the receiver operating characteristic (ROC) curve (0.830) with respect to the manual axial (0.676), sagittal (0.664), and minimum distances (0.664). At the best cut-off point of 21 mm, the 3D distance obtained sensitivity = 72%, specificity = 80%, positive predictive value = 56%, and negative predictive value = 89%. This method could provide a reproducible biomarker to preoperatively select breast cancer patients who are candidates for NSM, thus helping surgical planning and intraoperative management of patients.
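The geometric core of such a measurement, the minimum 3D distance between two segmented structures in physical units, can be sketched as follows; the masks, voxel sizes, and function name are made up for illustration, and the published pipeline's segmentation stage is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def min_3d_distance_mm(mask_a, mask_b, voxel_size=(1.0, 1.0, 1.0)):
    # Convert voxel indices of each binary mask to physical coordinates (mm).
    pts_a = np.argwhere(mask_a) * np.asarray(voxel_size)
    pts_b = np.argwhere(mask_b) * np.asarray(voxel_size)
    dists, _ = cKDTree(pts_b).query(pts_a)   # nearest mask-b point per mask-a point
    return dists.min()

# Toy example: two small blobs in a 50^3 volume with anisotropic voxels.
vol_a = np.zeros((50, 50, 50), dtype=bool); vol_a[10:13, 10:13, 10:13] = True
vol_b = np.zeros((50, 50, 50), dtype=bool); vol_b[30:33, 10:13, 10:13] = True
print(min_3d_distance_mm(vol_a, vol_b, voxel_size=(0.9, 0.9, 3.0)), "mm")
```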
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Minnis, P.; Spangenberg, D.; Ayers, J. K.; Palikonda, R.; Vakhnin, A.; Dubois, R.; Murphy, P. R.
2014-12-01
The processing, storage and dissemination of satellite cloud and radiation products produced at NASA Langley Research Center are key activities for the Climate Science Branch. A constellation of systems operates in sync to accomplish these goals. Because of the complexity involved with operating such intricate systems, there are both high failure rates and high costs for hardware and system maintenance. Cloud computing has the potential to ameliorate cost and complexity issues. Over time, the cloud computing model has evolved and hybrid systems comprising off-site as well as on-site resources are now common. Towards our mission of providing the highest quality research products to the widest audience, we have explored the use of the Amazon Web Services (AWS) Cloud and Storage and present a case study of our results and efforts. This project builds upon NASA Langley Cloud and Radiation Group's experience with operating large and complex computing infrastructures in a reliable and cost effective manner to explore novel ways to leverage cloud computing resources in the atmospheric science environment. Our case study presents the project requirements and then examines the fit of AWS with the LaRC computing model. We also discuss the evaluation metrics, feasibility, and outcomes and close the case study with the lessons we learned that would apply to others interested in exploring the implementation of the AWS system in their own atmospheric science computing environments.
Computational Methods to Predict Protein Interaction Partners
NASA Astrophysics Data System (ADS)
Valencia, Alfonso; Pazos, Florencio
In the new paradigm for studying biological phenomena represented by Systems Biology, cellular components are not considered in isolation but as forming complex networks of relationships. Protein interaction networks are among the first objects studied from this new point of view. Deciphering the interactome (the whole network of interactions for a given proteome) has been shown to be a very complex task. Computational techniques for detecting protein interactions have become standard tools for dealing with this problem, helping and complementing their experimental counterparts. Most of these techniques use genomic or sequence features intuitively related with protein interactions and are based on "first principles" in the sense that they do not involve training with examples. There are also other computational techniques that use other sources of information (i.e. structural information or even experimental data) or are based on training with examples.
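One classical genomic feature of this first-principles kind is the phylogenetic profile: proteins whose presence/absence patterns across genomes match are candidate functional partners. The sketch below scores made-up toy profiles with a simple Hamming similarity; real pipelines use many genomes and more careful statistics.

```python
import numpy as np

genomes = ["E.coli", "B.subtilis", "M.jannaschii", "S.cerevisiae", "H.sapiens"]
profiles = {
    "protA": np.array([1, 1, 0, 1, 1]),
    "protB": np.array([1, 1, 0, 1, 1]),   # identical profile -> strong signal
    "protC": np.array([0, 1, 1, 0, 0]),
}

def profile_similarity(p, q):
    # Fraction of genomes where presence/absence agrees (Hamming similarity).
    return np.mean(p == q)

names = list(profiles)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, b, profile_similarity(profiles[a], profiles[b]))
```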
NASA Astrophysics Data System (ADS)
Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming
2016-12-01
A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function with spatial aliasing at hand is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of standard MUSIC.
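For reference, the baseline whose full spectral search the proposed technique compresses is standard MUSIC, sketched below for a half-wavelength ULA with simulated snapshots; array size, angles, and noise level are illustrative.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, grid=None):
    """Standard full-search MUSIC pseudo-spectrum for a ULA.
    X: (n_antennas, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    if grid is None:
        grid = np.linspace(-90.0, 90.0, 361)
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    _, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
    En = V[:, : n - n_sources]                    # noise subspace
    P = np.array([1.0 / np.linalg.norm(
            En.conj().T @ np.exp(-2j * np.pi * d * np.arange(n)
                                 * np.sin(np.radians(th)))) ** 2
          for th in grid])
    return grid, P

rng = np.random.default_rng(0)
n_ant, n_snap = 8, 200
doas = np.array([-20.0, 35.0])                    # true source directions (degrees)
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(n_ant), np.sin(np.radians(doas))))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.1 * (rng.standard_normal((n_ant, n_snap)) + 1j * rng.standard_normal((n_ant, n_snap)))
grid, P = music_spectrum(A @ S + N, n_sources=2)
peaks = np.flatnonzero((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])) + 1  # local maxima
print("estimated DOAs:", np.sort(grid[peaks[np.argsort(P[peaks])[-2:]]]))
```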
Field Scale Monitoring and Modeling of Water and Chemical Transfer in the Vadose Zone
USDA-ARS's Scientific Manuscript database
Natural resource systems involve highly complex interactions of soil-plant-atmosphere-management components that are extremely difficult to quantitatively describe. Computer simulations for prediction and management of watersheds, water supply areas, and agricultural fields and farms have become inc...
Microcomputers, Model Rockets, and Race Cars.
ERIC Educational Resources Information Center
Mirus, Edward A., Jr.
1985-01-01
The industrial education orientation program at Wisconsin School for the Deaf (WSD) presents problem-solving situations to all seventh- and eighth-grade hearing-impaired students. WSD developed user-friendly microcomputer software to guide students individually through complex computations involving model race cars and rockets while freeing…
NASA Technical Reports Server (NTRS)
Boyalakuntla, Kishore; Soni, Bharat K.; Thornburg, Hugh J.; Yu, Robert
1996-01-01
During the past decade, computational simulation of fluid flow around complex configurations has progressed significantly and many notable successes have been reported; however, unsteady time-dependent solutions are not easily obtainable. The present effort involves unsteady time-dependent simulation of temporally deforming geometries. Grid generation for a complex configuration can be a time-consuming process, and temporally varying geometries necessitate the regeneration of such grids at every time step. Traditional grid generation techniques have been tried and demonstrated to be inadequate for such simulations. Non-Uniform Rational B-Splines (NURBS) based techniques provide a compact and accurate representation of the geometry. This definition can be coupled with a distribution mesh for user-defined spacing. The present method greatly reduces CPU requirements for time-dependent remeshing, facilitating the simulation of more complex unsteady problems. A thrust-vectoring nozzle has been chosen to demonstrate the capability, as it is of current interest in the aerospace industry for better maneuverability of fighter aircraft in close combat and in post-stall regimes. This effort is the first step towards multidisciplinary design optimization, which involves coupling the aerodynamic, heat transfer, and structural analysis techniques. Applications include simulation of temporally deforming bodies and aeroelastic problems.
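The compactness of the NURBS representation is easy to demonstrate: with three weighted control points, a degree-2 rational B-spline reproduces a circular arc exactly. The sketch below evaluates such a curve via the Cox-de Boor recursion; it illustrates the representation only, not the thesis's remeshing procedure.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    # Cox-de Boor recursion for the B-spline basis N_{i,p}(u) (half-open spans,
    # so evaluate at u strictly below the final knot).
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, ctrl, weights, knots, p=2):
    # C(u) = sum_i N_i(u) w_i P_i / sum_i N_i(u) w_i
    N = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
    wN = N * weights
    return (wN[:, None] * ctrl).sum(axis=0) / wN.sum()

# Quarter circle as an exact degree-2 NURBS arc (classic construction).
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
weights = np.array([1.0, np.sqrt(2) / 2, 1.0])
knots = [0, 0, 0, 1, 1, 1]
print(nurbs_point(0.5, ctrl, weights, knots))   # ~ [0.7071, 0.7071], on the unit circle
```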
Manifesto of computational social science
NASA Astrophysics Data System (ADS)
Conte, R.; Gilbert, N.; Bonelli, G.; Cioffi-Revilla, C.; Deffuant, G.; Kertesz, J.; Loreto, V.; Moat, S.; Nadal, J.-P.; Sanchez, A.; Nowak, A.; Flache, A.; San Miguel, M.; Helbing, D.
2012-11-01
The increasing integration of technology into our lives has created unprecedented volumes of data on society's everyday behaviour. Such data opens up exciting new opportunities to work towards a quantitative understanding of our complex social systems, within the realms of a new discipline known as Computational Social Science. Against a background of financial crises, riots and international epidemics, the urgent need for a greater comprehension of the complexity of our interconnected global society and an ability to apply such insights in policy decisions is clear. This manifesto outlines the objectives of this new scientific direction, considering the challenges involved in it, and the extensive impact on science, technology and society that the success of this endeavour is likely to bring about.
Close to real life. [solving for transonic flow about lifting airfoils using supercomputers
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Bailey, F. Ron
1988-01-01
NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'Connection Machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has performed important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.
Community detection in complex networks using proximate support vector clustering
NASA Astrophysics Data System (ADS)
Wang, Feifan; Zhang, Baihai; Chai, Senchun; Xia, Yuanqing
2018-03-01
Community structure, one of the most attention-attracting properties of complex networks, has been a cornerstone in advances of various scientific branches. A number of tools have been involved in recent studies concentrating on community detection algorithms. In this paper, we propose a support vector clustering method based on a proximity graph, which allows the introduced algorithm to surpass the traditional support vector approach in both accuracy and complexity. Results of extensive experiments undertaken on computer-generated networks and real-world data sets illustrate competent performance in comparison with the other counterparts.
Dynamical analysis of the global business-cycle synchronization.
Lopes, António M; Tenreiro Machado, J A; Huffstot, John S; Mata, Maria Eugénia
2018-01-01
This paper reports the dynamical analysis of the business cycles of 12 (developed and developing) countries over the last 56 years by applying computational techniques used for tackling complex systems. They reveal long-term convergence and country-level interconnections because of close contagion effects caused by bilateral networking exposure. Interconnectivity determines the magnitude of cross-border impacts. Local features and shock propagation complexity also may be true engines for local configuration of cycles. The algorithmic modeling proves to represent a solid approach to study the complex dynamics involved in the world economies.
Warmann, Steven W; Schenk, Andrea; Schaefer, Juergen F; Ebinger, Martin; Blumenstock, Gunnar; Tsiflikas, Ilias; Fuchs, Joerg
2016-11-01
In complex malignant pediatric liver tumors there is an ongoing discussion regarding surgical strategy; for example, primary organ transplantation versus extended resection in hepatoblastoma involving 3 or 4 sectors of the liver. We evaluated the possible role of computer-assisted surgery planning in children with complex hepatic tumors. Between May 2004 and March 2016, 24 children with complex liver tumors underwent standard multislice helical CT or MRI scans at our institution. Imaging data were processed using the software assistant LiverAnalyzer (Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany). Results were provided as Portable Document Format (PDF) files with embedded interactive 3-dimensional surface mesh models. Median age of patients was 33 months. Diagnoses were hepatoblastoma (n=14), sarcoma (n=3), benign parenchyma alteration (n=2), as well as hepatocellular carcinoma, rhabdoid tumor, focal nodular hyperplasia, hemangioendothelioma, or multiple hepatic metastases of a pancreas carcinoma (each n=1). Volumetry of liver segments identified remarkable variations and substantial deviations from the Couinaud classification. Computer-assisted surgery planning was used to determine surgical strategies in 20/24 children; this was especially relevant in tumors affecting 3 or 4 liver sectors. Primary liver transplantation could be avoided in 12 of 14 hepatoblastoma patients who theoretically were candidates for this approach. Computer-assisted surgery planning substantially contributed to the decision for surgical strategies in children with complex hepatic tumors. This tool possibly allows determination of specific surgical procedures, such as extended surgical resection instead of primary transplantation, in certain conditions. Copyright © 2016. Published by Elsevier Inc.
The application of CFD to the modelling of fires in complex geometries
NASA Astrophysics Data System (ADS)
Burns, A. D.; Clarke, D. S.; Guilbert, P.; Jones, I. P.; Simcox, S.; Wilkes, N. S.
The application of Computational Fluid Dynamics (CFD) to industrial safety is a challenging activity. In particular, it involves the interaction of several different physical processes, including turbulence, combustion, radiation, buoyancy, compressible flow and shock waves in complex three-dimensional geometries. In addition, there may be multi-phase effects arising, for example, from sprinkler systems for extinguishing fires. The FLOW3D software (1-3) from Computational Fluid Dynamics Services (CFDS) is in widespread use in industrial safety problems, both within AEA Technology and by CFDS's commercial customers (see, for example, references 4-13). This paper discusses some other applications of FLOW3D to safety problems. These applications illustrate the coupling of the gas flows with radiation models and combustion models, particularly for complex geometries where simpler radiation models are not applicable.
Access to computer-based leisure for individuals with profound disabilities.
Bache, Jane; Derwent, Gary
2008-01-01
Advances in computer technology and the Internet have meant that more and more occupations can be made available to disabled individuals, including occupations generally considered to be leisure. However, computers and the Internet also present barriers to access for these individuals. This article discusses some of these barriers and their solutions, and highlights the complexities involved in the provision of a computer-based assistive technology solution for access to leisure for a profoundly disabled young lady. It also points out the need for the input of a highly skilled, multi-disciplinary team in the assessment for and provision of such a system.
Semi-empirical quantum evaluation of peptide - MHC class II binding
NASA Astrophysics Data System (ADS)
González, Ronald; Suárez, Carlos F.; Bohórquez, Hugo J.; Patarroyo, Manuel A.; Patarroyo, Manuel E.
2017-01-01
Peptide presentation by the major histocompatibility complex (MHC) is a key process for triggering a specific immune response. Studying peptide-MHC (pMHC) binding from a structural-based approach has potential for reducing the costs of investigation into vaccine development. This study involved using two semi-empirical quantum chemistry methods (PM7 and FMO-DFTB) for computing the binding energies of peptides bonded to HLA-DR1 and HLA-DR2. We found that key stabilising water molecules involved in the peptide binding mechanism were required for finding high correlation with IC50 experimental values. Our proposal is computationally non-intensive, and is a reliable alternative for studying pMHC binding interactions.
Institute for Defense Analysis. Annual Report 1995.
1995-01-01
staff have been involved in the community-wide development of MPI as well as in its application to specific NSA problems. Parallel Groebner Basis Code — Symbolic Computing on Parallel Machines: the Groebner basis method is a set of algorithms for reformulating very complex algebraic expressions…
2007-06-01
…information flow involved in network attacks. This kind of information can be invaluable in learning how to best set up and defend computer networks… administrators, and those interested in learning about securing networks a way to conceptualize this complex system of computing. NTAV3D will provide a three… "teaching with visual and other components can make learning more effective" (Baxley et al, 2006). A hyperbox (Alpern and Carter, 1991) is…
Using 3D computer simulations to enhance ophthalmic training.
Glittenberg, C; Binder, S
2006-01-01
To develop more effective methods of demonstrating and teaching complex topics in ophthalmology with the use of computer-aided three-dimensional (3D) animation and interactive multimedia technologies. We created 3D animations and interactive computer programmes demonstrating the neuro-ophthalmological nature of the oculomotor system, including the anatomy, physiology and pathophysiology of the extra-ocular eye muscles and the oculomotor cranial nerves, as well as pupillary symptoms of neurological diseases. At the University of Vienna we compared their teaching effectiveness to that of conventional teaching methods in a comparative study involving 100 medical students, a multiple-choice exam and a survey. The comparative study showed that our students achieved significantly better test results (80%) than the control group (63%) (difference = 17 ± 5%, p = 0.004). The survey showed a positive reaction to the software and a strong preference to have more subjects and techniques demonstrated in this fashion. Three-dimensional computer animation technology can significantly increase the quality and efficiency of the education and demonstration of complex topics in ophthalmology.
Fast calculation of the `ILC norm' in iterative learning control
NASA Astrophysics Data System (ADS)
Rice, Justin K.; van Wingerden, Jan-Willem
2013-06-01
In this paper, we discuss and demonstrate a method for the exploitation of matrix structure in computations for iterative learning control (ILC). In Barton, Bristow, and Alleyne [International Journal of Control, 83(2), 1-8 (2010)], a special insight into the structure of the lifted convolution matrices involved in ILC is used along with a modified Lanczos method to achieve very fast computational bounds on the learning convergence, by calculating the 'ILC norm' in ? computational complexity. In this paper, we show how their method is equivalent to a special instance of the sequentially semi-separable (SSS) matrix arithmetic, and thus can be extended to many other computations in ILC, and specialised in some cases to even faster methods. Our SSS-based methodology will be demonstrated on two examples: a linear time-varying example resulting in the same ? complexity as in Barton et al., and a linear time-invariant example where our approach reduces the computational complexity to ?, thus decreasing the computation time for an example from the literature by a factor of almost 100. This improvement is achieved by transforming the norm computation via a linear matrix inequality into a check of positive definiteness, which allows us to further exploit the almost-Toeplitz properties of the matrix, and additionally provides explicit upper and lower bounds on the norm of the matrix, instead of the indirect Ritz estimate. These methods are now implemented in a MATLAB toolbox, freely available on the Internet.
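For orientation, the 'ILC norm' in question is the induced 2-norm of the lifted error-propagation matrix, which for an LTI plant is built from lower-triangular Toeplitz convolution matrices. The sketch below (Python; a hypothetical first-order plant and trivial learning filters, not the paper's MATLAB toolbox) computes that norm the naive dense way, i.e., the cubic-cost computation that the SSS methods accelerate:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical first-order plant y[k+1] = a*y[k] + b*u[k]; Markov parameters b*a**(j-1).
a, b, N = 0.9, 0.5, 200
h = np.concatenate(([0.0], b * a ** np.arange(N - 1)))  # impulse response, h[0]=0 (one-step delay)
P = toeplitz(h, np.zeros(N))          # lifted convolution matrix: lower-triangular Toeplitz
Q, L = np.eye(N), 0.4 * np.eye(N)     # ILC filters (trivial here, for illustration only)
T = Q @ (np.eye(N) - P @ L)           # error-propagation matrix of the learning iteration
print(np.linalg.norm(T, 2))           # the 'ILC norm': monotonic convergence if < 1
```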
Multiple Hydrogen Bond Tethers for Grazing Formic Acid in Its Complexes with Phenylacetylene.
Karir, Ginny; Kumar, Gaurav; Kar, Bishnu Prasad; Viswanathan, K S
2018-03-01
Complexes of phenylacetylene (PhAc) and formic acid (FA) present an interesting picture, where the two submolecules are tethered, sometimes multiply, by hydrogen bonds. The multiple tentacles adopted by PhAc-FA complexes stem from the fact that both submolecules can, in the same complex, serve as proton acceptors and/or proton donors. The acetylenic and phenyl π systems of PhAc can serve as proton acceptors, while the ≡C-H or -C-H of the phenyl ring can act as a proton donor. Likewise, FA also is amphiprotic. Hence, more than 10 hydrogen-bonded structures, involving O-H···π, C-H···π, and C-H···O contacts, were indicated by our computations, some with multiple tentacles. Interestingly, despite the multiple contacts in the complexes, the barrier between some of the structures is small, and hence, FA grazes around PhAc, even while being tethered to it, with hydrogen bonds. We used matrix isolation infrared spectroscopy to experimentally study the PhAc-FA complexes, with which we located global and a few local minima, involving primarily an O-H···π interaction. Experiments were corroborated by ab initio computations, which were performed using MP2 and M06-2X methods, with 6-311++G (d,p) and aug-cc-pVDZ basis sets. Single-point energy calculations were also done at MP2/CBS and CCSD(T)/CBS levels. The nature, strength, and origin of these noncovalent interactions were studied using AIM, NBO, and LMO-EDA analysis.
Climate Modeling with a Million CPUs
NASA Astrophysics Data System (ADS)
Tobis, M.; Jackson, C. S.
2010-12-01
Meteorological, oceanographic, and climatological applications have been at the forefront of scientific computing since its inception. The trend toward ever larger and more capable computing installations is unabated. However, much of the increase in capacity is accompanied by an increase in parallelism and a concomitant increase in complexity. An increase of at least four additional orders of magnitude in the computational power of scientific platforms is anticipated. It is unclear how individual climate simulations can continue to make effective use of the largest platforms. Conversion of existing community codes to higher resolution, or to more complex phenomenology, or both, presents daunting design and validation challenges. Our alternative approach is to use the expected resources to run very large ensembles of simulations of modest size, rather than to await the emergence of very large simulations. We are already doing this in exploring the parameter space of existing models using the Multiple Very Fast Simulated Annealing algorithm, which was developed for seismic imaging. Our experiments have the dual intentions of tuning the model and identifying ranges of parameter uncertainty. Our approach is less strongly constrained by the dimensionality of the parameter space than are competing methods. Nevertheless, scaling up remains costly. Much could be achieved by increasing the dimensionality of the search and adding complexity to the search algorithms. Such ensemble approaches scale naturally to very large platforms. Extensions of the approach are anticipated. For example, structurally different models can be tuned to comparable effectiveness. This can provide an objective test for which there is no realistic precedent with smaller computations. We find ourselves inventing new code to manage our ensembles. Component computations involve tens to hundreds of CPUs and tens to hundreds of hours. The results of these moderately large parallel jobs influence the scheduling of subsequent jobs, and complex algorithms may easily be contemplated for this. The operating system concept of a "thread" re-emerges at a very coarse level, where each thread manages atomic computations of thousands of CPU-hours. That is, rather than multiple threads operating on a processor, at this level multiple processors operate within a single thread. In collaboration with the Texas Advanced Computing Center, we are developing a software library at the system level, which should facilitate the development of computations involving complex strategies which invoke large numbers of moderately large multi-processor jobs. While this may have applications in other sciences, our key intent is to better characterize the coupled behavior of a very large set of climate model configurations.
Hilbert complexes of nonlinear elasticity
NASA Astrophysics Data System (ADS)
Angoshtari, Arzhang; Yavari, Arash
2016-12-01
We introduce some Hilbert complexes involving second-order tensors on flat compact manifolds with boundary that describe the kinematics and the kinetics of motion in nonlinear elasticity. We then use the general framework of Hilbert complexes to write Hodge-type and Helmholtz-type orthogonal decompositions for second-order tensors. As some applications of these decompositions in nonlinear elasticity, we study the strain compatibility equations of linear and nonlinear elasticity in the presence of Dirichlet boundary conditions and the existence of stress functions on non-contractible bodies. As an application of these Hilbert complexes in computational mechanics, we briefly discuss the derivation of a new class of mixed finite element methods for nonlinear elasticity.
Solving complex band structure problems with the FEAST eigenvalue algorithm
NASA Astrophysics Data System (ADS)
Laux, S. E.
2012-08-01
With straightforward extension, the FEAST eigenvalue algorithm [Polizzi, Phys. Rev. B 79, 115112 (2009)] is capable of solving the generalized eigenvalue problems representing traveling-wave problems—as exemplified by the complex band-structure problem—even though the matrices involved are complex, non-Hermitian, and singular, and hence outside the originally stated range of applicability of the algorithm. The obtained eigenvalues/eigenvectors, however, contain spurious solutions which must be detected and removed. The efficiency and parallel structure of the original algorithm are unaltered. The complex band structures of Si layers of varying thicknesses and InAs nanowires of varying radii are computed as test problems.
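To make the setting concrete, here is a minimal dense stand-in (Python/SciPy; a random pencil rather than an actual band-structure matrix) for what FEAST computes: the eigenpairs of a generalized, non-Hermitian problem that lie inside a chosen contour, plus a residual check of the kind used to weed out spurious solutions. FEAST itself reaches these through contour integration rather than a full dense solve.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n))       # generalized pencil A x = lambda B x, non-Hermitian

w, v = eig(A, B)                      # dense solve (the step FEAST avoids at scale)
center, radius = 0.0 + 0.0j, 1.0      # search contour: a circle in the complex plane
inside = np.abs(w - center) < radius  # keep only eigenvalues within the contour
resid = np.linalg.norm(A @ v - (B @ v) * w, axis=0)  # residuals flag spurious pairs
print(w[inside], resid[inside].max())
```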
Reverse logistics system planning for recycling computers hardware: A case study
NASA Astrophysics Data System (ADS)
Januri, Siti Sarah; Zulkipli, Faridah; Zahari, Siti Meriam; Shamsuri, Siti Hajar
2014-09-01
This paper describes modeling and simulation of reverse logistics networks for the collection of used computers at a company in Selangor. The study focuses on the design of a reverse logistics network for a used-computer recycling operation. Simulation modeling, presented in this work, allows the user to analyze the future performance of the network and to understand the complex relationships between the parties involved. The findings from the simulation suggest that the model calculates processing time and resource utilization in a predictable manner. In this study, the simulation model was developed using the Arena simulation package.
Computational Psychiatry and the Challenge of Schizophrenia.
Krystal, John H; Murray, John D; Chekroud, Adam M; Corlett, Philip R; Yang, Genevieve; Wang, Xiao-Jing; Anticevic, Alan
2017-05-01
Schizophrenia research is plagued by enormous challenges in integrating and analyzing large datasets and difficulties developing formal theories related to the etiology, pathophysiology, and treatment of this disorder. Computational psychiatry provides a path to enhance analyses of these large and complex datasets and to promote the development and refinement of formal models for features of this disorder. This presentation introduces the reader to the notion of computational psychiatry and describes discovery-oriented and theory-driven applications to schizophrenia involving machine learning, reinforcement learning theory, and biophysically-informed neural circuit models. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center 2017.
Program Helps To Determine Chemical-Reaction Mechanisms
NASA Technical Reports Server (NTRS)
Bittker, D. A.; Radhakrishnan, K.
1995-01-01
General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides for efficient and accurate chemical-kinetics computations and for sensitivity analysis for a variety of problems, including problems involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.
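As a toy illustration of the kind of problem such a kinetics code integrates (a hypothetical two-step mechanism, not the LSENS formalism or any of its models), the following uses SciPy's stiff-capable integrator:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-step kinetics A -> B -> C with rate constants k1, k2.
k1, k2 = 5.0, 1.0

def rhs(t, y):
    a, b, c = y
    return [-k1 * a,            # A consumed
            k1 * a - k2 * b,    # B produced from A, consumed into C
            k2 * b]             # C accumulates

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], method="LSODA", rtol=1e-8)
print(sol.y[:, -1])  # final concentrations; LSODA handles the stiffness typical of kinetics
```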
Strategic control in decision-making under uncertainty.
Venkatraman, Vinod; Huettel, Scott A
2012-04-01
Complex economic decisions - whether investing money for retirement or purchasing some new electronic gadget - often involve uncertainty about the likely consequences of our choices. Critical for resolving that uncertainty are strategic meta-decision processes, which allow people to simplify complex decision problems, evaluate outcomes against a variety of contexts, and flexibly match behavior to changes in the environment. In recent years, substantial research has implicated the dorsomedial prefrontal cortex (dmPFC) in the flexible control of behavior. However, nearly all such evidence comes from paradigms involving executive function or response selection, not complex decision-making. Here, we review evidence that demonstrates that the dmPFC contributes to strategic control in complex decision-making. This region contains a functional topography such that the posterior dmPFC supports response-related control, whereas the anterior dmPFC supports strategic control. Activation in the anterior dmPFC signals changes in how a decision problem is represented, which in turn can shape computational processes elsewhere in the brain. Based on these findings, we argue for both generalized contributions of the dmPFC to cognitive control, and specific computational roles for its subregions depending upon the task demands and context. We also contend that these strategic considerations are likely to be critical for decision-making in other domains, including interpersonal interactions in social settings. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W
2015-01-01
Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model, and we provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
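For readers unfamiliar with the formalism, a discrete Volterra expansion truncated at second order looks like the sketch below (Python; the exponentially decaying kernels and spike-train input are made-up placeholders, not the paper's identified kernels):

```python
import numpy as np

def volterra_response(u, k0, k1, k2):
    """Discrete Volterra series output up to 2nd order with finite memory M."""
    M = len(k1)
    y = np.zeros(len(u))
    for n in range(len(u)):
        # Past M input samples, zero-padded before the start of the record.
        past = np.array([u[n - m] if n - m >= 0 else 0.0 for m in range(M)])
        y[n] = k0 + k1 @ past + past @ k2 @ past  # 0th-, 1st-, 2nd-order terms
    return y

# Hypothetical kernels: exponentially decaying 1st order, small 2nd-order coupling.
M = 16
k1 = 0.8 ** np.arange(M)
k2 = 0.05 * np.outer(k1, k1)
u = (np.random.default_rng(1).random(100) < 0.1).astype(float)  # sparse spike-train input
print(volterra_response(u, 0.0, k1, k2)[:10])
```

The second-order kernel k2 is what lets the model capture interactions between pairs of past spikes, which a purely linear (first-order) filter cannot.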
Exponential convergence through linear finite element discretization of stratified subdomains
NASA Astrophysics Data System (ADS)
Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali
2016-10-01
Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.
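The Padé connection can be made tangible with a small numeric check (Python/SciPy; a generic [2/2] approximant of exp, unrelated to the paper's Dirichlet-to-Neumann maps): rational approximants of this kind converge exponentially fast at a point as the order grows, which is the property CFEM inherits.

```python
from math import exp, factorial
from scipy.interpolate import pade

# Taylor coefficients of exp(x), lowest order first.
an = [1.0 / factorial(k) for k in range(5)]
p, q = pade(an, 2)            # [2/2] Pade approximant: ratio of two quadratics
x = 1.0
print(p(x) / q(x), exp(x))    # 19/7 = 2.7143 vs e = 2.7183, from only 5 coefficients
```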
Massive parallelization of serial inference algorithms for a complex generalized linear model
Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David
2014-01-01
Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units (relatively inexpensive, highly parallel computing devices), can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics, and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
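To fix ideas, cyclic coordinate descent updates one coefficient at a time while holding the rest fixed. The minimal sketch below (Python; a plain lasso linear model, far simpler than the paper's Bayesian conditioned GLM) shows the core loop that such massive parallelization targets:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iters=100):
    """Cyclic coordinate descent for the lasso: min ||y - Xb||^2 / 2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ b                        # running residual, updated one coordinate at a time
    for _ in range(n_iters):
        for j in range(p):
            r += X[:, j] * b[j]          # remove coordinate j's current contribution
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]  # soft-threshold
            r -= X[:, j] * b[j]          # add updated contribution back
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta = np.zeros(10); beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + 0.1 * rng.standard_normal(200)
print(np.round(lasso_cd(X, y, lam=5.0), 2))  # recovers the sparse coefficients
```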
Identification and addressing reduction-related misconceptions
NASA Astrophysics Data System (ADS)
Gal-Ezer, Judith; Trakhtenbrot, Mark
2016-07-01
Reduction is one of the key techniques used for problem-solving in computer science. In particular, in the theory of computation and complexity (TCC), mapping and polynomial reductions are used for the analysis of decidability and computational complexity of problems, including the core concept of NP-completeness. Reduction is a highly abstract technique that involves revealing close, non-trivial connections between problems that often seem to have nothing in common. As a result, proper understanding and application of reduction is a serious challenge for students and a source of numerous misconceptions. The main contribution of this paper is the detection of such misconceptions, analysis of their roots, and a proposed way to address them in an undergraduate TCC course. Our observations suggest that the main source of the misconceptions is the false intuitive rule "the bigger a set/problem is, the harder it is to solve". Accordingly, we developed a series of exercises for proactive prevention of these misconceptions.
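A concrete instance of the mapping reductions discussed here is the classic reduction from Independent Set to Vertex Cover, sketched in Python below (with a brute-force decider standing in for an oracle). The point survives in code form: the reduction merely rewrites the instance (G, k) into (G, |V| - k); it never solves either problem itself.

```python
from itertools import combinations

def has_vertex_cover(edges, nodes, k):
    """Brute-force decision procedure for Vertex Cover (the 'target' problem)."""
    return any(all(u in c or v in c for u, v in edges)
               for c in map(set, combinations(nodes, k)))

def has_independent_set(edges, nodes, k):
    """Mapping reduction: G has an independent set of size k
    iff G has a vertex cover of size |V| - k; only the instance
    is transformed, then the target decider is invoked."""
    return has_vertex_cover(edges, nodes, len(nodes) - k)

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]             # a path on 4 vertices
print(has_independent_set(edges, nodes, 2))  # True, e.g. {0, 2}
```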
Burger, Gerhard A.; Danen, Erik H. J.; Beltman, Joost B.
2017-01-01
Epithelial–mesenchymal transition (EMT), the process by which epithelial cells can convert into motile mesenchymal cells, plays an important role in development and wound healing but is also involved in cancer progression. It is increasingly recognized that EMT is a dynamic process involving multiple intermediate or “hybrid” phenotypes rather than an “all-or-none” process. However, the role of EMT in various cancer hallmarks, including metastasis, is debated. Given the complexity of EMT regulation, computational modeling has proven to be an invaluable tool for cancer research, i.e., to resolve apparent conflicts in experimental data and to guide experiments by generating testable hypotheses. In this review, we provide an overview of computational modeling efforts that have been applied to regulation of EMT in the context of cancer progression and its associated tumor characteristics. Moreover, we identify possibilities to bridge different modeling approaches and point out outstanding questions in which computational modeling can contribute to advance our understanding of pathological EMT. PMID:28824874
ERIC Educational Resources Information Center
Grimm, Kevin J.
2007-01-01
Recent advances in methods and computer software for longitudinal data analysis have pushed researchers to more critically examine developmental theories. In turn, researchers have also begun to push longitudinal methods by asking more complex developmental questions. One such question involves the relationships between two developmental…
Subscriber Response System. Progress Report.
ERIC Educational Resources Information Center
Callais, Richard T.
Results of preliminary tests made prior and subsequent to the installation of a two-way interactive communication system which involves a computer complex termed the Local Processing Center and subscriber terminals located in the home or business location are reported. This first phase of the overall test plan includes tests made at Theta-Com…
Spada, Lorenzo; Tasinato, Nicola; Vazart, Fanny; Barone, Vincenzo; Caminati, Walther; Puzzarini, Cristina
2017-04-06
The 1:1 complex of ammonia with pyridine is characterized by using state-of-the-art quantum-chemical computations combined with pulsed-jet Fourier-transform microwave spectroscopy. The computed potential energy landscape indicates the formation of a stable σ-type complex, which is confirmed experimentally: analysis of the rotational spectrum shows the presence of only one 1:1 pyridine-ammonia adduct. Each rotational transition is split into several components owing to the internal rotation of NH₃ around its C₃ axis and to the hyperfine structure of both ¹⁴N quadrupolar nuclei, thus providing unequivocal proof that the two molecules form a σ-type complex involving both a N-H···N and a C-H···N hydrogen bond. The dissociation energy (BSSE- and ZPE-corrected) is estimated to be 11.5 kJ mol⁻¹. This work represents the first application of an accurate yet efficient computational scheme, designed for the investigation of small biomolecules, to a molecular cluster. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Spada, Lorenzo; Tasinato, Nicola; Vazart, Fanny; Barone, Vincenzo; Caminati, Walther; Puzzarini, Cristina
2017-06-01
The 1:1 complex of ammonia with pyridine has been characterized by using state-of-the-art quantum-chemical computations combined with pulsed-jet Fourier-transform microwave spectroscopy. The computed potential energy landscape pointed out the formation of a stable σ-type complex, which has been confirmed experimentally: the analysis of the rotational spectrum showed the presence of only one 1:1 pyridine-ammonia adduct. Each rotational transition is split into several components due to the internal rotation of NH₃ around its C₃ axis and to the hyperfine structure of both ¹⁴N quadrupolar nuclei, thus providing unequivocal proof that the two molecules form a σ-type complex involving both a N-H···N and a C-H···N hydrogen bond. The dissociation energy (BSSE and ZPE corrected) has been estimated to be 11.5 kJ·mol⁻¹. This work represents the first application of an accurate, yet efficient computational scheme, designed for the investigation of small biomolecules, to a molecular cluster.
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.
Scherzinger, William M.
2016-05-01
The numerical integration of constitutive models in computational solid mechanics codes allows for the solution of boundary value problems involving complex material behavior. Metal plasticity models, in particular, have been instrumental in the development of these codes. Most plasticity models implemented in computational codes use an isotropic von Mises yield surface. The von Mises, or J2, yield surface has a simple predictor-corrector algorithm - the radial return algorithm - to integrate the model.
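For reference, with linear isotropic hardening the radial return algorithm reduces to a closed-form correction of the deviatoric trial stress back onto the yield surface. A minimal sketch (Python; made-up material constants, not from this report):

```python
import numpy as np

def radial_return(s_trial, sigma_y, G, H, eps_p_bar):
    """Radial return for J2 plasticity with linear isotropic hardening.
    s_trial: deviatoric trial stress (3x3), G: shear modulus,
    H: hardening modulus, eps_p_bar: accumulated plastic strain."""
    s_norm = np.linalg.norm(s_trial)
    f_trial = s_norm - np.sqrt(2.0 / 3.0) * (sigma_y + H * eps_p_bar)
    if f_trial <= 0.0:
        return s_trial, eps_p_bar                  # elastic: trial state is admissible
    dgamma = f_trial / (2.0 * G + 2.0 * H / 3.0)   # closed-form consistency condition
    n = s_trial / s_norm                           # return direction: radially inward
    s_new = s_trial - 2.0 * G * dgamma * n
    return s_new, eps_p_bar + np.sqrt(2.0 / 3.0) * dgamma

s_trial = np.diag([300.0, -150.0, -150.0])  # deviatoric (trace-free) trial stress, MPa
s, ep = radial_return(s_trial, sigma_y=250.0, G=80e3, H=1e3, eps_p_bar=0.0)
print(np.linalg.norm(s), ep)  # stress returned onto the (updated) yield surface
```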
Cardiovascular system simulation in biomedical engineering education.
NASA Technical Reports Server (NTRS)
Rideout, V. C.
1972-01-01
Use of complex cardiovascular system models, in conjunction with a large hybrid computer, in biomedical engineering courses. A cardiovascular blood pressure-flow model, driving a compartment model for the study of dye transport, was set up on the computer for use as a laboratory exercise by students who did not have the computer experience or skill to be able to easily set up such a simulation involving some 27 differential equations running at 'real time' rate. The students were given detailed instructions regarding the model, and were then able to study effects such as those due to septal and valve defects upon the pressure, flow, and dye dilution curves. The success of this experiment in the use of involved models in engineering courses was such that it seems that this type of laboratory exercise might be considered for use in physiology courses as an adjunct to animal experiments.
Autonomous perception and decision making in cyber-physical systems
NASA Astrophysics Data System (ADS)
Sarkar, Soumik
2011-07-01
The cyber-physical system (CPS) is a relatively new interdisciplinary technology area that includes the general class of embedded and hybrid systems. CPSs require integration of computation and physical processes that involves the aspects of physical quantities such as time, energy and space during information processing and control. The physical space is the source of information and the cyber space makes use of the generated information to make decisions. This dissertation proposes an overall architecture of autonomous perception-based decision and control of complex cyber-physical systems. Perception involves the recently developed framework of Symbolic Dynamic Filtering for abstraction of the physical world in the cyber space. For example, under this framework, sensor observations from a physical entity are discretized temporally and spatially to generate blocks of symbols, also called words, that form a language. A grammar of a language is the set of rules that determine the relationships among words to build sentences. Subsequently, a physical system is conjectured to be a linguistic source that is capable of generating a specific language. The proposed technology is validated on various (experimental and simulated) case studies that include health monitoring of aircraft gas turbine engines, detection and estimation of fatigue damage in polycrystalline alloys, and parameter identification. Control of complex cyber-physical systems involves distributed sensing, computation, and control as well as complexity analysis. A novel statistical mechanics-inspired complexity analysis approach is proposed in this dissertation. In such a scenario of networked physical systems, the distribution of physical entities determines the underlying network topology and the interaction among the entities forms the abstract cyber space. It is envisioned that the general contributions made in this dissertation will be useful for potential application areas such as smart power grids and buildings, distributed energy systems, advanced health care procedures, and future ground and air transportation systems.
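Symbolic Dynamic Filtering can be pictured in a few lines: partition the signal range into a small alphabet, replace the time series by its symbol sequence, and summarize the dynamics by the symbol transition matrix. The sketch below (Python; a made-up noisy sinusoid standing in for sensor data) shows that pipeline in its simplest form:

```python
import numpy as np

def symbolize(x, n_symbols):
    """Partition a time series into symbols by equal-frequency binning."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)

def transition_matrix(symbols, n_symbols):
    """Maximum-likelihood state-transition matrix of the symbol sequence."""
    P = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        P[a, b] += 1.0
    return P / np.maximum(P.sum(axis=1, keepdims=True), 1.0)

# Hypothetical sensor signal; the row-stochastic matrix P serves as the
# compressed 'feature' of the physical process in the cyber space.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
P = transition_matrix(symbolize(x, 4), 4)
print(np.round(P, 2))
```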
Rodríguez, Guillermo López; Weber, Joshua; Sandhu, Jaswinder Singh; Anastasio, Mark A.
2011-01-01
We propose and experimentally demonstrate a new method for complex-valued wavefield retrieval in off-axis acoustic holography. The method involves use of an intensity-sensitive acousto-optic (AO) sensor, optimized for use at 3.3 MHz, to record the acoustic hologram and a computational method for reconstruction of the object wavefield. The proposed method may circumvent limitations of conventional implementations of acoustic holography and may facilitate the development of acoustic-holography-based biomedical imaging methods. PMID:21669451
A surprising role for conformational entropy in protein function
Wand, A. Joshua; Moorman, Veronica R.; Harpole, Kyle W.
2014-01-01
Formation of high-affinity complexes is critical for the majority of enzymatic reactions involving proteins. The creation of the family of Michaelis and other intermediate complexes during catalysis clearly involves a complicated manifold of interactions that are diverse and complex. Indeed, computing the energetics of interactions between proteins and small molecule ligands using molecular structure alone remains a grand challenge. One of the most difficult contributions to the free energy of protein-ligand complexes to experimentally access is that due to changes in protein conformational entropy. Fortunately, recent advances in solution nuclear magnetic resonance (NMR) relaxation methods have enabled the use of measures-of-motion between conformational states of a protein as a proxy for conformational entropy. This review briefly summarizes the experimental approaches currently employed to characterize fast internal motion in proteins, how this information is used to gain insight into conformational entropy, what has been learned and what the future may hold for this emerging view of protein function. PMID:23478875
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolverton, Christopher; Ozolins, Vidvuds; Kung, Harold H.
The objective of the proposed program is to discover novel mixed hydrides for hydrogen storage that enable the DOE 2010 system-level goals. Our goal is to find a material that desorbs 8.5 wt.% H₂ or more at temperatures below 85°C. The research program will combine first-principles calculations of reaction thermodynamics and kinetics with material and catalyst synthesis, testing, and characterization. We will combine materials from distinct categories (e.g., chemical and complex hydrides) to form novel multicomponent reactions. Systems to be studied include mixtures of complex hydrides and chemical hydrides [e.g., LiNH₂ + NH₃BH₃] and nitrogen-hydrogen-based borohydrides [e.g., Al(BH₄)₃(NH₃)₃]. The 2010 and 2015 FreedomCAR/DOE targets for hydrogen storage systems are very challenging, and cannot be met with existing materials. The vast majority of the work to date has delineated materials into various classes, e.g., complex and metal hydrides, chemical hydrides, and sorbents. However, very recent studies indicate that mixtures of storage materials, particularly mixtures between various classes, hold promise to achieve technological attributes that materials within an individual class cannot reach. Our project involves a systematic, rational approach to designing novel multicomponent mixtures of materials with fast hydrogenation/dehydrogenation kinetics and favorable thermodynamics using a combination of state-of-the-art scientific computing and experimentation. We will use the accurate predictive power of first-principles modeling to understand the thermodynamic and microscopic kinetic processes involved in hydrogen release and uptake and to design new material/catalyst systems with improved properties. Detailed characterization and atomic-scale catalysis experiments will elucidate the effect of dopants and nanoscale catalysts in achieving fast kinetics and reversibility. State-of-the-art storage experiments will give key storage attributes of the investigated reactions, validate computational predictions, and help guide and improve computational methods. In sum, our approach involves a powerful blend of: 1) H₂ storage measurements and characterization, 2) state-of-the-art computational modeling, 3) detailed catalysis experiments, and 4) an in-depth automotive perspective.
Biomechanics of metastatic disease in the vertebral column.
Whyne, Cari M
2014-06-01
Metastatic disease in the vertebral column compromises the structural stability of the spine leading to increased risk of fracture. The complex patterns of osteolytic and osteoblastic disease within the bony spine have motivated a multimodal approach to better characterize the biomechanics of tumor-involved bone. This review presents our current understanding of the biomechanical behavior of metastatically involved vertebrae, and experimental and computational image-based approaches that have been employed to quantify structural integrity in preclinical models with translation to clinical data sets.
Modeling of the Global Water Cycle - Analytical Models
Yongqiang Liu; Roni Avissar
2005-01-01
Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...
ERIC Educational Resources Information Center
Agnello, Armelinda; Carre, Cyril; Billen, Roland; Leyh, Bernard; De Pauw, Edwin; Damblon, Christian
2018-01-01
The analysis of spectroscopic data to solve chemical structures requires practical skills and drills. In this context, we have developed ULg Spectra, a computer-based tool designed to improve the ability of learners to perform complex reasoning. The identification of organic chemical compounds involves gathering and interpreting complementary…
ERIC Educational Resources Information Center
Zambon, Franco
This study sought to determine a useful frequency for refreshing students' memories of complex procedures that involved a formal computer language. Students were required to execute the Microsoft Disc Operating System (MS-DOS) commands for "copy,""backup," and "restore." A total of 126 college students enrolled in six…
A Simplified, General Approach to Simulating from Multivariate Copula Functions
Barry Goodwin
2012-01-01
Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses probability…
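Although the truncated abstract does not reveal the authors' exact construction, the standard route it contrasts with is easy to sketch: for a Gaussian copula, draw correlated normals, push them through the normal CDF to get dependent uniforms, then invert each target marginal. A minimal Python version with hypothetical marginals:

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(R, marginals, n, seed=0):
    """Draw from a Gaussian copula with correlation R and arbitrary marginals:
    1) correlated normals, 2) map to uniforms via the normal CDF,
    3) invert each target marginal CDF."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(R)), R, size=n)
    u = stats.norm.cdf(z)  # uniform [0,1] margins with copula dependence
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

R = np.array([[1.0, 0.7], [0.7, 1.0]])
# Hypothetical marginals, e.g. a gamma-distributed yield and a lognormal price:
sample = gaussian_copula_sample(R, [stats.gamma(a=2.0), stats.lognorm(s=0.5)], n=10000)
print(np.corrcoef(sample.T)[0, 1])  # dependence induced by the copula, not the marginals
```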
ERIC Educational Resources Information Center
Ledbetter, Alexander K.
2017-01-01
People with acquired brain injury (ABI) present with impairments in working memory and executive functions, and these cognitive deficits contribute to difficulty self-regulating the production of expository writing. Cognitive processes involved in carrying out complex writing tasks include planning, generating text, and reviewing or revising text…
Clifford support vector machines for classification, regression, and recurrence.
Bayro-Corrochano, Eduardo Jose; Arana-Daniel, Nancy
2010-11-01
This paper introduces the Clifford support vector machines (CSVM) as a generalization of the real and complex-valued support vector machines using the Clifford geometric algebra. In this framework, we handle the design of kernels involving the Clifford or geometric product. In this approach, one redefines the optimization variables as multivectors. This allows us to have a multivector as output. Therefore, we can represent multiple classes according to the dimension of the geometric algebra in which we work. We show that one can apply CSVM for classification and regression and also to build a recurrent CSVM. The CSVM is an attractive approach for the multiple input multiple output processing of high-dimensional geometric entities. We carried out comparisons between CSVM and the current approaches to solve multiclass classification and regression. We also study the performance of the recurrent CSVM with experiments involving time series. The authors believe that this paper can be of great use for researchers and practitioners interested in multiclass hypercomplex computing, particularly for applications in complex and quaternion signal and image processing, satellite control, neurocomputation, pattern recognition, computer vision, augmented virtual reality, robotics, and humanoids.
Scarfe, William C; Azevedo, Bruno; Pinheiro, Lucas R; Priaminiarti, Menik; Sales, Marcelo A O
2017-06-01
Contemporary periodontal therapy has evolved to become more interdisciplinary and increasingly involves more complex treatments, including bone and soft-tissue regenerative procedures. Therapeutic options require an imaging modality or combination of techniques that are capable of providing a diagnostic osseous baseline and facilitating quantification of smaller increments of bony change, both loss and additive, which are comparable over time. Intra-oral and panoramic radiography are the modalities most commonly used to identify the location, quantify the amount and the pattern of alveolar bone loss and determine response to therapy. Cone-beam computed tomography imaging offers specific advantages for periodontal diagnosis in that three-dimensional images of dental and alveolar bone structures can be rendered with accuracy. Cone-beam computed tomography has been shown to be clinically efficacious in demonstrating localized defects, such as furcation involvement and intrabony vertical and buccal/lingual defects, and in assessing the effects of regenerative therapy. In these situations, limited-field-of-view, high-resolution protocols are indicated. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Complex-energy approach to sum rules within nuclear density functional theory
Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...
2015-04-27
The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.
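For context, the energy-weighted moments in question are m_k = ∫ E^k S(E) dE of the strength function S(E); given a discretized strength, they are one-liners to evaluate (Python; the Lorentzian below is an arbitrary placeholder, not an actual nuclear response):

```python
import numpy as np

E = np.linspace(0.0, 60.0, 2000)                  # excitation energy grid (MeV)
S = (2.0 / np.pi) / ((E - 15.0) ** 2 + 2.0 ** 2)  # made-up Lorentzian strength function

dE = E[1] - E[0]
m0 = np.sum(S) * dE          # non-energy-weighted sum rule, m0
m1 = np.sum(E * S) * dE      # energy-weighted sum rule (EWSR), m1
print(m0, m1, m1 / m0)       # m1/m0 gives the mean excitation energy
```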
Water-assisted dehalogenation of thionyl chloride in the presence of water molecules.
Yeung, Chi Shun; Ng, Ping Leung; Guan, Xiangguo; Phillips, David Lee
2010-04-01
A second-order Møller-Plesset perturbation theory (MP2) and density functional theory (DFT) investigation of the dehalogenation reactions of thionyl chloride is reported, in which water molecules (up to seven) were explicitly involved in the reaction complex. The dehalogenation processes of thionyl chloride were found to be dramatically catalyzed by water molecules. The reaction rate became significantly faster as more water molecules became involved in the reaction complex. The dehalogenation processes can be reasonably simulated by the gas-phase water cluster models, which reveals that water molecules can help to solvate the thionyl chloride molecules and activate the release of the Cl(-) leaving group. The computed activation energies were used to compare the calculations to available experimental data.
Artificial intelligence support for scientific model-building
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1992-01-01
Scientific model-building can be a time-intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientific development team to understand. We believe that artificial intelligence techniques can facilitate both the model-building and model-sharing process. In this paper, we overview our effort to build a scientific modeling software tool that aids the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high-level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities.
Statistical mechanics of complex neural systems and high dimensional data
NASA Astrophysics Data System (ADS)
Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya
2013-03-01
Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.
Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers
NASA Technical Reports Server (NTRS)
Guruswamy, Guru; VanDalsem, William (Technical Monitor)
1994-01-01
Aeroelasticity, which involves strong coupling of fluids, structures, and controls, is an important element in designing an aircraft. Computational aeroelasticity using low-fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low-fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and the Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. The HSCT can experience vortex-induced aeroelastic oscillations, whereas the AST can experience transonic-buffet-associated structural oscillations. Both aircraft may experience a dip in the flutter speed in the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high-fidelity equations such as the Navier-Stokes for fluids and the finite elements for structures are needed. Computations using these high-fidelity equations require large computational resources, both in memory and speed. Conventional supercomputers have reached their limitations in both memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers, as well as the special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on the iPSC/860 and IBM SP2 computers by using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high-resolution finite-element structural equations.
Computations of Drop Collision and Coalescence
NASA Technical Reports Server (NTRS)
Tryggvason, Gretar; Juric, Damir; Nas, Selman; Mortazavi, Saeed
1996-01-01
Computations of drop collisions, coalescence, and other problems involving drops are presented. The computations are made possible by a finite difference/front tracking technique that allows direct solutions of the Navier-Stokes equations for a multi-fluid system with complex, unsteady internal boundaries. This method has been used to examine the various collision modes for binary collisions of drops of equal size, mixing of two drops of unequal size, behavior of a suspension of drops in linear and parabolic shear flows, and the thermal migration of several drops. The key results from these simulations are reviewed. Extensions of the method to phase change problems and preliminary results for boiling are also shown.
NASA Technical Reports Server (NTRS)
Clancey, William J.
2003-01-01
A human-centered approach to computer systems design involves reframing analysis in terms of people interacting with each other, not only human-machine interaction. The primary concern is not how people can interact with computers, but how shall we design computers to help people work together? An analysis of astronaut interactions with CapCom on Earth during one traverse of Apollo 17 shows what kind of information was conveyed and what might be automated today. A variety of agent and robotic technologies are proposed that deal with recurrent problems in communication and coordination during the analyzed traverse.
Bortfeldt, Ralf H; Schuster, Stefan; Koch, Ina
2011-01-01
Spliceosomes are macro-complexes involving hundreds of proteins with many functional interactions. Spliceosome assembly belongs to the key processes that enable splicing of mRNA and modulate alternative splicing. A detailed list of factors involved in spliceosomal reactions has been assembled over the past decade, but their functional interplay is often unknown, and most present biological models cover only parts of the complete assembly process. It is a challenging task to build a computational model that integrates dispersed knowledge and combines the multitude of reaction schemes proposed earlier. Because kinetic parameters are not available for most reactions involved in spliceosome assembly, we propose a discrete model using Petri nets, which enables us to gain insights into the system's behavior via computation of structural and dynamic properties. In this paper, we compile and examine reactions from experimental reports that contribute to a functional spliceosome. All these reactions form a network, which describes the inventory and conditions necessary to perform the splicing process. The analysis is mainly based on system invariants. Transition invariants (T-invariants) can be interpreted as signaling routes through the network. Due to the huge number of T-invariants that arise with increasing network size and complexity, maximal common transition sets (MCTS) and T-clusters were used for further analysis. Additionally, we introduce a false-color map representation, which allows a quick survey of network modules and the visual detection of single reactions or reaction sequences that participate in more than one signaling route. We designed a structured model of spliceosome assembly that combines the demands on a platform that i) can display involved factors and concurrent processes, ii) offers the possibility to run computational methods for knowledge extraction, and iii) is successively extendable as new insights into spliceosome function are reported. The network consists of 161 transitions (reactions) and 140 places (reactants). All reactions are part of at least one of the 71 T-invariants. These T-invariants define pathways that are in good agreement with current knowledge and known hypotheses on reaction sequences during spliceosome assembly, hence contributing to a functional spliceosome. We demonstrate that present knowledge, in particular of the initial part of the assembly process, describes parallelism and interaction of signaling routes, which indicate functional redundancy and reflect the dependency of spliceosome assembly initiation on different cellular conditions. The complexity of the network is further increased by two switches, which introduce alternative routes during A-complex formation in early spliceosome assembly and upon transition from the B-complex to the C-complex. By compiling known reactions into a complete network, the combinatorial nature of invariant computation leads to pathways that have previously not been described as connected routes, although their constituents were known. T-clusters divide the network into modules, which we interpret as building blocks in spliceosome maturation. We conclude that Petri net representations of large biological networks and system invariants are well suited as a means for validating the integration of experimental knowledge into a consistent model. Based on this network model, the design of further experiments is facilitated.
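To make the invariant computation concrete, the sketch below checks candidate T-invariants against the incidence matrix of a three-transition toy net. The net is an invented stand-in, not the 161-transition spliceosome model of the paper.

```python
import numpy as np

# A transition invariant (T-invariant) of a Petri net is a nonnegative
# integer vector x over transitions with C @ x == 0, where C is the
# place-by-transition incidence matrix. The tiny cyclic net below is
# illustrative only.

C = np.array([
    [-1,  0,  1],   # place p1: consumed by t1, produced by t3
    [ 1, -1,  0],   # place p2: produced by t1, consumed by t2
    [ 0,  1, -1],   # place p3: produced by t2, consumed by t3
])

def is_t_invariant(C, x):
    x = np.asarray(x)
    return bool(np.all(x >= 0) and np.all(C @ x == 0))

print(is_t_invariant(C, [1, 1, 1]))   # True: firing t1, t2, t3 once reproduces the marking
print(is_t_invariant(C, [1, 0, 1]))   # False
```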
The Movable Type Method Applied to Protein-Ligand Binding.
Zheng, Zheng; Ucisik, Melek N; Merz, Kenneth M
2013-12-10
Accurately computing the free energy for biological processes like protein folding or protein-ligand association remains a challenging problem. Both describing the complex intermolecular forces involved and sampling the requisite configuration space make understanding these processes innately difficult. Herein, we address the sampling problem using a novel methodology we term "movable type". Conceptually, it can be understood by analogy with the evolution of printing, hence the name. For example, a common approach to the study of protein-ligand complexation involves taking a database of intact drug-like molecules and exhaustively docking them into a binding pocket. This is reminiscent of early woodblock printing, where each page had to be laboriously created prior to printing a book. However, printing evolved to an approach where a database of symbols (letters, numerals, etc.) was created and then assembled using a movable type system, which allowed for the creation of all possible combinations of symbols on a given page, thereby revolutionizing the dissemination of knowledge. Our movable type (MT) method involves the identification of all atom pairs seen in protein-ligand complexes and the creation of two databases: one with their associated pairwise distance-dependent energies and another with the probabilities of how these pairs can combine in terms of bonds, angles, dihedrals, and non-bonded interactions. Combining these two databases, coupled with the principles of statistical mechanics, allows us to accurately estimate binding free energies as well as the pose of a ligand in a receptor. This method, by its mathematical construction, samples all of the configuration space of a selected region (here, the protein active site) in one shot, without resorting to brute-force sampling schemes involving Monte Carlo, genetic algorithms, or molecular dynamics simulations, making the methodology extremely efficient. Importantly, this method explores the free energy surface, eliminating the need to estimate the enthalpy and entropy components individually. Finally, low-free-energy structures can be obtained via a free energy minimization procedure, yielding all low-free-energy poses on a given free energy surface. Besides revolutionizing the protein-ligand docking and scoring problem, this approach can be utilized in a wide range of applications in computational biology that involve the computation of free energies for systems with extensive phase spaces, including protein folding, protein-protein docking, and protein design.
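A heavily simplified sketch of the statistical-mechanics step: Boltzmann-average a table of pairwise distance-dependent energies against a pair-probability table, then take a free energy from the resulting partition sum. The single pair, the energy curve, and the probability curve below are invented toys, nothing like the MT method's full databases.

```python
import numpy as np

# Toy illustration: combine a pairwise distance-dependent energy table with
# a pair-occurrence probability table into a partition-function estimate,
# then take a free energy. All numbers are illustrative assumptions.

kT = 0.593  # kcal/mol at ~298 K

r = np.linspace(2.5, 6.0, 50)                    # sampled pair distances (Angstrom)
energy = 4.0 * ((3.5 / r)**12 - (3.5 / r)**6)    # toy pairwise energy table
prob = np.exp(-0.5 * ((r - 4.0) / 0.8)**2)       # toy pair-distance probability
prob /= prob.sum()

# Boltzmann-average over all tabulated pair configurations "in one shot".
Z = np.sum(prob * np.exp(-energy / kT))
F = -kT * np.log(Z)
print(f"estimated pair free energy: {F:.3f} kcal/mol")
```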
Multiplexed Predictive Control of a Large Commercial Turbofan Engine
NASA Technical Reports Server (NTRS)
Richter, Hanz; Singaraju, Anil; Litt, Jonathan S.
2008-01-01
Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.
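The multiplexed idea can be sketched as cyclic, one-actuator-at-a-time minimization of the quadratic cost that a conventional MPC step would solve all at once. The Hessian, linear term, and limits below are toy values, and the clipped coordinate update is a simplified stand-in for the paper's QP machinery.

```python
import numpy as np

# Sketch of multiplexing: instead of solving one large QP over all actuator
# moves simultaneously, update a single actuator per control sample, cycling
# through them. Cost: 0.5 * u'Hu + g'u subject to box limits (toy values).

H = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])        # positive-definite cost Hessian
g = np.array([-1.0, 0.5, -0.3])        # linear cost term
lo, hi = -1.0, 1.0                     # actuator limits

u = np.zeros(3)
for k in range(12):                    # 12 control samples
    i = k % 3                          # multiplex: one actuator per sample
    # Minimize the cost over u[i] only, with the other actuators held fixed.
    u_i = -(g[i] + H[i] @ u - H[i, i] * u[i]) / H[i, i]
    u[i] = np.clip(u_i, lo, hi)        # respect the constraint

print(u)  # approaches the constrained optimum as the cycles proceed
```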
FPGA-based coprocessor for matrix algorithms implementation
NASA Astrophysics Data System (ADS)
Amira, Abbes; Bensaali, Faycal
2003-03-01
Matrix algorithms are important in many types of applications, including image and signal processing. These areas require enormous computing power. A close examination of the algorithms used in these and related applications reveals that many of the fundamental actions involve matrix operations such as matrix multiplication, which has complexity O(N^3) on a sequential computer and O(N^3/p) on a parallel system with p processors. This paper presents an investigation into the design and implementation of different matrix algorithms, such as matrix operations, matrix transforms and matrix decompositions, using an FPGA-based environment. Solutions for the problem of processing large matrices have been proposed. The proposed system architectures are scalable, modular, and require less area and time complexity with reduced latency when compared with existing structures.
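The complexity claim is easy to see in code: each of p workers owns N/p rows of the product and performs O(N^3/p) multiply-adds. The sketch below runs the partition sequentially for clarity; on an FPGA or multiprocessor the blocks would execute concurrently.

```python
import numpy as np

# The O(N^3) work in C = A @ B splits naturally across p processors by
# partitioning the rows of A; each worker then does O(N^3 / p) work.

N, p = 8, 4
A = np.random.rand(N, N)
B = np.random.rand(N, N)
C = np.zeros((N, N))

for worker in range(p):
    rows = range(worker * N // p, (worker + 1) * N // p)
    for i in rows:                        # each worker owns N/p rows
        for j in range(N):
            C[i, j] = A[i, :] @ B[:, j]   # N multiply-adds per entry

assert np.allclose(C, A @ B)
```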
Modeling of an intelligent pressure sensor using functional link artificial neural networks.
Patra, J C; van den Bos, A
2000-01-01
A capacitive pressure sensor (CPS) is modeled for accurate readout of applied pressure using a novel artificial neural network (ANN). The proposed functional link ANN (FLANN) is a computationally efficient nonlinear network capable of complex nonlinear mapping between its input and output pattern space. The nonlinearity is introduced into the FLANN by passing the input pattern through a functional expansion unit. Three different polynomial expansions, namely Chebyshev, Legendre and power series, have been employed in the FLANN. The FLANN offers a computational advantage over a multilayer perceptron (MLP) for similar performance in modeling of the CPS. The prime aim of the present paper is to develop an intelligent model of the CPS involving less computational complexity, so that its implementation can be economical and robust. It is shown that, over a wide temperature variation ranging from -50 to 150 degrees C, the maximum error of estimation of pressure remains within +/- 3%. With the help of computer simulation, the performance of the three types of FLANN models has been compared to that of an MLP-based model.
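A minimal sketch of the functional-link construction: expand the input through Chebyshev polynomials, then train a single linear layer with LMS. The target function, the expansion order, and the learning rate are illustrative assumptions; the paper's CPS model additionally accounts for temperature effects.

```python
import numpy as np

# Functional-link ANN sketch: a functional expansion unit (here Chebyshev
# polynomials) followed by one linear layer trained with LMS, so there is
# no hidden layer as in an MLP.

def chebyshev_expand(x, order=4):
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])   # recurrence T_n = 2x T_{n-1} - T_{n-2}
    return np.stack(T, axis=-1)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = np.sin(np.pi * x)                     # stand-in nonlinear sensor response

Phi = chebyshev_expand(x)
w = np.zeros(Phi.shape[1])
for epoch in range(50):                   # LMS training of the linear layer
    for phi, target in zip(Phi, y):
        err = target - w @ phi
        w += 0.05 * err * phi

print("max abs error:", np.max(np.abs(Phi @ w - y)))
```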
NASA Astrophysics Data System (ADS)
He, Xingyu; Tong, Ningning; Hu, Xiaowei
2018-01-01
Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block-sparse structure of the target image, sparse solutions for multiple measurement vectors (MMV) can be applied in ISAR imaging, and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration, and its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.
Comparing DNA damage-processing pathways by computer analysis of chromosome painting data.
Levy, Dan; Vazquez, Mariel; Cornforth, Michael; Loucas, Bradford; Sachs, Rainer K; Arsuaga, Javier
2004-01-01
Chromosome aberrations are large-scale illegitimate rearrangements of the genome. They are indicative of DNA damage and informative about damage-processing pathways. Despite extensive investigations over many years, the mechanisms underlying aberration formation remain controversial. New experimental assays such as multiplex fluorescent in situ hybridization (mFISH) allow combinatorial "painting" of chromosomes and are promising for elucidating aberration formation mechanisms. Recently observed mFISH aberration patterns are so complex that computer and graph-theoretical methods are needed for their full analysis. An important part of the analysis is decomposing a chromosome rearrangement process into "cycles." A cycle of order n, characterized formally by the cyclic graph with 2n vertices, indicates that n chromatin breaks take part in a single irreducible reaction. We here describe algorithms for computing cycle structures from experimentally observed or computer-simulated mFISH aberration patterns. We show that analyzing cycles quantitatively can distinguish between different aberration formation mechanisms. In particular, we show that homology-based mechanisms do not generate the large number of complex aberrations, involving higher-order cycles, observed in irradiated human lymphocytes.
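The cycle decomposition can be pictured with two matchings on break ends: one recording the original adjacencies and one recording the illegitimate rejoinings. The cycles are then the connected components of their union. The four-break example below is invented for illustration and is not taken from the paper's data.

```python
import networkx as nx

# Sketch of the cycle-structure computation: each chromatin break yields two
# free ends; one perfect matching records the pre-break adjacencies and
# another the rejoinings. Cycles of the rearrangement are the connected
# components of the union of the two matchings (a cycle of order n has 2n
# vertices).

ends = range(8)                                  # 4 breaks -> 8 free ends
original = [(0, 1), (2, 3), (4, 5), (6, 7)]      # ends paired as broken
rejoined = [(1, 2), (3, 0), (5, 6), (7, 4)]      # ends paired as repaired

G = nx.Graph()
G.add_nodes_from(ends)
G.add_edges_from(original)
G.add_edges_from(rejoined)

for comp in nx.connected_components(G):
    n = len(comp) // 2
    print(f"cycle of order {n}: ends {sorted(comp)}")
```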
NASA Technical Reports Server (NTRS)
Lockard, David P.
2011-01-01
Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, how best to simulate the effects of the experimental transition strip, and the associated high-Reynolds-number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.
NASA Technical Reports Server (NTRS)
Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III
1991-01-01
Two matched-filter-theory-based schemes are described and illustrated for obtaining maximized and time-correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.
1979-12-01
because of the use of complex computational algorithms (Ref 25). Another important factor affecting the cost of software is the size of the development… involved the alignment and navigational algorithm portions of the software. The second avionics system application was the development of an inertial…
NASA Technical Reports Server (NTRS)
Scott, Robert C.; Perry, Boyd, III; Pototzky, Anthony S.
1991-01-01
This paper describes and illustrates two matched-filter-theory based schemes for obtaining maximized and time-correlated gust-loads for a nonlinear airplane. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multidimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.
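The one-dimensional search is easy to picture: fix the matched excitation waveform, scale it by a scalar c, propagate it through the nonlinear system, and search over c for the largest load. Everything below (the waveform and the softening nonlinearity) is an invented stand-in for the airplane model, not the paper's simulation.

```python
import numpy as np

# Sketch of the one-dimensional search: scale a fixed matched-filter
# excitation by c, evaluate the peak load of a (toy) nonlinear response,
# and grid-search over c.

t = np.linspace(0, 10, 1000)
excitation = np.exp(-0.5 * (t - 5)**2) * np.sin(3 * t)   # matched waveform

def peak_load(c):
    u = c * excitation
    y = u * np.exp(-0.2 * u**2)      # toy softening nonlinearity: the load
    return np.max(np.abs(y))         # saturates once the response softens

scales = np.linspace(0.1, 5.0, 100)  # simple 1-D grid search over c
best = max(scales, key=peak_load)
print(best, peak_load(best))
```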
Menopause on the Internet: building knowledge and community on-line.
MacPherson, K I
1997-09-01
Computers are ubiquitous throughout the developed world. Diverse discourses address the pros and cons of using this technology in higher education. Nursing has extensively used informatics but has not, as yet, been involved to any extent in teaching on the Internet. I argue that nurse educators should use computer technology to present substantive and rigorous courses that deal with complex issues, using menopause as an example. A for-credit menopause course I taught via e-mail is used to illustrate the possibility of building knowledge and a sense of community on the Internet.
NASA Technical Reports Server (NTRS)
Nosenchuck, D. M.; Littman, M. G.
1986-01-01
The Navier-Stokes computer (NSC) has been developed for solving problems in fluid mechanics involving complex flow simulations that require more speed and capacity than provided by current and proposed Class VI supercomputers. The machine is a parallel processing supercomputer with several new architectural elements which can be programmed to address a wide range of problems meeting the following criteria: (1) the problem is numerically intensive, and (2) the code makes use of long vectors. A simulation of two-dimensional nonsteady viscous flows is presented to illustrate the architecture, programming, and some of the capabilities of the NSC.
Murder, insanity, and medical expert witnesses.
Ciccone, J R
1992-06-01
Recent advances in the ability to study brain anatomy and function and attempts to link these findings with human behavior have captured the attention of the legal system. This has led to the increasing use of the "neurological defense" to support a plea of not guilty by reason of insanity. This article explores the history of the insanity defense and the role of medical expert witnesses in integrating clinical and laboratory findings, eg, computed tomographic scans, magnetic resonance scans, and single-photon emission computed tomographic scans. Three cases involving murder and brain dysfunction are discussed: the first case involves a subarachnoid hemorrhage resulting in visual perceptual and memory impairment; the second case, a diagnosis of Alzheimer's disease; and the third case, the controverted diagnosis of complex partial seizures in a serial killer.
Fully probabilistic control design in an adaptive critic framework.
Herzallah, Randa; Kárný, Miroslav
2011-12-01
An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem; in particular, very hard multivariate integration and approximate interpolation of the involved multivariate functions. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this paper. Copyright © 2011 Elsevier Ltd. All rights reserved.
Hoi, Ka Hou; Çalimsiz, Selçuk; Froese, Robert D J; Hopkinson, Alan C; Organ, Michael G
2012-01-02
The amination of aryl chlorides with various aniline derivatives using the N-heterocyclic carbene-based Pd complexes Pd-PEPPSI-IPr and Pd-PEPPSI-IPent (PEPPSI=pyridine, enhanced precatalyst, preparation, stabilization, and initiation; IPr=diisopropylphenylimidazolium derivative; IPent= diisopentylphenylimidazolium derivative) has been studied. Rate studies have shown a reliance on the aryl chloride to be electron poor, although oxidative addition is not rate limiting. Anilines couple best when they are electron rich, which would seem to discount deprotonation of the intermediate metal ammonium complex as being rate limiting in favour of reductive elimination. In previous studies with secondary amines using PEPPSI complexes, deprotonation was proposed to be the slow step in the cycle. These experimental findings relating to mechanism were corroborated by computation. Pd-PEPPSI-IPr and the more hindered Pd-PEPPSI-IPent catalysts were used to couple deactivated aryl chlorides with electron poor anilines; while the IPr catalysis was sluggish, the IPent catalyst performed extremely well, again showing the high reactivity of this broadly useful catalyst. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Students' explanations in complex learning of disciplinary programming
NASA Astrophysics Data System (ADS)
Vieira, Camilo
Computational Science and Engineering (CSE) has been called the third pillar of science and a set of important skills for solving the problems of a global society. Along with the theoretical and the experimental approaches, computation offers a third alternative to solve complex problems that require processing large amounts of data, or representing complex phenomena that are not easy to experiment with. Despite the relevance of CSE, current professionals and scientists are not well prepared to take advantage of this set of tools and methods. Computation is usually taught in isolation from engineering disciplines, and therefore engineers do not know how to exploit CSE affordances. This dissertation introduces computational tools and methods contextualized within the Materials Science and Engineering curriculum. Considering that learning how to program is a complex task, the dissertation explores effective pedagogical practices that can support student disciplinary and computational learning. Two case studies are evaluated to identify the characteristics of effective worked examples in the context of CSE. Specifically, this dissertation explores students' explanations of these worked examples in two engineering courses with different levels of transparency: a programming course in materials science and engineering (glass box) and a thermodynamics course involving computational representations (black box). Results from this study suggest that students benefit in different ways from writing in-code comments. These benefits include but are not limited to: connecting individual lines of code to the overall problem, getting familiar with the syntax, learning effective algorithm design strategies, and connecting computation with their discipline. Students in the glass box context generate higher-quality explanations than students in the black box context. These explanations are related to students' prior experiences. Specifically, students with low programming ability engage in a more thorough explanation process than students with high ability. This dissertation concludes by proposing an adaptation of the instructional principles of worked examples for the context of CSE education.
Architectures for Quantum Simulation Showing a Quantum Speedup
NASA Astrophysics Data System (ADS)
Bermejo-Vega, Juan; Hangleiter, Dominik; Schwarz, Martin; Raussendorf, Robert; Eisert, Jens
2018-04-01
One of the main aims in the field of quantum simulation is to achieve a quantum speedup, often referred to as "quantum computational supremacy": the experimental realization of a quantum device that computationally outperforms classical computers. In this work, we show that one can devise versatile and feasible schemes of two-dimensional, dynamical, quantum simulators showing such a quantum speedup, building on intermediate problems involving nonadaptive, measurement-based quantum computation. In each of the schemes, an initial product state is prepared, potentially involving an element of randomness as in disordered models, followed by a short-time evolution under a basic translationally invariant Hamiltonian with simple nearest-neighbor interactions and a mere sampling measurement in a fixed basis. The correctness of the final-state preparation in each scheme is fully efficiently certifiable. We discuss experimental necessities and possible physical architectures, inspired by platforms of cold atoms in optical lattices and a number of others, as well as specific assumptions that enter the complexity-theoretic arguments. This work shows that benchmark settings exhibiting a quantum speedup may require little control, in contrast to universal quantum computing. Thus, our proposal puts a convincing experimental demonstration of a quantum speedup within reach in the near term.
Energy efficient hybrid computing systems using spin devices
NASA Astrophysics Data System (ADS)
Sharad, Mrigank
Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves and domain-wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque-based devices possess several interesting properties that can be exploited for ultra-low-power computation. The analog characteristic of spin current facilitates non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low-energy solutions for complex computation blocks, both digital as well as analog. Such low-power hybrid designs can be suitable for various data-processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS design, for optimal spin-device parameters.
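The majority-evaluation neuron mentioned above reduces, in idealized form, to summing analog input currents and thresholding on the sign of the net spin torque. The sketch below is that idealization only; real device behavior is far richer.

```python
import numpy as np

# Idealized sketch of a majority-evaluation neuron enabled by analog spin
# currents: weighted input currents sum in the metallic channel and the
# output nanomagnet switches on the sign of the net torque.

def spin_majority_neuron(inputs, weights):
    net = np.dot(weights, inputs)      # analog summation of spin currents
    return 1 if net >= 0 else -1       # output magnet switches with net torque

print(spin_majority_neuron([1, -1, 1], [1.0, 1.0, 1.0]))   # majority of +1 inputs -> 1
```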
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
1994-01-01
Strong interactions can occur between the flow about an aerospace vehicle and its structural components resulting in several important aeroelastic phenomena. These aeroelastic phenomena can significantly influence the performance of the vehicle. At present, closed-form solutions are available for aeroelastic computations when flows are in either the linear subsonic or supersonic range. However, for aeroelasticity involving complex nonlinear flows with shock waves, vortices, flow separations, and aerodynamic heating, computational methods are still under development. These complex aeroelastic interactions can be dangerous and limit the performance of aircraft. Examples of these detrimental effects are aircraft with highly swept wings experiencing vortex-induced aeroelastic oscillations, transonic regime at which the flutter speed is low, aerothermoelastic loads that play a critical role in the design of high-speed vehicles, and flow separations that often lead to buffeting with undesirable structural oscillations. The simulation of these complex aeroelastic phenomena requires an integrated analysis of fluids and structures. This report presents a summary of the development, applications, and procedures to use the multidisciplinary computer code ENSAERO. This code is based on the Euler/Navier-Stokes flow equations and modal/finite-element structural equations.
An Integrated Crustal Dynamics Simulator
NASA Astrophysics Data System (ADS)
Xing, H. L.; Mora, P.
2007-12-01
Numerical modelling offers an outstanding opportunity to gain an understanding of crustal dynamics and complex crustal system behaviour. This presentation summarizes our long-term and ongoing effort on finite-element-based computational model and software development to simulate interacting fault systems for earthquake forecasting. An R-minimum-strategy-based finite-element computational model and software tool, PANDAS, for modelling 3-dimensional nonlinear frictional contact behaviour between multiple deformable bodies with an arbitrarily-shaped contact element strategy has been developed by the authors, which builds a virtual laboratory to simulate interacting fault systems including crustal boundary conditions and various nonlinearities (e.g. from frictional contact, materials, geometry and thermal coupling). It has been successfully applied to large-scale computing of complex nonlinear phenomena in non-continuum media involving nonlinear frictional instability, multiple material properties and complex geometries on supercomputers, such as the South Australia (SA) interacting fault system, the Southern California fault model and the Sumatra subduction model. It has also been extended to simulate the hot fractured rock (HFR) geothermal reservoir system in collaboration with Geodynamics Ltd, which is constructing the first geothermal reservoir system in Australia, and to model tsunami generation induced by earthquakes. Both are supported by the Australian Research Council.
Potential Flow Theory and Operation Guide for the Panel Code PMARC. Version 14
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1999-01-01
The theoretical basis for PMARC, a low-order panel code for modeling complex three-dimensional bodies in potential flow, is outlined. PMARC can be run on a wide variety of computer platforms, including desktop machines, workstations, and supercomputers. Execution times for PMARC vary tremendously depending on the computer resources used, but typically range from several minutes for simple or moderately complex cases to several hours for very large complex cases. Several of the advanced features currently included in the code, such as internal flow modeling, boundary layer analysis, and time-dependent flow analysis, including problems involving relative motion, are discussed in some detail. The code is written in Fortran77, using adjustable-size arrays so that it can be easily redimensioned to match problem requirements and computer hardware constraints. An overview of the program input is presented. A detailed description of the input parameters is provided in the appendices. PMARC results for several test cases are presented along with analytic or experimental data, where available. The input files for these test cases are given in the appendices. PMARC currently supports plotfile output formats for several commercially available graphics packages. The supported graphics packages are Plot3D, Tecplot, and PmarcViewer.
ERIC Educational Resources Information Center
Goodman, William J.
Developed in response to the complex problems involved in providing equal educational opportunities for the intellectually alert orthopedically handicapped, the PLATO Programmable Terminal Keyset (PPTK) system makes the resources of PLATO compatible to the functional problems of a wide range of orthopedic conditions. This report describes the…
A First Approach to Filament Dynamics
ERIC Educational Resources Information Center
Silva, P. E. S.; de Abreu, F. Vistulo; Simoes, R.; Dias, R. G.
2010-01-01
Modelling elastic filament dynamics is a topic of high interest due to its wide range of applications. However, it has reached a high level of complexity in the literature, making it inaccessible to a beginner. In this paper we explain the main steps involved in the computational modelling of the dynamics of an elastic filament. We first derive…
Application of Intelligent Tutoring Technology to an Apparently Mechanical Task.
ERIC Educational Resources Information Center
Newman, Denis
The increasing automation of many occupations leads to jobs that involve understanding and monitoring the operation of complex computer systems. One case is PATRIOT, an air defense surface-to-air missile system deployed by the U.S. Army. Radar information is processed and presented to the operators in highly abstract form. The system identifies…
ERIC Educational Resources Information Center
Steif, Paul S.; Fu, Luoting; Kara, Levent Burak
2016-01-01
Problems faced by engineering students involve multiple pathways to solution. Students rarely receive effective formative feedback on handwritten homework. This paper examines the potential for computer-based formative assessment of student solutions to multipath engineering problems. In particular, an intelligent tutor approach is adopted and…
ERIC Educational Resources Information Center
Giacobe, Nicklaus A.
2013-01-01
Cyber-security involves monitoring a complex network of inter-related computers to prevent, identify, and remediate undesired actions. This work is performed in organizations by human analysts. These analysts monitor cyber-security sensors to develop and maintain situation awareness (SA) of both normal and abnormal activities that occur on…
Computational reacting gas dynamics
NASA Technical Reports Server (NTRS)
Lam, S. H.
1993-01-01
In the study of high-speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to introduce human errors. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to solve the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. A theoretical comparison with direct-coupled methods shows that the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to do dynamic sensitivity analysis on complex biological systems with time-delays.
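The automatic-differentiation step can be illustrated with forward-mode dual numbers, which deliver exact partial derivatives for Jacobian entries without symbolic manipulation. This toy class is a sketch of the idea, not the tool embedded in the authors' algorithm.

```python
import math

# Minimal forward-mode automatic differentiation with dual numbers. Each
# value carries a derivative slot that is propagated by the chain rule.
# This toy handles +, *, and sin; a real AD tool covers far more operators.

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)   # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# d/dx [x*sin(x) + 2x] at x = 1.3: seed the derivative slot with 1.
x = Dual(1.3, 1.0)
f = x * sin(x) + 2 * x
print(f.val, f.der)   # derivative equals sin(x) + x*cos(x) + 2
```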
Proposal for constructing an advanced software tool for planetary atmospheric modeling
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Sims, Michael H.; Podolak, Esther; Mckay, Christopher P.; Thompson, David E.
1990-01-01
Scientific model building can be a time intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model building and model sharing process. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing and using models. The proposed tool will include an interactive intelligent graphical interface and a high level, domain specific, modeling language. As a testbed for this research, we propose development of a software prototype in the domain of planetary atmospheric modeling.
NASA Astrophysics Data System (ADS)
Rosso, Osvaldo A.; Craig, Hugh; Moscato, Pablo
2009-03-01
We introduce novel information-theoretic quantifiers in a computational linguistic study involving a large corpus of English Renaissance literature. The 185 texts studied (136 plays and 49 poems in total), with first editions ranging from 1580 to 1640, form a representative set of the period. Our data set includes 30 texts unquestionably attributed to Shakespeare; in addition, we also included A Lover’s Complaint, a poem which generally appears in Shakespeare collected editions but whose authorship is currently in dispute. Our statistical complexity quantifiers combine the power of the Jensen-Shannon divergence with the entropy variations computed from a probability distribution function of the observed word-use frequencies. Our results show, among other things, that for a given entropy poems display higher complexity than plays, that Shakespeare's work falls into two distinct clusters in entropy, and that his work is remarkable for its homogeneity and for its closeness to overall means.
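The core quantifier is straightforward to compute; the sketch below evaluates the Jensen-Shannon divergence between two word-frequency vectors over a shared vocabulary. The two count vectors are invented examples, and the full study combines this divergence with entropy into a complexity measure.

```python
import numpy as np

# Jensen-Shannon divergence between two word-use frequency distributions:
# JSD(p, q) = H(m) - (H(p) + H(q)) / 2 with m = (p + q) / 2.

def js_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def h(d):                       # Shannon entropy, ignoring empty bins
        d = d[d > 0]
        return -np.sum(d * np.log2(d))
    return h(m) - 0.5 * h(p) - 0.5 * h(q)

# Word-use counts over a shared vocabulary for two toy "texts".
text_a = [10, 5, 2, 0, 1]
text_b = [2, 4, 8, 3, 0]
print(js_divergence(text_a, text_b))   # 0 only for identical distributions
```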
Hu, Jin; Wang, Jun
2015-06-01
In recent years, complex-valued recurrent neural networks have been developed and analysed in depth, given their good modelling performance for applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is often necessary to utilize a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.
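For illustration, a minimal discrete-time complex-valued recurrent iteration is shown below. The split real/imaginary tanh activation and the random weights are assumptions for the sketch, not the specific model or the stability conditions analysed in the paper.

```python
import numpy as np

# One possible discrete-time complex-valued recurrent network:
# z[k+1] = f(W z[k] + u), iterated from the zero state.

rng = np.random.default_rng(1)
n = 4
W = 0.3 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def f(z):
    # Activation applied separately to the real and imaginary parts.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

z = np.zeros(n, dtype=complex)
for k in range(100):
    z = f(W @ z + u)      # settles to a fixed point for small enough ||W||

print(np.round(z, 4))
```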
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
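The complex-variable approach used for the DYMORE sensitivities rests on the complex-step derivative: for a real-analytic f, Im f(x + ih)/h approximates f'(x) to O(h^2) with no subtractive cancellation, so h can be made extremely small. A minimal demonstration on a stand-in function:

```python
import numpy as np

# Complex-step derivative: perturb the input along the imaginary axis and
# read the derivative off the imaginary part of the output. Unlike finite
# differences, there is no subtraction of nearly equal numbers.

def f(x):
    return x**3 * np.sin(x)

x0, h = 1.7, 1e-30
deriv = np.imag(f(x0 + 1j * h)) / h
exact = 3 * x0**2 * np.sin(x0) + x0**3 * np.cos(x0)
print(deriv, exact)   # agree to machine precision
```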
DockTrina: docking triangular protein trimers.
Popov, Petr; Ritchie, David W; Grudinin, Sergei
2014-01-01
In spite of the abundance of oligomeric proteins within a cell, the structural characterization of protein-protein interactions is still a challenging task. In particular, many of these interactions involve heteromeric complexes, which are relatively difficult to determine experimentally. Hence there is growing interest in using computational techniques to model such complexes. However, assembling large heteromeric complexes computationally is a highly combinatorial problem. Nonetheless the problem can be simplified greatly by considering interactions between protein trimers. After dimers and monomers, triangular trimers (i.e. trimers with pair-wise contacts between all three pairs of proteins) are the most frequently observed quaternary structural motifs according to the three-dimensional (3D) complex database. This article presents DockTrina, a novel protein docking method for modeling the 3D structures of nonsymmetrical triangular trimers. The method takes as input pair-wise contact predictions from a rigid body docking program. It then scans and scores all possible combinations of pairs of monomers using a very fast root mean square deviation test. Finally, it ranks the predictions using a scoring function which combines triples of pair-wise contact terms and a geometric clash penalty term. The overall approach takes less than 2 min per complex on a modern desktop computer. The method is tested and validated using a benchmark set of 220 bound and seven unbound protein trimer structures. DockTrina will be made available at http://nano-d.inrialpes.fr/software/docktrina. Copyright © 2013 Wiley Periodicals, Inc.
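The combinatorial scan can be pictured with rigid transforms: three pairwise poses form a consistent triangular trimer only if their composition returns (approximately) to the identity. The sketch below measures that closure as an RMSD over stand-in coordinates; it is a simplified illustration of the idea, not DockTrina's actual test or scoring.

```python
import numpy as np

# Closure test for a triangular trimer: compose the pairwise rigid
# transforms A->B, B->C, C->A and measure how far points move under the
# composition. Transforms and "atoms" here are random toys.

def random_transform(rng):
    R, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
    return R, rng.standard_normal(3)                   # rotation + translation

def compose(T1, T2):
    R1, t1 = T1
    R2, t2 = T2
    return R2 @ R1, R2 @ t1 + t2        # apply T1 first, then T2

def closure_error(T_ab, T_bc, T_ca, points):
    R, t = compose(compose(T_ab, T_bc), T_ca)
    moved = points @ R.T + t
    return np.sqrt(np.mean(np.sum((moved - points)**2, axis=1)))  # RMSD

rng = np.random.default_rng(2)
pts = rng.standard_normal((20, 3))      # stand-in for monomer atom coordinates
T_ab, T_bc = random_transform(rng), random_transform(rng)

# A perfectly consistent third edge is the inverse of the first two composed.
R, t = compose(T_ab, T_bc)
T_ca_good = (R.T, -R.T @ t)
print(closure_error(T_ab, T_bc, T_ca_good, pts))              # ~0
print(closure_error(T_ab, T_bc, random_transform(rng), pts))  # large
```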
Routine Discovery of Complex Genetic Models using Genetic Algorithms
Moore, Jason H.; Hahn, Lance W.; Ritchie, Marylyn D.; Thornton, Tricia A.; White, Bill C.
2010-01-01
Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes. PMID:20948983
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Proffitt, Dennis R.
1992-01-01
Recent developments in microelectronics have encouraged the use of 3D data bases to create compelling volumetric renderings of graphical objects. However, even with the computational capabilities of current-generation graphical systems, real-time displays of such objects are difficult, particularly when dynamic spatial transformations are involved. In this paper we discuss a type of visual stimulus (the stereokinetic effect display) that is computationally far less complex than a true three-dimensional transformation but yields an equally compelling depth impression, often perceptually indiscriminable from the true spatial transformation. Several possible applications for this technique are discussed (e.g., animating contour maps and air traffic control displays so as to evoke accurate depth percepts).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derek Lovley; Maddalena Coppi; Stacy Ciufo
Analysis of the Genetic Potential and Gene Expression of Microbial Communities Involved in the In Situ Bioremediation of Uranium and Harvesting Electrical Energy from Organic Matter
The primary goal of this research is to develop conceptual and computational models that can describe the functioning of complex microbial communities involved in microbial processes of interest to the Department of Energy. Microbial communities to be investigated: (1) the microbial community associated with the in situ bioremediation of uranium-contaminated groundwater; and (2) the microbial community that is capable of harvesting energy from waste organic matter in the form of electricity.
O'Neill, M A; Hilgetag, C C
2001-08-29
Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement.
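The kind of stochastic optimization CANTOR applies to candidate arrangements can be sketched as simulated annealing over permutations with a user-defined cost. The cost function below (a simple disorder count) is a placeholder, not one of the system's actual cost functions.

```python
import math, random

# Simulated-annealing sketch: restructure a candidate arrangement by random
# swaps, accepting uphill moves with a temperature-dependent probability.

def cost(arrangement):
    # Placeholder user-defined cost: number of out-of-order neighbor pairs.
    return sum(a > b for a, b in zip(arrangement, arrangement[1:]))

random.seed(0)
state = random.sample(range(20), 20)
T = 2.0
for step in range(20000):
    i, j = random.sample(range(20), 2)
    cand = state[:]
    cand[i], cand[j] = cand[j], cand[i]          # candidate restructuring
    d = cost(cand) - cost(state)
    if d <= 0 or random.random() < math.exp(-d / T):
        state = cand
    T *= 0.9997                                  # cooling schedule

print(cost(state), state)
```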
Computational identification of microRNAs and their targets in cassava (Manihot esculenta Crantz.).
Patanun, Onsaya; Lertpanyasampatha, Manassawe; Sojikul, Punchapat; Viboonjun, Unchera; Narangajavana, Jarunya
2013-03-01
MicroRNAs (miRNAs) are a newly discovered class of noncoding endogenous small RNAs involved in plant growth and development as well as in the response to environmental stresses. miRNAs have been extensively studied in various plant species; however, only limited information is available for cassava, which serves as a staple food crop, a biofuel crop, animal feed, and a source of industrial raw materials. In this study, 169 potential cassava miRNAs belonging to 34 miRNA families were identified by a computational approach. Interestingly, mes-miR319b represents the first putative mirtron demonstrated in cassava. A total of 15 miRNA clusters involving 7 miRNA families, and 12 pairs of sense and antisense cassava miRNAs belonging to six different miRNA families, were discovered. Prediction of potential miRNA target genes revealed functions involved in various important plant biological processes. Cis-regulatory elements relevant to drought stress and plant hormone response were identified in the promoter regions of these miRNA genes. The results provide a foundation for further investigation of the functional role of known transcription factors in the regulation of cassava miRNAs. A better understanding of the complexity of the miRNA-mediated gene network in cassava would help unravel cassava's complex biology in storage root development and in coping with environmental stresses, thus providing more insights for future exploitation in cassava improvement.
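The homology-plus-hairpin filtering that underlies this kind of computational miRNA identification can be sketched roughly as below. All names, thresholds and the crude pairing check are illustrative; real pipelines rely on alignment tools and secondary-structure prediction (e.g. RNAfold minimum-free-energy criteria) rather than this toy filter.

```python
COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}  # Watson-Crick pairing (RNA)

def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

def has_hairpin(precursor, arm_len=20, min_pairs=14):
    # Crude fold-back check: count Watson-Crick pairs between the 5' arm and
    # the reversed 3' arm; real pipelines use MFE folding criteria instead.
    five, three = precursor[:arm_len], precursor[-arm_len:][::-1]
    return sum(COMP.get(x) == y for x, y in zip(five, three)) >= min_pairs

def predict(known_mirnas, precursors, max_mm=3):
    # Keep candidates that contain a near-copy of a known mature miRNA and
    # whose precursor passes the fold-back test.
    hits = []
    for pre in precursors:
        for mir in known_mirnas:
            for i in range(len(pre) - len(mir) + 1):
                if (mismatches(pre[i:i + len(mir)], mir) <= max_mm
                        and has_hairpin(pre)):
                    hits.append((mir, i))
    return hits

# Synthetic demo: a perfect hairpin precursor built around a hypothetical miRNA.
mir = "UGGACUGAAGGGAGCUCCCUU"
arm = "".join(COMP[c] for c in reversed(mir))
print(predict([mir], [mir + "GUGUGUGU" + arm]))
```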
Zevin, Jason D; Miller, Brett
Reading research is increasingly a multi-disciplinary endeavor involving more complex, team-based science approaches. These approaches offer the potential of capturing the complexity of reading development, the emergence of individual differences in reading performance over time, how these differences relate to the development of reading difficulties and disability, and the nature of skilled reading in adults. This special issue focuses on the potential opportunities and insights that early and richly integrated advanced statistical and computational modeling approaches can provide to our foundational (and translational) understanding of reading. The issue explores how computational and statistical modeling, using both observed and simulated data, can serve as a contact point among research domains and topics, complement other data sources, and, critically, provide analytic advantages over current approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This technical note describes the current capabilities and availability of the Automated Dredging and Disposal Alternatives Management System (ADDAMS). The technical note replaces the earlier Technical Note EEDP-06-12, which should be discarded. Planning, design, and management of dredging and dredged material disposal projects often require complex or tedious calculations or involve complex decision-making criteria. In addition, the evaluations often must be done for several disposal alternatives or disposal sites. ADDAMS is a personal computer (PC)-based system developed to assist in making such evaluations in a timely manner. ADDAMS contains a collection of computer programs (applications) designed to assist in managing dredging projects. This technical note describes the system, currently available applications, mechanisms for acquiring and running the system, and provisions for revision and expansion.
On Convergence of Development Costs and Cost Models for Complex Spaceflight Instrument Electronics
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Patel, Umeshkumar D.; Kasa, Robert L.; Hestnes, Phyllis; Brown, Tammy; Vootukuru, Madhavi
2008-01-01
Development costs of a few recent spaceflight instrument electrical and electronics subsystems have diverged from the predictions of their respective heritage cost models. The cost models used are Grass Roots, Price-H and Parametric Model. These cost models originated in the military and industry around 1970 and were successfully adopted and patched by NASA on a mission-by-mission basis for years. However, the complexity of new instruments has recently grown by orders of magnitude. This is most obvious in the complexity of a representative spaceflight instrument's electronics data system. It is now required to perform intermediate processing of digitized data apart from conventional processing of science phenomenon signals from multiple detectors. This involves on-board instrument formatting of computational operands from raw data (for example, images), multi-million operations per second on large volumes of data in reconfigurable hardware (in addition to processing on a general-purpose embedded or standalone instrument flight computer), as well as making decisions for on-board system adaptation and resource reconfiguration. The instrument data system is now tasked to perform more functions, such as forming packets and instrument-level data compression of more than one data stream, which are traditionally performed by the spacecraft command and data handling system. It is furthermore required that the electronics box for new complex instruments be developed for single-digit-watt power consumption, small size and light weight, while delivering super-computing capabilities. The conflict between the actual development cost of newer complex instruments and the heritage cost model predictions for their electronics components seems to be irreconcilable. This conflict and an approach to its resolution are addressed in this paper by determining complexity parameters and a complexity index, and by describing their use in an enhanced cost model.
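The abstract describes the enhanced model only in terms of complexity parameters and a complexity index. One plausible reading, sketched below purely for illustration, is a heritage estimate scaled by a normalized, weighted ratio of new-instrument parameters to heritage-era reference values; the parameter names, weights, reference values and scaling law are all invented.

```python
# Hypothetical complexity-index-enhanced cost model sketch; nothing below is
# taken from the Grass Roots, Price-H, or Parametric models themselves.
def complexity_index(params, reference, weights):
    # Each parameter is normalized against a heritage-era reference value and
    # the normalized ratios are combined as a weighted average.
    total_w = sum(weights.values())
    return sum(w * (params[k] / reference[k]) for k, w in weights.items()) / total_w

heritage_cost = 12.0e6  # heritage model prediction, dollars (illustrative)
params    = {"mops": 4000.0, "gates": 3.0e6, "streams": 4}   # new instrument
reference = {"mops": 10.0,   "gates": 5.0e4, "streams": 1}   # heritage era
weights   = {"mops": 0.5, "gates": 0.3, "streams": 0.2}

ci = complexity_index(params, reference, weights)
# Illustrative sublinear scaling of cost with complexity index.
print(f"complexity index = {ci:.1f}, adjusted cost = ${heritage_cost * ci**0.5:,.0f}")
```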
NASA Astrophysics Data System (ADS)
Greene, Casey S.; Hill, Douglas P.; Moore, Jason H.
The relationship between interindividual variation in our genomes and variation in our susceptibility to common diseases is expected to be complex with multiple interacting genetic factors. A central goal of human genetics is to identify which DNA sequence variations predict disease risk in human populations. Our success in this endeavour will depend critically on the development and implementation of computational intelligence methods that are able to embrace, rather than ignore, the complexity of the genotype to phenotype relationship. To this end, we have developed a computational evolution system (CES) to discover genetic models of disease susceptibility involving complex relationships between DNA sequence variations. The CES approach is hierarchically organized and is capable of evolving operators of any arbitrary complexity. The ability to evolve operators distinguishes this approach from artificial evolution approaches using fixed operators such as mutation and recombination. Our previous studies have shown that a CES that can utilize expert knowledge about the problem in evolved operators significantly outperforms a CES unable to use this knowledge. This environmental sensing of external sources of biological or statistical knowledge is important when the search space is both rugged and large as in the genetic analysis of complex diseases. We show here that the CES is also capable of evolving operators which exploit one of several sources of expert knowledge to solve the problem. This is important for both the discovery of highly fit genetic models and because the particular source of expert knowledge used by evolved operators may provide additional information about the problem itself. This study brings us a step closer to a CES that can solve complex problems in human genetics in addition to discovering genetic models of disease.
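The flavor of such a system, a population of candidate models whose variation operators are themselves heritable and may draw on expert knowledge, can be compressed into a toy sketch. The fitness function, knowledge source and parameters below are stand-ins, not the CES implementation.

```python
import random

random.seed(1)
N_ATTR = 20
# Stand-in "expert knowledge": a ranking that happens to place the two
# functional attributes (3 and 7) near the top, as a statistical filter might.
expert_rank = [3, 7] + [a for a in range(N_ATTR) if a not in (3, 7)]

def fitness(subset):
    # Toy stand-in for model quality: reward capturing the interacting pair
    # (3, 7) and penalize model size.
    return (3 in subset) + (7 in subset) - 0.1 * len(subset)

def mutate(subset, use_expert):
    # An evolvable operator: it samples attributes either guided by the expert
    # ranking or uniformly at random, and this choice is itself heritable.
    s = set(subset)
    pool = expert_rank[:5] if use_expert else list(range(N_ATTR))
    s.symmetric_difference_update({random.choice(pool)})
    return frozenset(s)

pop = [(frozenset(random.sample(range(N_ATTR), 3)), random.random() < 0.5)
       for _ in range(30)]
for _ in range(50):
    survivors = sorted(pop, key=lambda si: fitness(si[0]), reverse=True)[:10]
    pop = [(mutate(s, u), u if random.random() > 0.1 else not u)
           for s, u in survivors for _ in range(3)]
best = max(pop, key=lambda si: fitness(si[0]))
print(best, fitness(best[0]))
```

Tracking which operator flavor survives selection mirrors the paper's point that the knowledge source exploited by evolved operators is itself informative.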
Ivanisenko, Nikita V; Tregubchak, Tatiana V; Saik, Olga V; Ivanisenko, Vladimir A; Shchelkunov, Sergei N
2014-01-01
Inhibition of the activity of tumor necrosis factor (TNF) has become the main strategy for treating inflammatory diseases. The orthopoxvirus TNF-binding proteins can bind and efficiently neutralize TNF. To analyze the mechanisms of the interaction between human (hTNF) or mouse (mTNF) TNF and the N-terminal binding domains of the cowpox virus (TNFBD-CPXV) and variola virus (TNFBD-VARV) TNF-binding proteins, and to define the amino acids most important for complex formation, computer models derived from the X-ray structure of a homologous hTNF/TNFRII complex were used together with experiments. The hTNF/TNFBD-CPXV, hTNF/TNFBD-VARV, mTNF/TNFBD-CPXV, and mTNF/TNFBD-VARV complexes were used in molecular dynamics (MD) simulations and MM/GBSA free energy calculations. In order of increasing binding affinity, the complexes were ranked hTNF/TNFBD-CPXV, hTNF/TNFBD-VARV, mTNF/TNFBD-CPXV, and mTNF/TNFBD-VARV. The calculations were in agreement with surface plasmon resonance (SPR) measurements of the binding constants. Key residues involved in complex formation were identified.
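For context on the SPR comparison: measured dissociation constants map onto binding free energies through ΔG° = RT ln Kd, which can then be ranked against the computed MM/GBSA energies. The Kd values in this sketch are invented placeholders chosen only to reproduce the affinity ordering stated above.

```python
import math

R, T = 1.987e-3, 298.15  # gas constant in kcal/(mol·K), temperature in K

def dg_from_kd(kd_molar):
    # Standard binding free energy from a dissociation constant (1 M reference).
    return R * T * math.log(kd_molar)

kd = {  # hypothetical values, ordered by increasing affinity as in the abstract
    "hTNF/TNFBD-CPXV": 1e-7,
    "hTNF/TNFBD-VARV": 3e-8,
    "mTNF/TNFBD-CPXV": 1e-8,
    "mTNF/TNFBD-VARV": 2e-9,
}
for name, k in kd.items():
    print(f"{name}: Kd = {k:.0e} M, ΔG° = {dg_from_kd(k):+.1f} kcal/mol")
```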
Wu, Bian; Wang, Minhong; Grotzer, Tina A; Liu, Jun; Johnson, Janice M
2016-08-22
Practical experience with clinical cases has played an important role in supporting the learning of clinical reasoning. However, learning through practical experience involves complex processes that are difficult for students to capture. This study aimed to examine the effects of a computer-based cognitive-mapping approach that helps students to externalize the reasoning process, and the knowledge underlying the reasoning process, when they work with clinical cases. A comparison between the cognitive-mapping approach and a verbal-text approach was made by analyzing their effects on learning outcomes. Fifty-two third-year or higher students from two medical schools participated in the study. Students in the experimental group used the computer-based cognitive-mapping approach, while the control group used the verbal-text approach, to make sense of their thinking and actions when they worked with four simulated cases over 4 weeks. For each case, students in both groups reported their reasoning process (involving data capture, hypothesis formulation, and reasoning with justifications) and the underlying knowledge (involving identified concepts and the relationships between the concepts) using the given approach. The learning products (cognitive maps or verbal text) revealed that students in the cognitive-mapping group outperformed those in the verbal-text group in the reasoning process, but not in making sense of the knowledge underlying the reasoning process. No significant differences were found in a knowledge posttest between the two groups. The computer-based cognitive-mapping approach has shown a promising advantage over the verbal-text approach in improving students' reasoning performance. Further studies are needed to examine the effects of the cognitive-mapping approach in improving the construction of subject-matter knowledge on the basis of practical experience.
ERIC Educational Resources Information Center
Lee, Chwee Beng
2013-01-01
The use of computers for learning is often a complex issue which involves cognitive and metacognitive concerns. This gives rise to our interest in examining the intention to use technology with relation to regulation of cognition. The use of technology for learning would necessarily require learners to exercise a certain level of regulation over…
ERIC Educational Resources Information Center
Chung, C-W.; Lee, C-C.; Liu, C-C.
2013-01-01
Mobile computers are now increasingly applied to facilitate face-to-face collaborative learning. However, the factors affecting face-to-face peer interactions are complex as they involve rich communication media. In particular, non-verbal interactions are necessary to convey critical communication messages in face-to-face communication. Through…
A note on a simplified and general approach to simulating from multivariate copula functions
Barry K. Goodwin
2013-01-01
Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses "Probability-...
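The generic two-step recipe underlying such approaches (draw correlated uniforms from a copula, then push them through arbitrary marginal quantile functions) can be made concrete with a Gaussian copula. This sketch is the standard recipe with illustrative parameters, not necessarily the specific simplified approach of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])          # copula dependence parameter
z = rng.multivariate_normal(np.zeros(2), corr, size=10_000)
u = stats.norm.cdf(z)                  # correlated uniforms on [0, 1]^2

# Apply any marginals via inverse-CDF (quantile) transforms:
x = stats.gamma.ppf(u[:, 0], a=2.0)    # gamma marginal
y = stats.lognorm.ppf(u[:, 1], s=0.5)  # lognormal marginal
print(np.corrcoef(x, y)[0, 1])         # induced (rank-preserving) dependence
```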
Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping
ERIC Educational Resources Information Center
McDougall, Sine; Tyrer, Victoria; Folkard, Simon
2006-01-01
Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…
Making Graphical Inferences: A Hierarchical Framework
2004-08-01
Making inferences from graphs is considered one of the more complex skills graph readers should possess. According to the National Council of Teachers of Mathematics (NCTM) standards, the simplest type of question involves the extraction or comparison of a few explicitly represented data points (read-offs).
Alborghetti, Marcos Rodrigo; Furlan, Ariane da Silva; da Silva, Júlio César; Sforça, Maurício Luís; Honorato, Rodrigo Vargas; Granato, Daniela Campos; dos Santos Migueleti, Deivid Lucas; Neves, Jorge L; de Oliveira, Paulo Sergio Lopes; Paes-Leme, Adriana Franco; Zeri, Ana Carolina de Mattos; de Torriani, Iris Concepcion Linares; Kobarg, Jörg
2013-01-01
Cytoskeleton and protein trafficking processes, including vesicle transport to synapses, are key processes in neuronal differentiation and axon outgrowth. The human protein FEZ1 (fasciculation and elongation protein zeta 1 / UNC-76, in C. elegans), SCOCO (short coiled-coil protein / UNC-69) and kinesins (e.g. kinesin heavy chain / UNC116) are involved in these processes. Exploiting the feature of FEZ1 protein as a bivalent adapter of transport mediated by kinesins and FEZ1 protein interaction with SCOCO (proteins involved in the same path of axonal growth), we investigated the structural aspects of intermolecular interactions involved in this complex formation by NMR (Nuclear Magnetic Resonance), cross-linking coupled with mass spectrometry (MS), SAXS (Small Angle X-ray Scattering) and molecular modelling. The topology of homodimerization was accessed through NMR (Nuclear Magnetic Resonance) studies of the region involved in this process, corresponding to FEZ1 (92-194). Through studies involving the protein in its monomeric configuration (reduced) and dimeric state, we propose that homodimerization occurs with FEZ1 chains oriented in an anti-parallel topology. We demonstrate that the interaction interface of FEZ1 and SCOCO defined by MS and computational modelling is in accordance with that previously demonstrated for UNC-76 and UNC-69. SAXS and literature data support a heterotetrameric complex model. These data provide details about the interaction interfaces probably involved in the transport machinery assembly and open perspectives to understand and interfere in this assembly and its involvement in neuronal differentiation and axon outgrowth.
Network representations of immune system complexity
Subramanian, Naeha; Torabi-Parizi, Parizad; Gottschalk, Rachel A.; Germain, Ronald N.; Dutta, Bhaskar
2015-01-01
The mammalian immune system is a dynamic multi-scale system composed of a hierarchically organized set of molecular, cellular and organismal networks that act in concert to promote effective host defense. These networks range from those involving gene regulatory and protein-protein interactions underlying intracellular signaling pathways and single cell responses to increasingly complex networks of in vivo cellular interaction, positioning and migration that determine the overall immune response of an organism. Immunity is thus not the product of simple signaling events but rather non-linear behaviors arising from dynamic, feedback-regulated interactions among many components. One of the major goals of systems immunology is to quantitatively measure these complex multi-scale spatial and temporal interactions, permitting development of computational models that can be used to predict responses to perturbation. Recent technological advances permit collection of comprehensive datasets at multiple molecular and cellular levels while advances in network biology support representation of the relationships of components at each level as physical or functional interaction networks. The latter facilitate effective visualization of patterns and recognition of emergent properties arising from the many interactions of genes, molecules, and cells of the immune system. We illustrate the power of integrating ‘omics’ and network modeling approaches for unbiased reconstruction of signaling and transcriptional networks with a focus on applications involving the innate immune system. We further discuss future possibilities for reconstruction of increasingly complex cellular and organism-level networks and development of sophisticated computational tools for prediction of emergent immune behavior arising from the concerted action of these networks. PMID:25625853
NASA Technical Reports Server (NTRS)
Shooman, Martin L.
1991-01-01
Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer-aided design programs. In response to this need, NASA has developed a suite of computer-aided reliability modeling programs, beginning with CARE 3 and including a group of newer programs: HARP, HARP-PC, the Reliability Analysts Workbench (a combination of the model solvers SURE, STEM, and PAWS with the common front-end model ASSIST), and the Fault Tree Compiler. This report studies the HARP program and investigates how well users can model systems with it. One important objective is to assess how user-friendly the program is, e.g., how easy it is to model a system, provide the input information, and interpret the results. The experiences of the author and his graduate students, who used HARP in two graduate courses, are described, along with some brief comparisons with the ARIES program, which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Since no answer can be more accurate than the fidelity of the model, an appendix discussing modeling accuracy is included. A broad viewpoint is taken, and all problems that occurred in the use of HARP are discussed, including computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, and accuracy problems.
Acute aortic syndromes: new insights from electrocardiographically gated computed tomography.
Fleischmann, Dominik; Mitchell, R Scott; Miller, D Craig
2008-01-01
The development of retrospective electrocardiographic (ECG)-gating has proved to be a diagnostic and therapeutic boon for computed tomography (CT) imaging of patients with acute thoracic aortic diseases, such as aortic dissection/intramural hematoma (AD/IMH), penetrating atherosclerotic ulcer (APU), and ruptured/leaking aneurysm. The notorious pulsation motion artifacts in the ascending aorta confounding regular CT scanning can be eliminated, and involvement of the sinuses of Valsalva, the valve cusps, the aortic annulus, and the coronary arteries in aortic dissection can be clearly depicted or excluded. Motion-free images also allow reliable identification of the site of the primary intimal tear, the location, and extent of the intimomedial flap, and branch artery involvement. ECG-gated CTA also allows the detection of more subtle lesions and variants of aortic dissection, which may ultimately expand our understanding of these complex, life-threatening disorders.
NASA Astrophysics Data System (ADS)
Friedrich, J.
1999-08-01
As lecturers, our main concern and goal is to develop more attractive and efficient ways of communicating up-to-date scientific knowledge to our students and to facilitate an in-depth understanding of physical phenomena. Computer-based instruction is very promising for helping both teachers and learners in this difficult task, which involves complex cognitive psychological processes. This complexity is reflected in high demands on the design and implementation methods used to create computer-assisted learning (CAL) programs. Owing to their concepts, flexibility, maintainability and extended library resources, object-oriented modeling techniques are very suitable for producing this type of pedagogical tool. Computational fluid dynamics (CFD) not only enjoys a growing importance in today's research, but is also very powerful for teaching and learning fluid dynamics. For this purpose, an educational PC program for university level called 'CFDLab 1.1' for Windows™ was developed with an interactive graphical user interface (GUI) for multitasking and point-and-click operations. It uses the dual reciprocity boundary element method as a versatile numerical scheme, which allows it to handle a variety of relevant governing equations in two dimensions (Laplace, Poisson, diffusion, and transient convection-diffusion) on personal computers, owing to its simple pre- and postprocessing.
Metrics for comparing dynamic earthquake rupture simulations
Barall, Michael; Harris, Ruth A.
2014-01-01
Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
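One simple example of such a metric is an RMS misfit between two codes' rupture-time fields sampled on a common on-fault grid; the actual metrics developed in the paper may differ. A minimal sketch with synthetic fields:

```python
import numpy as np

def rms_misfit(t_a, t_b):
    # Normalized RMS difference between two rupture-time fields on a common grid.
    t_a, t_b = np.asarray(t_a), np.asarray(t_b)
    return np.sqrt(np.mean((t_a - t_b) ** 2)) / np.sqrt(np.mean(t_a ** 2))

rng = np.random.default_rng(1)
rupture_time_code_a = rng.uniform(0.0, 5.0, (50, 50))             # synthetic field
rupture_time_code_b = rupture_time_code_a + rng.normal(0, 0.05, (50, 50))
print(f"normalized RMS misfit: {rms_misfit(rupture_time_code_a, rupture_time_code_b):.4f}")
```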
Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Gatski, Thomas B.
1997-01-01
A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.
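For context, the tridiagonal systems in question are classically solved with the inherently sequential Thomas algorithm, sketched below; the paper's contribution is precisely to restructure the computation so that this serial bottleneck is avoided on distributed-memory machines.

```python
import numpy as np

def thomas(a, b, c, d):
    # Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs.
    # Forward elimination followed by back substitution; O(n) but sequential.
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 6
x = thomas(np.full(n, 1.0), np.full(n, 4.0), np.full(n, 1.0),
           np.arange(n, dtype=float))
print(x)
```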
The role of water molecules in computational drug design.
de Beer, Stephanie B A; Vermeulen, Nico P E; Oostenbrink, Chris
2010-01-01
Although water molecules are small and only consist of two different atom types, they play various roles in cellular systems. This review discusses their influence on the binding process between biomacromolecular targets and small molecule ligands and how this influence can be modeled in computational drug design approaches. Both the structure and the thermodynamics of active site waters will be discussed as these influence the binding process significantly. Structurally conserved waters cannot always be determined experimentally and if observed, it is not clear if they will be replaced upon ligand binding, even if sufficient space is available. Methods to predict the presence of water in protein-ligand complexes will be reviewed. Subsequently, we will discuss methods to include water in computational drug research. Either as an additional factor in automated docking experiments, or explicitly in detailed molecular dynamics simulations, the effect of water on the quality of the simulations is significant, but not easily predicted. The most detailed calculations involve estimates of the free energy contribution of water molecules to protein-ligand complexes. These calculations are computationally demanding, but give insight in the versatility and importance of water in ligand binding.
García-Guerrero, Estefanía; Pérez-Simón, José Antonio; Sánchez-Abarca, Luis Ignacio; Díaz-Moreno, Irene; De la Rosa, Miguel A; Díaz-Quintana, Antonio
2016-01-01
Generating the immune response requires the discrimination of peptides presented by the human leukocyte antigen complex (HLA) through the T-cell receptor (TCR). However, how a single amino acid substitution in the antigen bound to HLA affects the response of T cells remains uncertain. Hence, we used molecular dynamics computations to analyze the molecular interactions between peptides, HLA, and TCR. We compared immunologically reactive complexes with non-reactive and weakly reactive complexes. MD trajectories were produced to simulate the behavior of the isolated components of the various p-HLA-TCR complexes. Analysis of the fluctuations showed that p-HLA binding barely restrains TCR motions and mainly affects the CDR3 loops. Conversely, inactive p-HLA complexes displayed a significant drop in their dynamics when their free and ternary (p-HLA-TCR) forms were compared. In agreement, the free non-reactive p-HLA complexes showed fewer salt bridges than the responsive ones. This resulted in differences between the electrostatic potentials of reactive and inactive p-HLA species, and in larger vibrational entropies in non-elicitor complexes. Analysis of the ternary p-HLA-TCR complexes also revealed a larger number of salt bridges in the responsive complexes. To summarize, our computations indicate that the affinity of each p-HLA complex towards the TCR is intimately linked to both the dynamics of its free species and its ability to form specific intermolecular salt bridges in the ternary complex. Of outstanding interest is the emerging concept of antigen reactivity involving its interplay with the dynamics of the HLA head sidechains through rearrangement of its salt bridges.
Quantum Vertex Model for Reversible Classical Computing
NASA Astrophysics Data System (ADS)
Chamon, Claudio; Mucciolo, Eduardo; Ruckenstein, Andrei; Yang, Zhicheng
We present a planar vertex model that encodes the result of a universal reversible classical computation in its ground state. The approach involves Boolean variables (spins) placed on links of a two-dimensional lattice, with vertices representing logic gates. Large short-ranged interactions between at most two spins implement the operation of each gate. The lattice is anisotropic, with one direction corresponding to computational time, and with transverse boundaries storing the computation's input and output. The model displays no finite-temperature phase transitions, including no glass transitions, independent of circuit. The computational complexity is encoded in the scaling of the relaxation rate into the ground state with the system size. We use thermal annealing and a novel and more efficient heuristic, "annealing with learning", to study various computational problems. To explore faster relaxation routes, we construct an explicit mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating a novel approach to reversible classical computation based on quantum annealing.
Development of a change management system
NASA Technical Reports Server (NTRS)
Parks, Cathy Bonifas
1993-01-01
The complexity and interdependence of software on a computer system can create a situation where a solution to one problem causes failures in dependent software. In the computer industry, software problems arise and are often solved with 'quick and dirty' solutions. But in implementing these solutions, documentation about the solution or user notification of changes is often overlooked, and new problems are frequently introduced because of insufficient review or testing. These problems increase when numerous heterogeneous systems are involved. Because of this situation, a change management system plays an integral part in the maintenance of any multisystem computing environment. At the NASA Ames Advanced Computational Facility (ACF), the Online Change Management System (OCMS) was designed and developed to manage the changes being applied to its multivendor computing environment. This paper documents the research, design, and modifications that went into the development of this change management system (CMS).
Design Requirements for Communication-Intensive Interactive Applications
NASA Astrophysics Data System (ADS)
Bolchini, Davide; Garzotto, Franca; Paolini, Paolo
Online interactive applications call for new requirements paradigms to capture the growing complexity of computer-mediated communication. Crafting successful interactive applications (such as websites and multimedia) involves modeling the requirements for the user experience, including those leading to content design, usable information architecture and interaction, in profound coordination with the communication goals of all stakeholders involved, ranging from persuasion to social engagement, to call for action. To face this grand challenge, we propose a methodology for modeling communication requirements and provide a set of operational conceptual tools to be used in complex projects with multiple stakeholders. Through examples from real-life projects and lessons-learned from direct experience, we draw on the concepts of brand, value, communication goals, information and persuasion requirements to systematically guide analysts to master the multifaceted connections of these elements as drivers to inform successful communication designs.
Liang, Jie; Qian, Hong
2010-01-01
Modern molecular biology has always been a great source of inspiration for computational science. Half a century ago, the challenge from understanding macromolecular dynamics has led the way for computations to be part of the tool set to study molecular biology. Twenty-five years ago, the demand from genome science has inspired an entire generation of computer scientists with an interest in discrete mathematics to join the field that is now called bioinformatics. In this paper, we shall lay out a new mathematical theory for dynamics of biochemical reaction systems in a small volume (i.e., mesoscopic) in terms of a stochastic, discrete-state continuous-time formulation, called the chemical master equation (CME). Similar to the wavefunction in quantum mechanics, the dynamically changing probability landscape associated with the state space provides a fundamental characterization of the biochemical reaction system. The stochastic trajectories of the dynamics are best known through the simulations using the Gillespie algorithm. In contrast to the Metropolis algorithm, this Monte Carlo sampling technique does not follow a process with detailed balance. We shall show several examples how CMEs are used to model cellular biochemical systems. We shall also illustrate the computational challenges involved: multiscale phenomena, the interplay between stochasticity and nonlinearity, and how macroscopic determinism arises from mesoscopic dynamics. We point out recent advances in computing solutions to the CME, including exact solution of the steady state landscape and stochastic differential equations that offer alternatives to the Gillespie algorithm. We argue that the CME is an ideal system from which one can learn to understand “complex behavior” and complexity theory, and from which important biological insight can be gained. PMID:24999297
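The Gillespie trajectory sampling mentioned above can be shown in miniature for a toy birth-death model of constitutive gene expression (synthesis at rate k1, degradation at rate k2·n per molecule), whose CME steady state is Poisson with mean k1/k2. Rates and model are illustrative:

```python
import random

def gillespie(k1=10.0, k2=1.0, n0=0, t_end=10.0):
    # Exact stochastic simulation of the birth-death CME: at each step, draw
    # an exponential waiting time from the total propensity, then pick which
    # reaction fires in proportion to its propensity.
    t, n, traj = 0.0, n0, [(0.0, n0)]
    while t < t_end:
        a1, a2 = k1, k2 * n           # propensities: synthesis, degradation
        a0 = a1 + a2
        t += random.expovariate(a0)   # time to the next reaction event
        n += 1 if random.random() < a1 / a0 else -1
        traj.append((t, n))
    return traj

traj = gillespie()
print(f"final copy number: {traj[-1][1]} (CME steady-state mean is k1/k2 = 10)")
```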
Complex energies and the polyelectronic Stark problem
NASA Astrophysics Data System (ADS)
Themelis, Spyros I.; Nicolaides, Cleanthes A.
2000-12-01
The problem of computing the energy shifts and widths of ground or excited N-electron atomic states perturbed by weak or strong static electric fields is dealt with by formulating a state-specific complex eigenvalue Schrödinger equation (CESE), where the complex energy contains the field-induced shift and width. The CESE is solved to all orders nonperturbatively, by using separately optimized N-electron function spaces, composed of real and complex one-electron functions, the latter being functions of a complex coordinate. The use of such spaces is a salient characteristic of the theory, leading to economy and manageability of calculation in terms of a two-step computational procedure. The first step involves only Hermitian matrices. The second adds complex functions and the overall computation becomes non-Hermitian. Aspects of the formalism and of computational strategy are compared with those of the complex absorption potential (CAP) method, which was recently applied for the calculation of field-induced complex energies in H and Li. Also compared are the numerical results of the two methods, and the questions of accuracy and convergence that were posed by Sahoo and Ho (Sahoo S and Ho Y K 2000 J. Phys. B: At. Mol. Opt. Phys. 33 2195) are explored further. We draw attention to the fact that, because in the region where the field strength is weak the tunnelling rate (imaginary part of the complex eigenvalue) diminishes exponentially, it is possible for even large-scale nonperturbative complex eigenvalue calculations either to fail completely or to produce seemingly stable results which, however, are wrong. It is in this context that the discrepancy in the width of Li 1s²2s ²S between results obtained by the CAP method and those obtained by the CESE method is interpreted. We suggest that the very-weak-field regime must be computed by the golden rule, provided the continuum is represented accurately. In this respect, existing one-particle semiclassical formulae seem to be sufficient. In addition to the aforementioned comparisons and conclusions, we present a number of new results from the application of the state-specific CESE theory to the calculation of field-induced shifts and widths of the H n = 3 levels and of the prototypical Be 1s²2s² ¹S state, for a range of field strengths. Using the H n = 3 manifold as the example, it is shown how errors may occur for small values of the field, unless the function spaces are optimized carefully for each level.
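The complex-coordinate ingredient can be illustrated, far more crudely than in the state-specific CESE theory, by uniformly rotating a one-dimensional model Hamiltonian, whose eigenvalues then include a complex resonance energy E_r − iΓ/2. The grid, model potential, rotation angle and eigenvalue-selection heuristic below are all illustrative.

```python
import numpy as np

# Toy uniform complex scaling x -> x·exp(iθ): the rotated Hamiltonian is
# non-Hermitian and its complex eigenvalues expose quasibound resonances.
theta = 0.3
n, L = 600, 20.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]

def V(z):
    # A well-plus-barrier model potential supporting a quasibound state.
    return 0.5 * z**2 * np.exp(-0.1 * z**2)

kin = (np.exp(-2j * theta) / (2 * h**2)) * (
    2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
H = kin + np.diag(V(x * np.exp(1j * theta)))
E = np.linalg.eigvals(H)
# Crude selection: the eigenvalue nearest the expected lowest quasibound
# level of the well (roughly the harmonic value 0.5 in these units).
res = E[np.argmin(np.abs(E - 0.5))]
print(f"resonance: E_r ~ {res.real:.4f}, width Gamma ~ {-2 * res.imag:.2e}")
```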
COMPUTATIONAL MITRAL VALVE EVALUATION AND POTENTIAL CLINICAL APPLICATIONS
Chandran, Krishnan B.; Kim, Hyunggun
2014-01-01
The mitral valve (MV) apparatus consists of the two asymmetric leaflets, the saddle-shaped annulus, the chordae tendineae, and the papillary muscles. MV function over the cardiac cycle involves complex interaction between the MV apparatus components for efficient blood circulation. Common diseases of the MV include valvular stenosis, regurgitation, and prolapse. MV repair is the most popular and most reliable surgical treatment for early MV pathology. One of the unsolved problems in MV repair is to predict the optimal repair strategy for each patient. Although experimental studies have provided valuable information to improve repair techniques, computational simulations are increasingly playing an important role in understanding the complex MV dynamics, particularly with the availability of patient-specific real-time imaging modalities. This work presents a review of computational simulation studies of MV function employing finite element (FE) structural analysis and the fluid-structure interaction (FSI) approach reported in the literature to date. More recent studies towards potential applications of computational simulation approaches in the assessment of valvular repair techniques and potential pre-surgical planning of repair strategies are also discussed. It is anticipated that further advancements in computational techniques combined with the next generations of clinical imaging modalities will enable physiologically more realistic simulations. Such advancement in imaging and computation will allow for patient-specific, disease-specific, and case-specific MV evaluation and virtual prediction of MV repair. PMID:25134487
Patel, Trushar R; Chojnowski, Grzegorz; Astha; Koul, Amit; McKenna, Sean A; Bujnicki, Janusz M
2017-04-15
The diverse functional cellular roles played by ribonucleic acids (RNA) have emphasized the need to develop rapid and accurate methodologies to elucidate the relationship between the structure and function of RNA. Structural biology tools such as X-ray crystallography and Nuclear Magnetic Resonance are highly useful methods to obtain atomic-level resolution models of macromolecules. However, both methods have sample, time, and technical limitations that prevent their application to a number of macromolecules of interest. An emerging alternative to high-resolution structural techniques is to employ a hybrid approach that combines low-resolution shape information about macromolecules and their complexes from experimental hydrodynamic (e.g. analytical ultracentrifugation) and solution scattering measurements (e.g., solution X-ray or neutron scattering), with computational modeling to obtain atomic-level models. While promising, scattering methods rely on aggregation-free, monodispersed preparations and therefore the careful development of a quality control pipeline is fundamental to an unbiased and reliable structural determination. This review article describes hydrodynamic techniques that are highly valuable for homogeneity studies, scattering techniques useful to study the low-resolution shape, and strategies for computational modeling to obtain high-resolution 3D structural models of RNAs, proteins, and RNA-protein complexes.
A special purpose silicon compiler for designing supercomputing VLSI systems
NASA Technical Reports Server (NTRS)
Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.
1991-01-01
Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. Designing such complex systems requires a special purpose silicon compiler in which: the computational and communication structures of different numeric algorithms are taken into account to simplify the compiler design; the approach is macrocell based; and the software tools at different levels (from the algorithm down to the VLSI circuit layout) are integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate are reduced relative to silicon compilers based on PLAs, SLAs, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLAs, SLAs, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.
[Influence of mental rotation of objects on psychophysiological functions of women].
Chikina, L V; Fedorchuk, S V; Trushina, V A; Ianchuk, P I; Makarchuk, M Iu
2012-01-01
An integral part of modern human activity is work with computer systems, which in turn produces nervous-emotional tension. Hence, the problems of monitoring the psychophysiological state of workers, with the aim of preserving health and ensuring successful performance, and of applying rehabilitation measures, are topical. It is currently known that the efficiency of rehabilitation procedures rises when a complex of restorative programs is applied. Our previous investigation showed that mental rotation is capable of compensating for the consequences of nervous-emotional tension. Therefore, in the present work we investigated how a complex of spatial tasks developed by us influences the psychophysiological performance of female subjects, for whom the psycho-emotional tension associated with the use of computer technologies is more pronounced, and for whom the procedure of mental rotation is a more complex task than for men. The complex of spatial tasks applied in this work included mental rotation of simple objects (letters and digits), mental rotation of complex objects (geometrical figures), and mental rotation of complex objects with the use of short-term memory. Execution of the complex of spatial tasks reduced the time of simple and complex sensorimotor responses, raised short-term memory performance and mental working capacity, and improved nervous processes. Collectively, mental rotation of objects can be recommended as a rehabilitation resource for compensating the consequences of psycho-emotional strain, both for men and for women.
Nagula, Narsimha; Kunche, Sudeepa; Jaheer, Mohmed; Mudavath, Ravi; Sivan, Sreekanth; Ch, Sarala Devi
2018-01-01
Some novel transition metal [Cu(II), Ni(II) and Co(II)] complexes of a nalidixic acid hydrazone have been prepared and characterized by employing spectro-analytical techniques, viz. elemental analysis, ¹H-NMR, mass spectrometry, UV-Vis, IR, TGA-DTA, SEM-EDX, ESR and spectrophotometry studies. The HyperChem 7.5 software was used for geometry optimization of the title compound in its molecular and ionic forms. Quantum mechanical parameters, contour maps of the highest occupied molecular orbitals (HOMO) and lowest unoccupied molecular orbitals (LUMO), and the corresponding binding energy values were computed using the semi-empirical single-point PM3 method. Stoichiometric equilibrium studies of the metal complexes, carried out spectrophotometrically using Job's continuous-variation and mole-ratio methods, inferred the formation of 1:2 (ML₂) metal complexes in the respective systems. The title compound and its metal complexes, screened for antibacterial and antifungal properties, exemplified improved activity in the metal complexes. Studies of nuclease activity in the cleavage of CT-DNA and MTT assays of in vitro cytotoxic properties involving the metal complexes exhibited high activity. In addition, the DNA-binding properties of the Cu(II), Ni(II) and Co(II) complexes, investigated by electronic absorption and fluorescence measurements, revealed their good binding ability and good agreement between the Kb values obtained from the two techniques. Molecular docking studies were also performed to find the binding affinity of the synthesized compounds with DNA (PDB ID: 1N37) and thymidine phosphorylase from E. coli (PDB ID: 4EAF) protein targets.
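The continuous-variation (Job's) analysis used for the stoichiometry has a simple numerical signature: for an ML₂ complex the corrected absorbance peaks near a metal mole fraction of 1/3. The synthetic curve below stands in for the actual spectrophotometric data.

```python
import numpy as np

x = np.linspace(0.01, 0.99, 197)         # metal mole fraction across the series
# Idealized ML2 Job curve: complex concentration is limited by whichever
# component (metal, or ligand at 2 per complex) runs out first.
absorbance = np.minimum(x, (1 - x) / 2)
x_peak = x[np.argmax(absorbance)]
print(f"peak at metal fraction {x_peak:.2f} -> L:M = {(1 - x_peak) / x_peak:.1f} : 1")
```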
Complex space monofilar approximation of diffraction currents on a conducting half plane
NASA Technical Reports Server (NTRS)
Lindell, I. V.
1987-01-01
Simple approximation of the diffraction surface currents on a conducting half plane, due to an incoming plane wave, is obtained with a line current (a monofilar current) in complex space. Compared to an approximating current at the edge, the diffraction pattern is seen to improve by an order of magnitude for a minimal increase in computational effort. Thus, the inconvenient Fresnel integral functions can be avoided for quick calculations of diffracted fields, and the accuracy is good in directions other than along the half plane. The method can be applied to general problems involving planar metal edges.
The Computational Complexity of the Kakuro Puzzle, Revisited
NASA Astrophysics Data System (ADS)
Ruepp, Oliver; Holzer, Markus
We present a new proof of NP-completeness for the problem of solving instances of the Japanese pencil puzzle Kakuro (also known as Cross-Sum). While the NP-completeness of Kakuro puzzles has been shown before [T. Seta. The complexity of CROSS SUM. IPSJ SIG Notes, AL-84:51-58, 2002], there are still two interesting aspects to our proof: we show NP-completeness for a new variant of Kakuro that has not been investigated before and thus improves the aforementioned result. Moreover some parts of the proof have been generated automatically, using an interesting technique involving SAT solvers.
Enforcing compatibility and constraint conditions and information retrieval at the design action
NASA Technical Reports Server (NTRS)
Woodruff, George W.
1990-01-01
The design of complex entities is a multidisciplinary process involving several interacting groups and disciplines. There is a need to integrate the data in such environments to enhance collaboration between these groups and to enforce compatibility between dependent data entities. This paper discusses the implementation of a workstation-based CAD system that is integrated with a DBMS and an expert system (CLIPS), both implemented on a minicomputer, to provide such collaboration and compatibility-enforcement capabilities. The current implementation allows for a three-way link between the CAD system, the DBMS, and CLIPS. The engineering design process associated with the design and fabrication of sheet-metal housings for computers in a large computer manufacturing facility provides the basis for this prototype system.
Optical analysis of laser systems using interferometry
NASA Astrophysics Data System (ADS)
Viswanathan, V. K.; Liberman, I.; Lawrence, G.; Seery, B. D.
1980-06-01
It is noted that previous approaches to predicting focal spot parameters involved the digitization of interference patterns of the optical components and propagation of the complex amplitude and phase of the wave front throughout the system. The present paper describes an approach in which the computational procedure is extended to produce computer plots of the final emerging wave front. It is shown that this enables direct comparison with the experimentally produced wave front of the total system and makes possible the optical analysis, design, and possible optimization of laser systems. A description is given of the computational procedure and of the Twyman-Green and Smartt IR interferometers constructed to verify this approach. Finally, consideration is given to the implications of the results.
NASA Technical Reports Server (NTRS)
Tezduyar, Tayfun E.
1998-01-01
This is a final report as far as our work at University of Minnesota is concerned. The report describes our research progress and accomplishments in development of high performance computing methods and tools for 3D finite element computation of aerodynamic characteristics and fluid-structure interactions (FSI) arising in airdrop systems, namely ram-air parachutes and round parachutes. This class of simulations involves complex geometries, flexible structural components, deforming fluid domains, and unsteady flow patterns. The key components of our simulation toolkit are a stabilized finite element flow solver, a nonlinear structural dynamics solver, an automatic mesh moving scheme, and an interface between the fluid and structural solvers; all of these have been developed within a parallel message-passing paradigm.
The exact analysis of contingency tables in medical research.
Mehta, C R
1994-01-01
A unified view of exact nonparametric inference, with special emphasis on data in the form of contingency tables, is presented. While the concept of exact tests has been in existence since the early work of R. A. Fisher, the computational complexity involved in actually executing such tests precluded their use until fairly recently. Modern algorithmic advances, combined with the easy availability of inexpensive computing power, have renewed interest in exact methods of inference, especially because they remain valid in the face of small, sparse, imbalanced, or heavily tied data. After defining exact p-values in terms of the permutation principle, we reference algorithms for computing them. Several data sets are then analysed by both exact and asymptotic methods. We end with a discussion of the available software.
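The permutation-principle definition of an exact p-value becomes fully explicit for a 2×2 table with fixed margins, where the permutation distribution reduces to a hypergeometric enumeration (Fisher's exact test). A minimal sketch:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    # Two-sided exact p-value: condition on the margins and sum the
    # hypergeometric probabilities of all tables at least as extreme
    # (i.e., no more probable) than the observed one.
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2

    def p(a_):  # probability of a table with cell (1,1) equal to a_
        return comb(r1, a_) * comb(r2, c1 - a_) / comb(n, c1)

    p_obs = p(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs + 1e-12)

# A small, imbalanced table where an asymptotic chi-square test would be suspect.
print(f"exact two-sided p = {fisher_exact_2x2(8, 2, 1, 5):.4f}")
```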
Adding computationally efficient realism to Monte Carlo turbulence simulation
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1985-01-01
Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
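The device described, replacing an irrational spectrum by a close rational approximation so that turbulence can be generated with stable, explicit difference equations, is easiest to see in the first-order case, which corresponds to a Dryden-like longitudinal spectrum. The parameters below are illustrative:

```python
import numpy as np

def gust_series(n, dt, V=100.0, L=500.0, sigma=1.0, seed=0):
    # First-order rational spectrum approximation: white noise driven through
    # a stable explicit difference equation (a discrete AR(1) filter).
    rng = np.random.default_rng(seed)
    a = np.exp(-V * dt / L)             # pole set by airspeed and length scale
    b = sigma * np.sqrt(1.0 - a * a)    # makes the stationary variance sigma^2
    u = np.zeros(n)
    w = rng.standard_normal(n)
    for k in range(1, n):
        u[k] = a * u[k - 1] + b * w[k]
    return u

u = gust_series(100_000, dt=0.01)
print(f"sample std = {u.std():.3f} (target sigma = 1.000)")
```

Higher-order rational approximations, and cross-spectra for span-wise correlation, lead to coupled systems of such difference equations while preserving the low per-step cost that makes the approach attractive for real-time simulators.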
Improved ALE mesh velocities for complex flows
Bakosi, Jozsef; Waltz, Jacob I.; Morgan, Nathaniel Ray
2017-05-31
A key choice in the development of arbitrary Lagrangian-Eulerian solution algorithms is how to move the computational mesh. The most common approaches are smoothing and relaxation techniques, or to compute a mesh velocity field that produces smooth mesh displacements. We present a method in which the mesh velocity is specified by the irrotational component of the fluid velocity as computed from a Helmholtz decomposition, and excess compression of mesh cells is treated through a noniterative, local spring-force model. This approach allows distinct and separate control over rotational and translational modes. In conclusion, the utility of the new mesh motion algorithm is demonstrated on a number of 3D test problems, including problems that involve both shocks and significant amounts of vorticity.
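A minimal spectral sketch of the Helmholtz-decomposition step on a periodic grid follows; a production ALE code would perform the decomposition on its own (generally unstructured, non-periodic) mesh, so this is only a conceptual illustration.

```python
import numpy as np

def irrotational_part(u, v, lx=2 * np.pi, ly=2 * np.pi):
    # Spectral Helmholtz decomposition on a periodic grid: solve
    # laplacian(phi) = div(u) in Fourier space and return grad(phi),
    # the irrotational (curl-free) component of the velocity field.
    ny, nx = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                   # avoid 0/0 for the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    phi_h = (1j * KX * uh + 1j * KY * vh) / (-k2)    # potential of the divergence
    ui = np.real(np.fft.ifft2(1j * KX * phi_h))
    vi = np.real(np.fft.ifft2(1j * KY * phi_h))
    return ui, vi

# Projection check: applying the decomposition twice changes nothing.
rng = np.random.default_rng(3)
u, v = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))
ui, vi = irrotational_part(u, v)
ui2, vi2 = irrotational_part(ui, vi)
print(np.max(np.abs(ui2 - ui)))  # ~1e-15: idempotent, as a projection must be
```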
Lefkoff, L.J.; Gorelick, S.M.
1987-01-01
A FORTRAN-77 computer program is described that helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that uses Mathematical Programming System input format. The computer program creates the input files to be used by the optimization program. These files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the computer program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used by the computer program in conjunction with the response matrix optimization method. A unit stress is applied at each decision well and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions. (Author's abstract)
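The response-matrix method lends itself to a compact sketch: unit stresses at each decision well yield drawdown responses at control points, superposition turns head limits into linear constraints, and a linear program selects the optimal strategy. The matrix entries, limits and costs below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Response matrix: drawdown (m) at control point i per unit pumping rate at
# well j, as would be obtained by applying a unit stress in a simulation model.
R = np.array([[0.30, 0.05, 0.10],
              [0.08, 0.25, 0.12],
              [0.04, 0.10, 0.28]])
max_drawdown = np.array([2.0, 2.5, 1.8])   # allowed head decline at controls
capacity = 10.0                            # per-well pumping capacity

# Maximize total pumping subject to superposed drawdown limits
# (linprog minimizes, so the objective is negated).
res = linprog(c=-np.ones(3), A_ub=R, b_ub=max_drawdown,
              bounds=[(0.0, capacity)] * 3, method="highs")
print("optimal pumping rates:", np.round(res.x, 2), "| total:", round(-res.fun, 2))
```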
2011-01-01
Knowledge of the relative stabilities of alane (AlH3) complexes with electron donors is essential for identifying hydrogen storage materials for vehicular applications that can be regenerated by off-board methods; however, almost no thermodynamic data are available to make this assessment. To fill this gap, we employed the G4(MP2) method to determine heats of formation, entropies, and Gibbs free energies of formation for 38 alane complexes with NH(3−n)Rn (R = Me, Et; n = 0−3), pyridine, pyrazine, triethylenediamine (TEDA), quinuclidine, OH(2−n)Rn (R = Me, Et; n = 0−2), dioxane, and tetrahydrofuran (THF). Monomer, bis, and selected dimer complex geometries were considered. Using these data, we computed the thermodynamics of the key formation and dehydrogenation reactions that would occur during hydrogen delivery and alane regeneration, from which trends in complex stability were identified. These predictions were tested by synthesizing six amine−alane complexes involving trimethylamine, triethylamine, dimethylethylamine, TEDA, quinuclidine, and hexamine and obtaining upper limits of ΔG° for their formation from metallic aluminum. Combining these computational and experimental results, we establish a criterion for complex stability relevant to hydrogen storage that can be used to assess potential ligands prior to attempting synthesis of the alane complex. On the basis of this, we conclude that only a subset of the tertiary amine complexes considered and none of the ether complexes can be successfully formed by direct reaction with aluminum and regenerated in an alane-based hydrogen storage system. PMID:22962624
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
A CFD study of complex missile and store configurations in relative motion
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
An investigation was conducted from May 16, 1990 to August 31, 1994 on the development of computational fluid dynamics (CFD) methodologies for complex missiles and the store separation problem. These flowfields involved multiple-component configurations, where at least one of the objects was engaged in relative motion. The two most important issues that had to be addressed were: (1) the unsteadiness of the flowfields (time-accurate and efficient CFD algorithms for the unsteady equations), and (2) the generation of grid systems which would permit multiple and moving bodies in the computational domain (dynamic domain decomposition). The study produced two competing and promising methodologies, and their proof-of-concept cases, which have been reported in the open literature: (1) Unsteady solutions on dynamic, overlapped grids, which may also be perceived as moving, locally-structured grids, and (2) Unsteady solutions on dynamic, unstructured grids.
Digital templating for THA: a simple computer-assisted application for complex hip arthritis cases.
Hafez, Mahmoud A; Ragheb, Gad; Hamed, Adel; Ali, Amr; Karim, Said
2016-10-01
Total hip arthroplasty (THA) is the standard procedure for end-stage arthritis of the hip. Its technical success relies on preoperative planning of the surgical procedure and virtual setup of the operative performance. Digital hip templating is one methodology of preoperative planning for THA which requires a digital preoperative radiograph and a computer with special software. This is a prospective study involving 23 patients (25 hips) who were candidates for complex THA surgery (unilateral or bilateral). Digital templating is done by radiographic assessment using radiographic magnification correction, leg length discrepancy and correction measurements, acetabular component and femoral component templating, as well as neck resection measurement. The overall accuracy for templating the exact stem implant size is 81%. This percentage increased to 94% when considering sizing within 1 size. Digital templating has proven to be an effective, reliable and essential technique for preoperative planning and accurate prediction of THA sizing and alignment.
Lamb wave propagation in a restricted geometry composite pi-joint specimen
NASA Astrophysics Data System (ADS)
Blackshire, James L.; Soni, Som
2012-05-01
The propagation of elastic waves in a material can involve a number of complex physical phenomena, resulting in both subtle and dramatic effects on detected signal content. In recent years, the use of advanced methods for characterizing and imaging elastic wave propagation and scattering processes has increased, where, for example, scanning laser vibrometry and advanced computational models have been used very effectively to identify propagating modes, scattering phenomena, and damage feature interactions. In the present effort, the propagation of Lamb waves within a narrow, constrained-geometry composite pi-joint structure is studied using 3D finite element models and scanning laser vibrometry measurements, where the effects of varying sample thickness, complex joint curvatures, and restricted structure geometries are highlighted, and a direct comparison of computational and experimental results is provided for simulated and realistic geometry composite pi-joint samples.
Pupillary dynamics reveal computational cost in sentence planning.
Sevilla, Yamila; Maldonado, Mora; Shalóm, Diego E
2014-01-01
This study investigated the computational cost associated with grammatical planning in sentence production. We measured people's pupillary responses as they produced spoken descriptions of depicted events. We manipulated the syntactic structure of the target by training subjects to use different types of sentences following a colour cue. The results showed a higher increase in pupil size for the production of passive and object-dislocated sentences than for active canonical subject-verb-object sentences, indicating that more cognitive effort is associated with more complex noncanonical thematic order. We also manipulated the time at which the cue that triggered structure-building processes was presented. The differential increase in pupil diameter for more complex sentences was shown to rise earlier as the colour cue was presented earlier, suggesting that the observed pupillary changes are due to differential demands in relatively independent structure-building processes during grammatical planning. Task-evoked pupillary responses provide a reliable measure to study the cognitive processes involved in sentence production.
Optical Computers and Space Technology
NASA Technical Reports Server (NTRS)
Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela
1995-01-01
The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates to provide the way out of the extreme limitations imposed on the growth of speed and complexity of today's computations by conventional electronic logic circuits. The new optical technology has increased the demand for high quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high quality optical materials is processible in a microgravity environment. Microgravity processing can induce improved order in these materials and could have a significant impact on the development of optical computers. We will discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties which are quite useful for optical computer technology.
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
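For readers unfamiliar with LSH, the sketch below shows one standard family (random-hyperplane signatures for cosine similarity) on a single machine; the paper's four variants and their Hadoop deployment are not reproduced, and all names and sizes are illustrative.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(42)
dim, n_bits = 64, 16
planes = rng.standard_normal((n_bits, dim))   # random hyperplanes

def signature(x):
    # Sign pattern of projections onto the hyperplanes; vectors with
    # high cosine similarity collide with high probability.
    return (planes @ x > 0).tobytes()

# Bucket a database of vectors by signature, so a query is compared
# only against its own bucket instead of the full database.
db = rng.standard_normal((10000, dim))
buckets = defaultdict(list)
for i, v in enumerate(db):
    buckets[signature(v)].append(i)

q = db[0] + 0.01 * rng.standard_normal(dim)   # a near-duplicate query
candidates = buckets[signature(q)]
print(len(candidates), 0 in candidates)       # usually finds item 0
```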
Assurance Evaluation for OSS Adoption in a Telco Context
NASA Astrophysics Data System (ADS)
Ardagna, Claudio A.; Banzi, Massimo; Damiani, Ernesto; El Ioini, Nabil; Frati, Fulvio
Software Assurance (SwA) is a complex concept that involves different stages of a software development process and may be defined differently depending on its focus, for instance software quality, security, or dependability. In Computer Science, the term assurance refers to all activities necessary to provide enough confidence that a software product will satisfy its users' functional and non-functional requirements.
ERIC Educational Resources Information Center
Lee, Kerry
2011-01-01
Although the term "technology" means different things to different people, most would generally agree that it is about "stuff." For some it may be more complex than this, and for others it may simply involve using or studying high-tech gadgetry, such as computers and iPhones. Understanding the interdependence between design and culture is a…
NASA Technical Reports Server (NTRS)
1997-01-01
Session MP4 includes short reports on: (1) Face Recognition in Microgravity: Is Gravity Direction Involved in the Inversion Effect?; (2) Motor Timing under Microgravity; (3) Perceived Self-Motion Assessed by Computer-Generated Animations: Complexity and Reliability; (4) Prolonged Weightlessness Reference Frames and Visual Symmetry Detection; (5) Mental Representation of Gravity During a Locomotor Task; and (6) Haptic Perception in Weightlessness: A Sense of Force or a Sense of Effort?
Approaches and possible improvements in the area of multibody dynamics modeling
NASA Technical Reports Server (NTRS)
Lips, K. W.; Singh, R.
1987-01-01
A wide-ranging look is taken at issues involved in the dynamic modeling of complex, multibodied orbiting space systems. Capabilities and limitations of two major codes (DISCOS, TREETOPS) are assessed and possible extensions to the CONTOPS software are outlined. In addition, recommendations are made concerning the direction future development should take in order to achieve higher fidelity, more computationally efficient multibody software solutions.
Hull, Emily A; West, Aaron C; Pestovsky, Oleg; Kristian, Kathleen E; Ellern, Arkady; Dunne, James F; Carraher, Jack M; Bakac, Andreja; Windus, Theresa L
2015-02-28
Transition metal complexes (NH3)5CoX(2+) (X = CH3, Cl) and L(H2O)MX(2+), where M = Rh or Co, X = CH3, NO, or Cl, and L is a macrocyclic N4 ligand are examined by both experiment and computation to better understand their electronic spectra and associated photochemistry. Specifically, irradiation into weak visible bands of nitrosyl and alkyl complexes (NH3)5CoCH3(2+) and L(H2O)M(III)X(2+) (X = CH3 or NO) leads to photohomolysis that generates the divalent metal complex and ˙CH3 or ˙NO, respectively. On the other hand, when X = halide or NO2, visible light photolysis leads to dissociation of X(-) and/or cis/trans isomerization. Computations show that visible bands for alkyl and nitrosyl complexes involve transitions from M-X bonding orbitals and/or metal d orbitals to M-X antibonding orbitals. In contrast, complexes with X = Cl or NO2 exhibit only d-d bands in the visible, so that homolytic cleavage of the M-X bond requires UV photolysis. UV-Vis spectra are not significantly dependent on the structure of the equatorial ligands, as shown by similar spectral features for (NH3)5CoCH3(2+) and L(1)(H2O)CoCH3(2+).
Multilayer modeling and analysis of human brain networks
2017-01-01
Understanding how the human brain is structured, and how its architecture is related to function, is of paramount importance for a variety of applications, including but not limited to new ways to prevent, deal with, and cure brain diseases, such as Alzheimer's or Parkinson's, and psychiatric disorders, such as schizophrenia. The recent advances in structural and functional neuroimaging, together with the increasing trend toward interdisciplinary approaches involving computer science, mathematics, and physics, are fostering interesting results from computational neuroscience that are quite often based on the analysis of complex network representations of the human brain. In recent years, this representation experienced a theoretical and computational revolution that is now reaching neuroscience, allowing us to cope with the increasing complexity of the human brain across multiple scales and in multiple dimensions and to model structural and functional connectivity from new perspectives, often combined with each other. In this work, we will review the main achievements obtained from interdisciplinary research based on magnetic resonance imaging and establish, de facto, the birth of multilayer network analysis and modeling of the human brain. PMID:28327916
Vazart, Fanny; Calderini, Danilo; Puzzarini, Cristina; Skouteris, Dimitrios
2017-01-01
We propose an integrated computational strategy aimed at providing reliable thermochemical and kinetic information on the formation processes of astrochemical complex organic molecules. The approach involves state-of-the-art quantum-mechanical computations, second-order vibrational perturbation theory, and kinetic models based on capture and transition state theory together with the master equation approach. Notably, tunneling, quantum reflection, and leading anharmonic contributions are accounted for in our model. Formamide has been selected as a case study in view of its interest as a precursor in the abiotic amino acid synthesis. After validation of the level of theory chosen for describing the potential energy surface, we have investigated several pathways of the OH+CH2NH and NH2+HCHO reaction channels. Our results indicate that both reaction channels are essentially barrier-less (in the sense that all relevant transition states lie below or only marginally above the reactants) and can, therefore, occur under the low temperature conditions of interstellar objects provided that tunneling is properly taken into account. PMID:27689448
Numerical solution of the Navier-Stokes equations by discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Krasnov, M. M.; Kuchugov, P. A.; E Ladonkina, M.; E Lutsky, A.; Tishkin, V. F.
2017-02-01
Detailed unstructured grids and numerical methods of high accuracy are frequently used in the numerical simulation of gasdynamic flows in areas with complex geometry. The Galerkin method with discontinuous basis functions, or Discontinuous Galerkin Method (DGM), works well in dealing with such problems. This approach offers a number of advantages inherent to both finite-element and finite-difference approximations. Moreover, the present paper shows that DGM schemes can be viewed as an extension of the Godunov method to piecewise-polynomial functions. As is known, DGM involves significant computational complexity, and this brings up the question of ensuring the most effective use of all the computational capacity available. In order to speed up the calculations, an operator programming method has been applied while creating the computational module. This approach makes possible compact encoding of mathematical formulas and facilitates the porting of programs to parallel architectures, such as NVidia CUDA and Intel Xeon Phi. With the software package based on DGM, numerical simulations of supersonic flow past solid bodies have been carried out. The numerical results are in good agreement with the experimental ones.
On some stochastic formulations and related statistical moments of pharmacokinetic models.
Matis, J H; Wehrly, T E; Metzler, C M
1983-02-01
This paper presents the deterministic and stochastic model for a linear compartment system with constant coefficients, and it develops expressions for the mean residence times (MRT) and the variances of the residence times (VRT) for the stochastic model. The expressions are relatively simple computationally, involving primarily matrix inversion, and they are elegant mathematically, in avoiding eigenvalue analysis and the complex domain. The MRT and VRT provide a set of new meaningful response measures for pharmacokinetic analysis and they give added insight into the system kinetics. The new analysis is illustrated with an example involving the cholesterol turnover in rats.
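The moment computations the authors describe can be sketched as follows, assuming the standard phase-type convention in which row i of the rate matrix collects the flows out of compartment i; the rate constants are illustrative, not from the cholesterol example.

```python
import numpy as np

# Two-compartment model with elimination only from compartment 1.
# A is the generator of the underlying Markov residence process:
# row i holds the flows out of compartment i (diagonal is negative).
k10, k12, k21 = 0.5, 0.3, 0.2
A = np.array([[-(k10 + k12), k12],
              [k21, -k21]])

Ainv = np.linalg.inv(A)            # only matrix inversion is required,
ones = np.ones(2)                  # no eigenvalue analysis
mrt = -Ainv @ ones                 # mean residence times (MRT) by start compartment
second = 2 * (Ainv @ Ainv) @ ones  # second moments of residence time
vrt = second - mrt**2              # variances of residence times (VRT)
print(mrt)                         # [ 5. 10.] for these constants
print(vrt)
```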
NASA Astrophysics Data System (ADS)
Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd
2009-05-01
Infrastructure management (and its associated processes) is complex to understand and perform, and it is therefore hard to make efficient, effective, and informed decisions. The management involves a multi-faceted operation that requires the most robust data fusion, visualization and decision making. In order to protect and build sustainable critical assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry, local and federal government agencies. IRSV is being designed to accommodate the essential needs from the following aspects: 1) Better understanding and enforcement of the complex inspection process that can bridge the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) Aggregation, representation and fusion of complex multi-layered heterogeneous data (i.e. infrared imaging, aerial photos and ground-mounted LIDAR etc.) with domain application knowledge to support a machine-understandable recommendation system; 3) Robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) Integration of these needs through the flexible Service-oriented Architecture (SOA) framework to compose and provide services on-demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and infrastructure monitoring both periodically (annually, monthly, even daily if needed) as well as after extreme events.
A solution to the surface intersection problem. [Boolean functions in geometric modeling
NASA Technical Reports Server (NTRS)
Timer, H. G.
1977-01-01
An application-independent geometric model within a data base framework should support the use of Boolean operators which allow the user to construct a complex model by appropriately combining a series of simple models. The use of these operators leads to the concept of implicitly and explicitly defined surfaces. With an explicitly defined model, the surface area may be computed by simply summing the surface areas of the bounding surfaces. For an implicitly defined model, the surface area computation must deal with active and inactive regions. Because the surface intersection problem involves four unknowns and its solution is a space curve, the parametric coordinates of each surface must be determined as a function of the arc length. Various subproblems involved in the general intersection problem are discussed, and the mathematical basis for their solution is presented along with a program written in FORTRAN IV for implementation on the IBM 370 TSO system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
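A minimal self-contained sketch of a control-variates estimator follows, using an analytic surrogate with known mean in place of the paper's model problem; the quantities and names are illustrative only.

```python
import numpy as np

# Control variates: reduce the variance of a Monte Carlo estimate of
# E[Q] using a correlated quantity C whose mean is known exactly.
rng = np.random.default_rng(7)
n = 1000
theta = rng.lognormal(0.0, 0.5, n)      # uncertain parameter samples

Q = 1.0 / (1.0 + theta)                 # quantity of interest (illustrative)
C = 1.0 - theta                         # crude surrogate, correlated with Q
mu_C = 1.0 - np.exp(0.125)              # exact E[C] from lognormal moments

beta = np.cov(Q, C)[0, 1] / np.var(C)   # estimated optimal coefficient
Q_cv = Q - beta * (C - mu_C)            # control-variates adjusted samples

print(Q.mean(), Q.std(ddof=1) / np.sqrt(n))        # plain Monte Carlo
print(Q_cv.mean(), Q_cv.std(ddof=1) / np.sqrt(n))  # smaller standard error
```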
[INVITED] Computational intelligence for smart laser materials processing
NASA Astrophysics Data System (ADS)
Casalino, Giuseppe
2018-03-01
Computational intelligence (CI) involves using a computer algorithm to capture hidden knowledge from data and to use it for training an "intelligent machine" to make complex decisions without human intervention. As simulation is becoming more prevalent from design and planning to manufacturing and operations, laser material processing can also benefit from computer-generated knowledge through soft computing. This work is a review of the state-of-the-art on the methodology and applications of CI in laser materials processing (LMP), which is nowadays receiving increasing interest from world class manufacturers and Industry 4.0. The focus is on the methods that have been proven effective and robust in solving several problems in welding, cutting, drilling, surface treating and additive manufacturing using the laser beam. After a basic description of the most common computational intelligences employed in manufacturing, four sections, namely laser joining, machining, surface, and additive, cover the most recent applications in the already extensive literature regarding CI in LMP. Finally, emerging trends and future challenges are identified and discussed.
Baresic, Mario; Salatino, Silvia; Kupr, Barbara
2014-01-01
Skeletal muscle tissue shows an extraordinary cellular plasticity, but the underlying molecular mechanisms are still poorly understood. Here, we use a combination of experimental and computational approaches to unravel the complex transcriptional network of muscle cell plasticity centered on the peroxisome proliferator-activated receptor γ coactivator 1α (PGC-1α), a regulatory nexus in endurance training adaptation. By integrating data on genome-wide binding of PGC-1α and gene expression upon PGC-1α overexpression with comprehensive computational prediction of transcription factor binding sites (TFBSs), we uncover a hitherto-underestimated number of transcription factor partners involved in mediating PGC-1α action. In particular, principal component analysis of TFBSs at PGC-1α binding regions predicts that, besides the well-known role of the estrogen-related receptor α (ERRα), the activator protein 1 complex (AP-1) plays a major role in regulating the PGC-1α-controlled gene program of the hypoxia response. Our findings thus reveal the complex transcriptional network of muscle cell plasticity controlled by PGC-1α. PMID:24912679
Regulation of the protein-conducting channel by a bound ribosome
Gumbart, James; Trabuco, Leonardo G.; Schreiner, Eduard; Villa, Elizabeth; Schulten, Klaus
2009-01-01
During protein synthesis, it is often necessary for the ribosome to form a complex with a membrane-bound channel, the SecY/Sec61 complex, in order to translocate nascent proteins across a cellular membrane. Structural data on the ribosome-channel complex are currently limited to low-resolution cryo-electron microscopy maps, including one showing a bacterial ribosome bound to a monomeric SecY complex. Using that map along with available atomic-level models of the ribosome and SecY, we have determined, through molecular dynamics flexible fitting (MDFF), an atomic-resolution model of the ribosome-channel complex. We characterized computationally the sites of ribosome-SecY interaction within the complex and determined the effect of ribosome binding on the SecY channel. We also constructed a model of a ribosome in complex with a SecY dimer by adding a second copy of SecY to the MDFF-derived model. The study involved 2.7-million-atom simulations totaling nearly 50 ns. PMID:19913480
Structured analysis and modeling of complex systems
NASA Technical Reports Server (NTRS)
Strome, David R.; Dalrymple, Mathieu A.
1992-01-01
The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.
Two-dimensional nonsteady viscous flow simulation on the Navier-Stokes computer miniNode
NASA Technical Reports Server (NTRS)
Nosenchuck, Daniel M.; Littman, Michael G.; Flannery, William
1986-01-01
The needs of large-scale scientific computation are outpacing the growth in performance of mainframe supercomputers. In particular, problems in fluid mechanics involving complex flow simulations require far more speed and capacity than that provided by current and proposed Class VI supercomputers. To address this concern, the Navier-Stokes Computer (NSC) was developed. The NSC is a parallel-processing machine, comprised of individual Nodes, each comparable in performance to current supercomputers. The global architecture is that of a hypercube, and a 128-Node NSC has been designed. New architectural features, such as a reconfigurable many-function ALU pipeline and a multifunction memory-ALU switch, have provided the capability to efficiently implement a wide range of algorithms. Efficient algorithms typically involve numerically intensive tasks, which often include conditional operations. These operations may be efficiently implemented on the NSC without, in general, sacrificing vector-processing speed. To illustrate the architecture, programming, and several of the capabilities of the NSC, the simulation of two-dimensional, nonsteady viscous flows on a prototype Node, called the miniNode, is presented.
A New Approach for Constructing Highly Stable High Order CESE Schemes
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2010-01-01
A new approach is devised to construct high order CESE schemes which would avoid the common shortcomings of traditional high order schemes, including: (a) susceptibility to computational instabilities; (b) computational inefficiency due to their local implicit nature (i.e., at each mesh point, a system of linear/nonlinear equations involving all the mesh variables associated with that mesh point must be solved); (c) use of large and elaborate stencils, which complicates boundary treatments and also makes efficient parallel computing much harder; (d) difficulties in applications involving complex geometries; and (e) use of problem-specific techniques which are needed to overcome stability problems but often cause undesirable side effects. In fact it will be shown that, with the aid of a conceptual leap, one can build from a given 2nd-order CESE scheme its 4th-, 6th-, 8th-,... order versions which have the same stencil and same stability conditions as the 2nd-order scheme, and also retain all other advantages of the latter scheme. A sketch of multidimensional extensions will also be provided.
NASA Astrophysics Data System (ADS)
Singh, Th. David; Sumitra, Ch.; Yaiphaba, N.; Devi, H. Debecca; Devi, M. Indira; Singh, N. Rajmuhon
2005-04-01
The coordination chemistry of reduced glutathione (GSH) is of great importance as it acts as an excellent model system for the binding of metal ions. GSH complexation with metal ions is involved in the toxicology of different metal ions. Its coordination behaviour differs for soft and hard metal ions because of the structure of GSH and its different potential binding sites. In our work we have studied two chemically dissimilar metal ions, viz. Pr(III), which prefers hard donor sites such as carboxylic groups, and Zn(II), a soft metal ion which prefers peptide-NH and sulphydryl groups. The absorption difference and comparative absorption spectroscopy involving 4f-4f transitions of the heterobimetallic complexation of GSH with Pr(III) and Zn(II) has been explored in aqueous and aquated organic solvents. The variations in the energy parameters, namely the Slater-Condon (FK), Racah (EK) and Lande (ξ4f) parameters, the nephelauxetic parameter (β) and the bonding parameter (b1/2), are computed to explain the nature of complexation.
Martinez-Macias, Claudia; Chen, Mingyang; Dixon, David A.; ...
2015-07-03
We formed a family of HY zeolite-supported cationic organoiridium carbonyl complexes by reaction of Ir(CO)2(acac) (acac = acetylacetonate) to form supported Ir(CO)2 complexes, which were treated at 298 K and 1 atm with flowing gas-phase reactants, including C2H4, H2, 12CO, 13CO, and D2O. Mass spectrometry was used to identify effluent gases, and infrared and X-ray absorption spectroscopies were used to characterize the supported species, with the results bolstered by DFT calculations. The support is crystalline and presents a nearly uniform array of bonding sites for the iridium species, so these were characterized by a high degree of uniformity, which allowed a precise determination of the species involved in the replacement, for example, of one CO ligand of each Ir(CO)2 complex with ethylene. The supported species include the following: Ir(CO)2, Ir(CO)(C2H4)2, Ir(CO)(C2H4), Ir(CO)(C2H5), and (tentatively) Ir(CO)(H). The data determine a reaction network involving all of these species.
Hernández-Valdés, Daniel; Rodríguez-Riera, Zalua; Díaz-García, Alicia; Benoist, Eric; Jáuregui-Haza, Ulises
2016-08-01
The development of novel radiopharmaceuticals for nuclear medicine based on M(CO)3 (M = Tc, Re) complexes has attracted great attention. The versatility of this core and the easy production of the fac-[M(CO)3(H2O)3](+) precursor could explain this interest. The main characteristics of these tricarbonyl complexes are the high substitution stability of the three CO ligands and the corresponding lability of the coordinated water molecules, yielding, via easy exchange of a variety of bi- and tridentate ligands, complexes of very high kinetic stability. Here, a computational study of different tricarbonyl complexes of Re(I) and Tc(I) was performed using density functional theory. The solvent effect was simulated using the polarizable continuum model. These structures were used as a starting point to investigate the relative stabilities of tricarbonyl complexes with various tridentate ligands. These complexes included an iminodiacetic acid unit for tridentate coordination to the fac-[M(CO)3](+) moiety (M = Re, Tc), an aromatic ring system bearing a functional group (-NO2, -NH2, and -Cl) as a linking site model, and a tethering moiety (a methylene, ethylene, propylene, butylene, or pentylene bridge) between the linking and coordinating sites. The optimized complexes showed geometries comparable to those inferred from X-ray data. In general, the Re complexes were more stable than the corresponding Tc complexes. Furthermore, using NH2 as the functional group, a medium length carbon chain, and ortho substitution increased complex stability. All of the bonds involving the metal center presented a closed shell interaction with dative or covalent character, and the strength of these bonds decreased in the sequence Tc-CO > Tc-O > Tc-N.
Reddy Chichili, Vishnu Priyanka; Kumar, Veerendra; Sivaraman, J.
2016-01-01
Protein-protein interactions are key events controlling several biological processes. We have developed and employed a method to trap transiently interacting protein complexes for structural studies using glycine-rich linkers to fuse interacting partners, one of which is unstructured. Initial steps involve isothermal titration calorimetry to identify the minimum binding region of the unstructured protein in its interaction with its stable binding partner. This is followed by computational analysis to identify the approximate site of the interaction and to design an appropriate linker length. Subsequently, fused constructs are generated and characterized using size exclusion chromatography and dynamic light scattering experiments. The structure of the chimeric protein is then solved by crystallization, and validated both in vitro and in vivo by substituting key interacting residues of the full length, unlinked proteins with alanine. This protocol offers the opportunity to study crucial and currently unattainable transient protein interactions involved in various biological processes. PMID:26985443
EPIBLASTER-fast exhaustive two-locus epistasis detection strategy using graphical processing units
Kam-Thong, Tony; Czamara, Darina; Tsuda, Koji; Borgwardt, Karsten; Lewis, Cathryn M; Erhardt-Lehmann, Angelika; Hemmer, Bernhard; Rieckmann, Peter; Daake, Markus; Weber, Frank; Wolf, Christiane; Ziegler, Andreas; Pütz, Benno; Holsboer, Florian; Schölkopf, Bernhard; Müller-Myhsok, Bertram
2011-01-01
Detection of epistatic interaction between loci has been postulated to provide a more in-depth understanding of the complex biological and biochemical pathways underlying human diseases. Studying the interaction between two loci is the natural progression following traditional and well-established single locus analysis. However, the added costs and time duration required for the computation involved have thus far deterred researchers from pursuing a genome-wide analysis of epistasis. In this paper, we propose a method allowing such analysis to be conducted very rapidly. The method, dubbed EPIBLASTER, is applicable to case–control studies and consists of a two-step process in which the difference in Pearson's correlation coefficients is computed between controls and cases across all possible SNP pairs as an indication of significant interaction warranting further analysis. For the subset of interactions deemed potentially significant, a second-stage analysis is performed using the likelihood ratio test from the logistic regression to obtain the P-value for the estimated coefficients of the individual effects and the interaction term. The algorithm is implemented using the parallel computational capability of commercially available graphical processing units to greatly reduce the computation time involved. In the current setup and example data sets (211 cases, 222 controls, 299468 SNPs; and 601 cases, 825 controls, 291095 SNPs), this coefficient evaluation stage can be completed in roughly 1 day. Our method allows for exhaustive and rapid detection of significant SNP pair interactions without imposing significant marginal effects of the single loci involved in the pair. PMID:21150885
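The first-stage screening statistic can be computed for all SNP pairs at once as a matrix product, as the following plain-numpy sketch on simulated genotypes shows; the published implementation runs this stage on GPUs and adds the second-stage logistic regression, both omitted here.

```python
import numpy as np

# Simulated 0/1/2 genotype matrices for cases and controls; the sizes
# are arbitrary and much smaller than the paper's data sets.
rng = np.random.default_rng(3)
n_cases, n_controls, n_snps = 200, 220, 500
cases = rng.integers(0, 3, (n_cases, n_snps)).astype(float)
controls = rng.integers(0, 3, (n_controls, n_snps)).astype(float)

def corr(X):
    # All-pairs Pearson correlation via one matrix product.
    Z = (X - X.mean(0)) / X.std(0)      # column-standardize
    return (Z.T @ Z) / X.shape[0]

# Screening statistic: difference of correlations between groups;
# large |delta| flags a SNP pair for second-stage analysis.
delta = corr(cases) - corr(controls)
i, j = np.unravel_index(np.abs(np.triu(delta, 1)).argmax(), delta.shape)
print(i, j, delta[i, j])                # most promising SNP pair
```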
NASA Astrophysics Data System (ADS)
Buddala, Raviteja; Mahapatra, Siba Sankar
2017-11-01
Flexible flow shop (or hybrid flow shop) scheduling problem is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages) with each stage having only one machine. If any stage contains more than one machine for providing alternate processing facility, then the problem becomes a flexible flow shop problem (FFSP). FFSP, which contains all the complexities involved in a simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (Non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and the JAYA algorithm are chosen for the study because they are not only recent meta-heuristics but also do not require tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after a few iterations and get trapped at local optima. To alleviate such drawbacks, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by genetic algorithms) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
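For reference, the canonical parameter-free JAYA update is sketched below on a continuous test function; applying it to the FFSP additionally requires a discrete job-permutation encoding plus the paper's local search and mutation strategy, none of which are shown.

```python
import numpy as np

# JAYA: each solution moves toward the population's best member and
# away from its worst; there are no algorithm-specific parameters.
rng = np.random.default_rng(0)
pop, dim = 20, 5
X = rng.uniform(-5, 5, (pop, dim))
f = lambda X: (X**2).sum(axis=1)        # sphere test function

for _ in range(200):
    fit = f(X)
    best, worst = X[fit.argmin()], X[fit.argmax()]
    r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
    X_new = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
    # Greedy selection: keep a move only if it improves the solution.
    improved = f(X_new) < fit
    X[improved] = X_new[improved]

print(f(X).min())                        # approaches 0 as iterations grow
```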
Computational Investigation of Amine–Oxygen Exciplex Formation
Haupert, Levi M.; Simpson, Garth J.; Slipchenko, Lyudmila V.
2012-01-01
It has been suggested that fluorescence from amine-containing dendrimer compounds could be the result of a charge transfer between amine groups and molecular oxygen [Chu, C.-C.; Imae, T. Macromol. Rapid Commun. 2009, 30, 89.]. In this paper we employ equation-of-motion coupled cluster computational methods to study the electronic structure of an ammonia–oxygen model complex to examine this possibility. The results reveal several bound electronic states with charge transfer character with emission energies generally consistent with previous observations. However, further work involving confinement, solvent, and amine structure effects will be necessary for more rigorous examination of the charge transfer fluorescence hypothesis. PMID:21812447
Ho, Pang-Yen; Chuang, Guo-Syong; Chao, An-Chong; Li, Hsing-Ya
2005-05-01
The capacity of complex biochemical reaction networks (consisting of 11 coupled non-linear ordinary differential equations) to show multiple steady states was investigated. The system involved esterification of ethanol and oleic acid by lipase in an isothermal continuous stirred tank reactor (CSTR). The Deficiency One Algorithm and Subnetwork Analysis were applied to determine the steady state multiplicity. A set of rate constants and two corresponding steady states are computed. The phenomena of bistability, hysteresis and bifurcation are discussed. Moreover, the capacity for steady state multiplicity is extended to the family of the studied reaction networks.
Aeroelastic-Acoustics Simulation of Flight Systems
NASA Technical Reports Server (NTRS)
Gupta, Kajal K.; Choi, S.; Ibrahim, A.
2009-01-01
This paper describes the details of a numerical finite element (FE) based analysis procedure and a resulting code for the simulation of the acoustics phenomenon arising from aeroelastic interactions. Both CFD and structural simulations are based on FE discretization employing unstructured grids. The sound pressure level (SPL) on structural surfaces is calculated from the root mean square (RMS) of the unsteady pressure and the acoustic wave frequencies are computed from a fast Fourier transform (FFT) of the unsteady pressure distribution as a function of time. The resulting tool proves to be unique as it is designed to analyze complex practical problems, involving large scale computations, in a routine fashion.
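The SPL and frequency post-processing described above amounts to a few lines. The sketch below uses a synthetic pressure history and the standard 20 µPa reference pressure for air; the reference value is an assumption, since the abstract does not state one.

```python
import numpy as np

# Synthetic unsteady surface pressure: two tones (Pa), sampled at fs.
fs = 10000.0                             # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
p = 2.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

# SPL from the RMS of the fluctuating pressure (mean removed).
p_ref = 20e-6                            # assumed reference pressure (Pa, air)
p_rms = np.sqrt(np.mean((p - p.mean())**2))
spl = 20 * np.log10(p_rms / p_ref)       # sound pressure level (dB)

# Acoustic wave frequencies from an FFT of the pressure time history.
spec = np.abs(np.fft.rfft(p - p.mean()))
freqs = np.fft.rfftfreq(len(p), 1.0 / fs)
print(spl, freqs[spec.argmax()])         # SPL and dominant frequency (Hz)
```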
Developing the human-computer interface for Space Station Freedom
NASA Technical Reports Server (NTRS)
Holden, Kritina L.
1991-01-01
For the past two years, the Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center has been involved in prototyping and prototype reviews in support of the definition phase of the Space Station Freedom program. On the Space Station, crew members will be interacting with multi-monitor workstations where interaction with several displays at one time will be common. The HCIL has conducted several experiments to begin to address design issues for this complex system. Experiments have dealt with the design of ON/OFF indicators, the movement of the cursor across multiple monitors, and the importance of various windowing capabilities for users performing multiple tasks simultaneously.
The design of multiplayer online video game systems
NASA Astrophysics Data System (ADS)
Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.
2003-11-01
The distributed Multiplayer Online Game (MOG) system is complex since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review and analyze the current MOG system architecture, followed by an evaluation. Furthermore, we propose a clustered-server architecture to provide a scalable solution together with a region-oriented allocation strategy. Two key issues, i.e. interest management and synchronization, are discussed in depth. Some preliminary ideas to deal with the identified problems are described.
Dynamic Deployment Simulations of Inflatable Space Structures
NASA Technical Reports Server (NTRS)
Wang, John T.
2005-01-01
The feasibility of using the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method in LSDYNA to simulate the dynamic deployment of inflatable space structures is investigated. The CV and ALE methods were used to predict the inflation deployments of three folded tube configurations. The CV method was found to be a simple and computationally efficient method that may be adequate for modeling slow inflation deployment since the inertia of the inflation gas can be neglected. The ALE method was found to be very computationally intensive since it involves solving three conservation equations for the fluid as well as dealing with complex fluid-structure interactions.
NASA Astrophysics Data System (ADS)
Victory Devi, Ch.; Rajmuhon Singh, N.
2011-10-01
The interaction of uracil with Nd(III) has been explored in the presence and absence of Zn(II) using comparative absorption spectroscopy involving the 4f-4f transitions in different solvents. The complexation of uracil with Nd(III) is indicated by the change in intensity of the 4f-4f bands, expressed in terms of significant changes in oscillator strength and Judd-Ofelt parameters. Intensification of these bands became more prominent in the presence of Zn(II), suggesting a stimulative effect of Zn(II) on the complexation of Nd(III) with uracil. Other spectral parameters, namely the Slater-Condon (Fk), nephelauxetic (β), bonding (b1/2) and percent covalency (δ) parameters, are computed to correlate the simultaneous binding of the metal ions with uracil. The sensitivity of the observed 4f-4f transitions toward minor coordination changes around Nd(III) has been used to monitor the simultaneous coordination of uracil with Nd(III) and Zn(II). The variation of intensities (oscillator strengths and Judd-Ofelt parameters) of the 4f-4f bands during complexation has helped in following the heterobimetallic complexation of uracil. The rate of complexation with respect to the hypersensitive transition was evaluated. The energy of activation and thermodynamic parameters for the complexation reaction were also determined.
Extending IPsec for Efficient Remote Attestation
NASA Astrophysics Data System (ADS)
Sadeghi, Ahmad-Reza; Schulz, Steffen
When establishing a VPN to connect different sites of a network, the integrity of the involved VPN endpoints is often a major security concern. Based on the Trusted Platform Module (TPM), available in many computing platforms today, remote attestation mechanisms can be used to evaluate the internal state of remote endpoints automatically. However, existing protocols and extensions are either unsuited for use with IPsec or impose considerable additional implementation complexity and protocol overhead.
Unsteady, one-dimensional gas dynamics computations using a TVD type sequential solver
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei
1992-01-01
The efficacy of high resolution convection schemes to resolve sharp gradients in unsteady, 1D flows is examined using the TVD concept based on a sequential solution algorithm. Two unsteady flow problems are considered: the interaction of various waves in a shock tube with closed reflecting ends, and the unsteady gas dynamics in a tube with closed ends subject to an initial pressure perturbation. It is concluded that high accuracy convection schemes in a sequential solution framework are capable of resolving discontinuities in unsteady flows involving complex gas dynamics. However, a sufficient amount of dissipation is required to suppress oscillations near discontinuities in the sequential approach, which leads to smearing of the solution profiles.
Fluid-Structure Interaction Modeling of the Reefed Stages of the Orion Spacecraft Main Parachutes
NASA Astrophysics Data System (ADS)
Boswell, Cody W.
Spacecraft parachutes are typically used in multiple stages, starting with a "reefed" stage where a cable along the parachute skirt constrains the diameter to be less than the diameter in the subsequent stage. After a certain period of time during the descent, the cable is cut and the parachute "disreefs" (i.e. expands) to the next stage. Computing the parachute shape at the reefed stage and fluid-structure interaction (FSI) modeling during the disreefing involve computational challenges beyond those we have in FSI modeling of fully-open spacecraft parachutes. These additional challenges are created by the increased geometric complexities and by the rapid changes in the parachute geometry. The computational challenges are further increased because of the added geometric porosity of the latest design, where the "windows" created by the removal of panels and the wider gaps created by the removal of sails compound the geometric and flow complexity. Orion spacecraft main parachutes will have three stages, with computation of the Stage 1 shape and FSI modeling of disreefing from Stage 1 to Stage 2 being the most challenging. We present the special modeling techniques we devised to address the computational challenges and the results from the computations carried out. We also present the methods we devised to calculate for a parachute gore the radius of curvature in the circumferential direction. The curvature values are intended for quick and simple engineering analysis in estimating the structural stresses.
Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B
2010-02-01
Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. Performance of the color segmentation algorithm evaluated on 26 complex micrographs at single pixel resolution had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/. This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter most, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at individual single-cell resolution.
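In the spirit of the segmentation described above, the sketch below classifies pixels by nearest user-selected training color in RGB space; the published algorithm also exploits spatial relationships, which this illustration omits, and all sample colors and image data are made up.

```python
import numpy as np

# User-picked training pixels (RGB), as in the interactive workflow.
fg = np.array([[200, 40, 40], [180, 60, 50]], float)   # foreground samples
bg = np.array([[30, 30, 30], [200, 200, 190]], float)  # background samples

# A random stand-in for a micrograph.
rng = np.random.default_rng(5)
image = rng.integers(0, 256, (64, 64, 3)).astype(float)
pixels = image.reshape(-1, 3)

# Classify each pixel by its nearest training color in 3D color space.
d_fg = np.linalg.norm(pixels[:, None] - fg[None], axis=2).min(1)
d_bg = np.linalg.norm(pixels[:, None] - bg[None], axis=2).min(1)
mask = (d_fg < d_bg).reshape(64, 64)     # True where pixel is foreground
print(mask.mean())                        # foreground fraction of the image
```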
Gpu Implementation of a Viscous Flow Solver on Unstructured Grids
NASA Astrophysics Data System (ADS)
Xu, Tianhao; Chen, Long
2016-06-01
Graphics processing units have gained popularity in scientific computing over the past several years due to their outstanding parallel computing capability. Computational fluid dynamics applications involve large amounts of calculation, so a recent GPU card, whose peak computing performance and memory bandwidth are much better than those of a contemporary high-end CPU, is preferable. We herein focus on the detailed implementation of our GPU-targeted Reynolds-averaged Navier-Stokes equations solver based on the finite-volume method. The solver employs a vertex-centered scheme on unstructured grids so as to be capable of handling complex topologies. Multiple optimizations are carried out to improve the memory accessing performance and kernel utilization. Both steady and unsteady flow simulation cases are carried out using an explicit Runge-Kutta scheme. The solver with GPU acceleration in this paper is demonstrated to have competitive advantages over the CPU-targeted one.
Challenging Density Functional Theory Calculations with Hemes and Porphyrins.
de Visser, Sam P; Stillman, Martin J
2016-04-07
In this paper we review recent advances in computational chemistry and specifically focus on the chemical description of heme proteins and synthetic porphyrins that act both as mimics of natural processes and in technological uses. These are challenging biochemical systems involved in electron transfer as well as biocatalysis processes. In recent years computational tools have improved considerably and now can reproduce experimental spectroscopic and reactivity studies within a reasonable error margin (several kcal·mol(-1)). This paper gives recent examples from our groups, where we investigated heme and synthetic metal-porphyrin systems. The four case studies highlight how computational modelling can correctly reproduce experimental product distributions, predict reactivity trends and guide the interpretation of electronic structures of complex systems. The case studies focus on the calculations of a variety of spectroscopic features of porphyrins and show how computational modelling gives important insight that explains the experimental spectra and can lead to the design of porphyrins with tuned properties.
The development and application of CFD technology in mechanical engineering
NASA Astrophysics Data System (ADS)
Wei, Yufeng
2017-12-01
Computational Fluid Dynamics (CFD) is the analysis of the physical phenomena involved in fluid flow and heat conduction by computer-based numerical calculation and graphical display. Both the complexity of the physical problem that can be simulated and the precision of the numerical solution are directly related to computer hardware, such as processor speed and memory. With the continuous improvement of computer performance and CFD technology, CFD has been widely applied to the fields of water conservancy engineering, environmental engineering and industrial engineering. This paper summarizes the development process of CFD, its theoretical basis, the governing equations of fluid mechanics, the various methods of numerical calculation and related developments in CFD technology. Finally, applications of CFD technology in mechanical engineering are summarized. It is hoped that this review will help researchers in the field of mechanical engineering.
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2012-01-01
Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446
Decision support methods for the detection of adverse events in post-marketing data.
Hauben, M; Bate, A
2009-04-01
Spontaneous reporting is a crucial component of post-marketing drug safety surveillance despite its significant limitations. The size and complexity of some spontaneous reporting system databases represent a challenge for drug safety professionals who traditionally have relied heavily on the scientific and clinical acumen of the prepared mind. Computer algorithms that calculate statistical measures of reporting frequency for huge numbers of drug-event combinations are increasingly used to support pharmacovigilance analysts screening large spontaneous reporting system databases. After an overview of pharmacovigilance and spontaneous reporting systems, we discuss the theory and application of contemporary computer algorithms in regular use, those under development, and the practical considerations involved in the implementation of computer algorithms within a comprehensive and holistic drug safety signal detection program.
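One of the simplest such measures of reporting frequency is the proportional reporting ratio (PRR), computed from a 2×2 contingency table of report counts; a minimal sketch (the counts are illustrative, not data from any real database):

```python
def proportional_reporting_ratio(a, b, c, d):
    """PRR from a 2x2 contingency table of spontaneous reports:
        a: target drug & target event     b: target drug & all other events
        c: other drugs & target event     d: other drugs & all other events
    A PRR well above 1 flags a drug-event pair for clinical review."""
    return (a / (a + b)) / (c / (c + d))

# e.g. 20 of 1000 reports for the drug mention the event, vs 200 of 100000 for other drugs
print(proportional_reporting_ratio(20, 980, 200, 99800))  # = 10.0
```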
High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics
Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis
2014-07-28
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.
NASA Astrophysics Data System (ADS)
Kissinger, Alexander; Noack, Vera; Knopf, Stefan; Konrad, Wilfried; Scheer, Dirk; Class, Holger
2017-06-01
Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the hazards associated with the geological storage of CO2. Thus, in a site-specific risk assessment, models for predicting the fate of the displaced brine are required. Practical simulation of brine displacement involves decisions regarding the complexity of the model. The choice of an appropriate level of model complexity depends on multiple criteria: the target variable of interest, the relevant physical processes, the computational demand, the availability of data, and the data uncertainty. In this study, we set up a regional-scale geological model for a realistic (but not real) onshore site in the North German Basin with characteristic geological features for that region. A major aim of this work is to identify the relevant parameters controlling saltwater intrusion in a complex structural setting and to test the applicability of different model simplifications. The model that is used to identify relevant parameters fully couples flow in shallow freshwater aquifers and deep saline aquifers. This model also includes variable-density transport of salt and realistically incorporates surface boundary conditions with groundwater recharge. The complexity of this model is then reduced in several steps, by neglecting physical processes (two-phase flow near the injection well, variable-density flow) and by simplifying the complex geometry of the geological model. The results indicate that the initial salt distribution prior to the injection of CO2 is one of the key parameters controlling shallow aquifer salinization. However, determining the initial salt distribution involves large uncertainties in the regional-scale hydrogeological parameterization and requires complex and computationally demanding models (regional-scale variable-density salt transport). In order to evaluate strategies for minimizing leakage into shallow aquifers, other target variables can be considered, such as the volumetric leakage rate into shallow aquifers or the pressure buildup in the injection horizon. Our results show that simplified models, which neglect variable-density salt transport, can reach an acceptable agreement with more complex models.
Thorium–phosphorus triamidoamine complexes containing Th–P single- and multiple-bond interactions
Wildman, Elizabeth P.; Balázs, Gábor; Wooles, Ashley J.; Scheer, Manfred; Liddle, Stephen T.
2016-01-01
Despite the burgeoning field of uranium-ligand multiple bonds, analogous complexes involving other actinides remain scarce. For thorium, under ambient conditions only a few multiple bonds to carbon, nitrogen, oxygen, sulfur, selenium and tellurium are reported, and no multiple bonds to phosphorus are known, reflecting a general paucity of synthetic methodologies and also problems associated with stabilising these linkages at the large thorium ion. Here we report structurally authenticated examples of a parent thorium(IV)–phosphanide (Th–PH2), a terminal thorium(IV)–phosphinidene (Th=PH), a parent dithorium(IV)–phosphinidiide (Th–P(H)–Th) and a discrete actinide–phosphido complex under ambient conditions (Th=P=Th). Although thorium is traditionally considered to have dominant 6d-orbital contributions to its bonding, contrasting with majority 5f-orbital character for uranium, computational analyses suggest that the bonding of thorium can be more nuanced in terms of 5f- versus 6d-orbital composition, with significant involvement of the 7s-orbital affecting the balance of 5f- versus 6d-orbital bonding character. PMID:27682617
DOE Office of Scientific and Technical Information (OSTI.GOV)
Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick
2017-01-01
A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near continuum regime, where the Knudsen number (Kn)—characterizing the degree of rarefaction—becomes small. In contrast, the Fokker–Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker–Planck collision operator—instead of performing binary collisions employed by the DSMC method—integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state-of-the-art computer cluster technologies.
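The per-cell operator switching at the heart of the combined method can be sketched in a few lines (a Python stand-in; the switching threshold, the Kn estimate, and the data layout are illustrative assumptions, not the implementation described here):

```python
def advance_cell(cell, kn_switch=0.05):
    """Select the collision treatment for one cell, in the spirit of the combined
    FP-DSMC method: continuous Fokker-Planck dynamics where the gas is near-
    continuum (small Kn), binary DSMC collisions where it is rarefied.
    kn_switch is an illustrative threshold, not a value from the paper."""
    kn = cell["mean_free_path"] / cell["length"]   # local Knudsen number estimate
    return "fokker-planck" if kn < kn_switch else "dsmc"

print(advance_cell({"mean_free_path": 1e-4, "length": 1e-2}))  # Kn = 0.01 -> 'fokker-planck'
print(advance_cell({"mean_free_path": 1e-2, "length": 1e-2}))  # Kn = 1.0  -> 'dsmc'
```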
Overview of the Aeroelastic Prediction Workshop
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Chwalowski, Pawel; Schuster, David M.; Dalenbring, Mats
2013-01-01
The AIAA Aeroelastic Prediction Workshop (AePW) was held in April, 2012, bringing together communities of aeroelasticians and computational fluid dynamicists. The objective in conducting this workshop on aeroelastic prediction was to assess state-of-the-art computational aeroelasticity methods as practical tools for the prediction of static and dynamic aeroelastic phenomena. No comprehensive aeroelastic benchmarking validation standard currently exists, greatly hindering validation and state-of-the-art assessment objectives. The workshop was a step towards assessing the state of the art in computational aeroelasticity. This was an opportunity to discuss and evaluate the effectiveness of existing computer codes and modeling techniques for unsteady flow, and to identify computational and experimental areas needing additional research and development. Three configurations served as the basis for the workshop, providing different levels of geometric and flow field complexity. All cases considered involved supercritical airfoils at transonic conditions. The flow fields contained oscillating shocks and in some cases, regions of separation. The computational tools principally employed Reynolds-Averaged Navier Stokes solutions. The successes and failures of the computations and the experiments are examined in this paper.
QMC Goes BOINC: Using Public Resource Computing to Perform Quantum Monte Carlo Calculations
NASA Astrophysics Data System (ADS)
Rainey, Cameron; Engelhardt, Larry; Schröder, Christian; Hilbig, Thomas
2008-10-01
Theoretical modeling of magnetic molecules traditionally involves the diagonalization of quantum Hamiltonian matrices. However, as the complexity of these molecules increases, the matrices become so large that this process becomes unusable. An additional challenge to this modeling is that many repetitive calculations must be performed, further increasing the need for computing power. Both of these obstacles can be overcome by using a quantum Monte Carlo (QMC) method and a distributed computing project. We have recently implemented a QMC method within the Spinhenge@home project, which is a Public Resource Computing (PRC) project where private citizens allow part-time usage of their PCs for scientific computing. The use of PRC for scientific computing will be described in detail, as well as how you can contribute to the project. See, e.g., L. Engelhardt et al., Angew. Chem. Int. Ed. 47, 924 (2008); C. Schröder, in Distributed & Grid Computing - Science Made Transparent for Everyone. Principles, Applications and Supporting Communities (Weber, M.H.W., ed., 2008). Project URL: http://spin.fh-bielefeld.de
Mukunthan, B; Nagaveni, N
2014-01-01
In genetic engineering, the conventional techniques and algorithms employed by forensic scientists to identify individuals on the basis of their DNA profiles involve complex computational steps and mathematical formulae, and identifying the location of a mutation in a genomic sequence in the laboratory remains an exigent task. The novel approach presented here provides the ability to solve problems that have no algorithmic solution, or whose available solutions are too complex to be found. The blend of bioinformatics and neural network techniques results in an efficient DNA pattern analysis algorithm with high prediction accuracy.
Explorative search of distributed bio-data to answer complex biomedical questions
2014-01-01
Background: The huge amount of biomedical-molecular data increasingly produced is providing scientists with potentially valuable information. Yet, such data quantity makes it difficult to find and extract those data that are most reliable and most related to the biomedical questions to be answered, which are increasingly complex and often involve many different biomedical-molecular aspects. Such questions can be addressed only by comprehensively searching and exploring different types of data, which frequently are ordered and provided by different data sources. Search Computing has been proposed for the management and integration of ranked results from heterogeneous search services. Here, we present its novel application to the explorative search of distributed biomedical-molecular data and the integration of the search results to answer complex biomedical questions. Results: A set of available bioinformatics search services has been modelled and registered in the Search Computing framework, and a Bioinformatics Search Computing application (Bio-SeCo) using such services has been created and made publicly available at http://www.bioinformatics.deib.polimi.it/bio-seco/seco/. It offers an integrated environment which eases search, exploration and ranking-aware combination of heterogeneous data provided by the available registered services, and supplies global results that can support answering complex multi-topic biomedical questions. Conclusions: By using Bio-SeCo, scientists can explore the very large and very heterogeneous biomedical-molecular data available. They can easily make different explorative search attempts, inspect obtained results, select the most appropriate, expand or refine them and move forward and backward in the construction of a global complex biomedical query on multiple distributed sources that could eventually find the most relevant results. Thus, it provides an extremely useful automated support for exploratory integrated bio search, which is fundamental for Life Science data-driven knowledge discovery. PMID:24564278
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, P.; Purdue University, West Lafayette, Indiana 47907; Verma, K.
Borazine is isoelectronic with benzene and is popularly referred to as inorganic benzene. The study of non-covalent interactions with borazine and comparison with its organic counterpart promises to show interesting similarities and differences. The motivation of the present study of the borazine-water interaction, for the first time, stems from such interesting possibilities. Hydrogen-bonded complexes of borazine and water were studied using matrix isolation infrared spectroscopy and quantum chemical calculations. Computations were performed at M06-2X and MP2 levels of theory using 6-311++G(d,p) and aug-cc-pVDZ basis sets. At both the levels of theory, the complex involving an N–H⋯O interaction, where the N–H of borazine serves as the proton donor to the oxygen of water was found to be the global minimum, in contrast to the benzene-water system, which showed an H–π interaction. The experimentally observed infrared spectra of the complexes corroborated well with our computations for the complex corresponding to the global minimum. In addition to the global minimum, our computations also located two local minima on the borazine-water potential energy surface. Of the two local minima, one corresponded to a structure where the water was the proton donor to the nitrogen of borazine, approaching the borazine ring from above the plane of the ring; a structure that resembled the global minimum in the benzene-water H–π complex. The second local minimum corresponded to an interaction of the oxygen of water with the boron of borazine, which can be termed as the boron bond. Clearly the borazine-water system presents a richer landscape than the benzene-water system.
Lewis, F.M.; Voss, C.I.; Rubin, Jacob
1986-01-01
A model was developed that can simulate the effect of certain chemical and sorption reactions simultaneously among solutes involved in advective-dispersive transport through porous media. The model is based on a methodology that utilizes physical-chemical relationships in the development of the basic solute mass-balance equations; however, the form of these equations allows their solution to be obtained by methods that do not depend on the chemical processes. The chemical environment is governed by the condition of local chemical equilibrium, and may be defined either by the linear sorption of a single species and two soluble complexation reactions which also involve that species, or binary ion exchange and one complexation reaction involving a common ion. Partial differential equations that describe solute mass balance entirely in the liquid phase are developed for each tenad (a chemical entity whose total mass is independent of the reaction process) in terms of their total dissolved concentration. These equations are solved numerically in two dimensions through the modification of an existing groundwater flow/transport computer code. (Author's abstract)
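For orientation, mass-balance equations of this type generally take the advection-dispersion form below for the total dissolved concentration C of a tenad, with a retardation factor R arising from linear sorption, dispersion tensor D, and pore velocity v (a generic sketch of the form; the paper's tenad-specific equations differ in detail):

```latex
R\,\frac{\partial C}{\partial t} \;=\; \nabla \cdot \left( \mathbf{D}\,\nabla C \right) \;-\; \mathbf{v} \cdot \nabla C
```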
New technologies for advanced three-dimensional optimum shape design in aeronautics
NASA Astrophysics Data System (ADS)
Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno
1999-05-01
The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes into a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are gradient-based, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are making such an ambitious project, of including a state-of-the-art flow analysis code in an optimization loop, feasible. Among those technologies, there are three important issues that this paper addresses: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo
2008-01-01
Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many algorithms involved lead to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper the graphically oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of neuraminidase of H5N1 isolates (micro level) and influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase on different levels of complexity provides valuable insights into this virus's tendency for geographically based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. The current study demonstrates the efficiency and utility of workflow systems providing a biologist-friendly approach to complex biological dataset analysis using high-performance computing. In particular, the utility of the platform Quascade for deploying distributed and parallelized versions of a variety of computationally intensive phylogenetic algorithms has been shown, and the analysis of the utilized H5N1 neuraminidase datasets at macro and micro levels has clearly indicated a pattern of spatial clustering of the H5N1 viral isolates based on geographical distribution rather than temporal or host-range based clustering.
Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines
NASA Astrophysics Data System (ADS)
Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.
2016-12-01
Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model while satisfying the requirements of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
Computer Aided Grid Interface: An Interactive CFD Pre-Processor
NASA Technical Reports Server (NTRS)
Soni, Bharat K.
1997-01-01
NASA maintains an applications-oriented computational fluid dynamics (CFD) effort complementary to and in support of the aerodynamic-propulsion design and test activities. This is especially true at NASA/MSFC where the goal is to advance and optimize present and future liquid-fueled rocket engines. Numerical grid generation plays a significant role in the fluid flow simulations utilizing CFD. An overall goal of the current project was to develop a geometry-grid generation tool that will help engineers, scientists and CFD practitioners to analyze design problems involving complex geometries in a timely fashion. This goal is accomplished by developing the CAGI: Computer Aided Grid Interface system. The CAGI system is developed by integrating CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) geometric system output and/or Initial Graphics Exchange Specification (IGES) files (including all the NASA-IGES entities), geometry manipulations and generations associated with grid constructions, and robust grid generation methodologies. This report describes the development process of the CAGI system.
Computer Aided Grid Interface: An Interactive CFD Pre-Processor
NASA Technical Reports Server (NTRS)
Soni, Bharat K.
1996-01-01
NASA maintains an applications-oriented computational fluid dynamics (CFD) effort complementary to and in support of the aerodynamic-propulsion design and test activities. This is especially true at NASA/MSFC where the goal is to advance and optimize present and future liquid-fueled rocket engines. Numerical grid generation plays a significant role in the fluid flow simulations utilizing CFD. An overall goal of the current project was to develop a geometry-grid generation tool that will help engineers, scientists and CFD practitioners to analyze design problems involving complex geometries in a timely fashion. This goal is accomplished by developing the Computer Aided Grid Interface system (CAGI). The CAGI system is developed by integrating CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) geometric system output and/or Initial Graphics Exchange Specification (IGES) files (including all the NASA-IGES entities), geometry manipulations and generations associated with grid constructions, and robust grid generation methodologies. This report describes the development process of the CAGI system.
A Series of Computational Neuroscience Labs Increases Comfort with MATLAB.
Nichols, David F
2015-01-01
Computational simulations allow for a low-cost, reliable means to demonstrate complex and oftentimes inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations to be intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an Integrate-and-Fire model, and a Hopfield Memory network were used in an undergraduate neuroscience laboratory component of an introductory level course. Using short focused surveys before and after each lab, student comfort levels were shown to increase drastically, from a majority of students being uncomfortable or neutral about working in the MATLAB environment to a vast majority of students being comfortable working in the environment. Though change was reported within each lab, a series of labs was necessary in order to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring the computational skills that are required to address many questions within neuroscience.
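As a flavor of what such labs involve, below is a minimal leaky integrate-and-fire simulation (the labs described here used MATLAB; this is a Python stand-in with illustrative parameter values):

```python
import numpy as np

# Leaky integrate-and-fire neuron: integrate the membrane voltage under a
# constant input current, emit a spike and reset whenever threshold is crossed.
dt, T = 0.1, 100.0                                        # time step, duration (ms)
tau_m, v_rest, v_th, v_reset = 10.0, -65.0, -50.0, -65.0  # ms, mV
r_m, i_ext = 10.0, 2.0                                    # resistance (MOhm), current (nA)

t = np.arange(0.0, T, dt)
v = np.full(t.size, v_rest)
spikes = []
for k in range(1, t.size):
    dv = (-(v[k-1] - v_rest) + r_m * i_ext) / tau_m       # leaky integration
    v[k] = v[k-1] + dt * dv
    if v[k] >= v_th:                                      # threshold crossing
        spikes.append(t[k])
        v[k] = v_reset
print(f"{len(spikes)} spikes in {T:.0f} ms")
```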
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
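For reference, the first-order result on which such sensitivity analyses are built: if A x = λ x and the left eigenvector satisfies y^H A = λ y^H, then for a design parameter p,

```latex
\frac{\partial \lambda}{\partial p} \;=\; \frac{\mathbf{y}^{H}\,\dfrac{\partial \mathbf{A}}{\partial p}\,\mathbf{x}}{\mathbf{y}^{H}\,\mathbf{x}}
```

The eigenvector derivatives, normalization conditions, and Rayleigh-quotient and trace-theorem approximations discussed in the abstract refine and build on this basic relation.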
DOE Office of Scientific and Technical Information (OSTI.GOV)
Critchlow, Terence J.; Abdulla, Ghaleb; Becla, Jacek
Data management is the organization of information to support efficient access and analysis. For data intensive computing applications, the speed at which relevant data can be accessed is a limiting factor in terms of the size and complexity of computation that can be performed. Data access speed is impacted by the size of the relevant subset of the data, the complexity of the query used to define it, and the layout of the data relative to the query. As the underlying data sets become increasingly complex, the questions asked of it become more involved as well. For example, geospatial data associated with a city is no longer limited to the map data representing its streets, but now also includes layers identifying utility lines, key points, locations and types of businesses within the city limits, tax information for each land parcel, satellite imagery, and possibly even street-level views. As a result, queries have gone from simple questions, such as "how long is Main Street?", to much more complex questions such as "taking all other factors into consideration, are the property values of houses near parks higher than those under power lines, and if so, by what percentage". Answering these questions requires a coherent infrastructure, integrating the relevant data into a format optimized for the questions being asked.
Fetterhoff, Dustin; Opris, Ioan; Simpson, Sean L.; Deadwyler, Sam A.; Hampson, Robert E.; Kraft, Robert A.
2014-01-01
Background: Multifractal analysis quantifies the time-scale-invariant properties in data by describing the structure of variability over time. By applying this analysis to hippocampal interspike interval sequences recorded during performance of a working memory task, a measure of long-range temporal correlations and multifractal dynamics can reveal single neuron correlates of information processing. New method: Wavelet leaders-based multifractal analysis (WLMA) was applied to hippocampal interspike intervals recorded during a working memory task. WLMA can be used to identify neurons likely to exhibit information processing relevant to operation of brain–computer interfaces and nonlinear neuronal models. Results: Neurons involved in memory processing (“Functional Cell Types” or FCTs) showed a greater degree of multifractal firing properties than neurons without task-relevant firing characteristics. In addition, previously unidentified FCTs were revealed because multifractal analysis suggested further functional classification. The cannabinoid-type 1 receptor partial agonist, tetrahydrocannabinol (THC), selectively reduced multifractal dynamics in FCT neurons compared to non-FCT neurons. Comparison with existing methods: WLMA is an objective tool for quantifying the memory-correlated complexity represented by FCTs that reveals additional information compared to classification of FCTs using traditional z-scores to identify neuronal correlates of behavioral events. Conclusion: z-Score-based FCT classification provides limited information about the dynamical range of neuronal activity characterized by WLMA. Increased complexity, as measured with multifractal analysis, may be a marker of functional involvement in memory processing. The level of multifractal attributes can be used to differentially emphasize neural signals to improve computational models and algorithms underlying brain–computer interfaces. PMID:25086297
Adaptation to High Ethanol Reveals Complex Evolutionary Pathways
Das, Anupam; Espinosa-Cantú, Adriana; De Maeyer, Dries; Arslan, Ahmed; Van Pee, Michiel; van der Zande, Elisa; Meert, Wim; Yang, Yudi; Zhu, Bo; Marchal, Kathleen; DeLuna, Alexander; Van Noort, Vera; Jelier, Rob; Verstrepen, Kevin J.
2015-01-01
Tolerance to high levels of ethanol is an ecologically and industrially relevant phenotype of microbes, but the molecular mechanisms underlying this complex trait remain largely unknown. Here, we use long-term experimental evolution of isogenic yeast populations of different initial ploidy to study adaptation to increasing levels of ethanol. Whole-genome sequencing of more than 30 evolved populations and over 100 adapted clones isolated throughout this two-year evolution experiment revealed how a complex interplay of de novo single nucleotide mutations, copy number variation, ploidy changes, mutator phenotypes, and clonal interference led to a significant increase in ethanol tolerance. Although the specific mutations differ between different evolved lineages, application of a novel computational pipeline, PheNetic, revealed that many mutations target functional modules involved in stress response, cell cycle regulation, DNA repair and respiration. Measuring the fitness effects of selected mutations introduced in non-evolved ethanol-sensitive cells revealed several adaptive mutations that had previously not been implicated in ethanol tolerance, including mutations in PRT1, VPS70 and MEX67. Interestingly, variation in VPS70 was recently identified as a QTL for ethanol tolerance in an industrial bio-ethanol strain. Taken together, our results show how, in contrast to adaptation to some other stresses, adaptation to a continuous complex and severe stress involves interplay of different evolutionary mechanisms. In addition, our study reveals functional modules involved in ethanol resistance and identifies several mutations that could help to improve the ethanol tolerance of industrial yeasts. PMID:26545090
Analysis and design of algorithm-based fault-tolerant systems
NASA Technical Reports Server (NTRS)
Nair, V. S. Sukumaran
1990-01-01
An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
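To illustrate the flavor of concurrent error detection that ABFT provides, here is a sketch in the style of checksum-encoded matrix multiplication (a classic ABFT example, not the matrix-based model developed in this work; all names are illustrative):

```python
import numpy as np

def abft_matmul(A, B, tol=1e-8):
    """Checksum-based ABFT for C = A @ B: A is augmented with a column-checksum
    row and B with a row-checksum column, so the product carries checksums that
    expose a corrupted entry of C (its row and column sums both fail)."""
    Ac = np.vstack([A, A.sum(axis=0)])                 # column checksums as a row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row checksums as a column
    Cf = Ac @ Br
    C = Cf[:-1, :-1]
    row_ok = np.allclose(Cf[:-1, -1], C.sum(axis=1), atol=tol)
    col_ok = np.allclose(Cf[-1, :-1], C.sum(axis=0), atol=tol)
    return C, row_ok and col_ok

A, B = np.random.rand(4, 3), np.random.rand(3, 5)
C, ok = abft_matmul(A, B)
print(ok)  # True in a fault-free run; a transient fault flipping one entry breaks it
```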
Zhou, Chen-Chen; Hawthorne, M Frederick; Houk, K N; Jiménez-Osés, Gonzalo
2017-08-18
The thermal decompositions of metallaisoxazolin-5-ones containing Ir, Rh, or Co are investigated using density functional theory. The experimentally observed decarboxylations of these molecules are found to proceed through retro-(3+2)-cycloaddition reactions, generating the experimentally reported η2 side-bonded nitrile complexes. These intermediates can isomerize in situ to yield an η1 nitrile complex. A competitive alternative pathway is also found where the decarboxylation happens concertedly with an aryl migration process, producing an η1 isonitrile complex. Despite their comparable stability, these η1-bonded species were not detected experimentally. The experimentally detected η2 side-bound species are likely involved in the subsequent C-H activation reactions with hydrocarbon solvents reported for some of these metallaisoxazolin-5-ones.
Automated dynamic analytical model improvement for damped structures
NASA Technical Reports Server (NTRS)
Fuh, J. S.; Berman, A.
1985-01-01
A method is described to improve a linear nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) ability to properly treat complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, without involving eigensolutions or inversion of a large matrix.
Improved result on stability analysis of discrete stochastic neural networks with time delay
NASA Astrophysics Data System (ADS)
Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng
2009-04-01
This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established in terms of a linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and the benefits of the proposed method.
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
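For reference, the Zassenhaus expansion underlying such an approximation writes the exponential of a sum of matrices as a product of exponentials of the summands and of nested commutators; truncating the product after a few factors gives the computationally cheap product-of-matrices representation (the paper's truncation choice and error bounds are its own contribution):

```latex
e^{t(A+B)} \;=\; e^{tA}\, e^{tB}\, e^{-\frac{t^{2}}{2}[A,B]}\, e^{\frac{t^{3}}{6}\left(2[B,[A,B]] + [A,[A,B]]\right)} \cdots
```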
Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling.
Perdikaris, P; Raissi, M; Damianou, A; Lawrence, N D; Karniadakis, G E
2017-02-01
Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
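A minimal sketch of the nonlinear autoregressive idea, assuming scikit-learn and a pair of illustrative benchmark functions: instead of assuming f_high ≈ ρ·f_low + δ as in linear autoregressive schemes, train a Gaussian process on the augmented input (x, f_low(x)), so the low-to-high map can be nonlinear and vary across the input space. This is a simplified stand-in, not the authors' full scheme:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f_low  = lambda x: np.sin(8 * np.pi * x)             # cheap, plentiful model
f_high = lambda x: (x - np.sqrt(2)) * f_low(x) ** 2  # expensive, scarce model

x_hi = np.random.rand(15, 1)                         # only a few expensive runs
Z = np.hstack([x_hi, f_low(x_hi)])                   # augmented input (x, f_low(x))
gp = GaussianProcessRegressor(kernel=RBF([0.1, 1.0]),
                              normalize_y=True).fit(Z, f_high(x_hi).ravel())

x_new = np.linspace(0, 1, 200)[:, None]
pred = gp.predict(np.hstack([x_new, f_low(x_new)]))  # leans on abundant cheap runs
print(np.abs(pred - f_high(x_new).ravel()).max())    # surrogate error on the grid
```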
Computational screening of biomolecular adsorption and self-assembly on nanoscale surfaces.
Heinz, Hendrik
2010-05-01
The quantification of binding properties of ions, surfactants, biopolymers, and other macromolecules to nanometer-scale surfaces is often difficult experimentally and a recurring challenge in molecular simulation. A simple and computationally efficient method is introduced to compute quantitatively the energy of adsorption of solute molecules on a given surface. Highly accurate summation of Coulomb energies as well as precise control of temperature and pressure is required to extract the small energy differences in complex environments characterized by a large total energy. The method involves the simulation of four systems, the surface-solute-solvent system, the solute-solvent system, the solvent system, and the surface-solvent system under consideration of equal molecular volumes of each component under NVT conditions using standard molecular dynamics or Monte Carlo algorithms. Particularly in chemically detailed systems including thousands of explicit solvent molecules and specific concentrations of ions and organic solutes, the method takes into account the effect of complex nonbond interactions and rotational isomeric states on the adsorption behavior on surfaces. As a numerical example, the adsorption of a dodecapeptide on the Au {111} and mica {001} surfaces is described in aqueous solution. Copyright 2009 Wiley Periodicals, Inc.
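Under the four-system protocol described above, the adsorption energy combines the four total energies as sketched below (the sign convention is assumed here; the published method specifies the precise simulation conditions):

```latex
E_{\mathrm{ads}} \;=\; E_{\mathrm{surface+solute+solvent}} \;-\; E_{\mathrm{solute+solvent}} \;-\; E_{\mathrm{surface+solvent}} \;+\; E_{\mathrm{solvent}}
```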
Cognitive factors associated with immersion in virtual environments
NASA Technical Reports Server (NTRS)
Psotka, Joseph; Davison, Sharon
1993-01-01
Immersion into the dataspace provided by a computer, and the feeling of really being there or 'presence', are commonly acknowledged as the uniquely important features of virtual reality environments. How immersed one feels appears to be determined by a complex set of physical components and affordances of the environment, and as yet poorly understood psychological processes. Pimentel and Teixeira say that the experience of being immersed in a computer-generated world involves the same mental shift of 'suspending your disbelief for a period of time' as 'when you get wrapped up in a good novel or become absorbed in playing a computer game'. That sounds as if it could be right, but it would be good to get some evidence for these important conclusions. It might be even better to try to connect these statements with theoretical positions that try to do justice to complex cognitive processes. The basic precondition for understanding Virtual Reality (VR) is understanding the spatial representation systems that localize our bodies or egocenters in space. The effort to understand these cognitive processes is being driven with new energy by the pragmatic demands of successful virtual reality environments, but the literature is largely sparse and anecdotal.
Geometric and topological characterization of porous media: insights from eigenvector centrality
NASA Astrophysics Data System (ADS)
Jimenez-Martinez, J.; Negre, C.
2017-12-01
Solving flow and transport through complex geometries such as porous media involves an extreme computational cost. Simplifications such as pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models have the ability to preserve the connectivity of the medium. However, they have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Network theory approaches, where the complex network is conceptualized as a graph, can help to simplify and better understand fluid dynamics and transport in porous media. To address this issue, we propose a method based on eigenvector centrality. It has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction, which allows the flow and transport anisotropy in porous media to be considered. The model predictions are compared with millifluidic transport experiments, showing that this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. Entropy computed from the eigenvector centrality probability distribution is proposed as an indicator of the "mixing capacity" of the system.
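A minimal sketch of eigenvector centrality on a pore-network adjacency matrix via power iteration (this omits the paper's centralization correction and directional bias; the toy network and all values are illustrative):

```python
import numpy as np

def eigenvector_centrality(A, n_iter=200, tol=1e-10):
    """Power iteration for eigenvector centrality of a pore network.
    A[i, j] > 0 weights the throat between pores i and j; the dominant
    eigenvector scores each pore by the centrality of its neighbours."""
    c = np.ones(A.shape[0])
    for _ in range(n_iter):
        c_new = A @ c
        c_new /= np.linalg.norm(c_new)
        if np.linalg.norm(c_new - c) < tol:
            break
        c = c_new
    return c_new

# Toy 4-pore network: a triangle 0-1-2 with pore 3 attached to pore 2.
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
print(eigenvector_centrality(A))  # pore 2, the best-connected one, scores highest
```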
New strategy for protein interactions and application to structure-based drug design
NASA Astrophysics Data System (ADS)
Zou, Xiaoqin
One of the greatest challenges in computational biophysics is to predict interactions between biological molecules, which play critical roles in biological processes and in the rational design of therapeutic drugs. Biomolecular interactions involve a delicate interplay between multiple contributions, including electrostatic interactions, van der Waals interactions, solvent effects, and conformational entropy. Accurate determination of these complex and subtle interactions is challenging. Moreover, a biological molecule such as a protein usually consists of thousands of atoms, and thus occupies a huge conformational space. The large number of degrees of freedom poses further challenges for accurate prediction of biomolecular interactions. Here, I will present our development of physics-based theory and computational modeling of protein interactions with other molecules. The major strategy is to extract microscopic energetics from the information embedded in the experimentally determined structures of protein complexes. I will also present applications of the methods to structure-based therapeutic design. Supported by NSF CAREER Award DBI-0953839, NIH R01GM109980, and the American Heart Association (Midwest Affiliate) [13GRNT16990076].
A power-efficient ZF precoding scheme for multi-user indoor visible light communication systems
NASA Astrophysics Data System (ADS)
Zhao, Qiong; Fan, Yangyu; Deng, Lijun; Kang, Bochao
2017-02-01
In this study, we propose a power-efficient ZF precoding scheme for visible light communication (VLC) downlink multi-user multiple-input-single-output (MU-MISO) systems, which incorporates zero-forcing (ZF) and the characteristics of VLC systems. The main idea of this scheme is that the channel matrix whose pseudoinverse is computed is built from the set of optical Access Points (APs) shared by more than one user, instead of from the set of all involved serving APs, as existing ZF precoding schemes often do. By doing this, the waste of power caused by transmitting one user's data from APs that do not serve that user can be avoided. In addition, the channel matrix whose pseudoinverse must be computed becomes smaller, which helps to reduce the computational complexity. Simulation results in two scenarios show that the proposed ZF precoding scheme has higher power efficiency, better bit error rate (BER) performance and lower computation complexity compared with traditional ZF precoding schemes.
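The baseline ZF operation the scheme builds on is the right pseudoinverse of the channel matrix; a minimal sketch with a generic H (the proposed scheme instead assembles H only from the APs shared by more than one user; dimensions and values are illustrative):

```python
import numpy as np

# Zero-forcing precoding for a K-user MISO downlink: choose W as the right
# pseudoinverse of H (K users x N transmitters) so that H @ W = I and each
# user receives only its own symbol, i.e. inter-user interference is cancelled.
K, N = 3, 6
H = np.random.rand(K, N)                           # VLC gains are nonnegative
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)     # right pseudoinverse of H
s = np.random.randint(0, 2, size=K).astype(float)  # user data symbols
x = W @ s                                          # transmit vector across APs
print(np.allclose(H @ x, s))                       # True: interference-free
```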
Nonlinear channel equalization for QAM signal constellation using artificial neural networks.
Patra, J C; Pal, R N; Baliarsingh, R; Panda, G
1999-01-01
Application of artificial neural networks (ANNs) to adaptive channel equalization in a digital communication system with a 4-QAM signal constellation is reported in this paper. A novel computationally efficient single-layer functional link ANN (FLANN) is proposed for this purpose. This network has a simple structure in which the nonlinearity is introduced by functional expansion of the input pattern by trigonometric polynomials. Because of input pattern enhancement, the FLANN is capable of forming arbitrarily nonlinear decision boundaries and can perform complex pattern classification tasks. Considering channel equalization as a nonlinear classification problem, the FLANN has been utilized for nonlinear channel equalization. The performance of the FLANN is compared with two other ANN structures [a multilayer perceptron (MLP) and a polynomial perceptron network (PPN)] along with a conventional linear LMS-based equalizer for different linear and nonlinear channel models. The effect of the eigenvalue ratio (EVR) of the input correlation matrix on the equalizer performance has been studied. A comparison of the computational complexity involved for the three ANN structures is also provided.
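A minimal sketch of the FLANN idea: a trigonometric functional expansion of each input pattern followed by a single LMS-trained linear layer. This is a real-valued, single-output illustration with assumed step sizes; the paper's equalizer operates on complex 4-QAM signals and its exact expansion may differ:

```python
import numpy as np

def trig_expand(x, order=2):
    """Functional expansion used in FLANN-style networks: augment each input
    with trigonometric polynomial terms so a single linear layer can realize
    nonlinear decision boundaries."""
    feats = [x]
    for n in range(1, order + 1):
        feats += [np.sin(n * np.pi * x), np.cos(n * np.pi * x)]
    return np.concatenate(feats, axis=-1)

def lms_train(X, d, order=2, mu=0.01, epochs=20):
    """Single linear layer trained by LMS on the expanded patterns."""
    Phi = trig_expand(X, order)
    w = np.zeros(Phi.shape[1])
    for _ in range(epochs):
        for phi, target in zip(Phi, d):
            e = target - phi @ w       # error against desired output
            w += mu * e * phi          # LMS weight update
    return w

X = np.random.uniform(-1, 1, size=(200, 2))  # channel output samples (illustrative)
d = np.sign(X[:, 0] * X[:, 1])               # a nonlinearly separable target
w = lms_train(X, d)
```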
Petrov, Artem; Arzhanik, Vladimir; Makarov, Gennady; Koliasnikov, Oleg
2016-08-01
Antibodies are the family of proteins responsible for antigen recognition. Computational modeling of the interaction between an antigen and an antibody is very important when a crystallographic structure is unavailable. In this research, we discovered a correlation between the amino acid sequence of an antibody and its specific binding characteristics, exemplified by a novel conserved binding motif consisting of four residues: Arg H52, Tyr H33, Thr H59, and Glu H61. These residues are specifically oriented in the binding site and interact with each other in a specific manner. The residues of the binding motif interact strictly with negatively charged groups of antigens, and form a binding complex. The mechanism of interaction and the characteristics of the complex were also determined. The results of this research can be used to increase the accuracy of computational antibody-antigen interaction modeling and for post-modeling quality control of the modeled structures.
Anson, Colin W; Ghosh, Soumya; Hammes-Schiffer, Sharon; Stahl, Shannon S
2016-03-30
Macrocyclic metal complexes and p-benzoquinones are commonly used as co-catalytic redox mediators in aerobic oxidation reactions. In an effort to gain insight into the mechanism and energetic efficiency of these reactions, we investigated Co(salophen)-catalyzed aerobic oxidation of p-hydroquinone. Kinetic and spectroscopic data suggest that the catalyst resting-state consists of an equilibrium between a Co(II)(salophen) complex, a Co(III)-superoxide adduct, and a hydrogen-bonded adduct between the hydroquinone and the Co(III)-O2 species. The kinetic data, together with density functional theory computational results, reveal that the turnover-limiting step involves proton-coupled electron transfer from a semi-hydroquinone species and a Co(III)-hydroperoxide intermediate. Additional experimental and computational data suggest that a coordinated H2O2 intermediate oxidizes a second equivalent of hydroquinone. Collectively, the results show how Co(salophen) and p-hydroquinone operate synergistically to mediate O2 reduction and generate the reactive p-benzoquinone co-catalyst.
A window-based time series feature extraction method.
Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife
2017-10-01
This study proposes a robust similarity score-based time series feature extraction method termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves low computational complexity, thereby rendering it useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC, in terms of predictive accuracy and computational complexity, with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Computer-based training for improving mental calculation in third- and fifth-graders.
Caviola, Sara; Gerotto, Giulia; Mammarella, Irene C
2016-11-01
The literature on intervention programs to improve arithmetical abilities is fragmentary and few studies have examined training on the symbolic representation of numbers (i.e. Arabic digits). In the present research, three groups of 3rd- and 5th-grade schoolchildren were given training on mental additions: 76 were assigned to a computer-based strategic training (ST) group, 73 to a process-based training (PBT) group, and 71 to a passive control (PC) group. Before and after the training, the children were given a criterion task involving complex addition problems, a nearest transfer task on complex subtraction problems, two near transfer tasks on math fluency, and a far transfer task on numerical reasoning. Our results showed developmental differences: 3rd-graders benefited more from the ST, with transfer effects on subtraction problems and math fluency, while 5th-graders benefited more from the PBT, improving their response times in the criterion task. Developmental, clinical and educational implications of these findings are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Development of an Aerothermoelastic-Acoustics Simulation Capability of Flight Vehicles
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Choi, S. B.; Ibrahim, A.
2010-01-01
A novel numerical, finite-element-based analysis methodology is presented in this paper, suitable for accurate and efficient simulation of practical, complex flight vehicles. An associated computer code, developed in this connection, is also described in some detail. Thermal effects of high-speed flow, obtained from a heat conduction analysis, are incorporated in the modal analysis, which in turn affects the unsteady flow arising from the interaction of elastic structures with the air. Numerical examples pertaining to representative problems are given in detail, testifying to the efficacy of the advocated techniques. This is a unique implementation of temperature effects in a finite element CFD-based multidisciplinary simulation capability involving large-scale computations.
Chronopoulos, D
2017-01-01
A systematic expression quantifying the wave energy skewing phenomenon as a function of the mechanical characteristics of a non-isotropic structure is derived in this study. A structure of arbitrary anisotropy, layering and geometric complexity is modelled through Finite Elements (FEs) coupled to a periodic structure wave scheme. A generic approach for efficiently computing the angular sensitivity of the wave slowness for each wave type, direction and frequency is presented. The approach does not involve any finite differentiation scheme and is therefore computationally efficient and not prone to the associated numerical errors. Copyright © 2016 Elsevier B.V. All rights reserved.
Exploring the early steps of amyloid peptide aggregation by computers.
Mousseau, Normand; Derreumaux, Philippe
2005-11-01
The assembly of normally soluble proteins into amyloid fibrils is a hallmark of neurodegenerative diseases. Because protein aggregation is very complex, involving a variety of oligomeric metastable intermediates, the detailed aggregation paths and structural characterization of the intermediates remain to be determined. Yet, there is strong evidence that these oligomers, which form early in the process of fibrillogenesis, are cytotoxic. In this paper, we review our current understanding of the underlying factors that promote the aggregation of peptides into amyloid fibrils. We focus here on the structural and dynamic aspects of the aggregation as observed in state-of-the-art computer simulations of amyloid-forming peptides with an emphasis on the activation-relaxation technique.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
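As a toy illustration of the importance-sampling idea (not of FASTER-III itself, whose geometry and physics are far richer), the Python sketch below estimates the probability that a particle penetrates an optically thick slab by drawing path lengths from a deliberately flattened density and carrying statistical weights; every number is invented:

    import numpy as np

    rng = np.random.default_rng(3)
    sigma_t, thickness, n = 1.0, 10.0, 100_000   # exact answer: exp(-10)

    sigma_b = 0.3                                # biased, flatter sampling density
    x = rng.exponential(1.0 / sigma_b, n)        # path lengths from the biased pdf
    w = (sigma_t / sigma_b) * np.exp(-(sigma_t - sigma_b) * x)  # true pdf / biased pdf
    est = np.mean(w * (x > thickness))           # weighted fraction that penetrates
    print(est, "vs exact", np.exp(-sigma_t * thickness))

Naive sampling from the true density would see only a handful of penetrating histories; the biased density puts roughly five percent of the samples past the slab, and the weights correct the average.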
Computational Modeling of the Dolphin Kick in Competitive Swimming
NASA Astrophysics Data System (ADS)
Loebbeck, A.; Mark, R.; Bhanot, G.
2005-11-01
Numerical simulations are being used to study the fluid dynamics of the dolphin kick in competitive swimming. This stroke is performed underwater after starts and turns and involves an undulatory motion of the body. Highly detailed laser body scans of elite swimmers are used, and the kinematics of the dolphin kick are recreated from videos of Olympic-level swimmers. We employ a parallelized immersed boundary method to simulate the flow associated with this stroke in all its complexity. The simulations provide a first-of-its-kind glimpse of the fluid and vortex dynamics associated with this stroke, and hydrodynamic force computations allow us to gain a better understanding of the thrust-producing mechanisms.
Umari, Amjad M.J.; Gorelick, Steven M.
1986-01-01
In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. Previous investigators have either used complex arithmetic to represent a complex eigensystem or chosen large dispersivity values for which the imaginary components of the complex eigenvalues may be ignored without significant error. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
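The conversion can be sketched in a few lines of numpy. For a real matrix, LAPACK's dgeev (used by numpy.linalg.eig) returns conjugate eigenpairs in adjacent positions, which the sketch below assumes; the function name is ours, not from the paper. Each conjugate pair a ± bi with eigenvector u ± iv becomes a real 2x2 block [[a, b], [-b, a]] acting on the real basis vectors u and v:

    import numpy as np

    def real_eigensystem(A, tol=1e-12):
        # Convert the (possibly complex) eigensystem of a real matrix A into
        # an equivalent all-real form A @ T = T @ D, with D block-diagonal
        # (2x2 blocks for conjugate pairs) and T real.
        w, V = np.linalg.eig(A)
        n = A.shape[0]
        D, T = np.zeros((n, n)), np.zeros((n, n))
        i = 0
        while i < n:
            if abs(w[i].imag) < tol:             # real eigenvalue: copy directly
                D[i, i] = w[i].real
                T[:, i] = V[:, i].real
                i += 1
            else:                                # conjugate pair (assumed adjacent)
                a, b = w[i].real, w[i].imag
                u, v = V[:, i].real, V[:, i].imag
                D[i:i + 2, i:i + 2] = [[a, b], [-b, a]]
                T[:, i], T[:, i + 1] = u, v
                i += 2
        return D, T

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])     # damped oscillator: complex pair
    D, T = real_eigensystem(A)
    print(np.allclose(A @ T, T @ D))             # True, using real arithmetic only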
On Chaotic and Hyperchaotic Complex Nonlinear Dynamical Systems
NASA Astrophysics Data System (ADS)
Mahmoud, Gamal M.
The study of dynamical systems described by real and complex variables is currently one of the most popular areas of scientific research. These systems play an important role in several fields of physics, engineering, and computer science, for example, laser systems, control (or chaos suppression), secure communications, and information science. Basic dynamical properties, chaos (hyperchaos) synchronization, chaos control, and the generation of hyperchaotic behavior of these systems are briefly summarized. The main advantage of introducing complex variables is the reduction of phase space dimensions by a half. They are also used to describe and simulate the physics of detuned lasers and thermal convection of liquid flows, where the electric field and the atomic polarization amplitudes are both complex. Clearly, if the variables of the system are complex, the equations involve twice as many variables and control parameters, thus making it that much harder for a hostile agent to intercept and decipher the coded message. Chaotic and hyperchaotic complex systems are given as examples. Finally, there are many open problems in the study of chaotic and hyperchaotic complex nonlinear dynamical systems which need further investigation. Some of these open problems are given.
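The bookkeeping behind "reduction of phase space dimensions by a half" is easy to demonstrate: a system written in one complex variable unfolds into two real states when handed to a real-valued integrator. The toy detuned nonlinear oscillator below is an arbitrary illustration, not a system from the article:

    import numpy as np
    from scipy.integrate import solve_ivp

    delta, gamma = 1.5, 0.1          # detuning and damping (made-up values)

    def rhs(t, s):
        z = s[0] + 1j * s[1]         # rebuild the complex variable from 2 real states
        dz = (1j * delta - gamma) * z - 1j * abs(z) ** 2 * z
        return [dz.real, dz.imag]    # split the complex derivative back apart

    sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], max_step=0.01)
    print(sol.y[:, -1])              # final (Re z, Im z)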
Defining protein electrostatic recognition processes
NASA Astrophysics Data System (ADS)
Getzoff, Elizabeth D.; Roberts, Victoria A.
The objective is to elucidate the nature of electrostatic forces controlling protein recognition processes by using a tightly coupled computational and interactive computer graphics approach. The TURNIP program was developed to determine the most favorable precollision orientations for two molecules by systematic search of all orientations and evaluation of the resulting electrostatic interactions. TURNIP was applied to the transient interaction between two electron transfer metalloproteins, plastocyanin and cytochrome c. The results suggest that the productive electron-transfer complex involves interaction of the positive region of cytochrome c with the negative patch of plastocyanin, consistent with experimental data. Application of TURNIP to the formation of the stable complex between the HyHEL-5 antibody and its protein antigen lysozyme showed that long-distance electrostatic forces guide lysozyme toward the HyHEL-5 binding site, but do not fine tune its orientation. Determination of docked antigen/antibody complexes requires including steric as well as electrostatic interactions, as was done for the U10 mutant of the anti-phosphorylcholine antibody S107. The graphics program Flex, a convenient desktop workstation program for visualizing molecular dynamics and normal mode motions, was enhanced. Flex now has a user interface and was rewritten to use standard graphics libraries, so as to run on most desktop workstations.
Response surface method in geotechnical/structural analysis, phase 1
NASA Astrophysics Data System (ADS)
Wong, F. S.
1981-02-01
In the response surface approach, an approximating function is fit to a long running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in subsequent repetitive computations required in a statistical analysis. The procedure of the response surface development and the feasibility of the method are shown using a sample problem in slope stability, which is based on data from centrifuge experiments of model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed based on as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
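The workflow is compact: run the expensive code a handful of times, fit a low-order surface by least squares, then use the cheap surface inside the repetitive statistical loop. Everything below (the stand-in "code", two parameters instead of five, the failure threshold) is a made-up illustration of that pattern:

    import numpy as np

    # Toy stand-in for a long-running computer code: a safety factor as a
    # function of two (hypothetical) normalized soil parameters.
    def expensive_code(c, phi):
        return 1.2 + 0.8 * c + 0.5 * phi - 0.3 * c * phi

    # Four code calculations at the corners of the parameter ranges.
    X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
    y = np.array([expensive_code(c, p) for c, p in X])

    # Fit a bilinear response surface g = b0 + b1*c + b2*phi + b3*c*phi.
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]

    # The surrogate replaces the code in a 100,000-sample Monte Carlo run.
    s = np.random.rand(100_000, 2)
    g = beta[0] + beta[1] * s[:, 0] + beta[2] * s[:, 1] + beta[3] * s[:, 0] * s[:, 1]
    print("P(safety factor < 1.5):", np.mean(g < 1.5))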
Agent-Based Modeling in Molecular Systems Biology.
Soheilypour, Mohammad; Mofrad, Mohammad R K
2018-07-01
Molecular systems orchestrating the biology of the cell typically involve a complex web of interactions among various components and span a vast range of spatial and temporal scales. Computational methods have advanced our understanding of the behavior of molecular systems by enabling us to test assumptions and hypotheses, explore the effect of different parameters on the outcome, and eventually guide experiments. While several different mathematical and computational methods are developed to study molecular systems at different spatiotemporal scales, there is still a need for methods that bridge the gap between spatially-detailed and computationally-efficient approaches. In this review, we summarize the capabilities of agent-based modeling (ABM) as an emerging molecular systems biology technique that provides researchers with a new tool in exploring the dynamics of molecular systems/pathways in health and disease. © 2018 WILEY Periodicals, Inc.
The Berlin Brain-Computer Interface: Progress Beyond Communication and Control
Blankertz, Benjamin; Acqualagna, Laura; Dähne, Sven; Haufe, Stefan; Schultze-Kraft, Matthias; Sturm, Irene; Ušćumlic, Marija; Wenzel, Markus A.; Curio, Gabriel; Müller, Klaus-Robert
2016-01-01
The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way away from integrating Brain-Computer Interface (BCI) technology in general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already now been obtained involving a BCI as research tool. In this article, we discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world. PMID:27917107
Methodologies for extracting kinetic constants for multiphase reacting flow simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, S.L.; Lottes, S.A.; Golchert, B.
1997-03-01
Flows in industrial reactors often involve complex reactions of many species. A computational fluid dynamics (CFD) computer code, ICRKFLO, was developed to simulate multiphase, multi-species reacting flows. The ICRKFLO uses a hybrid technique to calculate species concentration and reaction for a large number of species in a reacting flow. This technique includes a hydrodynamic and reacting flow simulation with a small but sufficient number of lumped reactions to compute flow field properties, followed by a calculation of local reaction kinetics and transport of many subspecies (order of 10 to 100). Kinetic rate constants of the numerous subspecies chemical reactions are difficult to determine. A methodology has been developed to extract kinetic constants from experimental data efficiently. A flow simulation of a fluid catalytic cracking (FCC) riser was successfully used to demonstrate this methodology.
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Truhlar, Donald G.
1990-01-01
The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.
Gentle Masking of Low-Complexity Sequences Improves Homology Search
Frith, Martin C.
2011-01-01
Detection of sequences that are homologous, i.e. descended from a common ancestor, is a fundamental task in computational biology. This task is confounded by low-complexity tracts (such as atatatatatat), which arise frequently and independently, causing strong similarities that are not homologies. There has been much research on identifying low-complexity tracts, but little research on how to treat them during homology search. We propose to find homologies by aligning sequences with "gentle" masking of low-complexity tracts. Gentle masking means that the match score involving a masked letter is min(0, x), where x is the unmasked score. Gentle masking slightly but noticeably improves the sensitivity of homology search (compared to "harsh" masking), without harming specificity. We show examples in three useful homology search problems: detection of NUMTs (nuclear copies of mitochondrial DNA), recruitment of metagenomic DNA reads to reference genomes, and pseudogene detection. Gentle masking is currently the best way to treat low-complexity tracts during homology search. PMID:22205972
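The rule is one line of code. A minimal sketch, assuming the common soft-masking convention that lowercase letters mark masked positions, with a toy match/mismatch scoring function:

    def score(a, b):                       # toy substitution scores
        return 2 if a == b else -3

    def gentle_score(a, b, score=score):
        s = score(a.upper(), b.upper())    # score as if unmasked
        if a.islower() or b.islower():     # a masked letter is involved:
            return min(0, s)               # matches earn nothing, mismatches still cost
        return s

    print(gentle_score("a", "a"))          # masked match: min(0, 2) -> 0
    print(gentle_score("a", "c"))          # masked mismatch: min(0, -3) -> -3
    print(gentle_score("G", "G"))          # unmasked match: 2

Harsh masking would also erase the mismatch penalties; keeping them is what preserves specificity.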
NASA Astrophysics Data System (ADS)
Faizan, Mohd; Afroz, Ziya; Alam, Mohammad Jane; Bhat, Sheeraz Ahmad; Ahmad, Shabbir; Ahmad, Afaq
2018-05-01
The intermolecular interactions in complex formation between 2-amino-4-hydroxy-6-methylpyrimidine (AHMP) and 2,3-pyrazinedicarboxylic acid (PDCA) have been explored using density functional theory calculations. The isolated 1:1 molecular geometry of the proton transfer (PT) complex between AHMP and PDCA has been optimized on a counterpoise-corrected potential energy surface (PES) at the DFT-B3LYP/6-31G(d,p) level of theory in the gaseous phase. The formation of a hydrogen-bonded charge transfer (HBCT) complex between PDCA and AHMP is also discussed. The PT energy barrier between the two extremes is calculated using a potential energy surface (PES) scan by varying the bond length. The intermolecular interactions have been analyzed from the theoretical perspective of natural bond orbital (NBO) analysis. In addition, the interaction energy between the molecular fragments involved in complex formation has also been computed by the counterpoise procedure at the same level of theory.
Composition of web services using Markov decision processes and dynamic programming.
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
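For readers unfamiliar with these dynamic-programming algorithms, a minimal value-iteration sketch on a random MDP follows; the WSC-specific state and action encoding from the paper is not reproduced, and all sizes are arbitrary:

    import numpy as np

    n_states, n_actions, gamma = 4, 2, 0.95
    rng = np.random.default_rng(0)
    P = rng.random((n_actions, n_states, n_states))
    P /= P.sum(axis=2, keepdims=True)       # P[a, s, t]: transition probabilities
    R = rng.random((n_states, n_actions))   # R[s, a]: expected one-step reward

    V = np.zeros(n_states)
    for _ in range(1000):
        Q = R + gamma * np.einsum("ast,t->sa", P, V)   # Q[s, a] backup
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:           # converged
            break
        V = V_new
    print("values:", V, "greedy policy:", Q.argmax(axis=1))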
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods in that it is neither too greedy nor too slow. It is closely related to L1 norm optimization, which has the advantage of low prediction variance, sacrificing part of the model bias property in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models, with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes, and direct matrix inversions are thereby avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency compared with the original approach, in which the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
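To see the kind of term selection least angle regression delivers, a quick scikit-learn demonstration follows; the recursive algorithm proposed in the paper is not part of scikit-learn, so this uses the library's standard Lars on synthetic data:

    import numpy as np
    from sklearn.linear_model import Lars

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 10))      # 10 candidate model terms
    y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + 0.1 * rng.standard_normal(200)

    model = Lars(n_nonzero_coefs=2).fit(X, y)
    print("selected terms:", np.flatnonzero(model.coef_))   # expect [2 7]
    print("coefficients:", model.coef_[model.coef_ != 0])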
Manning, Brendan D
2012-07-10
In their study published in Science Signaling (Research Article, 27 March 2012, DOI: 10.1126/scisignal.2002469), Dalle Pezze et al. tackle the dynamic and complex wiring of the signaling network involving the protein kinase mTOR, which exists within two distinct protein complexes (mTORC1 and mTORC2) that differ in their regulation and function. The authors use a combination of immunoblotting for specific phosphorylation events and computational modeling. The primary experimental tool employed is to monitor the autophosphorylation of mTOR on Ser(2481) in cell lysates as a surrogate for mTOR activity, which the authors conclude is a specific readout for mTORC2. However, Ser(2481) phosphorylation occurs on both mTORC1 and mTORC2 and will dynamically change as the network through which these two complexes are connected is manipulated. Therefore, models of mTOR network regulation built using this tool are inherently imperfect and open to alternative explanations. Specific issues with the main conclusion made in this study, involving the TSC1-TSC2 (tuberous sclerosis complex 1 and 2) complex and its potential regulation of mTORC2, are discussed here. A broader goal of this Letter is to clarify to other investigators the caveats of using mTOR Ser(2481) phosphorylation in cell lysates as a specific readout for either of the two mTOR complexes.
Paranasal sinuses and nasopharynx CT and MRI.
Sievers, K W; Greess, H; Baum, U; Dobritz, M; Lenz, M
2000-03-01
Neoplastic disease of the nose, paranasal sinuses, the nasopharynx and the parapharyngeal space requires thorough assessment of location and extent in order to plan appropriate treatment. CT allows the deep soft tissue planes to be evaluated and provides a complement to the physical examination. It is especially helpful in regions involving thin bony structures (paranasal sinuses, orbita); here CT performs better than MRI. MRI possesses many advantages over other imaging modalities owing to its excellent tissue contrast. In evaluating regions involving predominantly soft tissue structures (e.g. the nasopharynx and parapharyngeal space), MRI is superior to CT. The possibility of obtaining strictly consecutive volume data sets with spiral CT or 3D MRI offers excellent perspectives for visualizing the data via 2D or 3D postprocessing. Because head and neck tumors reside in a complex area, having a 3D model of the anatomical features may assist in the delineation of pathology. Data sets may be transferred directly into computer systems and thus be used in computer-assisted surgery.
Involvement of Mossy Cells in Sharp Wave-Ripple Activity In Vitro.
Swaminathan, Aarti; Wichert, Ines; Schmitz, Dietmar; Maier, Nikolaus
2018-05-29
The role of mossy cells (MCs) of the hippocampal dentate area has long remained mysterious. Recent research has begun to unveil their significance in spatial computation of the hippocampus. Here, we used an in vitro model of sharp wave-ripple complexes (SWRs), which contribute to hippocampal memory formation, to investigate MC involvement in this fundamental population activity. We find that a significant fraction of MCs (∼47%) is recruited into the active neuronal network during SWRs in the CA3 area. Moreover, MCs receive pronounced, ripple-coherent, excitatory and inhibitory synaptic input. Finally, we find evidence for SWR-related synaptic activity in granule cells that is mediated by MCs. Given the widespread connectivity of MCs within and between hippocampi, our data suggest a role for MCs as a hub functionally coupling the CA3 and the DG during ripple-associated computations. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Computational studies of complexation of nitrous oxide by borane-phosphine frustrated Lewis pairs.
Gilbert, Thomas M
2012-08-14
Computational studies of complexes Ar(3)B-ONN-PR(3) derived from reactions between borane-phosphine frustrated Lewis pairs and N(2)O reveal several interesting facets. Natural resonance theory calculations support a change in the preferred resonance structure as the Lewis acidity of the borane increases. Potential constitutional isomers where phosphorus binds to oxygen and boron to nitrogen are predicted to be unstable with respect to loss of phosphine oxide and free N(2). Other constitutional isomers represent stationary points on the potential energy surface; most are considerably less stable than the observed complexes, but one is predicted to be as stable. This arises because the dominant resonance form combines alternating charge with the presence of a stabilizing NO double bond. The relationship between Lewis acidity and complex formation for a variety of boranes was explored; the results are consistent with the idea that greater Lewis acidity stabilizes both classical and frustrated Lewis acid-base pairs, but to differing degrees, such that both types can entrap N(2)O. Calculations addressing the mechanism of complex formation suggest that N(2)O binds first through the nitrogen to the phosphine phosphorus of the FLP, whereupon boron coordinates the oxygen atom. Studies of the mechanism of the degenerate exchange reaction between (4-F-H(4)C(6))(3)B-ONN-P(t-Bu)(3) and B(C(6)H(4)-4-F)(3) indicate that the exchange involves a "transition state" with relatively short B-O distances, and so resembles a classical I(a) process. The process involves two barriers, one associated with bringing the incoming borane into proximity with the oxygen, and the other associated with isomerising from a ladle-shaped cis-trans ct conformer to the observed trans-trans tt-type structure. The overall barrier for degenerate exchange was predicted to be between 65 and 110 kJ mol(-1), in fair agreement with experiment. Similar studies of the reaction between (4-F-H(4)C(6))(3)B-ONN-P(t-Bu)(3) and B(C(6)F(5))(3) indicate that this process more closely resembles a classical I(d) process, in that the "transition state" involves long B-O distances. Derivatization of the complexed NNO fragment appears possible; the interaction between (F(5)C(6))(3)B-ONN-P(t-Bu)(3) and MeLi suggests stability for the ion pairs (F(5)C(6))(3)B-ON(Me)N-P(t-Bu)(3)(-)/Li(+) and (F(5)C(6))(3)B-ONN(Me)-P(t-Bu)(3)(-)/Li(+).
Simulation of complex pharmacokinetic models in Microsoft Excel.
Meineke, Ingolf; Brockmöller, Jürgen
2007-12-01
With the arrival of powerful personal computers in the office, numerical methods are accessible to everybody. Simulation of complex processes has therefore become an indispensable tool in research and education. In this paper, Microsoft EXCEL is used as a platform for a universal differential equation solver. The software is designed as an add-in aiming at a minimum of required user input to perform a given task. Four examples are included to demonstrate both the simplicity of use and the versatility of possible applications. While the layout of the program is admittedly geared to the needs of pharmacokineticists, it can be used in any field where sets of differential equations are involved. The software package is available upon request.
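The kind of differential equation set such a solver handles is compact. For comparison, here is a one-compartment oral-absorption model, a standard textbook system, integrated in Python rather than Excel; all parameter values are hypothetical:

    import numpy as np
    from scipy.integrate import odeint

    # dA_gut/dt = -ka * A_gut ; dC/dt = ka * A_gut / V - ke * C
    ka, ke, V, dose = 1.2, 0.25, 40.0, 500.0   # 1/h, 1/h, L, mg (made up)

    def pk(y, t):
        a_gut, c = y
        return [-ka * a_gut, ka * a_gut / V - ke * c]

    t = np.linspace(0, 24, 97)                 # hours
    conc = odeint(pk, [dose, 0.0], t)[:, 1]    # plasma concentration, mg/L
    print("Cmax %.2f mg/L at t = %.1f h" % (conc.max(), t[conc.argmax()]))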
Modeling and Visualizing Flow of Chemical Agents Across Complex Terrain
NASA Technical Reports Server (NTRS)
Kao, David; Kramer, Marc; Chaderjian, Neal
2005-01-01
Release of chemical agents across complex terrain presents a real threat to homeland security. Modeling and visualization tools are being developed that capture fluid flow-terrain interaction as well as point-dispersal downstream flow paths. These analytic tools, when coupled with UAV atmospheric observations, provide predictive capabilities to allow for rapid emergency response as well as for developing a comprehensive preemptive counter-threat evacuation plan. The visualization tools involve high-end computing and massively parallel processing combined with texture mapping. We demonstrate our approach across a mountainous portion of Northern California under two contrasting meteorological conditions. Animations depicting flow over this geographical location provide immediate assistance in decision support and crisis management.
Complexity Bounds for Quantum Computation
2007-06-22
Trustees of Boston University, Boston, MA 02215. ABSTRACT: This project focused on upper and lower bounds for quantum computability using constant... classical computation models, particularly emphasizing new examples of where quantum circuits are more powerful than their classical counterparts. A second...
A HISTORICAL PERSPECTIVE OF NUCLEAR THERMAL HYDRAULICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
D’Auria, F; Rohatgi, Upendra S.
The nuclear thermal-hydraulics discipline was developed following the design and safety needs of nuclear power plants (NPPs) and, to a more limited extent, research reactors (RRs). As in all other fields where analytical methods are involved, nuclear thermal-hydraulics benefited from the development of computers. Thermodynamics, rather than fluid dynamics, is at the basis of the development of nuclear thermal-hydraulics, together with experiments in complex two-phase situations involving geometry, high thermal density, and pressure.
2016-04-01
The flow fields associated with these control mechanisms for US Army weapons are complex, involving 3-dimensional (3-D) shock-boundary layer interactions... distribution over the rear finned section and thus produce control forces and moments. Dykes et al.6 used a flat-plate fin interaction design of... cells (tetrahedrals, triangular prisms, and pyramids) were used in the mesh. Grid points shown in Fig. 3a were clustered in the boundary layer region.
Vision-related problems among the workers engaged in jewellery manufacturing.
Salve, Urmi Ravindra
2015-01-01
The American Optometric Association defines Computer Vision Syndrome (CVS) as a "complex of eye and vision problems related to near work which are experienced during or related to computer use." This happens when the visual demand of the task exceeds the visual ability of the user. Although these problems were initially attributed to computer-related activities, similar problems have subsequently been reported during any near-point task. Jewellery manufacturing involves precision design and the setting of tiny metals and stones, which requires high visual attention and mental concentration and is often near-point work. It is therefore expected that workers engaged in jewellery manufacturing may also experience CVS-like symptoms. Keeping the above in mind, this study was taken up (1) to identify the prevalence of CVS-like symptoms among workers in jewellery manufacturing and compare them with workers at computer workstations, and (2) to ascertain whether such symptoms entail any permanent vision-related problems. Case control study. The study was carried out in the Zaveri Bazaar region and at an IT-enabled organization in Mumbai. The study involved the identification of symptoms of CVS using a questionnaire from the Eye Strain Journal, ophthalmological check-ups and measurement of spontaneous eye blink rate. The data obtained from jewellery manufacturing were compared with the data of subjects engaged in computer work and with data available in the literature. Comparative inferential statistics were used. Results showed that the visual demands of the tasks carried out in jewellery manufacturing were much higher than those of computer-related work.
Minimally complex ion traps as modules for quantum communication and computing
NASA Astrophysics Data System (ADS)
Nigmatullin, Ramil; Ballance, Christopher J.; de Beaudrap, Niel; Benjamin, Simon C.
2016-10-01
Optically linked ion traps are promising as components of network-based quantum technologies, including communication systems and modular computers. Experimental results achieved to date indicate that the fidelity of operations within each ion trap module will be far higher than the fidelity of operations involving the links; fortunately internal storage and processing can effectively upgrade the links through the process of purification. Here we perform the most detailed analysis to date on this purification task, using a protocol which is balanced to maximise fidelity while minimising the device complexity and the time cost of the process. Moreover we ‘compile down’ the quantum circuit to device-level operations including cooling and shuttling events. We find that a linear trap with only five ions (two of one species, three of another) can support our protocol while incorporating desirable features such as global control, i.e. laser control pulses need only target an entire zone rather than differentiating one ion from its neighbour. To evaluate the capabilities of such a module we consider its use both as a universal communications node for quantum key distribution, and as the basic repeating unit of a quantum computer. For the latter case we evaluate the threshold for fault tolerant quantum computing using the surface code, finding acceptable fidelities for the ‘raw’ entangling link as low as 83% (or under 75% if an additional ion is available).
Eigenvector centrality for geometric and topological characterization of porous media
NASA Astrophysics Data System (ADS)
Jimenez-Martinez, Joaquin; Negre, Christian F. A.
2017-07-01
Solving flow and transport through complex geometries such as porous media is computationally difficult. Such calculations usually involve the solution of a system of discretized differential equations, which could lead to extreme computational cost depending on the size of the domain and the accuracy of the model. Geometric simplifications like pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models, despite their ability to preserve the connectivity of the medium, have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Nonetheless, network theory approaches, where a complex network is a graph, can help to simplify and better understand fluid dynamics and transport in porous media. Here we present an alternative method to address these issues based on eigenvector centrality, which has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction to address the flow and transport anisotropy in porous media. We compare the model predictions with millifluidic transport experiments, which shows that, albeit simple, this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. We propose to use the eigenvector centrality probability distribution to compute the entropy as an indicator of the "mixing capacity" of the system.
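Eigenvector centrality itself is cheap to compute; a power-iteration sketch on a toy pore network follows. The adjacency matrix is invented, and the directional-bias and anti-centralization corrections described above are not included:

    import numpy as np

    # Four pores; A[i, j] = 1 if pores i and j share a throat.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)

    x = np.ones(A.shape[0])
    for _ in range(200):
        x_new = A @ x                       # one power-iteration step
        x_new /= np.linalg.norm(x_new)
        if np.allclose(x_new, x, atol=1e-12):
            break
        x = x_new
    print("centralities:", x / x.sum())     # pores 1 and 2 rank highest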
A Statistician's View of Upcoming Grand Challenges
NASA Astrophysics Data System (ADS)
Meng, Xiao Li
2010-01-01
In this session we have seen some snapshots of the broad spectrum of challenges in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates. What would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are these non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is really the 'black-box' problem, where one has a complicated, e.g. fundamental physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changes in the output. All of these connect to challenges in complexity of data and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. As well, there are cautionary tales of running automated analysis on real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.
Iterative Demodulation and Decoding of Non-Square QAM
NASA Technical Reports Server (NTRS)
Li, Lifang; Divsalar, Dariush; Dolinar, Samuel
2004-01-01
It has been shown that a non-square (NS) 2^(2n+1)-ary (where n is a positive integer) quadrature amplitude modulation [(NS)2^(2n+1)-QAM] has inherent memory that can be exploited to obtain coding gains. Moreover, it should not be necessary to build new hardware to realize these gains. The present scheme is a product of theoretical calculations directed toward reducing the computational complexity of decoding coded 2^(2n+1)-QAM. In the general case of 2^(2n+1)-QAM, the signal constellation is not square and it is impossible to have independent in-phase (I) and quadrature-phase (Q) mapping and demapping. However, independent I and Q mapping and demapping are desirable for reducing the complexity of computing the log likelihood ratio (LLR) between a bit and a received symbol (such computations are essential operations in iterative decoding). This is because in modulation schemes that include independent I and Q mapping and demapping, each bit of a signal point is involved in only one-dimensional mapping and demapping. As a result, the computation of the LLR is equivalent to that of a one-dimensional pulse amplitude modulation (PAM) system. Therefore, it is desirable to find a signal constellation that enables independent I and Q mapping and demapping for 2^(2n+1)-QAM.
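The one-dimensional LLR computation that this reduction buys is short. A sketch for Gray-mapped 4-PAM over an AWGN channel follows; this is illustrative only, since the actual NS-QAM constellation and mapping in the scheme are more involved:

    import numpy as np

    levels = np.array([-3.0, -1.0, 1.0, 3.0])          # 4-PAM amplitudes
    bits = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # Gray bit labels

    def llr(r, bit_index, sigma):
        # log [ P(r | bit = 0) / P(r | bit = 1) ] over the 1-D constellation
        metric = np.exp(-(r - levels) ** 2 / (2 * sigma ** 2))
        num = metric[bits[:, bit_index] == 0].sum()
        den = metric[bits[:, bit_index] == 1].sum()
        return np.log(num / den)

    print(llr(0.8, 0, sigma=0.5))   # negative: first bit most likely 1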
NASA Technical Reports Server (NTRS)
Reuther, James; Jameson, Antony; Alonso, Juan Jose; Rimlinger, Mark J.; Saunders, David
1997-01-01
An aerodynamic shape optimization method that treats the design of complex aircraft configurations subject to high fidelity computational fluid dynamics (CFD), geometric constraints and multiple design points is described. The design process will be greatly accelerated through the use of both control theory and distributed memory computer architectures. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on a higher order CFD method. In order to facilitate the integration of these high fidelity CFD approaches into future multi-disciplinary optimization (MDO) applications, new methods must be developed which are capable of simultaneously addressing complex geometries, multiple objective functions, and geometric design constraints. In our earlier studies, we coupled the adjoint based design formulations with unconstrained optimization algorithms and showed that the approach was effective for the aerodynamic design of airfoils, wings, wing-bodies, and complex aircraft configurations. In many of the results presented in these earlier works, geometric constraints were satisfied either by a projection into feasible space or by posing the design space parameterization such that it automatically satisfied constraints. Furthermore, with the exception of reference 9 where the second author initially explored the use of multipoint design in conjunction with adjoint formulations, our earlier works have focused on single point design efforts. Here we demonstrate that the same methodology may be extended to treat complete configuration designs subject to multiple design points and geometric constraints. Examples are presented for both transonic and supersonic configurations ranging from wing alone designs to complex configuration designs involving wing, fuselage, nacelles and pylons.
NASA Technical Reports Server (NTRS)
Bogdanoff, J. L.; Kayser, K.; Krieger, W.
1977-01-01
The paper describes convergence and response studies in the low frequency range of complex systems, particularly with low values of damping of different distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped parameter linear systems under random or deterministic steady state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. For a 45-degree-of-freedom system and two relaxation procedures, convergence studies and frequency response estimates were performed. The low frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid to high frequency range.
NASA Astrophysics Data System (ADS)
Zobnina, V. G.; Kosevich, M. V.; Chagovets, V. V.; Boryak, O. A.
The problem of elucidating the structure of nanomaterials based on combinations of proteins and polyether polymers is addressed at the monomeric level of single amino acids and oligomers of the PEG-400 and OEG-5 polyethers. The efficiency of a combined approach involving experimental electrospray mass spectrometry and computer modeling by molecular dynamics simulation is demonstrated. It is shown that oligomers of polyethers form stable complexes with the amino acids valine, proline, histidine, glutamic acid, and aspartic acid. Molecular dynamics simulation has shown that stabilization of amino acid-polyether complexes is achieved through winding of the polymeric chain around the charged groups of the amino acids. The structural motifs revealed for complexes of single amino acids with polyethers can be realized in the structures of protein-polyether nanoparticles currently designed for drug delivery.
Experimental and computational fluid dynamic studies of mixing for complex oral health products
NASA Astrophysics Data System (ADS)
Garcia, Marti Cortada; Mazzei, Luca; Angeli, Panagiota
2015-11-01
Mixing highly viscous non-Newtonian fluids is common in the consumer health industry. The process is often empirical and involves many pilot plant trials which are product specific. The first step in studying the mixing process is to build up knowledge of the rheology of the fluids involved. In this research a systematic approach is used to validate the rheology of two liquids: glycerol and a gel formed by polyethylene glycol and carbopol. Initially, the constitutive equation is determined, which relates the viscosity of the fluids to temperature, shear rate, and concentration. The key variable for the validation is the power required for mixing, which can be obtained both from CFD and experimentally using a stirred tank and impeller of well-defined geometries at different impeller speeds. A good agreement between the two values indicates a successful validation of the rheology and allows the CFD model to be used for the study of mixing in the complex vessel geometries and increased sizes encountered during scale up.
Moncho, Salvador; Autschbach, Jochen
2010-01-12
A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.
Construction of an advanced software tool for planetary atmospheric modeling
NASA Technical Reports Server (NTRS)
Friedland, Peter; Keller, Richard M.; Mckay, Christopher P.; Sims, Michael H.; Thompson, David E.
1993-01-01
Scientific model-building can be a time intensive and painstaking process, often involving the development of large complex computer programs. Despite the effort involved, scientific models cannot be distributed easily and shared with other scientists. In general, implemented scientific models are complicated, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing, using and sharing models. The proposed tool will include an interactive intelligent graphical interface and a high-level domain-specific modeling language. As a testbed for this research, we propose to develop a software prototype in the domain of planetary atmospheric modeling.
NASA Technical Reports Server (NTRS)
Karl, D. R.
1972-01-01
An evaluation was made of the feasibility of utilizing a simplified man machine interface concept to manage and control a complex space system involving multiple redundant computers that control multiple redundant subsystems. The concept involves the use of a CRT for display and a simple keyboard for control, with a tree-type control logic for accessing and controlling mission, systems, and subsystem elements. The concept was evaluated in terms of the Phase B space shuttle orbiter, to utilize the wide scope of data management and subsystem control inherent in the central data management subsystem provided by the Phase B design philosophy. Results of these investigations are reported in four volumes.
Construction of an advanced software tool for planetary atmospheric modeling
NASA Technical Reports Server (NTRS)
Friedland, Peter; Keller, Richard M.; Mckay, Christopher P.; Sims, Michael H.; Thompson, David E.
1992-01-01
Scientific model-building can be a time intensive and painstaking process, often involving the development of large complex computer programs. Despite the effort involved, scientific models cannot be distributed easily and shared with other scientists. In general, implemented scientific models are complicated, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing, using and sharing models. The proposed tool will include an interactive intelligent graphical interface and a high-level domain-specific modeling language. As a test bed for this research, we propose to develop a software prototype in the domain of planetary atmospheric modeling.
Expert-guided evolutionary algorithm for layout design of complex space stations
NASA Astrophysics Data System (ADS)
Qian, Zhiqin; Bi, Zhuming; Cao, Qun; Ju, Weiguo; Teng, Hongfei; Zheng, Yang; Zheng, Siyu
2017-08-01
The layout of a space station should be designed in such a way that different equipment and instruments are placed for the station as a whole to achieve the best overall performance. The station layout design is a typical nondeterministic polynomial problem. In particular, how to manage the design complexity to achieve an acceptable solution within a reasonable timeframe poses a great challenge. In this article, a new evolutionary algorithm is proposed to meet this challenge, called the expert-guided evolutionary algorithm with tree-like structure decomposition (EGEA-TSD). Two innovations in EGEA-TSD are: (i) to deal with the design complexity, the entire design space is divided into subspaces with a tree-like structure, which reduces the computation and facilitates experts' involvement in the solving process; (ii) a human-intervention interface is developed to allow experts' involvement in avoiding local optima and accelerating convergence. To validate the proposed algorithm, the layout design of a space station is formulated as a multi-disciplinary design problem, the developed algorithm is programmed and executed, and the result is compared with those from two other algorithms, illustrating the superior performance of the proposed EGEA-TSD.
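For orientation, a bare-bones evolutionary loop for a layout-style placement problem is sketched below; EGEA-TSD's tree decomposition and expert-intervention interface are not modelled, and the genome, fitness function and parameters are all invented:

    import numpy as np

    rng = np.random.default_rng(2)

    def fitness(g):                      # reward pairwise spacing up to 0.3
        p = g.reshape(-1, 2)             # five items, (x, y) each
        d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
        return np.minimum(d[np.triu_indices(len(p), 1)], 0.3).sum()

    pop = rng.random((40, 10))           # 40 candidate layouts
    for _ in range(200):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-20:]]              # truncation selection
        children = parents + 0.05 * rng.standard_normal(parents.shape)
        pop = np.vstack([parents, np.clip(children, 0, 1)])  # mutate and re-fill
    print("best spacing score:", max(fitness(g) for g in pop))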
Grid Convergence of High Order Methods for Multiscale Complex Unsteady Viscous Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
Grid convergence of several high order methods for the computation of rapidly developing complex unsteady viscous compressible flows with a wide range of physical scales is studied. The recently developed adaptive numerical dissipation control high order methods, referred to as the ACM and wavelet filter schemes, are compared with a fifth-order weighted ENO (WENO) scheme. The two 2-D compressible full Navier-Stokes models considered have no known analytical solutions or experimental data. Fine grid solutions from a standard second-order TVD scheme and a MUSCL scheme with limiters are used as reference solutions. The first model is a 2-D viscous analogue of a shock tube problem which involves complex shock/shear/boundary-layer interactions. The second model is a supersonic reactive flow concerning fuel breakup. The fuel mixing involves circular hydrogen bubbles in air interacting with a planar moving shock wave. Both models contain fine scale structures and are stiff in the sense that, even though the unsteadiness of the flows is rapidly developing, extreme grid refinement and time step restrictions are needed to resolve all the flow scales as well as the chemical reaction scales.
Free energy component analysis for drug design: a case study of HIV-1 protease-inhibitor binding.
Kalra, P; Reddy, T V; Jayaram, B
2001-12-06
A theoretically rigorous and computationally tractable methodology for the prediction of the free energies of binding of protein-ligand complexes is presented. The method formulated involves developing molecular dynamics trajectories of the enzyme, the inhibitor, and the complex, followed by a free energy component analysis that conveys information on the physicochemical forces driving the protein-ligand complex formation and enables an elucidation of drug design principles for a given receptor from a thermodynamic perspective. The complexes of HIV-1 protease with two peptidomimetic inhibitors were taken as illustrative cases. Four-nanosecond-level all-atom molecular dynamics simulations using explicit solvent without any restraints were carried out on the protease-inhibitor complexes and the free proteases, and the trajectories were analyzed via a thermodynamic cycle to calculate the binding free energies. The computed free energies were seen to be in good accord with the reported data. It was noted that the net van der Waals and hydrophobic contributions were favorable to binding while the net electrostatics, entropies, and adaptation expense were unfavorable in these protease-inhibitor complexes. The hydrogen bond between the CH2OH group of the inhibitor at the scissile position and the catalytic aspartate was found to be favorable to binding. Various implicit solvent models were also considered and their shortcomings discussed. In addition, some plausible modifications to the inhibitor residues were attempted, which led to better binding affinities. The generality of the method and the transferability of the protocol with essentially no changes to any other protein-ligand system are emphasized.
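The thermodynamic-cycle bookkeeping behind the method is plain arithmetic: assemble the net binding free energy from per-component (complex minus free species) contributions. Every number below is invented, purely to show the sign pattern the abstract reports (favorable van der Waals and hydrophobic terms; unfavorable electrostatics, entropy and adaptation):

    # Net component contributions to binding, kcal/mol (hypothetical values).
    components = {
        "van_der_Waals": -18.0,   # favorable
        "hydrophobic":    -9.5,   # favorable
        "electrostatics": +6.0,   # net unfavorable
        "entropy":        +8.0,   # unfavorable
        "adaptation":     +3.0,   # strain on binding, unfavorable
    }
    dG_bind = sum(components.values())
    print("predicted dG_bind = %.1f kcal/mol" % dG_bind)   # -10.5 here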
Modeling of Wildlife-Associated Zoonoses: Applications and Caveats
Lewis, Bryan L.; Marathe, Madhav; Eubank, Stephen; Blackburn, Jason K.
2012-01-01
Wildlife species are identified as an important source of emerging zoonotic disease. Accordingly, public health programs have attempted to expand in scope to include a greater focus on wildlife and its role in zoonotic disease outbreaks. Zoonotic disease transmission dynamics involving wildlife are complex and nonlinear, presenting a number of challenges. First, empirical characterization of wildlife host species and pathogen systems are often lacking, and insight into one system may have little application to another involving the same host species and pathogen. Pathogen transmission characterization is difficult due to the changing nature of population size and density associated with wildlife hosts. Infectious disease itself may influence wildlife population demographics through compensatory responses that may evolve, such as decreased age to reproduction. Furthermore, wildlife reservoir dynamics can be complex, involving various host species and populations that may vary in their contribution to pathogen transmission and persistence over space and time. Mathematical models can provide an important tool to engage these complex systems, and there is an urgent need for increased computational focus on the coupled dynamics that underlie pathogen spillover at the human–wildlife interface. Often, however, scientists conducting empirical studies on emerging zoonotic disease do not have the necessary skill base to choose, develop, and apply models to evaluate these complex systems. How do modeling frameworks differ and what considerations are important when applying modeling tools to the study of zoonotic disease? Using zoonotic disease examples, we provide an overview of several common approaches and general considerations important in the modeling of wildlife-associated zoonoses. PMID:23199265
Long non-coding RNAs and complex diseases: from experimental results to computational models.
Chen, Xing; Yan, Chenggang Clarence; Zhang, Xu; You, Zhu-Hong
2017-07-01
LncRNAs have attracted much attention from researchers worldwide in recent decades. With rapid advances in both experimental technology and computational prediction algorithms, thousands of lncRNAs have been identified in eukaryotic organisms ranging from nematodes to humans in the past few years. A growing body of evidence indicates that lncRNAs are involved in almost the whole life cycle of cells through different mechanisms and play important roles in many critical biological processes. Therefore, it is not surprising that mutations and dysregulation of lncRNAs contribute to the development of various human complex diseases. In this review, we first give a brief introduction to the functions of lncRNAs, five important lncRNA-related diseases, five critical disease-related lncRNAs and some important publicly available lncRNA-related databases covering sequence, expression, function, etc. Nowadays, only a limited number of lncRNAs have been experimentally reported to be related to human diseases. Therefore, analyzing available lncRNA-disease associations and predicting potential human lncRNA-disease associations have become important tasks of bioinformatics, which would benefit understanding of human complex disease mechanisms at the lncRNA level, disease biomarker detection and disease diagnosis, treatment, prognosis and prevention. Furthermore, we introduce some state-of-the-art computational models, which could be effectively used to identify disease-related lncRNAs on a large scale and select the most promising disease-related lncRNAs for experimental validation. We also analyze the limitations of these models and discuss future directions for developing computational models for lncRNA research. © The Author 2016. Published by Oxford University Press.
Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems
NASA Technical Reports Server (NTRS)
Dahl, Milo D. (Editor)
2004-01-01
This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with the comparisons of their solutions to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows. Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a vortical gust interacting with an airfoil. Category 4: Sound Transmission and Radiation. Category 5: Sound Generation in Viscous Problems. Sound is generated under certain conditions by a viscous flow as the flow passes an object or a cavity.
Harris, C; Straker, L; Pollock, C
2013-01-01
Young people are exposed to a range of information technologies (IT) in different environments, including home and school; however, the factors influencing IT use in these environments are poorly understood. The aim of this study was to investigate young people's computer exposure patterns at home and school, and related factors such as age, gender and the types of IT used. A total of 1351 children in Years 1, 6, 9 and 11 from 10 schools in metropolitan Western Australia were surveyed. Most children had access to computers at home and school, with computer exposures comparable to TV, reading and writing. Total computer exposure was greater at home than at school, and increased with age. Computer activities varied with age and gender and became more social with increasing age, while at the same time parental involvement decreased. Bedroom computer use was found to result in higher exposure patterns. High use of home computers and high use of school computers were associated with each other. Associations varied depending on the type of IT exposure measure (frequency, mean weekly hours, usual and longest duration). The frequency and duration of children's computer exposure were associated with a complex interplay of the environment of use, the participant's age and gender, and other IT activities.
Challenging Density Functional Theory Calculations with Hemes and Porphyrins
de Visser, Sam P.; Stillman, Martin J.
2016-01-01
In this paper we review recent advances in computational chemistry, focusing specifically on the chemical description of heme proteins and synthetic porphyrins that act both as mimics of natural processes and in technological applications. These are challenging biochemical systems involved in electron transfer as well as biocatalysis processes. In recent years computational tools have improved considerably and can now reproduce experimental spectroscopic and reactivity studies within a reasonable error margin (several kcal·mol−1). This paper gives recent examples from our groups, where we investigated heme and synthetic metal-porphyrin systems. The four case studies highlight how computational modelling can correctly reproduce experimental product distributions, predict reactivity trends, and guide the interpretation of electronic structures of complex systems. The case studies focus on the calculation of a variety of spectroscopic features of porphyrins and show how computational modelling gives important insight that explains the experimental spectra and can lead to the design of porphyrins with tuned properties. PMID:27070578
Computer considerations for real time simulation of a generalized rotor model
NASA Technical Reports Server (NTRS)
Howe, R. M.; Fogarty, L. E.
1977-01-01
Scaled equations were developed to meet requirements for real-time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real-time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated, along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (these constitute the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.
NASA Technical Reports Server (NTRS)
Reddy, C. J.; Deshpande, M. D.; Cockrell, C. R.; Beck, F. B.
2004-01-01
The hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique has become popular over the last few years due to its flexibility to handle arbitrarily shaped objects with complex materials. One of the disadvantages of this technique, however, is the computational cost involved in obtaining solutions over a frequency range, as computations are repeated for each frequency. In this paper, the application of the Model Based Parameter Estimation (MBPE) method [1] with the hybrid FEM/MoM technique is presented for fast computation of the frequency response of cavity-backed apertures [2,3]. In MBPE, the electric field is expanded in a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is calculated at different frequencies, from which the frequency response is obtained.
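As an illustrative sketch of the rational-function idea (the paper derives the coefficients from frequency derivatives of the FEM/MoM equations, whereas this toy fits them to a few sampled response values), the following Python fragment reconstructs a resonant response over a dense frequency grid from a handful of expensive solves; the response function and polynomial orders are invented for the example.

```python
import numpy as np

def fit_rational(f, H, p_ord, q_ord):
    """Fit H(f) ~ P(f)/Q(f) by linear least squares with q0 normalized to 1.
    Multiplying through by Q(f) gives P(f) - H(f)*(Q(f) - 1) = H(f)."""
    Ap = f[:, None] ** np.arange(p_ord + 1)                  # columns f^0 .. f^p
    Aq = -H[:, None] * f[:, None] ** np.arange(1, q_ord + 1) # columns -H f^1 .. -H f^q
    c, *_ = np.linalg.lstsq(np.hstack([Ap, Aq]), H, rcond=None)
    return c[:p_ord + 1], np.concatenate([[1.0], c[p_ord + 1:]])

def eval_rational(f, p, q):
    # np.polyval expects highest-degree coefficient first, hence the reversal
    return np.polyval(p[::-1], f) / np.polyval(q[::-1], f)

# Hypothetical resonant response sampled at a handful of frequencies,
# standing in for repeated full FEM/MoM solves
f_s = np.linspace(1.0, 3.0, 7)
H_s = 1.0 / (1.0 + 100.0 * (f_s - 2.0) ** 2)
p, q = fit_rational(f_s, H_s, p_ord=2, q_ord=2)
f_dense = np.linspace(1.0, 3.0, 201)
H_fast = eval_rational(f_dense, p, q)   # cheap sweep between expensive solves
```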
Integrating computational methods to retrofit enzymes to synthetic pathways.
Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula
2012-02-01
Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.
On the computation of molecular surface correlations for protein docking using fourier techniques.
Sakk, Eric
2007-08-01
The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
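As a minimal illustration of the circular-versus-linear distinction analyzed above, the following NumPy sketch shows how a correlation computed from unpadded DFTs picks up wrap-around terms, and how zero-padding to length N+M-1 recovers the linear correlation; the 1-D arrays are toy stand-ins for the 3-D molecular surface grids used in docking.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])   # toy "receptor" surface signature
b = np.array([2.0, 1.0, 0.0, 1.0])   # toy "ligand" surface signature

# Unpadded DFT product: a cyclic (circular) correlation with wrap-around terms.
circ = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

# Zero-padding both signals to n >= len(a) + len(b) - 1 removes the
# wrap-around, so the same DFT product now yields the linear correlation.
n = len(a) + len(b) - 1
lin = np.fft.ifft(np.fft.fft(a, n) * np.conj(np.fft.fft(b, n))).real

# Reorder the FFT lags (negative lags are stored at the end) and cross-check
# against NumPy's direct linear correlation.
lin_full = np.concatenate([lin[-(len(b) - 1):], lin[:len(a)]])
assert np.allclose(lin_full, np.correlate(a, b, mode='full'))
```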
The Challenge of Big Data in Public Health: An Opportunity for Visual Analytics
Ola, Oluwakemi; Sedig, Kamran
2014-01-01
Public health (PH) data can generally be characterized as big data. The efficient and effective use of this data determines the extent to which PH stakeholders can sufficiently address societal health concerns as they engage in a variety of work activities. As stakeholders interact with data, they engage in various cognitive activities such as analytical reasoning, decision-making, interpreting, and problem solving. Performing these activities with big data is a challenge for the unaided mind as stakeholders encounter obstacles relating to the data’s volume, variety, velocity, and veracity. Such being the case, computer-based information tools are needed to support PH stakeholders. Unfortunately, while existing computational tools are beneficial in addressing certain work activities, they fall short in supporting cognitive activities that involve working with large, heterogeneous, and complex bodies of data. This paper presents visual analytics (VA) tools, a nascent category of computational tools that integrate data analytics with interactive visualizations, to facilitate the performance of cognitive activities involving big data. Historically, PH has lagged behind other sectors in embracing new computational technology. In this paper, we discuss the role that VA tools can play in addressing the challenges presented by big data. In doing so, we demonstrate the potential benefit of incorporating VA tools into PH practice, in addition to highlighting the need for further systematic and focused research. PMID:24678376
Measuring glomerular number from kidney MRI images
NASA Astrophysics Data System (ADS)
Thiagarajan, Jayaraman J.; Natesan Ramamurthy, Karthikeyan; Kanberoglu, Berkay; Frakes, David; Bennett, Kevin; Spanias, Andreas
2016-03-01
Measuring the glomerular number in the entire, intact kidney using non-destructive techniques is of immense importance in studying several renal and systemic diseases. Commonly used approaches either require destruction of the entire kidney or perform extrapolation from measurements obtained from a few isolated sections. A recent magnetic resonance imaging (MRI) method, based on the injection of a contrast agent (cationic ferritin), has been used to effectively identify glomerular regions in the kidney. In this work, we propose a robust, accurate, and low-complexity method for estimating the number of glomeruli from such kidney MRI images. The proposed technique has a training phase and a low-complexity testing phase. In the training phase, organ segmentation is performed on a few expert-marked training images, and glomerular and non-glomerular image patches are extracted. Using non-local sparse coding to compute similarity and dissimilarity graphs between the patches, the subspace in which the glomerular regions can be discriminated from the rest are estimated. For novel test images, the image patches extracted after pre-processing are embedded using the discriminative subspace projections. The testing phase is of low computational complexity since it involves only matrix multiplications, clustering, and simple morphological operations. Preliminary results with MRI data obtained from five kidneys of rats show that the proposed non-invasive, low-complexity approach performs comparably to conventional approaches such as acid maceration and stereology.
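A rough sketch of the low-complexity test phase in Python, assuming a discriminative projection matrix W and a reference glomerular embedding have already been learned offline (the paper derives these from non-local sparse coding graphs, which is not reproduced here); all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def count_glomerular_patches(patches, W, glom_centroid):
    """Test-phase sketch: one matrix multiply plus clustering.
    patches: (n_patches, n_pixels) array of pre-processed image patches;
    W: (n_pixels, d) discriminative subspace learned offline (assumed given);
    glom_centroid: (d,) reference embedding of known glomerular patches."""
    z = patches @ W                               # embed: matrix multiply only
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(z)
    # Pick the cluster whose mean lies closer to the reference embedding
    means = np.array([z[labels == c].mean(axis=0) for c in (0, 1)])
    glom = np.argmin(np.linalg.norm(means - glom_centroid, axis=1))
    return int(np.sum(labels == glom))            # glomerular patch count
```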
NASA Astrophysics Data System (ADS)
Gosai, Agnivo
The concomitant detection, monitoring and analysis of biomolecules have assumed utmost importance in the field of medical diagnostics as well as in different spheres of biotechnology research, such as drug development, environmental hazard detection and biodefense. There is an increased demand for the modulation of the biological response in such detection/sensing schemes, which will be facilitated by the sensitive and controllable transmission of external stimuli. Electrostatic actuation for the controlled release/capture of biomolecules through conformational transformations of bioreceptors provides an efficient and feasible mechanism to modulate biological response. In addition, the electrostatic actuation mechanism has the advantage of allowing massively parallel schemes and measurement capabilities that could ultimately be essential for biomedical applications. Experiments have previously demonstrated the unbinding of thrombin from its aptamer in the presence of a small positive electrode potential, whereas the complex remained associated in the presence of small negative or zero potentials. However, the nanoscale physics and chemistry involved in this process are not clearly understood. In this thesis, a combination of continuum mechanics based modeling and a variety of atomistic simulation techniques has been utilized to corroborate the aforementioned experimental observations. It is found that the computational approach can satisfactorily predict the dynamics of the electrically excited aptamer-thrombin complex as well as provide an analytical model to characterize the forced binding of the complex.
Hétu, Sébastien; Luo, Yi; D’Ardenne, Kimberlee; Lohrenz, Terry
2017-01-01
As models of shared expectations, social norms play an essential role in our societies. Since our social environment is changing constantly, our internal models of it also need to change. In humans, there is mounting evidence that neural structures such as the insula and the ventral striatum are involved in detecting norm violation and updating internal models. However, because of methodological challenges, little is known about the possible involvement of midbrain structures in detecting norm violation and updating internal models of our norms. Here, we used high-resolution cardiac-gated functional magnetic resonance imaging and a norm adaptation paradigm in healthy adults to investigate the role of the substantia nigra/ventral tegmental area (SN/VTA) complex in tracking signals related to norm violation that can be used to update internal norms. We show that the SN/VTA codes for the norm's variance prediction error (PE) and norm PE, with spatially distinct regions coding for negative and positive norm PE. These results point to a common role played by the SN/VTA complex in supporting both simple reward-based and social decision making. PMID:28981876
An Algorithm for Integrated Subsystem Embodiment and System Synthesis
NASA Technical Reports Server (NTRS)
Lewis, Kemper
1997-01-01
Consider the statement, 'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation do not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.
Infrared Laser Stark Spectroscopy and AB Initio Computations of the OH\\cdotsCO Complex
NASA Astrophysics Data System (ADS)
Liang, Tao; Raston, Paul; Douberly, Gary
2014-06-01
Following the sequential pick-up of OH and CO by helium nanodroplets, the infrared depletion spectrum is measured in the fundamental OH stretching region. Although several potentially accessible minima exist on the associated OH + CO reactive potential energy surface [e.g. J. Ma, J. Li, and H. Guo, J. Phys. Chem. Lett. 3 (2012) 2482], such as the weakly bound OH-OC dimer and the chemically bound HOCO molecule, we only observe the weakly bound OH-CO dimer. The rovibrational spectrum of this complex displays narrow (0.02 cm-1) Lorentzian shaped peaks with spacings that are characteristic of a linear complex with unquenched electronic angular momentum, similar to what was previously observed in the gas phase [M.I. Lester, B.V. Pond, D.T. Anderson, L.B. Harding, and A.F. Wagner, J. Chem. Phys. 113 (2000) 9889]. Analogous spectra involving OD were collected, for which we also only observe the OD-CO isomer. From the Stark spectra, the dipole moments for OH-CO are determined to be 1.85(3) and 1.89(3) D for v=0 and v=1, respectively, while the analogous dipole moments for OD-CO are determined to be 1.88(8) and 1.94(5) D. The computed equilibrium ground state dipole moment at the CCSD(T)/Def2-TZVPD level of theory is 2.185 D, in disagreement with experiment. The role of vibrational averaging is investigated via the solution of a three-dimensional vibrational Schrödinger equation, which is constructed in internal bond-angle coordinates. The computed expectation value of the ground state dipole moment is in excellent agreement with experiment, indicating a floppy molecular complex.
Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has been under constant, intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only under low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986
Kilambi, Krishna Praneeth; Pacella, Michael S; Xu, Jianqing; Labonte, Jason W; Porter, Justin R; Muthu, Pravin; Drew, Kevin; Kuroda, Daisuke; Schueler-Furman, Ora; Bonneau, Richard; Gray, Jeffrey J
2013-12-01
Rounds 20-27 of the Critical Assessment of PRotein Interactions (CAPRI) provided a testing platform for computational methods designed to address a wide range of challenges. The diverse targets drove the creation of new computational tools and of new combinations of existing ones. In this study, RosettaDock and other novel Rosetta protocols were used to successfully predict four of the 10 blind targets. For example, for the DNase domain of Colicin E2-Im2 immunity protein complex, RosettaDock and RosettaLigand were used to predict the positions of water molecules at the interface, recovering 46% of the native water-mediated contacts. For the α-repeat Rep4-Rep2 and g-type lysozyme-PliG inhibitor complexes, homology models were built, and standard and pH-sensitive docking algorithms were used to generate structures with interface RMSD values of 3.3 Å and 2.0 Å, respectively. A novel flexible sugar-protein docking protocol was also developed and used for structure prediction of the BT4661-heparin-like saccharide complex, recovering 71% of the native contacts. Challenges remain in the generation of accurate homology models for protein mutants and in sampling during global docking. On proteins designed to bind influenza hemagglutinin, only about half of the mutations that affect binding were identified (T55: 54%; T56: 48%). The prediction of the structure of the xylanase complex, involving homology modeling and multidomain docking, pushed the limits of global conformational sampling and did not result in any successful prediction. The diversity of problems at hand requires computational algorithms to be versatile; the recent additions to the Rosetta suite expand its capabilities to encompass more biologically realistic docking problems. Copyright © 2013 Wiley Periodicals, Inc.
Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.
Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe
2018-02-19
Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Several computational methods have therefore been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. We then combine the Min kernel, or its normalized form, with one of the pairwise kernels by plugging. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. We evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which was the best existing method so far. We propose new methods to predict heterodimers using a machine learning-based approach: we train a support vector machine (SVM) to discriminate interacting versus non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
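The following Python sketch illustrates the "plugging" idea on toy data: a normalized Min kernel between protein feature vectors is plugged as the base kernel into the tensor product pairwise kernel (TPPK), and the resulting Gram matrix is passed to an SVM. The feature vectors, pairs, and labels are invented for the example; the paper's actual feature sources and MLPK variant are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

def min_kernel(X):
    """Min (histogram-intersection style) kernel between nonnegative vectors."""
    return np.array([[np.minimum(x, y).sum() for y in X] for x in X])

def normalized(K):
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def tppk(K, pairs):
    """TPPK with the protein kernel K plugged in:
    K_pair((p,q),(r,s)) = K[p,r]*K[q,s] + K[p,s]*K[q,r]."""
    G = np.zeros((len(pairs), len(pairs)))
    for i, (p, q) in enumerate(pairs):
        for j, (r, s) in enumerate(pairs):
            G[i, j] = K[p, r] * K[q, s] + K[p, s] * K[q, r]
    return G

# Toy data: 6 proteins with nonnegative feature profiles, 4 candidate pairs
rng = np.random.default_rng(0)
X = rng.random((6, 10))
pairs = [(0, 1), (2, 3), (1, 4), (3, 5)]
y = [1, 1, 0, 0]                       # heterodimer vs. not (toy labels)

G = tppk(normalized(min_kernel(X)), pairs)
clf = SVC(kernel='precomputed').fit(G, y)
print(clf.predict(G))                  # predictions on the training pairs
```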
Thompson, Damien; Hermes, Jens P; Quinn, Aidan J; Mayor, Marcel
2012-04-24
The formation of true single-molecule complexes between organic ligands and nanoparticles is challenging and requires careful design of molecules with size, shape, and chemical properties tailored for the specific nanoparticle. Here we use computer simulations to describe the atomic-scale structure, dynamics, and energetics of ligand-mediated synthesis and interlinking of 1 nm gold clusters. The models help explain recent experimental results and provide insight into how multidentate thioether dendrimers can be employed for synthesis of true single-ligand-nanoparticle complexes and also nanoparticle-molecule-nanoparticle "dumbbell" nanostructures. Electronic structure calculations reveal the individually weak thioether-gold bonds (325 ± 36 meV), which act collectively through the multivalent (multisite) anchoring to stabilize the ligand-nanoparticle complex (∼7 eV total binding energy) and offset the conformational and solvation penalties involved in this "wrapping" process. Molecular dynamics simulations show that the dendrimer is sufficiently flexible to tolerate the strained conformations and desolvation penalties involved in fully wrapping the particle, quantifying the subtle balance between covalent anchoring and noncovalent wrapping in the assembly of ligand-nanoparticle complexes. The computed preference for binding of a single dendrimer to the cluster reveals the prohibitively high dendrimer desolvation barrier (1.5 ± 0.5 eV) to form the alternative double-dendrimer structure. Finally, the models show formation of an additional electron transfer channel between nitrogen and gold for ligands with a central pyridine unit, which gives a stiff binding orientation and explains the recently measured larger interparticle distances for particles synthesized and interlinked using linear ligands with a central pyridine rather than a benzene moiety. The findings stress the importance of organic-inorganic interactions, the control of which is central to the rational engineering and eventual large-scale production of functional building blocks for nano(bio)electronics.
[Computers in biomedical research: I. Analysis of bioelectrical signals].
Vivaldi, E A; Maldonado, P
2001-08-01
A personal computer equipped with an analog-to-digital conversion card is able to input, store and display signals of biomedical interest. These signals can additionally be submitted to ad hoc software for analysis and diagnosis. Data acquisition is based on sampling a signal at a given rate and amplitude resolution. The automation of signal processing involves syntactic aspects (data transduction, conditioning and reduction) and semantic aspects (feature extraction to describe and characterize the signal, and diagnostic classification). The analytical approach that underlies computer programming allows for the successful resolution of apparently complex tasks. Two basic principles involved are the definition of simple fundamental functions that are then iterated, and the modular subdivision of tasks. These two principles are illustrated, respectively, by presenting the algorithm that detects relevant elements for the analysis of a polysomnogram, and the task flow in systems that automate electrocardiographic reports.
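The "simple function, iterated" principle can be illustrated with a toy Python event detector run over a sampled signal; the synthetic signal, threshold, and refractory window are stand-ins, not the article's polysomnogram algorithm.

```python
import numpy as np

fs = 200.0                                     # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
peaky = np.sin(2 * np.pi * 1.2 * t) ** 15      # sharp periodic peaks (toy signal)

def detect_events(x, thresh, refractory):
    """Iterate one simple test over samples: a threshold crossing outside
    the refractory window marks an event."""
    events, last = [], -refractory
    for i, v in enumerate(x):
        if v > thresh and i - last >= refractory:
            events.append(i)
            last = i
    return events

beats = detect_events(peaky, thresh=0.8, refractory=int(0.4 * fs))
print(len(beats), "events in 10 s")            # roughly 12 for a 1.2 Hz signal
```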
Student Research in Computational Astrophysics
NASA Astrophysics Data System (ADS)
Blondin, J. M.
1999-12-01
Computational physics can shorten the long road from freshman physics major to independent research by providing students with powerful tools to deal with the complexities of modern research problems. At North Carolina State University we have introduced dozens of students to astrophysics research using the tools of computational fluid dynamics. We have used several formats for working with students, including the traditional approach of one-on-one mentoring, a more group-oriented format in which several students work together on one or more related projects, and a novel attempt to involve an entire class in a coordinated semester research project. The advantages and disadvantages of these formats will be discussed at length, but the single most important influence has been peer support. Having students work in teams or learn the tools of research together but tackle different problems has led to more positive experiences than a lone student diving into solo research. This work is supported by an NSF CAREER Award.
Enabling large-scale viscoelastic calculations via neural network acceleration
NASA Astrophysics Data System (ADS)
Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.
2017-12-01
One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have previously been possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
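A minimal sketch of the surrogate idea in Python, using a cheap analytic stand-in in place of a real viscoelastic solver and scikit-learn's small fully connected network; the parameterization and ranges are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_model(x):
    """Stand-in for a costly viscoelastic code: displacement as a function of
    (distance, time since event, relaxation time). Purely illustrative."""
    r, t, tau = x.T
    return np.exp(-t / tau) / (1.0 + r ** 2)

rng = np.random.default_rng(1)
X_train = rng.uniform([0.1, 0.1, 0.5], [5.0, 10.0, 5.0], size=(5000, 3))
y_train = expensive_model(X_train)

# Small fully connected ANN trained as a drop-in surrogate
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

X_test = rng.uniform([0.1, 0.1, 0.5], [5.0, 10.0, 5.0], size=(5, 3))
print(surrogate.predict(X_test))   # near-instant, vs. a full solver run
print(expensive_model(X_test))     # reference values for comparison
```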
NASA Technical Reports Server (NTRS)
Huck, F. O.; Davis, R. E.; Fales, C. L.; Aherron, R. M.
1982-01-01
A computational model of the deterministic and stochastic processes involved in remote sensing is used to study spectral feature identification techniques for real-time onboard processing of data acquired with advanced earth-resources sensors. Preliminary results indicate that narrow spectral responses are advantageous; that signal normalization improves mean-square-distance (MSD) classification accuracy but tends to degrade maximum-likelihood (MLH) classification accuracy; and that MSD classification of normalized signals performs better than the computationally more complex MLH classification when imaging conditions change appreciably from those under which the reference data were acquired. The results also indicate that autonomous categorization of TM signals into vegetation, bare land, water, snow and clouds can be accomplished with adequate reliability for many applications over a reasonably wide range of imaging conditions. However, further analysis is required to develop computationally efficient boundary approximation algorithms for such categorization.
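The two classifiers compared above can be sketched as follows in Python; the Gaussian-likelihood form of MLH and unit-energy normalization are standard textbook choices assumed here, not details taken from the report.

```python
import numpy as np

def normalize(x):
    """Unit-energy signal normalization: helps MSD under changing imaging
    conditions, at the cost of amplitude information."""
    return x / np.linalg.norm(x)

def msd_classify(x, means):
    """Minimum mean-square-distance: assign to the nearest class mean."""
    return np.argmin(((means - x) ** 2).sum(axis=1))

def mlh_classify(x, means, covs):
    """Gaussian maximum-likelihood: needs per-class covariances, hence the
    extra computation (matrix solves, determinants) noted in the abstract."""
    scores = []
    for m, S in zip(means, covs):
        r = x - m
        scores.append(-0.5 * (r @ np.linalg.solve(S, r)
                              + np.log(np.linalg.det(S))))
    return np.argmax(scores)
```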
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1990-01-01
Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used, not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, offer advice about possible actions in a domain, systems that gather information from the networks, and systems that track and support work flows in organizations.
Computers and the design of ion beam optical systems
NASA Astrophysics Data System (ADS)
White, Nicholas R.
Advances in microcomputers have made it possible to maintain a library of advanced ion-optical programs that run on inexpensive computer hardware and are suitable for the design of a variety of ion beam systems, including ion implanters, with excellent results. This paper outlines the steps typically involved in designing a complete ion beam system for materials modification applications. Two computer programs are described which, although based largely on algorithms that have been in use for many years, make detailed beam-optical calculations possible on microcomputers, specifically the IBM PC. OPTICIAN is an interactive first-order program for tracing beam envelopes through complex optical systems. SORCERY is a versatile program for solving Laplace's and Poisson's equations by finite difference methods using successive over-relaxation. Ion and electron trajectories can be traced through these potential fields, and plots of beam emittance obtained.
NASA Astrophysics Data System (ADS)
Morrison, Foster
2009-06-01
Imagine a story about a stay-at-home mother who, anticipating the departure of her children for college, takes a job at a government agency and by dint of hard work and persistence becomes a world-renowned scientist. This might sound improbable, but it happens to be the true story of Irene K. Fischer, a geodesist and AGU Fellow. How it happened and the way it did is a fascinating and complex story. In 1952, Fischer started working at the U.S. Army Map Service (AMS) in Brookmont, Md. (now part of Bethesda), at a time when computers were large, expensive, and feeble compared with the cheapest desktop personal computers available today. Much computing was still done on slow and noisy mechanical calculators. Artificial satellites, space probes, global positioning systems, and the like were science fiction fantasies.
Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola
2016-01-01
Predictive modelling in drug discovery is challenging to automate, as it often comprises multiple analysis steps and may involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
The development of the ICME supply-chain: Route to ICME implementation and sustainment
NASA Astrophysics Data System (ADS)
Furrer, David; Schirra, John
2011-04-01
Over the past twenty years, integrated computational materials engineering (ICME) has emerged as a key engineering field with great promise. Models simulating materials-related phenomena have been developed and are being validated for industrial application. The integration of computational methods into material, process and component design has been a challenge, however, in part due to the complexities of developing an ICME "supply-chain" that supports, sustains and delivers this emerging technology. ICME touches many disciplines, so many types of computationally based technology organizations must be involved to provide tools that can be rapidly developed, validated, deployed and maintained for industrial applications. The need for and current state of an ICME supply-chain, along with development and future requirements for sustaining the pace of introduction of ICME into industrial design practices, are reviewed in this article.
DualSPHysics: A numerical tool to simulate real breakwaters
NASA Astrophysics Data System (ADS)
Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho
2018-02-01
The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real engineering problems that involve complex geometries with high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data in a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, spurious reflections from the wavemaker are removed by using an active wave absorption technique.
Turbulence modeling of free shear layers for high-performance aircraft
NASA Technical Reports Server (NTRS)
Sondak, Douglas L.
1993-01-01
The High Performance Aircraft (HPA) Grand Challenge of the High Performance Computing and Communications (HPCC) program involves the computation of the flow over a high performance aircraft. A variety of free shear layers, including mixing layers over cavities, impinging jets, blown flaps, and exhaust plumes, may be encountered in such flowfields. Since these free shear layers are usually turbulent, appropriate turbulence models must be utilized in computations in order to accurately simulate these flow features. The HPCC program is relying heavily on parallel computers. A Navier-Stokes solver (POVERFLOW) utilizing the Baldwin-Lomax algebraic turbulence model was developed and tested on a 128-node Intel iPSC/860. Algebraic turbulence models run very fast, and give good results for many flowfields. For complex flowfields such as those mentioned above, however, they are often inadequate. It was therefore deemed that a two-equation turbulence model will be required for the HPA computations. The k-epsilon two-equation turbulence model was implemented on the Intel iPSC/860. Both the Chien low-Reynolds-number model and a generalized wall-function formulation were included.
Computational Hemodynamics Involving Artificial Devices
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin; Feiereisen, William (Technical Monitor)
2001-01-01
This paper reports the progress being made toward developing a complete blood flow simulation capability in humans, especially in the presence of artificial devices such as valves and ventricular assist devices. Device modeling poses unique challenges distinct from computing the blood flow in natural hearts and arteries. Many elements are needed, such as flow solvers, geometry modeling including flexible walls, moving boundary procedures, and physiological characterization of blood. As a first step, computational technology developed for aerospace applications was extended in the recent past to the analysis and development of mechanical devices. The blood flow in these devices is practically incompressible and Newtonian, and thus various incompressible Navier-Stokes solution procedures can be selected depending on the choice of formulations, variables and numerical schemes. Two primitive-variable formulations used are discussed, as well as the overset grid approach to handle complex moving geometry. This procedure has been applied to several artificial devices. Among these, recent progress made in developing the DeBakey axial-flow blood pump is presented from a computational point of view. Computational and clinical issues are discussed in detail, as well as additional work needed.
A hybrid framework for coupling arbitrary summation-by-parts schemes on general meshes
NASA Astrophysics Data System (ADS)
Lundquist, Tomas; Malan, Arnaud; Nordström, Jan
2018-06-01
We develop a general interface procedure to couple both structured and unstructured parts of a hybrid mesh in a non-collocated, multi-block fashion. The target is to gain optimal computational efficiency in fluid dynamics simulations involving complex geometries. While guaranteeing stability, the proposed procedure is optimized for accuracy and requires minimal algorithmic modifications to already existing schemes. Initial numerical investigations confirm considerable efficiency gains compared to non-hybrid calculations of up to an order of magnitude.
Interleaved concatenated codes: new perspectives on approaching the Shannon limit.
Viterbi, A J; Viterbi, A M; Sindhushayana, N T
1997-09-02
The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit.
1994-02-01
… desired that the problem to which the design space mapping techniques were applied be easily analyzed, yet provide a design space with realistic complexity … consistent fully stressed solution. … In order to reduce the computational expense required to optimize design spaces, neural networks … employed in this study. Some of the issues involved in using neural networks to do design space mapping are how to configure the neural network, how much …
Teaching NMR spectra analysis with nmr.cheminfo.org.
Patiny, Luc; Bolaños, Alejandro; Castillo, Andrés M; Bernal, Andrés; Wist, Julien
2018-06-01
Teaching spectra analysis and structure elucidation requires students to train on real problems. This involves solving exercises of increasing complexity and, when necessary, using computational tools. Although desktop software packages exist for this purpose, the nmr.cheminfo.org platform offers students an online alternative. It provides a set of exercises and tools to help solve them. Only a small number of exercises are currently available, but contributors are invited to submit new ones and suggest new types of problems. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Lhamon, Michael Earl
A pattern recognition system that uses complex correlation filter banks requires proportionally more computational effort than one using single real-valued filters. This increases the computational burden but also introduces a higher level of parallelism that common computing platforms fail to exploit. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation, and they map readily onto parallel digital architectures, which implies new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulty implementing full complex filter structures. Typically, optical systems (like the 4f correlators) are limited to phase-only implementation with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible, and it has the advantage of time-averaging the entire filter bank at real-time rates. Time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.
NASA Astrophysics Data System (ADS)
Sharpanskykh, Alexei; Treur, Jan
Employing rich internal agent models of actors in large-scale socio-technical systems often results in scalability issues. The problem addressed in this paper is how to improve the computational properties of a complex internal agent model while preserving its behavioral properties. The problem is addressed for the case of an existing affective-cognitive decision-making model instantiated for an emergency scenario. For this internal decision model, an abstracted behavioral agent model is obtained, which ensures a substantial increase in computational efficiency at the cost of approximately 1% behavioral error. The abstraction technique used can be applied to a wide range of internal agent models with loops, for example, those involving mutual affective-cognitive interactions.
Improved Collaborative Filtering Algorithm via Information Transformation
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Wang, Bing-Hong; Guo, Qiang
In this paper, we propose a spreading activation approach for collaborative filtering (SA-CF). By using the opinion spreading process, the similarity between any users can be obtained. The algorithm has remarkably higher accuracy than the standard collaborative filtering using the Pearson correlation. Furthermore, we introduce a free parameter β to regulate the contributions of objects to user-user correlations. The numerical results indicate that decreasing the influence of popular objects can further improve the algorithmic accuracy and personality. We argue that a better algorithm should simultaneously require less computation and generate higher accuracy. Accordingly, we further propose an algorithm involving only the top-N similar neighbors for each target user, which has both less computational complexity and higher algorithmic accuracy.
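A compact sketch of the two ingredients, degree-damped object contributions via β and top-N neighbor prediction, on a toy user-object matrix; the exact spreading-activation weighting in the paper may differ, and this form is only illustrative.

```python
import numpy as np

def sa_cf_similarity(A, beta):
    """User-user similarity over the user-object bipartite graph: an object
    collected by k users contributes 1/k**beta, so beta > 0 suppresses the
    influence of very popular objects (illustrative weighting)."""
    k_obj = A.sum(axis=0)                        # object degrees
    w = np.where(k_obj > 0, k_obj ** (-beta), 0.0)
    return (A * w) @ A.T                         # s[u,v] = sum_o w_o A_uo A_vo

def predict_topn(A, S, user, n_neighbors=2):
    """Score unseen objects for `user` using only the top-N similar users;
    already-collected objects are masked and sort last."""
    sims = S[user].copy()
    sims[user] = -np.inf                         # exclude the user itself
    top = np.argsort(sims)[::-1][:n_neighbors]
    scores = S[user, top] @ A[top]
    scores[A[user] > 0] = -np.inf
    return np.argsort(scores)[::-1]

A = np.array([[1, 1, 0, 0],                      # toy user-object adjacency
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
S = sa_cf_similarity(A, beta=0.8)
print(predict_topn(A, S, user=0))                # recommended object ranking
```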
An Integrative Account of Constraints on Cross-Situational Learning
Yurovsky, Daniel; Frank, Michael C.
2015-01-01
Word-object co-occurrence statistics are a powerful information source for vocabulary learning, but there is considerable debate about how learners actually use them. While some theories hold that learners accumulate graded, statistical evidence about multiple referents for each word, others suggest that they track only a single candidate referent. In two large-scale experiments, we show that neither account is sufficient: Cross-situational learning involves elements of both. Further, the empirical data are captured by a computational model that formalizes how memory and attention interact with co-occurrence tracking. Together, the data and model unify opposing positions in a complex debate and underscore the value of understanding the interaction between computational and algorithmic levels of explanation. PMID:26302052
Building a computer-aided design capability using a standard time share operating system
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.
1975-01-01
The paper describes how an integrated system of engineering computer programs can be built using a standard commercially available operating system. The discussion opens with an outline of the auxiliary functions that an operating system can perform for a team of engineers involved in a large and complex task. An example of a specific integrated system is provided to explain how the standard operating system features can be used to organize the programs into a simple and inexpensive but effective system. Applications to an aircraft structural design study are discussed to illustrate the use of an integrated system as a flexible and efficient engineering tool. The discussion concludes with an engineer's assessment of an operating system's capabilities and desirable improvements.
Energy Efficiency in Public Buildings through Context-Aware Social Computing.
García, Óscar; Alonso, Ricardo S; Prieto, Javier; Corchado, Juan M
2017-04-11
The challenge of promoting behavioral changes in users that lead to energy savings in public buildings has become a complex task requiring the involvement of multiple technologies. Wireless sensor networks have great potential for the development of tools, such as serious games, that encourage users to acquire good energy habits and healthy habits in the workplace. This paper presents the development of a serious game using CAFCLA, a framework that allows for integrating multiple technologies to provide both context-awareness and social computing. Game development has shown that the data provided by sensor networks encourage users to reduce energy consumption in their workplace, and that social interactions and competitiveness accelerate the achievement of good results and of behavioral changes that favor energy savings.
Developing software to use parallel processing effectively. Final report, June-December 1987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Center, J.
1988-10-01
This report describes the difficulties involved in writing efficient parallel programs and describes the hardware and software support currently available for generating software that utilizes parallel processing effectively. Historically, the processing rate of single-processor computers has increased by one order of magnitude every five years. However, this pace is slowing since electronic circuitry is coming up against physical barriers. Unfortunately, the complexity of engineering and research problems continues to require ever more processing power (far in excess of the maximum estimated 3 Gflops achievable by single-processor computers). For this reason, parallel-processing architectures are receiving considerable interest, since they offer high performance more cheaply than a single-processor supercomputer, such as the Cray.
Information Leakage Analysis by Abstract Interpretation
NASA Astrophysics Data System (ADS)
Zanioli, Matteo; Cortesi, Agostino
Protecting the confidentiality of information stored in a computer system or transmitted over a public network is a relevant problem in computer security. The approach of information flow analysis involves performing a static analysis of the program with the aim of proving that there will not be leaks of sensitive information. In this paper we propose a new domain that combines variable dependency analysis, based on propositional formulas, and variables' value analysis, based on polyhedra. The resulting analysis is strictly more accurate than the state of the art abstract interpretation based analyses for information leakage detection. Its modular construction allows to deal with the tradeoff between efficiency and accuracy by tuning the granularity of the abstraction and the complexity of the abstract operators.
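In the same spirit, a toy dependency analysis can be written in a few lines of Python: it propagates "depends on a secret" facts to a fixpoint over assignment statements. The propositional-formula and polyhedral components of the proposed domain are far richer; this only illustrates the flow-tracking skeleton.

```python
def leakage_analysis(program, secret_vars):
    """Tiny flow-insensitive dependency analysis. Each statement is a pair
    (target, sources); a variable becomes tainted if any source is tainted.
    A real abstract interpreter would track propositional dependency formulas
    combined with a polyhedral value analysis, as the paper proposes."""
    tainted = set(secret_vars)
    changed = True
    while changed:                       # iterate to a fixpoint
        changed = False
        for target, sources in program:
            if target not in tainted and tainted & set(sources):
                tainted.add(target)
                changed = True
    return tainted

prog = [("a", ["secret"]),               # a := secret + 1
        ("b", ["a", "c"]),               # b := a * c
        ("out", ["b"])]                  # out := b  -> potential leak
print(leakage_analysis(prog, {"secret"}))  # {'secret', 'a', 'b', 'out'}
```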
Superior model for fault tolerance computation in designing nano-sized circuit systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com; Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my
2014-10-24
As CMOS technology scales nano-metrically, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper first looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Secondly, using the developed automated tool, the paper explores a comparative study involving reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than that by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different input signal patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
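As a sketch of the PGM side of the comparison, the following Python fragment propagates signal probabilities through a toy NAND chain with gate error probability eps and reads off a reliability figure against the fault-free output; the topology and numbers are illustrative, and the BDEC model is not reproduced here.

```python
def nand_pgm(a, b, eps):
    """Probabilistic gate model for a 2-input NAND: a, b are P(input = 1);
    the gate flips its output with error probability eps."""
    ideal = 1.0 - a * b                  # P(output = 1) for a perfect NAND
    return (1.0 - eps) * ideal + eps * (1.0 - ideal)

def chain_output_prob(x1, x2, depth, eps):
    """P(output = 1) for a chain where each stage NANDs the previous output
    with the constant input x2 (a toy topology to show propagation)."""
    p = nand_pgm(x1, x2, eps)
    for _ in range(depth - 1):
        p = nand_pgm(p, x2, eps)
    return p

# Deterministic test vector (1, 1): the fault-free chain output alternates
# 0, 1, 0, ..., so for depth = 3 the ideal output is 0 and reliability is
# the probability the noisy chain also outputs 0.
p1 = chain_output_prob(1.0, 1.0, depth=3, eps=0.02)
ideal = chain_output_prob(1.0, 1.0, depth=3, eps=0.0)
reliability = p1 if ideal == 1.0 else 1.0 - p1
print(reliability)
```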
MindEdit: A P300-based text editor for mobile devices.
Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M
2017-01-01
Practical application of Brain-Computer Interfaces (BCIs) requires that the whole BCI system be portable. The mobility of BCI systems involves two aspects: making the electroencephalography (EEG) recording devices portable, and developing software applications with low computational complexity that can run on low computational-power devices such as tablets and smartphones. This paper addresses the development of MindEdit, a P300-based text editor for Android-based devices. Given the limited resources of mobile devices and their limited computational power, a novel ensemble classifier is utilized that uses Principal Component Analysis (PCA) features to identify P300 evoked potentials in EEG recordings. PCA computations in the proposed method are channel-based, as opposed to concatenating all channels as in traditional feature extraction methods; thus, the method has lower computational complexity than traditional P300 detection methods. The performance of the method is demonstrated on data recorded with MindEdit on an Android tablet using the Emotiv wireless neuroheadset. Results demonstrate the capability of the introduced PCA ensemble classifier to classify P300 data with a maximum average accuracy of 78.37±16.09% for cross-validation data and 77.5±19.69% for online test data using only 10 trials per symbol and a 33-character training dataset. Our analysis indicates that the introduced method outperforms traditional feature extraction methods. For faster operation of MindEdit, a variable-number-of-trials scheme is introduced that resulted in an online average accuracy of 64.17±19.6% and a maximum bitrate of 6.25 bit/min. These results demonstrate the efficacy of using the developed BCI application with mobile devices. Copyright © 2016 Elsevier Ltd. All rights reserved.
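A rough reconstruction of the channel-based idea in Python with scikit-learn, assuming one PCA-plus-classifier per channel combined by soft voting; the paper's exact ensemble rule and classifier choice may differ from this sketch.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

def train_channel_ensemble(epochs, y, n_comp=5):
    """One PCA + classifier per EEG channel. epochs: (n_trials, n_channels,
    n_samples); channel-wise PCA keeps every transform small, unlike PCA on
    all channels concatenated."""
    models = []
    for ch in range(epochs.shape[1]):
        pca = PCA(n_components=n_comp).fit(epochs[:, ch, :])
        clf = LDA().fit(pca.transform(epochs[:, ch, :]), y)
        models.append((pca, clf))
    return models

def predict_p300_score(models, epochs):
    """Average per-channel P300 probabilities (simple soft vote)."""
    scores = [clf.predict_proba(pca.transform(epochs[:, ch, :]))[:, 1]
              for ch, (pca, clf) in enumerate(models)]
    return np.mean(scores, axis=0)
```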
NASA Astrophysics Data System (ADS)
Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer
2018-03-01
Spectral computed tomography is an emerging imaging method that makes use of recently developed energy-discriminating photon-counting detectors (PCDs). The technique enables measurements at isolated high-energy ranges, in which the dominant interaction between the X-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and correcting for it has proven to be a complex problem, due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can use a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become prevalent (>50 keV).
Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.
Abbasi, Mahdi
2014-01-01
The planar D-bar integral equation is one of the inverse-scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the second suffers from a low convergence rate (CR). In this paper, a novel high-speed method of moments using the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. The orthogonality properties of their products then dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure and solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
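The complexity claim rests on the fact that a periodic discrete convolution equation is diagonalized by the DFT. The sketch below shows the mechanism on a generic real-valued 2-D problem; the actual D-bar equation is complex-valued and has its own kernel, so this is an illustration of the FFT trick, not the paper's solver.

```python
import numpy as np

def solve_convolution_equation(kernel, rhs):
    """Solve x - (k * x) = b, with * a periodic 2-D convolution, via the FFT:
    in Fourier space the equation becomes (1 - K) X = B, a pointwise divide,
    so the solve costs O(N log N) FFT work instead of a dense linear solve."""
    K, B = np.fft.fft2(kernel), np.fft.fft2(rhs)
    return np.fft.ifft2(B / (1.0 - K))

# Self-check on a 64x64 grid: manufacture b from a known x, then recover x.
rng = np.random.default_rng(1)
x_true = rng.standard_normal((64, 64))
k = 1e-3 * rng.standard_normal((64, 64))   # small kernel keeps 1 - K away from 0
b = x_true - np.real(np.fft.ifft2(np.fft.fft2(k) * np.fft.fft2(x_true)))
x = solve_convolution_equation(k, b)
print(np.max(np.abs(np.real(x) - x_true)))  # machine-precision recovery
```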
Niazi, Muaz A
2014-01-01
The body structure of snakes is composed of numerous natural components, thereby making it resilient, flexible, adaptive, and dynamic. In contrast, current computer animations as well as physical implementations of snake-like autonomous structures are typically designed to use either a single component or a relatively small number of components. As a result, not only are these artificial structures constrained by the dimensions of the constituent components, but they also often require relatively more computationally intensive algorithms to model and animate. Still, these animations often lack life-like resilience and adaptation. This paper presents a solution to the problem of modeling snake-like structures by proposing an agent-based, self-organizing algorithm resulting in an emergent and surprisingly resilient dynamic structure involving minimal interagent communication. Extensive simulation experiments demonstrate the effectiveness as well as the resilience of the proposed approach. The ideas originating from the proposed algorithm can not only be used for developing self-organizing animations but can also have practical applications, such as complex, autonomous, evolvable robots with self-organizing, mobile components possessing minimal individual computational capabilities. The work also demonstrates the utility of exploratory agent-based modeling (EABM) in the engineering of artificial life-like complex adaptive systems.
Bennett clocking of quantum-dot cellular automata and the limits to binary logic scaling.
Lent, Craig S; Liu, Mo; Lu, Yuhui
2006-08-28
We examine power dissipation in different clocking schemes for molecular quantum-dot cellular automata (QCA) circuits. 'Landauer clocking' involves the adiabatic transition of a molecular cell from the null state to an active state carrying data. Cell layout creates devices which allow data in cells to interact and thereby perform useful computation. We perform direct solutions of the equation of motion for the system in contact with the thermal environment and see that Landauer's Principle applies: one must dissipate an energy of at least kBT per bit only when the information is erased. The ideas of Bennett can be applied to keep copies of the bit information by echoing inputs to outputs, thus embedding any logically irreversible circuit in a logically reversible circuit, at the cost of added circuit complexity. A promising alternative which we term 'Bennett clocking' requires only altering the timing of the clocking signals so that bit information is simply held in place by the clock until a computational block is complete, then erased in the reverse order of computation. This approach results in ultralow power dissipation without additional circuit complexity. These results offer a concrete example in which to consider recent claims regarding the fundamental limits of binary logic scaling.
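The bound invoked here is Landauer's: erasure, not computation itself, carries the minimum thermodynamic cost. For reference, its usual statement and room-temperature value are:

```latex
% Landauer's bound: erasing one bit at temperature T dissipates at least
%   E_min = k_B T ln 2  (about 2.9e-21 J, i.e. ~0.018 eV, at T = 300 K).
% Bennett clocking defers this cost: copies of the bit are held by the
% clock and decomputed in reverse order instead of being erased mid-run.
E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\,\mathrm{J}
\quad (T = 300\,\mathrm{K})
```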
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2018-03-01
A novel reduced-scaling, general-order coupled-cluster approach is formulated by exploiting hierarchical representations of many-body tensors, combined with the recently suggested formalism of scale-adaptive tensor algebra. Inspired by the hierarchical techniques from the renormalisation group approach, H/H2-matrix algebra and the fast multipole method, the computational scaling reduction in our formalism is achieved via coarsening of quantum many-body interactions at larger interaction scales, thus imposing a hierarchical structure on many-body tensors of coupled-cluster theory. In our approach, the interaction scale can be defined on any appropriate Euclidean domain (spatial domain, momentum-space domain, energy domain, etc.). We show that the hierarchically resolved many-body tensors can reduce the storage requirements to O(N), where N is the number of simulated quantum particles. Subsequently, we prove that any connected many-body diagram consisting of a finite number of arbitrary-order tensors, e.g. an arbitrary coupled-cluster diagram, can be evaluated in O(N log N) floating-point operations. On top of that, we suggest an additional approximation to further reduce the computational complexity of higher-order coupled-cluster equations, i.e. equations involving higher than double excitations, which otherwise would introduce a large prefactor into the formal O(N log N) scaling.
NASA Astrophysics Data System (ADS)
Gunceler, Deniz
Solvents are of great importance in many technological applications, but are difficult to study using standard, off-the-shelf ab initio electronic structure methods. This is because a single configuration of molecular positions in the solvent (a "snapshot" of the fluid) is not necessarily representative of the thermodynamic average. To obtain any thermodynamic averages (e.g. free energies), the phase space of the solvent must be sampled, typically using molecular dynamics. This greatly increases the computational cost involved in studying solvated systems. Joint density-functional theory has made its mark by being a computationally efficient yet rigorous theory by which to study solvation. It replaces the need for thermodynamic sampling with an effective continuum description of the solvent environment that is in-principle exact, computationally efficient and intuitive (easier to interpret). It has been very successful in aqueous systems, with potential applications in (among others) energy materials discovery, catalysis and surface science. In this dissertation, we develop accurate and fast joint density functional theories for complex, non-aqueous solvent environments, including organic solvents and room temperature ionic liquids, as well as new methods for calculating electron excitation spectra in such systems. These theories are then applied to a range of physical problems, from dendrite formation in lithium-metal batteries to the optical spectra of solvated ions.
Demonstration of quantum advantage in machine learning
NASA Astrophysics Data System (ADS)
Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.
2017-04-01
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.
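To make the query-complexity framing concrete, here is a toy classical attack on learning parity with noise, under the simplifying assumption that the oracle accepts chosen queries (the actual LPN problem, and the experiment above, use random examples; this sketch only illustrates how the classical query count grows as the noise rate approaches 1/2).

```python
import numpy as np

rng = np.random.default_rng(7)

def make_lpn_oracle(secret, noise):
    """Oracle for learning parity with noise: returns <a, s> XOR e,
    where e ~ Bernoulli(noise) is a fresh error bit per query."""
    def oracle(a):
        return (int(np.dot(a, secret)) + rng.binomial(1, noise)) % 2
    return oracle

def solve_lpn_majority(oracle, n, queries_per_bit):
    """Naive classical solver with chosen queries: probe each basis vector
    e_i repeatedly and majority-vote the noise away. The repetitions needed
    grow rapidly as noise -> 1/2, which is where the quantum gap opens up."""
    s_hat = np.zeros(n, dtype=int)
    for i in range(n):
        a = np.zeros(n, dtype=int)
        a[i] = 1
        votes = sum(oracle(a) for _ in range(queries_per_bit))
        s_hat[i] = int(votes > queries_per_bit // 2)
    return s_hat

secret = rng.integers(0, 2, size=5)
oracle = make_lpn_oracle(secret, noise=0.3)
print(secret, solve_lpn_majority(oracle, n=5, queries_per_bit=101))
```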
Thermodynamic cost of computation, algorithmic complexity and the information metric
NASA Technical Reports Server (NTRS)
Zurek, W. H.
1989-01-01
Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.
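The distance alluded to here is, in modern terms, the information metric based on Kolmogorov complexity; since K(x) is uncomputable, practical work substitutes a real compressor, giving the normalized compression distance. A minimal sketch, with zlib standing in for the ideal compressor (the example strings are illustrative):

```python
import os
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: approximates the algorithmic
    information metric by replacing Kolmogorov complexity K(.) with the
    length of a real compressor's output. Near 0 = closely related strings,
    near 1 = algorithmically unrelated strings."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

regular = b"01" * 512        # algorithmically simple string
shifted = b"10" * 512        # simple and closely related to `regular`
noise = os.urandom(1024)     # incompressible, hence far from both
print(ncd(regular, shifted), ncd(regular, noise))
```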
Locality for quantum systems on graphs depends on the number field
NASA Astrophysics Data System (ADS)
Hall, H. Tracy; Severini, Simone
2013-07-01
Adapting a definition of Aaronson and Ambainis (2005 Theory Comput. 1 47-79), we call a quantum dynamics on a digraph saturated Z-local if the nonzero transition amplitudes specifying the unitary evolution are in exact correspondence with the directed edges (including loops) of the digraph. This idea appears recurrently in a variety of contexts including angular momentum, quantum chaos, and combinatorial matrix theory. Complete characterization of the digraph properties that allow such a process to exist is a long-standing open question that can also be formulated in terms of minimum rank problems. We prove that saturated Z-local dynamics involving complex amplitudes occur on a proper superset of the digraphs that allow restriction to the real numbers or, even further, the rationals. Consequently, among these fields, complex numbers guarantee the largest possible choice of topologies supporting a discrete quantum evolution. A similar construction separates complex numbers from the skew field of quaternions. The result proposes a concrete ground for distinguishing between complex and quaternionic quantum mechanics.
Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien
2012-01-01
Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder over the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
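Two of the mixture models named above are compact enough to state directly. The sketch below implements Lichtenecker's logarithmic mixing rule and the Maxwell Garnett rule for spherical inclusions, in their commonly quoted forms; the permittivity values and volume fraction are illustrative, and this is not a reproduction of the paper's calculations.

```python
import numpy as np

def lichtenecker(eps_m, eps_i, f):
    """Lichtenecker's logarithmic mixing rule:
    ln(eps_eff) = (1 - f) ln(eps_m) + f ln(eps_i)."""
    return np.exp((1.0 - f) * np.log(eps_m) + f * np.log(eps_i))

def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett rule for spherical inclusions of volume fraction f
    embedded in a host matrix of permittivity eps_m."""
    beta = (eps_i - eps_m) / (eps_i + 2.0 * eps_m)
    return eps_m * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)

# Illustrative values: lossy inclusions in a low-loss matrix, 10% by volume.
eps_matrix, eps_incl, f = 2.1 + 0.001j, 12.0 + 3.0j, 0.10
print(lichtenecker(eps_matrix, eps_incl, f))
print(maxwell_garnett(eps_matrix, eps_incl, f))
```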
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
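As a concrete instance of the "huge number of highly repetitive computations", a percentile bootstrap confidence interval for a scalar statistic takes only a few lines. This is a generic sketch; the resample count and example data are illustrative:

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic each time, and take empirical quantiles of the replicates.
    Distribution-free, at the price of n_boot repetitions of the statistic."""
    rng = np.random.default_rng(seed)
    n = len(data)
    replicates = np.array([stat(rng.choice(data, size=n, replace=True))
                           for _ in range(n_boot)])
    return np.quantile(replicates, [alpha / 2.0, 1.0 - alpha / 2.0])

# 95% CI for the mean of a skewed (exponential) sample of size 50.
sample = np.random.default_rng(42).exponential(scale=2.0, size=50)
print(bootstrap_ci(sample))
```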
NASA Astrophysics Data System (ADS)
Chen, Goong; Wang, Yi-Ching; Perronnet, Alain; Gu, Cong; Yao, Pengfei; Bin-Mohsin, Bandar; Hajaiej, Hichem; Scully, Marlan O.
2017-03-01
Computational mathematics, physics and engineering form a major constituent of modern computational science, which now stands on an equal footing with the established branches of theoretical and experimental sciences. Computational mechanics solves problems in science and engineering based upon mathematical modeling and computing, bypassing the need for expensive and time-consuming laboratory setups and experimental measurements. Furthermore, it allows the numerical simulation of large-scale systems, such as the formation of galaxies, that could not be done in any earth-bound laboratory. This article is written as part of the 21st Century Frontiers Series to illustrate some state-of-the-art computational science. We emphasize how to do numerical modeling and visualization in the study of a contemporary event, the pulverizing crash of Germanwings Flight 9525 on March 24, 2015, as a showcase. Such numerical modeling and the ensuing simulation of aircraft crashes into land or mountains are complex tasks, as they involve both theoretical study and supercomputing of a complex physical system. The most tragic type of crash involves ‘pulverization’, such as that suffered by this Germanwings flight. Here, we show pulverizing airliner crashes by visualization through video animations from supercomputer applications of the numerical modeling tool LS-DYNA. A sound validation process is challenging but essential for any sophisticated calculation. We achieve this by validating against the experimental data from a 1993 crash test of an F-4 Phantom II fighter jet into a wall. We have developed a method by hybridizing two primary methods: finite element analysis and smoothed particle hydrodynamics. This hybrid method also enhances visualization by showing a ‘debris cloud’. Based on our supercomputer simulations and the visualization, we point out that prior works on this topic based on ‘hollow interior’ modeling can be quite problematic and thus are not likely to be correct. We discuss the effects of terrain on pulverization using information from the recovered flight data recorder and show our forensics and assessments of what may have happened during the final moments of the crash. Finally, we point out that our study has the potential to be made into real-time flight crash simulators to help the study of crashworthiness and survivability for future aviation safety. Some forward-looking statements are also made.
A continuum theory for multicomponent chromatography modeling.
Pfister, David; Morbidelli, Massimo; Nicoud, Roger-Marc
2016-05-13
A continuum theory is proposed for modeling multicomponent chromatographic systems under linear conditions. The model is based on the description of complex mixtures, possibly involving tens or hundreds of solutes, by a continuum. The present approach is shown to be very efficient when dealing with a large number of similar components presenting close elution behaviors and whose individual analytical characterization is impossible. Moreover, approximating complex mixtures by continuous distributions of solutes reduces the required number of model parameters to the few specific to the characterization of the selected continuous distributions. Therefore, within the framework of the continuum theory, the simulation of large multicomponent systems is simplified and the computational effectiveness of the chromatographic model is dramatically improved. Copyright © 2016 Elsevier B.V. All rights reserved.
Logic-Based Models for the Analysis of Cell Signaling Networks
2010-01-01
Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
Low photon count based digital holography for quadratic phase cryptography.
Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Ryle, James P; Healy, John J; Lee, Byung-Geun; Sheridan, John T
2017-07-15
Recently, the vulnerability of the linear canonical transform-based double random phase encryption system to attack has been demonstrated. To alleviate this, we present for the first time, to the best of our knowledge, a method for securing a two-dimensional scene using a quadratic phase encoding system operating in the photon-counted imaging (PCI) regime. Position-phase-shifting digital holography is applied to record the photon-limited encrypted complex samples. The reconstruction of the complex wavefront involves four sparse (undersampled) dataset intensity measurements (interferograms) at two different positions. Computer simulations validate that the photon-limited sparse-encrypted data has adequate information to authenticate the original data set. Finally, security analysis, employing iterative phase retrieval attacks, has been performed.
Lempel-Ziv complexity analysis of one dimensional cellular automata.
Estevez-Rams, E; Lora-Serrano, R; Nunes, C A J; Aragón-Fernández, B
2015-12-01
Lempel-Ziv complexity measure has been used to estimate the entropy density of a string. It is defined as the number of factors in a production factorization of a string. In this contribution, we show that its use can be extended, by using the normalized information distance, to study the spatiotemporal evolution of random initial configurations under cellular automata rules. In particular, the transfer information from time consecutive configurations is studied, as well as the sensitivity to perturbed initial conditions. The behavior of the cellular automata rules can be grouped in different classes, but no single grouping captures the whole nature of the involved rules. The analysis carried out is particularly appropriate for studying the computational processing capabilities of cellular automata rules.
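The complexity measure itself is the factor count of the Lempel-Ziv (1976) production factorization, computable by the classic Kaspar-Schuster scan. The sketch below counts factors and applies the measure to an elementary cellular automaton row; the rule number, lattice size, and periodic boundary are illustrative choices, and the paper's normalized-information-distance analysis builds further on such counts.

```python
def lz76_complexity(s):
    """Number of factors in the LZ76 production factorization of string s
    (Kaspar-Schuster algorithm); scales like n/log n for random strings."""
    n = len(s)
    if n < 2:
        return n
    c, l = 1, 1          # factor count; characters consumed so far
    i, k, k_max = 0, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:      # current factor runs off the end of s
                c += 1
                break
        else:
            k_max = max(k_max, k)
            i += 1
            if i == l:         # no earlier match extends further: new factor
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def ca_step(row, rule=110):
    """One synchronous update of an elementary CA with periodic boundaries."""
    n = len(row)
    return [(rule >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)]

print(lz76_complexity("0001101001000101"))   # classic worked example: 6 factors

import random
random.seed(3)
row = [random.randint(0, 1) for _ in range(512)]
for _ in range(256):
    row = ca_step(row)
print(lz76_complexity("".join(map(str, row))))
```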
Low-complexity camera digital signal imaging for video document projection system
NASA Astrophysics Data System (ADS)
Hsia, Shih-Chang; Tsai, Po-Shien
2011-04-01
We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
Intelligent Agent Architectures: Reactive Planning Testbed
NASA Technical Reports Server (NTRS)
Rosenschein, Stanley J.; Kahn, Philip
1993-01-01
An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has been found useful to approach the construction of agents with some organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: Is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents, including complex robotic agents, constructed using various architectural approaches can be subjected.
The foundations of the human cultural niche
Derex, Maxime; Boyd, Robert
2015-01-01
Technological innovations have allowed humans to settle in habitats for which they are poorly suited biologically. However, our understanding of how humans produce complex technologies is limited. We used a computer-based experiment, involving humans and learning bots, to investigate how reasoning abilities, social learning mechanisms and population structure affect the production of virtual artefacts. We found that humans' reasoning abilities play an important role in the production of innovations, but that groups of individuals are able to produce artefacts that are more complex than any isolated individual can produce during the same amount of time. We show that this group-level ability to produce complex innovations is maximized when social information is easy to acquire and when individuals are organized into large and partially connected populations. These results suggest that the transition to behavioural modernity could have been triggered by a change in ancestral between-group interaction patterns. PMID:26400015
Guzman, Karen; Bartlett, John
2012-01-01
Biological systems and living processes involve a complex interplay of biochemicals and macromolecular structures that can be challenging for undergraduate students to comprehend and, thus, misconceptions abound. Protein synthesis, or translation, is an example of a biological process for which students often hold many misconceptions. This article describes an exercise that was developed to illustrate the process of translation using simple objects to represent complex molecules. Animations, 3D physical models, computer simulations, laboratory experiments and classroom lectures are also used to reinforce the students' understanding of translation, but by focusing on the simple manipulatives in this exercise, students are better able to visualize concepts that can elude them when using the other methods. The translation exercise is described along with suggestions for background material, questions used to evaluate student comprehension and tips for using the manipulatives to identify common misconceptions. Copyright © 2012 Wiley Periodicals, Inc.
HERMIES-3: A step toward autonomous mobility, manipulation, and perception
NASA Technical Reports Server (NTRS)
Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.
1989-01-01
HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.
A comparative study of turbulence models for overset grids
NASA Technical Reports Server (NTRS)
Renze, Kevin J.; Buning, Pieter G.; Rajagopalan, R. G.
1992-01-01
The implementation of two different types of turbulence models for a flow solver using the Chimera overset grid method is examined. Various turbulence model characteristics, such as length scale determination and transition modeling, are found to have a significant impact on the computed pressure distribution for a multielement airfoil case. No inherent problem is found with using either algebraic or one-equation turbulence models with an overset grid scheme, but simulation of turbulence for multiple-body or complex geometry flows is very difficult regardless of the gridding method. For complex geometry flowfields, modification of the Baldwin-Lomax turbulence model is necessary to select the appropriate length scale in wall-bounded regions. The overset grid approach presents no obstacle to use of a one- or two-equation turbulence model. Both Baldwin-Lomax and Baldwin-Barth models have problems providing accurate eddy viscosity levels for complex multiple-body flowfields such as those involving the Space Shuttle.
Kushniruk, Andre W; Borycki, Elizabeth M
2015-01-01
Innovations in healthcare information systems promise to revolutionize and streamline healthcare processes worldwide. However, the complexity of these systems and the need to better understand issues related to human-computer interaction have slowed progress in this area. In this chapter the authors describe their work in using methods adapted from usability engineering, video ethnography and analysis of digital log files for improving our understanding of complex real-world healthcare interactions between humans and technology. The approaches taken are cost-effective and practical and can provide detailed ethnographic data on issues health professionals and consumers encounter while using systems as well as potential safety problems. The work is important in that it can be used in techno-anthropology to characterize complex user interactions with technologies and also to provide feedback into redesign and optimization of improved healthcare information systems.
Understanding of Leaf Development-the Science of Complexity.
Malinowski, Robert
2013-06-25
The leaf is the major organ involved in light perception and conversion of solar energy into organic carbon. In order to adapt to different natural habitats, plants have developed a variety of leaf forms, ranging from simple to compound, with various forms of dissection. Due to the enormous cellular complexity of leaves, understanding the mechanisms regulating development of these organs is difficult. In recent years there has been a dramatic increase in the use of technically advanced imaging techniques and computational modeling in studies of leaf development. Additionally, molecular tools for manipulation of morphogenesis were successfully used for in planta verification of developmental models. Results of these interdisciplinary studies show that global growth patterns influencing final leaf form are generated by cooperative action of genetic, biochemical, and biomechanical inputs. This review summarizes recent progress in integrative studies on leaf development and illustrates how intrinsic features of leaves (including their cellular complexity) influence the choice of experimental approach.
Liu, Shiwei; Liu, Yihui; Zhao, Jiawei; Cai, Shitao; Qian, Hongmei; Zuo, Kaijing; Zhao, Lingxia; Zhang, Lida
2017-04-01
Rice (Oryza sativa) is one of the most important staple foods for more than half of the global population. Many rice traits are quantitative, complex and controlled by multiple interacting genes. Thus, a full understanding of genetic relationships will be critical to systematically identify genes controlling agronomic traits. We developed a genome-wide rice protein-protein interaction network (RicePPINet, http://netbio.sjtu.edu.cn/riceppinet) using machine learning with structural relationship and functional information. RicePPINet contained 708 819 predicted interactions for 16 895 non-transposable element related proteins. The power of the network for discovering novel protein interactions was demonstrated through comparison with other publicly available protein-protein interaction (PPI) prediction methods, and by experimentally determined PPI data sets. Furthermore, global analysis of domain-mediated interactions revealed RicePPINet accurately reflects PPIs at the domain level. Our studies showed the efficiency of the RicePPINet-based method in prioritizing candidate genes involved in complex agronomic traits, such as disease resistance and drought tolerance, was approximately 2-11 times better than random prediction. RicePPINet provides an expanded landscape of computational interactome for the genetic dissection of agronomically important traits in rice. © 2017 The Authors The Plant Journal © 2017 John Wiley & Sons Ltd.
Cognitive performance modeling based on general systems performance theory.
Kondraske, George V
2010-01-01
General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).
Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter
2015-01-01
Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227
Tzoupis, Haralambos; Leonis, Georgios; Avramopoulos, Aggelos; Reis, Heribert; Czyżnikowska, Żaneta; Zerva, Sofia; Vergadou, Niki; Peristeras, Loukas D; Papavasileiou, Konstantinos D; Alexis, Michael N; Mavromoustakos, Thomas; Papadopoulos, Manthos G
2015-11-01
We investigate the binding mechanism in renin complexes, involving three drugs (remikiren, zankiren and enalkiren) and one lead compound, which was selected after screening the ZINC database. For this purpose, we used ab initio methods (the effective fragment potential, the variational perturbation theory, the energy decomposition analysis, the atoms-in-molecules), docking, molecular dynamics, and the MM-PBSA method. A biological assay for the lead compound has been performed to validate the theoretical findings. Importantly, binding free energy calculations for the three drug complexes are within 3 kcal/mol of the experimental values, thus further justifying our computational protocol, which has been validated through previous studies on 11 drug-protein systems. The main elements of the discovered mechanism are: (i) minor changes are induced to renin upon drug binding, (ii) the three drugs form an extensive network of hydrogen bonds with renin, whilst the lead compound presented diminished interactions, (iii) ligand binding in all complexes is driven by favorable van der Waals interactions and the nonpolar contribution to solvation, while the lead compound is associated with diminished van der Waals interactions compared to the drug-bound forms of renin, and (iv) the environment (H2O/Na(+)) has a small effect on the renin-remikiren interaction. Copyright © 2015 Elsevier Inc. All rights reserved.
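The MM-PBSA estimates quoted above decompose the binding free energy in a standard way; the textbook form (not necessarily the authors' exact protocol) separates gas-phase molecular-mechanics terms from polar and nonpolar solvation contributions:

```latex
% MM-PBSA decomposition of the binding free energy: electrostatic and
% van der Waals molecular-mechanics terms, a Poisson-Boltzmann polar
% solvation term, a surface-area nonpolar term, and a solute entropy term.
\Delta G_{\mathrm{bind}}
  = \underbrace{\Delta E_{\mathrm{ele}} + \Delta E_{\mathrm{vdW}}}_{\Delta E_{\mathrm{MM}}}
  + \underbrace{\Delta G_{\mathrm{PB}} + \gamma\,\Delta\mathrm{SASA}}_{\Delta G_{\mathrm{solv}}}
  \;-\; T\Delta S
```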
Introduction to the LaRC central scientific computing complex
NASA Technical Reports Server (NTRS)
Shoosmith, John N.
1993-01-01
The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation) are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model for evaluation of different candidate solutions. In this dissertation, a frequency pruning and a detector pruning algorithm is developed that efficiently implements the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals and employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages that ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
Composition of Web Services Using Markov Decision Processes and Dynamic Programming
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services and where a valid composition requiring the selection of 1,000 services from the available set can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one or two orders of magnitude and more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247
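The dynamic-programming machinery named above is standard; a compact value-iteration sketch for a finite MDP is given below. The toy transition matrices, rewards (read them as QoS scores for invoking a candidate service in a composition state), and discount factor are illustrative, not the paper's data.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Value iteration for a finite MDP.
    P[a] is an (S, S) transition matrix for action a; R[a] is the expected
    immediate reward vector for action a. Returns the optimal value function
    and the greedy (optimal) policy."""
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy composition problem: 3 workflow states, 2 candidate services per state.
P = [np.array([[0.8, 0.2, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]]),
     np.array([[0.1, 0.9, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]])]
R = [np.array([1.0, 0.5, 0.0]), np.array([0.2, 2.0, 0.0])]
V, policy = value_iteration(P, R)
print(V, policy)   # per-state values and which service to invoke in each state
```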
Orthopaedic Application Of Spatio Temporal Analysis Of Body Form And Function
NASA Astrophysics Data System (ADS)
Tauber, C.; Au, J.; Bernstein, S.; Grant, A.; Pugh, J.
1983-07-01
Spatial and temporal analysis of walking provides the orthopaedist with objective evidence of functional ability and improvement in a patient. Patients with orthopaedic problems experiencing extreme pain and, consequently, irregularities in joint motions on weightbearing are videorecorded before, during and after a course of rehabilitative treatment and/or surgical correction of their disability. A specially-programmed computer analyzes these tapes for the parameters of walking by locating reflective spots which indicate the centers of the lower limb joints. The following parameters of gait are then generated: dynamic hip, knee and foot angles at various intervals during walking; vertical, horizontal and lateral displacements of each joint at various time intervals; linear and angular velocities of each joint; and the relationships between the joints during various phases of the gait cycle. The systematic sampling and analysis of the videorecordings by computer enable such information to be converted into and presented as computer graphics, as well as organized into tables of gait variables. This format of presentation of the skeletal adjustments involved in normal human motion provides the clinician with a visual format of gait information which objectively illuminates the multifaceted and complex factors involved. This system provides the clinician a method by which to evaluate the success of the regimen in terms of patient comfort and function.
High Performance Parallel Computational Nanotechnology
NASA Technical Reports Server (NTRS)
Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.
NASA Astrophysics Data System (ADS)
Chien, Cheng-Chih
In the past thirty years, the effectiveness of computer-assisted learning has been found to vary across individual studies. Today, with drastic technical improvements, computers are widespread in schools and used in a variety of ways. In this study, a design model involving educational technology, pedagogy, and content domain is proposed for the effective use of computers in learning. Computer simulation, constructivist and Vygotskian perspectives, and circular motion are the three elements of the specific Chain Model for instructional design. The goal of the physics course is to help students discard ideas that are not consistent with the physics community and rebuild new knowledge. To achieve the learning goal, the strategies of using conceptual conflicts and using language to internalize specific tasks into mental functions were included. Computer simulations and accompanying worksheets were used to help students explore their own ideas and to generate questions for discussion. Using animated images to describe the dynamic processes involved in circular motion may reduce the complexity and possible miscommunication resulting from verbal explanations. The effectiveness of the instructional material on student learning was evaluated. The results of the problem-solving activities show that students using computer simulations had significantly higher scores than students not using them. For conceptual understanding, on the pretest students in the non-simulation group had significantly higher scores than students in the simulation group; no significant difference was observed between the two groups on the posttest. The relations of gender, prior physics experience, and frequency of computer use outside the course to student achievement were also studied. There were fewer female students than male students, and fewer students using computer simulations than not; these characteristics limit the statistical power for detecting differences. For future research, further simulation-based interventions could be introduced to explore the potential of computer simulation in helping students learn, and a test of conceptual understanding with more problems and an appropriate difficulty level may be needed.
Computational complexity of the landscape II-Cosmological considerations
NASA Astrophysics Data System (ADS)
Denef, Frederik; Douglas, Michael R.; Greene, Brian; Zukowski, Claire
2018-05-01
We propose a new approach for multiverse analysis based on computational complexity, which leads to a new family of "computational" measure factors. By defining a cosmology as a space-time containing a vacuum with specified properties (for example small cosmological constant) together with rules for how time evolution will produce the vacuum, we can associate global time in a multiverse with clock time on a supercomputer which simulates it. We argue for a principle of "limited computational complexity" governing early universe dynamics as simulated by this supercomputer, which translates to a global measure for regulating the infinities of eternal inflation. The rules for time evolution can be thought of as a search algorithm, whose details should be constrained by a stronger principle of "minimal computational complexity". Unlike previously studied global measures, ours avoids standard equilibrium considerations and the well-known problems of Boltzmann Brains and the youngness paradox. We also give various definitions of the computational complexity of a cosmology, and argue that there are only a few natural complexity classes.
Chronopoulos, Dimitrios; Collet, Manuel; Ichchou, Mohamed
2015-02-17
The waves propagating within complex smart structures are hereby computed by employing a wave and finite element method. The structures can be of arbitrary layering and of complex geometric characteristics as long as they exhibit two-dimensional periodicity. The piezoelectric coupling phenomena are considered within the finite element formulation. The mass, stiffness and piezoelectric stiffness matrices of the modelled segment can be extracted using a conventional finite element code. The post-processing of these matrices involves the formulation of an eigenproblem whose solutions provide the phase velocities for each wave propagating within the structure and for any chosen direction of propagation. The model is then modified in order to account for a shunted piezoelectric patch connected to the composite structure. The impact of the energy dissipation induced by the shunted circuit on the total damping loss factor of the composite panel is then computed. The influence of the additional mass and stiffness provided by the attached piezoelectric devices on the wave propagation characteristics of the structure is also investigated.
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.
Hula, Andreas; Montague, P Read; Dayan, Peter
2015-06-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference.
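At the heart of Monte-Carlo tree search variants like the one described is the bandit-style selection rule (commonly UCB1) that decides which branch of the search tree to expand next. A minimal sketch of that rule follows; the exploration constant and data layout are illustrative, and the paper's algorithm adds IPOMDP-specific machinery on top.

```python
import math

def ucb1_select(parent_visits, child_visits, child_value_sums, c=1.4):
    """UCB1 child selection for Monte-Carlo tree search: trade off the
    empirical mean value of each child (exploitation) against an
    uncertainty bonus that shrinks with visits (exploration)."""
    def score(i):
        if child_visits[i] == 0:
            return float("inf")       # always try unvisited children first
        mean = child_value_sums[i] / child_visits[i]
        return mean + c * math.sqrt(math.log(parent_visits) / child_visits[i])
    return max(range(len(child_visits)), key=score)

# Example: three candidate actions after 30 parent visits.
print(ucb1_select(30, [10, 15, 5], [6.0, 7.5, 4.0]))
```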
Unraveling Entropic Rate Acceleration Induced by Solvent Dynamics in Membrane Enzymes.
Kürten, Charlotte; Syrén, Per-Olof
2016-01-16
Enzyme catalysis evolved in an aqueous environment. The influence of solvent dynamics on catalysis is, however, currently poorly understood and usually neglected. The study of water dynamics in enzymes and the associated thermodynamic consequences is highly complex and has involved computer simulations, nuclear magnetic resonance (NMR) experiments, and calorimetry. Water tunnels that connect the active site with the surrounding solvent are key to solvent displacement and dynamics. The protocol herein allows for the engineering of these motifs for water transport, which affects specificity, activity, and thermodynamics. By providing a biophysical framework founded on theory and experiments, the method presented herein can be used by researchers without previous expertise in computer modeling or biophysical chemistry. The method will advance our understanding of enzyme catalysis on the molecular level by measuring the enthalpic and entropic changes associated with catalysis by enzyme variants with obstructed water tunnels. The protocol can be used for the study of membrane-bound enzymes and other complex systems. This will enhance our understanding of the importance of solvent reorganization in catalysis as well as provide new catalytic strategies in protein design and engineering.
Synthetic collective intelligence.
Solé, Ricard; Amor, Daniel R; Duran-Nebreda, Salva; Conde-Pueyo, Núria; Carbonell-Ballestero, Max; Montañez, Raúl
2016-10-01
Intelligent systems have emerged in our biosphere in different contexts and achieving different levels of complexity. The requirement of communication in a social context has in all cases been a determining factor. The human brain, probably co-evolving with language, is an exceedingly successful example. Similarly, the complex collective decisions of social insects emerge from information exchanges between many agents. The difference is that such processing is achieved with limited individual cognitive power. Computational models and embodied versions using non-living systems, particularly involving robot swarms, have been used to explore the potential of collective intelligence. Here we suggest a novel approach to the problem grounded in the genetic engineering of unicellular systems, which can be modified in order to interact, store memories, or adapt to external stimuli in collective ways. What we label Synthetic Swarm Intelligence defines a parallel approach to the evolution of computation and swarm intelligence and allows us to explore potential embodied scenarios for decision making at the microscale. Here, we consider several relevant examples of collective intelligence and their synthetic organism counterparts.
The application of an MPM-MFM method for simulating weapon-target interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, X.; Zou, Q.; Zhang, D. Z.
2005-01-01
During the past two decades, Los Alamos National Laboratory (LANL) has developed computational algorithms and software for analysis of multiphase flow suitable for high-speed projectile penetration of metallic and nonmetallic materials, using a material point method (MPM)-multiphase flow method (MFM). Recently, ACTA has teamed with LANL to advance a computational algorithm for simulating complex weapon-target interaction for penetrating and exploding munitions, such as tank rounds and artillery shells, as well as non-exploding kinetic energy penetrators. This paper will outline the mathematical basis for the MPM-MFM method as implemented in LANL's CartaBlanca code. CartaBlanca, written entirely in Java using object-oriented design, is used to solve complex problems involving (a) failure and penetration of solids, (b) heat transfer, (c) phase change, (d) chemical reactions, and (e) multiphase flow. We will present its application to the penetration of a steel target by a tungsten cylinder and compare results with time-resolved experimental data published by Anderson et al., Int. J. Impact Engng., Vol. 16, No. 1, pp. 1-18, 1995.
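The heart of any MPM step is the scatter of particle state to a background grid. The fragment below is a minimal one-dimensional particle-to-grid transfer with linear shape functions; it is a generic textbook illustration, not CartaBlanca's implementation, and all values are made up.

```python
import numpy as np

nx, dx = 8, 1.0                          # grid nodes 0..8, cell size
xp = np.array([1.3, 2.7, 2.9, 5.1])      # particle positions
mp = np.array([1.0, 1.0, 2.0, 1.5])      # particle masses
vp = np.array([0.5, -0.2, 0.1, 0.3])     # particle velocities

mass = np.zeros(nx + 1)
mom = np.zeros(nx + 1)
for x, m, v in zip(xp, mp, vp):
    i = int(x // dx)                     # left node of the particle's cell
    w = x / dx - i                       # linear shape-function weight
    mass[i] += (1 - w) * m
    mom[i] += (1 - w) * m * v
    mass[i + 1] += w * m
    mom[i + 1] += w * m * v

vel = np.divide(mom, mass, out=np.zeros_like(mom), where=mass > 0)
print("nodal velocities:", np.round(vel, 3))
```

A full step would then compute internal forces on the grid, advance nodal momenta, and interpolate the result back to the particles, which is what lets MPM handle the large deformations of penetration problems without mesh tangling.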
Orenha, Renato Pereira; Santiago, Régis Tadeu; Haiduke, Roberto Luiz Andrade; Galembeck, Sérgio Emanuel
2017-05-05
Two treatments of relativistic effects, namely effective core potentials (ECP) and all-electron scalar relativistic effects (DKH2), are used to obtain geometries and chemical reaction energies for a series of ruthenium complexes in B3LYP/def2-TZVP calculations. Specifically, the reaction energies of reduction (A-F), isomerization (G-I), and the negative trans influence of Cl- in relation to NH3 (J-L) are considered. The ECP and DKH2 approaches provided geometric parameters close to experimental data and the same ordering for energy changes of reactions A-L. From geometries optimized with ECP, the electronic energies are also determined by means of the same ECP and basis set combined with the computational methods MP2, M06, BP86 and its derivatives, as well as B2PLYP, LC-wPBE, and CCSD(T) (the reference method). For reactions A-I, B2PLYP provides the best agreement with CCSD(T) results. Additionally, B3LYP gave the smallest error for the energies of reactions J-L.
Heffernan, Kayla Joanne; Chang, Shanton; Maclean, Skye Tamara; Callegari, Emma Teresa; Garland, Suzanne Marie; Reavley, Nicola Jane; Varigos, George Andrew; Wark, John Dennis
2016-02-09
The now ubiquitous catchphrase, "There's an app for that," rings true owing to the growing number of mobile phone apps. In excess of 97,000 eHealth apps are available in major app stores. Yet the effectiveness of these apps varies greatly. While a minority of apps are developed grounded in theory and in conjunction with health care experts, the vast majority are not. This is concerning given the Hippocratic notion of "do no harm." There is currently no unified formal theory for developing interactive eHealth apps, and development is especially difficult when complex messaging is required, such as in health promotion and prevention. This paper aims to provide insight into the creation of interactive eHealth apps for complex messaging by leveraging the Safe-D case study, which involved complex messaging required to guide safe but sufficient UV exposure for vitamin D synthesis in users. We aim to create recommendations for developing interactive eHealth apps for complex messages based on the lessons learned during Safe-D app development. For this case study we developed an Apple and Android app, both named Safe-D, to safely improve vitamin D status in young women by encouraging safe ultraviolet radiation exposure. The app was developed through participatory action research involving medical and human computer interaction researchers, subject matter expert clinicians, external developers, and target users. The recommendations for development were created from analysis of the development process. By working with clinicians and implementing disparate design examples from the literature, we developed the Safe-D app. From this development process, recommendations for developing interactive eHealth apps for complex messaging were created: (1) involve a multidisciplinary team in the development process, (2) manage complex messages to engage users, and (3) design for interactivity (tailor recommendations, remove barriers to use, design for simplicity). This research has provided principles for developing interactive eHealth apps for complex messaging by aggregating existing design concepts and extending them with new lessons from our development process. A set of guidelines for developing interactive eHealth apps generally, and specifically those for complex messaging, was previously missing from the literature; this research has contributed these principles. Safe-D delivers complex messaging simply and explicitly, aiding education while considering user safety.
Future in biomolecular computation
NASA Astrophysics Data System (ADS)
Wimmer, E.
1988-01-01
Large-scale computations for biomolecules are dominated by three levels of theory: rigorous quantum mechanical calculations for molecules with up to about 30 atoms, semi-empirical quantum mechanical calculations for systems with up to several hundred atoms, and force-field molecular dynamics studies of biomacromolecules with 10,000 atoms and more, including surrounding solvent molecules. It can be anticipated that increased computational power will allow the treatment of larger systems of ever growing complexity. Due to the scaling of the computational requirements with increasing number of atoms, the force-field approaches will benefit the most from increased computational power. On the other hand, progress in methodologies such as density functional theory will enable us to treat larger systems on a fully quantum mechanical level, and a combination of molecular dynamics and quantum mechanics can be envisioned. One of the greatest challenges in biomolecular computation is the protein folding problem. It is unclear at this point whether an approach with current methodologies will lead to a satisfactory answer or if unconventional, new approaches will be necessary. In any event, due to the complexity of biomolecular systems, a hierarchy of approaches will have to be established and used in order to capture the wide ranges of length-scales and time-scales involved in biological processes. In terms of hardware development, speed and power of computers will increase while the price/performance ratio becomes more and more favorable. Parallelism can be anticipated to become an integral architectural feature in a range of computers. It is unclear at this point how quickly massively parallel systems will become easy enough to use that new methodological developments can be pursued on such computers. Current trends show that distributed processing, such as the combination of convenient graphics workstations and powerful general-purpose supercomputers, will lead to a new style of computing in which calculations are monitored and manipulated as they proceed. The combination of a numeric approach with artificial-intelligence approaches can be expected to open up entirely new possibilities. Ultimately, the most exciting aspect of the future in biomolecular computing will be the unexpected discoveries.
Lindström, Ida; Dogan, Jakob
2018-05-18
Intrinsically disordered proteins (IDPs) are abundant in the eukaryotic proteome. However, little is known about the role of subnanosecond dynamics and the conformational entropy that it represents in protein-protein interactions involving IDPs. Using nuclear magnetic resonance side chain and backbone relaxation, stopped-flow kinetics, isothermal titration calorimetry, and computational studies, we have characterized the interaction between the globular TAZ1 domain of the CREB binding protein and the intrinsically disordered transactivation domain of STAT2 (TAD-STAT2). We show that the TAZ1/TAD-STAT2 complex retains considerable subnanosecond motions, with TAD-STAT2 undergoing only a partial disorder-to-order transition. We report here the first experimental determination of the conformational entropy change for both binding partners in an IDP binding interaction and find that the total change even exceeds in magnitude the binding enthalpy and is comparable to the contribution from the hydrophobic effect, demonstrating its importance in the binding energetics. Furthermore, we show that the conformational entropy change for TAZ1 is also instrumental in maintaining a biologically meaningful binding affinity. Strikingly, a spatial clustering of very high amplitude motions and a cluster of more rigid sites in the complex exist, which through computational studies we found to overlap with regions that experience energetic frustration and are less frustrated, respectively. Thus, the residual dynamics in the bound state could be necessary for faster dissociation, which is important for proteins that interact with multiple binding partners.
Tools of the Future: How Decision Tree Analysis Will Impact Mission Planning
NASA Technical Reports Server (NTRS)
Otterstatter, Matthew R.
2005-01-01
The universe is infinitely complex; however, the human mind has a finite capacity. The multitude of possible variables, metrics, and procedures in mission planning are far too many to address exhaustively. This is unfortunate because, in general, considering more possibilities leads to more accurate and more powerful results. To compensate, we can get more insightful results by employing our greatest tool, the computer. The power of the computer will be utilized through a technology that considers every possibility, decision tree analysis. Although decision trees have been used in many other fields, this is innovative for space mission planning. Because this is a new strategy, no existing software is able to completely accommodate all of the requirements. This was determined through extensive research and testing of current technologies. It was necessary to create original software, for which a short-term model was finished this summer. The model was built into Microsoft Excel to take advantage of the familiar graphical interface for user input, computation, and viewing output. Macros were written to automate the process of tree construction, optimization, and presentation. The results are useful and promising. If this tool is successfully implemented in mission planning, our reliance on old-fashioned heuristics, an error-prone shortcut for handling complexity, will be reduced. The computer algorithms involved in decision trees will revolutionize mission planning. The planning will be faster and smarter, leading to optimized missions with the potential for more valuable data.
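The core computation such a tool automates is expected-value rollback: chance nodes average over their outcomes, decision nodes take the best branch. A minimal sketch, with a hypothetical launch-window choice and invented probabilities and science values:

```python
def ev(node):
    """Expected value of a tree node under optimal decisions."""
    if isinstance(node, (int, float)):       # leaf: terminal science value
        return node
    kind, branches = node
    if kind == "chance":
        return sum(p * ev(child) for p, child in branches)
    return max(ev(child) for child in branches.values())   # decision node

# Hypothetical plan fragment: choose a launch window, then face weather risk.
tree = ("decision", {
    "window_A": ("chance", [(0.7, 100), (0.3, 20)]),
    "window_B": ("chance", [(0.9, 60), (0.1, 40)]),
})
_, options = tree
print({opt: ev(child) for opt, child in options.items()})   # A: 76.0, B: 58.0
```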
Wang, Juan; Guo, Yunjie; Zhang, Xue
2018-02-01
Calmodulin-dependent protein kinase (CAMK) is physiologically activated in fertilized human oocytes and is involved in the Ca2+ response pathways that link the fertilization calmodulin signal to meiosis resumption and cortical granule exocytosis. The kinase has an unstructured C-terminal tail that can be recognized and bound by the PDZ5 domain of its cognate partner, the multi-PDZ domain protein (MUP). In the current study, we reported a rational biomolecular design of a halogen-bonding system at the complex interface of CAMK's C-terminal peptide with the MUP PDZ5 domain by using high-level computational approaches. Four organic halogens were employed as atom probes to explore the structural geometry and energetic properties of designed halogen bonds in the PDZ5-peptide complex. It was found that the heavier halogen elements such as bromine (Br) and iodine (I) can confer stronger halogen bonds but would cause bad atomic contacts and overlaps at the complex interface, while fluorine (F) cannot form an effective halogen bond in the complex. In addition, halogen substitution at different positions of the peptide's aromatic ring would result in distinct effects on the halogen-bonding system. The computational findings were then verified by using fluorescence analysis; it is indicated that the halogen type and substitution position play a critical role in the interaction strength of halogen bonds, and thus the PDZ5-peptide binding affinity can be improved considerably by optimizing their combination.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... Computer Software and Complex Electronics Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear...-1209, "Software Requirement Specifications for Digital Computer Software and Complex Electronics used... Electronics Engineers (ANSI/IEEE) Standard 830-1998, "IEEE Recommended Practice for Software Requirements...
Recent advances in QM/MM free energy calculations using reference potentials.
Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L
2015-05-01
Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the use of simplified models still allows one to obtain cutting-edge results at substantially reduced computational cost.
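A common form of the reference-potential correction is one-sided free energy perturbation from the cheap level to the expensive one (Zwanzig's exponential average). The sketch below applies that formula to synthetic energy gaps; the Gaussian gap distribution and all numbers are assumptions for illustration.

```python
import numpy as np

kB, T = 0.0019872041, 300.0            # kcal/(mol K), temperature [K]
beta = 1.0 / (kB * T)

# Synthetic E_high - E_low gaps sampled on the low-level (reference) ensemble.
rng = np.random.default_rng(1)
gap = rng.normal(2.0, 0.8, size=500)   # [kcal/mol], invented numbers

# Zwanzig: dA = -kT * ln < exp(-beta * (E_high - E_low)) >_low
dA = -np.log(np.mean(np.exp(-beta * gap))) / beta
print(f"free-energy correction: {dA:.2f} kcal/mol")
```

Because only the energy gap enters, the expensive potential needs to be evaluated only on configurations already sampled at the cheap level, which is where the savings come from.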
Portrat, Sophie; Guida, Alessandro; Phénix, Thierry; Lemaire, Benoît
2016-04-01
Working memory (WM) is a cognitive system allowing short-term maintenance and processing of information. Maintaining information in WM classically consists of rehearsing or refreshing it. Chunking could also be considered a maintenance mechanism. However, in the literature, it is more often used to explain performance than explicitly investigated within WM paradigms. Hence, the aim of the present paper was (1) to strengthen the experimental dialogue between WM and chunking by studying the effect of acronyms in a computer-paced complex span task paradigm, and (2) to formalize this dialogue explicitly within a computational model. Young adults performed a WM complex span task in which they had to maintain series of 7 letters for further recall while performing a concurrent location judgment task. The series to be remembered were either random strings of letters or strings containing a 3-letter acronym that appeared in position 1, 3, or 5 in the series. Together, the data and simulations provide a better understanding of the maintenance mechanisms taking place in WM and their interplay with long-term memory. Indeed, the behavioral WM performance provides evidence for the functional characteristics of chunking, which seems to be, especially in a WM complex span task, an attentional time-based mechanism that certainly enhances WM performance but also competes with other processes at hand in WM. Computational simulations support and delineate such a conception by showing that searching for a chunk in long-term memory involves attentionally demanding subprocesses that essentially take place during the encoding phases of the task.
NASA Astrophysics Data System (ADS)
Vagh, Hardik A.; Baghai-Wadji, Alireza
2008-12-01
Current technological challenges in materials science and the high-tech device industry require the solution of boundary value problems (BVPs) involving regions of various scales, e.g. multiple thin layers, fibre-reinforced composites, and nano/micro pores. In most cases straightforward application of standard variational techniques to BVPs of practical relevance necessarily leads to unsatisfactorily ill-conditioned analytical and/or numerical results. To remedy the computational challenges associated with sub-sectional heterogeneities, various sophisticated homogenization techniques need to be employed. Homogenization refers to the systematic process of smoothing out the sub-structural heterogeneities, leading to the determination of effective constitutive coefficients. Ordinarily, homogenization involves sophisticated averaging and asymptotic order analysis to obtain solutions. In the majority of cases only zero-order terms are constructed, due to the complexity of the processes involved. In this paper we propose a constructive scheme for obtaining homogenized solutions involving higher-order terms, and thus guaranteeing higher accuracy and greater robustness of the numerical results.
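In standard two-scale notation, the higher-order terms referred to here are the correctors in the expansion

$$ u^{\varepsilon}(x) \;=\; u_0(x, y) \;+\; \varepsilon\, u_1(x, y) \;+\; \varepsilon^{2} u_2(x, y) \;+\; \cdots, \qquad y = x/\varepsilon, $$

where ε is the ratio of the fine to the coarse length scale, u_0 is the usual homogenized (zero-order) solution, and u_1, u_2, ... carry the sub-structural detail that zero-order homogenization discards.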
Quantum computational complexity, Einstein's equations and accelerated expansion of the Universe
NASA Astrophysics Data System (ADS)
Ge, Xian-Hui; Wang, Bin
2018-02-01
We study the relation between quantum computational complexity and general relativity. The quantum computational complexity is proposed to be quantified by the shortest length of geodesic quantum curves. We examine the complexity/volume duality in a geodesic causal ball in the framework of Fermi normal coordinates and derive the full non-linear Einstein equation. Using insights from the complexity/action duality, we argue that the accelerated expansion of the universe could be driven by quantum complexity and free from the coincidence and fine-tuning problems.
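For orientation, the two holographic proposals invoked here are usually quoted schematically as

$$ \mathcal{C}_V \sim \max_{\Sigma}\frac{V(\Sigma)}{G_N\,\ell}, \qquad \mathcal{C}_A = \frac{S_{\mathrm{WDW}}}{\pi\hbar}, $$

where V(Σ) is the volume of a maximal spatial slice, ℓ a characteristic length scale, and S_WDW the gravitational action of the Wheeler-DeWitt patch (complexity = volume and complexity = action, respectively).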
The importance of structural anisotropy in computational models of traumatic brain injury.
Carlsen, Rika W; Daphalapurkar, Nitin P
2015-01-01
Understanding the mechanisms of injury might prove useful in assisting the development of methods for the management and mitigation of traumatic brain injury (TBI). Computational head models can provide valuable insight into the multi-length-scale complexity associated with the primary nature of diffuse axonal injury. It involves understanding how the trauma to the head (at the centimeter length scale) translates to the white-matter tissue (at the millimeter length scale), and even further down to the axonal-length scale, where physical injury to axons (e.g., axon separation) may occur. However, to accurately represent the development of TBI, the biofidelity of these computational models is of utmost importance. There has been a focused effort to improve the biofidelity of computational models by including more sophisticated material definitions and implementing physiologically relevant measures of injury. This paper summarizes recent computational studies that have incorporated structural anisotropy in both the material definition of the white matter and the injury criterion as a means to improve the predictive capabilities of computational models for TBI. We discuss the role of structural anisotropy on both the mechanical response of the brain tissue and on the development of injury. We also outline future directions in the computational modeling of TBI.
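A common way such models encode axonal anisotropy in the injury criterion is through the stretch along the local fiber direction,

$$ \lambda_a = \sqrt{\mathbf{a}_0 \cdot \mathbf{C}\,\mathbf{a}_0}, $$

where a_0 is the unit axonal direction (typically taken from diffusion tensor imaging) and C is the right Cauchy-Green deformation tensor; injury is flagged where λ_a exceeds a tissue-specific threshold. The precise form varies between the studies reviewed.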
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1994-01-01
Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock-induced combustion phenomena, high-enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.
Development and implementation of a PACS network and resource manager
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Taira, Ricky K.; Dwyer, Samuel J., III; Huang, H. K.
1992-07-01
Clinical acceptance of PACS is predicated upon maximum uptime. Upon component failure, detection, diagnosis, reconfiguration and repair must occur immediately. Our current PACS network is large, heterogeneous, complex and geographically widespread. The overwhelming number of network devices, computers and software processes involved in a departmental or inter-institutional PACS makes development of tools for network and resource management critical. The authors have developed and implemented a comprehensive solution (PACS Network-Resource Manager) using the OSI Network Management Framework, with network element agents that respond to queries and commands from network management stations. Managed resources include: communication protocol layers for Ethernet, FDDI and UltraNet; network devices; computer and operating system resources; and application, database and network services. The Network-Resource Manager is currently being used for warning, fault, security violation and configuration modification event notification. Analysis, automation and control applications have been added so that PACS resources can be dynamically reconfigured and so that users are notified when active involvement is required. Custom data and error logging have been implemented that allow statistics for each PACS subsystem to be charted for performance data. The Network-Resource Manager allows our departmental PACS system to be monitored continuously and thoroughly, with a minimal amount of personal involvement and time.
High-order moments of spin-orbit energy in a multielectron configuration
NASA Astrophysics Data System (ADS)
Na, Xieyu; Poirier, M.
2016-07-01
In order to analyze the energy-level distribution in complex ions such as those found in warm dense plasmas, this paper provides values for high-order moments of the spin-orbit energy in a multielectron configuration. Using second-quantization results and standard angular algebra or fully analytical expressions, explicit values are given for moments up to 10th order for the spin-orbit energy. Two analytical methods are proposed, using the uncoupled or coupled orbital and spin angular momenta. The case of multiple open subshells is considered with the help of cumulants. The proposed expressions for spin-orbit energy moments are compared to numerical computations from Cowan's code and agree with them. The convergence of the Gram-Charlier expansion involving these spin-orbit moments is analyzed. While a spectrum with infinitely thin components cannot be adequately represented by such an expansion, a suitable convolution procedure ensures the convergence of the Gram-Charlier series provided high-order terms are accounted for. A corrected analytical formula for the third-order moment involving both spin-orbit and electron-electron interactions turns out to be in fair agreement with Cowan's numerical computations.
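For reference, the Gram-Charlier A series referred to here expands a density around a Gaussian in terms of its central moments,

$$ f(x) \approx \varphi(x)\Big[1 + \sum_{n\ge 3} c_n\,\mathrm{He}_n(x)\Big], \qquad c_3 = \frac{\mu_3}{3!\,\sigma^3}, \quad c_4 = \frac{\mu_4 - 3\sigma^4}{4!\,\sigma^4}, $$

with φ the standard normal density, He_n the (probabilists') Hermite polynomials, μ_n the central moments, and x the standardized energy. The paper's point is that many such terms, built from the high-order moments above, are needed once the spectrum is suitably convolved.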
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial, large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Description and operational status of the National Transonic Facility computer complex
NASA Technical Reports Server (NTRS)
Boyles, G. B., Jr.
1986-01-01
This paper describes the National Transonic Facility (NTF) computer complex and its support of tunnel operations. The capabilities of the research data acquisition and reduction systems are discussed, along with the types of data that can be acquired and presented. Pretest, test, and posttest capabilities are also outlined, together with a discussion of the computer complex's role in monitoring the tunnel control processes and providing the tunnel operators with information needed to control the tunnel. Planned enhancements to the computer complex for support of future testing are presented.
Dimensionality of visual complexity in computer graphics scenes
NASA Astrophysics Data System (ADS)
Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce
2008-02-01
How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the pooled subject responses using multidimensional scaling. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material/lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be strongly correlated. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
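The two analyses combine off-the-shelf pieces; a minimal sketch on synthetic data (the latent scores and the noisy "polygon count" are invented stand-ins) might look like this:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_scenes = 21
latent = rng.random(n_scenes)                    # stand-in "true" complexity
D = np.abs(latent[:, None] - latent[None, :])    # pooled pairwise dissimilarity

# Embed the dissimilarities in 2-D, as in the multidimensional scaling analysis.
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(D)

# Compare a 1-D complexity ordering against a computable metric.
polygon_count = latent + 0.5 * rng.standard_normal(n_scenes)   # noisy metric
rho, _ = spearmanr(latent, polygon_count)
print(xy.shape, f"Spearman rho = {rho:.2f}")
```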
Ullman, Michael T; Pancheva, Roumyana; Love, Tracy; Yee, Eiling; Swinney, David; Hickok, Gregory
2005-05-01
Are the linguistic forms that are memorized in the mental lexicon and those that are specified by the rules of grammar subserved by distinct neurocognitive systems or by a single computational system with relatively broad anatomic distribution? On a dual-system view, the productive -ed-suffixation of English regular past tense forms (e.g., look-looked) depends upon the mental grammar, whereas irregular forms (e.g., dig-dug) are retrieved from lexical memory. On a single-mechanism view, the computation of both past tense types depends on associative memory. Neurological double dissociations between regulars and irregulars strengthen the dual-system view. The computation of real and novel, regular and irregular past tense forms was investigated in 20 aphasic subjects. Aphasics with non-fluent agrammatic speech and left frontal lesions were consistently more impaired at the production, reading, and judgment of regular than irregular past tenses. Aphasics with fluent speech and word-finding difficulties, and with left temporal/temporo-parietal lesions, showed the opposite pattern. These patterns held even when measures of frequency, phonological complexity, articulatory difficulty, and other factors were held constant. The data support the view that the memorized words of the mental lexicon are subserved by a brain system involving left temporal/temporo-parietal structures, whereas aspects of the mental grammar, in particular the computation of regular morphological forms, are subserved by a distinct system involving left frontal structures.
Airbreathing Propulsion System Analysis Using Multithreaded Parallel Processing
NASA Technical Reports Server (NTRS)
Schunk, Richard Gregory; Chung, T. J.; Rodriguez, Pete (Technical Monitor)
2000-01-01
In this paper, parallel processing is used to analyze the mixing and combustion behavior of hypersonic flow. Preliminary work for a sonic transverse hydrogen jet injected from a slot into a Mach 4 airstream in a two-dimensional duct combustor has been completed [Moon and Chung, 1996]. Our aim is to extend this work to a three-dimensional domain using multithreaded domain decomposition parallel processing based on the flowfield-dependent variation theory. Numerical simulations of chemically reacting flows are difficult because of the strong interactions between the turbulent hydrodynamic and chemical processes. The algorithm must provide an accurate representation of the flowfield, since unphysical flowfield calculations will lead to the faulty loss or creation of species mass fraction, or even premature ignition, which in turn alters the flowfield information. Another difficulty arises from the disparity in time scales between the flowfield and chemical reactions, which may require the use of finite rate chemistry. The situation is more complex still when there is a disparity in the length scales involved in turbulence. In order to cope with these complicated physical phenomena, it is our plan to utilize the flowfield-dependent variation theory mentioned above, facilitated by large eddy simulation. Undoubtedly, the proposed computation requires the most sophisticated computational strategies. Multithreaded domain decomposition parallel processing will be necessary in order to reduce both computational time and storage; without such treatment, analysis of airbreathing combustion appears difficult, if not impossible.
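As a minimal illustration of the multithreaded domain decomposition idea (not the flowfield-dependent variation solver itself), here is a 1-D field updated by a smoothing stencil, with subdomains swept in parallel against a shared copy of the previous iterate:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

u = np.sin(np.linspace(0.0, np.pi, 10_000))          # initial 1-D field
chunks = np.array_split(np.arange(u.size), 4)        # four subdomains

def sweep(idx, u_old):
    """Jacobi-style smoothing of one subdomain, reading the global u_old."""
    out = u_old[idx].copy()
    inner = idx[(idx > 0) & (idx < u_old.size - 1)]  # skip physical boundaries
    out[inner - idx[0]] = (0.5 * u_old[inner]
                           + 0.25 * (u_old[inner - 1] + u_old[inner + 1]))
    return out

with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(100):
        u_old = u.copy()                             # halo data for all threads
        parts = pool.map(sweep, chunks, [u_old] * 4)
        u = np.concatenate(list(parts))
print(u[:3])
```

In a real reacting-flow code each subdomain would carry its own species and energy equations, and the halo exchange would be the main synchronization point.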
Accelerating the Mining of Influential Nodes in Complex Networks through Community Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halappanavar, Mahantesh; Sathanur, Arun V.; Nandi, Apurba
Computing the set of influential nodes of a given size to ensure maximal spread of influence on a complex network is a challenging problem impacting multiple applications. A rigorous approach to influence maximization involves utilization of optimization routines that come with a high computational cost. In this work, we propose to exploit the existence of communities in complex networks to accelerate the mining of influential seeds. We provide intuitive reasoning to explain why our approach should be able to provide speedups without significantly degrading the extent of the spread of influence when compared to the case of influence maximization without using the community information. Additionally, we have parallelized the complete workflow by leveraging an existing parallel implementation of the Louvain community detection algorithm. We then conduct a series of experiments on a dataset with three representative graphs to first verify our implementation and then demonstrate the speedups. Our method achieves speedups ranging from 3x to 28x for graphs with a small number of communities while nearly matching or even exceeding the activation performance on the entire graph. Complexity analysis reveals that dramatic speedups are possible for larger graphs that contain a correspondingly larger number of communities. In addition to the speedups obtained from the utilization of the community structure, scalability results show up to 6.3x speedup on 20 cores relative to the baseline run on 2 cores. Finally, current limitations of the approach are outlined along with the planned next steps.
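A stripped-down version of the pipeline (community detection, one seed per large community, spread scored by independent-cascade simulation) can be put together with networkx; the graph, seed rule, and cascade probability below are stand-ins, not the paper's optimization-based algorithm:

```python
import random
import networkx as nx

def cascade(G, seeds, p=0.1):
    """Size of one independent-cascade activation starting from `seeds`."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new = [v for u in frontier for v in G[u]
               if v not in active and random.random() < p]
        active.update(new)
        frontier = new
    return len(active)

G = nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=1)
comms = sorted(nx.community.louvain_communities(G, seed=1), key=len, reverse=True)
seeds = [max(c, key=G.degree) for c in comms[:5]]   # top node of 5 largest
spread = sum(cascade(G, seeds) for _ in range(200)) / 200
print("seeds:", seeds, " mean spread:", round(spread, 1))
```

Restricting the seed search to within communities is what shrinks the candidate space and yields the reported speedups.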
Gas-phase nitrosation of ethylene and related events in the C2H4NO+ landscape.
Gerbaux, Pascal; Dechamps, Noemie; Flammang, Robert; Nam, Pham Cam; Nguyen, Minh Tho; Djazi, Fayçal; Berruyer, Florence; Bouchoux, Guy
2008-06-19
The C2H4NO(+) system has been examined by means of quantum chemical calculations using the G2 and G3B3 approaches and tandem mass spectrometry experiments. Theoretical investigation of the C2H4NO(+) potential-energy surface includes 19 stable C2H4NO(+) structures and a large set of their possible interconnections. These computations provide insights for the understanding of (i) the addition of the nitrosonium cation NO(+) to the ethylene molecule, (ii) the skeletal rearrangements evidenced in previous experimental studies on comparable systems, and (iii) the experimental identification of new C2H4NO(+) structures. It is predicted from computation that gas-phase nitrosation of ethylene may produce C2H4(*)NO(+) adducts, the most stable structure of which is a pi-complex, 1, stabilized by ca. 65 kJ/mol with respect to its separated components. This complex was produced in the gas phase by a transnitrosation process involving as reactant a complex between water and NO(+) (H2O.NO(+)) and the ethylene molecule, and was fully characterized by collisional experiments. Among the other C2H4NO(+) structures predicted by theory to be protected against dissociation or isomerization by significant energy barriers, five were also experimentally identified. These findings include structures CH3CHNO(+) (5), CH3CNOH(+) (8), CH3NHCO(+) (18), CH3NCOH(+) (19), and an ion/neutral complex CH2O...HCNH(+) (12).
Advanced computer-aided design for bone tissue-engineering scaffolds.
Ramin, E; Harris, R A
2009-04-01
The design of scaffolds with an intricate and controlled internal structure represents a challenge for tissue engineering. Several scaffold-manufacturing techniques allow the creation of complex architectures but with little or no control over the main features of the channel network, such as the size, shape, and interconnectivity of each individual channel, resulting in intricate but random structures. The combined use of computer-aided design (CAD) systems and layer-manufacturing techniques allows a high degree of control over these parameters with few limitations in terms of achievable complexity. However, the design of the complex and intricate networks of channels required in CAD is extremely time-consuming, since manually modelling hundreds of different geometrical elements, all with different parameters, may require several days per scaffold structure. An automated design methodology is proposed by this research to overcome these limitations. This approach involves the investigation of novel software algorithms, which are able to interact with a conventional CAD program and permit the automated design of several geometrical elements, each with a different size and shape. In this work, the variability of the parameters required to define each geometry has been set as random, but any other distribution could have been adopted. This methodology has been used to design five cubic scaffolds with interconnected pore channels that range from 200 to 800 μm in diameter, each with an increased complexity of the internal geometrical arrangement. A clinical case study, consisting of an integration of one of these geometries with a craniofacial implant, is then presented.
Leveraging Modeling Approaches: Reaction Networks and Rules
Blinov, Michael L.; Moraru, Ion I.
2012-01-01
We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high resolution and/or high throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatio-temporal distribution of molecules and complexes, their interactions kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks – the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks. PMID:22161349
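The combinatorial explosion motivating rule-based tools is easy to quantify: a molecule with n independent modification sites has 2^n states, while the rule set grows only linearly. A two-line illustration:

```python
# Explicit-network size versus rule count for n independent binding sites:
# each site doubles the number of distinct species, but needs only one rule.
for n_sites in (5, 10, 20, 40):
    print(f"{n_sites:2d} sites -> {2 ** n_sites:,} species, {n_sites} rules")
```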
Outsourcing data processing: planning for the disentanglement.
Moss, M E; Gordon, M L
1993-06-01
Outsourcing data processing operations may be considered a conventional acquisition transaction between a customer and supplier. The most distinctive feature of a DP outsourcing contract is that it involves complex issues relating to computer software and technology and, frequently, intense issues relating to employees. But one must do more in order to provide for preservation of the integrity (and, therefore, the value) of the data center. The contract must include not just the sale of a facility to a supplier who will take over the operations, but also terms for reconveying the facility at a future date. Getting out of the arrangement can be very complex. Disentanglement can be made less complex, however, if the customer and the supplier negotiate all or part of the disentanglement procedures during the original contract proposal. Know ahead of time the possible scenarios for when disentanglement may take place, and know what to do during the contract negotiations and during the length of the agreement to keep track of each other's properties. Know also the risks involved in outsourcing DP operations, such as what happens when the supplier's business fails. Having the supplier set up a separate profit entity for your contracted business or using a lien on the data center properties may help avoid loss if such a failure occurs.
Timms, Sara; Lakhani, Raj; Connor, Steve; Hopkins, Claire
2017-07-01
Introduction Pneumosinus dilatans (PSD) is a rare phenomenon involving the expansion of the paranasal sinuses, without bony destruction or a mass. Previously documented cases have demonstrated simple expansion of a solitary air cell. We present two unique cases of PSD in the presence of meningioma, in which complex new cells developed within the frontal sinus. One of the two patients developed associated sinus disease. Case 1 A 28-year-old man presented with facial pain. A computed tomography scan showed an abnormally enlarged, septated right frontal sinus, not present on childhood scans. He underwent a modified endoscopic Lothrop approach to divide the septations, and his symptoms resolved. Case 2 A 72-year-old woman presented with a 3-month history of headaches. Scans revealed a left frontal meningioma and multiple enlarged, dilated left frontal air cells. She had no clinical sinusitis and therefore was managed conservatively. Conclusions PSD has been widely documented in association with fibrous dysplasia and meningioma. The most prevalent theory of the mechanism of PSD is of obstruction of the sinus ostium causing sinus expansion through a "ball-valve" effect. Our cases, which demonstrate septated PSD, suggest a more complex process involving local mediators and highlight the need to consider underlying meningioma in pneumosinus dilatans.
Negi, Surendra S.; Carol, Andrew A.; Pandya, Shivangi; Braun, Werner; Anderson, Louise E.
2008-01-01
In immunogold double-labeling of pea leaf thin sections with antibodies raised against ferredoxin-NADP reductase (EC 1.18.1.2, FNR) and antibodies directed against the A or B subunits of the NADP-linked glyceraldehyde-3-P dehydrogenase (GAPD) (EC 1.2.1.13), many small and large gold particles were found together over the chloroplasts. Nearest neighbor analysis of the distribution of the gold particles indicates that FNR and the NADP-linked GAPD are co-localized, in situ. This suggests that FNR might carry FADH2 or NADPH from the thylakoid membrane to GAPD, or that ferredoxin might carry electrons to FNR co-localized with GAPD in the stroma. Crystal structures of the spinach enzymes are available. When they are docked computationally, the proteins appear, as modeled, to be able to form at least two different complexes. One involves a single GAPD monomer and an FNR monomer (or dimer). The amino acid residues located at the putative interface are highly conserved on the chloroplastic forms of both enzymes. The other potential complex involves the GAPD A2B2 tetramer and an FNR monomer (or dimer). The interface residues are conserved in this model as well. Ferredoxin is able to interact with FNR in either complex. PMID:17945509
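Nearest neighbor analysis of this kind reduces to comparing observed inter-particle distances with a randomized control. A sketch with synthetic coordinates (all numbers invented):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
small = rng.uniform(0, 1000, size=(200, 2))             # small-gold hits [nm]
large = small[:150] + rng.normal(0, 25, size=(150, 2))  # large gold, clustered

obs = cKDTree(large).query(small)[0].mean()             # observed NN distance
ctrl = cKDTree(rng.uniform(0, 1000, size=(150, 2))).query(small)[0].mean()
print(f"observed {obs:.1f} nm vs random control {ctrl:.1f} nm")
```

An observed mean well below the randomized control is the signature of co-localization.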
Energy Efficiency in Public Buildings through Context-Aware Social Computing
García, Óscar; Alonso, Ricardo S.; Prieto, Javier; Corchado, Juan M.
2017-01-01
The challenge of promoting behavioral changes in users that leads to energy savings in public buildings has become a complex task requiring the involvement of multiple technologies. Wireless sensor networks have a great potential for the development of tools, such as serious games, that encourage acquiring good energy and healthy habits among users in the workplace. This paper presents the development of a serious game using CAFCLA, a framework that allows for integrating multiple technologies, which provide both context-awareness and social computing. Game development has shown that the data provided by sensor networks encourage users to reduce energy consumption in their workplace and that social interactions and competitiveness allow for accelerating the achievement of good results and behavioral changes that favor energy savings. PMID:28398237
Multiscale mechanobiology: computational models for integrating molecules to multicellular systems
Mak, Michael; Kim, Taeyoon
2015-01-01
Mechanical signals exist throughout the biological landscape. Across all scales, these signals, in the form of force, stiffness, and deformations, are generated and processed, resulting in an active mechanobiological circuit that controls many fundamental aspects of life, from protein unfolding and cytoskeletal remodeling to collective cell motions. The multiple scales and complex feedback involved present a challenge for fully understanding the nature of this circuit, particularly in development and disease in which it has been implicated. Computational models that accurately predict and are based on experimental data enable a means to integrate basic principles and explore fine details of mechanosensing and mechanotransduction in and across all levels of biological systems. Here we review recent advances in these models along with supporting and emerging experimental findings. PMID:26019013
NASA Technical Reports Server (NTRS)
Lewis, Clayton; Wilde, Nick
1989-01-01
Space construction will require heavy investment in the development of a wide variety of user interfaces for the computer-based tools that will be involved at every stage of construction operations. Using today's technology, user interface development is very expensive for two reasons: (1) specialized and scarce programming skills are required to implement the necessary graphical representations and complex control regimes for high-quality interfaces; (2) iteration on prototypes is required to meet user and task requirements, since these are difficult to anticipate with current (and foreseeable) design knowledge. We are attacking this problem by building a user interface development tool based on extensions to the spreadsheet model of computation. The tool provides high-level support for graphical user interfaces and permits dynamic modification of interfaces, without requiring conventional programming concepts and skills.
Computational Modeling of Morphogenesis Regulated by Mechanical Feedback
Ramasubramanian, Ashok; Taber, Larry A.
2008-01-01
Mechanical forces cause changes in form during embryogenesis and likely play a role in regulating these changes. This paper explores the idea that changes in homeostatic tissue stress (target stress), possibly modulated by genes, drive some morphogenetic processes. Computational models are presented to illustrate how regional variations in target stress can cause a range of complex behaviors involving the bending of epithelia. These models include growth and cytoskeletal contraction regulated by stress-based mechanical feedback. All simulations were carried out using the commercial finite element code ABAQUS, with growth and contraction included by modifying the zero-stress state in the material constitutive relations. Results presented for bending of bilayered beams and invagination of cylindrical and spherical shells provide insight into some of the mechanical aspects that must be considered in studying morphogenetic mechanisms. PMID:17318485
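The feedback law at the center of these simulations can be written, in its simplest form, as a growth rate driven by the deviation of stress from its target,

$$ \dot{g} = k\,(\sigma - \sigma^{*}), $$

where g is the growth stretch entering the zero-stress state, σ the current tissue stress, σ* the (possibly genetically modulated) target stress, and k a rate constant; regional variation of σ* is what produces the bending and invagination behaviors in the models.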
Quantum speedup in solving the maximal-clique problem
NASA Astrophysics Data System (ADS)
Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang
2018-03-01
The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to O(√(2^n)) and O(n^2), respectively. With respect to oracle-related quantum algorithms for NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.
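The quadratic speedup has the familiar Grover form: with M marked assignments among the N = 2^n candidate vertex subsets, the number of amplitude-amplification iterations scales as

$$ r \approx \frac{\pi}{4}\sqrt{\frac{2^{n}}{M}}, $$

which is the origin of the O(√(2^n)) time complexity quoted above.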
Efficient searching in meshfree methods
NASA Astrophysics Data System (ADS)
Olliff, James; Alford, Brad; Simkins, Daniel C.
2018-04-01
Meshfree methods such as the Reproducing Kernel Particle Method and the Element Free Galerkin method have proven to be excellent choices for problems involving complex geometry, evolving topology, and large deformation, owing to their ability to model the problem domain without the constraints imposed on Finite Element Method (FEM) meshes. However, meshfree methods have an added computational cost over FEM that comes from at least two sources: increased cost of shape function evaluation and the determination of adjacency or connectivity. The focus of this paper is to formally address the types of adjacency information that arise in various uses of meshfree methods; to discuss available techniques for computing the various adjacency graphs; to propose a new search algorithm and data structure; and finally to compare the memory and run-time performance of the methods.
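For the adjacency determination, the standard alternative to an O(N^2) pairwise test is a spatial tree. A minimal sketch of support-domain queries with a k-d tree (a uniform support radius is assumed for simplicity):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
particles = rng.random((1000, 2))        # meshfree particle positions
support = 0.05                           # uniform support radius

tree = cKDTree(particles)
eval_pts = rng.random((5, 2))            # e.g. quadrature points
adjacency = tree.query_ball_point(eval_pts, r=support)
for x, nbrs in zip(eval_pts, adjacency):
    print(np.round(x, 2), "->", len(nbrs), "supporting particles")
```

Each returned index list is exactly the set of shape functions that are nonzero at that evaluation point.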
Self-Directed Cooperative Planetary Rovers
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo; Morris, Robert (Technical Monitor)
2003-01-01
The project is concerned with the development of decision-theoretic techniques to optimize the scientific return of planetary rovers. Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We have developed a comprehensive solution to this problem that involves high-level tools to describe a mission; a compiler that maps a mission description and additional probabilistic models of the components of the rover into a Markov decision problem; and algorithms for solving the rover control problem that are sensitive to the limited computational resources and high level of uncertainty in this domain.
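Once the compiler has produced a Markov decision problem, solving it reduces to standard dynamic programming. A toy value-iteration sketch over battery levels (states, actions, rewards, and the discount are all invented stand-ins for what such a compiler would emit):

```python
import numpy as np

n_states, gamma = 5, 0.95                 # battery levels 0..4, discount
actions = {                               # action -> (transition, reward)
    "experiment": (lambda s: max(s - 1, 0), lambda s: 10.0 if s > 0 else -5.0),
    "recharge":   (lambda s: min(s + 1, n_states - 1), lambda s: 0.0),
}

V = np.zeros(n_states)
for _ in range(200):                      # value iteration to (near) convergence
    V = np.array([max(r(s) + gamma * V[t(s)] for t, r in actions.values())
                  for s in range(n_states)])

policy = [max(actions, key=lambda a: actions[a][1](s) + gamma * V[actions[a][0](s)])
          for s in range(n_states)]
print(policy)                             # recharge when empty, experiment otherwise
```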