Sample records for theoretical analysis computer

  1. Theoretical, Experimental, and Computational Evaluation of Several Vane-Type Slow-Wave Structures

    NASA Technical Reports Server (NTRS)

    Wallett, Thomas M.; Qureshi, A. Haq

    1994-01-01

    Several types of periodic vane slow-wave structures were fabricated. The dispersion characteristics were found by theoretical analysis, experimental testing, and computer simulation using the MAFIA code. The computer-generated characteristics agreed with the experimental characteristics to within approximately 2 percent for all structures. The theoretical characteristics, however, deviated increasingly as the width-to-height ratio became smaller. Interaction impedances were also computed based on the experimental and computer-generated resonance frequency shifts due to the introduction of a perturbing dielectric rod.

  2. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    NASA Astrophysics Data System (ADS)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011), which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields, in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations, as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facilities Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA, and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for their enthusiastic participation in all its activities, which were, ultimately, the key factors in the success of the workshop. Further information on ACAT 2011 can be found at http://acat2011.cern.ch. Dr Liliana Teodorescu, Brunel University. The PDF also contains details of the workshop's committees and sponsors.

  3. Computer program for assessing the theoretical performance of a three dimensional inlet

    NASA Technical Reports Server (NTRS)

    Agnone, A. M.; Kung, F.

    1972-01-01

    A computer program for determining the theoretical performance of a three dimensional inlet is presented. An analysis for determining the capture area, ram force, spillage force, and surface pressure force is presented, along with the necessary computer program. A sample calculation is also included.

  4. Application of theoretical methods to increase succinate production in engineered strains.

    PubMed

    Valderrama-Gomez, M A; Kreitmayer, D; Wolf, S; Marin-Sanguino, A; Kremling, A

    2017-04-01

    Computational methods have enabled the discovery of non-intuitive strategies to enhance the production of a variety of target molecules. In the case of succinate production, reviews covering the topic have not yet analyzed the impact and future potential that such methods may have. In this work, we review the application of computational methods to the production of succinic acid. We found that while a total of 26 theoretical studies were published between 2002 and 2016, only 10 studies reported the successful experimental implementation of any kind of theoretical knowledge. None of the experimental studies reported an exact application of the computational predictions. However, the combination of computational analysis with complementary strategies, such as directed evolution and comparative genome analysis, serves as a proof of concept and demonstrates that successful metabolic engineering can be guided by rational computational methods.

  5. Can Computer-Mediated Interventions Change Theoretical Mediators of Safer Sex? A Meta-Analysis

    ERIC Educational Resources Information Center

    Noar, Seth M.; Pierce, Larson B.; Black, Hulda G.

    2010-01-01

    The purpose of this study was to conduct a meta-analysis of computer-mediated interventions (CMIs) aimed at changing theoretical mediators of safer sex. Meta-analytic aggregation of effect sizes from k = 20 studies indicated that CMIs significantly improved HIV/AIDS knowledge, d = 0.276, p < 0.001, k = 15, N = 6,625; sexual/condom…
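
    The record above reports a fixed-effect style aggregation of standardized mean differences. As a minimal sketch of how such pooling is typically computed (inverse-variance weighting of Cohen's d; the study values below are invented for illustration, not taken from this meta-analysis):

    ```python
    import numpy as np

    # Hypothetical per-study effect sizes (Cohen's d) and total sample sizes;
    # illustrative values only, not the studies from the meta-analysis above.
    d = np.array([0.15, 0.30, 0.42, 0.25, 0.10])
    n = np.array([200, 450, 120, 800, 300])

    # Approximate sampling variance of d for two equal arms of n/2 each.
    var_d = 4.0 / n + d**2 / (2.0 * n)

    # Fixed-effect (inverse-variance) pooling.
    w = 1.0 / var_d
    d_bar = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    lo, hi = d_bar - 1.96 * se, d_bar + 1.96 * se
    print(f"pooled d = {d_bar:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
    ```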

  6. Theoretical Definition of Instructor Role in Computer-Managed Instruction.

    ERIC Educational Resources Information Center

    McCombs, Barbara L.; Dobrovolny, Jacqueline L.

    This report describes the results of a theoretical analysis of the ideal role functions of the Computer Managed Instruction (CMI) instructor. Concepts relevant to instructor behavior are synthesized from both cognitive and operant learning theory perspectives, and the roles allocated to instructors by seven large-scale operational CMI systems are…

  7. Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing

    DTIC Science & Technology

    2008-01-01

    …complements of one another, and the DNA duplex formed is a Watson-Crick (WC) duplex. However, there are many instances when the formation of non-WC…that the user's requirements for probe selection are met based on the Watson-Crick probe locality within a target. The second type, called… (AFRL-RI-RS-TR-2007-288, Final Technical Report, January 2008)

  8. Computers in the University's Curriculum: The Theoretical Arguments for Including Computers in Telecommunications.

    ERIC Educational Resources Information Center

    Eastman, Susan T.

    1984-01-01

    Argues that the telecommunications field has specific computer applications; therefore courses on how to use computer programs for audience analysis, station accounting, newswriting, etc., should be included in the telecommunications curriculum. (PD)

  9. Optimization Techniques for Analysis of Biological and Social Networks

    DTIC Science & Technology

    2012-03-28

    …analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: implement the proposed algorithms, test and fine…alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters…systematic fashion under a unifying theoretical and algorithmic framework. Keywords: Optimization, Complex Networks, Social Network Analysis, Computational…

  10. An Algebra-Based Introductory Computational Neuroscience Course with Lab.

    PubMed

    Fink, Christian G

    2017-01-01

    A course in computational neuroscience has been developed at Ohio Wesleyan University which requires no previous experience with calculus or computer programming, and which exposes students to theoretical models of neural information processing and techniques for analyzing neural data. The exploration of theoretical models of neural processes is conducted in the classroom portion of the course, while data analysis techniques are covered in lab. Students learn to program in MATLAB and are offered the opportunity to conclude the course with a final project in which they explore a topic of their choice within computational neuroscience. Results from a questionnaire administered at the beginning and end of the course indicate significant gains in student facility with core concepts in computational neuroscience, as well as with analysis techniques applied to neural data.
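
    The classroom portion of such a course centers on simple theoretical models of neural information processing. As an illustrative sketch (in Python rather than the MATLAB used in the course, and with made-up parameter values), a leaky integrate-and-fire neuron is about the simplest such model:

    ```python
    import numpy as np

    # Minimal leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau.
    # All parameter values are illustrative only.
    tau, R = 10.0, 1.0                                # time constant (ms), resistance (MOhm)
    V_rest, V_thresh, V_reset = -65.0, -50.0, -65.0   # membrane potentials (mV)
    dt, T = 0.1, 200.0                                # time step and duration (ms)

    t = np.arange(0.0, T, dt)
    I = 20.0 * (t > 50.0)         # step current of 20 nA injected after 50 ms
    V = np.full(t.size, V_rest)
    spikes = []

    for k in range(1, t.size):
        dV = (-(V[k-1] - V_rest) + R * I[k-1]) / tau
        V[k] = V[k-1] + dt * dV
        if V[k] >= V_thresh:      # threshold crossing -> spike, then reset
            spikes.append(t[k])
            V[k] = V_reset

    print(f"{len(spikes)} spikes; first at t = {spikes[0]:.1f} ms" if spikes else "no spikes")
    ```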

  11. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    NASA Astrophysics Data System (ADS)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open-source software, knowledge sharing and scientific collaboration stimulated further thought on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all of the workshop's activities. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF.

  12. Theoretical, Experimental, and Computational Evaluation of Disk-Loaded Circular Wave Guides

    NASA Technical Reports Server (NTRS)

    Wallett, Thomas M.; Qureshi, A. Haq

    1994-01-01

    A disk-loaded circular wave guide structure and test fixture were fabricated. The dispersion characteristics were found by theoretical analysis, experimental testing, and computer simulation using the codes ARGUS and SOS. Interaction impedances were computed based on the corresponding dispersion characteristics. Finally, an equivalent circuit model for one period of the structure was chosen using equivalent circuit models for cylindrical wave guides of different radii. Optimum values for the discrete capacitors and inductors describing discontinuities between cylindrical wave guides were found using the computer code TOUCHSTONE.

  13. Main rotor free wake geometry effects on blade air loads and response for helicopters in steady maneuvers. Volume 1: Theoretical formulation and analysis of results

    NASA Technical Reports Server (NTRS)

    Sadler, S. G.

    1972-01-01

    A mathematical model and computer program were implemented to study the main rotor free wake geometry effects on helicopter rotor blade air loads and response in steady maneuvers. The theoretical formulation and analysis of results are presented.

  14. Renormalization group analysis of anisotropic diffusion in turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Barton, J. Michael

    1991-01-01

    The renormalization group is applied to compute anisotropic corrections to the scalar eddy diffusivity representation of turbulent diffusion of a passive scalar. The corrections are linear in the mean velocity gradients. All model constants are computed theoretically. A form of the theory valid at arbitrary Reynolds number is derived. The theory applies only when convection of the velocity-scalar correlation can be neglected. A ratio of diffusivity components, found experimentally to have a nearly constant value in a variety of shear flows, is computed theoretically for flows in a certain state of equilibrium. The theoretical value is well within the fairly narrow range of experimentally observed values. Theoretical predictions of this diffusivity ratio are also compared with data from experiments and direct numerical simulations of homogeneous shear flows with constant velocity and scalar gradients.

  15. Study of ICRF wave propagation and plasma coupling efficiency in a linear magnetic mirror device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, S.Y.

    1991-07-01

    Ion Cyclotron Range of Frequency (ICRF) wave propagation in an inhomogeneous axial magnetic field in a cylindrical plasma-vacuum system has historically been inadequately modelled. Previous works either sacrifice the cylindrical geometry in favor of a simpler slab geometry, concentrate on the resonance region, use a single mode to represent the entire field structure, or examine only radial propagation. This thesis performs both analytical and computational studies to model the ICRF wave-plasma coupling and propagation problem, and experimental analysis is conducted to compare experimental results with theoretical predictions. The theoretical studies simulate the propagation of ICRF waves in an axially inhomogeneous magnetic field and in cylindrical geometry. Two theoretical analyses are undertaken: an analytical study and a computational study. The analytical study treats the inhomogeneous magnetic field by transforming the (r, z) coordinates into another coordinate system (ρ, ξ) that allows the solution of the fields with much simpler boundaries. The plasma fields are then Fourier transformed into two coupled convolution-integral equations, which are differenced and solved for both the perpendicular mode number α and the complete EM fields. The computational study involves a multiple-eigenmode computational analysis of the fields that exist within the plasma-vacuum system. The inhomogeneous axial field is treated by dividing the geometry into a series of transverse axial slices and using a constant dielectric tensor in each individual slice. The slices are then connected by longitudinal boundary conditions.
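
    The slice-wise treatment in the computational study has a familiar one-dimensional analogue. As a hedged sketch (a scalar-wave stand-in for the full tensor problem, with an invented index profile and dimensions), each slice gets constant material properties and field/derivative continuity connects neighbouring slices:

    ```python
    import numpy as np

    def transfer_matrix(n_layers, d_layers, k0):
        """1D transfer-matrix sketch of the slice approach described above:
        each axial slice gets a constant refractive index (standing in for the
        constant dielectric tensor per slice), and continuity of the field and
        its derivative connects adjacent slices. Scalar-wave version only."""
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            k = k0 * n
            # Propagator for E'' + k^2 E = 0 across a slice of thickness d,
            # acting on the state vector (E, E').
            layer = np.array([[np.cos(k * d), np.sin(k * d) / k],
                              [-k * np.sin(k * d), np.cos(k * d)]])
            M = layer @ M
        return M

    # Smoothly varying medium approximated by 50 thin slices of 1 cm each.
    d_slices = np.full(50, 0.01)
    n_slices = 1.0 + 0.5 * np.linspace(0, 1, 50)**2   # inhomogeneous profile
    M = transfer_matrix(n_slices, d_slices, k0=2 * np.pi / 0.05)
    print("field transfer matrix across the column:\n", M.round(3))
    ```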

  16. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    NASA Astrophysics Data System (ADS)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
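
    For readers unfamiliar with the CHO, the statistic itself has a standard sample form: channelize each image, estimate the channel covariance, and apply the Hotelling template. A minimal sketch with synthetic images and a random placeholder channel matrix (real CHOs use structured channels such as difference-of-Gaussians; nothing below is from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 64x64 images, two classes (signal absent / present).
    npix, nchan, nimg = 64 * 64, 8, 200
    U = rng.standard_normal((npix, nchan))          # placeholder channel matrix
    g0 = rng.standard_normal((nimg, npix))          # class 0: signal absent
    g1 = rng.standard_normal((nimg, npix)) + 0.05   # class 1: weak uniform signal

    # Channelize: v = U^T g reduces each image to a few channel outputs.
    v0, v1 = g0 @ U, g1 @ U

    # Hotelling template w = K^{-1} (mean1 - mean0), pooled channel covariance K.
    K = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    w = np.linalg.solve(K, dv)

    # Detectability from the two samples of the decision variable t = w^T v.
    t0, t1 = v0 @ w, v1 @ w
    d_a = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var(ddof=1) + t1.var(ddof=1)))
    print(f"CHO detectability d_a ~ {d_a:.2f}")
    ```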

  17. Computational and theoretical analysis of free surface flow in a thin liquid film under zero and normal gravity

    NASA Technical Reports Server (NTRS)

    Faghri, Amir; Swanson, Theodore D.

    1988-01-01

    The results of a numerical computation and theoretical analysis are presented for the flow of a thin liquid film in the presence and absence of a gravitational body force. Five different flow systems were used. Also presented are the governing equations and boundary conditions for the situation of a thin liquid emanating from a pressure vessel; traveling along a horizontal plate with a constant initial height and uniform initial velocity; and traveling radially along a horizontal disk with a constant initial height and uniform initial velocity.

  18. Nastran level 16 theoretical manual updates for aeroelastic analysis of bladed discs

    NASA Technical Reports Server (NTRS)

    Elchuri, V.; Smith, G. C. C.

    1980-01-01

    A computer program based on state-of-the-art compressor and structural technologies applied to bladed shrouded discs was developed and made operational in NASTRAN Level 16. Aeroelastic analyses (modes and flutter) and theoretical manual updates are included.

  19. The 'TTIME' Package: Performance Evaluation in a Cluster Computing Environment

    NASA Astrophysics Data System (ADS)

    Howe, Marico; Berleant, Daniel; Everett, Albert

    2011-06-01

    The objective of translating developmental event time across mammalian species is to gain an understanding of the timing of human developmental events based on the known timing of those events in animals. The potential benefits include improvements to diagnostic and intervention capabilities. The CRAN 'ttime' package provides the functionality to infer unknown event timings and investigate phylogenetic proximity utilizing hierarchical clustering of both known and predicted event timings. The original generic mammalian model included nine eutherian mammals: Felis domestica (cat), Mustela putorius furo (ferret), Mesocricetus auratus (hamster), Macaca mulatta (monkey), Homo sapiens (human), Mus musculus (mouse), Oryctolagus cuniculus (rabbit), Rattus norvegicus (rat), and Acomys cahirinus (spiny mouse). However, the data for this model are expected to grow as more data about developmental events are identified and incorporated into the analysis. Evaluating the performance of the 'ttime' package in a cluster computing environment against a comparative analysis in a serial computing environment provides an important computational performance assessment. A theoretical analysis is the first stage of a process in which the second stage, if justified by the theoretical analysis, is to investigate an actual implementation of the 'ttime' package in a cluster computing environment and to understand the parallelization process that underlies implementation.
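
    The clustering step described above is conceptually simple. A hedged sketch (in Python with SciPy rather than the R package itself; the species-by-event timing matrix below is invented for illustration, not the package's curated data):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # Hypothetical developmental-event timings (rows: species, cols: events).
    species = ["cat", "ferret", "hamster", "monkey", "human", "mouse", "rabbit", "rat"]
    timings = np.array([
        [24, 30, 41, 55], [23, 29, 40, 54], [12, 15, 21, 28], [38, 55, 80, 120],
        [40, 60, 90, 140], [10, 13, 18, 25], [15, 19, 27, 36], [11, 14, 20, 27],
    ], dtype=float)

    # Hierarchical clustering on pairwise distances between timing profiles,
    # analogous in spirit to the phylogenetic-proximity analysis described above.
    Z = linkage(pdist(timings), method="average")
    labels = fcluster(Z, t=3, criterion="maxclust")
    for sp, lab in zip(species, labels):
        print(f"{sp:8s} -> cluster {lab}")
    ```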

  20. Computer Series, 13.

    ERIC Educational Resources Information Center

    Moore, John W., Ed.

    1981-01-01

    Provides short descriptions of chemists' applications of computers in instruction: an interactive instructional program for Instrumental-Qualitative Organic Analysis; question-and-answer exercises in organic chemistry; computerized organic nomenclature drills; integration of theoretical and descriptive materials; acid-base titration simulation;…

  1. Analysis of the DFP/AFCS Systems for Compensating Gravity Distortions on the 70-Meter Antenna

    NASA Technical Reports Server (NTRS)

    Imbriale, William A.; Hoppe, Daniel J.; Rochblatt, David

    2000-01-01

    This paper presents the theoretical computations showing the expected performances for both systems. The basic analysis tool is a Physical Optics reflector analysis code that was ported to a parallel computer for faster execution times. There are several steps involved in computing the RF performance of the various systems. 1. A model of the RF distortions of the main reflector is required. This model is based upon measured holography maps of the 70-meter antenna obtained at three elevation angles. The holography maps are then processed (using an appropriate gravity mechanical model of the dish) to provide surface distortion maps at all elevation angles. 2. From the surface distortion maps, ray optics is used to determine the theoretical shape of the DFP that will exactly phase-compensate the distortions. 3. From the theoretical shape and a NASTRAN mechanical model of the plate, the actuator positions that generate a surface providing the best RMS fit to the theoretical model are selected. Using the actuator positions and the NASTRAN model provides an accurate description of the actual mirror shape. 4. Starting from the mechanical drawings of the feed, a computed RF feed pattern is generated. This pattern is expanded into a set of spherical wave modes so that a complete near-field analysis of the reflector system can be obtained. 5. For the array feed, the excitation coefficients that provide the maximum gain are computed using a phase conjugate technique. The basic experimental geometry consisted of a dual-shaped 70-meter antenna system: a refocusing ellipse, a DFP and an array feed system. To provide physical insight into the systems' performance, focal plane field plots are presented at several elevations. Curves of predicted performance are shown for the DFP system, monopulse tracking system, AFCS and combined DFP/AFCS system. The calculated results show that the combined DFP/AFCS system is capable of recovering the majority of the gain lost due to gravity distortion.
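
    Step 5's phase-conjugate technique follows directly from the Cauchy-Schwarz inequality: excitation weights proportional to the complex conjugate of the focal-plane field maximize the coherent sum for fixed feed power. A minimal sketch with synthetic field samples (not the paper's physical-optics fields):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic complex focal-plane field samples at 7 array-feed elements,
    # standing in for the fields from the physical-optics analysis.
    E_focal = rng.standard_normal(7) + 1j * rng.standard_normal(7)

    # Phase-conjugate excitation: w proportional to conj(E) maximizes |w . E|
    # under fixed feed power (Cauchy-Schwarz), i.e. maximum-gain combining.
    w = np.conj(E_focal) / np.linalg.norm(E_focal)

    combined = w @ E_focal
    print(abs(combined), np.linalg.norm(E_focal))   # equal: the bound is attained
    ```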

  2. Research in Computational Astrobiology

    NASA Technical Reports Server (NTRS)

    Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.

    2003-01-01

    We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple time step algorithms, statistical methods for analysis of astrophysical data via optimal partitioning methods, electronic structure calculations on water-nucleic acid complexes, incorporation of structural information into genomic sequence analysis methods, and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.

  3. EnviroLand: A Simple Computer Program for Quantitative Stream Assessment.

    ERIC Educational Resources Information Center

    Dunnivant, Frank; Danowski, Dan; Timmens-Haroldson, Alice; Newman, Meredith

    2002-01-01

    Introduces the EnviroLand computer program, which features lab simulations of theoretical calculations for quantitative analysis and environmental chemistry, along with fate and transport models. Uses the program to demonstrate the nature of linear and nonlinear equations. (Author/YDS)

  4. The Real-World Connection.

    ERIC Educational Resources Information Center

    Estes, Charles R.

    1994-01-01

    Discusses theoretical versus applied science and the use of the scientific method for analysis of social issues. Topics addressed include the use of simulation and modeling; the growth in computer power, including nanotechnology; distributed computing; self-evolving programs; spiritual matters; human engineering, i.e., molding individuals;…

  5. DEVELOPMENT OF COMPUTATIONAL TOOLS FOR OPTIMAL IDENTIFICATION OF BIOLOGICAL NETWORKS

    EPA Science Inventory

    Following the theoretical analysis and computer simulations, the next step for the development of SNIP will be a proof-of-principle laboratory application. Specifically, we have obtained a synthetic transcriptional cascade (harbored in Escherichia coli...

  6. Flutter: A finite element program for aerodynamic instability analysis of general shells of revolution with thermal prestress

    NASA Technical Reports Server (NTRS)

    Fallon, D. J.; Thornton, E. A.

    1983-01-01

    Documentation for the computer program FLUTTER is presented. The theory of aerodynamic instability with thermal prestress is discussed. Theoretical aspects of the finite element matrices required in the aerodynamic instability analysis are also discussed. General organization of the computer program is explained, and instructions are then presented for the execution of the program.

  7. Investigating the Potential of Computer Environments for the Teaching and Learning of Functions: A Double Analysis from Two Research Traditions

    ERIC Educational Resources Information Center

    Lagrange, Jean-Baptiste; Psycharis, Giorgos

    2014-01-01

    The general goal of this paper is to explore the potential of computer environments for the teaching and learning of functions. To address this, different theoretical frameworks and corresponding research traditions are available. In this study, we aim to network different frameworks by following a "double analysis" method to analyse two…

  8. Research in progress in applied mathematics, numerical analysis, fluid mechanics, and computer science

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.

  9. Surface electrical properties experiment, Part 3

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A complete unified discussion of the electromagnetic response of a plane stratified structure is reported. A detailed and comprehensive analysis of the theoretical parts of the electromagnetic problem is given. The numerical problem of computing the electromagnetic field strengths is discussed. It is shown that the analysis of the conductive media is not very far removed from the theoretical analysis, and the numerical difficulties are not as acute as for the low-loss problem. For Vol. 1, see N75-15570; for Vol. 2, see N75-15571.

  10. ICASE

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in the areas of (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving Langley facilities and scientists; and (4) computer science.

  11. From atomistic interfaces to dendritic patterns

    NASA Astrophysics Data System (ADS)

    Galenko, P. K.; Alexandrov, D. V.

    2018-01-01

    Transport processes around phase interfaces, together with thermodynamic properties and kinetic phenomena, control the formation of dendritic patterns. Using the thermodynamic and kinetic data of phase interfaces obtained on the atomic scale, one can analyse the formation of a single dendrite and the growth of a dendritic ensemble. This is the result of recent progress in theoretical methods and in computational algorithms run on powerful computer clusters. Great benefits can be attained from the development of micro-, meso- and macro-levels of analysis when investigating the dynamics of interfaces, interpreting experimental data and designing the macrostructure of samples. The review and research articles in this theme issue cover the spectrum of scales (from nano- to macro-length scales) in order to exhibit recently developing trends in the theoretical analysis and computational modelling of dendrite pattern formation. Atomistic modelling, the flow effect on interface dynamics, the transition from diffusion-limited to thermally controlled growth existing at a considerable driving force, two-phase (mushy) layer formation, the growth of eutectic dendrites, the formation of a secondary dendritic network due to coalescence, computational methods, including boundary integral and phase-field methods, and experimental tests for theoretical models: all these themes are highlighted in the present issue. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.

  12. Aerodynamic design and analysis system for supersonic aircraft. Part 1: General description and theoretical development

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1975-01-01

    An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.

  13. In Praise of Numerical Computation

    NASA Astrophysics Data System (ADS)

    Yap, Chee K.

    Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well-known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. By various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithms' design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name "numerical computational geometry" for such activities.
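
    The three techniques named in the abstract compose naturally. A toy instance (not code from the talk): subdivision to isolate the real roots of a continuous function via sign changes, then iteration (bisection) to approximate each root to a prescribed accuracy.

    ```python
    import numpy as np

    def isolate(f, a, b, n=1 << 12):
        """Subdivide [a, b] into n cells and keep cells where f changes sign."""
        x = np.linspace(a, b, n + 1)
        fx = f(x)
        idx = np.nonzero(np.sign(fx[:-1]) * np.sign(fx[1:]) < 0)[0]
        return [(x[i], x[i + 1]) for i in idx]

    def bisect(f, lo, hi, eps=1e-10):
        """Refine an isolating interval by bisection (iteration + approximation)."""
        while hi - lo > eps:
            mid = 0.5 * (lo + hi)
            if np.sign(f(lo)) * np.sign(f(mid)) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    f = lambda x: x**3 - 2*x**2 - x + 2      # roots at -1, 1, 2
    print([round(bisect(f, lo, hi), 6) for lo, hi in isolate(f, -10, 10)])
    ```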

  14. CSM research: Methods and application studies

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    1989-01-01

    Computational mechanics is that discipline of applied science and engineering devoted to the study of physical phenomena by means of computational methods based on mathematical modeling and simulation, utilizing digital computers. The discipline combines theoretical and applied mechanics, approximation theory, numerical analysis, and computer science. Computational mechanics has had a major impact on engineering analysis and design. When applied to structural mechanics, the discipline is referred to herein as computational structural mechanics. Complex structures being considered by NASA for the 1990's include composite primary aircraft structures and the space station. These structures will be much more difficult to analyze than today's structures and necessitate a major upgrade in computerized structural analysis technology. NASA has initiated a research activity in structural analysis called Computational Structural Mechanics (CSM). The broad objective of the CSM activity is to develop advanced structural analysis technology that will exploit modern and emerging computers, such as those with vector and/or parallel processing capabilities. Here, the current research directions for the Methods and Application Studies Team of the Langley CSM activity are described.

  15. Analytical and scale model research aimed at improved hangglider design

    NASA Technical Reports Server (NTRS)

    Kroo, I.; Chang, L. S.

    1979-01-01

    Research consisted of a theoretical analysis which attempts to predict aerodynamic characteristics using lifting surface theory and finite-element structural analysis as well as an experimental investigation using 1/5 scale elastically similar models in the NASA Ames 2m x 3m (7' x 10') wind tunnel. Experimental data were compared with theoretical results in the development of a computer program which may be used in the design and evaluation of ultralight gliders.

  16. Python for Information Theoretic Analysis of Neural Data

    PubMed Central

    Ince, Robin A. A.; Petersen, Rasmus S.; Swan, Daniel C.; Panzeri, Stefano

    2008-01-01

    Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources. PMID:19242557
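
    As a flavor of the quantities involved, here is a minimal sketch of a plug-in mutual-information estimate with the Miller-Madow bias correction, one of the simplest remedies for the limited-sampling bias the authors discuss (the toy count table is invented for illustration; the paper's toolbox implements far more sophisticated corrections):

    ```python
    import numpy as np

    def entropy_mm(counts):
        """Plug-in entropy (bits) with the Miller-Madow bias correction,
        which adds (m - 1) / (2 N ln 2) for m occupied bins and N samples."""
        counts = np.asarray(counts, dtype=float)
        n = counts.sum()
        p = counts[counts > 0] / n
        h_plugin = -np.sum(p * np.log2(p))
        m = np.count_nonzero(counts)
        return h_plugin + (m - 1) / (2.0 * n * np.log(2.0))

    def mutual_information(joint):
        """I(S;R) = H(S) + H(R) - H(S,R) from a joint count table."""
        joint = np.asarray(joint, dtype=float)
        return (entropy_mm(joint.sum(axis=1)) + entropy_mm(joint.sum(axis=0))
                - entropy_mm(joint.ravel()))

    # Toy stimulus-response counts (rows: stimuli, cols: response bins).
    joint = np.array([[30, 5, 2], [4, 28, 6], [3, 7, 25]])
    print(f"I(S;R) ~ {mutual_information(joint):.3f} bits")
    ```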

  17. Preconditioner Circuit Analysis

    DTIC Science & Technology

    2011-09-01

    Matthew J. Nye, Naval Postgraduate School, Monterey, CA. …of the simulations and the theoretical computations. This thesis is organized into four chapters. The theoretical…

  18. Exact Solutions of Burnt-Bridge Models for Molecular Motor Transport

    NASA Astrophysics Data System (ADS)

    Morozov, Alexander; Pronina, Ekaterina; Kolomeisky, Anatoly; Artyomov, Maxim

    2007-03-01

    Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called "bridges"), is investigated theoretically by analyzing discrete-state stochastic "burnt-bridge" models. When an unbiased diffusing particle crosses a bridge, the link can be destroyed ("burned") with a probability p, creating a biased directed motion for the particle. It is shown that for a burning probability p = 1 the system can be mapped into a one-dimensional single-particle hopping model along a periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p < 1 a new theoretical method is developed, and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics, periodic and random distributions of bridges, and different burning dynamics are analyzed and compared. Theoretical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied to the analysis of experiments on collagenase motor proteins.
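
    A minimal Monte Carlo sketch of a discrete-time version of the model (single trajectory, bridges every N sites; the convention that burned bridges block crossing in both directions is an assumption of this sketch, and the drift direction is set by whichever bridge burns first):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def burnt_bridge_speed(p, N=10, steps=200_000):
        """Unbiased hopping on a 1D lattice with bridges every N sites.
        Bridges burn with probability p when crossed; burned bridges cannot
        be re-crossed (assumption of this sketch), which ratchets the walk."""
        x, burned = 0, set()
        for _ in range(steps):
            step = rng.choice((-1, 1))            # unbiased hop attempt
            edge = x if step == 1 else x - 1      # bond the hop would cross
            is_bridge = edge % N == 0
            if is_bridge and edge in burned:
                continue                          # burned bridge blocks the hop
            x += step
            if is_bridge and rng.random() < p:
                burned.add(edge)                  # bridge destroyed after crossing
        return x / steps

    for p in (1.0, 0.5, 0.1):
        v = burnt_bridge_speed(p)
        print(f"p = {p:>3}: |v| ~ {abs(v):.3f} sites/step")
    ```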

  19. [Activities of the Department of Electrical Engineering, Howard University

    NASA Technical Reports Server (NTRS)

    Yalamanchili, Raj C.

    1997-01-01

    Theoretical derivations, computer analysis and test data are provided to demonstrate that the cavity model is a feasible one for analyzing thin-substrate, rectangular-patch microstrip antennas. Seven separate antennas were tested. Most of the antennas were designed to resonate at L-band frequencies (1-2 GHz). One antenna was designed to resonate at an S-band (2-4 GHz) frequency of 2.025 GHz. All dielectric substrates were made of Duroid, and were of varying thicknesses and relative dielectric constant values. Theoretical derivations to calculate radiated free-space electromagnetic fields and antenna input impedance were performed. MATHEMATICA 2.2 software was used to generate Smith Chart input impedance plots and normalized relative power radiation plots, and to perform other numerical manipulations. Network Analyzer tests were used to verify the data from the computer programming (such as input impedance and VSWR). Finally, tests were performed in an anechoic chamber to measure receive-mode polar power patterns in the E and H planes. Agreement between computer analysis and test data is presented. The antenna with the thickest substrate (ε_r = 2.33, 62 mils thick) showed the worst match to theoretical impedance data. This is anticipated because the cavity model generally loses accuracy when the dielectric substrate thickness exceeds 5% of the antenna's free-space wavelength. A method of reducing computer execution time for impedance calculations is also presented.

  20. A theoretical method for the analysis and design of axisymmetric bodies. [flow distribution and incompressible fluids

    NASA Technical Reports Server (NTRS)

    Beatty, T. D.

    1975-01-01

    A theoretical method is presented for the computation of the flow field about an axisymmetric body operating in a viscous, incompressible fluid. A potential flow method was used to determine the inviscid flow field and to yield the boundary conditions for the boundary layer solutions. Boundary layer effects, in the form of displacement thickness and empirically modeled separation streamlines, are accounted for in subsequent potential flow solutions. This procedure is repeated until the solutions converge. An empirical method was used to determine base drag, allowing configuration drag to be computed.

  1. A preliminary design study for a cosmic X-ray spectrometer

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The results are described of theoretical and experimental investigations aimed at the development of a curved crystal cosmic X-ray spectrometer to be used at the focal plane of the large orbiting X-ray telescope on the third High Energy Astronomical Observatory. The effort was concentrated on the development of spectrometer concepts and their evaluation by theoretical analysis, computer simulation, and laboratory testing with breadboard arrangements of crystals and detectors. In addition, a computer-controlled facility for precision testing and evaluation of crystals in air and vacuum was constructed. A summary of research objectives and results is included.

  2. Theoretical analysis of the rotational barrier of ethane.

    PubMed

    Mo, Yirong; Gao, Jiali

    2007-02-01

    The understanding of the ethane rotation barrier is fundamental for structural theory and the conformational analysis of organic molecules, and requires a consistent theoretical model to differentiate the steric and hyperconjugation effects. Due to recently renewed controversies over the barrier's origin, we developed a computational approach to probe the rotation barriers of ethane and its congeners in terms of steric repulsion, hyperconjugative interaction, and electronic and geometric relaxations. Our study reaffirmed that conventional steric repulsion overwhelmingly dominates the barrier.

  3. The Development and Testing of a Tool for Analysis of Computer-Mediated Conferencing Transcripts.

    ERIC Educational Resources Information Center

    Fahy, Patrick J.; Crawford, Gail; Ally, Mohamed; Cookson, Peter; Keller, Verna; Prosser, Frank

    2000-01-01

    The Zhu model for analyzing computer mediated communications was further developed by an Athabasca University (Alberta) distance education research team based on ease of use, reliability, validity, theoretical support, and cross-discipline utility. Five classification categories of the new model are vertical questioning, horizontal questioning,…

  4. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    NASA Astrophysics Data System (ADS)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014); this year the motto was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research, Data Analysis - Algorithms and Tools, and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret, who worked with the local contacts and made this conference possible, as well as to the program coordinator Federico Carminati and the conference chair Denis Perret-Gallix for their global supervision. Further information on ACAT 2014 can be found at http://www.particle.cz/acat2014

  5. Noise studies of communication systems using the SYSTID computer aided analysis program

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Dawson, C. T.

    1973-01-01

    SYSTID is a simple computer-aided design program for simulating data systems and communication links. The efficiency of the method was assessed by simulating a linear analog communication system to determine its noise performance and comparing the SYSTID result with the result arrived at by theoretical calculation. It is shown that the SYSTID program is readily applicable to the analysis of these types of systems.
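
    The comparison described lends itself to a compact sketch. A hedged stand-in (plain Python, not SYSTID itself): pass a tone plus white Gaussian noise through a simple lowpass filter and check the measured output SNR against the theoretical value.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Tone + white Gaussian noise through a 25-tap moving-average lowpass filter.
    fs, f0, T = 10_000.0, 20.0, 10.0        # sample rate, tone frequency, duration
    t = np.arange(0.0, T, 1.0 / fs)
    tone = np.sin(2 * np.pi * f0 * t)
    sigma = 0.5
    noise = sigma * rng.standard_normal(t.size)

    h = np.ones(25) / 25.0                  # moving-average lowpass filter
    y_tone = np.convolve(tone, h, mode="same")
    y_noise = np.convolve(noise, h, mode="same")

    # Theory: the tone (f0 well below cutoff) passes nearly unchanged, keeping
    # signal power ~0.5, while white-noise power is scaled by sum(h^2) = 1/25.
    snr_measured = y_tone.var() / y_noise.var()
    snr_theory = 0.5 / (sigma**2 * np.sum(h**2))
    print(f"measured SNR = {snr_measured:.1f}, theoretical SNR = {snr_theory:.1f}")
    ```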

  6. A computational system for aerodynamic design and analysis of supersonic aircraft. Part 1: General description and theoretical development

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1976-01-01

    An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Schematics of the program structure and the individual overlays and subroutines are described.

  7. Theoretical and Experimental Particle Velocity in Cold Spray

    NASA Astrophysics Data System (ADS)

    Champagne, Victor K.; Helfritch, Dennis J.; Dinavahi, Surya P. G.; Leyman, Phillip F.

    2011-03-01

    In an effort to corroborate the theoretical and experimental techniques used for cold spray particle velocity analysis, two theoretical methods and one experimental method were used to analyze the operation of a nozzle accelerating aluminum particles in nitrogen gas. Two-dimensional (2D) axisymmetric computations of the flow through the nozzle were performed using a Reynolds-averaged Navier-Stokes code in a computational fluid dynamics platform. One-dimensional, isentropic, gas-dynamic equations were solved for the same nozzle geometry and initial conditions. Finally, the velocities of particles exiting a nozzle of the same geometry, operated at the same initial conditions, were measured by a dual-slit velocimeter. Exit plume particle velocities as determined by the three methods compared reasonably well, and differences could be attributed to frictional and particle distribution effects.
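
    The particle-velocity side of the 1D treatment reduces to integrating a drag law along the nozzle. A crude sketch (constant gas properties standing in for the isentropic solution; every number below is illustrative, not from the paper):

    ```python
    import numpy as np

    # Forward-Euler integration of m dv/dt = 0.5 * rho_g * Cd * A * (v_g - v)^2
    # for a particle carried by the nozzle gas stream.
    rho_p, d_p = 2700.0, 20e-6              # particle density (kg/m^3), diameter (m)
    m = rho_p * np.pi * d_p**3 / 6.0        # particle mass
    A = np.pi * d_p**2 / 4.0                # frontal area
    rho_g, v_g, Cd = 0.5, 800.0, 0.5        # gas density, gas speed, drag coefficient

    L, dt = 0.15, 1e-7                      # nozzle length (m), time step (s)
    x, v = 0.0, 1.0                         # position and small initial speed
    while x < L:
        a = 0.5 * rho_g * Cd * A * (v_g - v) ** 2 / m
        v += a * dt
        x += v * dt
    print(f"particle exit velocity ~ {v:.0f} m/s")
    ```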

  8. Quantum chemical methods for the investigation of photoinitiated processes in biological systems: theory and applications.

    PubMed

    Dreuw, Andreas

    2006-11-13

    With the advent of modern computers and advances in the development of efficient quantum chemical computer codes, the meaningful computation of large molecular systems at a quantum mechanical level became feasible. Recent experimental effort to understand photoinitiated processes in biological systems, for instance photosynthesis or vision, at a molecular level also triggered theoretical investigations in this field. In this Minireview, standard quantum chemical methods are presented that are applicable and recently used for the calculation of excited states of photoinitiated processes in biological molecular systems. These methods comprise configuration interaction singles, the complete active space self-consistent field method, and time-dependent density functional theory and its variants. Semiempirical approaches are also covered. Their basic theoretical concepts and mathematical equations are briefly outlined, and their properties and limitations are discussed. Recent successful applications of the methods to photoinitiated processes in biological systems are described and theoretical tools for the analysis of excited states are presented.

  9. CPE--A New Perspective: The Impact of the Technology Revolution. Proceedings of the Computer Performance Evaluation Users Group Meeting (19th, San Francisco, California, October 25-28, 1983). Final Report. Reports on Computer Science and Technology.

    ERIC Educational Resources Information Center

    Mobray, Deborah, Ed.

    Papers on local area networks (LANs), modelling techniques, software improvement, capacity planning, software engineering, microcomputers and end user computing, cost accounting and chargeback, configuration and performance management, and benchmarking presented at this conference include: (1) "Theoretical Performance Analysis of Virtual…

  10. A tutorial on the use of ROC analysis for computer-aided diagnostic systems.

    PubMed

    Scheipers, Ulrich; Perrey, Christian; Siebers, Stefan; Hansen, Christian; Ermert, Helmut

    2005-07-01

    The application of the receiver operating characteristic (ROC) curve for computer-aided diagnostic systems is reviewed. A statistical framework is presented, and different methods of evaluating the classification performance of computer-aided diagnostic systems, and, in particular, systems for ultrasonic tissue characterization, are derived. Most classifiers that are used today are dependent on a separation threshold, which can be chosen freely in many cases. The separation threshold separates the range of output values of the classification system into different target groups, thus conducting the actual classification process. In the first part of this paper, threshold-specific performance measures, e.g., sensitivity and specificity, are presented. In the second part, a threshold-independent performance measure, the area under the ROC curve, is reviewed. Only the use of separation-threshold-independent performance measures provides classification results that are representative overall for computer-aided diagnostic systems. The following text was motivated by the lack of a complete and definite discussion of the underlying subject in available textbooks, references and publications. Most manuscripts published so far address the theme of performance evaluation using ROC analysis in a manner too general to be practical for everyday use in the development of computer-aided diagnostic systems. Nowadays, the user of computer-aided diagnostic systems typically handles huge amounts of numerical data, not always distributed normally. Many assumptions made in more or less theoretical works on ROC analysis are no longer valid for real-life data. The paper aims at closing the gap between theoretical works and real-life data. The review provides the interested scientist with the information needed to conduct ROC analysis and to integrate algorithms performing ROC analysis into classification systems while understanding the basic principles of classification.
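
    The threshold-independent measure reviewed here is easy to compute directly from classifier outputs: sweep the separation threshold over every observed score and integrate the resulting curve. A minimal sketch on synthetic, normally distributed scores (for this setup the theoretical AUC is Φ(1.2/√2) ≈ 0.80):

    ```python
    import numpy as np

    def roc_auc(scores, labels):
        """ROC by sweeping the separation threshold over all observed scores;
        AUC by trapezoidal integration of the (FPR, TPR) curve."""
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=bool)
        order = np.argsort(-scores)               # descending score order
        tpr = np.concatenate(([0.0], np.cumsum(labels[order]) / labels.sum()))
        fpr = np.concatenate(([0.0], np.cumsum(~labels[order]) / (~labels).sum()))
        auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
        return auc, fpr, tpr

    # Synthetic scores: diseased cases shifted up by 1.2 standard deviations.
    rng = np.random.default_rng(4)
    scores = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.2, 1.0, 500)])
    labels = np.concatenate([np.zeros(500, bool), np.ones(500, bool)])

    auc, _, _ = roc_auc(scores, labels)
    print(f"AUC ~ {auc:.3f}   (theory for this setup: ~0.802)")
    ```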

  11. Optical nonlinearity and charge transfer analysis of pyrene adsorbed on silver: Computational and experimental investigations

    NASA Astrophysics Data System (ADS)

    Reeta Felscia, U.; Rajkumar, Beulah J. M.; Sankar, Pranitha; Philip, Reji; Briget Mary, M.

    2017-09-01

    The interaction of pyrene on silver has been investigated using both experimental and computational methods. Hyperpolarizabilities computed theoretically, together with experimental nonlinear absorption from open-aperture Z-scan measurements, point towards a possible use of pyrene adsorbed on silver in the rational design of NLO devices. The presence of a red shift in both simulated and experimental UV-Vis spectra confirms the adsorption on silver, which is due to the electrostatic interaction between silver and pyrene, inducing variations in the structural parameters of pyrene. Fukui calculations along with the MEP plot predict the electrophilic nature of the silver cluster in the presence of pyrene, with NBO analysis revealing that the adsorption causes charge redistribution from the first three rings of pyrene towards the fourth ring, from where the 2p orbitals of carbon interact with the valence 5s orbitals of the cluster. This is further confirmed by the downshifting of ring breathing modes in both the experimental and theoretical Raman spectra.

  12. Vibrational spectroscopy (FT-IR and Laser-Raman) investigation, and computational (M06-2X and B3LYP) analysis on the structure of 4-(3-fluorophenyl)-1-(propan-2-ylidene)-thiosemicarbazone.

    PubMed

    Sert, Yusuf; Miroslaw, Barbara; Çırak, Çağrı; Doğan, Hatice; Szulczyk, Daniel; Struga, Marta

    2014-07-15

    In this study, the experimental and theoretical vibrational spectral analysis of 4-(3-fluorophenyl)-1-(propan-2-ylidene)-thiosemicarbazone has been carried out. The experimental FT-IR (4000-400 cm(-1)) and Laser-Raman spectra (4000-100 cm(-1)) have been recorded for solid-state samples. The theoretical vibrational frequencies and the optimized geometric parameters (bond lengths and angles) have been calculated for the gas phase using density functional theory (DFT/B3LYP: Becke, 3-parameter, Lee-Yang-Parr) and M06-2X (a highly parametrized, empirical exchange-correlation functional) quantum chemical methods with the 6-311++G(d,p) basis set. The diversity in molecular geometry of fluorophenyl-substituted thiosemicarbazones has been discussed based on X-ray crystal structure reports and theoretical calculation results from the literature. The assignments of the vibrational frequencies have been made on the basis of potential energy distribution (PED) analysis using the VEDA4 software. A good correlation was found between the computed and experimental geometric and vibrational data. In addition, the highest occupied (HOMO) and lowest unoccupied (LUMO) molecular orbital energy levels and other related molecular energy values of the compound have been determined using the same level of theoretical calculations.

  13. Thermohydrodynamic Analysis of Cryogenic Liquid Turbulent Flow Fluid Film Bearings

    NASA Technical Reports Server (NTRS)

    San Andres, Luis

    1996-01-01

    This report describes a thermohydrodynamic analysis and computer programs for the prediction of the static and dynamic force response of fluid film bearings for cryogenic applications. The research performed effectively addressed the most important theoretical and practical issues related to the operation and performance of cryogenic fluid film bearings. Five computer codes have been licensed by Texas A&M University to NASA centers and contractors, and a total of 14 technical papers have been published.

  14. Analysis of reaction cross-section production in neutron induced fission reactions on uranium isotope using computer code COMPLET.

    PubMed

    Asres, Yihunie Hibstie; Mathuthu, Manny; Birhane, Marelgn Derso

    2018-04-22

    This study provides current evidence about cross-section production processes in the theoretical and experimental results of neutron-induced reactions of a uranium isotope over the projectile energy range of 1-100 MeV, in order to improve the reliability of nuclear simulation. In fission reactions of 235U within nuclear reactors, a large amount of energy is released, enough to help satisfy worldwide energy needs without the polluting processes associated with other sources. The main objective of this work is to convey knowledge of neutron-induced fission reactions on 235U by describing, analyzing and interpreting the theoretical cross sections obtained from the computer code COMPLET and comparing them with experimental data obtained from EXFOR. The cross-section values of 235U(n,2n)234U, 235U(n,3n)233U, 235U(n,γ)236U and 235U(n,f) were obtained using the computer code COMPLET, and the corresponding experimental values were retrieved from the IAEA EXFOR data bank. The computer code COMPLET was used for the analysis with the same set of input parameters, and the graphs were plotted with the help of spreadsheet and Origin 8 software. The quantification of uncertainties stemming from both the experimental data and the computer code calculation plays a significant role in the final evaluated results. The calculated total cross sections were compared with the experimental data from EXFOR, and good agreement was found between the experimental and theoretical data. The comparison was analyzed and interpreted with tabular and graphical descriptions, and the results are briefly discussed within the text of this research work.

  15. High temperature electrolytic recovery of oxygen from gaseous effluents from the carbo-chlorination of lunar anorthite and the hydrogenation of ilmenite: A theoretical study

    NASA Technical Reports Server (NTRS)

    Erstfield, T. E.; Williams, R. J.

    1979-01-01

    A thermodynamic analysis is presented of the compositions of gaseous effluents from the reactions of carbon and chlorine with lunar anorthite and of hydrogen with lunar ilmenite. The computations consider the effects of the indigenous volatiles on the solid/gas reactions and on the composition of the effluent gases. A theoretical parameterization of the high temperature electrolysis of such gases is given for several types of solid ceramic electrolytes, and the effect of oxygen removal on the effluents is computed. Potential chemical interactions between the gases and the ceramic electrolytes are analyzed and discussed.
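
    Electrolysis parameterizations of this kind are conventionally anchored by the Nernst relation for an oxygen-ion-conducting ceramic cell. As a hedged statement of that standard relation (the report's own parameterization may differ in detail):

    ```latex
    % EMF of an ideal oxygen-ion-conducting electrolyte separating two
    % oxygen partial pressures (four electrons transferred per O2 molecule):
    E = \frac{RT}{4F}\,\ln\frac{p_{\mathrm{O_2}}^{\mathrm{cathode}}}{p_{\mathrm{O_2}}^{\mathrm{anode}}}
    ```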

  16. Theoretical prediction of airplane stability derivatives at subcritical speeds

    NASA Technical Reports Server (NTRS)

    Tulinius, J.; Clever, W.; Nieman, A.; Dunn, K.; Gaither, B.

    1973-01-01

    The theoretical development and application of an analysis for predicting the major static and rotary stability derivatives of a complete airplane are described. The analysis utilizes potential flow theory to compute the surface flow fields and pressures on any configuration that can be synthesized from arbitrary lifting bodies and nonplanar thick lifting panels. The pressures are integrated to obtain section and total configuration loads and moments due to sideslip, angle of attack, pitching motion, rolling motion, yawing motion, and control surface deflection. Subcritical compressibility is accounted for by means of the Gothert similarity rule.

  17. Spectroscopic investigation of some building blocks of organic conductors: A comparative study

    NASA Astrophysics Data System (ADS)

    Mukherjee, V.; Yadav, T.

    2017-04-01

    Theoretical molecular structures and IR and Raman spectra of di- and tetramethyl-substituted tetrathiafulvalene and tetraselenafulvalene molecules have been studied. These molecules belong to the organic-conductor family and are widely used as building blocks of organic conducting devices. Hartree-Fock and density functional theory with the B3LYP functional were employed for the computations. We have also performed normal-coordinate analysis to scale the theoretical frequencies and to calculate potential energy distributions for unambiguous assignments. Excitation-frequency- and temperature-dependent Raman spectra are also presented. The optimization results reveal that the sulphur derivatives adopt a boat shape while the selenium derivatives possess planar structures. Natural bond orbital analysis has also been performed to study second-order interactions between donors and acceptors and to compute molecular orbital occupancies and energies.

  18. On the Hilbert-Huang Transform Theoretical Foundation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Blank, Karin; Huang, Norden E.

    2004-01-01

    The Hilbert-Huang Transform [HHT] is a novel empirical method for spectrum analysis of non-linear and non-stationary signals. The HHT is a recent development and much remains to be done to establish the theoretical foundation of the HHT algorithms. This paper develops the theoretical foundation for the convergence of the HHT sifting algorithm and it proves that the finest spectrum scale will always be the first generated by the HHT Empirical Mode Decomposition (EMD) algorithm. The theoretical foundation for cutting an extrema data points set into two parts is also developed. This then allows parallel signal processing for the HHT computationally complex sifting algorithm and its optimization in hardware.
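
    As a rough illustration of the sifting idea summarized above, a minimal sketch follows (not the authors' implementation; the cubic-spline envelopes and the fixed iteration count are simplifying assumptions):

      # Minimal sketch of EMD sifting: repeatedly subtract the mean of the
      # upper and lower extrema envelopes until an IMF candidate remains.
      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.signal import argrelextrema

      def sift_once(t, x):
          maxima = argrelextrema(x, np.greater)[0]
          minima = argrelextrema(x, np.less)[0]
          if len(maxima) < 2 or len(minima) < 2:
              return x  # too few extrema to build envelopes
          upper = CubicSpline(t[maxima], x[maxima])(t)
          lower = CubicSpline(t[minima], x[minima])(t)
          return x - 0.5 * (upper + lower)

      def first_imf(t, x, n_sifts=10):
          h = x.copy()
          for _ in range(n_sifts):  # fixed-count stopping rule (simplification)
              h = sift_once(t, h)
          return h

      t = np.linspace(0.0, 1.0, 1000)
      x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
      imf1 = first_imf(t, x)  # dominated by the 40 Hz (finest) component

    The convergence result the paper proves is visible here: the first IMF extracted is the finest oscillatory scale in the signal.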

  19. DEVELOPMENT OF WELDED SEAL FOR S3G REACTOR VESSEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, J.W.

    1958-01-01

    The development program consisted of preliminary design; welding accessibility and feasibility; pressure and displacement cycling; theoretical analysis and life computation; photoelastic analysis; and comparison with PWR straight-sample cycling. Design "C" of the three primary designs considered proved the most satisfactory from a fatigue-life standpoint.

  20. Main rotor free wake geometry effects on blade air loads and response for helicopters in steady maneuvers. Volume 2: Program listings

    NASA Technical Reports Server (NTRS)

    Sadler, S. G.

    1972-01-01

    A mathematical model and computer program was implemented to study the main rotor free wake geometry effects on helicopter rotor blade air loads and response in steady maneuvers. Volume 1 (NASA CR-2110) contains the theoretical formulation and analysis of results. Volume 2 contains the computer program listing.

  1. High Performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions

    DTIC Science & Technology

    2016-08-30

    Final report on a dedicated high-performance computer cluster acquired for theoretical studies of roaming in chemical reactions; the sponsoring agency was the U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211.

  2. Assessing collaborative computing: development of the Collaborative-Computing Observation Instrument (C-COI)

    NASA Astrophysics Data System (ADS)

    Israel, Maya; Wherfel, Quentin M.; Shehab, Saadeddine; Ramos, Evan A.; Metzger, Adam; Reese, George C.

    2016-07-01

    This paper describes the development, validation, and uses of the Collaborative Computing Observation Instrument (C-COI), a web-based analysis instrument that classifies individual and/or collaborative behaviors of students during computing problem-solving (e.g. coding, programming). The C-COI analyzes data gathered through video and audio screen recording software that captures students' computer screens as they program, and their conversations with their peers or adults. The instrument allows researchers to organize and quantify these data to track behavioral patterns that could be further analyzed for deeper understanding of persistence and/or collaborative interactions. The article provides a rationale for the C-COI including the development of a theoretical framework for measuring collaborative interactions in computer-mediated environments. This theoretical framework relied on the computer-supported collaborative learning literature related to adaptive help seeking, the joint problem-solving space in which collaborative computing occurs, and conversations related to outcomes and products of computational activities. Instrument development and validation also included ongoing advisory board feedback from experts in computer science, collaborative learning, and K-12 computing as well as classroom observations to test out the constructs in the C-COI. These processes resulted in an instrument with rigorous validation procedures and a high inter-rater reliability.

  3. A brief overview of computational structures technology related activities at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.

    1992-01-01

    The presentation gives a partial overview of research and development underway in the Structures Division of LeRC, which collectively is referred to as the Computational Structures Technology Program. The activities in the program are diverse and encompass four major categories: (1) composite materials and structures; (2) probabilistic analysis and reliability; (3) design optimization and expert systems; and (4) computational methods and simulation. The approach of the program is comprehensive. It entails exploration of fundamental theories of structural mechanics to accurately represent the complex physics governing engine structural performance; formulation and implementation of computational techniques and integrated simulation strategies that provide accurate and efficient solutions of the governing theoretical models by exploiting emerging advances in computer technology; and validation and verification through numerical and experimental tests to establish confidence and define the qualities and limitations of the resulting theoretical models and computational solutions. The program comprises both in-house and sponsored research activities. The remainder of the presentation provides a sample of activities to illustrate the breadth and depth of the program and to demonstrate the accomplishments and benefits that have resulted.

  4. Simulation studies of the application of SEASAT data in weather and state of sea forecasting models

    NASA Technical Reports Server (NTRS)

    Cardone, V. J.; Greenwood, J. A.

    1979-01-01

    The design and analysis of SEASAT simulation studies, in which the error structure of conventional analyses and forecasts is modeled realistically, are presented. The development and computer implementation of a global spectral ocean wave model are described. The design of algorithms for the assimilation of theoretical wind data into computers and for the utilization of real wind data and wave height data in a coupled computer system is presented.

  5. Vector wind profile gust model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1979-01-01

    Work towards establishing a vector wind profile gust model for Space Transportation System flight operations and trade studies is reported. To date, all the required statistical and computational techniques have been established and partially implemented. An analysis of wind profile gusts at Cape Kennedy within the theoretical framework is presented. The variability of theoretical and observed gust magnitude with filter type, altitude, and season is described. Various examples are presented which illustrate agreement between theoretical and observed gust percentiles. The preliminary analysis of the gust data indicates a strong variability with altitude, season, and wavelength regime. An extension of the analyses to include conditional distributions of gust magnitude given gust length, distributions of gust modulus, and phase differences between gust components has begun.

  6. Theoretical and experimental investigation of optical absorption anisotropy in β-Ga2O3.

    PubMed

    Ricci, F; Boschi, F; Baraldi, A; Filippetti, A; Higashiwaki, M; Kuramata, A; Fiorentini, V; Fornari, R

    2016-06-08

    The question of optical bandgap anisotropy in the monoclinic semiconductor β-Ga2O3 was revisited by combining accurate optical absorption measurements with theoretical analysis, performed using different advanced computation methods. As expected, the bandgap edge of bulk β-Ga2O3 was found to be a function of light polarization and crystal orientation, with the lowest onset occurring at polarization in the ac crystal plane around 4.5-4.6 eV; polarization along b unambiguously shifts the onset up by 0.2 eV. The theoretical analysis clearly indicates that the shift in the b onset is due to a suppression of the transition matrix elements of the three top valence bands at Γ point.

  7. Theoretical Characterization of Visual Signatures

    NASA Astrophysics Data System (ADS)

    Kashinski, D. O.; Chase, G. M.; di Nallo, O. E.; Scales, A. N.; Vanderley, D. L.; Byrd, E. F. C.

    2015-05-01

    We are investigating the accuracy of theoretical models used to predict the visible, ultraviolet, and infrared spectra, as well as other properties, of product materials ejected from the muzzle of currently fielded systems. Recent advances in solid propellants have made the management of muzzle signature (flash) a principal issue in weapons development across the calibers. A priori prediction of the electromagnetic spectra of formulations will allow researchers to tailor blends that yield desired signatures and determine spectrographic detection ranges. Quantum chemistry methods at various levels of sophistication have been employed to optimize molecular geometries, compute unscaled vibrational frequencies, and determine the optical spectra of specific gas-phase species. Electronic excitations are being computed using Time Dependent Density Functional Theory (TD-DFT). A full statistical analysis and reliability assessment of computational results is currently underway. A comparison of theoretical results to experimental values found in the literature is used to assess any effects of functional choice and basis set on calculation accuracy. The status of this work will be presented at the conference. Work supported by the ARL, DoD HPCMP, and USMA.

  8. Research in progress and other activities of the Institute for Computer Applications in Science and Engineering

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics and computer science during the period April 1, 1993 through September 30, 1993. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.

  9. Solutions of burnt-bridge models for molecular motor transport.

    PubMed

    Morozov, Alexander Yu; Pronina, Ekaterina; Kolomeisky, Anatoly B; Artyomov, Maxim N

    2007-03-01

    Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called "bridges"), is investigated theoretically by analyzing discrete-state stochastic "burnt-bridge" models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed ("burned") with a probability p, creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a theoretical method is developed and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics for periodic distribution of bridges and different burning dynamics are analyzed and compared. Analytical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.
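
    A minimal Monte Carlo sketch of such a discrete-state burnt-bridge model follows (parameters are illustrative, and bridges here burn only on rightward crossings, one common simplifying convention):

      # Unbiased hopper on the integers; every `spacing`-th link is a bridge
      # that burns with probability p when crossed rightward and then blocks
      # all further crossings, rectifying the diffusion into directed motion.
      import random

      def drift_velocity(p=0.1, spacing=10, steps=200_000, seed=1):
          rng = random.Random(seed)
          pos, burnt = 0, set()
          for _ in range(steps):
              step = rng.choice((-1, 1))
              link = pos if step == 1 else pos - 1  # left site of the link
              if link in burnt:
                  continue  # a burnt bridge cannot be recrossed
              pos += step
              if step == 1 and link % spacing == 0 and rng.random() < p:
                  burnt.add(link)
          return pos / steps  # mean velocity, sites per time step

      print(drift_velocity())  # > 0: directed motion from symmetric hopping

    With p near 1 the walker rarely recrosses a bridge before it burns, approaching the exactly solvable hopping limit described above.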

  10. Mixing Categories and Modal Logics in the Quantum Setting

    NASA Astrophysics Data System (ADS)

    Cinà, Giovanni

    The study of the foundations of Quantum Mechanics, especially after the advent of Quantum Computation and Information, has benefited from the application of category-theoretic tools and modal logics to the analysis of Quantum processes: we witness a wealth of theoretical frameworks cast in either of the two languages. This paper explores the interplay of the two formalisms in the peculiar context of Quantum Theory. After a review of some influential abstract frameworks, we show how different modal logic frames can be extracted from the category of finite dimensional Hilbert spaces, connecting the Categorical Quantum Mechanics approach to some modal logics that have been proposed for Quantum Computing. We then apply a general version of the same technique to two other categorical frameworks, the `topos approach' of Doering and Isham and the sheaf-theoretic work on contextuality by Abramsky and Brandenburger, suggesting how some key features can be expressed with modal languages.

  11. Solutions of burnt-bridge models for molecular motor transport

    NASA Astrophysics Data System (ADS)

    Morozov, Alexander Yu.; Pronina, Ekaterina; Kolomeisky, Anatoly B.; Artyomov, Maxim N.

    2007-03-01

    Transport of molecular motors, stimulated by interactions with specific links between consecutive binding sites (called “bridges”), is investigated theoretically by analyzing discrete-state stochastic “burnt-bridge” models. When an unbiased diffusing particle crosses the bridge, the link can be destroyed (“burned”) with a probability p, creating a biased directed motion for the particle. It is shown that for probability of burning p=1 the system can be mapped into a one-dimensional single-particle hopping model along the periodic infinite lattice that allows one to calculate exactly all dynamic properties. For the general case of p<1 a theoretical method is developed and dynamic properties are computed explicitly. Discrete-time and continuous-time dynamics for periodic distribution of bridges and different burning dynamics are analyzed and compared. Analytical predictions are supported by extensive Monte Carlo computer simulations. Theoretical results are applied for analysis of the experiments on collagenase motor proteins.

  12. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft, supplemental data

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1975-01-01

    Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.
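
    For reference, a sketch of the classical one-dimensional regula falsi that the report's two-dimensional approach generalizes (standard textbook method, not the report's code):

      # Regula falsi keeps a sign-changing bracket and replaces bisection's
      # midpoint with the secant intersection, the kind of root-finder used
      # on flutter-equation solutions.
      def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
          fa, fb = f(a), f(b)
          assert fa * fb < 0.0, "root must be bracketed"
          c = a
          for _ in range(max_iter):
              c = b - fb * (b - a) / (fb - fa)  # secant through the bracket
              fc = f(c)
              if abs(fc) < tol:
                  break
              if fa * fc < 0.0:
                  b, fb = c, fc
              else:
                  a, fa = c, fc
          return c

      print(regula_falsi(lambda x: x * x - 2.0, 0.0, 2.0))  # ~1.4142135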

  13. Modeling of rolling element bearing mechanics. Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Greenhill, Lyn M.; Merchant, David H.

    1994-01-01

    This report provides the user's manual for the Rolling Element Bearing Analysis System (REBANS) analysis code which determines the quasistatic response to external loads or displacement of three types of high-speed rolling element bearings: angular contact ball bearings, duplex angular contact ball bearings, and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It is comprised of two main programs: the Preprocessor for Bearing Analysis (PREBAN) which creates the input files for the main analysis program, and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. This report addresses input instructions for and features of the computer codes. A companion report addresses the theoretical basis for the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.

  14. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 1, Theoretical background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, D.K.

    The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs.

  15. Program Models A Laser Beam Focused In An Aerosol Spray

    NASA Technical Reports Server (NTRS)

    Barton, J. P.

    1996-01-01

    Monte Carlo analysis performed on packets of light. Program for Analysis of Laser Beam Focused Within Aerosol Spray (FLSPRY) developed for theoretical analysis of propagation of laser pulse optically focused within aerosol spray. Applied, for example, to analyze laser ignition arrangement in which focused laser pulse is used to ignite liquid aerosol fuel spray. Scattering and absorption of laser light by individual aerosol droplets evaluated by use of electromagnetic Lorenz-Mie theory. Written in FORTRAN 77 for both UNIX-based computers and DEC VAX-series computers. VAX version of program (LEW-16051). UNIX version (LEW-16065).

  16. Analysis of swimming motions.

    NASA Technical Reports Server (NTRS)

    Gallenstein, J.; Huston, R. L.

    1973-01-01

    This paper presents an analysis of swimming motion with specific attention given to the flutter kick, the breast-stroke kick, and the breast stroke. The analysis is completely theoretical. It employs a mathematical model of the human body consisting of frustums of elliptical cones. Dynamical equations are written for this model including both viscous and inertia forces. These equations are then applied with approximated swimming strokes and solved numerically using a digital computer. The procedure is to specify the input of the swimming motion. The computer solution then provides the output displacement, velocity, and rotation or body roll of the swimmer.

  17. On the theory of drainage area for regular and non-regular points.

    PubMed

    Bonetti, S; Bragg, A D; Porporato, A

    2018-03-01

    The drainage area is an important, non-local property of a landscape, which controls surface and subsurface hydrological fluxes. Its role in numerous ecohydrological and geomorphological applications has given rise to several numerical methods for its computation. However, its theoretical analysis has lagged behind. Only recently, an analytical definition for the specific catchment area was proposed (Gallant & Hutchinson 2011 Water Resour. Res. 47, W05535; doi:10.1029/2009WR008540), with the derivation of a differential equation whose validity is limited to regular points of the watershed. Here, we show that such a differential equation can be derived from a continuity equation (Chen et al. 2014 Geomorphology 219, 68-86; doi:10.1016/j.geomorph.2014.04.037) and extend the theory to critical and singular points both by applying Gauss's theorem and by means of a dynamical systems approach to define basins of attraction of local surface minima. Simple analytical examples as well as applications to more complex topographic surfaces are examined. The theoretical description of topographic features and properties, such as the drainage area, channel lines and watershed divides, can be broadly adopted to develop and test the numerical algorithms currently used in digital terrain analysis for the computation of the drainage area, as well as for the theoretical analysis of landscape evolution and stability.

  18. On the theory of drainage area for regular and non-regular points

    NASA Astrophysics Data System (ADS)

    Bonetti, S.; Bragg, A. D.; Porporato, A.

    2018-03-01

    The drainage area is an important, non-local property of a landscape, which controls surface and subsurface hydrological fluxes. Its role in numerous ecohydrological and geomorphological applications has given rise to several numerical methods for its computation. However, its theoretical analysis has lagged behind. Only recently, an analytical definition for the specific catchment area was proposed (Gallant & Hutchinson 2011 Water Resour. Res. 47, W05535; doi:10.1029/2009WR008540), with the derivation of a differential equation whose validity is limited to regular points of the watershed. Here, we show that such a differential equation can be derived from a continuity equation (Chen et al. 2014 Geomorphology 219, 68-86; doi:10.1016/j.geomorph.2014.04.037) and extend the theory to critical and singular points both by applying Gauss's theorem and by means of a dynamical systems approach to define basins of attraction of local surface minima. Simple analytical examples as well as applications to more complex topographic surfaces are examined. The theoretical description of topographic features and properties, such as the drainage area, channel lines and watershed divides, can be broadly adopted to develop and test the numerical algorithms currently used in digital terrain analysis for the computation of the drainage area, as well as for the theoretical analysis of landscape evolution and stability.
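
    One family of numerical algorithms that this theory is intended to underpin and test is flow accumulation on gridded terrain; a minimal sketch of the classic D8 variant follows (illustrative, not the authors' method):

      # D8 flow accumulation: each cell passes its accumulated area to its
      # steepest-descent neighbor; cells are visited from highest to lowest
      # so every upslope contribution is final before it is routed onward.
      import numpy as np

      def d8_drainage_area(z, cell_area=1.0):
          ny, nx = z.shape
          nbrs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
          order = np.argsort(z, axis=None)[::-1]    # descending elevation
          area = np.full(z.shape, cell_area)
          for idx in order:
              i, j = divmod(idx, nx)
              best, drop = None, 0.0
              for di, dj in nbrs:
                  ni, nj = i + di, j + dj
                  if 0 <= ni < ny and 0 <= nj < nx:
                      d = (z[i, j] - z[ni, nj]) / np.hypot(di, dj)
                      if d > drop:
                          best, drop = (ni, nj), d
              if best is not None:      # local minima/flats retain their area
                  area[best] += area[i, j]
          return area

      z = np.add.outer(np.linspace(1, 0, 50), np.linspace(1, 0, 50))  # tilted plane
      A = d8_drainage_area(z)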

  19. A Computer Program for Practical Semivariogram Modeling and Ordinary Kriging: A Case Study of Porosity Distribution in an Oil Field

    NASA Astrophysics Data System (ADS)

    Mert, Bayram Ali; Dag, Ahmet

    2017-12-01

    In this study, a practical and educational geostatistical program (JeoStat) was first developed, and an example analysis of porosity distribution using oilfield data is then presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal, or indicator-transformed data. For these analyses, JeoStat offers users seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect, and Paddington Mix). These theoretical models can be easily and quickly fitted to experimental variograms using a mouse. JeoStat uses the ordinary kriging interpolation technique to compute point or block estimates, and cross-validation testing to validate the fitted theoretical model. All results obtained in the analysis, as well as graphics such as histograms, variograms, and kriging estimation maps, can be saved to the hard drive, including digitized graphics and maps; the numerical value of any point in a map can be inspected using the mouse and text boxes. This program is available free of charge to students, researchers, consultants, and corporations of any size. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
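
    A minimal sketch of the two ingredients named above, a spherical variogram model and an ordinary-kriging point estimate (illustrative code, not the JeoStat source):

      # Spherical variogram model plus a single-point ordinary-kriging
      # estimate via the standard variogram-form kriging system.
      import numpy as np

      def spherical(h, nugget, sill, rng):
          h = np.asarray(h, dtype=float)
          g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
          return np.where(h < rng, g, sill)

      def ordinary_krige(xy, vals, target, vmodel):
          n = len(vals)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = vmodel(d)
          A[n, n] = 0.0                      # Lagrange-multiplier row/column
          b = np.ones(n + 1)
          b[:n] = vmodel(np.linalg.norm(xy - target, axis=1))
          w = np.linalg.solve(A, b)          # weights + Lagrange multiplier
          return w[:n] @ vals                # BLUE estimate at the target

      xy = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
      porosity = np.array([0.12, 0.15, 0.10, 0.14])
      vm = lambda h: spherical(h, nugget=0.0, sill=1.0, rng=2.0)
      print(ordinary_krige(xy, porosity, np.array([0.5, 0.5]), vm))

    Swapping `spherical` for another fitted model changes only the variogram function handed to the solver, mirroring how such programs separate model fitting from kriging.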

  20. Multidisciplinary analysis and design of printed wiring boards

    NASA Astrophysics Data System (ADS)

    Fulton, Robert E.; Hughes, Joseph L.; Scott, Waymond R., Jr.; Umeagukwu, Charles; Yeh, Chao-Pin

    1991-04-01

    Modern printed wiring board design depends on electronic prototyping using computer-based simulation and design tools. Existing electrical computer-aided design (ECAD) tools emphasize circuit connectivity with only rudimentary analysis capabilities. This paper describes a prototype integrated PWB design environment, denoted Thermal Structural Electromagnetic Testability (TSET), being developed at Georgia Tech in collaboration with companies in the electronics industry. TSET provides design guidance based on enhanced electrical and mechanical CAD capabilities, including electromagnetic modeling, testability analysis, thermal management, and solid mechanics analysis. TSET development is based on a strong analytical and theoretical science base and incorporates an integrated information framework and a common database design based on a systematic structured methodology.

  1. An algol program for dissimilarity analysis: a divisive-omnithetic clustering technique

    USGS Publications Warehouse

    Tipper, J.C.

    1979-01-01

    Clustering techniques are properly used to generate hypotheses about patterns in data. Of the hierarchical techniques, those which are divisive and omnithetic possess many theoretically optimal properties. One such method, dissimilarity analysis, is implemented here in ALGOL 60 and is found to be computationally competitive with most other methods. © 1979.

  2. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    NASA Astrophysics Data System (ADS)

    Arezzini, S.; Carboni, A.; Caruso, G.; Ciampa, A.; Coscetti, S.; Mazzoni, E.; Piras, S.

    2014-06-01

    The INFN-Pisa Tier2 infrastructure is described. It is optimized not only for GRID CPU and storage access, but also for more interactive use of the resources, in order to provide good solutions for the final data analysis step. The data center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStat) implemented on multicore systems. In particular, POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure, based on GPFS and Xrootd and used both for the SRM data repository and for interactive POSIX access, is therefore described. This common infrastructure gives users transparent access to the Tier2 data for their interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism, integrated with INFN-AAI (the national INFN infrastructure), that extends site access and use to a geographically distributed community. The infrastructure also serves as a national computing facility for the INFN theoretical community, enabling a synergic use of computing and storage resources. Our center, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now updating this facility to provide resources for all the intermediate-level HPC computing needs of the INFN theoretical national community.

  3. Implementation of an experimental program to investigate the performance characteristics of OMEGA navigation

    NASA Technical Reports Server (NTRS)

    Baxa, E. G., Jr.

    1974-01-01

    A theoretical formulation of differential and composite OMEGA error is presented to establish hypotheses about the functional relationships between various parameters and OMEGA navigational errors. Computer software developed to provide for extensive statistical analysis of the phase data is described. Results from the regression analysis used to conduct parameter sensitivity studies on differential OMEGA error tend to validate the theoretically based hypothesis concerning the relationship between uncorrected differential OMEGA error and receiver separation range and azimuth. Limited results of measurement of receiver repeatability error and line of position measurement error are also presented.

  4. A computer program for predicting nonlinear uniaxial material responses using viscoplastic models

    NASA Technical Reports Server (NTRS)

    Chang, T. Y.; Thompson, R. L.

    1984-01-01

    A computer program was developed for predicting nonlinear uniaxial material responses using viscoplastic constitutive models. Four specific models, i.e., those due to Miller, Walker, Krieg-Swearengen-Rhode, and Robinson, are included. Any other unified model is easily implemented into the program in the form of subroutines. Analysis features include stress-strain cycling, creep response, stress relaxation, thermomechanical fatigue loop, or any combination of these responses. An outline is given on the theoretical background of uniaxial constitutive models, analysis procedure, and numerical integration methods for solving the nonlinear constitutive equations. In addition, a discussion on the computer program implementation is also given. Finally, seven numerical examples are included to demonstrate the versatility of the computer program developed.
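
    A minimal sketch of the kind of uniaxial stress-strain prediction such a program performs, using a generic Norton-type power-law flow rule rather than any of the four models named above (material constants are illustrative):

      # Strain-controlled uniaxial loading with a power-law inelastic flow
      # rule; forward-Euler subincrements stand in for the report's
      # numerical integration schemes.
      import numpy as np

      E, A, n = 200e3, 1e-12, 5.0    # MPa, 1/(s*MPa^n), exponent (assumed)

      def stress_strain(strain_rate=1e-4, t_end=400.0, dt=0.01):
          eps_in, eps_tot, hist = 0.0, 0.0, []
          for _ in range(int(t_end / dt)):
              eps_tot += strain_rate * dt              # imposed total strain
              sigma = E * (eps_tot - eps_in)           # elastic partition
              eps_in += A * abs(sigma) ** (n - 1) * sigma * dt  # flow rule
              hist.append((eps_tot, sigma))
          return np.array(hist)

      curve = stress_strain()   # columns: total strain, stress (saturates ~40 MPa)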

  5. Man-machine analysis of translation and work tasks of Skylab films

    NASA Technical Reports Server (NTRS)

    Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.

    1979-01-01

    Real-time film analysis is an objective approach to determining the concurrent validity of computer-graphic models. This technique was illustrated through the procedures and results obtained in an evaluation of translation of Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, a minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer assisted techniques for interface design and evaluation have the potential for improving the capability for human factors engineering.

  6. Site Characterization in the Urban Area of Tijuana, B. C., Mexico by Means of: H/V Spectral Ratios, Spectral Analysis of Surface Waves, and Random Decrement Method

    NASA Astrophysics Data System (ADS)

    Tapia-Herrera, R.; Huerta-Lopez, C. I.; Martinez-Cruzado, J. A.

    2009-05-01

    Results of site characterization for an experimental site in the metropolitan area of Tijuana, B.C., Mexico are presented as part of on-going research in which time series of earthquakes, ambient noise, and induced vibrations were processed with three different methods: H/V spectral ratios, Spectral Analysis of Surface Waves (SASW), and the Random Decrement Method (RDM). Forward modeling using the wave propagation stiffness matrix method (Roësset and Kausel, 1981) was used to compute the theoretical SH/P and SV/P spectral ratios, and the experimental H/V spectral ratios were computed following the conventional concepts of Fourier analysis. The theoretical and experimental H/V spectral ratios were then compared. For the SASW method the theoretical dispersion curves were also computed and compared with the experimental ones, and finally the theoretical free-vibration decay curve was compared with the experimental one obtained with the RDM. All three methods were tested with ambient noise, induced vibrations, and earthquake signals. The experimental spectral ratios obtained with both ambient noise and earthquake signals agree quite well with the theoretical spectral ratios, particularly at the fundamental vibration frequency of the recording site. Differences between the fundamental vibration frequencies are evident for sites located on alluvial fill (~0.6 Hz) and sites located on conglomerate/sandstone fill (0.75 Hz). Shear wave velocities for the soft soil layers of the 4-layer discrete soil model range from as low as 100 m/s up to 280 m/s. The SASW results made it possible to identify low-velocity layers not seen before with the traditional seismic methods. The damping estimates obtained with the RDM are within the expected values, and the dominant frequency of the system, also obtained with the RDM, agrees within ±20% with the one obtained by means of the H/V spectral ratio.
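
    The conventional Fourier-based H/V computation mentioned above can be sketched as follows (window length, tapering, and averaging choices are assumptions, not the authors' exact processing):

      # H/V spectral ratio from three-component records: average the ratio of
      # the geometric-mean horizontal spectrum to the vertical spectrum over
      # tapered, non-overlapping windows.
      import numpy as np

      def hv_ratio(ns, ew, v, fs, nwin=4096):
          nseg = len(v) // nwin
          f = np.fft.rfftfreq(nwin, d=1.0 / fs)
          acc = np.zeros(len(f))
          w = np.hanning(nwin)
          for k in range(nseg):
              s = slice(k * nwin, (k + 1) * nwin)
              N, E, V = (np.abs(np.fft.rfft(w * x[s])) for x in (ns, ew, v))
              acc += np.sqrt(N * E) / np.maximum(V, 1e-12)
          return f, acc / nseg

      fs = 100.0
      t = np.arange(0.0, 600.0, 1.0 / fs)
      rng = np.random.default_rng(0)   # synthetic noise stands in for records
      ns, ew, v = (rng.standard_normal(t.size) for _ in range(3))
      f, hv = hv_ratio(ns, ew, v, fs)  # peak of hv estimates the site frequency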

  7. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  8. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.

    1992-01-01

    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic William-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.

  9. Blade loss transient dynamics analysis, volume 1. Task 2: TETRA 2 theoretical development

    NASA Technical Reports Server (NTRS)

    Gallardo, Vincente C.; Black, Gerald

    1986-01-01

    The theoretical development of the forced steady state analysis of the structural dynamic response of a turbine engine having nonlinear connecting elements is discussed. Based on modal synthesis and the principle of harmonic balance, the governing relations are the compatibility of displacements at the nonlinear connecting elements. There are four displacement compatibility equations at each nonlinear connection, which are solved by iteration for the principal harmonic of the excitation frequency. The resulting computer program, TETRA 2, combines the original TETRA transient analysis (with flexible bladed disk) with the steady state capability. A more versatile nonlinear rub or bearing element, which contains a hardening (or softening) spring, with or without deadband, is also incorporated.

  10. The NASTRAN theoretical manual

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Designed to accommodate additions and modifications, this commentary on NASTRAN describes the problem solving capabilities of the program in a narrative fashion and presents developments of the analytical and numerical procedures that underlie the program. Seventeen major sections and numerous subsections cover: the organizational aspects of the program, utility matrix routines, static structural analysis, heat transfer, dynamic structural analysis, computer graphics, special structural modeling techniques, error analysis, interaction between structures and fluids, and aeroelastic analysis.

  11. Modeling of rolling element bearing mechanics. Theoretical manual

    NASA Technical Reports Server (NTRS)

    Merchant, David H.; Greenhill, Lyn M.

    1994-01-01

    This report documents the theoretical basis for the Rolling Element Bearing Analysis System (REBANS) analysis code which determines the quasistatic response to external loads or displacement of three types of high-speed rolling element bearings: angular contact ball bearings; duplex angular contact ball bearings; and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It is comprised of two main programs: the Preprocessor for Bearing Analysis (PREBAN) which creates the input files for the main analysis program; and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. A companion report addresses the input instructions for and features of the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.

  12. CASKS (Computer Analysis of Storage Casks): A microcomputer based analysis system for storage cask review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T.F.; Mok, G.C.; Carlson, R.W.

    1996-12-01

    CASKS is a microcomputer based computer system developed by LLNL to assist the Nuclear Regulatory Commission in performing confirmatory analyses for licensing review of radioactive-material storage cask designs. The analysis programs of the CASKS computer system consist of four modules--the impact analysis module, the thermal analysis module, the thermally-induced stress analysis module, and the pressure-induced stress analysis module. CASKS uses a series of menus to coordinate input programs, cask analysis programs, output programs, data archive programs and databases, so the user is able to run the system in an interactive environment. This paper outlines the theoretical background on the impact analysis module and the yielding surface formulation. The close agreement between the CASKS analytical predictions and the results obtained from the two storage-cask drop tests performed by SNL and by BNFL at Winfrith serves as the validation of the CASKS impact analysis module.

  13. Casks (computer analysis of storage casks): A microcomputer based analysis system for storage cask review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T.F.; Mok, G.C.; Carlson, R.W.

    1995-08-01

    CASKS is a microcomputer based computer system developed by LLNL to assist the Nuclear Regulatory Commission in performing confirmatory analyses for licensing review of radioactive-material storage cask designs. The analysis programs of the CASKS computer system consist of four modules: the impact analysis module, the thermal analysis module, the thermally-induced stress analysis module, and the pressure-induced stress analysis module. CASKS uses a series of menus to coordinate input programs, cask analysis programs, output programs, data archive programs and databases, so the user is able to run the system in an interactive environment. This paper outlines the theoretical background on the impact analysis module and the yielding surface formulation. The close agreement between the CASKS analytical predictions and the results obtained from the two storage-cask drop tests performed by SNL and by BNFL at Winfrith serves as the validation of the CASKS impact analysis module.

  14. Distributed computing for membrane-based modeling of action potential propagation.

    PubMed

    Porras, D; Rogers, J M; Smith, W M; Pollard, A E

    2000-08-01

    Action potential propagation simulations with physiologic membrane currents and macroscopic tissue dimensions are computationally expensive. We, therefore, analyzed distributed computing schemes to reduce execution time in workstation clusters by parallelizing solutions with message passing. Four schemes were considered in two-dimensional monodomain simulations with the Beeler-Reuter membrane equations. Parallel speedups measured with each scheme were compared to theoretical speedups, recognizing the relationship between speedup and code portions that executed serially. A data decomposition scheme based on total ionic current provided the best performance. Analysis of communication latencies in that scheme led to a load-balancing algorithm in which measured speedups at 89 +/- 2% and 75 +/- 8% of theoretical speedups were achieved in homogeneous and heterogeneous clusters of workstations. Speedups in this scheme with the Luo-Rudy dynamic membrane equations exceeded 3.0 with eight distributed workstations. Cluster speedups were comparable to those measured during parallel execution on a shared memory machine.
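
    The relationship between speedup and serially executing code portions referred to above is Amdahl's law; a one-line check (the 5% serial fraction is an assumed example):

      # Theoretical speedup on p workers when a fraction s of the work
      # must execute serially.
      def amdahl_speedup(s, p):
          return 1.0 / (s + (1.0 - s) / p)

      for p in (2, 4, 8):
          print(p, round(amdahl_speedup(0.05, p), 2))  # 8 workers -> ~5.93x

    Ratios of measured to theoretical speedup computed this way are the kind of efficiency figures (89% and 75%) quoted above.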

  15. A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

    DOE PAGES

    Tuo, Rui; Jeff Wu, C. F.

    2016-07-19

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are unavailable in physical experiments. Here, an approach to estimating them by using data from physical experiments and computer simulations is presented. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed and proven to be L2-consistent and to enjoy the optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
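
    In assumed notation (not taken from the paper), the idea of L2 calibration is to choose the parameter value that brings the computer model closest to the true physical response in the L2 norm:

      % zeta(.) = true physical response, f(., theta) = computer-model output
      % over the design region Omega (all notation assumed for illustration)
      \theta^{*} = \operatorname*{arg\,min}_{\theta \in \Theta}
        \bigl\lVert \zeta(\cdot) - f(\cdot, \theta) \bigr\rVert_{L_2(\Omega)}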

  16. Building Cognition: The Construction of Computational Representations for Scientific Discovery.

    PubMed

    Chandrasekharan, Sanjay; Nersessian, Nancy J

    2015-11-01

    Novel computational representations, such as simulation models of complex systems and video games for scientific discovery (Foldit, EteRNA etc.), are dramatically changing the way discoveries emerge in science and engineering. The cognitive roles played by such computational representations in discovery are not well understood. We present a theoretical analysis of the cognitive roles such representations play, based on an ethnographic study of the building of computational models in a systems biology laboratory. Specifically, we focus on a case of model-building by an engineer that led to a remarkable discovery in basic bioscience. Accounting for such discoveries requires a distributed cognition (DC) analysis, as DC focuses on the roles played by external representations in cognitive processes. However, DC analyses by and large have not examined scientific discovery, and they mostly focus on memory offloading, particularly how the use of existing external representations changes the nature of cognitive tasks. In contrast, we study discovery processes and argue that discoveries emerge from the processes of building the computational representation. The building process integrates manipulations in imagination and in the representation, creating a coupled cognitive system of model and modeler, where the model is incorporated into the modeler's imagination. This account extends DC significantly, and we present some of the theoretical and application implications of this extended account.

  17. IMMAN: free software for information theory-based chemometric analysis.

    PubMed

    Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo

    2015-05-01

    The features and theoretical background of a new and free computational program for chemometric analysis named IMMAN (an acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface, for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches each. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, and Jeffreys' information measure is introduced for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty, are incorporated into the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing-value processing, dataset partitioning, and browsing. Moreover, single-parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, and comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA supervised algorithms.
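
    A minimal sketch of two of the quantities such tools expose, Shannon entropy and information gain after equal-interval discretization (illustrative code, not the IMMAN implementation):

      # Shannon entropy of a label vector and the information gain of a
      # continuous feature discretized into equal-width bins.
      import numpy as np

      def entropy(labels):
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log2(p))

      def information_gain(feature, target, bins=5):
          edges = np.histogram_bin_edges(feature, bins)[1:-1]  # interior edges
          binned = np.digitize(feature, edges)
          h_cond = 0.0
          for b in np.unique(binned):
              mask = binned == b
              h_cond += mask.mean() * entropy(target[mask])
          return entropy(target) - h_cond     # H(target) - H(target | feature)

      rng = np.random.default_rng(0)
      x = rng.normal(size=200)
      y = (x + 0.5 * rng.normal(size=200)) > 0   # x is an informative feature
      print(information_gain(x, y))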

  18. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography

    PubMed Central

    Jørgensen, J. S.; Sidky, E. Y.

    2015-01-01

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620

  19. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography.

    PubMed

    Jørgensen, J S; Sidky, E Y

    2015-06-13

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.
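
    The empirical recipe described above amounts to sweeping a grid of sampling and sparsity levels, solving, and recording successes; a minimal sketch with a Gaussian sensing matrix and iterative hard thresholding standing in for the paper's CT solvers:

      # One cell of an empirical phase diagram: draw random k-sparse signals,
      # sense with a Gaussian matrix, attempt recovery, record the success rate.
      import numpy as np

      def iht(A, y, k, iters=300):
          """Iterative hard thresholding (a simple sparse-recovery solver)."""
          mu = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              x = x + mu * (A.T @ (y - A @ x))
              x[np.argsort(np.abs(x))[:-k]] = 0.0  # keep only k largest entries
          return x

      def success_rate(n=100, m=50, k=5, trials=20, seed=0):
          rng = np.random.default_rng(seed)
          ok = 0
          for _ in range(trials):
              x = np.zeros(n)
              x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
              A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing
              xr = iht(A, A @ x, k)
              ok += np.linalg.norm(xr - x) <= 1e-3 * np.linalg.norm(x)
          return ok / trials

      # A phase diagram tabulates success_rate over a grid of undersampling
      # (m/n) and sparsity (k/m) fractions.
      print(success_rate())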

  20. An acoustic experimental and theoretical investigation of single disc propellers

    NASA Technical Reports Server (NTRS)

    Bumann, Elizabeth A.; Korkan, Kenneth D.

    1989-01-01

    An experimental study of the acoustic field associated with two, three, and four blade propeller configurations with a blade root angle of 50 deg was performed in the Texas A&M University 5 ft. x 6 ft. acoustically-insulated subsonic wind tunnel. A waveform analysis package was utilized to obtain experimental acoustic time histories, frequency spectra, and overall sound pressure levels (OASPL), and served as a basis for comparison with the theoretical acoustic compact source theory of Succi (1979). Valid for subsonic tip speeds, the acoustic analysis replaced each blade by an array of spiraling point sources, each with a unique force vector and volume. The computer analysis of Succi was modified to include a propeller performance strip analysis which used a NACA 4-digit series airfoil data bank to calculate lift and drag for each blade segment given the geometry and motion of the propeller. Theoretical OASPL predictions were found to moderately overpredict experimental values for all operating conditions and propeller configurations studied.

  1. 1:1 Computing Programs: An Analysis of the Stages of Concerns of 1:1 Integration, Professional Development Training and Level of Classroom Use by Illinois High School Teachers

    ERIC Educational Resources Information Center

    Detering, Brad

    2017-01-01

    This research study, grounded in the theoretical framework of education change, used the Concerns-Based Adoption Model of change to examine the concerns of Illinois high school teachers and administrators regarding the implementation of 1:1 computing programs. A quantitative study of educators investigated the stages of concern and the mathematics…

  2. TRANDESNF: A computer program for transonic airfoil design and analysis in nonuniform flow

    NASA Technical Reports Server (NTRS)

    Chang, J. F.; Lan, C. Edward

    1987-01-01

    The use of a transonic airfoil code for analysis, inverse design, and direct optimization of an airfoil immersed in a propfan slipstream is described. The theoretical method, program capabilities, input format, output variables, and program execution are summarized. Input data for sample test cases and the corresponding output are given.

  3. Transformation in the pharmaceutical industry--a systematic analysis of operational evidence.

    PubMed

    Shafiei, Nader; Ford, James L; Morecroft, Charles W; Lisboa, Paulo J; Taylor, Mark J; Mouzughi, Yusra

    2013-01-01

    Through systematic collection and trending of pharmaceutical data, operational evidence has been compiled to verify the existence of 14 factors affecting the ongoing pharmaceutical transformation. These 14 factors are termed transformation triggers. The theoretical evidence in support of these triggers is carried forward from a systematic review of the literature that was conducted previously. Trends in the operational evidence and the associated theoretical evidence were compared to identify areas of similarity and contrast. Areas of strong correlation between theoretical and operational evidence included four transformation triggers: a fully integrated pharma network, personalized medicine, translational research, and pervasive computing. Key areas of contrast included three transformation triggers, namely healthcare management focus, adaptive trials, and regulatory enforcement, for which the operational evidence was stronger than the theoretical evidence. The intent of this paper is to establish whether there is any operational evidence that supports the 14 transformation triggers previously identified during the theoretical part of this research. The theoretical evidence for these triggers was carried forward to this paper for study from an operational perspective. The practical evidence established in this paper was compared with the corresponding theoretical evidence to identify areas of similarity and difference. This resulted in four triggers with a strong relationship between operational and theoretical evidence: a fully integrated pharma network, personalized medicine, translational research, and pervasive computing. The areas of difference included three transformation triggers for which the operational evidence was stronger than the theoretical evidence: healthcare management focus, adaptive trials, and regulatory enforcement.

  4. Topics in the optimization of millimeter-wave mixers

    NASA Technical Reports Server (NTRS)

    Siegel, P. H.; Kerr, A. R.; Hwang, W.

    1984-01-01

    A user-oriented computer program for the analysis of single-ended Schottky diode mixers is described. The program is used to compute the performance of a 140 to 220 GHz mixer, and excellent agreement with measurements at 150 and 180 GHz is obtained. A sensitivity analysis indicates the influence of various diode and mount characteristics on mixer performance. A computer program for the analysis of varactor diode multipliers is also described. The diode operates either in the reverse-biased varactor mode or with substantial forward current flow, where the conversion mechanism is predominantly resistive. A description and analysis of a new H-plane rectangular waveguide transformer is reported. The transformer is made quickly and easily in split-block waveguide using a standard slitting saw. It is particularly suited for use in the millimeter-wave band, replacing conventional electroformed stepped transformers. A theoretical analysis of the transformer is given and good agreement is obtained with measurements made at X-band.

  5. Interactive multi-mode blade impact analysis

    NASA Technical Reports Server (NTRS)

    Alexander, A.; Cornell, R. W.

    1978-01-01

    The theoretical methodology used in developing an analysis for the response of turbine engine fan blades subjected to soft-body (bird) impacts is reported, and the computer program developed using this methodology as its basis is described. This computer program is an outgrowth of two programs that were previously developed for the purpose of studying problems of a similar nature (a 3-mode beam impact analysis and a multi-mode beam impact analysis). The present program utilizes an improved missile model that is interactively coupled with blade motion which is more consistent with actual observations. It takes into account local deformation at the impact area, blade camber effects, and the spreading of the impacted missile mass on the blade surface. In addition, it accommodates plate-type mode shapes. The analysis capability in this computer program represents a significant improvement in the development of the methodology for evaluating potential fan blade materials and designs with regard to foreign object impact resistance.

  6. Unsteady flow model for circulation-control airfoils

    NASA Technical Reports Server (NTRS)

    Rao, B. M.

    1979-01-01

    An analysis and a numerical lifting surface method are developed for predicting the unsteady airloads on two-dimensional circulation-control airfoils in incompressible flow. The analysis and the computer program are validated by correlating the computed unsteady airloads with test data and also with other theoretical solutions. Additionally, a mathematical model for predicting the bending-torsion flutter of a two-dimensional airfoil (a reference section of a wing or rotor blade) and a computer program using an iterative scheme are developed. The flutter program has a provision for using either the CC airfoil airloads program or the Theodorsen hard-flap solution to compute the unsteady lift and moment used in the flutter equations. The adopted mathematical model and the iterative scheme are used to perform a flutter analysis of a typical CC rotor blade reference section. The program works well within the basic assumption of incompressible flow.

  7. Framework for cognitive analysis of dynamic perfusion computed tomography with visualization of large volumetric data

    NASA Astrophysics Data System (ADS)

    Hachaj, Tomasz; Ogiela, Marek R.

    2012-10-01

    The proposed framework for cognitive analysis of perfusion computed tomography images is a fusion of image processing, pattern recognition, and image analysis procedures. The output data of the algorithm consist of regions of perfusion abnormalities, anatomy-atlas descriptions of brain tissues, measures of perfusion parameters, and a prognosis for infarcted tissues. That information is superimposed onto volumetric computed tomography data and displayed to radiologists. Our rendering algorithm enables rendering of large volumes on off-the-shelf hardware. This portability of the rendering solution is important because the framework can be run without expensive dedicated hardware. Other important factors are the theoretically unlimited size of the rendered volume and the possibility of trading off image quality against rendering speed. Such high-quality visualizations may be further used for intelligent brain perfusion abnormality identification and computer-aided diagnosis of selected types of pathologies.

  8. Computational Thermomechanical Modelling of Early-Age Silicate Composites

    NASA Astrophysics Data System (ADS)

    Vala, J.; Št'astník, S.; Kozák, V.

    2009-09-01

    Strains and stresses in early-age silicate composites, widely used in civil engineering, especially in fresh concrete mixtures, in addition to those caused by exterior mechanical loads, are results of complicated non-deterministic physical and chemical processes. Their numerical prediction at the macro-scale level requires the non-trivial physical analysis based on the thermodynamic principles, making use of micro-structural information from both theoretical and experimental research. The paper introduces a computational model, based on a nonlinear system of macroscopic equations of evolution, supplied with certain effective material characteristics, coming from the micro-scale analysis, and sketches the algorithm for its numerical analysis.

  9. A computational modeling of semantic knowledge in reading comprehension: Integrating the landscape model with latent semantic analysis.

    PubMed

    Yeari, Menahem; van den Broek, Paul

    2016-09-01

    It is a well-accepted view that the prior semantic (general) knowledge that readers possess plays a central role in reading comprehension. Nevertheless, computational models of reading comprehension have not integrated the simulation of semantic knowledge and online comprehension processes under a unified mathematical algorithm. The present article introduces a computational model that integrates the landscape model of comprehension processes with latent semantic analysis representation of semantic knowledge. In three sets of simulations of previous behavioral findings, the integrated model successfully simulated the activation and attenuation of predictive and bridging inferences during reading, as well as centrality estimations and recall of textual information after reading. Analyses of the computational results revealed new theoretical insights regarding the underlying mechanisms of the various comprehension phenomena.
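    To make the integration concrete, a toy sketch of the two ingredients is given below: LSA represents each text unit as a vector and measures semantic relatedness by cosine similarity, while a landscape-style cycle lets activation decay and then boosts units related to the unit currently being read. The vectors, decay constant, and update rule here are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two LSA word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional LSA vectors (real LSA spaces have ~300 dims)
vectors = {
    "knight": np.array([0.9, 0.1, 0.3, 0.0]),
    "sword":  np.array([0.8, 0.2, 0.4, 0.1]),
    "cloud":  np.array([0.0, 0.9, 0.1, 0.8]),
}

# One landscape-style cycle: activations decay, then each unit is boosted
# in proportion to its semantic relatedness to the unit currently read.
activation = {w: 0.0 for w in vectors}
decay, current = 0.5, "knight"
for w in vectors:
    activation[w] = decay * activation[w] + max(0.0, cosine(vectors[w], vectors[current]))
print(activation)   # "sword" ends up far more activated than "cloud"
```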

  10. A computational perspective of vibrational and electronic analysis of potential photosensitizer 2-chlorothioxanthone

    NASA Astrophysics Data System (ADS)

    Ali, Narmeen; Mansha, Asim; Asim, Sadia; Zahoor, Ameer Fawad; Ghafoor, Sidra; Akbar, Muhammad Usman

    2018-03-01

    This paper presents a combined theoretical and experimental study of the geometric, electronic, and vibrational properties of the 2-chlorothioxanthone (CTX) molecule, a potential photosensitizer. The FT-IR spectrum of CTX in the solid phase was recorded in the 4000-400 cm-1 region. The UV-Vis absorption spectrum was recorded in the laboratory and also computed at the DFT/B3LYP level in five different phases, viz. gas, water, DMSO, acetone, and ethanol. Quantum-mechanical IR and Raman spectra were also calculated for the title compound employing HF and DFT functionals with the 3-21G+, 6-31G+, 6-311G+, and 6-311G++ basis sets, and each vibrational frequency was assigned on the basis of the potential energy distribution (PED). Theoretical and experimental vibrational spectra, as well as the UV-Vis absorption spectra, were compared; the DFT-computed infrared and Raman spectra agree with the experimental spectra and support a reliable PED-based vibrational assignment. The calculated electronic properties, results of natural bond orbital (NBO) analysis, charge distribution, dipole moment, and energies are reported. Bimolecular quenching of the triplet state of CTX in the presence of triethylamine, 2-propanol/triethylamine, and diazabicyclooctane (DABCO) reflects the interactions between them; the bimolecular quenching rate constant is fastest for 3CTX in the presence of DABCO, reflecting their stronger interaction.

  11. Conformational analysis and circular dichroism of bilirubin, the yellow pigment of jaundice

    NASA Astrophysics Data System (ADS)

    Lightner, David A.; Person, Richard; Peterson, Blake; Puzicha, Gisbert; Pu, Yu-Ming; Bojadziev, Stefan

    1991-06-01

    Conformational analysis of (4Z,15Z)-bilirubin-IXα by molecular mechanics computations reveals a folded conformation at the global energy minimum. Powerful added stabilization is achieved through intramolecular hydrogen bonding. Theoretical treatment of bilirubin as a molecular exciton predicts an intense bisignate circular dichroism spectrum for the folded conformation: Δε ≅ 270 L·mol⁻¹·cm⁻¹ for the ~450 nm electronic transition(s). Synthesis of bilirubin analogs with propionic acid groups methylated at the α or β position introduces an allosteric effect that allows for an optical resolution of the pigments, with enantiomers exhibiting the theoretically predicted circular dichroism.

  12. Uncertainty quantification based on pillars of experiment, theory, and computation. Part I: Data analysis

    NASA Astrophysics Data System (ADS)

    Elishakoff, I.; Sarlin, N.

    2016-06-01

    In this paper we provide a general methodology for the analysis and design of systems involving uncertainties. Available experimental data are enclosed by geometric figures (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. These areas are then inflated, resorting to the Chebyshev inequality, in order to take into account the forecasted data. The next step consists of evaluating the response of the system when uncertainties are confined to one of the above five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of uncertain parameters in each hypothesized region. The results of the triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with the view of identifying the region that leads to the minimum of the maximum response. That response is identified as a result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. Use of the term "pillar" in the title was inspired by the News Release (2013) on the awarding of the Honda Prize to J. Tinsley Oden, stating, among other things, that "Dr. Oden refers to computational science as the "third pillar" of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in the x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements under this assumption.
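    The Chebyshev step above has a simple form: for any distribution, P(|X − μ| ≥ kσ) ≤ 1/k², so inflating a region to k = 1/√(1 − coverage) standard deviations guarantees the stated coverage regardless of distribution shape. A one-dimensional sketch (illustrative only; the paper inflates two-dimensional minimum-area figures):

```python
import numpy as np

def chebyshev_interval(samples, coverage=0.95):
    """Distribution-free interval containing at least `coverage` of the
    probability mass: P(|X - mu| >= k*sigma) <= 1/k**2 (Chebyshev),
    so k = 1/sqrt(1 - coverage) suffices for any distribution."""
    mu = np.mean(samples)
    sigma = np.std(samples, ddof=1)
    k = 1.0 / np.sqrt(1.0 - coverage)
    return mu - k * sigma, mu + k * sigma

data = np.array([9.8, 10.1, 10.3, 9.7, 10.0, 10.2])
print(chebyshev_interval(data))   # conservative bounds for forecasted data
```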

  13. Developing the next generation of diverse computer scientists: the need for enhanced, intersectional computing identity theory

    NASA Astrophysics Data System (ADS)

    Rodriguez, Sarah L.; Lehman, Kathleen

    2017-10-01

    This theoretical paper explores the need for enhanced, intersectional computing identity theory for the purpose of developing a diverse group of computer scientists for the future. Greater theoretical understanding of the identity formation process specifically for computing is needed in order to understand how students come to understand themselves as computer scientists. To ensure that the next generation of computer scientists is diverse, this paper presents a case for examining identity development intersectionally, understanding the ways in which women and underrepresented students may have difficulty identifying as computer scientists and be systematically oppressed in their pursuit of computer science careers. Through a review of the available scholarship, this paper suggests that creating greater theoretical understanding of the computing identity development process will inform the way in which educational stakeholders consider computer science practices and policies.

  14. Electroosmosis in a Finite Cylindrical Pore: Simple Models of End Effects

    PubMed Central

    2015-01-01

    A theoretical model of electroosmosis through a circular pore of radius a that traverses a membrane of thickness h is investigated. Both the cylindrical surface of the pore and the outer surfaces of the membrane are charged. When h ≫ a, end effects are negligible, and the results of full numerical computations of electroosmosis in an infinite pore agree with theory. When h = 0, end effects dominate, and computations again agree with analysis. For intermediate values of h/a, an approximate analysis that combines these two limiting cases captures the main features of computational results when the Debye length κ⁻¹ is small compared with the pore radius a. However, the approximate analysis fails when κ⁻¹ ≫ a, when the charge cloud due to the charged cylindrical walls of the pore spills out of the ends of the pore, and the electroosmotic flow is reduced. When this spilling out is included in the analysis, agreement with computation is restored. PMID:25020257
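    For orientation, the two thin-double-layer quantities the analysis turns on can be computed directly: the Debye length κ⁻¹ for a z:z electrolyte and the classical Helmholtz-Smoluchowski slip velocity. The sketch below uses these textbook formulas, not the paper's finite-pore model:

```python
import numpy as np

E0, KB, NA, EPS0 = 1.602e-19, 1.381e-23, 6.022e23, 8.854e-12

def debye_length(c_mol_per_L, z=1, T=298.15, eps_r=78.5):
    """Debye length (m) for a z:z electrolyte at molar concentration c."""
    n0 = c_mol_per_L * 1e3 * NA                       # ions per m^3
    return np.sqrt(eps_r * EPS0 * KB * T / (2.0 * n0 * (z * E0) ** 2))

def hs_velocity(zeta, E, eps_r=78.5, mu=8.9e-4):
    """Helmholtz-Smoluchowski electroosmotic slip velocity (m/s)."""
    return -eps_r * EPS0 * zeta * E / mu

print(debye_length(1e-3))        # ~9.6e-9 m for a 1 mM monovalent salt
print(hs_velocity(-0.05, 1e4))   # ~4e-4 m/s for zeta = -50 mV, E = 10 kV/m
```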

  15. Cumulative reports and publications

    NASA Technical Reports Server (NTRS)

    1993-01-01

    A complete list of Institute for Computer Applications in Science and Engineering (ICASE) reports is presented. Since ICASE reports are intended to be preprints of articles that will appear in journals or conference proceedings, the published reference is included when it is available. The major categories of the current ICASE research program are: applied and numerical mathematics, including numerical analysis and algorithm development; theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and computer science.

  16. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. The widely used calibration method based on Gaussian process models may, however, lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
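    The ordinary-least-squares baseline studied here simply picks the parameter value minimizing the squared discrepancy between model output and physical observations. A minimal sketch, assuming a hypothetical exponential-decay simulator (the model, data, and starting point are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import least_squares

def simulator(x, theta):
    """Hypothetical imperfect computer model with unknown parameters."""
    return theta[0] * np.exp(-theta[1] * x)

def calibrate_ols(x_obs, y_obs, theta0):
    """Ordinary-least-squares calibration of theta against physical data."""
    return least_squares(lambda th: simulator(x_obs, th) - y_obs, theta0).x

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 40)
# "Reality" differs from the simulator (a linear drift term), so the
# computer model is imperfect by construction.
y = 2.0 * np.exp(-0.7 * x) + 0.1 * x + rng.normal(0.0, 0.05, x.size)
print(calibrate_ols(x, y, theta0=[1.0, 1.0]))
```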

  17. Theoretical Modeling of Molecular and Electron Kinetic Processes. Volume I. Theoretical Formulation of Analysis and Description of Computer Program.

    DTIC Science & Technology

    1979-01-01

    synthesis proceeds by ignoring unacceptable syntax or other errors; protection against subsequent execution of a faulty reaction scheme can be ... resulting TAPE9. During subroutine synthesis and reaction processing, a search is made (for each secondary electron collision encountered) to ... program library, which can be catalogued and saved if any future specialized modifications (beyond the scope of the synthesis capability of LASER

  18. Evaluating a Multivariate Directional Connectivity Measure for Use in Electroencephalogram (EEG) Network Analysis Using a Conductance-Based Neuron Network Model

    DTIC Science & Technology

    2015-03-01

    Fig. 2: Values of 7 information-theoretic criteria plotted against the model order used; the legend is labeled according to the figures in which the power spectra ... spectrum (Brovelli et al. 2004). ... Identification of directed influence: Granger causality, Kullback-Leibler divergence, and complexity. Neural Computation. 2012;24(7):1722-1739. doi:10.1162

  19. Practical Use of Computationally Frugal Model Analysis Methods

    DOE PAGES

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.

  20. The new 3-(tert-butyl)-1-(2-nitrophenyl)-1H-pyrazol-5-amine: Experimental and computational studies

    NASA Astrophysics Data System (ADS)

    Cuenú, Fernando; Muñoz-Patiño, Natalia; Torres, John Eduard; Abonia, Rodrigo; Toscano, Rubén A.; Cobo, J.

    2017-11-01

    The molecular and supramolecular structure of the title compound, 3-(tert-butyl)-1-(2-nitrophenyl)-1H-pyrazol-5-amine (2NPz), from single-crystal X-ray diffraction (SC-XRD) and spectroscopic data analysis is reported. The computational analysis of the structure (geometry optimization, vibrational frequencies, nuclear magnetic resonance, and UV-Vis) is also described and compared with experimental data. Satisfactory theoretical calculations were performed for the molecule using density functional theory (DFT), with the B3LYP and B3PW91 functionals, and Hartree-Fock (HF), with the 6-311++G(d,p) basis set, using the GAUSSIAN 09 program package without any constraint on the geometry. With the VEDA 4 software, vibrational frequencies were assigned in terms of the potential energy distribution, while with the GaussSum software the percentage contribution of the frontier orbitals to each transition of the electronic absorption spectrum was established. The results indicate that the optimized geometry reproduces well the molecular structural parameters from SC-XRD. Theoretical data obtained for the vibrational analysis and NMR spectra are consistent with the experimental data.

  1. Denuded Data! Grounded Theory Using the NUDIST Computer Analysis Program: In Researching the Challenge to Teacher Self-Efficacy Posed by Students with Learning Disabilities in Australian Education.

    ERIC Educational Resources Information Center

    Burroughs-Lange, Sue G.; Lange, John

    This paper evaluates the effects of using the NUDIST (Non-numerical, Unstructured Data Indexing, Searching and Theorising) computer program to organize coded, qualitative data. The use of the software is discussed within the context of the study for which it was used: an Australian study that aimed to develop a theoretical understanding of the…

  2. Analytical evaluation of ILM sensors. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Kirk, R. J.

    1975-01-01

    The applicability of various sensing concepts to independent landing monitor systems was analyzed. Microwave landing system (MLS) accuracy requirements are presented along with a description of MLS airborne equipment. Computer programs developed during the analysis are described and include: a mathematical computer model for use in the performance assessment of reconnaissance sensor systems; a theoretical formulation of electromagnetic scattering to generate data at high incidence angles; atmospheric attenuation of microwaves; and microwave radiometry programs.

  3. Using a commercial CAD system for simultaneous input to theoretical aerodynamic programs and wind-tunnel model construction

    NASA Technical Reports Server (NTRS)

    Enomoto, F.; Keller, P.

    1984-01-01

    The Computer Aided Design (CAD) system's common geometry database was used to generate input for theoretical programs and numerically controlled (NC) tool paths for wind tunnel part fabrication. This eliminates the duplication of work in generating separate geometry databases for each type of analysis. Another advantage is that it reduces the uncertainty due to geometric differences when comparing theoretical aerodynamic data with wind tunnel data. The system was adapted to aerodynamic research by developing programs written in Design Analysis Language (DAL). These programs reduced the amount of time required to construct complex geometries and to generate input for theoretical programs. Certain shortcomings of the Design, Drafting, and Manufacturing (DDM) software limited the effectiveness of these programs and some of the Calma NC software. The complexity of aircraft configurations suggests that more types of surface and curve geometry should be added to the system. Some of these shortcomings may be eliminated as improved versions of DDM are made available.

  4. NRL Fact Book

    DTIC Science & Technology

    1985-04-01

    characteristics of targets ... Tank 9.1 m (30 ft) in diameter by 6.7 m (22 ft) deep, automated with computer control and analysis for detailed studies of acoustic ... structures; and conducts experiments in the deep ocean, in acoustically shallow water, and in the Arctic. The Division carries out theoretical and ... Laser Materials-Application Center ... Failure Analysis and Fractography Staff ... Research Activity Areas: Environmental Effects; Microstructural characterization

  5. Control Theoretic Modeling for Uncertain Cultural Attitudes and Unknown Adversarial Intent

    DTIC Science & Technology

    2009-02-01

    Constructive computational tools. SUBJECT TERMS: social learning, social networks, multiagent systems, game theory ... over-reactionary behaviors; 3) analysis of rational social learning in networks: analysis of belief propagation in social networks in various ... general methodology as a predictive device for social network formation and for communication network formation with constraints on the lengths of

  6. Supersonic second order analysis and optimization program user's manual

    NASA Technical Reports Server (NTRS)

    Clever, W. C.

    1984-01-01

    Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures of relatively slender vehicles at supersonic and moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a conceptual configuration design level of effort. Second-order small disturbance theory was utilized to meet this objective. Numerical codes were developed for the analysis and design of relatively general three-dimensional geometries. Results from the computations indicate good agreement with experimental results for a variety of wing, body, and wing-body shapes. Computational times of about one minute per case on a CDC 176 are typical for practical aircraft arrangements.

  7. Programming Probabilistic Structural Analysis for Parallel Processing Computer

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.

    1991-01-01

    The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large-scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speedups within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.

  8. RELIABLE COMPUTATION OF HOMOGENEOUS AZEOTROPES. (R824731)

    EPA Science Inventory

    It is important to determine the existence and composition of homogeneous azeotropes in the analysis of phase behavior and in the synthesis and design of separation systems, from both theoretical and practical standpoints. A new method for reliably locating an...

  9. A theoretical study of heterojunction and graded band gap type solar cells

    NASA Technical Reports Server (NTRS)

    Chiang, J. P. C.; Hauser, J. R.

    1979-01-01

    The work performed concentrated on including multisun effects, high temperature effects, and electron irradiation effects into the computer analysis program for heterojunction and graded bandgap solar cells. These objectives were accomplished and the program is now available for such calculations.

  10. Validation of Flight Critical Control Systems

    DTIC Science & Technology

    1991-12-01

    1985. [8] Avizienis, A., and Lyu, M., "On the Effectiveness of Multiversion Software in Digital Avionics", AIAA Computers in Aerospace VI Conference ... Experimentation and Modelling. NASA CR-165036, 1982. [12] Eckhardt, D. E., and Lee, L. D.: A Theoretical Basis for the Analysis of Multiversion

  11. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, its results can be much more powerful and useful than those of deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
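    A minimal sketch of the bootstrap-based probabilistic sensitivity analysis follows: patient-level outcomes are resampled with replacement, the model parameter (here an eradication rate) is recomputed on each resample, and the decision model is re-evaluated to build a distribution of the output. All data and costs below are illustrative assumptions, not figures from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical patient-level data: 1 = successful eradication
outcomes = rng.binomial(1, 0.85, size=200)
cost_per_course = 40.0   # illustrative therapy cost
cost_failure = 250.0     # illustrative downstream cost of a failure

n_boot = 5000
cost_per_cure = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(outcomes, size=outcomes.size, replace=True)
    p = resample.mean()                               # bootstrap eradication rate
    expected_cost = cost_per_course + (1.0 - p) * cost_failure
    cost_per_cure[b] = expected_cost / p              # simple decision model
print(np.percentile(cost_per_cure, [2.5, 50.0, 97.5]))   # uncertainty interval
```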

  12. Modeling the state dependent impulse control for computer virus propagation under media coverage

    NASA Astrophysics Data System (ADS)

    Liang, Xiyin; Pei, Yongzhen; Lv, Yunfei

    2018-02-01

    A state-dependent impulsive control model is proposed for the spread of computer viruses incorporating media coverage. Using the successor function, sufficient conditions for the existence and uniqueness of an order-1 periodic solution are presented first. Secondly, for two classes of periodic solutions, the geometric property of the successor function and the analogue of the Poincaré criterion are employed to obtain stability results. These results show that the number of infective computers stays below the threshold at all times. Finally, theoretical and numerical analysis shows that media coverage can delay the spread of computer viruses.
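    A toy version of the state-dependent impulse (not the authors' model, whose stability analysis rests on the successor function) can be simulated directly: integrate an SIR-type virus model and, whenever the infective fraction reaches a trigger level, instantaneously clean a fraction of the infected machines:

```python
import numpy as np

def simulate(beta=0.4, gamma=0.1, i_trigger=0.2, cure=0.6, dt=0.01, t_end=200.0):
    """Toy SIR-type virus model with a state-dependent impulse: whenever
    the infective fraction i reaches i_trigger, a fraction `cure` of the
    infected machines is instantaneously cleaned."""
    s, i, t = 0.95, 0.05, 0.0
    ts, infectives = [t], [i]
    while t < t_end:
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i, t = s + ds * dt, i + di * dt, t + dt
        if i >= i_trigger:        # the impulsive control event
            s += cure * i         # cleaned machines become susceptible again
            i *= 1.0 - cure
        ts.append(t)
        infectives.append(i)
    return np.array(ts), np.array(infectives)

ts, infectives = simulate()
print(infectives.max())   # held at (or just above) the trigger level
```

    Running this shows the infective fraction confined near the trigger level, mirroring the threshold behavior described above.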

  13. Notes on numerical reliability of several statistical analysis programs

    USGS Publications Warehouse

    Landwehr, J.M.; Tasker, Gary D.

    1999-01-01

    This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
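    The flavor of such a benchmark is easy to reproduce: NASTY-style variables stress numerical precision, for example by shifting data by a large constant, which leaves the true standard deviation unchanged but degrades naive one-pass formulas. A small illustration (not the ANASTY data set itself):

```python
import numpy as np

x = np.arange(1.0, 10.0)      # 1, 2, ..., 9
big = x + 1e8                 # NASTY-style: huge mean, tiny spread

def naive_std(a):
    """One-pass formula sqrt(E[x^2] - E[x]^2): numerically fragile."""
    return np.sqrt(np.mean(a * a) - np.mean(a) ** 2)

# The constant shift must not change the standard deviation (~2.582 here).
print(np.std(x), np.std(big))          # two-pass algorithm: both agree
print(naive_std(x), naive_std(big))    # naive formula: shifted value degrades
```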

  14. Theoretical basis of the DOE-2 building energy use analysis program

    NASA Astrophysics Data System (ADS)

    Curtis, R. B.

    1981-04-01

    A user-oriented, public-domain computer program was developed that enables architects and engineers to perform design and retrofit studies of the energy use of buildings under realistic weather conditions. DOE-2.1A has been named by the US DOE as the standard evaluation technique for the Congressionally mandated building energy performance standards (BEPS). A number of program design decisions were made that determine the breadth of applicability of DOE-2.1. Such design decisions are intrinsic to all building energy use analysis computer programs and determine the types of buildings or the kinds of HVAC systems that can be modeled. In particular, the weighting-factor method used in DOE-2 has both advantages and disadvantages relative to other computer programs.

  15. Developments in the application of the geometrical theory of diffraction and computer graphics to aircraft inter-antenna coupling analysis

    NASA Astrophysics Data System (ADS)

    Bogusz, Michael

    1993-01-01

    The need for a systematic methodology for the analysis of aircraft electromagnetic compatibility (EMC) problems is examined. The available computer aids used in aircraft EMC analysis are assessed and a theoretical basis is established for the complex algorithms which identify and quantify electromagnetic interactions. An overview is presented of one particularly well established aircraft antenna to antenna EMC analysis code, the Aircraft Inter-Antenna Propagation with Graphics (AAPG) Version 07 software. The specific new algorithms created to compute cone geodesics and their associated path losses and to graph the physical coupling path are discussed. These algorithms are validated against basic principles. Loss computations apply the uniform geometrical theory of diffraction and are subsequently compared to measurement data. The increased modelling and analysis capabilities of the newly developed AAPG Version 09 are compared to those of Version 07. Several models of real aircraft, namely the Electronic Systems Trainer Challenger, are generated and provided as a basis for this preliminary comparative assessment. Issues such as software reliability, algorithm stability, and quality of hardcopy output are also discussed.

  16. Combine experimental and theoretical investigation on an alkaloid-Dimethylisoborreverine

    NASA Astrophysics Data System (ADS)

    Singh, Swapnil; Singh, Harshita; Karthick, T.; Agarwal, Parag; Erande, Rohan D.; Dethe, Dattatraya H.; Tandon, Poonam

    2016-01-01

    A combined experimental (FT-IR, 1H and 13C NMR) and theoretical approach is used to study the structure and properties of the antimalarial drug dimethylisoborreverine (DMIB). Conformational analysis has been performed by plotting a one-dimensional potential energy curve computed using density functional theory (DFT) with the B3LYP/6-31G method, which predicted conformer A1 as the most stable conformer. After full geometry optimization, harmonic wavenumbers were computed for conformer A1 at the DFT/B3LYP/6-311++G(d,p) level. A complete assignment of all the vibrational modes has been performed on the basis of the potential energy distribution (PED), and the theoretical results were found to be in good agreement with the observed data. To predict the solvent effect, the UV-Vis spectra were calculated in different solvents by the polarizable continuum model using the TD-DFT method. Molecular docking studies were performed to test the biological activity of the sample using the SWISSDOCK web server and Hex 8.0.0 software. The molecular electrostatic potential (MESP) was plotted to identify the reactive sites of the molecule. Natural bond orbital (NBO) analysis was performed to gain a deeper insight into intramolecular charge transfer. Thermodynamical parameters were calculated to predict the direction of the chemical reaction.

  17. Robust MOE Detector for DS-CDMA Systems with Signature Waveform Mismatch

    NASA Astrophysics Data System (ADS)

    Lin, Tsui-Tsai

    In this letter, a decision-directed MOE detector with excellent robustness against signature waveform mismatch is proposed for DS-CDMA systems. Both the theoretical analysis and computer simulation results demonstrate that the proposed detector provides better SINR performance than conventional detectors.

  18. A new theoretical approach to analyze complex processes in cytoskeleton proteins.

    PubMed

    Li, Xin; Kolomeisky, Anatoly B

    2014-03-20

    Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions stimulated by hydrolysis chemical reactions in their monomers. Current theoretical methods provide a comprehensive picture of biochemical and biophysical processes in cytoskeleton proteins, but the description is only qualitative under biologically relevant conditions because the mean-field models utilized neglect correlations. We develop a new theoretical method to describe dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on an analysis of the probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, it is shown that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.

  19. Diagnosis of cutaneous thermal burn injuries by multispectral imaging analysis

    NASA Technical Reports Server (NTRS)

    Anselmo, V. J.; Zawacki, B. E.

    1978-01-01

    Special photographic or television image analysis is shown to be a potentially useful technique to assist the physician in the early diagnosis of thermal burn injury. A background on the medical and physiological problems of burns is presented. The proposed methodology for burns diagnosis from both the theoretical and clinical points of view is discussed. The television/computer system constructed to accomplish this analysis is described, and the clinical results are discussed.

  20. Deformations of thick two-material cylinder under axially varying radial pressure

    NASA Technical Reports Server (NTRS)

    Patel, Y. A.

    1976-01-01

    Stresses and deformations in a thick, short composite cylinder subjected to axially varying radial pressure are studied. The effect of slippage at the interface is examined. In the NASTRAN finite element model, the multipoint constraint feature is utilized. Results are compared with theoretical analysis and with the SAP-IV computer code. Results from the NASTRAN computer code are in good agreement with the analytical solutions and suggest a considerable influence of interfacial slippage on the axial bending stresses in the cylinder.

  1. A study of the accuracy of neutrally buoyant bubbles used as flow tracers in air

    NASA Technical Reports Server (NTRS)

    Kerho, Michael F.

    1993-01-01

    Research has been performed to determine the accuracy of neutrally buoyant and near neutrally buoyant bubbles used as flow tracers in air. Theoretical, computational, and experimental results are presented to evaluate the dynamics of bubble trajectories and the factors affecting their ability to trace flow-field streamlines. The equation of motion for a single bubble was obtained and evaluated using a computational scheme to determine the factors which affect a bubble's trajectory. A two-dimensional experiment was also conducted to determine bubble trajectories in the stagnation region of a NACA 0012 airfoil at 0 deg angle of attack using a commercially available helium bubble generation system. Physical properties of the experimental bubble trajectories, including the density ratio and diameter of the individual bubbles, were estimated using the computational scheme. The helium bubble system was then used to visualize and document the flow field about a 30 deg swept semispan wing with simulated glaze ice. Results were compared to Navier-Stokes calculations and surface oil flow visualization. The theoretical and computational analyses have shown that neutrally buoyant bubbles will trace even the most complex flow patterns. Experimental analysis revealed that the use of bubbles to trace flow patterns should be limited to qualitative measurements unless care is taken to ensure neutral buoyancy, because of the difficulty of producing neutrally buoyant bubbles.

  2. A simplified analysis of propulsion installation losses for computerized aircraft design

    NASA Technical Reports Server (NTRS)

    Morris, S. J., Jr.; Nelms, W. P., Jr.; Bailey, R. O.

    1976-01-01

    A simplified method is presented for computing the installation losses of aircraft gas turbine propulsion systems. The method has been programmed for use in computer aided conceptual aircraft design studies that cover a broad range of Mach numbers and altitudes. The items computed are: inlet size, pressure recovery, additive drag, subsonic spillage drag, bleed and bypass drags, auxiliary air systems drag, boundary-layer diverter drag, nozzle boattail drag, and the interference drag on the region adjacent to multiple nozzle installations. The methods for computing each of these installation effects are described and computer codes for the calculation of these effects are furnished. The results of these methods are compared with selected data for the F-5A and other aircraft. The computer program can be used with uninstalled engine performance information which is currently supplied by a cycle analysis program. The program, including comments, is about 600 FORTRAN statements long, and uses both theoretical and empirical techniques.

  3. Quantum Computation

    NASA Astrophysics Data System (ADS)

    Aharonov, Dorit

    In the last few years, theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review tells the story of theoretical quantum computation. I leave out the developing topic of experimental realizations of the model, and neglect other closely related topics, namely quantum information and quantum communication. As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines, and Boolean circuits. In light of these models, I define quantum computers and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies, and finite precision. This question cannot be separated from that of quantum complexity, because any realistic model will inevitably be subject to such inaccuracies. I have tried to put all results in context, asking what the implications for other issues in computer science and physics are. At the end of this review, I make these connections explicit by discussing the possible implications of quantum computation for fundamental physical questions, such as the transition from quantum to classical physics.

  4. Basic and applied research related to the technology of space energy conversion systems

    NASA Technical Reports Server (NTRS)

    Hertzberg, A.; Mattick, A. T.; Bruckner, A. P.

    1988-01-01

    The first six months' research effort on the Liquid Droplet Radiator (LDR) focussed on experimental and theoretical studies of radiation by an LDR droplet cloud. Improvements in the diagnostics for the radiation facility have been made which have permitted an accurate experimental test of theoretical predictions of LDR radiation over a wide range of optical depths, using a cloud of Dow silicone oil droplets. In conjunction with these measurements an analysis was made of the evolution of the cylindrical droplet cloud generated by a 2300-hole orifice plate. This analysis indicates that a considerable degree of agglomeration of droplets occurs over the first meter of travel. Theoretical studies have centered on developments of an efficient means of computing the angular scattering distribution from droplets in an LDR droplet cloud, so that a parameter study can be carried out for LDR radiative performance vs fluid optical properties and cloud geometry.

  5. Free space optical ultra-wideband communications over atmospheric turbulence channels.

    PubMed

    Davaslioğlu, Kemal; Cağiral, Erman; Koca, Mutlu

    2010-08-02

    A hybrid impulse radio ultra-wideband (IR-UWB) communication system in which UWB pulses are transmitted over long distances through free space optical (FSO) links is proposed. FSO channels are characterized by random fluctuations in the received light intensity mainly due to the atmospheric turbulence. For this reason, theoretical detection error probability analysis is presented for the proposed system for a time-hopping pulse-position modulated (TH-PPM) UWB signal model under weak, moderate and strong turbulence conditions. For the optical system output distributed over radio frequency UWB channels, composite error analysis is also presented. The theoretical derivations are verified via simulation results, which indicate a computationally and spectrally efficient UWB-over-FSO system.
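    A common way to carry out this kind of average error-probability analysis under weak turbulence is to model the received intensity as lognormal and average the conditional error probability by Gauss-Hermite quadrature. The sketch below does this for a generic Q-function detector; the signal model is a simplification, not the paper's exact TH-PPM analysis:

```python
import numpy as np
from scipy.special import erfc

def q_func(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def avg_error_prob(snr_db, sigma_x=0.1, n_nodes=30):
    """E[Q(sqrt(2*snr)*I)] with lognormal intensity I = exp(2X),
    X ~ N(-sigma_x**2, sigma_x**2), so that E[I] = 1 (weak turbulence).
    The expectation is evaluated by Gauss-Hermite quadrature."""
    snr = 10.0 ** (snr_db / 10.0)
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    x = np.sqrt(2.0) * sigma_x * nodes - sigma_x ** 2
    intensity = np.exp(2.0 * x)
    return np.sum(weights * q_func(np.sqrt(2.0 * snr) * intensity)) / np.sqrt(np.pi)

for snr_db in (0, 5, 10):
    print(snr_db, avg_error_prob(snr_db))
```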

  6. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  7. Theoretic model and computer simulation of separating mixture metal particles from waste printed circuit board by electrostatic separator.

    PubMed

    Li, Jia; Xu, Zhenming; Zhou, Yaohe

    2008-05-30

    Traditionally, mixed metals from waste printed circuit boards (PCBs) were sent to a smelting plant to refine pure copper, and valuable metals present at low content in PCBs (aluminum, zinc, and tin) were lost during smelting. A new method using a roll-type electrostatic separator (RES) to recover low-content metals from waste PCBs is presented in this study. A theoretical model, established from the computed electric field and an analysis of the forces on the particles, was implemented as a MATLAB program designed to simulate the process of separating mixed metal particles. Electrical, material, and mechanical factors were analyzed to optimize the operating parameters of the separator. The experimental results of separating copper and aluminum particles by RES are in good agreement with the computer simulation results. The model can be used to simulate the separation of other metal (tin, zinc, etc.) particles in the process of recycling waste PCBs by RES.
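    The core of such a trajectory model is the force balance on a charged particle after it leaves the roll: gravity plus the electrostatic force (q/m)E. The paper's model also treats particle charging and image forces; those are omitted in this simplified Python sketch (offered in place of the original MATLAB program, with illustrative parameter values):

```python
import numpy as np

def trajectory(q_over_m, e_field, v0, dt=1e-4, steps=2000):
    """Ballistic path of a charged particle after leaving the roll:
    acceleration = gravity + (q/m)*E. Drag, image forces, and particle
    charging are neglected in this simplified sketch."""
    accel = np.array([0.0, -9.81]) + q_over_m * np.asarray(e_field, dtype=float)
    pos = np.zeros(2)
    vel = np.asarray(v0, dtype=float)
    path = [pos.copy()]
    for _ in range(steps):
        vel = vel + accel * dt      # explicit Euler integration
        pos = pos + vel * dt
        path.append(pos.copy())
    return np.array(path)

# Illustrative values: q/m = 1e-3 C/kg, horizontal field 2e5 V/m
path = trajectory(1e-3, [2e5, 0.0], v0=[1.0, 0.0])
print(path[-1])   # landing point relative to release, in metres
```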

  8. Theoretical Characterization of Visual Signatures and Calculation of Approximate Global Harmonic Frequency Scaling Factors

    NASA Astrophysics Data System (ADS)

    Kashinski, D. O.; Nelson, R. G.; Chase, G. M.; di Nallo, O. E.; Byrd, E. F. C.

    2016-05-01

    We are investigating the accuracy of theoretical models used to predict the visible, ultraviolet, and infrared spectra, as well as other properties, of product materials ejected from the muzzle of currently fielded systems. Recent advances in solid propellants have made the management of muzzle signature (flash) a principal issue in weapons development across the calibers. A priori prediction of the electromagnetic spectra of formulations will allow researchers to tailor blends that yield desired signatures and determine spectrographic detection ranges. Quantum chemistry methods at various levels of sophistication have been employed to optimize molecular geometries, compute unscaled harmonic frequencies, and determine the optical spectra of specific gas-phase species. Electronic excitations are being computed using Time Dependent Density Functional Theory (TD-DFT). Calculation of approximate global harmonic frequency scaling factors for specific DFT functionals is also in progress. A full statistical analysis and reliability assessment of the computational results is currently underway. Work supported by the ARL, DoD-HPCMP, and USMA.
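    A global harmonic scaling factor is conventionally obtained by least squares: minimizing Σᵢ(λωᵢ − νᵢ)² over the scale λ, where ωᵢ are computed harmonic frequencies and νᵢ the measured fundamentals, gives λ = Σωᵢνᵢ / Σωᵢ². The sketch below implements this standard prescription with hypothetical frequencies; it is not necessarily the authors' exact protocol:

```python
import numpy as np

def global_scaling_factor(omega_calc, nu_expt):
    """Least-squares global scaling factor: minimizing
    sum_i (lam*omega_i - nu_i)**2 gives lam = sum(w*v) / sum(w*w)."""
    omega = np.asarray(omega_calc, dtype=float)
    nu = np.asarray(nu_expt, dtype=float)
    lam = np.sum(omega * nu) / np.sum(omega * omega)
    rmse = np.sqrt(np.mean((lam * omega - nu) ** 2))
    return lam, rmse

# Hypothetical harmonic (calculated) vs fundamental (measured) cm^-1 values
omega = [3210.0, 1755.0, 1180.0, 640.0]
nu = [3055.0, 1700.0, 1152.0, 628.0]
print(global_scaling_factor(omega, nu))   # lam ~ 0.96, plus its RMSE
```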

  9. New vibro-acoustic paradigms in biological tissues with application to diagnosis of pulmonary disorders

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangling

    The fundamental objective of the present study is to improve our understanding of audible sound propagation in the pulmonary system and torso. A related applied objective is to assess the feasibility of using audible acoustics for diagnosis of specific pulmonary conditions, such as pneumothorax (PTX). To accomplish these objectives, this study includes theoretical, computational, and experimental developments aimed at: (1) better identifying the mechanical dynamic properties of soft biological tissues found in the torso region, (2) investigating the mechanisms of sound attenuation that occur when a PTX is present using greatly simplified theoretical and computational models, and (3) exploring the feasibility and utility of more comprehensive and precise computational finite element models of audible sound propagation in the pulmonary system and torso that would aid in related diagnostic developments. Mechanical material properties of soft biological tissue are studied for the low audible frequency range. The sensitivities of theoretical solutions for radiation impedance and surface wave motion to the shear viscoelastic material constants are compared. Theoretical solutions are also compared to experimental measurements and numerical results from finite element analysis. It is found that, while prior theoretical solutions for radiation impedance are accurate, the use of such measurements to estimate shear viscoelastic constants is not as precise as the use of surface wave measurements. The feasibility of using audible sound for diagnosis of pneumothorax is then studied. Simplified one- and two-dimensional theoretical and numerical models of sound transmission through the pulmonary system and chest region to the chest wall surface are developed to more clearly understand the mechanism of energy loss when a pneumothorax is present, relative to a baseline case. These models, informed by a canine study, predict significant decreases in acoustic transmission strength when a pneumothorax is present, in qualitative agreement with experimental measurements in dogs. Finally, the feasibility of building three-dimensional computational models is studied, based either on CT images of a human subject or on a combination of the Horsfield airway model with the geometry of other parts approximated from medical illustrations. Preliminary results from these models show the same trend of acoustic energy loss when a PTX is present.

  10. Editorial

    NASA Astrophysics Data System (ADS)

    Liu, Shuai

    Fractals represent a special feature of nature and of functional objects. Fractal-based computing can be applied to many research domains because of its invariance under deformation, its variable parameters, and its many unpredictable changes. Theoretical research on, and practical application of, fractal-based computing have been hotspots for 30 years and will continue to be. There are many pending issues awaiting solutions in this domain; thus this thematic issue, containing 14 papers, publishes state-of-the-art developments in the theory and application of fractal-based computing, including mathematical analysis and novel engineering applications. The topics cover fractal and multifractal features in the application and solution of nonlinear ODEs and equations.

  11. On the Achievable Throughput Over TVWS Sensor Networks

    PubMed Central

    Caleffi, Marcello; Cacciapuoti, Angela Sara

    2016-01-01

    In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. Through the letter, we first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression for the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565

  12. Natural three-qubit interactions in one-way quantum computing

    NASA Astrophysics Data System (ADS)

    Tame, M. S.; Paternostro, M.; Kim, M. S.; Vedral, V.

    2006-02-01

    We address the effects of natural three-qubit interactions on the computational power of one-way quantum computation. A benefit of using more sophisticated entanglement structures is the ability to construct compact and economic simulations of quantum algorithms with limited resources. We show that the features of our study are embodied by suitably prepared optical lattices, where effective three-spin interactions have been theoretically demonstrated. We use this to provide a compact construction for the Toffoli gate. Information flow and two-qubit interactions are also outlined, together with a brief analysis of relevant sources of imperfection.

  13. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.
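    To convey the flavor of such wall-clock models, a deliberately crude toy version is sketched below (the paper's validated models are more detailed): jobs run in waves across the available cores; every shared-storage (SGE) job pays the NFS transfer cost, while a fraction of Hadoop jobs find their data co-located and skip it. The locality fraction and timings are assumptions:

```python
import math

def sge_wall_clock(n_jobs, t_compute, t_transfer, n_cores):
    """Toy shared-NFS model: every job pays the transfer cost and
    jobs execute in waves of n_cores at a time (times in hours)."""
    waves = math.ceil(n_jobs / n_cores)
    return waves * (t_transfer + t_compute)

def hadoop_wall_clock(n_jobs, t_compute, t_transfer, n_cores, locality=0.9):
    """Toy co-location model: a fraction `locality` of jobs read
    local data and skip the transfer cost entirely."""
    waves = math.ceil(n_jobs / n_cores)
    return waves * ((1.0 - locality) * t_transfer + t_compute)

# Transfer-dominated workload: many short jobs over large datasets
print(sge_wall_clock(1000, 0.05, 0.20, 100))      # 2.5 hours
print(hadoop_wall_clock(1000, 0.05, 0.20, 100))   # 0.7 hours
```

    In this transfer-dominated regime ("short" jobs, "large" data) the co-location advantage dominates, which is the regime the abstract identifies as crossing from "large scale" into "big data".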

  14. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  15. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  16. A new tool called DISSECT for analysing large genomic data sets using a Big Data approach

    PubMed Central

    Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert

    2015-01-01

    Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is new, freely available software able to exploit the distributed-memory parallel computational architectures of compute clusters to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010

  17. Introduction to bioinformatics.

    PubMed

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
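
    To make one of these subproblems concrete, the toy sketch below scores a global pairwise alignment with the classical Needleman-Wunsch dynamic program; the scoring scheme (match +1, mismatch -1, gap -2) is an arbitrary illustrative choice.

        # Toy global-alignment score (Needleman-Wunsch); parameters are illustrative.
        def nw_score(a, b, match=1, mismatch=-1, gap=-2):
            rows, cols = len(a) + 1, len(b) + 1
            dp = [[0] * cols for _ in range(rows)]
            for i in range(1, rows):
                dp[i][0] = i * gap               # leading gaps against sequence b
            for j in range(1, cols):
                dp[0][j] = j * gap               # leading gaps against sequence a
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                    dp[i][j] = max(diag, dp[i-1][j] + gap, dp[i][j-1] + gap)
            return dp[-1][-1]

        print(nw_score("GATTACA", "GCATGCU"))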

  18. Toward Theory-Based Instruction in Scientific Problem Solving.

    ERIC Educational Resources Information Center

    Heller, Joan I.; And Others

    Several empirical and theoretical analyses related to scientific problem-solving are reviewed, including: detailed studies of individuals at different levels of expertise, and computer models simulating some aspects of human information processing during problem solving. Analysis of these studies has revealed many facets about the nature of the…

  19. A Coding Scheme to Analyse the Online Asynchronous Discussion Forums of University Students

    ERIC Educational Resources Information Center

    Biasutti, Michele

    2017-01-01

    The current study describes the development of a content analysis coding scheme to examine transcripts of online asynchronous discussion groups in higher education. The theoretical framework comprises the theories regarding knowledge construction in computer-supported collaborative learning (CSCL) based on a sociocultural perspective. The coding…

  20. Interoperating Cloud-based Virtual Farms

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.

    2015-12-01

    The present work aims at optimizing the use of computing resources available at the grid Italian Tier-2 sites of the ALICE experiment at CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic (“on-demand”) provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. Storage capacities of the participating sites are seen as a single federated storage area, preventing the need of mirroring data across them: high data access efficiency is guaranteed by location-aware analysis software and storage interfaces, in a transparent way from an end-user perspective. Moreover, the interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. The tests of the investigated solutions for both cloud computing and distributed storage on wide area network will be presented.

  1. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiao, Xiangmin; Einstein, Daniel R.; Dyedov, Volodymyr

    2010-03-24

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques including eigenvalue analysis, weighted least squares approximations, and numerical minimization, resulting in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods.

  2. Analysis of ballistic transport in nanoscale devices by using an accelerated finite element contact block reduction approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, H.; Li, G., E-mail: gli@clemson.edu

    2014-08-28

    An accelerated Finite Element Contact Block Reduction (FECBR) approach is presented for computational analysis of ballistic transport in nanoscale electronic devices with arbitrary geometry and unstructured mesh. Finite element formulation is developed for the theoretical CBR/Poisson model. The FECBR approach is accelerated through eigen-pair reduction, lead mode space projection, and component mode synthesis techniques. The accelerated FECBR is applied to perform quantum mechanical ballistic transport analysis of a DG-MOSFET with taper-shaped extensions and a DG-MOSFET with Si/SiO2 interface roughness. The computed electrical transport properties of the devices obtained from the accelerated FECBR approach and associated computational cost as a function of system degrees of freedom are compared with those obtained from the original CBR and direct inversion methods. The performance of the accelerated FECBR in both its accuracy and efficiency is demonstrated.

  3. Partial correlation-based functional connectivity analysis for functional near-infrared spectroscopy signals

    NASA Astrophysics Data System (ADS)

    Akın, Ata

    2017-12-01

    A theoretical framework, a partial correlation-based functional connectivity (PC-FC) analysis for functional near-infrared spectroscopy (fNIRS) data, is proposed. It is based on generating a common background signal, a high-passed version of the fNIRS data averaged over all channels, which serves as the regressor when computing the PC between pairs of channels. The approach has been applied to real data collected during a Stroop task. The results show a strong significance in the global efficiency (GE) metric computed by the PC-FC analysis for neutral, congruent, and incongruent stimuli (NS, CS, IcS; GE_N = 0.10±0.009, GE_C = 0.11±0.01, GE_IC = 0.13±0.015, p = 0.0073). A positive correlation (r = 0.729, p = 0.0259) is observed between the interference of reaction times (incongruent - neutral) and the interference of GE values (GE_IC - GE_N) computed from [HbO] signals.
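
    A minimal sketch of the core computation, with details that are our assumptions rather than the paper's implementation: regress the common background signal out of each channel and correlate the residuals, one standard route to a partial correlation.

        # Partial correlation of two channels given a common regressor (sketch).
        import numpy as np

        def partial_corr(x, y, z):
            Z = np.column_stack([np.ones_like(z), z])           # regressor with intercept
            rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residual of x on z
            ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residual of y on z
            return np.corrcoef(rx, ry)[0, 1]

        rng = np.random.default_rng(0)
        background = rng.standard_normal(1000)             # stand-in for the common signal
        ch1 = background + 0.5 * rng.standard_normal(1000)
        ch2 = background + 0.5 * rng.standard_normal(1000)
        print(np.corrcoef(ch1, ch2)[0, 1])                 # inflated by the shared background
        print(partial_corr(ch1, ch2, background))          # near zero once it is removed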

  4. 2001 Flight Mechanics Symposium

    NASA Technical Reports Server (NTRS)

    Lynch, John P. (Editor)

    2001-01-01

    This conference publication includes papers and abstracts presented at the Flight Mechanics Symposium held on June 19-21, 2001. Sponsored by the Guidance, Navigation and Control Center of Goddard Space Flight Center, this symposium featured technical papers on a wide range of issues related to attitude/orbit determination, prediction and control; attitude simulation; attitude sensor calibration; theoretical foundation of attitude computation; dynamics model improvements; autonomous navigation; constellation design and formation flying; estimation theory and computational techniques; Earth environment mission analysis and design; and, spacecraft re-entry mission design and operations.

  5. User's guide for a computer program to analyze the LRC 16 ft transonic dynamics tunnel cable mount system

    NASA Technical Reports Server (NTRS)

    Barbero, P.; Chin, J.

    1973-01-01

    The theoretical derivation of the set of equations is discussed which is applicable to modeling the dynamic characteristics of aeroelastically-scaled models flown on the two-cable mount system in a 16 ft transonic dynamics tunnel. The computer program provided for the analysis is also described. The program calculates model trim conditions as well as 3 DOF longitudinal and lateral/directional dynamic conditions for various flying cable and snubber cable configurations. Sample input and output are included.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. However, the common calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, this line of study is extended to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
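
    To fix ideas on the ordinary-least-squares baseline, here is a minimal sketch with a hypothetical quadratic computer model standing in for a real one; only the criterion, minimizing squared misfit to the physical observations, reflects the abstract.

        # OLS calibration of a single model parameter (hypothetical model f).
        import numpy as np
        from scipy.optimize import minimize_scalar

        def f(x, theta):
            return theta * x**2                  # stand-in "computer model"

        rng = np.random.default_rng(1)
        x_obs = np.linspace(0.0, 1.0, 50)
        # Imperfect-model setting: the true process has a term f cannot reproduce.
        y_obs = 2.0 * x_obs**2 + 0.3 * x_obs + 0.05 * rng.standard_normal(50)

        res = minimize_scalar(lambda th: np.sum((y_obs - f(x_obs, th))**2))
        print(res.x)                             # OLS estimate of theta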

  7. VIBRA: An interactive computer program for steady-state vibration response analysis of linear damped structures

    NASA Technical Reports Server (NTRS)

    Bowman, L. M.

    1984-01-01

    An interactive steady-state frequency-response computer program with graphics is documented. Single or multiple forces may be applied to the structure, and a modal superposition approach is used to calculate the response. The method applies to linear, proportionally damped structures in which the damping may be viscous or structural. The theoretical approach and program organization are described. Example problems, user instructions, and a sample interactive session are given to demonstrate the program's capability in solving a variety of problems.
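
    For reference, a minimal modal-superposition sketch of a steady-state response computation; this is a generic textbook formulation with assumed mass-normalized modes, not the VIBRA code itself.

        # Steady-state response by modal superposition (generic sketch, not VIBRA).
        import numpy as np

        def steady_state_response(Phi, wn, zeta, F, w):
            """Phi: mass-normalized mode shapes (dof x modes); wn in rad/s."""
            q = (Phi.T @ F) / (wn**2 - w**2 + 2j * zeta * wn * w)   # modal coordinates
            return Phi @ q                                          # physical response

        # Two-mode toy system with a single force on the first degree of freedom.
        Phi = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
        wn = np.array([10.0, 25.0])
        zeta = np.array([0.02, 0.02])
        F = np.array([1.0, 0.0])
        print(np.abs(steady_state_response(Phi, wn, zeta, F, w=9.5)))  # near first resonance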

  8. Toward integration of in vivo molecular computing devices: successes and challenges

    PubMed Central

    Hayat, Sikander; Hinze, Thomas

    2008-01-01

    The computing power unleashed by biomolecule based massively parallel computational units has been the focus of many interdisciplinary studies that couple state of the art ideas from mathematical logic, theoretical computer science, bioengineering, and nanotechnology to fulfill some computational task. The output can influence, for instance, release of a drug at a specific target, gene expression, cell population, or be a purely mathematical entity. Analysis of the results of several studies has led to the emergence of a general set of rules concerning the implementation and optimization of in vivo computational units. Taking two recent studies on in vivo computing as examples, we discuss the impact of mathematical modeling and simulation in the field of synthetic biology and on in vivo computing. The impact of the emergence of gene regulatory networks and the potential of proteins acting as “circuit wires” on the problem of interconnecting molecular computing device subunits is also highlighted. PMID:19404433

  9. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
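
    The variogram analogy can be made concrete with a one-factor toy sketch (our illustration, not STAR-VARS): estimate gamma(h) = 0.5 E[(y(x+h) - y(x))^2] from random pairs a distance h apart; how gamma grows with h summarizes the response's sensitivity across perturbation scales.

        # Directional variogram of a toy model response (illustration, not STAR-VARS).
        import numpy as np

        def model(x):
            return np.sin(3.0 * x) + 0.3 * x     # hypothetical model response

        def variogram(h, n=2000, seed=2):
            rng = np.random.default_rng(seed)
            x = rng.uniform(0.0, 1.0 - h, n)     # random base points in the factor range
            d = model(x + h) - model(x)
            return 0.5 * np.mean(d**2)

        for h in (0.05, 0.1, 0.2, 0.4):
            print(h, round(variogram(h), 4))     # growth with h reflects sensitivity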

  10. Proof test of the computer program BUCKY for plasticity problems

    NASA Technical Reports Server (NTRS)

    Smith, James P.

    1994-01-01

    A theoretical equation describing the elastic-plastic deformation of a cantilever beam subject to a constant pressure is developed. The theoretical result is compared numerically to the computer program BUCKY for the case of an elastic-perfectly plastic specimen. It is shown that the theoretical and numerical results compare favorably in the plastic range. Comparisons are made to another research code to further validate the BUCKY results. This paper serves as a quality test for the computer program BUCKY developed at NASA Johnson Space Center.
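
    The record does not reproduce the beam equations. As a hedged orientation (the rectangular section, uniform pressure, and symbols are assumptions of ours), an elastic-perfectly plastic analysis of a cantilever of width $b$, thickness $h$ and length $L$ under uniform pressure $q$ gives, at the root section,

        M_{\max} = \frac{q\,b\,L^{2}}{2}, \qquad M_{p} = \frac{\sigma_{y}\,b\,h^{2}}{4}, \qquad q_{c} = \frac{\sigma_{y}\,h^{2}}{2\,L^{2}},

    so collapse occurs when the root moment reaches the plastic moment, at pressure $q_{c}$; closed-form limits of this kind are what a code like BUCKY can be checked against in the plastic range.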

  11. PREFACE: 1st International Workshop on Theoretical and Computational Physics: Condensed Matter, Soft Matter and Materials Physics & 38th National Conference on Theoretical Physics

    NASA Astrophysics Data System (ADS)

    2014-09-01

    This volume contains selected papers presented at the 38th National Conference on Theoretical Physics (NCTP-38) and the 1st International Workshop on Theoretical and Computational Physics: Condensed Matter, Soft Matter and Materials Physics (IWTCP-1). Both the conference and the workshop were held from 29 July to 1 August 2013 in Pullman hotel, Da Nang, Vietnam. The IWTCP-1 was a new activity of the Vietnamese Theoretical Physics Society (VTPS) organized in association with the 38th National Conference on Theoretical Physics (NCTP-38), the most well-known annual scientific forum dedicated to the dissemination of the latest development in the field of theoretical physics within the country. The IWTCP-1 was also an External Activity of the Asia Pacific Center for Theoretical Physics (APCTP). The overriding goal of the IWTCP is to provide an international forum for scientists and engineers from academia to share ideas, problems and solution relating to the recent advances in theoretical physics as well as in computational physics. The main IWTCP motivation is to foster scientific exchanges between the Vietnamese theoretical and computational physics community and world-wide scientists as well as to promote high-standard level of research and education activities for young physicists in the country. About 110 participants coming from 10 countries participated in the conference and the workshop. 4 invited talks, 18 oral contributions and 46 posters were presented at the conference. In the workshop we had one keynote lecture and 9 invited talks presented by international experts in the fields of theoretical and computational physics, together with 14 oral and 33 poster contributions. The proceedings were edited by Nguyen Tri Lan, Trinh Xuan Hoang, and Nguyen Ai Viet. We would like to thank all invited speakers, participants and sponsors for making the conference and the workshop successful. Nguyen Ai Viet Chair of NCTP-38 and IWTCP-1

  12. Metabolic reconstruction, constraint-based analysis and game theory to probe genome-scale metabolic networks.

    PubMed

    Ruppin, Eytan; Papin, Jason A; de Figueiredo, Luis F; Schuster, Stefan

    2010-08-01

    With the advent of modern omics technologies, it has become feasible to reconstruct (quasi-) whole-cell metabolic networks and characterize them in more and more detail. Computer simulations of the dynamic behavior of such networks are difficult due to a lack of kinetic data and to computational limitations. In contrast, network analysis based on appropriate constraints such as the steady-state condition (constraint-based analysis) is feasible and allows one to derive conclusions about the system's metabolic capabilities. Here, we review methods for the reconstruction of metabolic networks, modeling techniques such as flux balance analysis and elementary flux modes and current progress in their development and applications. Game-theoretical methods for studying metabolic networks are discussed as well. Copyright © 2010 Elsevier Ltd. All rights reserved.
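
    Constraint-based analysis of the steady state reduces to a linear program: maximize c^T v subject to S v = 0 and bounds on the fluxes v. The three-reaction toy network below is our own minimal illustration of that formulation, not an example from the review.

        # Toy flux balance analysis as a linear program.
        import numpy as np
        from scipy.optimize import linprog

        # One metabolite: produced by uptake v0, consumed by reactions v1 and v2.
        S = np.array([[1.0, -1.0, -1.0]])        # stoichiometric matrix (1 x 3)
        bounds = [(0, 10), (0, 5), (0, 8)]       # flux capacity constraints
        c = np.array([0.0, -1.0, 0.0])           # linprog minimizes, so maximize v1 via -v1

        res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds)
        print(res.x)                             # v1 runs at its capacity of 5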

  13. Computing the modal mass from the state space model in combined experimental-operational modal analysis

    NASA Astrophysics Data System (ADS)

    Cara, Javier

    2016-05-01

    Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretical framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not generated much interest in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
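
    A hedged sketch of the familiar first part of that pipeline, recovering natural frequencies and damping ratios from the eigenvalues of an identified discrete-time state matrix A; the modal-mass step that is the abstract's focus needs the full identified model and is not reproduced here.

        # Frequencies and damping ratios from a discrete-time state matrix (sketch).
        import numpy as np
        from scipy.linalg import expm

        def modal_params(A, dt):
            mu = np.linalg.eigvals(A)            # discrete-time eigenvalues
            lam = np.log(mu) / dt                # map to continuous time
            wn = np.abs(lam)                     # natural frequencies, rad/s
            zeta = -lam.real / wn                # damping ratios
            keep = lam.imag > 0                  # one of each complex-conjugate pair
            return wn[keep], zeta[keep]

        # Toy check: discretize a 1-dof system with wn = 2*pi rad/s and zeta = 0.05.
        wn0, z0, dt = 2.0 * np.pi, 0.05, 0.01
        Ac = np.array([[0.0, 1.0], [-wn0**2, -2.0 * z0 * wn0]])
        print(modal_params(expm(Ac * dt), dt))   # recovers roughly (6.283, 0.05)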

  14. Theoretical Analysis of Photoelectron Spectra of Pure and Mixed Metal Clusters: Disentangling Size, Structure, and Composition Effects

    DOE PAGES

    Acioli, Paulo H.; Jellinek, Julius

    2017-07-14

    A theoretical/computational description and analysis of the spectra of electron binding energies of Al12-, Al13- and Al12Ni- clusters, which differ in size and/or composition by a single atom yet possess strikingly different measured photoelectron spectra, is presented. It is shown that the measured spectra can not only be reproduced computationally with quantitative fidelity – this is achieved through a combination of state-of-the-art density functional theory with a highly accurate scheme for conversion of the Kohn-Sham eigenenergies into electron binding energies – but also explained in terms of the effects of size, structure/symmetry and composition. Furthermore, a new methodology is developed and applied that provides for disentanglement and differential assignment of the separate roles played by size, structure/symmetry and composition in defining the observed differences in the measured spectra. The methodology is general and applicable to any finite system, homogeneous or heterogeneous. Finally, we project that in combination with advances in synthesis techniques this methodology will become an indispensable computation-based aid in the design of controlled synthesis protocols for manufacture of nanosystems and nanodevices with precisely desired electronic and other characteristics.

  15. Theoretical Analysis of Photoelectron Spectra of Pure and Mixed Metal Clusters: Disentangling Size, Structure, and Composition Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acioli, Paulo H.; Jellinek, Julius

    A theoretical/computational description and analysis of the spectra of electron binding energies of Al12-, Al13- and Al12Ni- clusters, which differ in size and/or composition by a single atom yet possess strikingly different measured photoelectron spectra, is presented. It is shown that the measured spectra can not only be reproduced computationally with quantitative fidelity – this is achieved through a combination of state-of-the-art density functional theory with a highly accurate scheme for conversion of the Kohn-Sham eigenenergies into electron binding energies – but also explained in terms of the effects of size, structure/symmetry and composition. Furthermore, a new methodology is developed and applied that provides for disentanglement and differential assignment of the separate roles played by size, structure/symmetry and composition in defining the observed differences in the measured spectra. The methodology is general and applicable to any finite system, homogeneous or heterogeneous. Finally, we project that in combination with advances in synthesis techniques this methodology will become an indispensable computation-based aid in the design of controlled synthesis protocols for manufacture of nanosystems and nanodevices with precisely desired electronic and other characteristics.

  16. PAGANI Toolkit: Parallel graph-theoretical analysis package for brain network big data.

    PubMed

    Du, Haixiao; Xia, Mingrui; Zhao, Kang; Liao, Xuhong; Yang, Huazhong; Wang, Yu; He, Yong

    2018-05-01

    The recent collection of unprecedented quantities of neuroimaging data with high spatial resolution has led to brain network big data. However, a toolkit for fast and scalable computational solutions is still lacking. Here, we developed the PArallel Graph-theoretical ANalysIs (PAGANI) Toolkit based on a hybrid central processing unit-graphics processing unit (CPU-GPU) framework with a graphical user interface to facilitate the mapping and characterization of high-resolution brain networks. Specifically, the toolkit provides flexible parameters for users to customize computations of graph metrics in brain network analyses. As an empirical example, the PAGANI Toolkit was applied to individual voxel-based brain networks with ∼200,000 nodes that were derived from a resting-state fMRI dataset of 624 healthy young adults from the Human Connectome Project. Using a personal computer, this toolbox completed all computations in ∼27 h for one subject, which is markedly less than the 118 h required with a single-thread implementation. The voxel-based functional brain networks exhibited prominent small-world characteristics and densely connected hubs, which were mainly located in the medial and lateral fronto-parietal cortices. Moreover, the female group had significantly higher modularity and nodal betweenness centrality mainly in the medial/lateral fronto-parietal and occipital cortices than the male group. Significant correlations between the intelligence quotient and nodal metrics were also observed in several frontal regions. Collectively, the PAGANI Toolkit shows high computational performance and good scalability for analyzing connectome big data and provides a friendly interface without the complicated configuration of computing environments, thereby facilitating high-resolution connectomics research in health and disease. © 2018 Wiley Periodicals, Inc.

  17. A Computational and Theoretical Study of Conductance in Hydrogen-bonded Molecular Junctions

    NASA Astrophysics Data System (ADS)

    Wimmer, Michael

    This thesis is devoted to the theoretical and computational study of electron transport in molecular junctions where one or more hydrogen bonds are involved in the process. While electron transport through covalent bonds has been extensively studied, in recent work the focus has been shifted towards hydrogen-bonded systems due to their ubiquitous presence in biological systems and their potential in forming nano-junctions between molecular electronic devices and biological systems. This analysis allows us to significantly expand our comprehension of the experimentally observed result that the inclusion of hydrogen bonding in a molecular junction significantly impacts its transport properties, a fact that has important implications for our understanding of transport through DNA, and nano-biological interfaces in general. In part of this work I have explored the implications of quasiresonant transport in short chains of weakly-bonded molecular junctions involving hydrogen bonds. I used theoretical and computational analysis to interpret recent experiments and explain the role of Fano resonances in the transmission properties of the junction. In a different direction, I have undertaken the study of the transversal conduction through nucleotide chains that involve a variable number of different hydrogen bonds, e.g. NH˙˙˙O, OH˙˙˙O, and NH˙˙˙N, which are the three most prevalent hydrogen bonds in biological systems and organic electronics. My effort here has focused on the analysis of electronic descriptors that allow a simplified conceptual and computational understanding of transport properties. Specifically, I have expanded our previous work where the molecular polarizability was used as a conductance descriptor to include the possibility of atomic and bond partitions of the molecular polarizability. This is important because it affords an alternative molecular description of conductance that is not based on the conventional view of molecular orbitals as transport channels. My findings suggest that the hydrogen-bond networks are crucial in understanding the conductance of these junctions. A broader impact of this work pertains the fact that characterizing transport through hydrogen bonding networks may help in developing faster and cost-effective approaches to personalized medicine, to advance DNA sequencing and implantable electronics, and to progress in the design and application of new drugs.

  18. Conceptualizing, Designing, and Investigating Locative Media Use in Urban Space

    NASA Astrophysics Data System (ADS)

    Diamantaki, Katerina; Rizopoulos, Charalampos; Charitos, Dimitris; Kaimakamis, Nikos

    This chapter investigates the social implications of locative media (LM) use and attempts to outline a theoretical framework that may support the design and implementation of location-based applications. Furthermore, it stresses the significance of physical space and location awareness as important factors that influence both human-computer interaction and computer-mediated communication. The chapter documents part of the theoretical aspect of the research undertaken as part of LOcation-based Communication Urban NETwork (LOCUNET), a project that aims to investigate the way users interact with one another (human-computer-human interaction aspect) and with the location-based system itself (human-computer interaction aspect). A number of relevant theoretical approaches are discussed in an attempt to provide a holistic theoretical background for LM use. Additionally, the actual implementation of the LOCUNET system is described and some of the findings are discussed.

  19. Empirical analysis of RNA robustness and evolution using high-throughput sequencing of ribozyme reactions.

    PubMed

    Hayden, Eric J

    2016-08-15

    RNA molecules provide a realistic but tractable model of a genotype to phenotype relationship. This relationship has been extensively investigated computationally using secondary structure prediction algorithms. Enzymatic RNA molecules, or ribozymes, offer access to genotypic and phenotypic information in the laboratory. Advancements in high-throughput sequencing technologies have enabled the analysis of sequences in the lab that now rivals what can be accomplished computationally. This has motivated a resurgence of in vitro selection experiments and opened new doors for the analysis of the distribution of RNA functions in genotype space. A body of computational experiments has investigated the persistence of specific RNA structures despite changes in the primary sequence, and how this mutational robustness can promote adaptations. This article summarizes recent approaches that were designed to investigate the role of mutational robustness during the evolution of RNA molecules in the laboratory, and presents theoretical motivations, experimental methods and approaches to data analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. A Generalized Information Theoretical Model for Quantum Secret Sharing

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming

    2016-11-01

    An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80, 2005) and analyzed using quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. Based on this analysis, we propose a generalized model definition for quantum secret sharing schemes. In our model, more quantum access structures can be realized by the generalized quantum secret sharing schemes than by the previous one. In addition, we also analyse two kinds of important quantum access structures to illustrate the existence and rationality of the generalized quantum secret sharing schemes, and we consider the security of the scheme through simple examples.

  1. Improvements to a method for the geometrically nonlinear analysis of compressively loaded stiffened composite panels

    NASA Technical Reports Server (NTRS)

    Stoll, Frederick

    1993-01-01

    The NLPAN computer code uses a finite-strip approach to the analysis of thin-walled prismatic composite structures such as stiffened panels. The code can model in-plane axial loading, transverse pressure loading, and constant through-the-thickness thermal loading, and can account for shape imperfections. The NLPAN code represents an attempt to extend the buckling analysis of the VIPASA computer code into the geometrically nonlinear regime. Buckling mode shapes generated using VIPASA are used in NLPAN as global functions for representing displacements in the nonlinear regime. While the NLPAN analysis is approximate in nature, it is computationally economical in comparison with finite-element analysis, and is thus suitable for use in preliminary design and design optimization. A comprehensive description of the theoretical approach of NLPAN is provided. A discussion of some operational considerations for the NLPAN code is included. NLPAN is applied to several test problems in order to demonstrate new program capabilities, and to assess the accuracy of the code in modeling various types of loading and response. User instructions for the NLPAN computer program are provided, including a detailed description of the input requirements and example input files for two stiffened-panel configurations.

  2. A theoretical study of alpha star populations in loaded nuclear emulsions

    USGS Publications Warehouse

    Senftle, F.E.; Farley, T.A.; Stieff, L.R.

    1954-01-01

    This theoretical study of the alpha star populations in loaded emulsions was undertaken in an effort to find a quantitative method for the analysis of less than microgram amounts of thorium in the presence of larger amounts of uranium. Analytical expressions for each type of star from each of the significantly contributing members of the uranium and thorium series, as well as summation formulas for the whole series, have been computed. The analysis for thorium may be made by determining the abundance of five-branched stars in a loaded nuclear emulsion and comparing observed and predicted star populations. The comparison may also be used to check the half-lives of several members of the uranium and thorium series. © 1954.

  3. Generalized Mulliken-Hush analysis of electronic coupling interactions in compressed pi-stacked porphyrin-bridge-quinone systems.

    PubMed

    Zheng, Jieru; Kang, Youn K; Therien, Michael J; Beratan, David N

    2005-08-17

    Donor-acceptor interactions were investigated in a series of unusually rigid, cofacially compressed pi-stacked porphyrin-bridge-quinone systems. The two-state generalized Mulliken-Hush (GMH) approach was used to compute the coupling matrix elements. The theoretical coupling values evaluated with the GMH method were obtained from configuration interaction calculations using the INDO/S method. The results of this analysis are consistent with the comparatively soft distance dependences observed for both the charge separation and charge recombination reactions. Theoretical studies of model structures indicate that the phenyl units dominate the mediation of the donor-acceptor coupling and that the relatively weak exponential decay of rate with distance arises from the compression of this pi-electron stack.
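
    For reference, the standard two-state GMH expression (quoted from the general GMH literature, not transcribed from this paper) gives the diabatic donor-acceptor coupling in terms of the adiabatic transition dipole moment $\mu_{12}$, the adiabatic energy gap $\Delta E_{12}$, and the difference of adiabatic dipole moments $\Delta\mu_{12}$:

        H_{DA} = \frac{\mu_{12}\,\Delta E_{12}}{\sqrt{(\Delta\mu_{12})^{2} + 4\,\mu_{12}^{2}}}

    All three ingredients are available from the configuration interaction calculations mentioned above, which is what makes the two-state estimate convenient in practice.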

  4. The response function of modulated grid Faraday cup plasma instruments

    NASA Technical Reports Server (NTRS)

    Barnett, A.; Olbert, S.

    1986-01-01

    Modulated grid Faraday cup plasma analyzers are a very useful tool for making in situ measurements of space plasmas. One of their great attributes is that their simplicity permits their angular response function to be calculated theoretically. An expression is derived for this response function by computing the trajectories of the charged particles inside the cup. The Voyager Plasma Science (PLS) experiment is used as a specific example. Two approximations to the rigorous response function useful for data analysis are discussed. The theoretical formulas were tested by multi-sensor analysis of solar wind data. The tests indicate that the formulas represent the true cup response function for all angles of incidence with a maximum error of only a few percent.

  5. Theoretical investigation of cyromazine tautomerism using density functional theory and Møller–Plesset perturbation theory methods

    USDA-ARS?s Scientific Manuscript database

    A computational chemistry analysis of six unique tautomers of cyromazine, a pesticide used for fly control, was performed with density functional theory (DFT) and canonical second order Møller–Plesset perturbation theory (MP2) methods to gain insight into the contributions of molecular structure to ...

  6. Recursion and the Competence/Performance Distinction in AGL Tasks

    ERIC Educational Resources Information Center

    Lobina, David J.

    2011-01-01

    The term "recursion" is used in at least four distinct theoretical senses within cognitive science. Some of these senses in turn relate to the different levels of analysis described by David Marr some 20 years ago; namely, the underlying competence capacity (the "computational" level), the performance operations used in real-time processing (the…

  7. The Use of Images in Intelligent Advisor Systems.

    ERIC Educational Resources Information Center

    Boulet, Marie-Michele

    This paper describes the intelligent advisor system, named CODAMA, used in teaching a university-level systems analysis and design course. The paper discusses: (1) the use of CODAMA to assist students to transfer theoretical knowledge to the practical; (2) details of how CODAMA is applied in conjunction with a computer-aided software engineering…

  8. Aerodynamic data banks for Clark-Y, NACA 4-digit and NACA 16-series airfoil families

    NASA Technical Reports Server (NTRS)

    Korkan, K. D.; Camba, J., III; Morris, P. M.

    1986-01-01

    With the renewed interest in propellers as a means of obtaining thrust and fuel efficiency, in addition to the increased utilization of the computer, significant progress has been made in the development of theoretical models to predict the performance of propeller systems. Inherent in the majority of the theoretical performance models to date is the need for airfoil data banks which provide lift, drag, and moment coefficient values as a function of Mach number, angle-of-attack, maximum thickness to chord ratio, and Reynolds number. Realizing the need for such data, a study was initiated to provide airfoil data banks for three commonly used airfoil families in propeller design and analysis. The families chosen consisted of the Clark-Y, NACA 16 series, and NACA 4 digit series airfoils. The various components of each computer code, the source of the data used to create the airfoil data bank, the limitations of each data bank, program listings, and a sample case with its associated input-output are described. Each airfoil data bank computer code was written to be used on the Amdahl computer system, which is IBM compatible and uses Fortran.

  9. Computational assessment of hemodynamics-based diagnostic tools using a database of virtual subjects: Application to three case studies.

    PubMed

    Willemet, Marie; Vennin, Samuel; Alastruey, Jordi

    2016-12-08

    Many physiological indexes and algorithms based on pulse wave analysis have been suggested in order to better assess cardiovascular function. Because these tools are often computed from in-vivo hemodynamic measurements, their validation is time-consuming, challenging, and biased by measurement errors. Recently, a new methodology has been suggested to assess theoretically these computed tools: a database of virtual subjects generated using numerical 1D-0D modeling of arterial hemodynamics. The generated set of simulations encloses a wide selection of healthy cases that could be encountered in a clinical study. We applied this new methodology to three different case studies that demonstrate the potential of our new tool, and illustrated each of them with a clinically relevant example: (i) we assessed the accuracy of indexes estimating pulse wave velocity; (ii) we validated and refined an algorithm that computes central blood pressure; and (iii) we investigated theoretical mechanisms behind the augmentation index. Our database of virtual subjects is a new tool to assist the clinician: it provides insight into the physical mechanisms underlying the correlations observed in clinical practice. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Particokinetics: computational analysis of the superparamagnetic iron oxide nanoparticles deposition process

    PubMed Central

    Cárdenas, Walter HZ; Mamani, Javier B; Sibov, Tatiana T; Caous, Cristofer A; Amaro, Edson; Gamarra, Lionel F

    2012-01-01

    Background: Nanoparticles in suspension are often utilized for intracellular labeling and evaluation of toxicity in experiments conducted in vitro. The purpose of this study was to undertake a computational modeling analysis of the deposition kinetics of a magnetite nanoparticle agglomerate in cell culture medium. Methods: Finite difference methods and the Crank–Nicolson algorithm were used to solve the equation of mass transport in order to analyze concentration profiles and dose deposition. Theoretical data were confirmed by experimental magnetic resonance imaging. Results: Different behavior in the dose fraction deposited was found for magnetic nanoparticles up to 50 nm in diameter when compared with magnetic nanoparticles of a larger diameter. Small changes in the dispersion factor cause variations of up to 22% in the dose deposited. The experimental data confirmed the theoretical results. Conclusion: These findings are important in planning for nanomaterial absorption, because they provide valuable information for efficient intracellular labeling and toxicity control. This model enables determination of the in vitro transport behavior of specific magnetic nanoparticles, which is also relevant to other models that use cellular components and particle absorption processes. PMID:22745539
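
    A minimal sketch of the Crank-Nicolson machinery named in the Methods, here for one-dimensional diffusion with no-flux walls; the geometry, coefficients, and boundary treatment are toy assumptions of ours, not the paper's sedimentation-diffusion model.

        # Crank-Nicolson step for dc/dt = D d2c/dx2 with no-flux walls (toy values).
        import numpy as np

        def crank_nicolson_step(c, D, dx, dt):
            n = len(c)
            r = D * dt / (2.0 * dx**2)
            A = np.eye(n) * (1 + 2 * r)          # implicit (left) operator
            B = np.eye(n) * (1 - 2 * r)          # explicit (right) operator
            for i in range(n - 1):
                A[i, i+1] = A[i+1, i] = -r
                B[i, i+1] = B[i+1, i] = r
            for M, s in ((A, 1), (B, -1)):       # finite-volume no-flux end cells
                M[0, 0] = M[-1, -1] = 1 + s * r
            return np.linalg.solve(A, B @ c)

        c = np.zeros(50); c[0] = 1.0             # all mass initially in the top cell
        for _ in range(200):
            c = crank_nicolson_step(c, D=1e-2, dx=0.1, dt=0.1)
        print(round(float(c.sum()), 6))          # total mass stays 1.0 with no-flux walls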

  11. Coupled Structural, Thermal, Phase-change and Electromagnetic Analysis for Superconductors, Volume 2

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.

    1996-01-01

    Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromag subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermel and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principled to the construction of electromagnetic, thermal and mechanical finite elements, and (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. Volume 1 describes mostly formulation specific problems. Volume 2 describes generalization of those formulations.

  12. A computational model-based validation of Guyton's analysis of cardiac output and venous return curves

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Cohen, R. J.; Mark, R. G.

    2002-01-01

    Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.
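
    The graphical step itself is easy to sketch: given any CO and VR curves as functions of right atrial pressure (the shapes below are toy choices of ours, not the study's model), the predicted operating point is simply their intersection.

        # Intersecting toy cardiac-output and venous-return curves (illustration only).
        import numpy as np

        rap = np.linspace(-4.0, 12.0, 1000)               # right atrial pressure, mmHg
        co = 8.0 * (1.0 - np.exp(-(rap + 4.0) / 4.0))     # CO rises with RAP, then saturates
        vr = np.clip((7.0 - rap) * 5.0 / 7.0, 0.0, None)  # VR falls to zero at MSFP = 7 mmHg

        i = np.argmin(np.abs(co - vr))                    # graphical intersection
        print(round(rap[i], 2), round(co[i], 2))          # predicted RAP and CO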

  13. Multiscale analysis of information dynamics for linear multivariate processes.

    PubMed

    Faes, Luca; Montalto, Alessandro; Stramaglia, Sebastiano; Nollo, Giandomenico; Marinazzo, Daniele

    2016-08-01

    In the study of complex physical and physiological systems represented by multivariate time series, an issue of great interest is the description of the system dynamics over a range of different temporal scales. While information-theoretic approaches to the multiscale analysis of complex dynamics are being increasingly used, the theoretical properties of the applied measures are poorly understood. This study introduces for the first time a framework for the analytical computation of information dynamics for linear multivariate stochastic processes explored at different time scales. After showing that the multiscale processing of a vector autoregressive (VAR) process introduces a moving average (MA) component, we describe how to represent the resulting VARMA process using state-space (SS) models and how to exploit the SS model parameters to compute analytical measures of information storage and information transfer for the original and rescaled processes. The framework is then used to quantify multiscale information dynamics for simulated unidirectionally and bidirectionally coupled VAR processes, showing that rescaling may lead to insightful patterns of information storage and transfer but also to potentially misleading behaviors.

  14. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.

  15. NOAA/DOE CWP structural analysis package. [CWPFLY, CWPEXT, COTEC, and XOTEC codes]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pompa, J.A.; Lunz, D.F.

    1979-09-01

    The theoretical development and computer code user's manual for analysis of the Ocean Thermal Energy Conversion (OTEC) plant cold water pipe (CWP) are presented. The analysis of the CWP includes coupled platform/CWP loadings and dynamic responses. This report, with the exception of the Introduction and Appendix F, was originally published as Hydronautics, Inc., Technical Report No. 7825-2 (by Barr, Chang, and Thasanatorn) in November 1978. A detailed theoretical development of the equations describing the coupled platform/CWP system and preliminary validation efforts are described. The appendices encompass a complete user's manual, describing the inputs, outputs and operation of the four component programs, and detail changes and updates implemented since the original release of the code by Hydronautics. The code itself is available through NOAA's Office of Ocean Technology and Engineering Services.

  16. Comparison of real and computer-simulated outcomes of LASIK refractive surgery

    NASA Astrophysics Data System (ADS)

    Cano, Daniel; Barbero, Sergio; Marcos, Susana

    2004-06-01

    Computer simulations of alternative LASIK ablation patterns were performed for corneal elevation maps of 13 real myopic corneas (range of myopia, -2.0 to -11.5 D). The computationally simulated ablation patterns were designed with biconic surfaces (standard Munnerlyn pattern, parabolic pattern, and biconic pattern) or with aberrometry measurements (customized pattern). Simulated results were compared with real postoperative outcomes. Standard LASIK refractive surgery for myopia increased corneal asphericity and spherical aberration. Computations with the theoretical Munnerlyn ablation pattern did not increase the corneal asphericity and spherical aberration. The theoretical parabolic pattern induced a slight increase of asphericity and spherical aberration, explaining only 40% of the clinically found increase. The theoretical biconic pattern controlled corneal spherical aberration. Computations showed that the theoretical customized pattern can correct high-order asymmetric aberrations. Simulations of changes in efficiency due to reflection and nonnormal incidence of the laser light showed a further increase in corneal asphericity. Consideration of these effects with a parabolic pattern accounts for 70% of the clinical increase in asphericity.
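
    For context, the widely cited Munnerlyn approximation (stated from the general refractive-surgery literature, not taken from this paper) relates the central ablation depth $t_{0}$ in micrometers to the attempted correction $D$ in diopters and the optical-zone diameter $S$ in millimeters:

        t_{0} \approx \frac{|D|\,S^{2}}{3}

    For example, a 4 D myopic correction over a 6 mm zone removes roughly 48 micrometers of central tissue.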

  17. ATLAS, an integrated structural analysis and design system. Volume 5: System demonstration problems

    NASA Technical Reports Server (NTRS)

    Samuel, R. A. (Editor)

    1979-01-01

    One of a series of documents describing the ATLAS System for structural analysis and design is presented. A set of problems is described that demonstrate the various analysis and design capabilities of the ATLAS System proper as well as capabilities available by means of interfaces with other computer programs. Input data and results for each demonstration problem are discussed. Results are compared to theoretical solutions or experimental data where possible. Listings of all input data are included.

  18. Principal polynomial analysis.

    PubMed

    Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus

    2014-11-01

    This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves, instead of straight lines. In contrast to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it permits interpretation of the identified features in the input domain, where the data have physical meaning. Moreover, it allows evaluation of the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows an easy computation of information theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. Fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, in both synthetic and real datasets from the UCI repository.
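
    A loose toy sketch of the central idea, with our own simplifications (one deflation step, fixed degree 2, synthetic data): project onto the leading PCA direction, then fit the leftover variation with a simple univariate polynomial regression.

        # Toy PPA-flavored step: a curve, fitted by univariate regression, where
        # PCA would use a straight line (our simplification of the method).
        import numpy as np

        rng = np.random.default_rng(3)
        t = rng.uniform(-2.0, 2.0, 400)
        X = np.column_stack([t, 0.5 * t**2]) + 0.05 * rng.standard_normal((400, 2))

        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        alpha = Xc @ Vt[0]                       # coordinate along the leading direction
        resid = Xc @ Vt[1]                       # variation PCA's straight line misses

        coef = np.polyfit(alpha, resid, deg=2)   # simple univariate polynomial fit
        explained = 1.0 - np.var(resid - np.polyval(coef, alpha)) / np.var(resid)
        print(round(explained, 3))               # the curve captures most of the residual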

  19. The Importance of Proving the Null

    PubMed Central

    Gallistel, C. R.

    2010-01-01

    Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? PMID:19348549
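
    A minimal numerical sketch of that sensitivity analysis, under simplifying assumptions of ours (normal likelihood with known sigma, uniform alternative on (-a, a)): compute the Bayes factor for the null as a function of the alternative's half-width a.

        # Bayes factor for mu = 0 versus mu ~ Uniform(-a, a), as a function of a.
        import numpy as np
        from scipy import stats

        def bf_null_vs_uniform(xbar, n, sigma, a, grid=2001):
            se = sigma / np.sqrt(n)
            like_null = stats.norm.pdf(xbar, 0.0, se)
            mu = np.linspace(-a, a, grid)
            like_alt = stats.norm.pdf(xbar, mu, se).mean()   # average over the uniform prior
            return like_null / like_alt

        for a in (0.1, 0.5, 1.0, 2.0):
            print(a, round(bf_null_vs_uniform(xbar=0.05, n=25, sigma=1.0, a=a), 2))
        # The vaguer the alternative (larger a), the more the odds favor the null.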

  20. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    PubMed

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
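
    A toy version of the emulator-based Bayesian step, under the assumption that it reduces to fitting a Gaussian process to chi-squared values at sampled parameter points and Metropolis-sampling the emulated posterior; the two-parameter surface below is a placeholder, not a Skyrme-functional calibration:

        # Gaussian process emulator of a chi-squared surface, then
        # random-walk Metropolis sampling of the emulated posterior.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(1)
        theta_train = rng.uniform(-2, 2, size=(200, 2))
        chi2 = ((theta_train - 0.5) ** 2).sum(axis=1) * 10   # placeholder surface
        gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True)
        gp.fit(theta_train, chi2)

        def log_post(theta):
            return -0.5 * gp.predict(theta.reshape(1, -1))[0]

        theta, samples = np.zeros(2), []
        lp = log_post(theta)
        for _ in range(5000):
            prop = theta + 0.2 * rng.standard_normal(2)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)
        print(np.mean(samples, axis=0))       # posterior mean near (0.5, 0.5)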

  1. Bifilar analysis users manual, volume 2

    NASA Technical Reports Server (NTRS)

    Cassarino, S. J.

    1980-01-01

    The digital computer program developed to study the vibration response of a coupled rotor/bifilar/airframe system is described. The theoretical development of the rotor/airframe system equations of motion is provided. The fuselage and bifilar absorber equations of motion are discussed. The modular block approach used in the make-up of this computer program is described. The input data needed to run the rotor and bifilar absorber analyses are described. Sample output formats are presented and discussed. The results for four test cases, which exercise the major logic paths of the computer program, are presented. The overall program structure is discussed in detail. The FORTRAN subroutines are described in detail.

  2. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine

    PubMed Central

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2016-01-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473
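
    The kind of wall-clock model being validated can be illustrated as follows; the specific model form and parameter names are assumptions, not the paper's equations:

        # NFS case: every job moves data over a shared link, so transfer
        # can saturate; co-located (Hadoop-style) case: compute-bound only.
        def wall_clock_nfs(n_jobs, cores, t_proc_s, data_gb, bw_gbps):
            t_xfer = n_jobs * data_gb * 8 / bw_gbps     # serialized on shared link
            t_comp = -(-n_jobs // cores) * t_proc_s     # ceil(n_jobs/cores) waves
            return max(t_xfer, t_comp)

        def wall_clock_colocated(n_jobs, cores, t_proc_s):
            return -(-n_jobs // cores) * t_proc_s

        # Short jobs on large data saturate the network, so NFS dominates:
        print(wall_clock_nfs(1000, 209, 60, 1.0, 10))   # network-bound: 800 s
        print(wall_clock_colocated(1000, 209, 60))      # compute-bound: 300 s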

  3. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T.

    Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses documented by the NESCOR Working Group Study. From the Section 5 electric sector representative failure scenarios, we extracted four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to confidentiality, integrity, and availability (CIA).
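
    The game-theoretic core can be illustrated with a toy zero-sum attacker-defender matrix game over the three threat categories, solved by fictitious play; all payoffs are invented:

        # Attacker picks a target category, defender picks what to protect;
        # fictitious play converges to approximate mixed equilibria.
        import numpy as np

        A = np.array([[3.0, 1.0, 1.0],    # rows: attack on C, I, A
                      [1.0, 2.0, 1.0],    # cols: defended category (zero-sum)
                      [1.0, 1.0, 2.5]])

        na, nd = np.zeros(3), np.zeros(3)  # empirical action counts
        for _ in range(10000):
            na[np.argmax(A @ (nd + 1))] += 1       # attacker best-responds
            nd[np.argmin(A.T @ (na + 1))] += 1     # defender best-responds
        print(na / na.sum(), nd / nd.sum())        # empirical mixed strategies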

  4. Numerical analysis of stiffened shells of revolution. Volume 1: Theory manual for STARS-2S, 2B, 2V digital computer programs

    NASA Technical Reports Server (NTRS)

    Svalbonas, V.

    1973-01-01

    The theoretical analysis background for the STARS-2 (shell theory automated for rotational structures) program is presented. The theory involved in the axisymmetric nonlinear and unsymmetric linear static analyses, and in the stability and vibration (including critical rotation speed) analyses involving axisymmetric prestress, is discussed. The theory for nonlinear static, stability, and vibration analyses involving shells with unsymmetric loadings is also included.

  5. Theoretical modeling and experimental analysis of solar still integrated with evacuated tubes

    NASA Astrophysics Data System (ADS)

    Panchal, Hitesh; Awasthi, Anuradha

    2017-06-01

    In this present research work, theoretical modeling of a single slope, single basin solar still integrated with evacuated tubes has been performed based on energy balance equations. Major variables like water temperature, inner glass cover temperature and distillate output have been computed from the theoretical model. The experimental setup was made from locally available materials and installed at Gujarat Power Engineering and Research Institute, Mehsana, Gujarat, India (23.5880°N, 72.3693°E) with 0.04 m water depth over a six-month interval. The series of experiments showed a considerable increase in the average distillate output of the solar still when integrated with evacuated tubes, not only during the daytime but also at night. In all experimental cases, the correlation coefficient (r) and the root mean square percentage deviation (e) between the theoretical modeling and the experimental study showed good agreement, with 0.97 < r < 0.98 and 10.22 < e < 38.4%, respectively.
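
    The two agreement metrics quoted above are commonly computed as follows (a sketch; the variable names and the exact definition of e are assumptions based on the usual conventions):

        # Correlation coefficient r and root-mean-square percentage
        # deviation e between theoretical and experimental series.
        import numpy as np

        def agreement(theoretical, experimental):
            r = np.corrcoef(theoretical, experimental)[0, 1]
            e = np.sqrt(np.mean(((theoretical - experimental)
                                 / theoretical) ** 2)) * 100
            return r, e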

  6. Virtual deposition plant

    NASA Astrophysics Data System (ADS)

    Tikhonravov, Alexander

    2005-09-01

    A general structure of the software for computational manufacturing experiments is discussed. It is shown that computational experiments can be useful for checking feasibility properties of theoretical designs and for finding the most practical theoretical design for a given production environment.

  7. Real-time flutter analysis

    NASA Technical Reports Server (NTRS)

    Walker, R.; Gupta, N.

    1984-01-01

    The important algorithmic issues necessary to achieve a real-time flutter monitoring system are addressed; namely, guidelines for choosing appropriate model forms, reduction of the parameter convergence transient, handling of multiple modes, the effect of overparameterization, and estimate accuracy predictions, both online and for experiment design. An approach for efficiently computing continuous-time flutter parameter Cramer-Rao estimate error bounds was developed. This enables a convincing comparison of theoretical and simulation results, as well as offline studies in preparation for a flight test. Theoretical predictions, simulation and flight test results from the NASA Drones for Aerodynamic and Structural Test (DAST) Program are compared.

  8. Theoretical performance of cross-wind axis turbines with results for a catenary vertical axis configuration

    NASA Technical Reports Server (NTRS)

    Muraca, R. J.; Stephens, M. V.; Dagenhart, J. R.

    1975-01-01

    A general analysis capable of predicting performance characteristics of cross-wind axis turbines was developed, including the effects of airfoil geometry, support struts, blade aspect ratio, windmill solidity, blade interference and curved flow. The results were compared with available wind tunnel results for a catenary blade shape. A theoretical performance curve for an aerodynamically efficient straight blade configuration was also presented. In addition, a linearized analytical solution applicable for straight configurations was developed. A listing of the computer program developed for numerical solutions of the general performance equations is included in the appendix.

  9. Calculative techniques for transonic flows about certain classes of wing-body combinations, phase 2

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Spreiter, J. R.

    1972-01-01

    Theoretical analysis and associated computer programs were developed for predicting the properties of transonic flows about certain classes of wing-body combinations. The procedures used are based on the transonic equivalence rule and employ either an arbitrarily-specified solution or the local linearization method for determining the nonlifting transonic flow about the equivalent body. The class of wing planform shapes includes wings having sweptback trailing edges and finite tip chord. Theoretical results are presented for surface and flow-field pressure distributions for both nonlifting and lifting situations at Mach number one.

  10. Infinity Computer and Calculus

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.

    2007-09-01

    Traditional computers work with finite numbers. Situations where the usage of infinite or infinitesimal quantities is required are studied mainly theoretically. In this survey talk, a new computational methodology (not related to nonstandard analysis) is described. It is based on the principle `The part is less than the whole' applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols as particular cases of a unique framework. The new methodology allows us to introduce the Infinity Computer, which works with all these numbers (its simulator is presented during the lecture). The new computational paradigm both makes it possible to execute computations of a new type and simplifies fields of mathematics where infinity and/or infinitesimals are encountered. Numerous examples of the usage of the introduced computational tools are given during the lecture.
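
    A toy model of the arithmetic, assuming the common representation of grossone-based numbers as finite sums c1*G**p1 + c2*G**p2 + ... stored as power-to-coefficient maps; this illustrates the idea only and is not Sergeyev's implementation:

        # Numbers as {power_of_G: coefficient}; add and multiply termwise.
        class GrossNumber:
            def __init__(self, terms):          # {1: 2.0, 0: 3.0} means 2*G + 3
                self.terms = {p: c for p, c in terms.items() if c != 0}

            def __add__(self, other):
                t = dict(self.terms)
                for p, c in other.terms.items():
                    t[p] = t.get(p, 0) + c
                return GrossNumber(t)

            def __mul__(self, other):
                t = {}
                for p1, c1 in self.terms.items():
                    for p2, c2 in other.terms.items():
                        t[p1 + p2] = t.get(p1 + p2, 0) + c1 * c2
                return GrossNumber(t)

        # (2*G + 3) * (G - 1) = 2*G**2 + G - 3
        x = GrossNumber({1: 2, 0: 3})
        y = GrossNumber({1: 1, 0: -1})
        print(sorted((x * y).terms.items()))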

  11. Dynamics of an HBV Model with Drug Resistance Under Intermittent Antiviral Therapy

    NASA Astrophysics Data System (ADS)

    Zhang, Ben-Gong; Tanaka, Gouhei; Aihara, Kazuyuki; Honda, Masao; Kaneko, Shuichi; Chen, Luonan

    2015-06-01

    This paper studies the dynamics of the hepatitis B virus (HBV) model and the therapy regimens of HBV disease. First, we propose a new mathematical model of HBV with drug resistance, and then analyze its qualitative and dynamical properties. Combining the clinical data and theoretical analysis, we demonstrate that our model is biologically plausible and also computationally viable. Second, we demonstrate that the intermittent antiviral therapy regimen is one of the possible strategies to treat this kind of complex disease. There are two main advantages of this regimen, i.e. it may not only delay the development of drug resistance, but also reduce the duration of on-treatment time compared with long-term continuous medication. Moreover, such an intermittent antiviral therapy can reduce the adverse side effects. Our theoretical model and computational results provide qualitative insight into the progression of HBV, and also a possible new therapy for HBV disease.
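
    The abstract does not give the equations, so the following is a generic two-strain viral-dynamics sketch (drug-sensitive vs. drug-resistant virus) with an on/off drug efficacy standing in for intermittent therapy; all parameter values are placeholders:

        # Target cells x, infected cells ys/yr, free virus vs/vr; the drug
        # suppresses production from the sensitive strain during "on" phases.
        from scipy.integrate import solve_ivp

        def eps(t, period=60.0, duty=0.5):    # drug on for `duty` of each period
            return 0.9 if (t % period) < duty * period else 0.0

        def hbv(t, y, lam=10.0, d=0.01, beta=2e-4, delta=0.05,
                p=5.0, c=0.5, mu=1e-4):
            x, ys, yr, vs, vr = y
            e = eps(t)
            dx  = lam - d * x - beta * x * (vs + vr)
            dys = (1 - mu) * beta * x * vs - delta * ys
            dyr = mu * beta * x * vs + beta * x * vr - delta * yr
            dvs = (1 - e) * p * ys - c * vs
            dvr = p * yr - c * vr
            return [dx, dys, dyr, dvs, dvr]

        sol = solve_ivp(hbv, (0, 360), [1000, 0, 0, 1.0, 0.0], max_step=0.5)
        print(sol.y[:, -1])                   # state at the end of the run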

  12. Structural study, NCA, FT-IR, FT-Raman spectral investigations, NBO analysis, thermodynamic functions of N-acetyl-l-phenylalanine.

    PubMed

    Raja, B; Balachandran, V; Revathi, B

    2015-03-05

    The FT-IR and FT-Raman spectra of N-acetyl-l-phenylalanine were recorded and analyzed. Natural bond orbital analysis has been carried out for the various intramolecular interactions that are responsible for the stabilization of the molecule. The HOMO-LUMO energy gap has been computed with the help of density functional theory. The statistical thermodynamic functions (heat capacity, entropy, vibrational partition function and Gibbs energy) were obtained over the temperature range 100-1000 K. The polarizability, first hyperpolarizability and polarizability anisotropy invariant have been computed using quantum chemical calculations. The infrared and Raman spectra were also predicted from the calculated intensities. Comparison of the experimental and theoretical spectral values provides important information about the ability of the computational method to describe the vibrational modes. Copyright © 2014 Elsevier B.V. All rights reserved.
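
    The vibrational contributions to such temperature scans follow the standard harmonic-oscillator partition function; a sketch with invented wavenumbers:

        # Vibrational heat capacity and entropy from harmonic wavenumbers.
        import numpy as np

        K_B = 1.380649e-23; H = 6.62607015e-34; C_CM = 2.99792458e10  # c in cm/s
        R = 8.314462618

        def vib_thermo(wavenumbers_cm, T):
            x = H * C_CM * np.asarray(wavenumbers_cm) / (K_B * T)   # h*nu/kT
            cv = R * np.sum(x**2 * np.exp(x) / np.expm1(x)**2)
            s  = R * np.sum(x / np.expm1(x) - np.log(-np.expm1(-x)))
            return cv, s                      # J/(mol K)

        for T in (100, 300, 500, 1000):
            print(T, vib_thermo([520, 1040, 1600, 3050], T))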

  13. Principles of Experimental Design for Big Data Analysis.

    PubMed

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.

  14. Principles of Experimental Design for Big Data Analysis

    PubMed Central

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2016-01-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis. PMID:28883686
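
    One concrete instance of retrospective designed sampling is a greedy D-optimal subsample for a linear model, maximizing det(X'X); this is a sketch of the general idea, not the paper's method:

        # Greedily pick k rows of X that maximize the log-determinant of
        # the information matrix (D-optimality).
        import numpy as np

        def d_optimal_subsample(X, k):
            n, p = X.shape
            chosen, M = [], np.zeros((p, p))
            for _ in range(k):
                best, best_gain = None, -np.inf
                for i in range(n):
                    if i in chosen:
                        continue
                    gain = np.linalg.slogdet(M + np.outer(X[i], X[i])
                                             + 1e-9 * np.eye(p))[1]
                    if gain > best_gain:
                        best, best_gain = i, gain
                chosen.append(best)
                M += np.outer(X[best], X[best])
            return chosen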

  15. Modeling for IFOG Vibration Error Based on the Strain Distribution of Quadrupolar Fiber Coil

    PubMed Central

    Gao, Zhongxing; Zhang, Yonggang; Zhang, Yunhao

    2016-01-01

    Improving the performance of the interferometric fiber optic gyroscope (IFOG) in harsh environments, especially vibrational environments, is necessary for its practical applications. This paper presents a mathematical model for the IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational procedures are mainly based on the strain distribution of the quadrupolar fiber coil measured by a stress analyzer. The definition of the asymmetry of strain distribution (ASD) is given in the paper to evaluate the winding quality of the coil. The established model reveals that high ASD and the variable fiber elastic modulus in large-strain situations are the two dominant reasons that give rise to nonreciprocity phase shift in the IFOG under vibration. Furthermore, theoretical analysis and computational results indicate that the vibration errors of both open-loop and closed-loop IFOGs increase with increasing vibrational amplitude, vibrational frequency and ASD. Finally, an estimation of vibration-induced IFOG errors in aircraft is made according to the proposed model. Our work is meaningful in designing IFOG coils to achieve better anti-vibration performance. PMID:27455257

  16. Computational crystallization

    PubMed Central

    Altan, Irem; Charbonneau, Patrick; Snell, Edward H.

    2016-01-01

    Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed. PMID:26792536

  17. Computer program system for dynamic simulation and stability analysis of passive and actively controlled spacecraft. Volume 1. Theory

    NASA Technical Reports Server (NTRS)

    Bodley, C. S.; Devers, D. A.; Park, C. A.

    1975-01-01

    A theoretical development and associated digital computer program system is presented. The dynamic system (spacecraft) is modeled as an assembly of rigid and/or flexible bodies not necessarily in a topological tree configuration. The computer program system may be used to investigate total system dynamic characteristics including interaction effects between rigid and/or flexible bodies, control systems, and a wide range of environmental loadings. Additionally, the program system may be used for design of attitude control systems and for evaluation of total dynamic system performance including time domain response and frequency domain stability analyses. Volume 1 presents the theoretical developments including a description of the physical system, the equations of dynamic equilibrium, discussion of kinematics and system topology, a complete treatment of momentum wheel coupling, and a discussion of gravity gradient and environmental effects. Volume 2 is a program users' guide and includes a description of the overall digital program code, individual subroutines and a description of required program input and generated program output. Volume 3 presents the results of selected demonstration problems that illustrate all program system capabilities.

  18. Synthesis, spectroscopic investigation and theoretical studies of 2-((E)-(2-(2-cyanoacetyl)hydrazono)methyl)-4-((E)-phenyldiazenyl)phenyl methyl carbonate

    NASA Astrophysics Data System (ADS)

    Arokiasamy, A.; Manikandan, G.; Thanikachalam, V.; Gokula Krishnan, K.

    2017-04-01

    Synthesis and computational optimization studies have been carried out by Hartree-Fock (HF) and Density Functional Theory (DFT-B3LYP) methods with the 6-31+G(d, p) basis set for 2-((E)-(2-(2-cyanoacetyl)hydrazono)methyl)-4-((E)-phenyldiazenyl)phenyl methyl carbonate (CHPMC). The stable configuration of CHPMC was confirmed theoretically by potential energy surface scan analysis. The complete vibrational assignments were performed on the basis of total energy distribution (TED) analysis. The vibrational properties studied by IR and Raman spectroscopic data, complemented by quantum chemical calculations, support the formation of an intramolecular hydrogen bond. Furthermore, the UV-Vis spectra are interpreted in terms of TD-DFT quantum chemical calculations. The shapes of the simulated absorption spectra are in good agreement with the experimental data. The comparison between the experimental and theoretical values of the FT-IR and FT-Raman vibrational spectra, NMR (1H and 13C) and UV-Vis spectra has also been discussed.

  19. Experimental and computational study on molecular structure and vibrational analysis of an antihyperglycemic biomolecule: Gliclazide

    NASA Astrophysics Data System (ADS)

    Karakaya, Mustafa; Kürekçi, Mehmet; Eskiyurt, Buse; Sert, Yusuf; Çırak, Çağrı

    2015-01-01

    In the present study, the experimental and theoretical harmonic vibrational frequencies of the gliclazide molecule have been investigated. The experimental FT-IR (400-4000 cm-1) and Laser-Raman spectra (100-4000 cm-1) of the molecule in the solid phase were recorded. Theoretical vibrational frequencies and geometric parameters (bond lengths and bond angles) have been calculated using ab initio Hartree-Fock (HF) and density functional theory (B3LYP hybrid functional) methods with 6-311++G(d,p) and 6-31G(d,p) basis sets by the Gaussian 09W program. The assignments of the vibrational frequencies were performed by potential energy distribution (PED) analysis using the VEDA 4 program. Theoretical optimized geometric parameters and vibrational frequencies have been compared with the corresponding experimental data, and they have been shown to be in good agreement with each other. Also, the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies have been found.

  20. Experimental and computational study on molecular structure and vibrational analysis of an antihyperglycemic biomolecule: gliclazide.

    PubMed

    Karakaya, Mustafa; Kürekçi, Mehmet; Eskiyurt, Buse; Sert, Yusuf; Çırak, Çağrı

    2015-01-25

    In the present study, the experimental and theoretical harmonic vibrational frequencies of the gliclazide molecule have been investigated. The experimental FT-IR (400-4000 cm-1) and Laser-Raman spectra (100-4000 cm-1) of the molecule in the solid phase were recorded. Theoretical vibrational frequencies and geometric parameters (bond lengths and bond angles) have been calculated using ab initio Hartree-Fock (HF) and density functional theory (B3LYP hybrid functional) methods with 6-311++G(d,p) and 6-31G(d,p) basis sets by the Gaussian 09W program. The assignments of the vibrational frequencies were performed by potential energy distribution (PED) analysis using the VEDA 4 program. Theoretical optimized geometric parameters and vibrational frequencies have been compared with the corresponding experimental data, and they have been shown to be in good agreement with each other. Also, the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies have been found. Copyright © 2014 Elsevier B.V. All rights reserved.
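
    The comparison step common to these vibrational studies can be sketched as scaling the harmonic wavenumbers by an empirical factor and reporting the RMS deviation from the measured bands; the scale factor is a typical literature value for B3LYP and the band positions are invented:

        # Scale calculated harmonic wavenumbers, then report RMSD vs. experiment.
        import numpy as np

        def compare(calc_cm, expt_cm, scale=0.967):   # typical B3LYP factor
            scaled = scale * np.asarray(calc_cm)
            rmsd = np.sqrt(np.mean((scaled - np.asarray(expt_cm)) ** 2))
            return scaled, rmsd

        scaled, rmsd = compare([3180, 1745, 1620, 1105],
                               [3062, 1689, 1575, 1068])
        print(scaled, rmsd)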

  1. Theoretical Assessment of the Impact of Climatic Factors in a Vibrio Cholerae Model.

    PubMed

    Kolaye, G; Damakoa, I; Bowong, S; Houe, R; Békollè, D

    2018-05-04

    A mathematical model for Vibrio cholerae (V. cholerae) in a closed environment is considered, with the aim of investigating the impact of climatic factors, which exert a direct influence on the bacterial metabolism and on the bacterial reservoir capacity. We first propose a V. cholerae mathematical model in a closed environment. A sensitivity analysis using the eFAST method was performed to identify the most important parameters of the model. We then extend this V. cholerae model by taking into account the climatic factors that influence the bacterial reservoir capacity. We present the theoretical analysis of the model. More precisely, we compute equilibria and study their stability. The stability of equilibria was investigated using the theory of periodic cooperative systems with a concave nonlinearity. Theoretical results are supported by numerical simulations, which further suggest the necessity of implementing sanitation campaigns in aquatic environments by using suitable products against the bacteria during the periods of growth of the aquatic reservoirs.
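
    The eFAST screening described above can be reproduced with the SALib library; the model function, parameter names and bounds below are illustrative stand-ins for the V. cholerae model:

        # eFAST first-order (S1) and total (ST) sensitivity indices via SALib.
        import numpy as np
        from SALib.sample import fast_sampler
        from SALib.analyze import fast

        problem = {
            "num_vars": 3,
            "names": ["growth_rate", "decay_rate", "reservoir_capacity"],
            "bounds": [[0.1, 2.0], [0.01, 0.5], [1e3, 1e6]],
        }
        X = fast_sampler.sample(problem, 1000)
        Y = np.array([x[0] * x[2] / (x[1] * x[2] + 1.0) for x in X])  # placeholder
        Si = fast.analyze(problem, Y)
        print(Si["S1"], Si["ST"])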

  2. Theoretical Analysis of the Mechanism of Fracture Network Propagation with Stimulated Reservoir Volume (SRV) Fracturing in Tight Oil Reservoirs.

    PubMed

    Su, Yuliang; Ren, Long; Meng, Fankun; Xu, Chen; Wang, Wendong

    2015-01-01

    Stimulated reservoir volume (SRV) fracturing in tight oil reservoirs often induces complex fracture-network growth, which has a fundamentally different formation mechanism from traditional planar bi-winged fracturing. To reveal the mechanism of fracture network propagation, this paper employs a modified displacement discontinuity method (DDM), mechanical mechanism analysis, and initiation and propagation criteria to construct and derive a theoretical model of fracture network propagation. A reasonable solution of the theoretical model for a tight oil reservoir is obtained and verified by a numerical discrete method. Through theoretical calculation and computer programming, the variation rules of formation stress fields, hydraulic fracture propagation patterns (FPP), and branch fracture propagation angles and pressures are analyzed. The results show that during the process of fracture propagation, the initial orientation of the principal stress deflects, and the stress fields at the fracture tips change dramatically in the region surrounding the fracture. Whether the ideal fracture network can be produced depends on the geological conditions and on the engineering treatments. This study has both theoretical significance and practical application value, contributing to a better understanding of fracture network propagation mechanisms in unconventional oil/gas reservoirs and to the improvement of the science and design efficiency of reservoir fracturing.

  3. Theoretical Analysis of the Mechanism of Fracture Network Propagation with Stimulated Reservoir Volume (SRV) Fracturing in Tight Oil Reservoirs

    PubMed Central

    Su, Yuliang; Ren, Long; Meng, Fankun; Xu, Chen; Wang, Wendong

    2015-01-01

    Stimulated reservoir volume (SRV) fracturing in tight oil reservoirs often induces complex fracture-network growth, which has a fundamentally different formation mechanism from traditional planar bi-winged fracturing. To reveal the mechanism of fracture network propagation, this paper employs a modified displacement discontinuity method (DDM), mechanical mechanism analysis, and initiation and propagation criteria to construct and derive a theoretical model of fracture network propagation. A reasonable solution of the theoretical model for a tight oil reservoir is obtained and verified by a numerical discrete method. Through theoretical calculation and computer programming, the variation rules of formation stress fields, hydraulic fracture propagation patterns (FPP), and branch fracture propagation angles and pressures are analyzed. The results show that during the process of fracture propagation, the initial orientation of the principal stress deflects, and the stress fields at the fracture tips change dramatically in the region surrounding the fracture. Whether the ideal fracture network can be produced depends on the geological conditions and on the engineering treatments. This study has both theoretical significance and practical application value, contributing to a better understanding of fracture network propagation mechanisms in unconventional oil/gas reservoirs and to the improvement of the science and design efficiency of reservoir fracturing. PMID:25966285

  4. Experimental and theoretical investigations on the antioxidant activity of isoorientin from Crotalaria globosa

    NASA Astrophysics Data System (ADS)

    Deepha, V.; Praveena, R.; Sivakumar, Raman; Sadasivam, K.

    2014-03-01

    The increasing interest in naturally occurring flavonoids reflects their well-known bioactivity as antioxidants. The present investigation employs combined experimental and theoretical methods to determine the radical scavenging activity and the phytochemicals present in Crotalaria globosa, a novel plant source. Preliminary quantification of the ethanolic extract of leaves shows higher phenolic and flavonoid content than the root extract; this is also validated through the DPPH radical assay. Further analysis was carried out with successive extracts of leaves using solvents of varying polarity. In the DPPH and FRAP assays, the ethyl acetate fraction (EtOAc) exhibits higher scavenging activity followed by the ethanol fraction (EtOH), whereas in the NOS assay the ethanol fraction is slightly predominant over the EtOAc fraction. The LC-MS analysis provides tentative information about the presence of a flavonoid C-glycoside in the EtOAc fraction (yellow solid). The presence of the flavonoid isoorientin has been confirmed through isolation (PTLC) and detected by spectroscopic methods (UV-visible and 1H NMR). Utilizing the B3LYP/6-311G(d,p) level of theory, the structure and reactivity of the flavonoid isoorientin have been explored theoretically. The analysis of the theoretical bond dissociation energy (BDE) values for all O-H sites of isoorientin reveals that minimum energy is required to dissociate an H-atom from the B-ring compared with the A- and C-rings. In order to validate the antioxidant characteristics of isoorientin, the relevant molecular descriptors IP, HOMO-LUMO, Mulliken spin density analysis and molecular electrostatic potential surfaces have been computed and interpreted. From the experimental and theoretical results, it is proved that isoorientin can act as a potent antiradical scavenger in an oxidative system.
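
    The O-H site ranking rests on the bond dissociation energy formula BDE = E(radical) + E(H atom) - E(parent); a one-function sketch with placeholder energies in hartree:

        # Convert an O-H homolysis energy difference to kJ/mol.
        HARTREE_TO_KJ = 2625.5

        def bde_kj(e_parent, e_radical, e_h=-0.5):   # H-atom energy placeholder
            return (e_radical + e_h - e_parent) * HARTREE_TO_KJ

        print(bde_kj(-1104.3210, -1103.6835))        # illustrative numbers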

  5. Numerical and experimental analysis of a thin liquid film on a rotating disk related to development of a spacecraft absorption cooling system

    NASA Technical Reports Server (NTRS)

    Faghri, Amir; Swanson, Theodore D.

    1989-01-01

    The numerical and experimental analysis of a thin liquid film on a rotating and a stationary disk related to the development of an absorber unit for a high capacity spacecraft absorption cooling system, is described. The creation of artificial gravity by the use of a centrifugal field was focused upon in this report. Areas covered include: (1) One-dimensional computation of thin liquid film flows; (2) Experimental measurement of film height and visualization of flow; (3) Two-dimensional computation of the free surface flow of a thin liquid film using a pressure optimization method; (4) Computation of heat transfer in two-dimensional thin film flow; (5) Development of a new computational methodology for the free surface flows using a permeable wall; (6) Analysis of fluid flow and heat transfer in a thin film in the presence and absence of gravity; and (7) Comparison of theoretical prediction and experimental data. The basic phenomena related to fluid flow and heat transfer on rotating systems reported here can also be applied to other areas of space systems.

  6. Comparing DNA damage-processing pathways by computer analysis of chromosome painting data.

    PubMed

    Levy, Dan; Vazquez, Mariel; Cornforth, Michael; Loucas, Bradford; Sachs, Rainer K; Arsuaga, Javier

    2004-01-01

    Chromosome aberrations are large-scale illegitimate rearrangements of the genome. They are indicative of DNA damage and informative about damage processing pathways. Despite extensive investigations over many years, the mechanisms underlying aberration formation remain controversial. New experimental assays such as multiplex fluorescent in situ hybridization (mFISH) allow combinatorial "painting" of chromosomes and are promising for elucidating aberration formation mechanisms. Recently observed mFISH aberration patterns are so complex that computer and graph-theoretical methods are needed for their full analysis. An important part of the analysis is decomposing a chromosome rearrangement process into "cycles." A cycle of order n, characterized formally by the cyclic graph with 2n vertices, indicates that n chromatin breaks take part in a single irreducible reaction. We here describe algorithms for computing cycle structures from experimentally observed or computer-simulated mFISH aberration patterns. We show that analyzing cycles quantitatively can distinguish between different aberration formation mechanisms. In particular, we show that homology-based mechanisms do not generate the large number of complex aberrations, involving higher-order cycles, observed in irradiated human lymphocytes.
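
    A sketch of the cycle decomposition: each chromatin break contributes two free ends, and pairing the ends as they were before breakage versus after misrejoining yields a graph whose connected components are the irreducible cycles; the data structures below are illustrative:

        # Cycle orders from pre-break and post-rejoining end pairings.
        import networkx as nx

        def cycle_orders(original_pairs, rejoined_pairs):
            G = nx.MultiGraph()
            G.add_edges_from(original_pairs, kind="pre")
            G.add_edges_from(rejoined_pairs, kind="post")
            # A cycle of order n involves 2n ends; report n per component.
            return sorted(len(c) // 2 for c in nx.connected_components(G))

        # Two breaks (ends a1/a2 and b1/b2) whose ends are swapped on
        # rejoining form a single order-2 cycle (a simple exchange):
        print(cycle_orders([("a1", "a2"), ("b1", "b2")],
                           [("a1", "b2"), ("b1", "a2")]))   # -> [2]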

  7. The vehicle design evaluation program - A computer-aided design procedure for transport aircraft

    NASA Technical Reports Server (NTRS)

    Oman, B. H.; Kruse, G. S.; Schrader, O. E.

    1977-01-01

    The vehicle design evaluation program is described. This program is a computer-aided design procedure that provides a vehicle synthesis capability for vehicle sizing, external load analysis, structural analysis, and cost evaluation. The vehicle sizing subprogram provides geometry, weight, and balance data for aircraft using JP, hydrogen, or methane fuels. The structural synthesis subprogram uses a multistation analysis for aerodynamic surfaces and fuselages to develop theoretical weights and geometric dimensions. The parts definition subprogram uses the geometric data from the structural analysis and develops the predicted fabrication dimensions, parts material raw stock buy requirements, and predicted actual weights. The cost analysis subprogram uses detail part data in conjunction with standard hours, realization factors, labor rates, and material data to develop the manufacturing costs. The program is used to evaluate overall design effects on subsonic commercial type aircraft due to parameter variations.

  8. Frequency-Domain Identification Of Aeroelastic Modes

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.; Tischler, Mark B.

    1991-01-01

    This report describes flight measurements and frequency-domain analyses of the aeroelastic vibrational modes of the wings of the XV-15 tilt-rotor aircraft. It begins with a description of flight-test methods, followed by a brief discussion of methods of analysis, which include Fourier-transform computations using chirp z-transforms, the use of coherence and other spectral functions, and methods and computer programs to obtain frequencies and damping coefficients from measurements. It includes a brief description of the results of flight tests and comparisons among various experimental and theoretical results, and ends with a section on conclusions and recommended improvements in techniques.

  9. Distributed sensor networks: a cellular nonlinear network perspective.

    PubMed

    Haenggi, Martin

    2003-12-01

    Large-scale networks of integrated wireless sensors are becoming increasingly tractable. Advances in hardware technology and engineering design have led to dramatic reductions in size, power consumption, and cost for digital circuitry and wireless communications. Networking, self-organization, and distributed operation are crucial ingredients to harness the sensing, computing, and communication capabilities of the nodes into a complete system. This article shows that those networks can be considered as cellular nonlinear networks (CNNs), and that their analysis and design may greatly benefit from the rich theoretical results available for CNNs.
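
    The CNN analogy can be made concrete with the standard Chua-Yang state equation x' = -x + A*y + B*u + z and output y = 0.5(|x+1| - |x-1|); a one-dimensional Euler-integration sketch with invented template values:

        # One-dimensional CNN with 3-tap feedback (A) and feedforward (B) templates.
        import numpy as np

        def cnn_step(x, u, A, B, z, dt=0.05):
            y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))     # piecewise-linear output
            feedback = np.convolve(y, A, mode="same")
            feedforward = np.convolve(u, B, mode="same")
            return x + dt * (-x + feedback + feedforward + z)

        x = np.zeros(32)
        u = np.random.default_rng(0).uniform(-1, 1, 32)
        A = np.array([0.0, 2.0, 0.0]); B = np.array([0.0, 1.0, 0.0]); z = 0.0
        for _ in range(400):
            x = cnn_step(x, u, A, B, z)
        print(np.sign(x))                                  # settled cell states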

  10. A survey of computational aerodynamics in the United States

    NASA Technical Reports Server (NTRS)

    Gessow, A.; Morris, D. J.

    1977-01-01

    Programs in theoretical and computational aerodynamics in the United States are described. Those aspects of programs that relate to aeronautics are detailed. The role of analysis at various levels of sophistication is discussed as well as the inverse solution techniques that are of primary importance in design methodology. The research is divided into the broad categories of application for boundary layer flow, Navier-Stokes turbulence modeling, internal flows, two-dimensional configurations, subsonic and supersonic aircraft, transonic aircraft, and the space shuttle. A survey of representative work in each area is presented.

  11. A study of the dynamics of rotating space stations with elastically connected counterweight and attached flexible appendages. Volume 1: Theory

    NASA Technical Reports Server (NTRS)

    Austin, F.; Markowitz, J.; Goldenberg, S.; Zetkov, G. A.

    1973-01-01

    The formulation of a mathematical model for predicting the dynamic behavior of rotating flexible space station configurations was conducted. The overall objectives of the study were: (1) to develop the theoretical techniques for determining the behavior of a realistically modeled rotating space station, (2) to provide a versatile computer program for the numerical analysis, and (3) to present practical concepts for experimental verification of the analytical results. The mathematical model and its associated computer program are described.

  12. Oklahoma Center for High Energy Physics (OCHEP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, S; Strauss, M J; Snow, J

    2012-02-29

    The DOE EPSCoR implementation grant, with support from the State of Oklahoma and from the three universities, Oklahoma State University, University of Oklahoma and Langston University, resulted in the establishment of the Oklahoma Center for High Energy Physics (OCHEP) in 2004. Currently, OCHEP continues to flourish as a vibrant hub for research in experimental and theoretical particle physics and an educational center in the State of Oklahoma. All goals of the original proposal were successfully accomplished. These include the foundation of a new experimental particle physics group at OSU, the establishment of a Tier 2 computing facility for Large Hadron Collider (LHC) and Tevatron data analysis at OU, and the organization of a vital particle physics research center in Oklahoma based on the resources of the three universities. OSU has hired two tenure-track faculty members with initial support from the grant funds. Now both positions are supported through the OSU budget. This new HEP experimental group at OSU has established itself as a full member of the Fermilab D0 Collaboration and the LHC ATLAS Experiment and has secured external funds from the DOE and the NSF. These funds currently support 2 graduate students, 1 postdoctoral fellow, and 1 part-time engineer. The grant initiated the creation of a Tier 2 computing facility at OU as part of the Southwest Tier 2 facility, and a permanent Research Scientist was hired at OU to maintain and run the facility. Permanent support for this position has now been provided through the OU university budget. OCHEP represents a successful model of cooperation among several universities, establishing a critical mass of manpower, computing and hardware resources. This has increased Oklahoma's impact in all areas of HEP: theory, experiment, and computation. The Center personnel are involved in cutting edge research in experimental, theoretical, and computational aspects of High Energy Physics, with research areas ranging from the search for new phenomena at the Fermilab Tevatron and the CERN Large Hadron Collider to theoretical modeling, computer simulation, detector development and testing, and physics analysis. OCHEP faculty members participating in the D0 collaboration at the Fermilab Tevatron and the ATLAS collaboration at the CERN LHC have made a major impact on the Standard Model (SM) Higgs boson search, top quark studies, B physics studies, and measurements of Quantum Chromodynamics (QCD) phenomena. The OCHEP Grid computing facility consists of a large computer cluster which is playing a major role in data analysis and Monte Carlo production for both the D0 and ATLAS experiments. Theoretical efforts are devoted to new ideas in Higgs boson physics, extra dimensions, neutrino masses and oscillations, Grand Unified Theories, supersymmetric models, dark matter, and nonperturbative quantum field theory. Theory members are making major contributions to the understanding of phenomena being explored at the Tevatron and the LHC. They have proposed new models for Higgs bosons and have suggested new signals for extra dimensions and for the search for supersymmetric particles. During the seven-year period when OCHEP was partially funded through the DOE EPSCoR implementation grant, OCHEP members published over 500 refereed journal articles and made over 200 invited presentations at major conferences.
The Center is also involved in education and outreach activities by offering summer research programs for high school teachers and college students, and organizing summer workshops for high school teachers, sometimes coordinating with the QuarkNet programs at OSU and OU. Details of the Center can be found at http://ochep.phy.okstate.edu.

  13. A Model for New Linkages for Prior Learning Assessment

    ERIC Educational Resources Information Center

    Kalz, Marco; van Bruggen, Jan; Giesbers, Bas; Waterink, Wim; Eshuis, Jannes; Koper, Rob

    2008-01-01

    Purpose: The purpose of this paper is twofold: first the paper aims to sketch the theoretical basis for the use of electronic portfolios for prior learning assessment; second it endeavours to introduce latent semantic analysis (LSA) as a powerful method for the computation of semantic similarity between texts and a basis for a new observation link…

  14. Measuring Prevalence of Other-Oriented Transactive Contributions Using an Automated Measure of Speech Style Accommodation

    ERIC Educational Resources Information Center

    Gweon, Gahgene; Jain, Mahaveer; McDonough, John; Raj, Bhiksha; Rose, Carolyn P.

    2013-01-01

    This paper contributes to a theory-grounded methodological foundation for automatic collaborative learning process analysis. It does this by illustrating how insights from the social psychology and sociolinguistics of speech style provide a theoretical framework to inform the design of a computational model. The purpose of that model is to detect…

  15. The Learners' Experience of Variation: Following Students' Threads of Learning Physics in Computer Simulation Sessions

    ERIC Educational Resources Information Center

    Ingerman, Ake; Linder, Cedric; Marshall, Delia

    2009-01-01

    This article attempts to describe students' process of learning physics using the notion of experiencing variation as the basic mechanism for learning, and thus explores what variation, with respect to a particular object of learning, that students experience in their process of constituting understanding. Theoretically, the analysis relies on…

  16. A Theoretical Analysis of Learning with Graphics--Implications for Computer Graphics Design.

    ERIC Educational Resources Information Center

    ChanLin, Lih-Juan

    This paper reviews the literature pertinent to learning with graphics. The dual coding theory provides an explanation of how graphics are stored and processed in semantic memory. The level of processing theory suggests how graphics can be employed in learning to encourage deeper processing. In addition to dual coding theory and level of processing…

  17. BOPACE 3-D (the Boeing Plastic Analysis Capability for 3-dimensional Solids Using Isoparametric Finite Elements)

    NASA Technical Reports Server (NTRS)

    Vos, R. G.; Straayer, J. W.

    1975-01-01

    BOPACE 3-D is a finite element computer program that provides a general family of three-dimensional isoparametric solid elements and includes a new algorithm for improving the efficiency of the elastic-plastic-creep solution procedure. Theoretical, user, and programmer oriented sections are presented to describe the program.

  18. Nuclear system that burns its own wastes shows promise

    NASA Technical Reports Server (NTRS)

    Atchison, K.

    1975-01-01

    A nuclear fission energy system, capable of eliminating a significant amount of its radioactive wastes by burning them, is described. A theoretical investigation of this system, conducted by computer analysis, is based on the use of gaseous fuel nuclear reactors. Gaseous core reactors using a uranium plasma fuel are studied, along with development for space propulsion.

  19. A microcomputer program for analysis of nucleic acid hybridization data

    PubMed Central

    Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.

    1982-01-01

    The study of nucleic acid hybridization is facilitated by computer-mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the `Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. PMID:7071017
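
    The `Patternsearch' fit can be sketched in Python instead of Apple II BASIC; below is a simplified exploratory-move (Hooke-Jeeves-style) search fitting a pseudo-first-order hybridization curve, with an invented model and synthetic data:

        # Derivative-free pattern search minimizing a sum of squared errors.
        import numpy as np

        def sse(params, t, y):
            a, k = params
            return np.sum((y - a * (1 - np.exp(-k * t))) ** 2)

        def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-8, args=()):
            x, s = np.asarray(x0, float), step
            fx = f(x, *args)
            while s > tol:
                improved = False
                for i in range(len(x)):
                    for d in (+s, -s):
                        trial = x.copy(); trial[i] += d
                        ft = f(trial, *args)
                        if ft < fx:
                            x, fx, improved = trial, ft, True
                if not improved:
                    s *= shrink            # no better neighbour: refine the mesh
            return x, fx

        t = np.linspace(0, 10, 25)
        y = 2.0 * (1 - np.exp(-0.7 * t))   # noise-free synthetic data
        print(pattern_search(sse, [1.0, 1.0], args=(t, y)))   # ~(2.0, 0.7)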

  20. Preliminary eddy current modelling for the large angle magnetic suspension test fixture

    NASA Technical Reports Server (NTRS)

    Britcher, Colin

    1994-01-01

    This report presents some recent developments in the mathematical modeling of eddy currents in the Large Angle Magnetic Suspension Test Fixture (LAMSTF) at NASA Langley Research Center. It is shown that eddy current effects are significant, but may be amenable to analysis, modeling and measurement. A theoretical framework is presented, together with a comparison of computed and experimental data.

  1. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
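
    The variogram idea at the core of VARS, sketched for a single factor: estimate the directional variogram gamma_i(h) = 0.5*E[(y(x + h*e_i) - y(x))^2] by differencing along factor i (a toy version, not the authors' STAR-VARS sampling scheme):

        # Directional variogram of a model response along one factor.
        import numpy as np

        def directional_variogram(model, x_samples, i, h):
            dx = np.zeros(x_samples.shape[1]); dx[i] = h
            d = np.array([model(x + dx) - model(x) for x in x_samples])
            return 0.5 * np.mean(d ** 2)

        model = lambda x: np.sin(x[0]) + 5 * x[1] ** 2     # toy response surface
        X = np.random.default_rng(0).uniform(0, 1, size=(500, 2))
        for h in (0.05, 0.1, 0.3):
            print(h, [directional_variogram(model, X, i, h) for i in (0, 1)])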

  2. PLANS: A finite element program for nonlinear analysis of structures. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Pifko, A.; Levine, H. S.; Armen, H., Jr.

    1975-01-01

    The PLANS system is described which is a finite element program for nonlinear analysis. The system represents a collection of special purpose computer programs each associated with a distinct physical problem class. Modules of PLANS specifically referenced and described in detail include: (1) REVBY, for the plastic analysis of bodies of revolution; (2) OUT-OF-PLANE, for the plastic analysis of 3-D built-up structures where membrane effects are predominant; (3) BEND, for the plastic analysis of built-up structures where bending and membrane effects are significant; (4) HEX, for the 3-D elastic-plastic analysis of general solids; and (5) OUT-OF-PLANE-MG, for material and geometrically nonlinear analysis of built-up structures. The SATELLITE program for data debugging and plotting of input geometries is also described. The theoretical foundations upon which the analysis is based are presented. Discussed are the form of the governing equations, the methods of solution, plasticity theories available, a general system description and flow of the programs, and the elements available for use.

  3. Asymptotic approximation method of force reconstruction: Application and analysis of stationary random forces

    NASA Astrophysics Data System (ADS)

    Sanchez, J.

    2018-06-01

    In this paper, the asymptotic approximation method of force reconstruction is applied to a single-degree-of-freedom system and analyzed. The original concepts are summarized, the necessary probabilistic concepts are developed and applied to single-degree-of-freedom systems, and these concepts are then united into theoretical and computational models. To determine the viability of the proposed method in a probabilistic context, numerical experiments are conducted, consisting of a frequency analysis, an analysis of the effects of measurement noise, and a statistical analysis. In addition, two examples are presented and discussed.

  4. Cosmochemistry: Understanding the Solar System through analysis of extraterrestrial materials.

    PubMed

    MacPherson, Glenn J; Thiemens, Mark H

    2011-11-29

    Cosmochemistry is the chemical analysis of extraterrestrial materials. This term generally is taken to mean laboratory analysis, which is the cosmochemistry gold standard because of the ability for repeated analysis under highly controlled conditions using the most advanced instrumentation unhindered by limitations in power, space, or environment. Over the past 40 y, advances in technology have enabled telescopic and spacecraft instruments to provide important data that significantly complement the laboratory data. In this special edition, recent advances in the state of the art of cosmochemistry are presented, which range from instrumental analysis of meteorites to theoretical-computational and astronomical observations.

  5. Information-Theoretical Complexity Analysis of Selected Elementary Chemical Reactions

    NASA Astrophysics Data System (ADS)

    Molina-Espíritu, M.; Esquivel, R. O.; Dehesa, J. S.

    We investigate the complexity of selected elementary chemical reactions (namely, the hydrogenic-abstraction reaction and the identity SN2 exchange reaction) by means of the following single and composite information-theoretic measures: disequilibrium (D), exponential entropy (L), Fisher information (I), power entropy (J), I-D, D-L and I-J planes, and Fisher-Shannon (FS) and Lopez-Mancini-Calbet (LMC) shape complexities. These quantities, which are functionals of the one-particle density, are computed in both position (r) and momentum (p) spaces. The analysis revealed that the chemically significant regions of these reactions can be identified through most of the single information-theoretic measures and the two-component planes: not only the ones commonly revealed by the energy, such as the reactant/product (R/P) and transition state (TS) regions, but also those that are not present in the energy profile, such as the bond cleavage energy region (BCER), the bond breaking/forming regions (B-B/F) and the charge transfer process (CT). The analysis of the complexities shows that the energy profile of the abstraction reaction bears the same information-theoretical features as the LMC and FS measures, whereas the identity SN2 exchange reaction does not exhibit such simple behavior with respect to the LMC and FS measures. Most of the chemical features of interest (BCER, B-B/F and CT) are only revealed when particular information-theoretic aspects of localizability (L or J), uniformity (D) and disorder (I) are considered.
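
    The single measures can be sketched for a density given on a one-dimensional grid (discretized forms; the Gaussian test density makes the expected values easy to check):

        # Disequilibrium D, exponential entropy L, Fisher information I
        # for a normalized 1-D density on a uniform grid.
        import numpy as np

        def info_measures(rho, x):
            dx = x[1] - x[0]
            rho = rho / (rho.sum() * dx)              # normalize
            D = (rho ** 2).sum() * dx                 # disequilibrium
            S = -(rho * np.log(rho + 1e-300)).sum() * dx
            L = np.exp(S)                             # exponential entropy
            drho = np.gradient(rho, dx)
            I = (drho ** 2 / (rho + 1e-300)).sum() * dx   # Fisher information
            return D, L, I

        x = np.linspace(-8, 8, 2001)
        rho = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
        # Unit Gaussian: D = 1/(2*sqrt(pi)), L = sqrt(2*pi*e), I = 1.
        print(info_measures(rho, x))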

  6. Evolutionary game theory for physical and biological scientists. I. Training and validating population dynamics equations.

    PubMed

    Liao, David; Tlsty, Thea D

    2014-08-06

    Failure to understand evolutionary dynamics has been hypothesized as limiting our ability to control biological systems. An increasing awareness of similarities between macroscopic ecosystems and cellular tissues has inspired optimism that game theory will provide insights into the progression and control of cancer. To realize this potential, the ability to compare game theoretic models and experimental measurements of population dynamics should be broadly disseminated. In this tutorial, we present an analysis method that can be used to train parameters in game theoretic dynamics equations, used to validate the resulting equations, and used to make predictions to challenge these equations and to design treatment strategies. The data analysis techniques in this tutorial are adapted from the analysis of reaction kinetics using the method of initial rates taught in undergraduate general chemistry courses. Reliance on computer programming is avoided to encourage the adoption of these methods as routine bench activities.
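
    The method-of-initial-rates adaptation can be sketched as regressing log counts on time over the early, approximately exponential part of a growth curve; the data below are synthetic placeholders:

        # Per-capita growth rate from the initial slope of log counts.
        import numpy as np

        def initial_rate(t, counts, n_early=4):
            slope, _ = np.polyfit(t[:n_early], np.log(counts[:n_early]), 1)
            return slope

        t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
        counts = 100 * np.exp(0.32 * t) * (1 + 0.02 * np.sin(t))  # noisy growth
        print(initial_rate(t, counts))    # ~0.32 per unit time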

  7. Angle-of-Arrival Assisted GNSS Collaborative Positioning.

    PubMed

    Huang, Bin; Yao, Zheng; Cui, Xiaowei; Lu, Mingquan

    2016-06-20

    For outdoor and global navigation satellite system (GNSS)-challenged scenarios, collaborative positioning algorithms are proposed to fuse information from GNSS satellites and terrestrial wireless systems. This paper derives the Cramer-Rao lower bound (CRLB) and algorithms for angle-of-arrival (AOA)-assisted GNSS collaborative positioning. Based on the CRLB model and the collaborative positioning algorithms, theoretical analyses are performed to specify the effects of various factors on the accuracy of collaborative positioning, including the number of users, their distribution and the AOA measurement accuracy. Besides, the influence of the relative locations of the collaborative users is also discussed in order to choose appropriate neighboring users, which helps reduce computational complexity. Simulations and an actual experiment are carried out with several GNSS receivers in different scenarios, and the results are consistent with the theoretical analysis.
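
    A skeleton of the CRLB computation for a two-dimensional position fix that fuses GNSS pseudoranges with one AOA measurement from a neighboring user; the geometry and noise levels are illustrative, not the paper's values:

        # Fisher information = GNSS line-of-sight term + AOA bearing term;
        # the CRLB is its inverse.
        import numpy as np

        def crlb_position(sat_dirs, sigma_rho, neighbor, user, sigma_aoa):
            # GNSS part: unit line-of-sight vectors, range noise sigma_rho.
            J = sum(np.outer(u, u) for u in sat_dirs) / sigma_rho ** 2
            # AOA part: Jacobian of atan2(dy, dx) w.r.t. user position.
            dx, dy = user - neighbor
            r2 = dx ** 2 + dy ** 2
            g = np.array([-dy, dx]) / r2
            J = J + np.outer(g, g) / sigma_aoa ** 2
            return np.linalg.inv(J)                # covariance lower bound

        sats = [np.array([np.cos(a), np.sin(a)]) for a in (0.3, 1.2, 2.5)]
        P = crlb_position(sats, 2.0, np.array([0.0, 0.0]),
                          np.array([30.0, 40.0]), 0.01)
        print(np.sqrt(np.trace(P)))                # position error bound (m)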

  8. Insight into the theoretical and experimental studies of 1-phenyl-3-methyl-4-benzoyl-5-pyrazolone N(4)-methyl-N(4)- phenylthiosemicarbazone - A potential NLO material

    NASA Astrophysics Data System (ADS)

    Sangeetha, K. G.; Aravindakshan, K. K.; Safna Hussan, K. P.

    2017-12-01

    The synthesis, geometrical parameters, spectroscopic studies, optimised molecular structure, vibrational analysis, Mulliken population analysis, MEP, NBO, frontier molecular orbitals and NLO effects of 1-phenyl-3-methyl-4-benzoyl-5-pyrazolone N-(4)-methyl-N-(4)-phenylthiosemicarbazone, C25H23N5OS (L1), are reported in this paper. A combined experimental and theoretical approach was used to explore the structure and properties of the compound. For the computational studies, the Gaussian 09 program was used. The starting geometry of the molecule was taken from X-ray refinement data and optimized using the DFT (B3LYP) method with the 6-31+G(d,p) basis set. NBO analysis gave insight into the strongly delocalized structure responsible for the nonlinearity and hence the stability of the molecule. Frontier molecular orbitals were used to predict the global reactivity descriptors of L1. The computed first-order hyperpolarizability (β) of the compound is 2 times higher than that of urea, and this accounts for its nonlinear optical property. In addition, a molecular docking study of the compound was performed using the GLIDE program, with three biological enzymes, histone deacetylase, ribonucleotide reductase and DNA methyltransferase, selected as receptor molecules.
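
    For readers who want to reproduce this type of workflow with open-source tools, the sketch below runs a B3LYP calculation and Mulliken population analysis with PySCF on a small placeholder molecule; note that the paper itself used Gaussian 09 with the larger 6-31+G(d,p) basis on the full thiosemicarbazone.

```python
from pyscf import gto, dft

# Hedged sketch of a B3LYP workflow of the kind described above, using the
# open-source PySCF package on a small placeholder molecule (water).
mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
            basis="6-31g*")
mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()                               # SCF total energy

homo = mf.mo_energy[mol.nelectron // 2 - 1]
lumo = mf.mo_energy[mol.nelectron // 2]
print("HOMO-LUMO gap (Hartree):", lumo - homo)
pop, charges = mf.mulliken_pop()          # Mulliken population analysis
```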

  9. Model-based diagnosis through Structural Analysis and Causal Computation for automotive Polymer Electrolyte Membrane Fuel Cell systems

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare

    2017-07-01

    The present paper proposes an advanced approach to fault detection and isolation for Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The algorithm is developed upon a lumped parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with additional attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations, and a set of faults which may occur in the system (related to both auxiliary component malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The achieved results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is demonstrated through fault simulations at a specific fault magnitude with an advanced residual evaluation technique that considers quantitative residual deviations from normal conditions and achieves univocal fault isolation.
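
    A toy sketch of the final residual-evaluation step (the signatures and thresholds below are illustrative inventions, not the paper's PEMFC model): thresholded residuals form a binary pattern that is matched against a fault signature matrix.

```python
import numpy as np

# Hedged sketch of residual-based fault isolation. Each fault has a binary
# signature over the residuals (r1, r2, r3); a measured pattern is matched
# against the table. All names and numbers here are illustrative.

SIGNATURES = {
    "compressor_fault": (1, 0, 1),
    "cooling_fault":    (0, 1, 1),
    "membrane_drying":  (1, 1, 0),
}

def isolate(residuals, thresholds):
    pattern = tuple(int(abs(r) > t) for r, t in zip(residuals, thresholds))
    matches = [f for f, s in SIGNATURES.items() if s == pattern]
    return matches or ["no fault" if not any(pattern) else "unknown fault"]

r = np.array([0.9, 0.1, 1.4])        # residuals: model minus measurements
print(isolate(r, thresholds=[0.5, 0.5, 0.5]))   # -> ['compressor_fault']
```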

  10. Reactivity of etoricoxib based on computational study of molecular orbitals, molecular electrostatic potential surface and Mulliken charge analysis

    NASA Astrophysics Data System (ADS)

    Sachdeva, Ritika; Soni, Abhinav; Singh, V. P.; Saini, G. S. S.

    2018-05-01

    Etoricoxib is a selective cyclooxygenase inhibitor drug that plays a significant role in the pharmacological management of arthritis and pain. Its reactivity is investigated theoretically using Density Functional Theory calculations. The Molecular Electrostatic Potential Surface of etoricoxib and its Mulliken atomic charge distribution are used to predict its electrophilic and nucleophilic sites. A detailed analysis of its frontier molecular orbitals is also presented.

  11. Computational manufacturing as a bridge between design and production.

    PubMed

    Tikhonravov, Alexander V; Trubetskov, Michael K

    2005-11-10

    Computational manufacturing of optical coatings is a research area that can be placed between theoretical designing and practical manufacturing in the same way that computational physics can be placed between theoretical and experimental physics. Investigations in this area have been performed for more than 30 years under the name of computer simulation of manufacturing and monitoring processes. Our goal is to attract attention to the increasing importance of computational manufacturing at the current state of the art in the design and manufacture of optical coatings and to demonstrate possible applications of this research tool.

  13. Computation of elementary modes: a unifying framework and the new binary approach

    PubMed Central

    Gagneur, Julien; Klamt, Steffen

    2004-01-01

    Background Metabolic pathway analysis has been recognized as a central approach to the structural analysis of metabolic networks. The concept of elementary (flux) modes provides a rigorous formalism to describe and assess pathways and has proven to be valuable for many applications. However, computing elementary modes is a hard computational task. Recent years have seen a proliferation of algorithms dedicated to it, calling for a summarizing point of view and continued improvement of the current methods. Results We show that computing the set of elementary modes is equivalent to computing the set of extreme rays of a convex cone. This standard mathematical representation provides a unified framework that encompasses the most prominent algorithmic methods that compute elementary modes and allows a clear comparison between them. Taking lessons from this benchmark, we here introduce a new method, the binary approach, which computes the elementary modes as binary patterns of participating reactions from which the respective stoichiometric coefficients can be computed in a post-processing step. We implemented the binary approach in FluxAnalyzer 5.1, software that is free for academic use. The binary approach decreases the memory demand by up to 96% without loss of speed, making it the most efficient method available for computing elementary modes to date. Conclusions The equivalence between elementary modes and extreme ray computations offers opportunities for employing tools from polyhedral computation for metabolic pathway analysis. The new binary approach introduced herein was derived from this general theoretical framework and facilitates the computation of elementary modes in considerably larger networks. PMID:15527509
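
    The post-processing step of the binary approach can be illustrated in a few lines: given a stoichiometric matrix and a candidate binary pattern of participating reactions, the flux coefficients span the null space of the corresponding submatrix, and the pattern corresponds to an elementary mode only if that null space is one-dimensional. The toy network below is mine, not FluxAnalyzer.

```python
import numpy as np

def coefficients_from_support(S, support, tol=1e-10):
    """Recover flux coefficients (up to scale and sign) for a binary pattern
    of participating reactions; returns None if the pattern is not a single
    elementary mode."""
    sub = S[:, support]
    _, s, vt = np.linalg.svd(sub)
    null_dim = sum(sv < tol for sv in s) + max(0, sub.shape[1] - len(s))
    if null_dim != 1:
        return None
    v = vt[-1]                           # spans the 1D null space
    flux = np.zeros(S.shape[1])
    flux[support] = v / np.max(np.abs(v))
    return flux

S = np.array([[ 1, -1,  0,  0],          # A produced by r0, consumed by r1
              [ 0,  1, -1, -1]])         # B produced by r1, consumed by r2, r3
print(coefficients_from_support(S, [0, 1, 2]))   # a valid mode: r0=r1=r2
```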

  14. Dynamics of global supply chain and electric power networks: Models, pricing analysis, and computations

    NASA Astrophysics Data System (ADS)

    Matsypura, Dmytro

    In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains with power generators, suppliers, transmission service providers, and the inclusion of consumer demands. In particular, I advocate the application of finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools that have been recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the extant literature on the modeling, analysis, and solution of supply chain networks, including global supply chains, in general, and electric power supply chains, in particular, in the following ways. It develops a theoretical framework for modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale for supply chain analysis. The models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of the theory of projected dynamical systems by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior into the supply chain modeling and analysis framework. This dissertation has strong practical implications. In an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management of electric power systems and pricing become increasingly pressing topics with relevance not only for economic prosperity but also national security. This dissertation addresses such related topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. This dissertation is based heavily on the following coauthored papers: Nagurney, Cruz, and Matsypura (2003), Nagurney and Matsypura (2004, 2005, 2006), Matsypura and Nagurney (2005), Matsypura, Nagurney, and Liu (2006).

  15. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    NASA Technical Reports Server (NTRS)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.
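
    To give a flavor of how graph-isomorphism machinery can identify chemical structures, the sketch below uses the Weisfeiler-Lehman graph hash from networkx (a standard isomorphism-invariant fingerprint, chosen here purely for illustration; it is not the Ulam index itself) to tell two C4H10 isomers apart.

```python
import networkx as nx

def carbon_skeleton(edges):
    """Build a molecular carbon skeleton as a labeled graph."""
    g = nx.Graph(edges)
    nx.set_node_attributes(g, "C", "element")
    return g

n_butane  = carbon_skeleton([(0, 1), (1, 2), (2, 3)])   # linear chain
isobutane = carbon_skeleton([(0, 1), (1, 2), (1, 3)])   # branched

h1 = nx.weisfeiler_lehman_graph_hash(n_butane, node_attr="element")
h2 = nx.weisfeiler_lehman_graph_hash(isobutane, node_attr="element")
print(h1 == h2)   # False: the isomers are distinguished
```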

  16. Information-Theoretical Analysis of EEG Microstate Sequences in Python.

    PubMed

    von Wegner, Frederic; Laufs, Helmut

    2018-01-01

    We present an open-source Python package to compute information-theoretical quantities for electroencephalographic data. Electroencephalography (EEG) measures the electrical potential generated by the cerebral cortex and the set of spatial patterns projected by the brain's electrical potential on the scalp surface can be clustered into a set of representative maps called EEG microstates. Microstate time series are obtained by competitively fitting the microstate maps back into the EEG data set, i.e., by substituting the EEG data at a given time with the label of the microstate that has the highest similarity with the actual EEG topography. As microstate sequences consist of non-metric random variables, e.g., the letters A-D, we recently introduced information-theoretical measures to quantify these time series. In wakeful resting state EEG recordings, we found new characteristics of microstate sequences such as periodicities related to EEG frequency bands. The algorithms used are here provided as an open-source package and their use is explained in a tutorial style. The package is self-contained and the programming style is procedural, focusing on code intelligibility and easy portability. Using a sample EEG file, we demonstrate how to perform EEG microstate segmentation using the modified K-means approach, and how to compute and visualize the recently introduced information-theoretical tests and quantities. The time-lagged mutual information function is derived as a discrete symbolic alternative to the autocorrelation function for metric time series and confidence intervals are computed from Markov chain surrogate data. The software package provides an open-source extension to the existing implementations of the microstate transform and is specifically designed to analyze resting state EEG recordings.
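
    A minimal stand-alone sketch of one of the quantities described, the time-lagged mutual information of a symbolic (microstate label) sequence, written in plain NumPy rather than taken from the authors' package:

```python
import numpy as np

def lagged_mutual_information(labels, max_lag):
    """Mutual information between x_t and x_{t+k} for k = 1..max_lag,
    the discrete analogue of an autocorrelation function."""
    labels = np.asarray(labels)
    symbols = np.unique(labels)
    mi = []
    for k in range(1, max_lag + 1):
        a, b = labels[:-k], labels[k:]
        m = np.zeros((symbols.size, symbols.size))   # joint distribution
        for i, s in enumerate(symbols):
            for j, t in enumerate(symbols):
                m[i, j] = np.mean((a == s) & (b == t))
        px, py = m.sum(1, keepdims=True), m.sum(0, keepdims=True)
        nz = m > 0
        mi.append(np.sum(m[nz] * np.log(m[nz] / (px @ py)[nz])))
    return np.array(mi)

rng = np.random.default_rng(1)
seq = rng.choice(list("ABCD"), size=5000)        # memoryless surrogate
print(lagged_mutual_information(seq, 5))         # ~0 at all lags
```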

  17. Strain energy release rates of composite interlaminar end-notch and mixed-mode fracture: A sublaminate/ply level analysis and a computer code

    NASA Technical Reports Server (NTRS)

    Valisetty, R. R.; Chamis, C. C.

    1987-01-01

    A computer code is presented for the sublaminate/ply level analysis of composite structures. This code is useful for obtaining stresses in regions affected by delaminations, transverse cracks, and discontinuities related to inherent fabrication anomalies, geometric configurations, and loading conditions. Particular attention is focused on those layers or groups of layers (sublaminates) which are immediately affected by the inherent flaws. These layers are analyzed as homogeneous bodies in equilibrium and in isolation from the rest of the laminate. The theoretical model used to analyze the individual layers allows the relevant stresses and displacements near discontinuities to be represented in the form of pure exponential-decay-type functions which are selected to eliminate the exponential-precision-related difficulties in sublaminate/ply level analysis. Thus, sublaminate analysis can be conducted without any restriction on the maximum number of layers, delaminations, transverse cracks, or other types of discontinuities. In conjunction with the strain energy release rate (SERR) concept and composite micromechanics, this computational procedure is used to model select cases of end-notch and mixed-mode fracture specimens. The computed stresses are in good agreement with those from a three-dimensional finite element analysis. The SERRs also compare well with the limited available experimental data.

  18. Electro-optical processing of phased array data

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1973-01-01

    An on-line spatial light modulator for application as the input transducer for a real-time optical data processing system is described. The use of such a device in the analysis and processing of radar data in real time is reported. An interface from the optical processor to a control digital computer was designed, constructed, and tested. The input transducer, optical system, and computer interface have been operated in real time with live radar data: the input data returns were recorded on the input crystal, processed by the optical system, and the output plane pattern digitized, thresholded, and output to a display and to storage in the computer memory. The correlation of theoretical and experimental results is discussed.

  19. Dynamic properties of epidemic spreading on finite size complex networks

    NASA Astrophysics Data System (ADS)

    Li, Ying; Liu, Yang; Shan, Xiu-Ming; Ren, Yong; Jiao, Jian; Qiu, Ben

    2005-11-01

    The Internet presents a complex topological structure, on which computer viruses can easily spread. By using theoretical analysis and computer simulation methods, the dynamic process of disease spreading on finite size networks with complex topological structure is investigated. On finite size networks, the spreading process of the SIS (susceptible-infected-susceptible) model is a finite Markov chain with an absorbing state. Two parameters, the survival probability and the conditional infecting probability, are introduced to describe the dynamic properties of disease spreading on finite size networks. Our results can help in understanding computer virus epidemics and other spreading phenomena on communication and social networks. Knowledge of the dynamic character of virus spreading is also helpful for adopting immunization policies.
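
    A compact simulation of the process described (with illustrative parameters, not the paper's): discrete-time SIS dynamics on a finite scale-free graph, with the survival probability estimated as the fraction of runs that have not reached the absorbing virus-free state by a time horizon.

```python
import random
import networkx as nx

def sis_survives(g, beta=0.2, gamma=0.1, t_max=200, seed=None):
    """One SIS run from a random seed node; True if the epidemic is still
    alive (absorbing state not reached) after t_max steps."""
    rng = random.Random(seed)
    infected = {rng.choice(list(g))}
    for _ in range(t_max):
        if not infected:                       # absorbing state reached
            return False
        new = set()
        for v in infected:
            if rng.random() > gamma:           # stays infected
                new.add(v)
            for u in g[v]:                     # tries to infect neighbours
                if u not in infected and rng.random() < beta:
                    new.add(u)
        infected = new
    return True

g = nx.barabasi_albert_graph(500, 2, seed=42)  # finite scale-free network
runs = 200
print(sum(sis_survives(g, seed=i) for i in range(runs)) / runs)  # survival prob.
```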

  20. A digital computer program for the dynamic interaction simulation of controls and structure (DISCOS), volume 1

    NASA Technical Reports Server (NTRS)

    Bodley, C. S.; Devers, A. D.; Park, A. C.; Frisch, H. P.

    1978-01-01

    A theoretical development and associated digital computer program system for the dynamic simulation and stability analysis of passive and actively controlled spacecraft are presented. The dynamic system (spacecraft) is modeled as an assembly of rigid and/or flexible bodies not necessarily in a topological tree configuration. The computer program system is used to investigate total system dynamic characteristics, including interaction effects between rigid and/or flexible bodies, control systems, and a wide range of environmental loadings. In addition, the program system is used for designing attitude control systems and for evaluating total dynamic system performance, including time domain response and frequency domain stability analyses.

  1. An inlet analysis for the NASA hypersonic research engine aerothermodynamic integration model

    NASA Technical Reports Server (NTRS)

    Andrews, E. H., Jr.; Russell, J. W.; Mackley, E. A.; Simmonds, A. L.

    1974-01-01

    A theoretical analysis of the inlet of the NASA Hypersonic Research Engine (HRE) Aerothermodynamic Integration Model (AIM) has been undertaken by use of a method-of-characteristics computer program. The purpose of the analysis was to obtain pretest information on the full-scale HRE inlet in support of the experimental AIM program (completed May 1974). Mass-flow-ratio and additive-drag-coefficient schedules were obtained that well defined the range covered in the AIM tests. Mass-weighted average inlet total-pressure recovery, kinetic energy efficiency, and throat Mach numbers were also obtained.

  2. Tug-of-war lacunarity—A novel approach for estimating lacunarity

    NASA Astrophysics Data System (ADS)

    Reiss, Martin A.; Lemmerer, Birgit; Hanslmeier, Arnold; Ahammer, Helmut

    2016-11-01

    Modern instrumentation provides us with massive repositories of digital images that will likely only grow in the future. It has therefore become increasingly important to automate the analysis of digital images, e.g., with methods from pattern recognition, which aim to quantify the visual appearance of captured textures with quantitative measures. Lacunarity is a useful multi-scale measure of a texture's heterogeneity, but it demands high computational effort. Here we investigate a novel approach based on the tug-of-war algorithm, which estimates lacunarity in a single pass over the image. We computed lacunarity for theoretical and real-world sample images and found that the investigated approach is able to estimate lacunarity with low uncertainty. We conclude that the proposed method combines low computational effort with high accuracy, and that it may be useful in the analysis of high-resolution images.
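
    The idea can be sketched in a few lines: lacunarity at a given box size is N·F2/F1², where F1 and F2 are the first and second moments of the box masses, and the tug-of-war (AMS) technique estimates F2 in a single pass using random ±1 box signs. The sketch below simplifies the paper's estimator (a stored sign table instead of a hash function, a plain mean instead of medians of means).

```python
import numpy as np

def tug_of_war_lacunarity(img, box, n_sketches=64, seed=0):
    """Single-pass lacunarity estimate at one box size via an AMS sketch."""
    rng = np.random.default_rng(seed)
    ny, nx = img.shape[0] // box, img.shape[1] // box
    # In a true streaming version these signs would come from a hash function.
    signs = rng.choice([-1.0, 1.0], size=(n_sketches, ny * nx))
    sketches = np.zeros(n_sketches)
    total = 0.0
    for iy in range(ny * box):                 # single pass over pixels
        for ix in range(nx * box):
            b = (iy // box) * nx + (ix // box) # box id of this pixel
            v = img[iy, ix]
            total += v                         # F1 accumulator
            sketches += signs[:, b] * v
    f2 = np.mean(sketches**2)                  # E[S^2] = sum of squared masses
    return ny * nx * f2 / total**2

img = np.random.default_rng(3).random((64, 64))
print(tug_of_war_lacunarity(img, box=8))       # ~1 for homogeneous noise
```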

  3. Quantum-Theoretical Methods and Studies Relating to Properties of Materials

    DTIC Science & Technology

    1989-12-19

    particularly sensitive to the behavior of the electron distribution close to the nuclei, which contributes only to E(l). Although the above results were...other condensed phases. So it was a useful test case to test the behavior of the theoretical computations for the gas phase relative to that in the...increasingly complicated and time-consuming electron-correlation approximations should assure a small error in the theoretically computed enthalpy for a

  4. BEST3D user's manual: Boundary Element Solution Technology, 3-Dimensional Version 3.0

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The theoretical basis and programming strategy utilized in the construction of the computer program BEST3D (boundary element solution technology - three dimensional) and detailed input instructions are provided for the use of the program. An extensive set of test cases and sample problems is included in the manual and is also available for distribution with the program. The BEST3D program was developed under the 3-D Inelastic Analysis Methods for Hot Section Components contract (NAS3-23697). The overall objective of this program was the development of new computer programs allowing more accurate and efficient three-dimensional thermal and stress analysis of hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The BEST3D program allows both linear and nonlinear analysis of static and quasi-static elastic problems and transient dynamic analysis for elastic problems. Calculation of elastic natural frequencies and mode shapes is also provided.

  5. Combining the Finite Element Method with Structural Connectome-based Analysis for Modeling Neurotrauma: Connectome Neurotrauma Mechanics

    PubMed Central

    Kraft, Reuben H.; Mckee, Phillip Justin; Dagro, Amy M.; Grafton, Scott T.

    2012-01-01

    This article presents the integration of brain injury biomechanics and graph theoretical analysis of neuronal connections, or connectomics, to form a neurocomputational model that captures spatiotemporal characteristics of trauma. We relate localized mechanical brain damage predicted from biofidelic finite element simulations of the human head subjected to impact with degradation in the structural connectome for a single individual. The finite element model incorporates various length scales into the full head simulations by including anisotropic constitutive laws informed by diffusion tensor imaging. Coupling between the finite element analysis and network-based tools is established through experimentally-based cellular injury thresholds for white matter regions. Once edges are degraded, graph theoretical measures are computed on the “damaged” network. For a frontal impact, the simulations predict that the temporal and occipital regions undergo the most axonal strain and strain rate at short times (less than 24 hrs), leading to the initiation of cellular death and to damage that depends on the angle of impact and the underlying microstructure of brain tissue. The monotonic cellular death relationships predict a spatiotemporal change of structural damage. Interestingly, at 96 hrs post-impact, computations predict that no network nodes were completely disconnected from the network, despite significant damage to network edges. At early times (less than 24 hrs), network measures of global and local efficiency were degraded little; however, as time increased to 96 hrs the network properties were significantly reduced. In the future, this computational framework could help inform functional networks from physics-based structural brain biomechanics to obtain not only a biomechanics-based understanding of injury, but also neurophysiological insight. PMID:22915997
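
    The network half of such a pipeline is easy to illustrate: remove a fraction of edges from a toy graph standing in for the structural connectome and recompute efficiency measures with networkx. The graph, damage model, and numbers below are illustrative only, not the paper's connectome data.

```python
import random
import networkx as nx

def damage_and_measure(g, fraction, seed=0):
    """Remove a fraction of edges ("damage") and recompute graph measures."""
    rng = random.Random(seed)
    damaged = g.copy()
    k = int(fraction * damaged.number_of_edges())
    damaged.remove_edges_from(rng.sample(list(damaged.edges()), k))
    return {
        "global_efficiency": nx.global_efficiency(damaged),
        "local_efficiency": nx.local_efficiency(damaged),
        "isolated_nodes": sum(1 for _, d in damaged.degree() if d == 0),
    }

g = nx.watts_strogatz_graph(90, 6, 0.1, seed=1)    # toy structural network
for f in (0.0, 0.2, 0.4):
    print(f, damage_and_measure(g, f))
```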

  6. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE PAGES

    McDonnell, J. D.; Schunck, N.; Higdon, D.; ...

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. In addition, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  7. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonnell, J. D.; Schunck, N.; Higdon, D.

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  8. Finite-data-size study on practical universal blind quantum computation

    NASA Astrophysics Data System (ADS)

    Zhao, Qiang; Li, Qiong

    2018-07-01

    The universal blind quantum computation with weak coherent pulses protocol is a practical scheme that allows a client to delegate a computation to a remote server while keeping the computation hidden. In the practical protocol, however, a finite data size will influence the preparation efficiency in the remote blind qubit state preparation (RBSP). In this paper, a modified RBSP protocol with two decoy states is studied at finite data size, and the issue of its statistical fluctuations is analyzed thoroughly. The theoretical analysis and simulation results show that the two-decoy-state case with statistical fluctuation is closer to the asymptotic case than the one-decoy-state case with statistical fluctuation. In particular, the two-decoy-state protocol can achieve a longer communication distance than the one-decoy-state case in this statistical fluctuation situation.

  9. Properties of the numerical algorithms for problems of quantum information technologies: Benefits of deep analysis

    NASA Astrophysics Data System (ADS)

    Chernyavskiy, Andrey; Khamitov, Kamil; Teplov, Alexey; Voevodin, Vadim; Voevodin, Vladimir

    2016-10-01

    In recent years, quantum information technologies (QIT) have developed rapidly, although their implementation faces serious difficulties, some of which are challenging computational tasks. This work is devoted to a deep and broad analysis of the parallel algorithmic properties of such tasks. As an example we take one- and two-qubit transformations of a many-qubit quantum state, which are the most critical kernels of many important QIT applications. The analysis of the algorithms uses the methodology of the AlgoWiki project (algowiki-project.org) and consists of two parts: theoretical and experimental. The theoretical part covers features such as sequential and parallel complexity, macro structure, and the visual information graph. The experimental part was carried out using the petascale Lomonosov supercomputer (Moscow State University, Russia) and includes the analysis of locality and memory access, scalability, and a set of more specific dynamic characteristics of the implementation. This approach allowed us to identify bottlenecks and generate ideas for efficiency improvements.
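
    The one-qubit kernel at the heart of this analysis is compact: reshape the 2^n amplitudes into an n-axis tensor and contract the 2x2 gate with the target axis. A plain NumPy sketch (my illustration, not the authors' implementation):

```python
import numpy as np

def apply_one_qubit(psi, u, k, n):
    """Apply a one-qubit gate u to qubit k of an n-qubit state vector."""
    t = psi.reshape([2] * n)
    t = np.tensordot(u, t, axes=([1], [k]))  # contract gate with qubit k
    t = np.moveaxis(t, 0, k)                 # restore qubit ordering
    return t.reshape(-1)

n = 20                                       # 2**20 amplitudes
rng = np.random.default_rng(0)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = apply_one_qubit(psi, hadamard, k=7, n=n)
print(np.linalg.norm(psi))                   # unitarity preserved: 1.0
```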

  10. MALDI-MS analysis and theoretical evaluation of olanzapine as a UV laser desorption ionization (LDI) matrix.

    PubMed

    Musharraf, Syed Ghulam; Ameer, Mariam; Ali, Arslan

    2017-01-05

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS), being a soft ionization technique, has become a method of choice for high-throughput analysis of proteins and peptides. In this study, we have explored the potential of the atypical anti-psychotic drug olanzapine (OLZ) as a matrix for MALDI-MS analysis of peptides, aided by theoretical studies. Seven small peptides were employed as target analytes to assess the performance of olanzapine in comparison with the conventional MALDI matrix α-cyano-4-hydroxycinnamic acid (HCCA). All peptides were successfully detected when olanzapine was used as a matrix. Moreover, the peptides angiotensin Ι and angiotensin ΙΙ were detected with better S/N ratio and resolution with this method than in their analysis with HCCA. Computational studies were performed to determine the thermochemical properties of olanzapine in order to further evaluate its similarity to MALDI matrices; these were found to be in good agreement with the data for existing MALDI matrices. Copyright © 2016. Published by Elsevier B.V.

  11. Analysis of the tunable asymmetric fiber F-P cavity for fiber sensor edge-filter demodulation

    NASA Astrophysics Data System (ADS)

    Chen, Haitao; Liang, Youcheng

    2014-12-01

    An asymmetric fiber Fabry-Pérot (F-P) interferometric cavity with good linearity and a wide dynamic range is designed based on optical thin-film characteristic matrix theory; by choosing the materials of two different thin metallic layers, the asymmetric fiber F-P interferometric cavity is fabricated by depositing multi-layer thin films on the optical fiber's end face. A demodulation method for the wavelength shift of a fiber Bragg grating (FBG) sensor based on the F-P cavity is demonstrated and a theoretical formula is obtained. The experimental results coincide well with computational results obtained from the theoretical model.
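
    The characteristic-matrix calculation underlying such a design can be sketched at normal incidence as follows; the layer stack below is an illustrative dielectric example, not the paper's metallic coatings.

```python
import numpy as np

def reflectance(layers, n_in, n_sub, wavelength_nm):
    """Reflectance of a thin-film stack at normal incidence via the
    characteristic (transfer) matrix method. layers: (index, thickness_nm)."""
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / wavelength_nm   # phase thickness
        m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    b, c = m @ np.array([1.0, n_sub])
    r = (n_in * b - c) / (n_in * b + c)
    return abs(r)**2

stack = [(2.35, 63.8), (1.46, 102.7)] * 4    # quarter-wave pairs at 600 nm
print(reflectance(stack, n_in=1.0, n_sub=1.52, wavelength_nm=600.0))
```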

  12. Evol and ProDy for bridging protein sequence evolution and structural dynamics

    PubMed Central

    Mao, Wenzhi; Liu, Ying; Chennubhotla, Chakra; Lezon, Timothy R.; Bahar, Ivet

    2014-01-01

    Correlations between sequence evolution and structural dynamics are of utmost importance in understanding the molecular mechanisms of function and their evolution. We have integrated Evol, a new package for fast and efficient comparative analysis of evolutionary patterns and conformational dynamics, into ProDy, a computational toolbox designed for inferring protein dynamics from experimental and theoretical data. Using information-theoretic approaches, Evol coanalyzes conservation and coevolution profiles extracted from multiple sequence alignments of protein families with their inferred dynamics. Availability and implementation: ProDy and Evol are open-source and freely available under MIT License from http://prody.csb.pitt.edu/. Contact: bahar@pitt.edu PMID:24849577

  13. Improve SSME power balance model

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.

    1992-01-01

    Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.

  14. Aeropropulsion 1987. Session 2: Aeropropulsion Structures Research

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Aeropropulsion systems present unique problems to the structural engineer. The extremes in operating temperatures, rotational effects, and behaviors of advanced material systems combine into complexities that require advances in many scientific disciplines involved in structural analysis and design procedures. This session provides an overview of the complexities of aeropropulsion structures and the theoretical, computational, and experimental research conducted to achieve the needed advances.

  15. Aspects of CO2 laser engraving of printing cylinders.

    PubMed

    Atanasov, P A; Maeno, K; Manolov, V P

    1999-03-20

    Results of experimental and theoretical investigations of CO(2) laser-engraved cylinders are presented. The processed surfaces of test samples are examined by a phase-stepping laser interferometer, a digital microscope, and a computer-controlled profilometer. Fourier analysis is performed on the patterns parallel to the axis of the laser-scribed test ceramic cylinders. The problem of the visually observed banding is discussed.

  16. Multivariate Density Estimation and Remote Sensing

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1983-01-01

    Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. The approach is illustrated with a thunderstorm data analysis.
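
    A minimal example of the emphasized technique, using SciPy's Gaussian kernel density estimator on synthetic two-channel data (my stand-in for multispectral pixel measurements):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Bivariate nonparametric density estimate on synthetic two-channel data.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, 1000),            # channel 1
                  rng.normal(0, 1, 1000) * 0.5])     # channel 2
kde = gaussian_kde(data)

xi, yi = np.mgrid[-3:3:60j, -2:2:60j]                # evaluation grid
density = kde(np.vstack([xi.ravel(), yi.ravel()])).reshape(xi.shape)
print(density.max())    # peak of the estimated density surface
```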

  17. Theoretical Foundations of Software Technology.

    DTIC Science & Technology

    1983-02-14

    major research interests are software testing, artificial intelligence, pattern recognition, and computer graphics. Dr. Chandrasekaran is currently...produce PASCAL language code for the problems. Because of its relationship to many issues in Artificial Intelligence, we also investigated problems of...analysis to concurrent-process software re-...are not "intelligent" enough to discover these by themselves,...more complex control flow models. The PAF

  18. Language and Discourse Analysis with Coh-Metrix: Applications from Educational Material to Learning Environments at Scale

    ERIC Educational Resources Information Center

    Dowell, Nia M. M.; Graesser, Arthur C.; Cai, Zhiqiang

    2016-01-01

    The goal of this article is to preserve and distribute the information presented at the LASI (2014) workshop on Coh-Metrix, a theoretically grounded, computational linguistics facility that analyzes texts on multiple levels of language and discourse. The workshop focused on the utility of Coh-Metrix in discourse theory and educational practice. We…

  19. Theoretical Framework for Interaction Game Design

    DTIC Science & Technology

    2016-05-19

    modeling. We take a data-driven quantitative approach to understanding conversational behaviors by measuring them with advanced sensing...current state of the art, human computing is considered to be a reasonable approach to break through the current limitation. To solicit high quality and...proper resources in conversation to enable smooth and effective interaction. The last technique concerns conversation measurement, analysis, and

  20. Coupled Structural, Thermal, Phase-Change and Electromagnetic Analysis for Superconductors. Volume 1

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.

    1996-01-01

    Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase-change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements, (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. This volume, Volume 1, describes mostly formulations for specific problems. Volume 2 describes generalizations of those formulations.

  1. DFT analysis and spectral characteristics of Celecoxib a potent COX-2 inhibitor

    NASA Astrophysics Data System (ADS)

    Vijayakumar, B.; Kannappan, V.; Sathyanarayanamoorthi, V.

    2016-10-01

    Extensive quantum mechanical studies are carried out on Celecoxib (CXB), a new generation drug, to understand the vibrational and electronic spectral characteristics of the molecule. The vibrational frequencies of CXB are computed by HF and B3LYP methods with the 6-311++G(d,p) basis set. The theoretical scaled vibrational frequencies have been assigned and agree satisfactorily with the experimental FT-IR and Raman frequencies. The theoretical maximum wavelengths of absorption of CXB are calculated in water and ethanol by the TD-DFT method and compared with the experimentally determined λmax values. Natural bond orbital (NBO) analysis in conjunction with the spectral data established the presence of intramolecular interactions such as mesomeric, hyperconjugative and steric effects in CXB. The electron density at various positions and the reactivity descriptors of CXB indicate that the compound functions as a nucleophile and establish that the aromatic ring system present in the molecule is the site of drug action. The electronic distribution and HOMO-LUMO energy values of CXB are discussed in terms of intra-molecular interactions. Computed values of the Mulliken charges and thermodynamic properties of CXB are reported.

  2. An Analysis and Procedure for Determining Space Environmental Sink Temperatures With Selected Computational Results

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    The purpose of this report was to analyze the heat-transfer problem posed by the determination of spacecraft temperatures and to incorporate the theoretically derived relationships in the computational code TSCALC. The basis for the code was a theoretical analysis of thermal radiative equilibrium in space, particularly in the Solar System. Beginning with the solar luminosity, the code takes into account these key variables: (1) the spacecraft-to-Sun distance expressed in astronomical units (AU), where 1 AU represents the average Sun-to-Earth distance of 149.6 million km; (2) the angle (arc degrees) at which solar radiation is incident upon a spacecraft surface (ILUMANG); (3) the spacecraft surface temperature (of a radiator or photovoltaic array) in kelvin, and the surface absorptivity-to-emissivity ratio alpha/epsilon with respect to the solar radiation and (alpha/epsilon)_2 with respect to planetary radiation; and (4) the surface view factor to space, F. Outputs from the code have been used to determine environmental temperatures in various Earth orbits. The code was also utilized as a subprogram in the design of power system radiators for deep-space probes.
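
    The underlying balance is a flat plate absorbing inverse-square-scaled solar flux and radiating to space. A simplified sketch of that equilibrium (my reconstruction of the physics from the variables listed above, not the TSCALC code itself):

```python
import numpy as np

SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S_1AU = 1361.0              # solar constant at 1 AU, W m^-2

def equilibrium_temperature(au, ilum_deg, alpha_over_eps, view_factor=1.0):
    """Flat-plate radiative equilibrium:
    alpha * S(d) * cos(ilum) = epsilon * sigma * T**4 * F  ->  solve for T."""
    flux = S_1AU / au**2                              # inverse-square law
    absorbed = alpha_over_eps * flux * np.cos(np.deg2rad(ilum_deg))
    return (max(absorbed, 0.0) / (SIGMA * view_factor))**0.25

for d in (0.72, 1.0, 1.52, 5.2):                      # Venus..Jupiter distances
    print(d, round(equilibrium_temperature(d, ilum_deg=0.0,
                                           alpha_over_eps=1.0), 1))
```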

  3. A roadmap for optimal control: the right way to commute.

    PubMed

    Ross, I Michael

    2005-12-01

    Optimal control theory is the foundation for many problems in astrodynamics. Typical examples are trajectory design and optimization, relative motion control of distributed space systems, and attitude steering. Many such problems in astrodynamics are solved by an alternative route of mathematical analysis and deep physical insight, in part because of the perception that an optimal control framework generates hard problems. Although this is indeed true of the Bellman and Pontryagin frameworks, the covector mapping principle provides a neoclassical approach that renders hard problems easy. That is, although the origins of this philosophy can be traced back to Bernoulli and Euler, it is essentially modern as a result of the strong linkage between approximation theory, set-valued analysis and computing technology. Motivated by the broad success of this approach, mission planners are now conceiving of and demanding higher performance from space systems. This has resulted in a new set of theoretical and computational problems. Recently, under the leadership of NASA-GRC, several workshops were held to address some of these problems. This paper outlines the theoretical issues stemming from practical problems in astrodynamics. Emphasis is placed on how they pertain to advanced mission design problems.

  4. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    PubMed

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreement between the theoretical predictions and the Monte Carlo results is observed. Both the theoretical predictions and the Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with conventional static PET reconstruction.
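
    For context, the indirect route rests on the Patlak plot: after equilibration, the ratio C_t(t)/C_p(t) is linear in the normalized integrated input, with slope equal to the influx rate Ki. A self-contained sketch on synthetic curves (not clinical data):

```python
import numpy as np

def patlak_fit(t, cp, ct, t_start):
    """Patlak plot fit: ct/cp = Ki * Int(cp)/cp + V over late frames."""
    icp = np.concatenate([[0.0],
                          np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    x, y = icp / cp, ct / cp                   # Patlak coordinates
    late = t >= t_start
    ki, v = np.polyfit(x[late], y[late], 1)
    return ki, v

t = np.linspace(0.1, 60.0, 120)               # minutes
cp = 10.0 * np.exp(-0.1 * t) + 1.0            # synthetic plasma input function
icp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = 0.05 * icp + 0.3 * cp                    # tissue TAC with true Ki = 0.05
print(patlak_fit(t, cp, ct, t_start=20.0))    # ~ (0.05, 0.3)
```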

  5. Evaluating the Theoretic Adequacy and Applied Potential of Computational Models of the Spacing Effect.

    PubMed

    Walsh, Matthew M; Gluck, Kevin A; Gunzelmann, Glenn; Jastrzembski, Tiffany; Krusmark, Michael

    2018-06-01

    The spacing effect is among the most widely replicated empirical phenomena in the learning sciences, and its relevance to education and training is readily apparent. Yet successful applications of spacing effect research to education and training are rare. Computational modeling can provide the crucial link between a century of accumulated experimental data on the spacing effect and the emerging interest in using that research to enable adaptive instruction. In this paper, we review relevant literature and identify 10 criteria for rigorously evaluating computational models of the spacing effect. Five relate to evaluating the theoretic adequacy of a model, and five relate to evaluating its application potential. We use these criteria to evaluate a novel computational model of the spacing effect called the Predictive Performance Equation (PPE). PPE combines elements of earlier models of learning and memory, including the General Performance Equation, Adaptive Control of Thought-Rational, and the New Theory of Disuse, giving rise to a novel computational account of the spacing effect that performs favorably across the complete sets of theoretic and applied criteria. We also implemented two other previously published computational models of the spacing effect and compared them to PPE using the theoretic and applied criteria as guides. Copyright © 2018 Cognitive Science Society, Inc.

  6. Mono- and binuclear non-heme iron chemistry from a theoretical perspective.

    PubMed

    Rokob, Tibor András; Chalupský, Jakub; Bím, Daniel; Andrikopoulos, Prokopis C; Srnec, Martin; Rulíšek, Lubomír

    2016-09-01

    In this minireview, we provide an account of the current state-of-the-art developments in the area of mono- and binuclear non-heme enzymes (NHFe and NHFe2) and the smaller NHFe(2) synthetic models, mostly from a theoretical and computational perspective. The sheer complexity, and at the same time the beauty, of the NHFe(2) world represents a challenge for experimental as well as theoretical methods. We emphasize that the concerted progress on both theoretical and experimental side is a conditio sine qua non for future understanding, exploration and utilization of the NHFe(2) systems. After briefly discussing the current challenges and advances in the computational methodology, we review the recent spectroscopic and computational studies of NHFe(2) enzymatic and inorganic systems and highlight the correlations between various experimental data (spectroscopic, kinetic, thermodynamic, electrochemical) and computations. Throughout, we attempt to keep in mind the most fascinating and attractive phenomenon in the NHFe(2) chemistry, which is the fact that despite the strong oxidative power of many reactive intermediates, the NHFe(2) enzymes perform catalysis with high selectivity. We conclude with our personal viewpoint and hope that further developments in quantum chemistry and especially in the field of multireference wave function methods are needed to have a solid theoretical basis for the NHFe(2) studies, mostly by providing benchmarking and calibration of the computationally efficient and easy-to-use DFT methods.

  7. Development and application of structural dynamics analysis capabilities

    NASA Technical Reports Server (NTRS)

    Heinemann, Klaus W.; Hozaki, Shig

    1994-01-01

    Extensive research activities were performed in the area of multidisciplinary modeling and simulation of aerospace vehicles that are relevant to NASA Dryden Flight Research Facility. The efforts involved theoretical development, computer coding, and debugging of the STARS code. New solution procedures were developed in such areas as structures, CFD, and graphics, among others. Furthermore, systems-oriented codes were developed for rendering the code truly multidisciplinary and rather automated in nature. Also, work was performed in pre- and post-processing of engineering analysis data.

  8. Fourier analysis and signal processing by use of the Moebius inversion formula

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.

    1990-01-01

    A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of the Moebius inversion of series. The Fourier transform method developed is also shown to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.
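
    The number-theoretic core is the Moebius inversion of divisor sums: if g(n) = sum over d|n of f(d), then f(n) = sum over d|n of mu(n/d) g(d). The snippet below demonstrates only this inversion principle, not the paper's full Fourier-coefficient algorithm.

```python
from sympy import divisors, mobius

def divisor_sum(f, n):
    """g(n) = sum of f(d) over the divisors d of n."""
    return sum(f(d) for d in divisors(n))

def moebius_invert(g, n):
    """Recover f(n) from its divisor sums: f(n) = sum mu(n/d) g(d)."""
    return sum(mobius(n // d) * g(d) for d in divisors(n))

f = lambda n: n * n                            # any arithmetic function
g = lambda n: divisor_sum(f, n)
print([moebius_invert(g, n) == f(n) for n in range(1, 13)])  # all True
```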

  9. The importance of proving the null.

    PubMed

    Gallistel, C R

    2009-04-01

    Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature, which pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? © 2009 APA, all rights reserved
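
    The sensitivity analysis is straightforward to reproduce for normal data with known error: compare the null's likelihood against the marginal likelihood under a uniform alternative whose width is varied. A sketch under those assumptions (illustrative numbers, not the article's examples):

```python
from scipy.stats import norm

def bayes_factor_null(mean, sem, mu_max):
    """Odds for the null (mu = 0) against mu ~ Uniform(0, mu_max)."""
    like_null = norm.pdf(mean, 0.0, sem)
    # Marginal likelihood under the alternative (closed form, uniform prior):
    like_alt = (norm.cdf(mean / sem) - norm.cdf((mean - mu_max) / sem)) / mu_max
    return like_null / like_alt

mean, sem = 0.05, 0.1            # observed effect and its standard error
for mu_max in (0.1, 0.5, 1.0, 5.0):
    print(mu_max, round(bayes_factor_null(mean, sem, mu_max), 2))
# The vaguer the alternative (larger mu_max), the more the null is favored.
```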

  10. Introduction to the focus issue: fifty years of chaos: applied and theoretical.

    PubMed

    Hikihara, Takashi; Holmes, Philip; Kambe, Tsutomu; Rega, Giuseppe

    2012-12-01

    The discovery of deterministic chaos in the late nineteenth century, its subsequent study, and the development of mathematical and computational methods for its analysis have substantially influenced the sciences. Chaos is, however, only one phenomenon in the larger area of dynamical systems theory. This Focus Issue collects 13 papers, from authors and research groups representing the mathematical, physical, and biological sciences, that were presented at a symposium held at Kyoto University from November 28 to December 2, 2011. The symposium, sponsored by the International Union of Theoretical and Applied Mechanics, was called 50 Years of Chaos: Applied and Theoretical. Following some historical remarks to provide a background for the last 50 years, and for chaos, this Introduction surveys the papers and identifies some common themes that appear in them and in the theory of dynamical systems.

  11. Theoretical and computational foundations of management class simulation

    Treesearch

    Denie Gerold

    1978-01-01

    Investigations of complicated, complex, and ill-structured systems are possible only with the aid of mathematical methods and electronic data processing. Simulation, as a method of operations research, is particularly suitable for this purpose. Theoretical and computational foundations of management class simulation must be integrated into the planning systems of...

  12. Rapid communication: Computational simulation and analysis of a candidate for the design of a novel silk-based biopolymer.

    PubMed

    Golas, Ewa I; Czaplewski, Cezary

    2014-09-01

    This work theoretically investigates the mechanical properties of a novel silk-derived biopolymer as polymerized in silico from sericin and elastin-like monomers. Molecular Dynamics simulations and Steered Molecular Dynamics were the principal computational methods used, the latter of which applies an external force onto the system and thereby enables an observation of its response to stress. The models explored herein are single-molecule approximations, and primarily serve as tools in a rational design process for the preliminary assessment of properties in a new material candidate. © 2014 Wiley Periodicals, Inc.

  13. On "Overestimation-free Computational Version of Interval Analysis"

    NASA Astrophysics Data System (ADS)

    Popova, Evgenija D.

    2013-10-01

    The transformation of interval parameters into trigonometric functions, proposed in Int. J. Comput. Meth. Eng. Sci. Mech., vol. 13, pp. 319-328 (2012), is not motivated in comparison to the infinitely many equivalent algebraic transformations. The conclusions about the efficacy of the methodology used are based on incorrect comparisons between solutions of different problems. We show theoretically, and in the examples considered in the commented article, that changing the number of parameters in a system of linear algebraic equations may change the initial problem and, correspondingly, its solution set. We also correct various misunderstandings and bugs that appear in the article noted above.

  14. Comparative analysis of numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.

    2017-07-01

    Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over the wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of the computational speed.

  15. Development INTERDATA 8/32 computer system

    NASA Technical Reports Server (NTRS)

    Sonett, C. P.

    1983-01-01

    The capabilities of the Interdata 8/32 minicomputer were examined regarding data and word processing, editing, retrieval, and budgeting as well as data management demands of the user groups in the network. Based on four projected needs: (1) a hands on (open shop) computer for data analysis with large core and disc capability; (2) the expected requirements of the NASA data networks; (3) the need for intermittent large core capacity for theoretical modeling; (4) the ability to access data rapidly either directly from tape or from core onto hard copy, the system proved useful and adequate for the planned requirements.

  16. Vibrational spectra and ab initio analysis of tert-butyl, trimethylsilyl, and trimethylgermyl derivatives of 3,3-dimethylcyclopropene III. 3,3-Dimethyl-1-(trimethylsilyl)cyclopropene

    NASA Astrophysics Data System (ADS)

    De Maré, G. R.; Panchenko, Yu. N.; Abramenkov, A. V.; Baird, M. S.; Tverezovsky, V. V.; Nizovtsev, A. V.; Bolesov, I. G.

    2003-07-01

    The experimental Raman and IR vibrational spectra of 3,3-dimethyl-1-(trimethylsilyl)cyclopropene in the liquid phase were recorded. Total geometry optimisation was carried out at the HF/6-31G* level and the HF/6-31G*//HF/6-31G* force field was computed. This force field was corrected by scale factors determined previously (using Pulay's method) for correction of the HF/6-31G*//HF/6-31G* force fields of 3,3-dimethylbutene-1, 1-methyl-, 1,2-dimethyl-, and 3,3-dimethylcyclopropene. The theoretical vibrational frequencies calculated from the scaled quantum mechanical force field and the theoretical intensities obtained from the quantum mechanical calculation were used to construct predicted spectra and to perform the vibrational analysis of the experimental spectra.

  17. Theoretical analysis of impact in composite plates

    NASA Technical Reports Server (NTRS)

    Moon, F. C.

    1973-01-01

    The calculated stresses and displacements induced in anisotropic plates by short duration impact forces are presented. The theoretical model attempts to describe the response of fiber composite turbine fan blades to impact by foreign objects such as stones and hailstones. In this model the determination of the impact force uses the Hertz impact theory, and the plate response treats the laminated blade as an equivalent anisotropic material using a form of Mindlin's theory for crystal plates. The analysis makes use of a computational tool called the fast Fourier transform. Results are presented in the form of stress contour plots in the plane of the plate for various times after impact. Examination of the maximum stresses due to impact versus ply layup angle reveals that the ±15° layup angle gives lower flexural stresses than the 0°, ±30° and ±45° cases.

  18. Optimal Location through Distributed Algorithm to Avoid Energy Hole in Mobile Sink WSNs

    PubMed Central

    Qing-hua, Li; Wei-hua, Gui; Zhi-gang, Chen

    2014-01-01

    In a multihop data-collection sensor network, nodes near the sink must relay data from remote nodes and therefore dissipate energy much faster, suffering premature death. This phenomenon creates an energy hole near the sink that seriously damages network performance. In this paper, we first compute, through theoretical analysis, the energy consumption of each node when the sink is placed at an arbitrary point in the network; we then propose an online distributed algorithm that adaptively adjusts the sink position based on the actual energy consumption of each node so as to attain the actual maximum lifetime. Theoretical analysis and experimental results show that the proposed algorithm significantly improves the lifetime of the wireless sensor network: it reduces the residual energy left in the network at death by more than 30%, and the cost of moving the sink remains relatively small. PMID:24895668
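
    As a rough illustration of the idea, the sketch below scores candidate sink positions by the worst per-round energy drain over a min-hop routing tree; the unit-disk radio model, energy constants, and all names are our assumptions, not the authors' algorithm.

```python
import math
from collections import deque

def min_hop_parents(nodes, sink, radio_range=1.0):
    """BFS from the sink over a unit-disk graph; returns each reachable
    node's parent on a min-hop route toward the sink.  `nodes` maps
    id -> (x, y); `sink` is an (x, y) position.  Hypothetical model."""
    def near(p, q):
        return math.dist(p, q) <= radio_range
    parent = {i: None for i in nodes if near(nodes[i], sink)}
    frontier = deque(parent)
    while frontier:
        u = frontier.popleft()
        for v in nodes:
            if v not in parent and near(nodes[v], nodes[u]):
                parent[v] = u
                frontier.append(v)
    return parent

def per_round_drain(nodes, sink, e_tx=1.0, e_rx=0.5):
    """Per-round energy of each node: transmit its own packet plus
    receive-and-forward every packet routed through it."""
    parent = min_hop_parents(nodes, sink)
    relayed = {i: 0 for i in parent}
    for i in parent:
        u = parent[i]
        while u is not None:            # walk the packet up to the sink
            relayed[u] += 1
            u = parent[u]
    return {i: e_tx * (1 + relayed[i]) + e_rx * relayed[i] for i in parent}

def best_sink(nodes, candidates):
    """Choose the candidate sink position that minimizes the worst node
    drain, i.e. flattens the energy hole near the sink."""
    return min(candidates, key=lambda s: max(per_round_drain(nodes, s).values()))

nodes = {i: (math.cos(i), math.sin(i)) for i in range(12)}
print(best_sink(nodes, [(0.0, 0.0), (0.5, 0.5)]))
```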

  19. Component-specific modeling

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1985-01-01

    A series of interdisciplinary modeling and analysis techniques specialized to address three specific hot-section components is presented. These techniques will incorporate data as well as theoretical methods from many diverse areas, including cycle and performance analysis, heat transfer analysis, linear and nonlinear stress analysis, and mission analysis. Building on the proven techniques already available in these fields, the new methods will be integrated into computer codes to provide an accurate and unified approach to analyzing combustor burner liners, hollow air-cooled turbine blades, and air-cooled turbine vanes. For these components, the methods developed will predict temperature, deformation, stress, and strain histories throughout a complete flight mission.

  20. Theoretical development and first-principles analysis of strongly correlated systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chen

    A variety of quantum many-body methods have been developed for studying strongly correlated electron systems. We have also proposed a computationally efficient and accurate approach, named the correlation matrix renormalization (CMR) method, to address the challenges. The initial implementation of the CMR method is designed for molecules, which offer theoretical advantages, including small system size, transparent mechanisms, and strong correlation effects such as the bond-breaking process. The theoretical development and benchmark tests of the CMR method are included in this thesis. Meanwhile, the ground-state total energy is the most important property in electronic-structure calculations. We also investigated an alternative approach to calculating the total energy, and extended this method to the magnetic anisotropy energy (MAE) of ferromagnetic materials. In addition, another theoretical tool, dynamical mean-field theory (DMFT) on top of DFT, has been used in electronic-structure calculations for an iridium oxide to study the phase transition, which results from an interplay of the d electrons' internal degrees of freedom.

  1. Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems

    PubMed Central

    Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.

    2014-01-01

    The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
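
    A minimal sketch of performance-aware co-scheduling in the spirit described above (the task model, the speedup-first ordering, and all names are our assumptions, not the authors' implementation):

```python
import heapq

def co_schedule(tasks, n_cpu_workers=8, n_gpus=2):
    """Greedy performance-aware co-scheduling sketch: tasks with the largest
    estimated GPU speedup are considered first and placed on whichever
    device pool would finish them earliest, so neither resource idles.
    Each task is a (name, t_cpu, t_gpu) tuple of estimated run times."""
    # device pools as min-heaps of (time-when-free, device-label)
    cpus = [(0.0, f"cpu{i}") for i in range(n_cpu_workers)]
    gpus = [(0.0, f"gpu{i}") for i in range(n_gpus)]
    heapq.heapify(cpus)
    heapq.heapify(gpus)
    plan = []
    # highest GPU speedup (t_cpu / t_gpu) first
    for name, t_cpu, t_gpu in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
        cpu_free, cpu_id = cpus[0]
        gpu_free, gpu_id = gpus[0]
        if gpu_free + t_gpu <= cpu_free + t_cpu:   # GPU finishes sooner
            heapq.heapreplace(gpus, (gpu_free + t_gpu, gpu_id))
            plan.append((name, gpu_id))
        else:
            heapq.heapreplace(cpus, (cpu_free + t_cpu, cpu_id))
            plan.append((name, cpu_id))
    makespan = max(max(t for t, _ in cpus), max(t for t, _ in gpus))
    return plan, makespan

plan, makespan = co_schedule([("tile%d" % i, 4.0, 0.5 + 0.1 * i) for i in range(40)])
print(makespan)
```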

  2. Computational crystallization.

    PubMed

    Altan, Irem; Charbonneau, Patrick; Snell, Edward H

    2016-07-15

    Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed.

  3. 1-Amino-4-hydroxy-9,10-anthraquinone - An analogue of anthracycline anticancer drugs, interacts with DNA and induces apoptosis in human MDA-MB-231 breast adenocarcinoma cells: Evaluation of structure-activity relationship using computational, spectroscopic and biochemical studies.

    PubMed

    Mondal, Palash; Roy, Sanjay; Loganathan, Gayathri; Mandal, Bitapi; Dharumadurai, Dhanasekaran; Akbarsha, Mohammad A; Sengupta, Partha Sarathi; Chattopadhyay, Shouvik; Guin, Partha Sarathi

    2015-12-01

    The X-ray diffraction and spectroscopic properties of 1-amino-4-hydroxy-9,10-anthraquinone (1-AHAQ), a simple analogue of anthracycline chemotherapeutic drugs, were studied by experimental and computational methods. The optimized geometrical parameters obtained from the computational methods were compared with the results of X-ray diffraction analysis, and the two were found to be in reasonably good agreement. X-ray diffraction, Density Functional Theory (DFT), and natural bond orbital (NBO) analyses indicated two types of hydrogen bonds in the molecule. The IR spectra of 1-AHAQ were studied by Vibrational Energy Distribution Analysis (VEDA) using potential energy distribution (PED) analysis. The electronic spectra were studied by TDDFT computation and compared with the experimental results. Experimental and theoretical results corroborated each other to a fair extent. To probe the biological efficacy of 1-AHAQ, it was allowed to interact with calf thymus DNA and the human breast adenocarcinoma cell line MDA-MB-231. The molecule was found to induce apoptosis in this adenocarcinoma cell line, with little, if any, cytotoxic effect on HBL-100 normal breast epithelial cells.

  4. WTAQ - A computer program for aquifer-test analysis of confined and unconfined aquifers

    USGS Publications Warehouse

    Barlow, P.M.; Moench, A.F.

    2004-01-01

    Computer program WTAQ was developed to implement a Laplace-transform analytical solution for axial-symmetric flow to a partially penetrating, finite-diameter well in a homogeneous and anisotropic unconfined (water-table) aquifer. The solution accounts for wellbore storage and skin effects at the pumped well, delayed response at an observation well, and delayed or instantaneous drainage from the unsaturated zone. For the particular case of zero drainage from the unsaturated zone, the solution simplifies to that of axial-symmetric flow in a confined aquifer. WTAQ calculates theoretical time-drawdown curves for the pumped well and observation wells and piezometers. The theoretical curves are used with measured time-drawdown data to estimate hydraulic parameters of confined or unconfined aquifers by graphical type-curve methods or by automatic parameter-estimation methods. Parameters that can be estimated are horizontal and vertical hydraulic conductivity, specific storage, and specific yield. A sample application illustrates use of WTAQ for estimating hydraulic parameters of a hypothetical, unconfined aquifer by type-curve methods.
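
    For the confined limiting case noted above, the theoretical time-drawdown curve is the classical Theis solution, which is easy to evaluate directly; a sketch independent of WTAQ itself (function names and parameter values are illustrative only):

```python
import numpy as np
from scipy.special import exp1   # E1(u) equals the Theis well function W(u)

def theis_drawdown(t, r, Q, T, S):
    """Drawdown s(t) at radius r from a well pumping at rate Q in a confined
    aquifer of transmissivity T and storativity S (consistent units).  This
    is the limiting case WTAQ reduces to for zero unsaturated drainage; it
    ignores wellbore storage, partial penetration, and skin effects."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# theoretical type curve for matching against measured time-drawdown data
t = np.logspace(-2, 3, 200)                            # time, days
s = theis_drawdown(t, r=30.0, Q=500.0, T=120.0, S=1e-4)
```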

  5. Computer programs of information processing of nuclear physical methods as a demonstration material in studying nuclear physics and numerical methods

    NASA Astrophysics Data System (ADS)

    Bateev, A. B.; Filippov, V. P.

    2017-01-01

    The possibility, in principle, of using the computer program Univem MS for Mössbauer spectrum fitting as demonstration material in teaching disciplines such as atomic and nuclear physics and numerical methods is shown in the article. This program works with nuclear-physical parameters such as the isomer (or chemical) shift of the nuclear energy level, the interaction of the nuclear quadrupole moment with the electric field, and that of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the least squares method. The deviation of the experimental points of a spectrum from the theoretical dependence is determined on concrete examples; in numerical methods this quantity is known as the mean square deviation. The shapes of the theoretical lines in the program are given by Gaussian and Lorentzian distributions. The visualization of the studied material on atomic and nuclear physics can be enhanced by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis, or X-ray diffraction analysis.
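
    A minimal sketch of the underlying algorithm class, least-squares fitting of a Lorentzian line to a synthetic Mössbauer spectrum and reporting the mean square deviation; this illustrates the method, not the Univem MS interface:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(v, area, v0, gamma, baseline):
    """Single Lorentzian absorption line: v is source velocity (mm/s),
    v0 the line position (isomer shift for a singlet), gamma the HWHM."""
    return baseline - area * (gamma / np.pi) / ((v - v0) ** 2 + gamma**2)

# hypothetical spectrum: counts per velocity channel (synthetic, not real data)
v = np.linspace(-4, 4, 256)
rng = np.random.default_rng(0)
counts = lorentzian(v, 3.0e4, 0.35, 0.15, 1.0e5) + rng.normal(0, 300, v.size)

# least squares fit, then the mean square deviation discussed above
popt, pcov = curve_fit(lorentzian, v, counts, p0=[1e4, 0.0, 0.2, 1e5])
residual = counts - lorentzian(v, *popt)
msd = np.mean(residual**2)
print(f"isomer shift ~ {popt[1]:.3f} mm/s, MSD = {msd:.1f}")
```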

  6. Systematic theoretical investigation of the zero-field splitting in Gd(III) complexes: Wave function and density functional approaches

    NASA Astrophysics Data System (ADS)

    Khan, Shehryar; Kubica-Misztal, Aleksandra; Kruk, Danuta; Kowalewski, Jozef; Odelius, Michael

    2015-01-01

    The zero-field splitting (ZFS) of the electronic ground state in paramagnetic ions is a sensitive probe of variations in the electronic and molecular structure, with an impact on fields ranging from fundamental physical chemistry to medical applications. A detailed analysis of the ZFS in a series of symmetric Gd(III) complexes is presented in order to establish the applicability and accuracy of computational methods using multiconfigurational complete-active-space self-consistent field wave functions and of density functional theory calculations. The various computational schemes are then applied to the larger complexes Gd(III)DOTA(H2O)-, Gd(III)DTPA(H2O)2-, and Gd(III)(H2O)83+ in order to analyze how the theoretical results compare with experimentally derived parameters. In contrast to approximations based on density functional theory, the multiconfigurational methods produce results for the ZFS of Gd(III) complexes of the correct order of magnitude.

  7. Regioselectivity of intermolecular Pauson-Khand reaction of aliphatic alkynes: experimental and theoretical study of the effect of alkyne polarization.

    PubMed

    Fager-Jokela, Erika; Muuronen, Mikko; Khaizourane, Héléa; Vázquez-Romero, Ana; Verdaguer, Xavier; Riera, Antoni; Helaja, Juho

    2014-11-21

    The generally poor electronic regioselectivity of alkyne insertion in the intermolecular Pauson-Khand reaction (PKR) has severely restricted its synthetic applications. In our previous study of diarylalkynes (Fager-Jokela, E.; Muuronen, M.; Patzschke, M.; Helaja, J. J. Org. Chem. 2012, 77, 9134-9147), both experimental and theoretical results indicated that purely electronic factors, i.e., alkyne polarization via the resonance effect, induced the observed modest regioselectivity. In the present work, we substantiate that alkyne polarization via the inductive effect can result in notable, synthetically valuable regioselectivity. A computational study at the DFT level was performed to disclose the electronic origin of the selectivity. Overall, the NBO charges of the alkynes correlated qualitatively with the regioisomeric outcome. In a detailed computational PKR case study, the Boltzmann distributions of the transition-state (TS) populations correlate closely with the experimental regioselectivity. Analysis of the TS structures revealed that weak interactions, e.g., hydrogen bonding and steric repulsion, affect the regioselectivity and can easily override the electronic guidance.

  8. The outcomes of anxiety, confidence, and self-efficacy with Internet health information retrieval in older adults: a pilot study.

    PubMed

    Chu, Adeline; Mastel-Smith, Beth

    2010-01-01

    Technology has a great impact on nursing practice. With the increasing number of older Americans using computers and the Internet in recent years, nurses have the capability to deliver effective and efficient health education to their patients and the community. Based on the theoretical framework of Bandura's self-efficacy theory, this pilot project reports findings from a 5-week computer course on Internet health searches for adults 65 years or older at a senior activity learning center. Twelve participants were recruited and randomized to either the intervention or the control group. Computer anxiety, computer confidence, and computer self-efficacy scores were analyzed at baseline, at the end of the program, and 6 weeks after its completion, using repeated-measures analysis of variance. Findings showed that participants who attended the structured computer course on Internet health information retrieval reported lower anxiety and increased confidence and self-efficacy at the end of the 5-week program and 6 weeks after its completion, as compared with participants who were not in the program. The study demonstrated that a computer course can help reduce anxiety and increase confidence and self-efficacy in online health searches in older adults.

  9. Research in Computational Astrobiology

    NASA Technical Reports Server (NTRS)

    Chaban, Galina; Jaffe, Richard; Liang, Shoudan; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.

    2002-01-01

    We present results from several projects in the new field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution, and distribution of life in the Universe using theoretical and computational tools. We have developed a procedure for calculating long-range effects in molecular dynamics using a plane-wave expansion of the electrostatic potential. This method is expected to be highly efficient for simulating biological systems on massively parallel supercomputers. We have performed a genomics analysis of a family of actin-binding proteins. We have performed quantum mechanical calculations on carbon nanotubes and nucleic acids; these simulations will allow us to investigate possible sources of organic material on the early Earth. Finally, we have developed a model of protobiological chemistry using neural networks.

  10. The detectability of brown dwarfs - Predictions and uncertainties

    NASA Technical Reports Server (NTRS)

    Nelson, L. A.; Rappaport, S.; Joss, P. C.

    1993-01-01

    In order to determine the likelihood for the detection of isolated brown dwarfs in ground-based observations as well as in future space-based astronomy missions, and in order to evaluate the significance of any detections that might be made, we must first know the expected surface density of brown dwarfs on the celestial sphere as a function of limiting magnitude, wavelength band, and Galactic latitude. It is the purpose of this paper to provide theoretical estimates of this surface density, as well as the range of uncertainty in these estimates resulting from various theoretical uncertainties. We first present theoretical cooling curves for low-mass stars that we have computed with the latest version of our stellar evolution code. We use our evolutionary results to compute theoretical brown-dwarf luminosity functions for a wide range of assumed initial mass functions and stellar birth rate functions. The luminosity functions, in turn, are utilized to compute theoretical surface density functions for brown dwarfs on the celestial sphere. We find, in particular, that for reasonable theoretical assumptions, the currently available upper bounds on the brown-dwarf surface density are consistent with the possibility that brown dwarfs contribute a substantial fraction of the mass of the Galactic disk.

  11. Addition of higher order plate and shell elements into NASTRAN computer program

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Goglia, G. L.

    1976-01-01

    Two higher order plate elements, the linear strain triangular membrane element and the quintic bending element, along with a shallow shell element, suitable for inclusion into the NASTRAN (NASA Structural Analysis) program are described. Additions to the NASTRAN Theoretical Manual, Users' Manual, Programmers' Manual and the NASTRAN Demonstration Problem Manual, for inclusion of these elements into the NASTRAN program are also presented.

  12. Thermal/Structural Tailoring of Engine Blades (T/STAEBL). Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.; Clevenger, W. B.

    1994-01-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual describes the T/STAEBL data block structure and system organization. The approximate analysis and optimization modules are detailed, and a validation test case is provided.

  13. Planetary atmospheric physics and solar physics research

    NASA Technical Reports Server (NTRS)

    1973-01-01

    An overview is presented on current and planned research activities in the major areas of solar physics, planetary atmospheres, and space astronomy. The approach to these unsolved problems involves experimental techniques, theoretical analysis, and the use of computers to analyze the data from space experiments. The point is made that the research program is characterized by each activity interacting with the other activities in the laboratory.

  14. Thermal/structural tailoring of engine blades (T/STAEBL). Theoretical manual

    NASA Astrophysics Data System (ADS)

    Brown, K. W.; Clevenger, W. B.

    1994-03-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual describes the T/STAEBL data block structure and system organization. The approximate analysis and optimization modules are detailed, and a validation test case is provided.

  15. COED Transactions, Vol. X, No. 6, June 1978. Concentric-Tube Heat Exchanger Analysis and Data Reduction.

    ERIC Educational Resources Information Center

    Marcovitz, Alan B., Ed.

    Four computer programs written in FORTRAN and BASIC develop theoretical predictions and data reduction for a junior-senior level heat exchanger experiment. Programs may be used at the terminal in the laboratory to check progress of the experiment or may be used in the batch mode for interpretation of final information for a formal report. Several…

  16. Artifacts Of Spectral Analysis Of Instrument Readings

    NASA Technical Reports Server (NTRS)

    Wise, James H.

    1995-01-01

    The report presents an experimental and theoretical study of some of the artifacts introduced by processing the outputs of two nominally identical low-frequency-reading instruments: high-sensitivity servo-accelerometers mounted together and operating, in conjunction with signal-conditioning circuits, as seismometers. Processing involved analog-to-digital conversion with anti-aliasing filtering, followed by digital processing including frequency weighting and computation of different measures of power spectral density (PSD).

  17. An In-Depth Analysis of Teaching Themes and the Quality of Teaching in Higher Education: Evidence from the Programming Education Environments

    ERIC Educational Resources Information Center

    Xia, Belle Selene

    2017-01-01

    Education research in computer science has emphasized the research of web-based learning environments as a result of the latest technological advancement in higher education. Our research aim is to offer new insights on the different teaching strategies in programming education both from a theoretical and empirical point of view as a response to…

  18. Improving CMD Areal Density Analysis: Algorithms and Strategies

    NASA Astrophysics Data System (ADS)

    Wilson, R. E.

    2014-06-01

    Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanations of various algorithms and strategies for optimal performance. A CMD-generation program computes theoretical datasets with simulated observational error, and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results but also significant savings of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single-star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as the need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to the influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities (A), and large variation in A is reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.
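
    A minimal sketch of one Differential-Corrections step with optional Levenberg-Marquardt damping, the solution machinery described above (generic Gauss-Newton algebra and hypothetical names, not the ADA code itself):

```python
import numpy as np

def dc_step(params, residual_fn, jacobian_fn, lam=0.0):
    """One Differential-Corrections step via the Gauss-Newton normal
    equations, with optional Levenberg-Marquardt damping `lam`.
    `residual_fn(p)` returns observed-minus-computed values and
    `jacobian_fn(p)` their derivatives with respect to the parameters."""
    r = residual_fn(params)
    J = jacobian_fn(params)
    N = J.T @ J
    A = N + lam * np.diag(np.diag(N))        # damped normal matrix
    delta = np.linalg.solve(A, J.T @ r)      # parameter corrections
    # standard error estimates from the (reduced chi-square scaled) covariance
    dof = max(len(r) - len(params), 1)
    sigma = np.sqrt(np.diag(np.linalg.inv(A)) * (r @ r) / dof)
    return params + delta, sigma
```

    Iterating `dc_step` until the corrections fall below their standard errors is the usual convergence criterion for schemes of this kind.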

  19. Integrating a Music Curriculum into an External Degree Program Using Computer Assisted Instruction.

    ERIC Educational Resources Information Center

    Brinkley, Robert C.

    This paper outlines the method and theoretical basis for establishing and implementing an independent study music curriculum. The curriculum combines practical and theoretical paradigms and leads to an external degree. The computer, in direct interaction with the student, is the primary instructional tool, and the teacher is involved in indirect…

  20. Analysis of four-body decay of D meson

    NASA Astrophysics Data System (ADS)

    Estabar, T.; Mehraban, H.

    2017-01-01

    The aim of this work is to provide a phenomenological analysis of the contributions of the D0 meson to the f0(980)π+π- (f0(980) → π+π-), K+K-K¯∗(892)0 (K¯∗(892)0 → π+K-), and ϕ(π+π-) S-wave (ϕ → K+K-) quasi-three-body decays. In this way the analysis of the mentioned four-body decays reduces to three-body decays, and several channels are observed. Based on the factorization approach, hadronic three-body decays receive both resonant and nonresonant contributions, and we compute both; tree, penguin, emission, and emission-annihilation diagrams all enter these decay modes. Our theoretical model for the D0 → ϕ(ππ) S-wave decay is based on QCD factorization into a quasi-two-body decay followed by an S-wave two-pion interaction which, following experimental information demonstrating a two-pion interaction in the S-wave, is introduced through the scalar resonance. The theoretical values are (1.82 ± 0.24) × 10-4, (4.46 ± 0.41) × 10-5, and (1.1 ± 0.18) × 10-4, while the corresponding experimental results are (1.8 ± 0.5) × 10-4, (4.4 ± 1.7) × 10-5, and (2.5 ± 0.33) × 10-4, respectively. Comparison of the computed values with the experimental ones shows that our results are in agreement with them.

  1. Improved Statistics for Genome-Wide Interaction Analysis

    PubMed Central

    Ueki, Masao; Cordell, Heather J.

    2012-01-01

    Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new “joint effects” statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al.'s originally-proposed statistics, on account of the inflated error rate that can result. PMID:22496670

  2. The Computable Catchment: An executable document for model-data software sharing, reproducibility and interactive visualization

    NASA Astrophysics Data System (ADS)

    Gil, Y.; Duffy, C.

    2015-12-01

    This paper proposes the concept of a "Computable Catchment," which is used to develop a collaborative platform for watershed modeling and data analysis. The object of the research is a sharable, executable document, similar to a PDF, that includes documentation of the underlying theoretical concepts, interactive computational/numerical resources, linkage to essential data repositories, and the ability for interactive model-data visualization and analysis. The executable document for each catchment is stored in the cloud with automatic provisioning and a unique identifier, allowing collaborative model and data enhancements for historical hydroclimatic reconstruction and/or future land-use or climate-change scenarios to be easily reconstructed or extended. The Computable Catchment adopts metadata standards for naming all variables in the model and the data. The a priori (initial) data are derived from national data sources for soils, hydrogeology, climate, and land cover available from the www.hydroterre.psu.edu data service (Leonard and Duffy, 2015). The executable document is based on the Wolfram CDF (Computable Document Format), with an interactive open-source reader accessible on any modern computing platform. The CDF file and contents can be uploaded to a website or simply shared as a normal document while maintaining all interactive features of the model and data. The Computable Catchment concept represents one application of Geoscience Papers of the Future: an extensible document that combines theory, models, data, and analysis that are digitally shared, documented, and reused among research collaborators, students, educators, and decision makers.

  3. FT-IR, Laser-Raman spectra and computational analysis of 5-Methyl-3-phenylisoxazole-4-carboxylic acid.

    PubMed

    Sert, Yusuf; Mahendra, M; Keskinoğlu, S; Chandra; Srikantamurthy, N; Umesha, K B; Çırak, Ç

    2015-03-15

    In this study the experimental and theoretical vibrational frequencies of a newly synthesized anti-tumor, antiviral, hypoglycemic, antifungal, and anti-HIV agent, namely 5-Methyl-3-phenylisoxazole-4-carboxylic acid, have been investigated. The experimental FT-IR (4000-400 cm(-1)) and Laser-Raman spectra (4000-100 cm(-1)) of the molecule in the solid phase have been recorded. The theoretical vibrational frequencies and optimized geometric parameters (bond lengths, bond angles, and torsion angles) have been calculated for the first time using density functional theory (DFT/B3LYP: Becke, 3-parameter, Lee-Yang-Parr and DFT/M06-2X: a highly parametrized empirical exchange-correlation functional) with the 6-311++G(d,p) basis set in the Gaussian 09W software. The assignments of the vibrational frequencies have been made by potential energy distribution (PED) analysis using the VEDA 4 software. The theoretical optimized geometric parameters and vibrational frequencies have been found to be in good agreement with the corresponding experimental data and with results in the literature. In addition, the highest occupied molecular orbital (HOMO) energy, the lowest unoccupied molecular orbital (LUMO) energy, and other related molecular energy values of the compound have been investigated using the same theoretical calculations.

  4. FT-IR, Laser-Raman spectra and computational analysis of 5-Methyl-3-phenylisoxazole-4-carboxylic acid

    NASA Astrophysics Data System (ADS)

    Sert, Yusuf; Mahendra, M.; Keskinoğlu, S.; Chandra; Srikantamurthy, N.; Umesha, K. B.; Çırak, Ç.

    2015-03-01

    In this study the experimental and theoretical vibrational frequencies of a newly synthesized anti-tumor, antiviral, hypoglycemic, antifungal, and anti-HIV agent, namely 5-Methyl-3-phenylisoxazole-4-carboxylic acid, have been investigated. The experimental FT-IR (4000-400 cm-1) and Laser-Raman spectra (4000-100 cm-1) of the molecule in the solid phase have been recorded. The theoretical vibrational frequencies and optimized geometric parameters (bond lengths, bond angles, and torsion angles) have been calculated for the first time using density functional theory (DFT/B3LYP: Becke, 3-parameter, Lee-Yang-Parr and DFT/M06-2X: a highly parametrized empirical exchange-correlation functional) with the 6-311++G(d,p) basis set in the Gaussian 09W software. The assignments of the vibrational frequencies have been made by potential energy distribution (PED) analysis using the VEDA 4 software. The theoretical optimized geometric parameters and vibrational frequencies have been found to be in good agreement with the corresponding experimental data and with results in the literature. In addition, the highest occupied molecular orbital (HOMO) energy, the lowest unoccupied molecular orbital (LUMO) energy, and other related molecular energy values of the compound have been investigated using the same theoretical calculations.

  5. An experimental and theoretical analysis of a foil-air bearing rotor system

    NASA Astrophysics Data System (ADS)

    Bonello, P.; Hassan, M. F. Bin

    2018-01-01

    Although there is considerable research on the experimental testing of foil-air bearing (FAB) rotor systems, only a small fraction has been correlated with simulations from a full nonlinear model that links the rotor, air film and foil domains, due to modelling complexity and computational burden. An approach for the simultaneous solution of the three domains as a coupled dynamical system, introduced by the first author and adopted by independent researchers, has recently demonstrated its capability to address this problem. This paper uses this approach, with further developments, in an experimental and theoretical study of a FAB-rotor test rig. The test rig is described in detail, including issues with its commissioning. The theoretical analysis uses a recently introduced modal-based bump foil model that accounts for interaction between the bumps and their inertia. The imposition of pressure constraints on the air film is found to delay the predicted onset of instability speed. The results lend experimental validation to a recent theoretically-based claim that the Gümbel condition may not be appropriate for a practical single-pad FAB. The satisfactory prediction of the salient features of the measured nonlinear behavior shows that the air film is indeed highly influential on the response, in contrast to an earlier finding.

  6. Memory-Efficient Analysis of Dense Functional Connectomes.

    PubMed

    Loewe, Kristian; Donohue, Sarah E; Schoenfeld, Mircea A; Kruse, Rudolf; Borgelt, Christian

    2016-01-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download.
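
    The on-demand idea can be illustrated compactly: z-score the time series once, after which any correlation element is a dot product computed only when requested (a sketch of the concept, not the authors' Matlab implementation):

```python
import numpy as np

class OnDemandCorrelation:
    """Dense functional connectome without storing the N x N matrix:
    elements are computed on demand from normalized voxel time series,
    so memory stays at the size of the underlying data."""

    def __init__(self, timeseries):             # shape (n_voxels, n_timepoints)
        ts = np.asarray(timeseries, dtype=np.float64)
        ts = ts - ts.mean(axis=1, keepdims=True)
        self._z = ts / np.linalg.norm(ts, axis=1, keepdims=True)
        self.n = ts.shape[0]

    def __getitem__(self, idx):
        i, j = idx                               # Pearson r as a dot product
        return float(self._z[i] @ self._z[j])

    def row(self, i):
        """One full matrix row, still only O(N) extra memory."""
        return self._z @ self._z[i]

conn = OnDemandCorrelation(np.random.randn(10000, 200))
print(conn[3, 7], conn.row(3).shape)             # element and row on demand
```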

  7. Memory-Efficient Analysis of Dense Functional Connectomes

    PubMed Central

    Loewe, Kristian; Donohue, Sarah E.; Schoenfeld, Mircea A.; Kruse, Rudolf; Borgelt, Christian

    2016-01-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download. PMID:27965565

  8. An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.

    PubMed

    Brito da Silva, Leonardo Enzo; Wunsch, Donald C

    2018-06-01

    Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
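
    A minimal sketch of the similarity measure described above, a Parzen-window estimate of the cross information potential and the associated Renyi quadratic cross entropy (the kernel width and all names are our assumptions):

```python
import numpy as np

def cross_information_potential(x, y, sigma=0.25):
    """Parzen estimate of the cross information potential CIP(p, q) between
    the data subsets of two SOM neurons (rows are samples).  With Gaussian
    kernels of width sigma on each sample, each pairwise term is a Gaussian
    of combined variance 2*sigma**2 evaluated at the sample difference."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared distances
    s2 = 2.0 * sigma**2
    dim = x.shape[1]
    g = np.exp(-d2 / (2.0 * s2)) / (2.0 * np.pi * s2) ** (dim / 2)
    return g.mean()

def renyi_quadratic_cross_entropy(x, y, sigma=0.25):
    """H2(p; q) = -log CIP(p, q); low values mean similar distributions,
    which a visualization of this kind would render as a weak boundary."""
    return -np.log(cross_information_potential(x, y, sigma) + 1e-300)

x = np.random.randn(200, 2)
y = np.random.randn(150, 2) + 0.2
print(renyi_quadratic_cross_entropy(x, y))
```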

  9. Hands-on Approach to Prepare Specialists in Climate Changes Modeling and Analysis Using an Information-Computational Web-GIS Portal "Climate"

    NASA Astrophysics Data System (ADS)

    Shulgina, T. M.; Gordova, Y. E.; Martynova, Y. V.

    2014-12-01

    Making education relevant to workplace tasks is a key problem of higher education in the professional field of environmental sciences. To answer this challenge, several new courses for students of the "Climatology" and "Meteorology" specialties were developed and implemented at Tomsk State University, combining theoretical knowledge from up-to-date environmental sciences with computational tasks. To organize the educational process we use the open-source course management system Moodle (www.moodle.org), which allows us to combine text and multimedia in the theoretical part of the courses. The hands-on approach is realized through innovative trainings performed within the information-computational web-GIS platform "Climate" (http://climate.scert.ru/). The platform provides a set of tools and databases allowing a researcher to perform climate change analysis for a selected territory. The tools are also used for student trainings, which contain practical tasks on climate modeling and climate change assessment and analysis. Laboratory exercises cover three topics: "Analysis of regional climate changes"; "Analysis of climate extreme indices on the regional scale"; and "Analysis of future climate". They are designed to consolidate students' knowledge of the discipline, to instill the skills to work independently with large amounts of geophysical data using the modern processing and analysis tools of the web-GIS platform "Climate", and to train them to present the results of laboratory work as reports with a statement of the problem, the results of calculations, and a logically justified conclusion. Thus, students are engaged in the use of modern tools for geophysical data analysis, which cultivates the dynamics of their professional learning. The approach helps to fill the gap between education and workplace tasks because it offers experience, increases student involvement, and advances the use of modern information and communication tools. Financial support for this research from the RFBR (13-05-12034, 14-05-00502), SB RAS project VIII.80.2.1, and a grant of the President of RF (№ 181) is acknowledged.

  10. Blade loss transient dynamics analysis, volume 2. Task 2: Theoretical and analytical development. Task 3: Experimental verification

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    The component element method was used to develop a transient dynamic analysis computer program, essentially based on modal synthesis combined with a central finite-difference numerical integration scheme. The methodology leads to a modular, building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had previously been instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
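
    For orientation, the central-difference integration of decoupled modal equations, the kind of scheme the component element method combines with modal synthesis, can be sketched as follows (a generic explicit scheme, not TETRA's code):

```python
import numpy as np

def central_difference_modal(omega, zeta, force_fn, dt, n_steps):
    """Explicit central-difference integration of decoupled modal equations
    q'' + 2*zeta*omega*q' + omega**2 * q = f(t).  `omega` and `zeta` are
    arrays of modal frequencies and damping ratios; `force_fn(t)` returns
    the modal force vector.  Stability requires dt < 2 / max(omega)."""
    q_prev = np.zeros_like(omega)
    q = np.zeros_like(omega)
    history = []
    for n in range(n_steps):
        f = force_fn(n * dt)
        # central differences: q' ~ (q_next - q_prev) / (2 dt),
        #                      q'' ~ (q_next - 2 q + q_prev) / dt^2
        a = 1.0 / dt**2 + zeta * omega / dt
        b = 2.0 / dt**2 - omega**2
        c = 1.0 / dt**2 - zeta * omega / dt
        q_next = (f + b * q - c * q_prev) / a
        q_prev, q = q, q_next
        history.append(q)
    return np.array(history)

# demo: one lightly damped 10 rad/s mode hit by a short impulse
resp = central_difference_modal(np.array([10.0]), np.array([0.02]),
                                lambda t: np.array([1.0 if t < 0.01 else 0.0]),
                                dt=1e-3, n_steps=5000)
```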

  11. Capacitance changes in frog skin caused by theophylline and antidiuretic hormone.

    PubMed

    Cuthbert, A W; Painter, E

    1969-09-01

    1. Impedance loci for frog skins have been calculated by computer analysis from voltage transients developed across the tissues.2. Attention has been paid to simultaneous changes in conductance and capacitance of skins treated either with antidiuretic hormone (ADH) or with theophylline. These drugs always caused an increase in conductance and usually the skin capacitance also increased. However, changes in conductance were not correlated with capacitance changes.3. Changes in capacitance caused by the drugs may represent pore formation in the barrier to water flow, since both drugs increase hydro-osmotic flow in epithelia. If this interpretation is correct, then 0.14% of the membrane area forms water-permeable pores in response to a maximal dose of ADH. This value is somewhat less than the value obtained previously (0.3%) by graphical analysis.4. A theoretical account is given of the relative accuracy of the computer method and the graphical method for voltage transient analysis.
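
    A generic way to obtain such impedance loci from voltage transients (our sketch, not the authors' 1969 computer analysis) is to Fourier-transform the differentiated response to a constant-current step; the model values below are illustrative only:

```python
import numpy as np

def impedance_locus(v, dt, i_step):
    """Impedance Z(j*omega) from the voltage transient v(t) following a
    constant-current step of amplitude i_step.  Since L{dv/dt} = sV - v(0)
    and Z(s) = sV(s)/i_step for a step input, Z = (F{dv/dt} + v(0))/i_step;
    differentiating first makes the transformed signal decay to zero."""
    dvdt = np.gradient(v, dt)
    spectrum = np.fft.rfft(dvdt) * dt        # approximate Fourier integral
    freqs = np.fft.rfftfreq(len(v), dt)      # Hz
    z = (spectrum + v[0]) / i_step
    return freqs, z                          # plot Re(z) vs -Im(z) for the locus

# toy skin model: series resistance plus a membrane RC in parallel, 1 uA step
dt = 1e-4
t = np.arange(0.0, 0.5, dt)
Rs, Rm, Cm, I0 = 1e3, 5e3, 1e-6, 1e-6
v = I0 * (Rs + Rm * (1.0 - np.exp(-t / (Rm * Cm))))
freqs, z = impedance_locus(v, dt, I0)        # z ~ Rs + Rm at low frequency
```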

  12. Molecular docking, TG/DTA, molecular structure, harmonic vibrational frequencies, natural bond orbital and TD-DFT analysis of diphenyl carbonate by DFT approach

    NASA Astrophysics Data System (ADS)

    Xavier, S.; Periandy, S.; Carthigayan, K.; Sebastian, S.

    2016-12-01

    Vibrational spectral analysis of Diphenyl Carbonate (DPC) is carried out by using FT-IR and FT-Raman spectroscopic techniques. It is found that all vibrational modes are in the expected region. Gaussian computational calculations were performed using B3LYP method with 6-311++G (d, p) basis set. The computed geometric parameters are in good agreement with XRD data. The observation shows that the structure of the carbonate group is unsymmetrical by ∼5° due to the attachment of the two phenyl rings. The stability of the molecule arising from hyperconjugative interaction and charge delocalization are analyzed by Natural Bond Orbital (NBO) study and the results show the lone pair transition has higher stabilization energy compared to all other. The 1H and 13C NMR chemical shifts are calculated using the Gauge-Including Atomic Orbital (GIAO) method with B3LYP/6-311++G (d, p) method. The chemical shifts computed theoretically go very closer to the experimental results. A study on the electronic and optical properties; absorption wavelengths, excitation energy, dipole moment and frontier molecular orbital energies and Molecular electrostatic potential (MEP) exhibit the high reactivity nature of the molecule. The non-linear optical property of the DPC molecule predicted theoretically found to be good candidate for NLO material. TG/DTA analysis was made and decomposition of the molecule with respect to the temperature was studied. DPC having the anthelmintic activity is docked in the Hemoglobin of Fasciola hepatica protein. The DPC has been screened to antimicrobial activity and found to exhibit antibacterial effects.

  13. Contact Kinetics in Fractal Macromolecules.

    PubMed

    Dolgushev, Maxim; Guérin, Thomas; Blumen, Alexander; Bénichou, Olivier; Voituriez, Raphaël

    2015-11-13

    We consider the kinetics of first contact between two monomers of the same macromolecule. Relying on a fractal description of the macromolecule, we develop an analytical method to compute the mean first contact time for various molecular sizes. In our theoretical description, the non-Markovian feature of monomer motion, arising from the interactions with the other monomers, is captured by accounting for the nonequilibrium conformations of the macromolecule at the very instant of first contact. This analysis reveals a simple scaling relation for the mean first contact time between two monomers, which involves only their equilibrium distance and the spectral dimension of the macromolecule, independently of its microscopic details. Our theoretical predictions are in excellent agreement with numerical stochastic simulations.

  14. Theoretical analysis of the electrical aspects of the basic electro-impulse problem in aircraft de-icing applications

    NASA Technical Reports Server (NTRS)

    Henderson, R. A.; Schrag, R. L.

    1986-01-01

    A summary of modeling the electrical system aspects of a coil and metal target configuration resembling a practical electro-impulse deicing (EIDI) installation, and a simple circuit for providing energy to the coil, was presented. The model was developed in sufficient theoretical detail to allow the generation of computer algorithms for the current in the coil, the magnetic induction on both surfaces of the target, the force between the coil and target, and the impulse delivered to the target. These algorithms were applied to a specific prototype EIDI test system for which the current, magnetic fields near the target surfaces, and impulse were previously measured.

  15. A program to evaluate a control system based on feedback of aerodynamic pressure differentials, part 1

    NASA Technical Reports Server (NTRS)

    Hrabak, R. R.; Levy, D. W.; Finn, P.; Roskam, J.

    1981-01-01

    The use of pressure differentials in a flight control system was evaluated. The pressure profile around the test surface was determined using two techniques: (1) wind-tunnel data (actual); and (2) the NASA/Langley Single Element Airfoil Computer Program (theoretical). The system designed to evaluate the concept of using pressure differentials is composed of a sensor drive and power amplifiers, an actuator, a position potentiometer, and a control surface. The characteristics (both desired and actual) of the system and each individual component were analyzed, and the desired characteristics of the system as a whole are given. The flight control system developed, the testing procedures and data reduction methods used, and the theoretical frequency response analysis are described.

  16. Consistency of Cluster Analysis for Cognitive Diagnosis: The Reduced Reparameterized Unified Model and the General Diagnostic Model.

    PubMed

    Chiu, Chia-Yi; Köhn, Hans-Friedrich

    2016-09-01

    The asymptotic classification theory of cognitive diagnosis (ACTCD) provided the theoretical foundation for using clustering methods that do not rely on a parametric statistical model for assigning examinees to proficiency classes. Like general diagnostic classification models, clustering methods can be useful in situations where the true diagnostic classification model (DCM) underlying the data is unknown and possibly misspecified, or the items of a test conform to a mix of multiple DCMs. Clustering methods can also be an option when fitting advanced and complex DCMs encounters computational difficulties. These can range from the use of excessive CPU times to plain computational infeasibility. However, the propositions of the ACTCD have only been proven for the Deterministic Input Noisy Output "AND" gate (DINA) model and the Deterministic Input Noisy Output "OR" gate (DINO) model. For other DCMs, there does not exist a theoretical justification to use clustering for assigning examinees to proficiency classes. But if clustering is to be used legitimately, then the ACTCD must cover a larger number of DCMs than just the DINA model and the DINO model. Thus, the purpose of this article is to prove the theoretical propositions of the ACTCD for two other important DCMs, the Reduced Reparameterized Unified Model and the General Diagnostic Model.
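
    For concreteness, the kind of model-free clustering the ACTCD justifies can be sketched as follows: form attribute-wise sum scores from the Q-matrix and cluster them, here with scikit-learn's KMeans (our illustration; the theory above concerns when such class assignments are consistent):

```python
import numpy as np
from sklearn.cluster import KMeans

def actcd_cluster(X, Q, n_classes):
    """X: (examinees x items) 0/1 responses; Q: (items x attributes) Q-matrix.
    W[i, k] is examinee i's total score on items measuring attribute k; the
    ACTCD studies when assignments based on such statistics are consistent."""
    W = X @ Q                                   # attribute-wise sum scores
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
    return W, km.fit_predict(W)

rng = np.random.default_rng(0)
Q = rng.integers(0, 2, size=(30, 3))            # synthetic Q-matrix
X = rng.integers(0, 2, size=(500, 30))          # synthetic responses
W, labels = actcd_cluster(X, Q, n_classes=8)    # 2**3 proficiency classes
```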

  17. Reflectance analysis of porosity gradient in nanostructured silicon layers

    NASA Astrophysics Data System (ADS)

    Jurečka, Stanislav; Imamura, Kentaro; Matsumoto, Taketoshi; Kobayashi, Hikaru

    2017-12-01

    In this work we study the optical properties of nanostructured layers formed on a silicon surface. Nanostructured layers on Si are formed in order to strongly suppress light reflectance; low spectral reflectance is important for improving the conversion efficiency of solar cells and for other optoelectronic applications. Our effective method of forming nanostructured layers with ultralow reflectance over a broad interval of wavelengths is based on metal-assisted etching of Si. The Si surface, immersed in an HF and H2O2 solution, is etched in contact with a Pt mesh roller, and the structure of the mesh is transferred onto the etched surface. During this etching procedure the layer density evolves gradually, and the spectral reflectance decreases exponentially with depth in the porous layer. We analyzed the layer porosity by incorporating a porosity gradient into the construction of a theoretical model of the layer's spectral reflectance. The analyzed layer is split into 20 sublayers in our approach. The complex dielectric function in each sublayer is computed using Bruggeman effective-medium theory, and the theoretical spectral reflectance of the modelled multilayer system is computed using the Abeles matrix formalism. The porosity gradient is extracted by optimizing the theoretical reflectance model against the experimental values. The resulting porosity profile provides important information for optimizing the technological treatment operations.
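
    A compact sketch of the two ingredients named above, Bruggeman mixing per sublayer and Abeles characteristic-matrix reflectance at normal incidence (the porosity profile and optical constants are placeholders, not the fitted values):

```python
import numpy as np

def bruggeman(eps_si, f_air):
    """Bruggeman effective permittivity of a porous-Si sublayer with air
    fraction f_air: the positive root of the two-phase self-consistency
    condition  f*(1-eps)/(1+2*eps) + (1-f)*(eps_si-eps)/(eps_si+2*eps) = 0."""
    b = f_air * (2.0 - eps_si) + (1.0 - f_air) * (2.0 * eps_si - 1.0)
    return (b + np.sqrt(b**2 + 8.0 * eps_si)) / 4.0

def reflectance(eps_layers, d_layers, eps_sub, lam):
    """Abeles characteristic-matrix reflectance at normal incidence for a
    stack of sublayers of thicknesses d_layers on a substrate."""
    eta0, eta_sub = 1.0, np.sqrt(eps_sub)
    M = np.eye(2, dtype=complex)
    for eps, d in zip(eps_layers, d_layers):
        n = np.sqrt(eps)
        delta = 2.0 * np.pi * n * d / lam          # layer phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, eta_sub])
    r = (eta0 * B - C) / (eta0 * B + C)
    return abs(r) ** 2

# 20 sublayers whose porosity decays with depth, as in the analysis above
eps_si = 15.0 + 0.2j                               # placeholder Si permittivity
porosity = np.linspace(0.8, 0.1, 20)               # assumed depth profile
eps = [bruggeman(eps_si, f) for f in porosity]
R = reflectance(eps, [20e-9] * 20, eps_si, lam=550e-9)
```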

  18. Numerical nonlinear inelastic analysis of stiffened shells of revolution. Volume 1: Theory manual for STARS-2P digital computer program

    NASA Technical Reports Server (NTRS)

    Svalbonas, V.; Levine, H.

    1975-01-01

    The theoretical background for the STARS-2P nonlinear inelastic analysis program is discussed. The theory is amenable to the analysis of large-deflection inelastic behavior in axisymmetric shells of revolution subjected to axisymmetric loadings, and the analysis can treat effects such as those involved in nonproportional and cyclic loading conditions. The following are also discussed: orthotropic nonlinear kinematic hardening theory; shell-wall cross sections and discrete ring stiffeners; the coupled axisymmetric large-deflection elasto-plastic torsion problem; and provision for the inelastic treatment of smeared stiffeners, isogrid, and waffle wall constructions.

  19. Natural language processing, pragmatics, and verbal behavior

    PubMed Central

    Cherpas, Chris

    1992-01-01

    Natural Language Processing (NLP) is that part of Artificial Intelligence (AI) concerned with endowing computers with verbal and listener repertoires, so that people can interact with them more easily. Most attention has been given to accurately parsing and generating syntactic structures, although NLP researchers are finding ways of handling the semantic content of language as well. It is increasingly apparent that understanding the pragmatic (contextual and consequential) dimension of natural language is critical for producing effective NLP systems. While there are some techniques for applying pragmatics in computer systems, they are piecemeal, crude, and lack an integrated theoretical foundation. Unfortunately, there is little awareness that Skinner's (1957) Verbal Behavior provides an extensive, principled pragmatic analysis of language. The implications of Skinner's functional analysis for NLP and for verbal aspects of epistemology lead to a proposal for a “user expert”—a computer system whose area of expertise is the long-term computer user. The evolutionary nature of behavior suggests an AI technology known as genetic algorithms/programming for implementing such a system. PMID:22477052

  20. Interaction entropy for protein-protein binding

    NASA Astrophysics Data System (ADS)

    Sun, Zhaoxi; Yan, Yu N.; Yang, Maoyou; Zhang, John Z. H.

    2017-03-01

    Protein-protein interactions are at the heart of signal transduction and are central to the function of protein machines in biology. The highly specific protein-protein binding is quantitatively characterized by the binding free energy, whose accurate calculation from first principles is a grand challenge in computational biology. In this paper, we show how the interaction entropy approach, which was recently proposed for protein-ligand binding free energy calculation, can be applied to computing the entropic contribution to the protein-protein binding free energy. An explicit theoretical derivation of the interaction entropy approach for protein-protein interaction systems is given in detail from the basic definition. Extensive computational studies for a dozen realistic protein-protein interaction systems are carried out using the present approach, and comparisons with results from the standard normal mode method are presented. The applicability of the method to protein-protein binding, as well as its limitations in numerical computation, is discussed. Our study and analysis of the results provide useful information for extracting the correct entropic contribution in protein-protein binding from molecular dynamics simulations.
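
    The central formula of the interaction entropy approach, as given in its original protein-ligand formulation and carried over here to protein-protein systems, expresses the entropic term as an exponential average of interaction-energy fluctuations along the MD trajectory:

```latex
-T\Delta S_{\mathrm{int}} \;=\; \frac{1}{\beta}\,\ln\!\left\langle e^{\beta\,\Delta E_{\mathrm{int}}}\right\rangle,
\qquad \Delta E_{\mathrm{int}} = E_{\mathrm{int}} - \langle E_{\mathrm{int}}\rangle,
\qquad \beta = 1/(k_{B}T)
```

    where the angle brackets denote the average over MD snapshots of the protein-protein interaction energy; by Jensen's inequality the term is non-negative, i.e., an entropic penalty to binding.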

  1. Interaction entropy for protein-protein binding.

    PubMed

    Sun, Zhaoxi; Yan, Yu N; Yang, Maoyou; Zhang, John Z H

    2017-03-28

    Protein-protein interactions are at the heart of signal transduction and are central to the function of protein machines in biology. Highly specific protein-protein binding is quantitatively characterized by the binding free energy, whose accurate calculation from first principles is a grand challenge in computational biology. In this paper, we show how the interaction entropy approach, which was recently proposed for protein-ligand binding free energy calculation, can be applied to computing the entropic contribution to the protein-protein binding free energy. An explicit theoretical derivation of the interaction entropy approach for protein-protein interaction systems is given in detail from the basic definition. Extensive computational studies for a dozen realistic protein-protein interaction systems are carried out using the present approach, and comparisons of the results for these protein-protein systems with those from the standard normal mode method are presented. Analysis of the present method for application in protein-protein binding, as well as the limitations of the method in numerical computation, is discussed. Our study and analysis of the results provide useful information for extracting the correct entropic contribution in protein-protein binding from molecular dynamics simulations.
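
    The central quantity in the interaction entropy approach (as it is commonly stated in the literature on the method) is an exponential average of the fluctuation of the gas-phase interaction energy along the MD trajectory: -TΔS = kT ln⟨exp(ΔE_int/kT)⟩ with ΔE_int = E_int - ⟨E_int⟩. The sketch below (Python/NumPy) illustrates that fluctuation formula only; the energy series and temperature are placeholders, and the papers' exact conventions may differ.

      import numpy as np

      KB = 0.0019872041  # Boltzmann constant, kcal/(mol K)

      def interaction_entropy(e_int, temperature=300.0):
          # Entropic contribution -T*dS (kcal/mol) from a series of
          # gas-phase interaction energies sampled along an MD run:
          #   -T*dS = kT * ln< exp(dE/kT) >,  dE = E - <E>.
          kt = KB * temperature
          de = np.asarray(e_int) - np.mean(e_int)
          return kt * np.log(np.mean(np.exp(de / kt)))

      # toy usage with synthetic energies (kcal/mol)
      rng = np.random.default_rng(0)
      e = -120.0 + 3.0 * rng.standard_normal(5000)
      print(interaction_entropy(e))   # ~ sigma^2/(2kT) for Gaussian noise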

  2. Multilayer modeling and analysis of human brain networks

    PubMed Central

    2017-01-01

    Understanding how the human brain is structured, and how its architecture is related to function, is of paramount importance for a variety of applications, including but not limited to new ways to prevent, deal with, and cure brain diseases, such as Alzheimer's or Parkinson's, and psychiatric disorders, such as schizophrenia. The recent advances in structural and functional neuroimaging, together with the increasing inclination toward interdisciplinary approaches involving computer science, mathematics, and physics, are fostering interesting results from computational neuroscience that are quite often based on the analysis of complex network representations of the human brain. In recent years, this representation has experienced a theoretical and computational revolution that is now reaching neuroscience, allowing us to cope with the increasing complexity of the human brain across multiple scales and in multiple dimensions and to model structural and functional connectivity from new perspectives, often combined with each other. In this work, we review the main achievements obtained from interdisciplinary research based on magnetic resonance imaging and establish, de facto, the birth of multilayer network analysis and modeling of the human brain. PMID:28327916

  3. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  4. A toolbox for the fast information analysis of multiple-site LFP, EEG and spike train recordings

    PubMed Central

    Magri, Cesare; Whittingstall, Kevin; Singh, Vanessa; Logothetis, Nikos K; Panzeri, Stefano

    2009-01-01

    Background Information theory is an increasingly popular framework for studying how the brain encodes sensory information. Despite its widespread use for the analysis of spike trains of single neurons and of small neural populations, its application to the analysis of other types of neurophysiological signals (EEGs, LFPs, BOLD) has remained relatively limited so far. This is due to the limited-sampling bias which affects calculation of information, to the complexity of the techniques to eliminate the bias, and to the lack of publicly available fast routines for the information analysis of multi-dimensional responses. Results Here we introduce a new C- and Matlab-based information theoretic toolbox, specifically developed for neuroscience data. This toolbox implements a novel computationally-optimized algorithm for estimating many of the main information theoretic quantities and bias correction techniques used in neuroscience applications. We illustrate and test the toolbox in several ways. First, we verify that these algorithms provide accurate and unbiased estimates of the information carried by analog brain signals (i.e. LFPs, EEGs, or BOLD) even when using limited amounts of experimental data. This test is important since existing algorithms had so far been tested primarily on spike trains. Second, we apply the toolbox to the analysis of EEGs recorded from a subject watching natural movies, and we characterize the electrode locations, frequencies and signal features carrying the most visual information. Third, we explain how the toolbox can be used to break down the information carried by different features of the neural signal into distinct components reflecting different ways in which correlations between parts of the neural signal contribute to coding. We illustrate this breakdown by analyzing LFPs recorded from primary visual cortex during presentation of naturalistic movies. Conclusion The new toolbox presented here implements fast and data-robust computations of the most relevant quantities used in information theoretic analysis of neural data. The toolbox can be easily used within Matlab, the environment used by most neuroscience laboratories for the acquisition, preprocessing and plotting of neural data. It can therefore significantly enlarge the domain of application of information theory to neuroscience, and lead to new discoveries about the neural code. PMID:19607698

  5. A toolbox for the fast information analysis of multiple-site LFP, EEG and spike train recordings.

    PubMed

    Magri, Cesare; Whittingstall, Kevin; Singh, Vanessa; Logothetis, Nikos K; Panzeri, Stefano

    2009-07-16

    Information theory is an increasingly popular framework for studying how the brain encodes sensory information. Despite its widespread use for the analysis of spike trains of single neurons and of small neural populations, its application to the analysis of other types of neurophysiological signals (EEGs, LFPs, BOLD) has remained relatively limited so far. This is due to the limited-sampling bias which affects calculation of information, to the complexity of the techniques to eliminate the bias, and to the lack of publicly available fast routines for the information analysis of multi-dimensional responses. Here we introduce a new C- and Matlab-based information theoretic toolbox, specifically developed for neuroscience data. This toolbox implements a novel computationally-optimized algorithm for estimating many of the main information theoretic quantities and bias correction techniques used in neuroscience applications. We illustrate and test the toolbox in several ways. First, we verify that these algorithms provide accurate and unbiased estimates of the information carried by analog brain signals (i.e. LFPs, EEGs, or BOLD) even when using limited amounts of experimental data. This test is important since existing algorithms had so far been tested primarily on spike trains. Second, we apply the toolbox to the analysis of EEGs recorded from a subject watching natural movies, and we characterize the electrode locations, frequencies and signal features carrying the most visual information. Third, we explain how the toolbox can be used to break down the information carried by different features of the neural signal into distinct components reflecting different ways in which correlations between parts of the neural signal contribute to coding. We illustrate this breakdown by analyzing LFPs recorded from primary visual cortex during presentation of naturalistic movies. The new toolbox presented here implements fast and data-robust computations of the most relevant quantities used in information theoretic analysis of neural data. The toolbox can be easily used within Matlab, the environment used by most neuroscience laboratories for the acquisition, preprocessing and plotting of neural data. It can therefore significantly enlarge the domain of application of information theory to neuroscience, and lead to new discoveries about the neural code.
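
    The limited-sampling bias described above is easy to reproduce: the plug-in information estimate computed from limited data is systematically too high. The sketch below is not the toolbox's API, only a minimal plug-in estimator of I(S;R) for discrete data with the simple Miller-Madow bias correction applied to each entropy term, to illustrate the kind of computation the toolbox optimizes and debiases.

      import numpy as np

      def entropy_mm(counts):
          # plug-in entropy (bits) with the Miller-Madow bias correction
          n = counts.sum()
          p = counts[counts > 0] / n
          m = np.count_nonzero(counts)            # occupied bins
          return -np.sum(p * np.log2(p)) + (m - 1) / (2 * n * np.log(2))

      def mutual_information(stim, resp):
          # I(S;R) in bits from paired discrete stimulus/response labels
          joint = np.zeros((stim.max() + 1, resp.max() + 1))
          for s, r in zip(stim, resp):
              joint[s, r] += 1
          return (entropy_mm(joint.sum(axis=1)) + entropy_mm(joint.sum(axis=0))
                  - entropy_mm(joint.ravel()))

      rng = np.random.default_rng(0)
      s = rng.integers(0, 4, 10000)
      r = (s + rng.integers(0, 2, 10000)) % 4     # noisy copy of the stimulus
      print(mutual_information(s, r))             # close to 1 bit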

  6. Theoretical prediction of thick wing and pylon-fuselage-fanpod-nacelle aerodynamic characteristics at subcritical speeds. Part 1: Theory and results

    NASA Technical Reports Server (NTRS)

    Tulinius, J. R.

    1974-01-01

    The theoretical development and the comparison of results with data of a thick wing and pylon-fuselage-fanpod-nacelle analysis are presented. The analysis utilizes potential flow theory to compute the surface velocities and pressures, section lift and center of pressure, and the total configuration lift, moment, and vortex drag. The skin friction drag is also estimated in the analysis. The perturbation velocities induced by the wing and pylon, fuselage and fanpod, and nacelle are represented by source and vortex lattices, quadrilateral vortices, and source frustums, respectively. The strengths of these singularities are solved for simultaneously including all interference effects. The wing and pylon planforms, twists, cambers, and thickness distributions, and the fuselage and fanpod geometries can be arbitrary in shape, provided the surface gradients are smooth. The flow through nacelle is assumed to be axisymmetric. An axisymmetric center engine hub can also be included. The pylon and nacelle can be attached to the wing, fuselage, or fanpod.

  7. Status of the R-matrix Code AMUR toward a consistent cross-section evaluation and covariance analysis for the light nuclei

    NASA Astrophysics Data System (ADS)

    Kunieda, Satoshi

    2017-09-01

    We report the status of the R-matrix code AMUR toward consistent cross-section evaluation and covariance analysis for the light-mass nuclei. The applicable limit of the code is extended by including computational capability for the charged-particle elastic scattering cross-sections and the neutron capture cross-sections; example results are shown in the main text. A simultaneous analysis is performed on the 17O compound system including the 16O(n,tot) and 13C(α,n)16O reactions together with the 16O(n,n) and 13C(α,α) scattering cross-sections. It is found that a large theoretical background is required for each reaction process to obtain a simultaneous fit with all the experimental cross-sections we analyzed. Also, the hard-sphere radii should be assumed to be different from the channel radii. Although these are technical approaches, we could learn the roles and sources of the theoretical background in the standard R-matrix.

  8. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to another, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables efficiently combining parallel storage access routines and sequential image processing operations. This paper shows how processing- and I/O-intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
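
    The core idea of overlapping data access with processing, which CAP compiles from a high-level specification, can be illustrated by hand with a bounded prefetch queue. The sketch below (Python, with hypothetical read_tile/process_tile callables) shows the pattern only, not CAP itself.

      import threading, queue

      def pipeline(read_tile, process_tile, tiles, depth=8):
          # Overlap I/O and computation: a reader thread prefetches image
          # tiles into a bounded queue while the main thread processes them.
          q = queue.Queue(maxsize=depth)

          def reader():
              for t in tiles:
                  q.put(read_tile(t))    # blocks when the queue is full
              q.put(None)                # end-of-stream marker

          threading.Thread(target=reader, daemon=True).start()
          results = []
          while (tile := q.get()) is not None:
              results.append(process_tile(tile))
          return results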

  9. Coupling MD Simulations and X-ray Absorption Spectroscopy to Study Ions in Solution

    NASA Astrophysics Data System (ADS)

    Marcos, E. Sánchez; Beret, E. C.; Martínez, J. M.; Pappalardo, R. R.; Ayala, R.; Muñoz-Páez, A.

    2007-12-01

    The structure of ionic solutions is a key point in understanding the physicochemical properties of electrolyte solutions. Among the small number of experimental techniques that can supply direct information on the ion environment, X-ray absorption spectroscopy (XAS) has gained importance during the last decades, although it is not free of the difficulties in data analysis that must be overcome to provide reliable structures. Computer simulation of ions in solution is a theoretical alternative for obtaining information on the solvation structure. The use of computational chemistry can thus increase the understanding of these systems, although an accurate description of ionic solvation phenomena still represents a significant challenge for theoretical chemistry. We present: (a) the assignment of features in the XANES spectrum to well-defined structural motifs in the ion environment, (b) MD-based evaluation of the EXAFS parameters used in the fitting procedure to ease the structural resolution, and (c) the use of the agreement between experimental and simulated XANES spectra to guide the choice of a given intermolecular potential for computer simulations. The chemical problems examined are: (a) the identification of the second hydration shell in dilute aqueous solutions of highly charged cations, such as Cr3+, Rh3+ and Ir3+, (b) the invisibility to XAS of certain structures characterized by computer simulations but exhibiting highly dynamical behavior, and (c) the solvation of Br- in acetonitrile.

  10. Coupling MD Simulations and X-ray Absorption Spectroscopy to Study Ions in Solution

    NASA Astrophysics Data System (ADS)

    Marcos, E. Sánchez; Beret, E. C.; Martínez, J. M.; Pappalardo, R. R.; Ayala, R.; Muñoz-Páez, A.

    2007-11-01

    The structure of ionic solutions is a key point in understanding the physicochemical properties of electrolyte solutions. Among the small number of experimental techniques that can supply direct information on the ion environment, X-ray absorption spectroscopy (XAS) has gained importance during the last decades, although it is not free of the difficulties in data analysis that must be overcome to provide reliable structures. Computer simulation of ions in solution is a theoretical alternative for obtaining information on the solvation structure. The use of computational chemistry can thus increase the understanding of these systems, although an accurate description of ionic solvation phenomena still represents a significant challenge for theoretical chemistry. We present: (a) the assignment of features in the XANES spectrum to well-defined structural motifs in the ion environment, (b) MD-based evaluation of the EXAFS parameters used in the fitting procedure to ease the structural resolution, and (c) the use of the agreement between experimental and simulated XANES spectra to guide the choice of a given intermolecular potential for computer simulations. The chemical problems examined are: (a) the identification of the second hydration shell in dilute aqueous solutions of highly charged cations, such as Cr3+, Rh3+ and Ir3+, (b) the invisibility to XAS of certain structures characterized by computer simulations but exhibiting highly dynamical behavior, and (c) the solvation of Br- in acetonitrile.

  11. Theoretical and experimental studies of 3β-acetoxy-5α-cholestan-6-one oxime

    NASA Astrophysics Data System (ADS)

    Khan, Azhar U.; Avecillia, Fernando; Malik, Nazia; Khan, Md. Shahzad; Khan, Mohd Shahid; Mushtaque, Md.

    2016-10-01

    The steroidal oxime 3β-acetoxy-5α-cholestan-6-one oxime has been synthesized in 3.5 min by a microwave-induced reaction of the saturated steroidal ketone with aqueous hydroxylamine hydrochloride in ethanol. The structure of the compound was elucidated by UV, IR, 1H NMR and single-crystal X-ray analysis. Computational quantum chemical studies, such as the IR and UV analyses, were performed by density functional theory (DFT) with the Becke-3-Lee-Yang-Parr (B3LYP) exchange-correlation functional in combination with the 6-31++G(d,p) basis set. The harmonic vibrational frequencies and the optimized geometric parameters have been interpreted and compared with the experimental values. The theoretical wavelength at 214.88 nm corresponds to the experimental value of 214.0 nm; the nature of this transition is n → π*. The theoretical results are in good agreement with the experimental results.

  12. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GA's.
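
    One way to turn the variance-based sizing relation into code is the standard sample-size argument: the population must be large enough that the standard error of the mean fitness, sqrt(var/n), resolves the fitness signal to be detected. The sketch below is a hedged illustration of this reasoning, not the algorithm of the paper; all parameter names and bounds are placeholders.

      def resize_population(fitness_var, signal, n_min=20, n_max=2000, z=1.96):
          # choose n so that sqrt(fitness_var / n) resolves a fitness
          # difference `signal` at confidence z:  n >= (z/signal)^2 * var
          n = int((z / signal) ** 2 * fitness_var) + 1
          return max(n_min, min(n_max, n))

      # e.g. noisy fitness (variance 4.0) and a small signal to detect (0.5)
      print(resize_population(4.0, 0.5))   # -> 62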

  13. Computational models of neuromodulation.

    PubMed

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for understanding the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulation on its computational and functional roles rather than on anatomical or chemical criteria. We review the main frameworks in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single-cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  14. Human-Computer Interaction and Sociological Insight: A Theoretical Examination and Experiment in Building Affinity in Small Groups

    ERIC Educational Resources Information Center

    Oren, Michael Anthony

    2011-01-01

    The juxtaposition of classic sociological theory and the, relatively, young discipline of human-computer interaction (HCI) serves as a powerful mechanism for both exploring the theoretical impacts of technology on human interactions as well as the application of technological systems to moderate interactions. It is the intent of this dissertation…

  15. Spectral and structural studies of the anti-cancer drug Flutamide by density functional theoretical method

    NASA Astrophysics Data System (ADS)

    Mariappan, G.; Sundaraganesan, N.

    2014-01-01

    A comprehensive screening of the more recent DFT theoretical approach to structural analysis is presented. The compound 2-methyl-N-[4-nitro-3-(trifluoromethyl)phenyl]propanamide, usually called Flutamide (abbreviated here as FLT), is an important and efficacious anti-cancer drug. The molecular geometry, vibrational spectra, and electronic and NMR spectral interpretation of Flutamide have been studied with the aid of the density functional theory (DFT) method. The vibrational assignments of the normal modes were performed on the basis of PED calculations using the VEDA 4 program. Comparison of the computational results with the X-ray diffraction results of Flutamide allowed the evaluation of the structure predictions and confirmed B3LYP/6-31G(d,p) as accurate for structure determination. Application of scaling factors to the IR and Raman frequency predictions showed good agreement with the experimental values, which supports the assignment of the major contributors to the vibrational modes of the title compound. The stability of the molecule arising from hyperconjugative interactions, leading to its bioactivity, and charge delocalization have been analyzed using natural bond orbital (NBO) analysis. NMR chemical shifts of the molecule were calculated using the gauge-independent atomic orbital (GIAO) method. The comparison of the measured FTIR, FT-Raman, and UV-Visible data to the calculated values allowed assignment of the major spectral features of the title molecule. In addition, a frontier molecular orbital analysis was also carried out using theoretical calculations.

  16. Anticorrosive Effects of Some Thiophene Derivatives Against the Corrosion of Iron: A Computational Study

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Safi, Zaki S.; Kaya, Savas; Shi, Wei; Tüzün, Burak; Altunay, Nail; Kaya, Cemal

    2018-05-01

    It is known that iron is one of the most widely used metals in industrial production. In this work, the inhibition performances of three thiophene derivatives on the corrosion of iron were investigated in the light of several theoretical approaches. In the section on DFT calculations, several global reactivity descriptors such as EHOMO, ELUMO, ionization energy (I), electron affinity (A), HOMO-LUMO energy gap (ΔE), chemical hardness (η) and softness (σ), as well as local reactivity descriptors like Fukui indices, local softness, and local electrophilicity, were considered and discussed. The adsorption behaviors of the considered thiophene derivatives on the Fe(110) surface were investigated using a molecular dynamics simulation approach. To determine the most active corrosion inhibitor among the studied thiophene derivatives, we used principal component analysis (PCA) and agglomerative hierarchical cluster analysis (AHCA). Accordingly, all data obtained using the various theoretical calculation techniques are consistent with experiments.

  17. Whole-Volume Clustering of Time Series Data from Zebrafish Brain Calcium Images via Mixture Modeling.

    PubMed

    Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L

    2018-02-01

    Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for the clustering of data from such visualizations. The methodology is theoretically justified, and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
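
    Generic mixture-model clustering of such time series can be sketched with an off-the-shelf Gaussian mixture fit; the snippet below uses scikit-learn on stand-in data and is only an illustration of the general approach, not the purpose-built functional mixture estimator proposed in the paper.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # X: one row per voxel, columns are calcium-signal time points
      rng = np.random.default_rng(1)
      X = rng.standard_normal((1000, 200))        # stand-in for real data

      gmm = GaussianMixture(n_components=5, covariance_type="diag",
                            random_state=0).fit(X)
      labels = gmm.predict(X)                     # cluster index per voxel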

  18. Fluid Dynamic and Stability Analysis of a Thin Liquid Sheet

    NASA Technical Reports Server (NTRS)

    McMaster, Matthew S.

    1992-01-01

    Interest in thin sheet flows has recently been renewed due to their potential application in space radiators. Theoretical and experimental studies of the fluid dynamics and stability of thin liquid sheet flows have been carried out in this thesis. A computer program was developed to determine the cross-sectional shape of the edge cylinder given the cross-sectional area of the edge cylinder. A stability analysis was performed on a non-planar liquid sheet. A study was conducted to determine the effects of air resistance on the sheet.

  19. Noncolocated Time-Reversal MUSIC: High-SNR Distribution of Null Spectrum

    NASA Astrophysics Data System (ADS)

    Ciuonzo, Domenico; Rossi, Pierluigi Salvo

    2017-04-01

    We derive the asymptotic distribution of the null spectrum of the well-known Multiple Signal Classification (MUSIC) method in its computational time-reversal (TR) form. The result pertains to a single-frequency non-colocated multistatic scenario, and several TR-MUSIC variants are investigated. The analysis builds upon the first-order perturbation of the singular value decomposition and allows a simple characterization of the null-spectrum moments (up to second order). This enables a comparison of the variants in terms of spectrum stability. Finally, a numerical analysis is provided to confirm the theoretical findings.
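
    For reference, the null spectrum under study is the squared projection of the test-point Green's function vector onto the noise subspace of the multistatic data matrix; it vanishes at true scatterer locations. The sketch below is a minimal single-frequency version with a user-supplied steering function, not the non-colocated variants analyzed in the paper.

      import numpy as np

      def music_null_spectrum(K, steering, n_scatterers):
          # K: (N, N) single-frequency multistatic data matrix;
          # steering: trial location -> length-N background Green vector
          u, s, vh = np.linalg.svd(K)
          un = u[:, n_scatterers:]                 # noise subspace
          def null(x):
              g = steering(x)
              g = g / np.linalg.norm(g)
              # squared projection onto the noise subspace; ~0 at scatterers
              return np.linalg.norm(un.conj().T @ g) ** 2
          return null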

  20. Physics of solar activity

    NASA Technical Reports Server (NTRS)

    Sturrock, Peter A.

    1993-01-01

    The aim of the research activity was to increase our understanding of solar activity through data analysis, theoretical analysis, and computer modeling. Because the research subjects were diverse and many researchers were supported by this grant, a select few key areas of research are described in detail. Areas of research include: (1) energy storage and force-free magnetic field; (2) energy release and particle acceleration; (3) radiation by nonthermal electrons; (4) coronal loops; (5) flare classification; (6) longitude distributions of flares; (7) periodicities detected in the solar activity; (8) coronal heating and related problems; and (9) plasma processes.

  1. Computer-Delivered Interventions for Health Promotion and Behavioral Risk Reduction: A Meta-Analysis of 75 Randomized Controlled Trials, 1988 – 2007

    PubMed Central

    Portnoy, David B.; Scott-Sheldon, Lori A. J.; Johnson, Blair T.; Carey, Michael P.

    2008-01-01

    Objective Use of computers to promote healthy behavior is increasing. To evaluate the efficacy of these computer-delivered interventions, we conducted a meta-analysis of the published literature. Method Studies examining health domains related to the leading health indicators outlined in Healthy People 2010 were selected. Data from 75 randomized controlled trials, published between 1988 and 2007, with 35,685 participants and 82 separate interventions were included. All studies were coded independently by two raters for study and participant characteristics, design and methodology, and intervention content. We calculated weighted mean effect sizes for theoretically-meaningful psychosocial and behavioral outcomes; moderator analyses determined the relation between study characteristics and the magnitude of effect sizes for heterogeneous outcomes. Results Compared with controls, participants who received a computer-delivered intervention improved several hypothesized antecedents of health behavior (knowledge, attitudes, intentions); intervention recipients also improved health behaviors (nutrition, tobacco use, substance use, safer sexual behavior, binge/purge behaviors) and general health maintenance. Several sample, study and intervention characteristics moderated the psychosocial and behavioral outcomes. Conclusion Computer-delivered interventions can lead to improved behavioral health outcomes at first post-intervention assessment. Interventions evaluating outcomes at extended assessment periods are needed to evaluate the longer-term efficacy of computer-delivered interventions. PMID:18403003
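
    The weighted mean effect sizes referred to above are conventionally computed with inverse-variance weights; a minimal fixed-effect version is sketched below with made-up numbers (the meta-analysis itself may have used random-effects models and moderator terms).

      import numpy as np

      def weighted_mean_effect(d, v):
          # fixed-effect mean effect size with inverse-variance weights
          d, w = np.asarray(d), 1.0 / np.asarray(v)
          d_bar = np.sum(w * d) / np.sum(w)
          se = np.sqrt(1.0 / np.sum(w))            # standard error of d_bar
          return d_bar, (d_bar - 1.96 * se, d_bar + 1.96 * se)

      print(weighted_mean_effect([0.20, 0.35, 0.10], [0.010, 0.020, 0.015]))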

  2. Shell stability analysis in a computer aided engineering (CAE) environment

    NASA Technical Reports Server (NTRS)

    Arbocz, J.; Hol, J. M. A. M.

    1993-01-01

    The development of 'DISDECO', the Delft Interactive Shell DEsign COde is described. The purpose of this project is to make the accumulated theoretical, numerical and practical knowledge of the last 25 years or so readily accessible to users interested in the analysis of buckling sensitive structures. With this open ended, hierarchical, interactive computer code the user can access from his workstation successively programs of increasing complexity. The computational modules currently operational in DISDECO provide the prospective user with facilities to calculate the critical buckling loads of stiffened anisotropic shells under combined loading, to investigate the effects the various types of boundary conditions will have on the critical load, and to get a complete picture of the degrading effects the different shapes of possible initial imperfections might cause, all in one interactive session. Once a design is finalized, its collapse load can be verified by running a large refined model remotely from behind the workstation with one of the current generation 2-dimensional codes, with advanced capabilities to handle both geometric and material nonlinearities.

  3. Secure multiparty computation of a comparison problem.

    PubMed

    Liu, Xin; Li, Shundong; Liu, Jian; Chen, Xiubo; Xu, Gang

    2016-01-01

    Private comparison is fundamental to secure multiparty computation. In this study, we propose novel protocols to privately determine [Formula: see text], or [Formula: see text] in one execution. First, a 0-1-vector encoding method is introduced to encode a number into a vector, and the Goldwasser-Micali encryption scheme is used to compare integers privately. Then, we propose a protocol that uses a geometric method to compare rational numbers privately; this protocol is information-theoretically secure. Using the simulation paradigm, we prove the privacy-preserving property of our protocols in the semi-honest model. The complexity analysis shows that our protocols are more efficient than previous solutions.

  4. Les résonances d'un trou noir de Schwarzschild.

    NASA Astrophysics Data System (ADS)

    Bachelot, A.; Motet-Bachelot, A.

    1993-09-01

    This paper is devoted to theoretical and computational investigations of the scattering frequencies of scalar, electromagnetic, and gravitational waves around a spherical black hole. The authors adopt a time-dependent approach: construction of wave operators for the hyperbolic Regge-Wheeler equation; asymptotic completeness; outgoing and incoming spectral representations; meromorphic continuation of the Heisenberg matrix; approximation by damping and cut-off of the potentials; and interpretation of the semigroup Z(t) in the framework of the membrane paradigm. They develop a new procedure for the computation of the resonances by spectral analysis of the transient scattered wave, based on Prony's algorithm.
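
    Prony's algorithm, mentioned above, extracts complex resonance frequencies from a sampled transient by a linear-prediction fit followed by polynomial rooting. A minimal classical version is sketched below; the black-hole application involves far more careful signal preparation than this toy example.

      import numpy as np

      def prony(x, p, dt):
          # fit x[n] ~ sum_k c_k exp(s_k * n * dt) and return the s_k
          # (real part: damping rate, imaginary part: ringing frequency)
          x, N = np.asarray(x, float), len(x)
          # linear-prediction step: x[n] = -(a_1 x[n-1] + ... + a_p x[n-p])
          A = np.column_stack([x[p - m:N - m] for m in range(1, p + 1)])
          a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
          z = np.roots(np.concatenate(([1.0], a)))  # signal poles z = e^(s*dt)
          return np.log(z) / dt

      # toy check: a single damped oscillation
      dt, n = 0.01, np.arange(400)
      sig = np.exp(-0.5 * n * dt) * np.cos(2 * np.pi * 3.0 * n * dt)
      print(prony(sig, 2, dt))    # ~ -0.5 +/- (2*pi*3)j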

  5. Civil and mechanical engineering applications of sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Komkov, V.

    1985-07-01

    In this largely tutorial presentation, the historical development of optimization theories is outlined as applied to mechanical and civil engineering design, and the development of modern sensitivity techniques during the last 20 years is traced. Some of the difficulties, and the progress made in overcoming them, are outlined. Some recently developed theoretical methods are stressed to indicate their importance to computer-aided design technology.

  6. Determination of neutron flux distribution in an Am-Be irradiator using the MCNP.

    PubMed

    Shtejer-Diaz, K; Zamboni, C B; Zahn, G S; Zevallos-Chávez, J Y

    2003-10-01

    A neutron irradiator has been assembled at IPEN facilities to perform qualitative-quantitative analysis of many materials using thermal and fast neutrons outside the nuclear reactor premises. To establish the prototype specifications, the neutron flux distribution and the absorbed dose rates were calculated using the MCNP computer code. These theoretical predictions then allow one to discuss the optimum irradiator design and its performance.

  7. Vibrational Spectral Studies of Gemfibrozil

    NASA Astrophysics Data System (ADS)

    Benitta, T. Asenath; Balendiran, G. K.; James, C.

    2008-11-01

    The Fourier transform Raman and infrared spectra of the crystallized drug molecule 5-(2,5-dimethylphenoxy)-2,2-dimethylpentanoic acid (Gemfibrozil) have been recorded and analyzed. Quantum chemical computational methods based on the Hartree-Fock method, employing the Gaussian 03 software package, have been used for the theoretical modeling of the molecule. The optimized geometry and vibrational frequencies have been predicted, and the observed vibrational modes have been assigned with the aid of normal coordinate analysis.

  8. Information physics fundamentals of nanophotonics.

    PubMed

    Naruse, Makoto; Tate, Naoya; Aono, Masashi; Ohtsu, Motoichi

    2013-05-01

    Nanophotonics has been extensively studied with the aim of unveiling and exploiting light-matter interactions that occur at a scale below the diffraction limit of light, and recent progress made in experimental technologies--both in nanomaterial fabrication and characterization--is driving further advancements in the field. From the viewpoint of information, on the other hand, novel architectures, design and analysis principles, and even novel computing paradigms should be considered so that we can fully benefit from the potential of nanophotonics. This paper examines the information physics aspects of nanophotonics. More specifically, we present some fundamental and emergent information properties that stem from optical excitation transfer mediated by optical near-field interactions and the hierarchical properties inherent in optical near-fields. We theoretically and experimentally investigate aspects such as unidirectional signal transfer, energy efficiency and networking effects, among others, and we present their basic theoretical formalisms and describe demonstrations of practical applications. A stochastic analysis of light-assisted material formation is also presented, where an information-based approach provides a deeper understanding of the phenomena involved, such as self-organization. Furthermore, the spatio-temporal dynamics of optical excitation transfer and its inherent stochastic attributes are utilized for solution searching, paving the way to a novel computing paradigm that exploits coherent and dissipative processes in nanophotonics.

  9. Experimental and theoretical oscillator strengths of Mg I for accurate abundance analysis

    NASA Astrophysics Data System (ADS)

    Pehlivan Rhodin, A.; Hartman, H.; Nilsson, H.; Jönsson, P.

    2017-02-01

    Context. With the aid of stellar abundance analysis, it is possible to study galactic formation and evolution. Magnesium is an important element for tracing the α-element evolution in our Galaxy. For chemical abundance analysis, such as magnesium abundance, accurate and complete atomic data are essential. Inaccurate atomic data lead to uncertain abundances and prevent discrimination between different evolution models. Aims: We study the spectrum of neutral magnesium from laboratory measurements and theoretical calculations. Our aim is to improve the oscillator strengths (f-values) of Mg I lines and to create a complete set of accurate atomic data, particularly for the near-IR region. Methods: We derived oscillator strengths by combining the experimental branching fractions with radiative lifetimes reported in the literature and computed in this work. A hollow cathode discharge lamp was used to produce free atoms in the plasma and a Fourier transform spectrometer recorded the intensity-calibrated high-resolution spectra. In addition, we performed theoretical calculations using the multiconfiguration Hartree-Fock program ATSP2K. Results: This project provides a set of experimental and theoretical oscillator strengths. We derived 34 experimental oscillator strengths. Except for the Mg I optical triplet lines (3p 3P°0,1,2-4s 3S1), these oscillator strengths are measured for the first time. The theoretical oscillator strengths are in very good agreement with the experimental data and complement the missing transitions of the experimental data up to n = 7 from even and odd parity terms. We present an evaluated set of oscillator strengths, gf, with uncertainties as small as 5%. The new oscillator strength values of the Mg I optical triplet lines (3p 3P°0,1,2-4s 3S1) are 0.08 dex larger than the previous measurements.
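
    The derivation chain described in the Methods, from branching fractions and radiative lifetimes to oscillator strengths, reduces to two standard relations: the line's transition rate A_ul = BF/τ_u, and the usual A-to-gf conversion. A minimal sketch with illustrative numbers:

      import numpy as np

      def log_gf(branching_fraction, lifetime_s, wavelength_A, g_upper):
          # transition rate of the line from BF and upper-level lifetime
          A_ul = branching_fraction / lifetime_s          # s^-1
          # standard conversion: g_l*f = 1.4992e-16 * lambda^2[A] * g_u * A_ul
          return np.log10(1.4992e-16 * wavelength_A**2 * g_upper * A_ul)

      # a dominant line (BF = 0.95) from a 10 ns level at 5000 A with g_u = 3
      print(log_gf(0.95, 10e-9, 5000.0, 3))   # log(gf) ~ 0.0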

  10. Molecular structure, conformational preferences and vibrational analysis of 2-hydroxystyrene: A computational and spectroscopic research

    NASA Astrophysics Data System (ADS)

    García, Gregorio; Navarro, Amparo; Granadino-Roldán, José Manuel; Garzón, Andrés; Ruiz, Tomás Peña; Fernández-Liencres, Maria Paz; Melguizo, Manuel; Peñas, Antonio; Pongor, Gábor; Eőri, János; Fernández-Gómez, Manuel

    2010-08-01

    The molecular structure of 2-hydroxystyrene has been investigated at the DFT (B3LYP, mPW1PW91) and MP2 levels with an assortment of Pople's and Dunning's basis sets within the isolated-molecule approximation. The presence of intramolecular hydrogen bonds has been theoretically characterized through a topological analysis of the electron density according to the Atoms-In-Molecules (AIM) theory. The conformational equilibrium has been pursued by means of an analysis of the hydroxyl-phenyl and vinyl-phenyl internal rotation barriers. This analysis also provided insight into the effects governing the torsion barriers and the preferred conformations. A twofold scheme has been used for this goal, i.e. total electronic energy changes and the natural bond orbital (NBO) scheme. The vibrational spectrum was recorded and then calculated at the DFT-B3LYP/6-31G∗ and cc-pVTZ levels. Two scaling methods, SQMFF and linear scaling, have been applied to the theoretical spectrum in order to analyse the experimental one. The results point out that at least three different conformers coexist at room temperature.

  11. LOX/LH2 vane pump for auxiliary propulsion systems

    NASA Technical Reports Server (NTRS)

    Hemminger, J. A.; Ulbricht, T. E.

    1985-01-01

    Positive displacement pumps offer potential efficiency advantages over centrifugal pumps for future low-thrust space missions. Low-flow-rate applications, such as space station auxiliary propulsion or dedicated low-thrust orbit transfer vehicles, are typical of missions where low flow and high head rise challenge centrifugal pumps. The positive displacement vane pump for pumping LOX and LH2 is investigated. This effort has included: (1) a testing program in which pump performance was investigated for differing pump clearances and differing pump materials while pumping LN2, LOX, and LH2; and (2) an analysis effort in which a comprehensive pump performance analysis computer code was developed and exercised. An overview of the theoretical framework of the performance analysis computer code is presented, along with a summary of the analysis results. Experimental results are presented for the pump operating in liquid nitrogen, including data on the effects of pump clearance, speed, and pressure rise on pump performance. Pump suction performance is also presented.

  12. Hydrodynamics Analysis and CFD Simulation of Portal Venous System by TIPS and LS.

    PubMed

    Wang, Meng; Zhou, Hongyu; Huang, Yaozhen; Gong, Piyun; Peng, Bing; Zhou, Shichun

    2015-06-01

    In cirrhotic patients, portal hypertension is often associated with hyperdynamic changes. Transjugular intrahepatic portosystemic shunt (TIPS) and laparoscopic splenectomy are both treatments for liver cirrhosis with portal hypertension. However, the two interventions have different effects on hemodynamics after the operation, and the probabilities of triggering portal vein thrombosis (PVT) differ. How the hemodynamics of the portal venous system evolve after the two operations remains unknown. Based on ultrasound data and established numerical methods, CFD techniques are applied to analyze the hemodynamic changes after TIPS and laparoscopic splenectomy. In this paper, we applied two 3-D flow models to the hemodynamic analysis of two patients who received a TIPS and a laparoscopic splenectomy, respectively, both therapies for treating diseases induced by portal hypertension. The computer simulations give a quantitative analysis of the interplay between hemodynamics and TIPS or splenectomy. In conclusion, the presented computational model can be used for the theoretical analysis of TIPS and laparoscopic splenectomy, and clinical decisions could be made based on the simulation results with properly personalized treatment.

  13. Efficient Computation of Difference Vibrational Spectra in Isothermal-Isobaric Ensemble.

    PubMed

    Joutsuka, Tatsuya; Morita, Akihiro

    2016-11-03

    Difference spectroscopy between two closely related systems is widely used to augment selectivity to the differing parts of the observed system, though the molecular dynamics calculation of tiny difference spectra by subtraction of two spectra would be computationally extraordinarily demanding. We have therefore proposed an efficient computational algorithm for difference spectra that does not resort to subtraction. The present paper reports our extension of the theoretical method to the isothermal-isobaric (NPT) ensemble. The present theory expands our applications of the analysis to include the pressure dependence of the spectra. We verified that the present theory yields accurate difference spectra in the NPT condition as well, with remarkable computational efficiency over straightforward subtraction by several orders of magnitude. This method is further applied to the vibrational spectra of liquid water under varying pressure and succeeds in reproducing the tiny spectral differences induced by pressure changes. The anomalous pressure dependence is elucidated in relation to other properties of liquid water.

  14. Computational Approaches to the Chemical Equilibrium Constant in Protein-ligand Binding.

    PubMed

    Montalvo-Acosta, Joel José; Cecchini, Marco

    2016-12-01

    The physiological role played by protein-ligand recognition has motivated the development of several computational approaches to the ligand binding affinity. Some of them, termed rigorous, have a strong theoretical foundation but involve too much computation to be generally useful. Some others alleviate the computational burden by introducing strong approximations and/or empirical calibrations, which also limit their general use. Most importantly, there is no straightforward correlation between the predictive power and the level of approximation introduced. Here, we present a general framework for the quantitative interpretation of protein-ligand binding based on statistical mechanics. Within this framework, we re-derive self-consistently the fundamental equations of some popular approaches to the binding constant and pinpoint the inherent approximations. Our analysis represents a first step towards the development of variants with optimum accuracy/efficiency ratio for each stage of the drug discovery pipeline.

  15. The development and application of CFD technology in mechanical engineering

    NASA Astrophysics Data System (ADS)

    Wei, Yufeng

    2017-12-01

    Computational Fluid Dynamics (CFD) is the analysis of physical phenomena involving fluid flow and heat conduction by computer-based numerical calculation and graphical display. The complexity of the physical problem that can be simulated and the precision of the numerical solution are directly related to the speed of the computer and hardware resources such as memory. With the continuous improvement of computer performance and CFD technology, CFD has been widely applied to the fields of water conservancy engineering, environmental engineering and industrial engineering. This paper summarizes the development of CFD, its theoretical basis and the governing equations of fluid mechanics, and introduces the various methods of numerical calculation and related developments in CFD technology. Finally, applications of CFD technology in mechanical engineering are summarized. It is hoped that this review will help researchers in the field of mechanical engineering.

  16. Channel Model Optimization with Reflection Residual Component for Indoor MIMO-VLC System

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Li, Tengfei; Liu, Huanlin; Li, Yichao

    2017-12-01

    A fast channel modeling method is studied to solve the problem of computing the reflection channel gain for multiple-input multiple-output visible light communication (MIMO-VLC) systems. To reduce the computational complexity, which grows with the number of reflections, no more than three reflections are taken into consideration in VLC. We model a higher-order reflection link as a composition of corresponding line-of-sight links and introduce a reflection residual component to characterize higher-order reflections (more than two reflections). We present computer simulation results for the point-to-point channel impulse response, received optical power, and received signal-to-noise ratio. Based on the theoretical analysis and simulation results, the proposed method can effectively reduce the computational complexity of higher-order reflections in channel modeling.
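
    The line-of-sight building block from which such reflection links are composed is the standard Lambertian DC channel gain. A minimal sketch, with optical filter and concentrator gains set to one and purely illustrative link geometry:

      import numpy as np

      def los_gain(d, phi, psi, area, half_angle, fov):
          # Lambertian LOS DC gain; optical filter/concentrator gains = 1
          if psi > fov:
              return 0.0                         # receiver outside its FOV
          m = -np.log(2) / np.log(np.cos(half_angle))   # Lambertian order
          return (m + 1) * area / (2 * np.pi * d**2) \
                 * np.cos(phi)**m * np.cos(psi)

      # LED 2 m away, 60 deg half-power angle, 1 cm^2 photodiode, 70 deg FOV
      print(los_gain(2.0, np.deg2rad(15), np.deg2rad(15),
                     1e-4, np.deg2rad(60), np.deg2rad(70)))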

  17. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  18. Postdoctoral Fellow | Center for Cancer Research

    Cancer.gov

    The Neuro-Oncology Branch (NOB), Center for Cancer Research (CCR), National Cancer Institute (NCI) of the National Institutes of Health (NIH) is seeking outstanding postdoctoral candidates interested in studying metabolic and cell signaling pathways in the context of brain cancers through construction of computational models amenable to formal computational analysis and simulation. The ability to closely collaborate with the modern metabolomics center developed at CCR provides a unique opportunity for a postdoctoral candidate with a strong theoretical background and interest in demonstrating the incredible potential of computational approaches to solve problems from scientific disciplines and improve lives. The candidate will be given the opportunity to both construct data-driven models, as well as biologically validate the models by demonstrating the ability to predict the effects of altering tumor metabolism in laboratory and clinical settings.

  19. Efficient forced vibration reanalysis method for rotating electric machines

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Suzuki, Hiromitsu; Kuroishi, Masakatsu; Nakai, Hideo

    2015-01-01

    Rotating electric machines are subject to forced vibration by magnetic force excitation with a wide-band frequency spectrum that depends on the operating conditions. Therefore, when designing electric machines, it is essential to compute the vibration response of the machines at various operating conditions efficiently and accurately. This paper presents an efficient frequency-domain vibration analysis method for electric machines. The method enables efficient re-analysis of the vibration response of electric machines at various operating conditions without the need to re-compute the harmonic response by finite element analyses. The theoretical background of the proposed method is provided; it is based on the modal reduction of the magnetic force excitation by a set of amplitude-modulated standing waves. The method is applied to the forced vibration response of an interior permanent magnet motor at a fixed operating condition. The results computed by the proposed method agree very well with those computed by conventional harmonic response analysis by FEA. The proposed method is then applied to a spin-up test condition to demonstrate its applicability to various operating conditions. It is observed that the proposed method can successfully be applied to spin-up test conditions, and the measured dominant frequency peaks in the frequency response are well captured by the proposed approach.

  20. Fundamental energy limits of SET-based Brownian NAND and half-adder circuits. Preliminary findings from a physical-information-theoretic methodology

    NASA Astrophysics Data System (ADS)

    Ercan, İlke; Suyabatmaz, Enes

    2018-06-01

    The saturation in the efficiency and performance scaling of conventional electronic technologies is bringing about the development of novel computational paradigms. Brownian circuits are among the promising alternatives that can exploit fluctuations to increase the efficiency of information processing in nanocomputing. A Brownian cellular automaton, where signals propagate randomly and are driven by local transition rules, can be made computationally universal by embedding arbitrary asynchronous circuits on it. One of the potential realizations of such circuits is via single-electron tunneling (SET) devices, since SET technology enables the simulation of noise and fluctuations in a fashion similar to Brownian search. In this paper, we perform a physical-information-theoretic analysis of the efficiency limitations of Brownian NAND and half-adder circuits implemented using SET technology. The method employed here establishes solid ground for studying the computational and physical features of this emerging technology on an equal footing, and yields fundamental lower bounds that provide valuable insights into how far its efficiency can be improved in principle. In order to provide a basis for comparison, we also analyze a NAND gate and half-adder circuit implemented in complementary metal oxide semiconductor technology to show how the fundamental bounds of the Brownian circuits compare against a conventional paradigm.

  1. Computing UV/vis spectra using a combined molecular dynamics and quantum chemistry approach: bis-triazin-pyridine (BTP) ligands studied in solution.

    PubMed

    Höfener, Sebastian; Trumm, Michael; Koke, Carsten; Heuser, Johannes; Ekström, Ulf; Skerencak-Frech, Andrej; Schimmelpfennig, Bernd; Panak, Petra J

    2016-03-21

    We report a combined computational and experimental study to investigate the UV/vis spectra of 2,6-bis(5,6-dialkyl-1,2,4-triazin-3-yl)pyridine (BTP) ligands in solution. In order to study molecules in solution using theoretical methods, force-field parameters for the ligand-water interaction are adjusted to ab initio quantum chemical calculations. Based on these parameters, molecular dynamics (MD) simulations are carried out, from which snapshots are extracted as input to quantum chemical excitation-energy calculations to obtain UV/vis spectra of BTP ligands in solution using time-dependent density functional theory (TDDFT) employing the Tamm-Dancoff approximation (TDA). The range-separated CAM-B3LYP functional is used to avoid large errors for the charge-transfer states occurring in the electronic spectra. In order to study environment effects with theoretical methods, the frozen-density embedding scheme is applied. This computational procedure yields electronic spectra calculated at the (range-separated) DFT level of theory in solution, revealing solvatochromic shifts upon solvation of up to about 0.6 eV. Comparison with experimental data shows significantly improved agreement relative to vacuum calculations and enables the analysis of the excitations relevant to the line shape in solution.
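
    A single step of the workflow described above, an excitation-energy calculation in the Tamm-Dancoff approximation with a range-separated functional, can be sketched with PySCF; the geometry below is a stand-in (water, not a BTP ligand), and the MD snapshot averaging and frozen-density embedding of the actual study are omitted.

      from pyscf import gto, dft

      # stand-in geometry; a BTP snapshot from MD would be treated the same
      mol = gto.M(atom="O 0 0 0; H 0 -0.757 0.587; H 0 0.757 0.587",
                  basis="def2-svp")

      mf = dft.RKS(mol)
      mf.xc = "camb3lyp"        # range-separated functional, as in the study
      mf.kernel()

      td = mf.TDA()             # Tamm-Dancoff approximation to TDDFT
      td.nstates = 5
      td.kernel()
      td.analyze()              # excitation energies, oscillator strengths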

  2. Design and Analyze a New Measuring Lift Device for Fin Stabilizers Using Stiffness Matrix of Euler-Bernoulli Beam

    PubMed Central

    Liang, Lihua; Sun, Mingxiao; Shi, Hongyu; Luan, Tiantian

    2017-01-01

    Fin-angle feedback control is usually used in conventional fin stabilizers, but the actual anti-rolling effect struggles to reach the theoretical design requirements. The primary reason is that the lift used to form the control torque is a theoretical value calculated from the static hydrodynamic characteristics of the fin, whereas the hydrodynamic characteristics are dynamic while the fin moves in waves. As a result, there is a large deviation between the actual and theoretical lift values. Firstly, the reasons for this deviation are analyzed theoretically, in a way that avoids a variety of interference factors and complex theoretical derivations. Secondly, a new device composed of a fin-shaft combined mechanism and sensors is designed for the direct measurement of the actual lift. The device allows the fin shaft not only to perform its basic function of rotating the fin but also to sense the actual lift. Through an analysis using the stiffness matrix of an Euler-Bernoulli beam, the displacement of the shaft-core end is measured in place of the lift, which is difficult to measure directly, and a quantitative relationship between lift and displacement is defined. Three main factors are analyzed with this quantitative relationship. Moreover, two installation modes for the sensors and a removable shaft-end cover are proposed according to the hydrodynamic characteristics of the fin, so the new device facilitates maintenance and measurement. Lastly, the effectiveness and accuracy of the device are verified by comparing calculation and simulation on the basis of the actual design parameters, and the new lift-measuring method is shown to be effective through experiments. Because the new device is derived from conventional fin stabilizers, the reliability of the original equipment is inherited, and the alteration to the fin stabilizers is minor, which makes the device suitable for engineering application. In addition, the flexural properties of the fin shaft are digitized through the stiffness-matrix analysis, which provides theoretical support for engineering application via computer-based finite element analysis. PMID:28046122
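
    The Euler-Bernoulli stiffness-matrix argument in the abstract can be made concrete: clamping the root of a single beam element and solving for the tip degrees of freedom recovers the cantilever compliance L^3/(3EI), which is what maps a measured shaft-end displacement back to a lift force. The sketch below uses hypothetical shaft dimensions; the actual device geometry and boundary conditions differ.

      import numpy as np

      def beam_element_stiffness(E, I, L):
          # 4x4 Euler-Bernoulli element, DOF order [w1, th1, w2, th2]
          k = E * I / L**3
          return k * np.array([[ 12.0,    6*L, -12.0,    6*L],
                               [  6*L, 4*L**2,  -6*L, 2*L**2],
                               [-12.0,   -6*L,  12.0,   -6*L],
                               [  6*L, 2*L**2,  -6*L, 4*L**2]])

      # hypothetical shaft core: steel, 40 mm solid circular section, 1.2 m
      E, diam, L = 210e9, 0.04, 1.2
      I = np.pi * diam**4 / 64

      # clamp the root end (w1 = th1 = 0) and apply a unit tip force
      Kr = beam_element_stiffness(E, I, L)[2:, 2:]
      w2, th2 = np.linalg.solve(Kr, np.array([1.0, 0.0]))
      # w2 equals L^3/(3EI): the compliance that maps displacement to lift
      print(0.5e-3 / w2)        # lift (N) for a measured 0.5 mm deflection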

  3. X-ray structure determination, Hirshfeld surface analysis, spectroscopic (FT-IR, NMR, UV-Vis, fluorescence), non-linear optical properties, Fukui function and chemical activity of 4′-(2,4-dimethoxyphenyl)-2,2′:6′,2″-terpyridine

    NASA Astrophysics Data System (ADS)

    Demircioğlu, Zeynep; Yeşil, Ahmet Emin; Altun, Mehmet; Bal-Demirci, Tülay; Özdemir, Namık

    2018-06-01

    The compound 4′-(2,4-dimethoxyphenyl)-2,2′:6′,2″-terpyridine (Mtpyr) was synthesized and investigated using X-ray single crystal structure determination, combined with Hirshfeld topology analysis of the molecular packing. In addition, Mtpyr was characterized by experimental and theoretical FT-IR, UV-Vis, 1H NMR, 13C NMR and fluorescence emission spectra. The optimized molecular geometry (bond lengths, bond angles, torsion angles), the complete set of vibrational frequencies and all other theoretical computations were calculated using the density functional theory (DFT) B3LYP method with the 6-311++G(d,p) basis set. From the recorded UV-Vis spectrum, electronic properties such as excitation energies, wavelengths and oscillator strengths were evaluated by TD-DFT in chloroform solution. The 1H and 13C nuclear magnetic resonance (NMR) chemical shifts of the molecule were calculated by the gauge-independent atomic orbital (GIAO) method and compared with experimental results. The calculated HOMO-LUMO band gap energies confirmed the charge transfer and chemical stability within the molecule. The hyperconjugative interaction energy E(2) and the electron densities of donor (i) and acceptor (j) bonds were calculated using natural bond orbital (NBO) analysis. In addition, Mulliken and natural population (NPA) charges, non-linear optical (NLO) properties, Fukui function analysis and the molecular electrostatic potential (MEP) were computed, which helps to identify the electrophilic/nucleophilic nature of the molecule.

  4. Visualization and Interaction in Research, Teaching, and Scientific Communication

    NASA Astrophysics Data System (ADS)

    Ammon, C. J.

    2017-12-01

    Modern computing provides many tools for exploring observations, numerical calculations, and theoretical relationships. The number of options is, in fact, almost overwhelming. But the choices give those with modest programming skills opportunities to create unique views of scientific information and to develop deeper insights into their data, their computations, and the underlying theoretical data-model relationships. I present simple examples of using animation and human-computer interaction to explore scientific data and scientific-analysis approaches, and I illustrate how a little programming ability can free scientists from the constraints of existing tools and facilitate a deeper appreciation of data and models. I present examples from a suite of programming languages ranging from C to JavaScript, including the Wolfram Language. JavaScript is valuable for sharing tools and insight (hopefully) with others because it is integrated into one of the most powerful communication tools in human history, the web browser. Although too much of that power is often spent on distracting advertisements, the underlying computation and graphics engines are efficient, flexible, and almost universally available in desktop and mobile computing platforms. Many are working to fulfill the browser's potential to become the most effective tool for interactive study. Open-source frameworks for visualizing everything from algorithms to data are available, but they advance rapidly. One strategy for dealing with swiftly changing tools is to adopt common, open data formats that are easily adapted (often by framework or tool developers). I illustrate the use of animation and interaction in research and teaching with examples from earthquake seismology.

  5. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    PubMed

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks, which is induced by the rapid growth of the Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and limited communication resources of wireless access points (APs), the computation offloading decisions of IoT sensors interact with each other. To optimize the processing delay and energy consumption of computation tasks, a theoretical analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game and the condition for Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm can scale well with increasing numbers of IoT sensors.
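
    A toy Python sketch of finite-improvement (best-response) dynamics in a congestion-style offloading game, illustrating the convergence property that potential games provide; the cost model, parameters, and function names here are invented for illustration and are not the paper's formulation.

```python
import random

N = 8                    # number of IoT sensors (illustrative)
LOCAL_COST = 5.0         # cost of processing a task locally
BASE, SLOPE = 1.0, 1.0   # cloudlet delay grows with congestion

def cost(choice, n_offloading):
    # n_offloading counts sensors currently offloading, including self if so.
    return LOCAL_COST if choice == 0 else BASE + SLOPE * n_offloading

def best_response_dynamics(choices):
    """Iterate unilateral improvements; a potential game guarantees this
    terminates at a Nash equilibrium (finite improvement property)."""
    improved = True
    while improved:
        improved = False
        for i in range(N):
            n_off = sum(choices)
            alt = 1 - choices[i]               # the other strategy
            n_alt = n_off - choices[i] + alt   # offloader count if i switches
            if cost(alt, n_alt) < cost(choices[i], n_off):
                choices[i] = alt
                improved = True
    return choices

random.seed(0)
eq = best_response_dynamics([random.randint(0, 1) for _ in range(N)])
print("equilibrium choices:", eq)  # 0 = process locally, 1 = offload
```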

  6. Anisotropic resonator analysis using the Fourier-Bessel mode solver

    NASA Astrophysics Data System (ADS)

    Gauthier, Robert C.

    2018-03-01

    A numerical mode solver for optical structures that conform to cylindrical symmetry, using Faraday's and Ampère's laws as starting expressions, is developed for cases where electric or magnetic anisotropy is present. The technique builds on the existing Fourier-Bessel mode solver, which allows resonator states to be computed by exploiting the symmetry properties of the resonator and states to reduce the matrix system. The introduction of anisotropy into the theoretical framework facilitates the inclusion of PML borders, permitting the computation of open-ended structures and a better estimation of the resonator state quality factor. Matrix populating expressions are provided that can accommodate any material anisotropy with arbitrary orientation in the computation domain. Several examples of electrically anisotropic computations are provided for rotationally symmetric structures such as standard optical fibers, axial Bragg-ring fibers and bottle resonators. The anisotropy present in the materials introduces off-diagonal matrix elements in the permittivity tensor when expressed in cylindrical coordinates. The effects of the anisotropy on the computed states are presented and discussed.

  7. Bragg-cell receiver study

    NASA Technical Reports Server (NTRS)

    Wilson, Lonnie A.

    1987-01-01

    Bragg-cell receivers are employed in specialized Electronic Warfare (EW) applications for the measurement of frequency. Bragg-cell receiver characteristics are fully characterized for simple RF emitter signals. This receiver is early in its development cycle compared to the IFM receiver. Functional mathematical models of the Bragg-cell receiver are derived and presented in this report. Theoretical analysis and digital computer signal processing results are presented for the Bragg-cell receiver. Probability density function analyses are performed for the output frequency, and the distributions are observed to depart from the assumed distributions for wideband and complex RF signals. This analysis is significant for high-resolution, fine-grain EW Bragg-cell receiver systems.

  8. Portraits of self-organization in fish schools interacting with robots

    NASA Astrophysics Data System (ADS)

    Aureli, M.; Fiorilli, F.; Porfiri, M.

    2012-05-01

    In this paper, we propose an enabling computational and theoretical framework for the analysis of experimental instances of collective behavior in response to external stimuli. In particular, this work addresses the characterization of aggregation and interaction phenomena in robot-animal groups through the exemplary analysis of fish schooling in the vicinity of a biomimetic robot. We adapt global observables from statistical mechanics to capture the main features of the shoal collective motion and its response to the robot from experimental observations. We investigate the shoal behavior by using a diffusion mapping analysis performed on these global observables that also informs the definition of relevant portraits of self-organization.
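
    A minimal numpy sketch of a basic diffusion-map computation of the kind mentioned above, applied to a synthetic time series of "global observables"; the kernel bandwidth and data are illustrative, and the paper's actual pipeline is more elaborate.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_coords=2):
    """Basic diffusion map: Gaussian kernel, row-normalized Markov matrix,
    leading nontrivial eigenvectors as embedding coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)                 # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                       # sort by eigenvalue
    # Skip the trivial constant eigenvector (eigenvalue 1).
    idx = order[1:n_coords + 1]
    return vecs[:, idx].real * vals[idx].real

# Toy "global observables" time series: 200 samples of 3 observables.
rng = np.random.default_rng(1)
obs = rng.normal(size=(200, 3)).cumsum(axis=0)
coords = diffusion_map(obs, eps=5.0)
```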

  9. Theoretical study of the composition pulling effect in InGaN metalorganic vapor-phase epitaxy growth

    NASA Astrophysics Data System (ADS)

    Inatomi, Yuya; Kangawa, Yoshihiro; Ito, Tomonori; Suski, Tadeusz; Kumagai, Yoshinao; Kakimoto, Koichi; Koukitu, Akinori

    2017-07-01

    The composition pulling effect in metalorganic vapor-phase InGaN epitaxy was theoretically investigated by thermodynamic analysis. The excess energies of biaxial-strained InxGa1-xN were numerically calculated using empirical interatomic potentials considering different situations: (i) coherent growth on GaN(0001), (ii) coherent growth on In0.2Ga0.8N(0001), and (iii) bulk growth. Using the excess energies, the excess chemical potentials of InN and GaN alloys were computed. Our results show that compressive strain suppresses In incorporation, whereas tensile strain promotes it. Moreover, assuming chemical equilibrium, the relationship between the solid composition and the growth conditions was predicted. The results successfully reproduced the typical composition pulling effect.

  10. Analysis of the tunable asymmetric fiber F-P cavity for fiber strain sensor edge-filter demodulation

    NASA Astrophysics Data System (ADS)

    Chen, Haotao; Liang, Youcheng

    2014-12-01

    An asymmetric fiber Fabry-Pérot (F-P) interferometric cavity with good linearity and a wide dynamic range was designed on the basis of the optical thin-film characteristic matrix theory; by adjusting the materials of the two different thin metallic layers, the asymmetric fiber F-P interferometric cavity was fabricated by depositing multi-layer thin films on the optical fiber's end face. The asymmetric F-P cavity has extensive potential applications. In this paper, a demodulation method for the wavelength shift of a fiber Bragg grating (FBG) sensor based on the F-P cavity is demonstrated, and a theoretical formula is obtained. The experimental results coincide well with the computational results obtained from the theoretical model.
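
    For reference, the textbook Airy transmission of an ideal lossless Fabry-Pérot cavity, the wavelength dependence that edge-filter demodulation exploits; the paper's asymmetric metal-film cavity generalizes this with two different mirror reflectances, so the relation below is illustrative rather than the authors' derived expression.

```latex
% Airy transmission of an ideal symmetric lossless F-P cavity.
\[
  T(\delta) = \frac{1}{1 + F \sin^{2}(\delta/2)},
  \qquad
  F = \frac{4R}{(1-R)^{2}},
  \qquad
  \delta = \frac{4\pi n L}{\lambda},
\]
% R: mirror reflectance, n: cavity refractive index, L: cavity length,
% lambda: wavelength. The slope of T versus lambda on the fringe edge is
% what converts an FBG wavelength shift into an intensity change.
```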

  11. Improving the theoretical prediction for the Bs - B̅s width difference: matrix elements of next-to-leading order ΔB = 2 operators

    NASA Astrophysics Data System (ADS)

    Davies, Christine; Harrison, Judd; Lepage, G. Peter; Monahan, Christopher; Shigemitsu, Junko; Wingate, Matthew

    2018-03-01

    We present lattice QCD results for the matrix elements of R2 and other dimension-7, ΔB = 2 operators relevant for calculations of ΔΓs, the Bs - B̅s width difference. We have computed correlation functions using 5 ensembles of the MILC Collaboration's 2+1+1-flavour gauge field configurations, spanning 3 lattice spacings and light sea-quark masses down to the physical point. The HISQ action is used for the valence strange quarks, and the NRQCD action is used for the bottom quarks. Once our analysis is complete, the theoretical uncertainty in the Standard Model prediction for ΔΓs will be substantially reduced.

  12. Evol and ProDy for bridging protein sequence evolution and structural dynamics.

    PubMed

    Bakan, Ahmet; Dutta, Anindita; Mao, Wenzhi; Liu, Ying; Chennubhotla, Chakra; Lezon, Timothy R; Bahar, Ivet

    2014-09-15

    Correlations between sequence evolution and structural dynamics are of utmost importance in understanding the molecular mechanisms of function and their evolution. We have integrated Evol, a new package for fast and efficient comparative analysis of evolutionary patterns and conformational dynamics, into ProDy, a computational toolbox designed for inferring protein dynamics from experimental and theoretical data. Using information-theoretic approaches, Evol coanalyzes conservation and coevolution profiles extracted from multiple sequence alignments of protein families with their inferred dynamics. ProDy and Evol are open-source and freely available under MIT License from http://prody.csb.pitt.edu/.
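
    A minimal usage sketch of the ProDy side of such an analysis (structure parsing plus an anisotropic network model for inferred dynamics); it assumes ProDy is installed and network access to fetch the PDB entry, and the PDB ID and mode count are illustrative.

```python
# pip install prody
from prody import parsePDB, ANM

atoms = parsePDB('1ubi')             # fetch and parse a structure
calphas = atoms.select('calpha')     # coarse-grain to C-alpha atoms
anm = ANM('ubiquitin')               # anisotropic network model
anm.buildHessian(calphas)            # build the Hessian from contacts
anm.calcModes(n_modes=10)            # compute the 10 slowest normal modes
print(anm.getEigvals())              # mode eigenvalues (stiffnesses)
```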

  13. Computer-aided molecular modeling techniques for predicting the stability of drug cyclodextrin inclusion complexes in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Faucci, Maria Teresa; Melani, Fabrizio; Mura, Paola

    2002-06-01

    Molecular modeling was used to investigate factors influencing complex formation between cyclodextrins and guest molecules and to predict complex stability through a theoretical model based on the search for a correlation between experimental stability constants (Ks) and some theoretical parameters describing complexation (docking energy, host-guest contact surfaces, intermolecular interaction fields) calculated from complex structures at a minimum conformational energy, obtained through stochastic methods based on molecular dynamics simulations. Naproxen, ibuprofen, ketoprofen and ibuproxam were used as model drug molecules. Multiple regression analysis allowed identification of the factors significant for complex stability. A mathematical model (r = 0.897) related log Ks to the complex docking energy and the lipophilic molecular fields of cyclodextrin and drug.
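
    A small numpy sketch of the kind of multiple regression described above, fitting log Ks against two descriptors; the data are synthetic and the descriptor names are placeholders standing in for the paper's docking-energy and lipophilic-field parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
docking_energy = rng.uniform(-40, -10, n)     # hypothetical descriptor 1
lipophilic_field = rng.uniform(0, 5, n)       # hypothetical descriptor 2
# Synthetic "experimental" response with noise.
log_Ks = 0.05 * (-docking_energy) + 0.3 * lipophilic_field + rng.normal(0, 0.1, n)

# Ordinary least-squares fit with an intercept column.
X = np.column_stack([np.ones(n), docking_energy, lipophilic_field])
coef, *_ = np.linalg.lstsq(X, log_Ks, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, log_Ks)[0, 1]           # multiple correlation coefficient
print("coefficients:", coef, "r =", r)
```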

  14. Theoretical studies of system performance and adaptive optics design parameters

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1990-08-01

    The ultimate performance of an adaptive optics (AO) system can be sensitive to the specific design parameters of individual components. The type and configuration of the wavefront sensor or the shape of individual deformable mirror actuator influence functions can have a profound effect on the correctability of the AO system. This paper discusses the results of a theoretical study which employed both closed-form analytic solutions and computer models. A parametric analysis of wavefront sensor characteristics, noise, and subaperture geometry is performed against the system response to an aberrated wave characteristic of atmospheric turbulence. Similarly, the shape and extent of the deformable mirror influence function and the placement and number of actuators are evaluated to characterize the effects of fitting error and coupling.

  15. Theoretical calculation of polarizability isotope effects.

    PubMed

    Moncada, Félix; Flores-Moreno, Roberto; Reyes, Andrés

    2017-03-01

    We propose a scheme to estimate hydrogen isotope effects on molecular polarizabilities. This approach combines the any-particle molecular orbital method, in which both electrons and H/D nuclei are described as quantum waves, with auxiliary density perturbation theory to calculate the polarizability tensor analytically. We assess the performance of the method by calculating the polarizability isotope effect for 20 molecules. A good correlation between theoretical and experimental data is found. Further analysis of the results reveals that the change in the polarizability of an X-H bond upon deuteration decreases as the electronegativity of X increases. Our investigation also reveals that the molecular polarizability isotope effect presents an additive character; therefore, it can be computed by counting the number of deuterated bonds in the molecule.
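
    The reported additive character can be written compactly as follows; this is a paraphrase of the stated finding, with the per-bond increments being the quantities such calculations supply.

```latex
% Additivity of the polarizability isotope effect: a bond count times a
% per-bond increment, summed over bond types X-H.
\[
  \Delta\alpha_{\mathrm{H\to D}} \;\approx\; \sum_{X} n_{X\text{-}D}\,
  \Delta\alpha_{X\text{-}H},
\]
% n_{X-D}: number of deuterated X-H bonds of type X;
% Delta alpha_{X-H}: polarizability change per deuterated bond.
```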

  16. Mathematical Analysis of an Epidemic System in the Presence of Optimal Control and Population Dispersal

    NASA Astrophysics Data System (ADS)

    Nandi, Swapan Kumar; Jana, Soovoojeet; Mandal, Manotosh; Kar, T. K.

    In this paper, we propose and analyze a susceptible-infected-recovered (SIR) type epidemic model to investigate the effect of transport-related infectious diseases, namely tuberculosis, measles, rubella, influenza, sexually transmitted diseases, etc. The existence and stability criteria of both the disease-free equilibrium point and the endemic equilibrium point are established, and the threshold parametric condition under which the system passes through a transcritical bifurcation is obtained. An optimal control strategy for the control parameters is formulated and solved both theoretically and numerically. Lastly, we illustrate our theoretical results graphically and use computer simulation to show that our model would be a good model to study the 2003 SARS epidemic.
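
    A minimal scipy sketch of the underlying SIR dynamics; the rates and initial conditions are illustrative, and the paper's model additionally includes population dispersal and optimal control terms.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.1          # transmission and recovery rates (illustrative)

def sir(t, y):
    s, i, r = y
    return [-beta * s * i,       # susceptibles infected
            beta * s * i - gamma * i,   # infected recover
            gamma * i]           # recovered accumulate

sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0])
R0 = beta / gamma                # basic reproduction number; outbreak iff R0 > 1
print("R0 =", R0, "final susceptible fraction:", sol.y[0, -1])
```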

  17. FT-IR, FT-Raman, and DFT computational studies of melaminium nitrate molecular-ionic crystal

    NASA Astrophysics Data System (ADS)

    Tanak, Hasan; Marchewka, Mariusz K.

    2013-02-01

    The experimental and theoretical vibrational spectra of melaminium nitrate were studied. The Raman and infrared (FT-IR) spectra of the melaminium nitrate and its deuterated analogue were recorded in the solid phase. Molecular geometry and vibrational frequency values of melaminium nitrate in the electronic ground state were calculated using the density functional method (B3LYP) with the 6-31++G(d,p) basis set. The calculated results show that the optimized geometry can well reproduce the crystal structure, and the theoretical vibrational frequency values show good agreement with experimental values. The NBO analysis reveals that the N-H···O and N-H···N intermolecular interactions significantly influence crystal packing in this molecule.

  18. Computational control of flexible aerospace systems

    NASA Technical Reports Server (NTRS)

    Sharpe, Lonnie, Jr.; Shen, Ji Yao

    1994-01-01

    The main objective of this project is to establish a distributed parameter modeling technique for structural analysis, parameter estimation, vibration suppression and control synthesis of large flexible aerospace structures. This report concentrates on the research outputs produced in the last two years. The main accomplishments can be summarized as follows. A new version of the PDEMOD code has been completed based on several incomplete versions. Verification of the code has been conducted by comparing the results with examples for which exact theoretical solutions can be obtained. The theoretical background of the package and the verification examples have been reported in a technical paper submitted to the Joint Applied Mechanics & Materials Conference, ASME. A brief USER'S MANUAL has been compiled, which includes three parts: (1) input data preparation; (2) explanation of the subroutines; and (3) specification of control variables. Meanwhile, a theoretical investigation of the NASA MSFC two-dimensional ground-based manipulator facility using the distributed parameter modeling technique has been conducted. A new mathematical treatment for dynamic analysis and control of large flexible manipulator systems has been conceived, which may provide an embryonic form of a more sophisticated mathematical model for future modified versions of the PDEMOD codes.

  19. Two dimensional kinetic analysis of electrostatic harmonic plasma waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca-Pongutá, E. C.; Ziebell, L. F.; Gaelzer, R.

    2016-06-15

    Electrostatic harmonic Langmuir waves are virtual modes excited in weakly turbulent plasmas, first observed in early laboratory beam-plasma experiments as well as in rocket-borne active experiments in space. However, their unequivocal presence was confirmed through computer-simulated experiments and subsequently explained theoretically. The peculiarity of harmonic Langmuir waves is that while their existence requires a nonlinear response, their excitation mechanism and subsequent early time evolution are governed by an essentially linear process. One of the unresolved theoretical issues regards the role of the nonlinear wave-particle interaction process over longer evolution time periods. Another outstanding issue is that existing theories for these modes are limited to one-dimensional space. The present paper carries out a two-dimensional theoretical analysis of fundamental and (first) harmonic Langmuir waves for the first time. The result shows that the harmonic Langmuir wave is essentially governed by a (quasi)linear process and that the nonlinear wave-particle interaction plays no significant role in the time evolution of the wave spectrum. The numerical solutions of the two-dimensional wave spectra for fundamental and harmonic Langmuir waves are also found to be consistent with those obtained by the direct particle-in-cell simulation method reported in the literature.

  20. PREFACE: 2nd International Workshop on Theoretical and Computational Physics (IWTCP-2): Modern Methods and Latest Results in Particle Physics, Nuclear Physics and Astrophysics and the 39th National Conference on Theoretical Physics (NCTP-39)

    NASA Astrophysics Data System (ADS)

    Hoang, Trinh Xuan; Ky, Nguyen Anh; Lan, Nguyen Tri; Viet, Nguyen Ai

    2015-06-01

    This volume contains selected papers presented at the 2nd International Workshop on Theoretical and Computational Physics (IWTCP-2): Modern Methods and Latest Results in Particle Physics, Nuclear Physics and Astrophysics and the 39th National Conference on Theoretical Physics (NCTP-39). Both the workshop and the conference were held from 28th - 31st July 2014 in Dakruco Hotel, Buon Ma Thuot, Dak Lak, Vietnam. The NCTP-39 and the IWTCP-2 were organized under the support of the Vietnamese Theoretical Physics Society, with a motivation to foster scientific exchanges between theoretical and computational physicists in Vietnam and worldwide, as well as to promote a high standard of research and education activities for young physicists in the country. The IWTCP-2 was also an External Activity of the Asia Pacific Center for Theoretical Physics (APCTP). About 100 participants from nine countries took part in the workshop and the conference. At the IWTCP-2 workshop, we had 16 invited talks presented by international experts, together with eight oral and ten poster contributions. At the NCTP-39, three invited talks, 15 oral contributions and 39 posters were presented. We would like to thank all invited speakers, participants and sponsors for making the workshop and the conference successful. Trinh Xuan Hoang, Nguyen Anh Ky, Nguyen Tri Lan and Nguyen Ai Viet

  1. Compact Information Representations

    DTIC Science & Technology

    2016-08-02

    applied computer science, and applied math. Within the scope of this proposal, the focus is preliminarily on the fundamental, theoretical research which lies in... Tung-Lung Wu, now Assistant Professor, Dept. of Math and Stat, Mississippi State Univ... In this section, we list the papers...

  2. Assessing the Effectiveness of Two Theoretically Motivated Computer-Assisted Reading Interventions in the United Kingdom: GG Rime and GG Phoneme

    ERIC Educational Resources Information Center

    Kyle, Fiona; Kujala, Janne; Richardson, Ulla; Lyytinen, Heikki; Goswami, Usha

    2013-01-01

    We report an empirical comparison of the effectiveness of two theoretically motivated computer-assisted reading interventions (CARI) based on the Finnish GraphoGame CARI: English GraphoGame Rime (GG Rime) and English GraphoGame Phoneme (GG Phoneme). Participants were 6-7-year-old students who had been identified by their teachers as being…

  3. Computing Game-Theoretic Solutions for Security in the Medium Term

    DTIC Science & Technology

    This project concerns the design of algorithms for computing game-theoretic solutions. (Game theory concerns how to act in a strategically optimal... way in environments with other agents who also seek to act optimally but have different, and possibly opposite, interests.) Such algorithms have... recently found application in a number of real-world security applications, including among others airport security, scheduling Federal Air Marshals, and...

  4. Linear Transforms for Fourier Data on the Sphere: Application to High Angular Resolution Diffusion MRI of the Brain

    PubMed Central

    Haldar, Justin P.; Leahy, Richard M.

    2013-01-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. PMID:23353603
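
    For context, the standard spherical-harmonic form of the Funk-Radon transform, which is what makes this family of transforms cheap to apply: in an SH basis the FRT acts diagonally, scaling each coefficient by a Legendre polynomial value (a well-known result; FRACT generalizes the per-degree scaling factors).

```latex
% FRT in a spherical-harmonic basis: a per-coefficient multiplication.
\[
  \mathcal{G}[f]_{lm} \;=\; 2\pi\, P_{l}(0)\, f_{lm},
  \qquad
  P_{l}(0) =
  \begin{cases}
    0, & l \text{ odd},\\[2pt]
    (-1)^{l/2}\,\dfrac{1\cdot 3\cdots (l-1)}{2\cdot 4\cdots l}, & l \text{ even},
  \end{cases}
\]
% so once the measured data are expressed in spherical harmonics, applying
% the FRT (or a FRACT-style generalization) costs one multiply per coefficient.
```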

  5. Interplay between theory and experiment: computational organometallic and transition metal chemistry.

    PubMed

    Lin, Zhenyang

    2010-05-18

    Computational and theoretical chemistry provide fundamental insights into the structures, properties, and reactivities of molecules. As a result, theoretical calculations have become indispensable in various fields of chemical research and development. In this Account, we present our research in the area of computational transition metal chemistry, using examples to illustrate how theory impacts our understanding of experimental results and how close collaboration between theoreticians and experimental chemists can be mutually beneficial. We begin by examining the use of computational chemistry to elucidate the details of some unusual chemical bonds. We consider the three-center, two-electron bonding in titanocene sigma-borane complexes and the five-center, four-electron bonding in a rhodium-bismuth complex. The bonding in metallabenzene complexes is also examined. In each case, theoretical calculations provide particular insight into the electronic structure of the chemical bonds. We then give an example of how theoretical calculations aided the structural determination of a kappa(2)-N,N chelate ruthenium complex formed upon heating an intermediate benzonitrile-coordinated complex. An initial X-ray diffraction structure proposed on the basis of a reasonable mechanism appeared to fit well, with an apparently acceptable R value of 0.0478. But when DFT calculations were applied, the optimized geometry differed significantly from the experimental data. By combining experimental and theoretical outlooks, we posited a new structure. Remarkably, a re-refining of the X-ray diffraction data based on the new structure resulted in a slightly lower R value of 0.0453. We further examine the use of computational chemistry in providing new insight into C-H bond activation mechanisms and in understanding the reactivity properties of nucleophilic boryl ligands, addressing experimental difficulties with calculations and vice versa. Finally, we consider the impact of theoretical insights in three very specific experimental studies of chemical reactions, illustrating how theoretical results prompt further experimental studies: (i) diboration of aldehydes catalyzed by copper(I) boryl complexes, (ii) ruthenium-catalyzed C-H amination of arylazides, and (iii) zinc reduction of a vinylcarbyne complex. The concepts and examples presented here are intended for nonspecialists, particularly experimentalists. Together, they illustrate some of the achievements that are possible with a fruitful union of experiment and theory.

  6. Models, Data, and War: a Critique of the Foundation for Defense Analyses.

    DTIC Science & Technology

    1980-03-12

    scientific formulation; An "objective" solution; Analysis of a squishy problem; A judgmental formulation; A potential for distortion; A subjective... inextricably tied to those judgments. Different analysts, with apparently identical knowledge of a real-world problem, may develop plausible formulations... configured is a concrete theoretical statement." The formulation of a computer model--conceiving a mathematical representation of the real world

  7. Probing Cosmic Infrared Sources: A Computer Modeling Approach

    DTIC Science & Technology

    1992-06-01

    developed to study various physical phenomena involving dust grains, e.g., molecule formation on grains, grain formation in expanding circumstellar... EVALUATION OF METHODS OF ANALYSIS IN INFRARED ASTRONOMY; 4.0 THEORETICAL STUDIES INVOLVING DUST GRAINS; 4.1 Theory of Molecule Formation on Dust Grains; 4.2 Modeling Grain Formation in Stellar Outflows; 4.3 Infrared Emission from Fractal Grains; 4.4 Photochemistry in Circumstellar Envelopes

  8. Global Methods for Image Motion Analysis

    DTIC Science & Technology

    1992-10-01

    a variant of the same error function as in Adiv [2]. Another related approach was presented by Maybank [46,45]. Nearly all researchers in motion... with an application to stereo vision. In Proc. 7th Intern. Joint Conference on AI, pages 674-679, Vancouver, 1981. [45] S. J. Maybank. Algorithm for analysing optical flow based on the least-squares method. Image and Vision Computing, 4:38-42, 1986. [46] S. J. Maybank. A Theoretical Study of Optical...

  9. Recent Naval Postgraduate School Publications.

    DTIC Science & Technology

    1982-04-01

    477 p. Haney, R L; et al.; eds. Ocean models for climate research: A workshop sponsored by the U.S. Committee for the Global Atmos. Res. Program. Nat... climate variability. Oceanus, vol. 21, no. 4, p. 33-39, (1978). Williams, R T. A review of theoretical models of atmospheric frontogenesis. Chapman Conf... structure in large-scale optimization models. Symp. on Computer-Assisted Analysis and Model Simplification, Boulder, Colo., Mar. 24, 1980. Brown, G G

  10. Mechanistic analysis of intramolecular free radical reactions toward synthesis of 7-azabicyclo[2.2.1]heptane derivatives.

    PubMed

    Soriano, Elena; Marco-Contelles, José

    2009-06-05

    The mechanisms for the formation of conformationally constrained epibatidine analogues by intramolecular free radical processes have been computationally addressed by means of DFT methods. The mechanism and the critical effect of the 7-nitrogen protecting group on the outcome of these radical-mediated cyclizations are discussed. Theoretical findings account for unexpected experimental results and can assist in the selection of proper precursors for a successful cyclization.

  11. Experimental and Numerical Analysis of Axially Compressed Circular Cylindrical Fiber-Reinforced Panels with Various Boundary Conditions.

    DTIC Science & Technology

    1981-10-01

    Numerical predictions used in the comparisons were obtained from the energy-based, finite-difference computer program CLAPP. Test specimens were clamped... edges. V. LONGITUDINAL STIFFENERS: 1. Introduction; 2. Stiffener Strain Energy; 3. Stiffener Energy in Matrix Form; 4. Displacement Continuity... that theoretical bifurcation loads predicted by the energy method represent upper bounds to the classical bifurcation loads associated with the test

  12. Closed-loop fiber optic gyroscope with homodyne detection

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Qin, BingKun; Chen, Shufen

    1996-09-01

    The interferometric fiber optic gyroscope (IFOG) is analyzed with automatic control theory in this paper. An open-loop IFOG system is not able to restrain the bias drift, but a closed-loop IFOG system can suppress zero drift very well by using negative feedback. The results of our theoretical analysis and computer simulation indicate that the bias drift of a closed-loop system is smaller than that of an open-loop one.
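
    The underlying negative-feedback relation, in standard control-theory form (a textbook identity, not the authors' specific loop model): a drift disturbance d entering the loop output is attenuated by the loop gain, which an open-loop system lacks.

```latex
% Closed-loop response with forward gain G, feedback gain H, input u,
% and output disturbance d (bias drift):
\[
  y \;=\; \frac{G}{1 + G H}\, u \;+\; \frac{1}{1 + G H}\, d ,
\]
% for large loop gain GH the drift contribution is suppressed by a factor
% of 1/(1+GH), while the signal path is only mildly rescaled.
```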

  13. Mathematic models for a ray tracing method and its applications in wireless optical communications.

    PubMed

    Zhang, Minglun; Zhang, Yangan; Yuan, Xueguang; Zhang, Jinnan

    2010-08-16

    This paper presents a new ray tracing method, comprising a complete set of mathematical models, whose validity is verified by simulations. In addition, both theoretical analysis and simulation results show that the computational complexity of the method is much lower than that of previous ones. Therefore, the method can be used to rapidly calculate the impulse response of wireless optical channels for complicated systems.

  14. Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces.

    PubMed

    Abu-Alqumsan, Mohammad; Peer, Angelika

    2016-06-01

    Spatial filtering has proved to be a powerful pre-processing step in detection of steady-state visual evoked potentials and boosted typical detection rates both in offline analysis and online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations as they all build upon the second order statistics of the acquired Electroencephalographic (EEG) data, that is, its spatial autocovariance and cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of the autoregressive spectral analysis in estimating the signal and noise power levels. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination methods were found to provide better estimates for different SNR levels. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.
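
    A minimal Python sketch of the baseline CCA detection scheme discussed above (not the proposed CVARS method), correlating multichannel EEG with sine-cosine references at each candidate stimulus frequency; the data here are synthetic, and the sampling rate, channel count, and frequencies are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

fs, T = 250, 2.0                                  # sampling rate (Hz), duration (s)
t = np.arange(int(fs * T)) / fs
rng = np.random.default_rng(0)
eeg = rng.normal(size=(t.size, 8))                # 8-channel noise "EEG"
eeg[:, 0] += 0.5 * np.sin(2 * np.pi * 12 * t)     # embed a 12 Hz SSVEP response

def reference(f, n_harmonics=2):
    """Sine-cosine reference matrix at frequency f and its harmonics."""
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(cols)

scores = {}
for f in (10, 12, 15):                            # candidate stimulus frequencies
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, reference(f))   # canonical variates
    scores[f] = abs(np.corrcoef(u.ravel(), v.ravel())[0, 1])

print(max(scores, key=scores.get), scores)        # highest canonical correlation wins
```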

  15. The Theoretical Astrophysical Observatory: Cloud-based Mock Galaxy Catalogs

    NASA Astrophysics Data System (ADS)

    Bernyk, Maksym; Croton, Darren J.; Tonini, Chiara; Hodkinson, Luke; Hassan, Amr H.; Garel, Thibault; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Hegarty, Sarah

    2016-03-01

    We introduce the Theoretical Astrophysical Observatory (TAO), an online virtual laboratory that houses mock observations of galaxy survey data. Such mocks have become an integral part of the modern analysis pipeline. However, building them requires expert knowledge of galaxy modeling and simulation techniques, significant investment in software development, and access to high performance computing. These requirements make it difficult for a small research team or individual to quickly build a mock catalog suited to their needs. To address this, TAO offers access to multiple cosmological simulations and semi-analytic galaxy formation models from an intuitive and clean web interface. Results can be funnelled through science modules and sent to a dedicated supercomputer for further processing and manipulation. These modules include the ability to (1) construct custom observer light cones from the simulation data cubes; (2) generate the stellar emission from star formation histories, apply dust extinction, and compute absolute and/or apparent magnitudes; and (3) produce mock images of the sky. All of TAO’s features can be accessed without any programming requirements. The modular nature of TAO opens it up for further expansion in the future.

  16. Practical sliced configuration spaces for curved planar pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sacks, E.

    1999-01-01

    In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.

  17. A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.

    PubMed

    Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H

    2001-03-01

    The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180-degree-geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained with the experimental X-ray fluorescence system and the simulated spectral shape obtained with the two Monte Carlo codes was relatively poor. The main reason for the disagreement arises from a basic assumption shared by the two codes: both assume a "free" electron model for Compton interactions. This assumption underestimates the results and invalidates comparisons between predicted and experimental spectra.

  18. Optical activity and electronic absorption spectra of some simple nucleosides related to cytidine and uridine: all-valence-shell molecular orbital calculations.

    PubMed Central

    Miles, D W; Redington, P K; Miles, D L; Eyring, H

    1981-01-01

    The circular dichroism and electronic absorption of three simple model systems for cytidine and uridine have been measured to 190 nm. The molecular spectral properties (excitation wavelengths, oscillator strengths, rotational strengths, and polarization directions) and electronic transitional patterns were investigated by using wave functions of the entire nucleoside with the goal of establishing the reliability of the theoretical method. The computed electronic absorption quantities were shown to be in satisfactory agreement with experimental data. It was found that the computed optical rotatory strengths of the B2u and E1u electronic transitions and lowest observed n-pi transition are in good agreement with experimental values. Electronic transitions were characterized by their electronic transitional patterns derived from population analysis of the transition density matrix. The theoretical rotational strengths associated with the B2u and E1u transitions stabilize after the use of just a few singly excited configurations in the configuration interaction basis and, hypothetically, are more reliable as indicators of conformation in pyrimidine nucleosides related to cytidine. PMID:6950393

  19. Marshal Wrubel and the Electronic Computer as an Astronomical Instrument

    NASA Astrophysics Data System (ADS)

    Mutschlecner, J. P.; Olsen, K. H.

    1998-05-01

    In 1960, Marshal H. Wrubel, professor of astrophysics at Indiana University, published an influential review paper under the title, "The Electronic Computer as an Astronomical Instrument." This essay pointed out the enormous potential of the electronic computer as an instrument of observational and theoretical research in astronomy, illustrated programming concepts, and made specific recommendations for the increased use of computers in astronomy. He noted that, with a few scattered exceptions, computer use by the astronomical community had heretofore been "timid and sporadic." This situation was to improve dramatically in the next few years. By the late 1950s, general-purpose, high-speed, "mainframe" computers were just emerging from the experimental, developmental stage, but few were affordable by or available to academic and research institutions not closely associated with large industrial or national defense programs. Yet by 1960 Wrubel had spent a decade actively pioneering and promoting the imaginative application of electronic computation within the astronomical community. Astronomy upper-level undergraduate and graduate students at Indiana were introduced to computing, and Ph.D. candidates who he supervised applied computer techniques to problems in theoretical astrophysics. He wrote an early textbook on programming, taught programming classes, and helped establish and direct the Research Computing Center at Indiana, later named the Wrubel Computing Center in his honor. He and his students created a variety of algorithms and subroutines and exchanged these throughout the astronomical community by distributing the Astronomical Computation News Letter. Nationally as well as internationally, Wrubel actively cooperated with other groups interested in computing applications for theoretical astrophysics, often through his position as secretary of the IAU commission on Stellar Constitution.

  20. TADS: A CFD-based turbomachinery analysis and design system with GUI. Volume 1: Method and results

    NASA Technical Reports Server (NTRS)

    Topp, D. A.; Myers, R. A.; Delaney, R. A.

    1995-01-01

    The primary objective of this study was the development of a computational fluid dynamics (CFD) based turbomachinery airfoil analysis and design system, controlled by a graphical user interface (GUI). The computer codes resulting from this effort are referred to as the Turbomachinery Analysis and Design System (TADS). This document describes the theoretical basis and analytical results from the TADS system. TADS couples a throughflow solver (ADPAC) with a quasi-3D blade-to-blade solver (RVCQ3D) in an interactive package. Throughflow analysis capability was developed in ADPAC through the addition of blade force and blockage terms to the governing equations. A GUI was developed to simplify user input and automate the many tasks required to perform turbomachinery analysis and design. The coupling of various programs was done in a way that alternative solvers or grid generators could be easily incorporated into the TADS framework. Results of aerodynamic calculations using the TADS system are presented for a highly loaded fan, a compressor stator, a low-speed turbine blade, and a transonic turbine vane.

  1. Theoretical basis, experimental design, and computerized simulation of synergism and antagonism in drug combination studies.

    PubMed

    Chou, Ting-Chao

    2006-09-01

    The median-effect equation derived from the mass-action law principle at equilibrium-steady state via mathematical induction and deduction for different reaction sequences and mechanisms and different types of inhibition has been shown to be the unified theory for the Michaelis-Menten equation, Hill equation, Henderson-Hasselbalch equation, and Scatchard equation. It is shown that dose and effect are interchangeable via defined parameters. This general equation for the single drug effect has been extended to the multiple drug effect equation for n drugs. These equations provide the theoretical basis for the combination index (CI)-isobologram equation that allows quantitative determination of drug interactions, where CI < 1, = 1, and > 1 indicate synergism, additive effect, and antagonism, respectively. Based on these algorithms, computer software has been developed to allow automated simulation of synergism and antagonism at all dose or effect levels. It displays the dose-effect curve, median-effect plot, combination index plot, isobologram, dose-reduction index plot, and polygonogram for in vitro or in vivo studies. This theoretical development, experimental design, and computerized data analysis have facilitated dose-effect analysis for single drug evaluation or carcinogen and radiation risk assessment, as well as for drug or other entity combinations in a vast field of disciplines of biomedical sciences. In this review, selected examples of applications are given, and step-by-step examples of experimental designs and real data analysis are also illustrated. The merging of the mass-action law principle with mathematical induction-deduction has been proven to be a unique and effective scientific method for general theory development. The median-effect principle and its mass-action law based computer software are gaining increased applications in biomedical sciences, from how to effectively evaluate a single compound or entity to how to beneficially use multiple drugs or modalities in combination therapies.
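
    The two central relations, in the notation of this framework: the median-effect equation for a single agent, and the two-drug combination index built from it.

```latex
% Median-effect equation and two-drug combination index (CI).
\[
  \frac{f_a}{f_u} = \left(\frac{D}{D_m}\right)^{m}
  \qquad\text{and}\qquad
  \mathrm{CI} = \frac{(D)_1}{(D_x)_1} + \frac{(D)_2}{(D_x)_2},
\]
% f_a, f_u: affected and unaffected fractions (f_u = 1 - f_a);
% D_m: median-effect dose; m: sigmoidicity of the dose-effect curve;
% (D)_i: dose of drug i used in the combination; (D_x)_i: dose of drug i
% alone producing the same effect x. CI < 1, = 1, > 1 indicate synergism,
% additive effect, and antagonism, respectively.
```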

  2. The bacteriorhodopsin model membrane system as a prototype molecular computing element.

    PubMed

    Hong, F T

    1986-01-01

    The quest for more sophisticated integrated circuits to overcome the limitations of currently available silicon integrated circuits has led computer scientists and engineers to propose using biological molecules as computational elements. While the theoretical aspect of this possibility has been pursued by computer scientists, the research and development of experimental prototypes have not been pursued with equal intensity. In this survey, we examine model membrane systems that incorporate the protein pigment bacteriorhodopsin, which is found in Halobacterium halobium. This system was chosen for several reasons. The pigment/membrane system is sufficiently simple and stable for rigorous quantitative study, yet at the same time sufficiently complex in molecular structure to permit alteration of this structure in an attempt to manipulate the photosignal. Several methods of forming the pigment/membrane assembly are described and the potential application to biochip design is discussed. Experimental data using these membranes, measured by a tunable voltage clamp method, are presented along with a theoretical analysis based on the Gouy-Chapman diffuse double layer theory to illustrate the usefulness of this approach. It is shown that detailed layouts of the pigment/membrane assembly as well as external loading conditions can modify the time course of the photosignal in a predictable manner. Some problems that may arise in actual implementation and manufacturing, as well as the use of existing technology in protein chemistry, immunology, and recombinant DNA technology, are discussed.

  3. Wind tunnel seeding particles for laser velocimeter

    NASA Technical Reports Server (NTRS)

    Ghorieshi, Anthony

    1992-01-01

    The design of an optimal airfoil has been a major challenge for aerospace industries. The main objective is to reduce the drag force while increasing the lift force in various environmental air conditions. Experimental verification of theoretical and computational results is a crucial part of the analysis because of errors buried in the solutions due to the assumptions made in theoretical work. Experimental studies are an integral part of a good design procedure; however, empirical data are not always error free, due to environmental obstacles, poor execution, etc. The reduction of errors in empirical data is a major challenge in wind tunnel testing. One recent advance of particular interest is a non-intrusive measurement technique known as laser velocimetry (LV), which allows quantitative flow data to be obtained without introducing flow-disturbing probes. The laser velocimeter technique is based on measurement of light scattered by particles present in the flow, not on the velocity of the flow itself. Therefore, for accurate flow velocity measurement with laser velocimeters, two criteria are investigated: (1) how well the particles track the local flow field, and (2) the light-scattering efficiency required to obtain signals with the LV. In order to demonstrate the concept of predicting the flow velocity from velocity measurements of seeding particles, the theoretical velocity of the gas flow is computed and compared with the experimentally obtained velocity of the seeding particles.
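
    The standard Stokes-drag estimate of particle tracking fidelity, the first of the two criteria above (a textbook relation, included for illustration rather than taken from this report):

```latex
% Stokes response time of a small spherical seeding particle.
\[
  \tau_p = \frac{\rho_p\, d_p^{2}}{18\,\mu},
\]
% rho_p: particle density; d_p: particle diameter; mu: gas dynamic
% viscosity. Faithful tracking requires tau_p to be small compared with
% the characteristic flow time scale, i.e., Stokes number St << 1.
```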

  4. Theoretical foundations for finite-time transient stability and sensitivity analysis of power systems

    NASA Astrophysics Data System (ADS)

    Dasgupta, Sambarta

    Transient stability and sensitivity analysis of power systems are problems of enormous academic and practical interest. These classical problems have received renewed interest because of advances in sensor technology in the form of phasor measurement units (PMUs), which have provided a unique opportunity for the development of real-time stability monitoring and sensitivity analysis tools. The transient stability problem in power systems is inherently a problem of stability analysis of non-equilibrium dynamics, because for a short time period following a fault or disturbance the system trajectory moves away from the equilibrium point. The real-time stability decision has to be made over this short time period. However, the existing stability definitions, and hence the analysis tools for transient stability, are asymptotic in nature. In this thesis, we develop theoretical foundations for the short-term transient stability analysis of power systems, based on the theory of normally hyperbolic invariant manifolds and finite-time Lyapunov exponents, adopted from the geometric theory of dynamical systems. The theory of normally hyperbolic surfaces allows us to characterize the rate of expansion and contraction of co-dimension one material surfaces in the phase space. The expansion and contraction rates of these material surfaces can be computed in finite time. We prove that the expansion and contraction rates can be used as finite-time transient stability certificates. Furthermore, material surfaces with maximum expansion and contraction rate are identified with the stability boundaries, and these stability boundaries are used for computation of the stability margin. We have used the theoretical framework for the development of model-based and model-free real-time stability monitoring methods. Both the model-based and model-free approaches rely on the availability of high-resolution time series data from the PMUs for stability prediction. The problem of sensitivity analysis of a power system subjected to changes or uncertainty in load parameters and network topology is also studied using the theory of normally hyperbolic manifolds. The sensitivity analysis is used for the identification and rank ordering of the critical interactions and parameters in the power network, and it is carried out both in finite time and asymptotically. One of the distinguishing features of the asymptotic sensitivity analysis is that the asymptotic dynamics of the system is assumed to be a periodic orbit. For the asymptotic sensitivity analysis we employ a combination of tools from ergodic theory and the geometric theory of dynamical systems.
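
    The finite-time Lyapunov exponent at the heart of such finite-time certificates can be written as follows (the standard definition; the thesis builds its stability bounds on quantities of this kind):

```latex
% Finite-time Lyapunov exponent over horizon T, from the flow map
% \Phi_{t_0}^{t_0+T} and the Cauchy-Green tensor
% C(x) = (\nabla\Phi_{t_0}^{t_0+T}(x))^{\top}\,\nabla\Phi_{t_0}^{t_0+T}(x):
\[
  \sigma_{T}(x) \;=\; \frac{1}{|T|}\,
  \ln \sqrt{\lambda_{\max}\!\left( C(x) \right)} ,
\]
% i.e., the maximal exponential separation rate of nearby trajectories over
% the finite window, computable directly from simulated or measured data.
```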

  5. Spectral and structural studies of the anti-cancer drug Flutamide by density functional theoretical method.

    PubMed

    Mariappan, G; Sundaraganesan, N

    2014-01-03

    A comprehensive screening of a recent DFT theoretical approach to structural analysis is presented. The compound 2-methyl-N-[4-nitro-3-(trifluoromethyl)phenyl]propanamide, commonly known as Flutamide (abbreviated FLT in the present study), is an important and efficacious anti-cancer drug. The molecular geometry, vibrational spectra, and electronic and NMR spectral interpretation of Flutamide have been studied with the aid of the density functional theory (DFT) method. The vibrational assignments of the normal modes were performed on the basis of PED calculations using the VEDA 4 program. Comparison of the computational results with the X-ray diffraction results of Flutamide allowed the evaluation of structure predictions and confirmed B3LYP/6-31G(d,p) as accurate for structure determination. Application of scaling factors for IR and Raman frequency predictions showed good agreement with experimental values, supporting the assignment of the major contributors to the vibrational modes of the title compound. The stability of the molecule arising from hyperconjugative interactions, leading to its bioactivity, and its charge delocalization have been analyzed using natural bond orbital (NBO) analysis. NMR chemical shifts of the molecule were calculated using the gauge-independent atomic orbital (GIAO) method. The comparison of measured FTIR, FT-Raman, and UV-Visible data to calculated values allowed assignment of the major spectral features of the title molecule. Frontier molecular orbital analysis was also carried out using theoretical calculations.

  6. FLUT - A program for aeroelastic stability analysis. [of aircraft structures in subsonic flow

    NASA Technical Reports Server (NTRS)

    Johnson, E. H.

    1977-01-01

    A computer program (FLUT) that can be used to evaluate the aeroelastic stability of aircraft structures in subsonic flow is described. The algorithm synthesizes data from a structural vibration analysis with an unsteady aerodynamics analysis and then performs a complex eigenvalue analysis to assess the system stability. The theoretical basis of the program is discussed with special emphasis placed on some innovative techniques which improve the efficiency of the analysis. User information needed to efficiently and successfully utilize the program is provided. In addition to identifying the required input, the flow of the program execution and some possible sources of difficulty are included. The use of the program is demonstrated with a listing of the input and output for a simple example.
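
    A minimal numpy sketch of the complex-eigenvalue stability check at the core of such an aeroelastic analysis; the toy two-degree-of-freedom matrices are invented for illustration and are unrelated to FLUT's actual structural or aerodynamic model.

```python
import numpy as np

# Toy 2-DOF aeroelastic system: M q'' + C q' + K q = 0, with aerodynamic
# coupling folded into the (non-symmetric) damping and stiffness matrices.
M = np.diag([1.0, 1.0])                        # mass matrix
C = np.array([[0.05, -0.02], [0.01, 0.04]])    # damping incl. aero terms
K = np.array([[4.0, -0.5], [0.8, 9.0]])        # stiffness incl. aero terms

# First-order state matrix for x = [q, q']: x' = A x.
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ C]])

eigs = np.linalg.eigvals(A)                    # complex eigenvalue analysis
print("stable:", bool(np.all(eigs.real < 0)))  # stable iff all Re < 0
print("frequencies (rad/s):", np.abs(eigs.imag[eigs.imag > 0]))
```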

  7. Category-theoretic models of algebraic computer systems

    NASA Astrophysics Data System (ADS)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  8. SPLASH program for three dimensional fluid dynamics with free surface boundaries

    NASA Astrophysics Data System (ADS)

    Yamaguchi, A.

    1996-05-01

    This paper describes SPLASH, a three-dimensional computer program that solves the Navier-Stokes equations based on the Arbitrary Lagrangian Eulerian (ALE) finite element method. SPLASH has been developed for application to fluid dynamics problems involving moving boundaries in a liquid-metal-cooled Fast Breeder Reactor (FBR). To apply the SPLASH code to free surface behavior analysis, a capillary model using a cubic spline function has been developed. Several sample problems, e.g., free surface oscillation, vortex shedding development, and capillary tube phenomena, are solved to verify the computer program. In these analyses, the numerical results are in good agreement with theoretical values or experimental observations. The SPLASH code has also been applied to the analysis of a free surface sloshing experiment coupled with forced circulation flow in a rectangular tank, a simplified version of the flow field in an FBR reactor vessel. The computational simulation predicts well the general behavior of the fluid flow and of the free surface. The analytical capability of the SPLASH code has been verified in this study, and its application to more practical problems such as FBR design and safety analysis is under way.

  9. Left ventricular fluid mechanics: the long way from theoretical models to clinical applications.

    PubMed

    Pedrizzetti, Gianni; Domenichini, Federico

    2015-01-01

    The flow inside the left ventricle is characterized by the formation of vortices that smoothly accompany blood from the mitral inlet to the aortic outlet. Computational fluid dynamics has shed some light on the fundamental processes involved in vortex motion. More recently, patient-specific numerical simulations have become an increasingly feasible tool that can be integrated with developing imaging technologies. The existing computational methods are reviewed from the perspective of their potential role as a novel aid for advanced clinical analysis. The current results obtained by simulation methods, either alone or in combination with medical imaging, are summarized. Open problems are highlighted and prospective clinical applications are discussed.

  10. A computer method of finding valuations forcing validity of LC formulae

    NASA Astrophysics Data System (ADS)

    Godlewski, Łukasz; Świetorzecka, Kordula; Mulawka, Jan

    2014-11-01

    The purpose of this paper is to present a computer implementation of the system known as LC temporal logic [1]. First, a short introduction to this logic is given to familiarize the reader with the theoretical issues. Algorithms allowing a deep analysis of LC formulae are then considered. In particular, we discuss how to determine whether a formula is a tautology, a counter-tautology, or satisfiable. Next, we show how to find all valuations that satisfy the formula. Finally, we consider finding the histories generated by the formula and transforming these histories into a state machine. A description of the experiments that verify the implementation is also briefly presented.
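
    A brute-force sketch of the valuation checks described above may clarify the idea: enumerate all valuations of a formula's variables and classify the formula as a tautology, a counter-tautology, or satisfiable. The propositional formula below is hypothetical, and LC's temporal machinery is deliberately omitted.

    ```python
    # A minimal sketch of valuation enumeration for propositional formulae,
    # illustrating the tautology / counter-tautology / satisfiability checks
    # described above (the temporal operators of LC itself are omitted).
    from itertools import product

    def classify(formula, variables):
        """Evaluate `formula` under every valuation of `variables`."""
        satisfying = []
        total = 0
        for values in product([False, True], repeat=len(variables)):
            valuation = dict(zip(variables, values))
            total += 1
            if formula(valuation):
                satisfying.append(valuation)
        if len(satisfying) == total:
            kind = "tautology"
        elif not satisfying:
            kind = "counter-tautology"
        else:
            kind = "satisfiable"
        return kind, satisfying

    # Hypothetical example: (p -> q) or p, which holds under every valuation.
    kind, models = classify(lambda v: (not v["p"] or v["q"]) or v["p"], ["p", "q"])
    print(kind, len(models))  # tautology 4
    ```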

  11. Benchmarking of Computational Models for NDE and SHM of Composites

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin; Leckey, Cara; Hafiychuk, Vasyl; Juarez, Peter; Timucin, Dogan; Schuet, Stefan; Hafiychuk, Halyna

    2016-01-01

    Ultrasonic wave phenomena constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials such as carbon-fiber-reinforced polymer (CFRP) laminates. Computational models of ultrasonic guided-wave excitation, propagation, scattering, and detection in quasi-isotropic laminates can be extremely valuable in designing practically realizable NDE and SHM hardware and software with desired accuracy, reliability, efficiency, and coverage. This paper presents comparisons of guided-wave simulations for CFRP composites implemented using three different simulation codes: two commercial finite-element analysis packages, COMSOL and ABAQUS, and a custom code implementing the Elastodynamic Finite Integration Technique (EFIT). Comparisons are also made to experimental laser Doppler vibrometry data and theoretical dispersion curves.

  12. Dynamic regulation of erythropoiesis: A computer model of general applicability

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1979-01-01

    A mathematical model for the control of erythropoiesis was developed based on the balance between oxygen supply and demand at a renal oxygen detector which controls erythropoietin release and red cell production. Feedback regulation of tissue oxygen tension is accomplished by adjustments of hemoglobin levels resulting from the output of a renal-bone marrow controller. Special consideration was given to the determinants of tissue oxygenation, including evaluation of the influence of blood flow, capillary diffusivity, oxygen uptake, and oxygen-hemoglobin affinity. A theoretical analysis of the overall control system is presented. Computer simulations of altitude hypoxia, red cell infusion, hyperoxia, and hemolytic anemia demonstrate the validity of the model for general human application in health and disease.
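
    The feedback loop described above can be sketched as a toy pair of ODEs: low tissue oxygen tension (proxied here by hemoglobin level) stimulates erythropoietin release, which in turn drives red cell production. All rate constants, the linear forms, and the oxygen proxy are illustrative assumptions, not the report's actual model.

    ```python
    # A toy negative-feedback sketch of the control loop described above:
    # tissue oxygen tension (proportional to hemoglobin here) suppresses
    # erythropoietin (EPO) release, and EPO drives red cell production.
    # All variable names, rate constants, and linear forms are assumptions.
    from scipy.integrate import solve_ivp

    P_SET = 1.0          # target tissue oxygen tension (arbitrary units)
    A, D_E = 2.0, 1.0    # EPO release gain and clearance rate
    B, D_H = 0.5, 0.1    # red cell production gain and loss rate

    def rhs(t, y):
        epo, hgb = y
        p_tissue = hgb                # crude proxy: oxygen tension ~ hemoglobin
        depo = A * max(0.0, P_SET - p_tissue) - D_E * epo
        dhgb = B * epo - D_H * hgb
        return [depo, dhgb]

    # Simulate recovery from an anemic initial state (hgb = 0.5).
    sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.5])
    print(sol.y[:, -1])  # EPO and hemoglobin approach a steady state
    ```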

  13. The CMC/3DPNS computer program for prediction of three-dimension, subsonic, turbulent aerodynamic juncture region flow. Volume 2: Users' manual

    NASA Technical Reports Server (NTRS)

    Manhardt, P. D.

    1982-01-01

    The CMC fluid mechanics program system was developed to translate finite element numerical solution methodology, applied to nonlinear field problems, into a versatile computer code for comprehensive flow field analysis. Data procedures for the CMC three-dimensional Parabolic Navier-Stokes (PNS) algorithm are presented. General data procedures are described, and a juncture corner flow standard test case data deck is documented, including a listing of the data deck and an explanation of the grid generation methodology. Tabulations of all commands and variables available to the user are given in alphabetical order, with cross-reference numbers that refer to storage addresses.

  14. Information geometry and its application to theoretical statistics and diffusion tensor magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Wisniewski, Nicholas Andrew

    This dissertation is divided into two parts. First, we present an exact solution to a generalization of the Behrens-Fisher problem by embedding the problem in the Riemannian manifold of Normal distributions; from this we construct a geometric hypothesis testing scheme. Second, we investigate the most commonly used geometric methods employed in tensor field interpolation for DT-MRI analysis and cardiac computer modeling. We computationally investigate a class of physiologically motivated orthogonal tensor invariants, both at the full tensor field scale and at the scale of a single interpolation, by performing a decimation/interpolation experiment. We show that Riemannian-based methods give the best results in preserving desirable physiological features.
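
    One common Riemannian-motivated interpolation scheme for diffusion tensors is the log-Euclidean approach: interpolate the matrix logarithms and exponentiate back, which keeps the interpolants positive definite. Whether this is the dissertation's exact choice of metric is an assumption; the sketch below only illustrates the family of methods.

    ```python
    # A sketch of log-Euclidean interpolation between symmetric positive
    # definite (SPD) diffusion tensors: interpolate in log-space, then
    # exponentiate. Naive linear interpolation can swell determinants;
    # this scheme avoids that and preserves positive definiteness.
    import numpy as np
    from scipy.linalg import expm, logm

    def log_euclidean_interp(a, b, t):
        """Interpolate SPD matrices a, b at parameter t in [0, 1]."""
        return expm((1.0 - t) * logm(a) + t * logm(b))

    a = np.diag([3.0, 1.0, 1.0])            # prolate diffusion tensor
    b = np.array([[1.5, 0.4, 0.0],
                  [0.4, 1.5, 0.0],
                  [0.0, 0.0, 1.0]])
    mid = np.real(log_euclidean_interp(a, b, 0.5))
    mid = 0.5 * (mid + mid.T)               # symmetrize numerical noise
    print(np.linalg.eigvalsh(mid))          # all eigenvalues remain positive
    ```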

  15. Quantifying the Beauty of Words: A Neurocognitive Poetics Perspective

    PubMed Central

    Jacobs, Arthur M.

    2017-01-01

    In this paper I would like to pave the way for future studies in Computational Stylistics and (Neuro-)Cognitive Poetics by describing procedures for predicting the subjective beauty of words. A set of eight tentative word features is computed via Quantitative Narrative Analysis (QNA), and a novel metric for quantifying word beauty, the aesthetic potential, is proposed. Application of machine learning algorithms fed with this QNA data shows that a classifier of the decision tree family excellently learns to split words into beautiful vs. ugly ones. The results shed light on surface and semantic features theoretically relevant for affective-aesthetic processes in literary reading and generate quantitative predictions for neuroaesthetic studies of verbal materials. PMID:29311877
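
    The classification step described above can be sketched with scikit-learn: a shallow decision tree learning to split words into beautiful vs. ugly from a few numeric features. The feature names and the tiny dataset are hypothetical stand-ins for the paper's QNA features.

    ```python
    # A minimal sketch of the classification step described above: a
    # decision tree splitting words into "beautiful" (1) vs "ugly" (0)
    # from surface features. Features and ratings are toy assumptions.
    from sklearn.tree import DecisionTreeClassifier

    # columns: word length, vowel ratio, (assumed) valence score
    X = [[5, 0.40, 0.9], [9, 0.44, 0.8], [6, 0.17, -0.7],
         [4, 0.50, 0.6], [8, 0.25, -0.5], [7, 0.43, -0.9]]
    y = [1, 1, 0, 1, 0, 0]

    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(clf.predict([[6, 0.33, 0.7]]))  # predicted class for a new word
    ```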

  16. Quantifying the Beauty of Words: A Neurocognitive Poetics Perspective.

    PubMed

    Jacobs, Arthur M

    2017-01-01

    In this paper I would like to pave the way for future studies in Computational Stylistics and (Neuro-)Cognitive Poetics by describing procedures for predicting the subjective beauty of words. A set of eight tentative word features is computed via Quantitative Narrative Analysis (QNA), and a novel metric for quantifying word beauty, the aesthetic potential, is proposed. Application of machine learning algorithms fed with this QNA data shows that a classifier of the decision tree family excellently learns to split words into beautiful vs. ugly ones. The results shed light on surface and semantic features theoretically relevant for affective-aesthetic processes in literary reading and generate quantitative predictions for neuroaesthetic studies of verbal materials.

  17. A study of reacting free and ducted hydrogen/air jets

    NASA Technical Reports Server (NTRS)

    Beach, H. L., Jr.

    1975-01-01

    The mixing and reaction of a supersonic jet of hydrogen in coaxial free and ducted high temperature test gases were investigated. The importance of chemical kinetics on computed results, and the utilization of free-jet theoretical approaches to compute enclosed flow fields were studied. Measured pitot pressure profiles were correlated by use of a parabolic mixing analysis employing an eddy viscosity model. All computations, including free, ducted, reacting, and nonreacting cases, use the same value of the empirical constant in the viscosity model. Equilibrium and finite rate chemistry models were utilized. The finite rate assumption allowed prediction of observed ignition delay, but the equilibrium model gave the best correlations downstream from the ignition location. Ducted calculations were made with finite rate chemistry; correlations were, in general, as good as the free-jet results until problems with the boundary conditions were encountered.

  18. Theoretical and experimental study of polycyclic aromatic compounds as β-tubulin inhibitors.

    PubMed

    Olazarán, Fabian E; García-Pérez, Carlos A; Bandyopadhyay, Debasish; Balderas-Rentería, Isaias; Reyes-Figueroa, Angel D; Henschke, Lars; Rivera, Gildardo

    2017-03-01

    In this work, through a docking analysis of compounds from the ZINC chemical library against human β-tubulin using a high-performance computing cluster, we report new polycyclic aromatic compounds that bind with high energy at the colchicine binding site of β-tubulin, suggesting three new key amino acids. However, molecular dynamics analysis showed low stability in the interaction between ligand and receptor. The results were confirmed experimentally in in vitro and in vivo models, suggesting that molecular dynamics simulation is the best option for finding new potential β-tubulin inhibitors. Graphical abstract: Bennett's acceptance ratio (BAR) method.

  19. Virtual Manipulative Materials in Secondary Mathematics: A Theoretical Discussion

    ERIC Educational Resources Information Center

    Namukasa, Immacukate K.; Stanley, Darren; Tuchtie, Martin

    2009-01-01

    With the increased use of computer manipulatives in teaching there is need for theoretical discussions on the role of manipulatives. This paper reviews theoretical rationales for using manipulatives and illustrates how earlier distinctions of manipulative materials are broadened to include new forms of materials such as virtual manipulatives.…

  20. Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo.

    PubMed

    Kawano, Tomonori; Bouteau, François; Mancuso, Stefano

    2012-11-01

    Automata theory is the mathematical study of abstract machines, pursued in theoretical computer science and in highly interdisciplinary fields that combine the natural sciences with theoretical computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells, and plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, Mealy machines or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between computational data processing and plant decision-making processes becomes obvious. Finally, their putative roles as parts of plant-based computing or robotic systems are discussed.
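
    A minimal sketch of the sequential-machine formalism invoked above, a Mealy machine given by states and a transition/output function, may help. The guard-cell style example (a stomatal state responding to light and drought inputs) is an illustrative assumption, not a system taken from the review.

    ```python
    # A minimal Mealy machine: output depends on the current state and
    # the input symbol. The plant-signaling interpretation is assumed.
    MEALY = {
        # (state, input) -> (next_state, output)
        ("closed", "light"):   ("open",   "K+ influx"),
        ("closed", "drought"): ("closed", "no response"),
        ("open",   "light"):   ("open",   "no response"),
        ("open",   "drought"): ("closed", "ABA signal"),
    }

    def run(machine, state, inputs):
        outputs = []
        for symbol in inputs:
            state, out = machine[(state, symbol)]
            outputs.append(out)
        return state, outputs

    final, outs = run(MEALY, "closed", ["light", "drought", "light"])
    print(final, outs)  # open ['K+ influx', 'ABA signal', 'K+ influx']
    ```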

  1. Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo

    PubMed Central

    Kawano, Tomonori; Bouteau, François; Mancuso, Stefano

    2012-01-01

    Automata theory is the mathematical study of abstract machines, pursued in theoretical computer science and in highly interdisciplinary fields that combine the natural sciences with theoretical computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells, and plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, Mealy machines or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between computational data processing and plant decision-making processes becomes obvious. Finally, their putative roles as parts of plant-based computing or robotic systems are discussed. PMID:23336016

  2. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  3. Comparative analysis of two discretizations of Ricci curvature for complex networks.

    PubMed

    Samal, Areejit; Sreejith, R P; Gu, Jiao; Liu, Shiping; Saucan, Emil; Jost, Jürgen

    2018-06-05

    We have performed an empirical comparison of two distinct notions of discrete Ricci curvature for graphs or networks, namely, the Forman-Ricci curvature and Ollivier-Ricci curvature. Importantly, these two discretizations of the Ricci curvature were developed based on different properties of the classical smooth notion, and thus, the two notions shed light on different aspects of network structure and behavior. Nevertheless, our extensive computational analysis in a wide range of both model and real-world networks shows that the two discretizations of Ricci curvature are highly correlated in many networks. Moreover, we show that if one considers the augmented Forman-Ricci curvature which also accounts for the two-dimensional simplicial complexes arising in graphs, the observed correlation between the two discretizations is even higher, especially, in real networks. Besides the potential theoretical implications of these observations, the close relationship between the two discretizations has practical implications whereby Forman-Ricci curvature can be employed in place of Ollivier-Ricci curvature for faster computation in larger real-world networks whenever coarse analysis suffices.
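
    The contrast in computational cost can be sketched directly: for an unweighted graph, the combinatorial Forman-Ricci curvature of an edge (u, v) is simply 4 - deg(u) - deg(v), and the augmented variant adds three times the number of triangles supported on the edge, whereas Ollivier-Ricci requires an optimal-transport computation per edge (omitted here). The exact normalizations used in the paper may differ from this sketch.

    ```python
    # A sketch of the cheap discretization: combinatorial Forman-Ricci
    # curvature on an unweighted graph, plus the triangle-augmented
    # variant. Ollivier-Ricci would need a per-edge transport solve.
    import networkx as nx

    def forman_curvature(g):
        curv = {}
        for u, v in g.edges():
            triangles = len(set(g[u]) & set(g[v]))   # common neighbors
            f = 4 - g.degree(u) - g.degree(v)
            curv[(u, v)] = (f, f + 3 * triangles)    # (plain, augmented)
        return curv

    g = nx.karate_club_graph()
    plain, augmented = forman_curvature(g)[(0, 1)]
    print(plain, augmented)
    ```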

  4. Fault Diagnosis of Induction Machines in a Transient Regime Using Current Sensors with an Optimized Slepian Window

    PubMed Central

    Burriel-Valencia, Jordi; Martinez-Roman, Javier; Sapena-Bano, Angel

    2018-01-01

    The aim of this paper is to introduce a new methodology for the fault diagnosis of induction machines working in the transient regime, when time-frequency analysis tools are used. The proposed method relies on the use of the optimized Slepian window for performing the short time Fourier transform (STFT) of the stator current signal. It is shown that for a given sequence length of finite duration, the Slepian window has the maximum concentration of energy, greater than can be reached with a gated Gaussian window, which is usually used as the analysis window. In this paper, the use and optimization of the Slepian window for fault diagnosis of induction machines is theoretically introduced and experimentally validated through the test of a 3.15-MW induction motor with broken bars during the start-up transient. The theoretical analysis and the experimental results show that the use of the Slepian window can highlight the fault components in the current’s spectrogram with a significant reduction of the required computational resources. PMID:29316650
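
    The windowing choice described above can be sketched with SciPy, which provides Slepian (DPSS) windows: pass one to a standard spectrogram in place of the usual Gaussian or Hann window. The toy signal, window length, and time-bandwidth product NW below are illustrative, not the paper's optimized values.

    ```python
    # A sketch of an STFT spectrogram computed with a Slepian (DPSS)
    # window, which maximizes energy concentration for a given length.
    # The "stator current" signal here is a toy stand-in.
    import numpy as np
    from scipy.signal import spectrogram, windows

    fs = 10_000.0
    t = np.arange(0, 2.0, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * (50 + 20 * t) * t)

    nperseg = 1024
    slepian = windows.dpss(nperseg, NW=3)          # assumed NW, not optimized
    f, tt, sxx = spectrogram(x, fs=fs, window=slepian, noverlap=nperseg // 2)
    print(sxx.shape)  # (frequency bins, time frames)
    ```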

  5. Molecular structure, spectral studies, NBO, HOMO-LUMO profile, MEP and Mulliken analysis of 3β,6β-dichloro-5α-hydroxy-5α-cholestane

    NASA Astrophysics Data System (ADS)

    Alam, Mahboob; Park, Soonheum

    2018-05-01

    The synthesis of 3β,6β-dichloro-5α-hydroxy-5α-cholestane (in general, a steroidal chlorohydrin or steroidal halohydrin) and a theoretical study of its structure are reported in this paper. The identity of the chlorohydrin was confirmed by FT-IR, NMR, MS, CHN microanalysis and X-ray crystallography. DFT calculations on the title molecule have been performed. The molecular structure and spectra explained by Gaussian hybrid computational analysis (B3LYP) are found to correlate with the experimental data obtained from the various spectrophotometric techniques. The theoretical geometry optimization data were compared with the X-ray data. The vibrational bands appearing in the FT-IR spectrum are assigned with accuracy using harmonic frequencies along with intensities and animated modes. Molecular properties such as NBO, HOMO-LUMO analysis, chemical reactivity descriptors, MEP mapping and dipole moment have been dealt with at the same level of theory. The calculated electronic spectrum of the chlorohydrin is interpreted on the basis of TD-DFT calculations.

  6. Conformational, structural, vibrational and quantum chemical analysis on 4-aminobenzohydrazide and 4-hydroxybenzohydrazide--a comparative study.

    PubMed

    Arjunan, V; Jayaprakash, A; Carthigayan, K; Periandy, S; Mohan, S

    2013-05-01

    Experimental and theoretical quantum chemical studies were carried out on 4-hydroxybenzohydrazide (4HBH) and 4-aminobenzohydrazide (4ABH) using FTIR and FT-Raman spectral data. The structural characterization and vibrational spectroscopic analysis were performed by quantum chemical methods with the hybrid exchange-correlation functional B3LYP using the 6-31G(**), 6-311++G(**) and aug-cc-pVDZ basis sets. The most stable conformers of the title compounds have been determined from the analysis of the potential energy surface. The stable molecular geometries, electronic and thermodynamic parameters, IR intensities, harmonic vibrational frequencies, depolarisation ratios and Raman intensities have been computed. Molecular electrostatic potentials and frontier molecular orbitals were constructed to understand the electronic properties. The potential energy distributions (PEDs) were calculated to explain the mixing of fundamental modes. The theoretical geometrical parameters and fundamental frequencies were compared with the experimental values. The effects of hydroxy and amino group substitution on the characteristic vibrations of the ring and hydrazide group have been analysed. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Fault Diagnosis of Induction Machines in a Transient Regime Using Current Sensors with an Optimized Slepian Window.

    PubMed

    Burriel-Valencia, Jordi; Puche-Panadero, Ruben; Martinez-Roman, Javier; Sapena-Bano, Angel; Pineda-Sanchez, Manuel

    2018-01-06

    The aim of this paper is to introduce a new methodology for the fault diagnosis of induction machines working in the transient regime, when time-frequency analysis tools are used. The proposed method relies on the use of the optimized Slepian window for performing the short time Fourier transform (STFT) of the stator current signal. It is shown that for a given sequence length of finite duration, the Slepian window has the maximum concentration of energy, greater than can be reached with a gated Gaussian window, which is usually used as the analysis window. In this paper, the use and optimization of the Slepian window for fault diagnosis of induction machines is theoretically introduced and experimentally validated through the test of a 3.15-MW induction motor with broken bars during the start-up transient. The theoretical analysis and the experimental results show that the use of the Slepian window can highlight the fault components in the current's spectrogram with a significant reduction of the required computational resources.

  8. Experimental and computational study on molecular structure and vibrational analysis of a modified biomolecule: 5-Bromo-2'-deoxyuridine

    NASA Astrophysics Data System (ADS)

    Çırak, Çağrı; Sert, Yusuf; Ucun, Fatih

    In the present study, the experimental and theoretical vibrational spectra of 5-bromo-2'-deoxyuridine were investigated. The experimental FT-IR (400-4000 cm-1) and μ-Raman spectra (100-4000 cm-1) of the molecule in the solid phase were recorded. Theoretical vibrational frequencies and geometric parameters (bond lengths and bond angles) were calculated using the ab initio Hartree-Fock (HF) and density functional B3LYP methods with the 6-31G(d), 6-31G(d,p), 6-311++G(d) and 6-311++G(d,p) basis sets using the Gaussian program, for the first time. The assignments of the vibrational frequencies were performed via the potential energy distribution using the VEDA 4 program. The optimized geometric parameters and theoretical vibrational frequencies were compared with the corresponding experimental data and were seen to be in good agreement with each other. Also, the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies were determined.

  9. Theoretical and computational analyses of LNG evaporator

    NASA Astrophysics Data System (ADS)

    Chidambaram, Palani Kumar; Jo, Yang Myung; Kim, Heuy Dong

    2017-04-01

    Theoretical and numerical analysis of the fluid flow and heat transfer inside a LNG evaporator is conducted in this work. Methane is used instead of LNG as the operating fluid because methane constitutes over 80% of natural gas. Analytical calculations using simple mass and energy balance equations are performed to assess the pressure and temperature variations in the steam tube. Multiphase numerical simulations are performed by solving the governing equations (continuity, momentum and energy) in a portion of the evaporator domain consisting of a single steam pipe. The flow equations are solved along with equations of species transport. Multiphase behavior is modeled using the VOF method. Liquid methane is the primary phase; it vaporizes into the secondary phase, gaseous methane. Steam is another secondary phase which flows through the heating coils. Turbulence is modeled by a two-equation turbulence model. The theoretical and numerical predictions are seen to match well with each other. Further parametric studies are planned based on the current research.
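
    A back-of-the-envelope version of the mass/energy balance mentioned above: the heat duty needed to take methane from subcooled liquid to superheated vapor near atmospheric pressure, split into sensible and latent contributions. The property values are rounded textbook figures, assumed for illustration, not the study's data.

    ```python
    # A sketch of the simple energy balance: Q = m * (sensible liquid heat
    # + latent heat of vaporization + sensible vapor heat). All property
    # values below are approximate assumptions for methane near 1 atm.
    M_DOT = 2.0                                 # mass flow rate, kg/s
    T_IN, T_SAT, T_OUT = 108.0, 111.7, 280.0    # temperatures, K
    CP_LIQ, CP_GAS = 3.5e3, 2.2e3               # J/(kg K), approximate
    H_FG = 5.11e5                               # J/kg, latent heat (approx.)

    q_sensible_liq = M_DOT * CP_LIQ * (T_SAT - T_IN)
    q_latent = M_DOT * H_FG
    q_sensible_gas = M_DOT * CP_GAS * (T_OUT - T_SAT)
    q_total = q_sensible_liq + q_latent + q_sensible_gas
    print(f"total heat duty ~ {q_total / 1e6:.2f} MW")
    ```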

  10. Algorithms for computing the time-corrected instantaneous frequency (reassigned) spectrogram, with applications.

    PubMed

    Fulop, Sean A; Fitz, Kelly

    2006-01-01

    A modification of the spectrogram (log magnitude of the short-time Fourier transform) to more accurately show the instantaneous frequencies of signal components was first proposed in 1976 [Kodera et al., Phys. Earth Planet. Inter. 12, 142-150 (1976)], and has been considered or reinvented a few times since but never widely adopted. This paper presents a unified theoretical picture of this time-frequency analysis method, the time-corrected instantaneous frequency spectrogram, together with detailed implementable algorithms comparing three published techniques for its computation. The new representation is evaluated against the conventional spectrogram for its superior ability to track signal components. The lack of a uniform framework for either mathematics or implementation details which has characterized the disparate literature on the schemes has been remedied here. Fruitful application of the method is shown in the realms of speech phonation analysis, whale song pitch tracking, and additive sound modeling.
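
    The core idea can be sketched with the standard phase-vocoder estimate: correct each bin's nominal frequency by the deviation of the STFT phase increment between frames from its expected value, in the spirit of Kodera's time-corrected instantaneous frequency. This illustrates the principle only; it is not any one of the three published algorithms the paper compares.

    ```python
    # A sketch of channelized instantaneous frequency from STFT phase
    # increments: the deviation from the bin's nominal phase advance per
    # hop yields the corrected frequency for each time-frequency cell.
    import numpy as np
    from scipy.signal import stft

    fs, hop, nseg = 8000, 64, 512
    t = np.arange(0, 1.0, 1 / fs)
    x = np.sin(2 * np.pi * 1234.5 * t)             # tone off the bin grid

    f, tt, z = stft(x, fs=fs, nperseg=nseg, noverlap=nseg - hop)
    dphi = np.diff(np.angle(z), axis=1)
    expected = 2 * np.pi * f[:, None] * hop / fs   # nominal phase advance
    dev = np.angle(np.exp(1j * (dphi - expected)))  # wrap to (-pi, pi]
    inst_freq = f[:, None] + dev * fs / (2 * np.pi * hop)

    k = np.argmax(np.abs(z[:, 10]))                # strongest bin, frame 10
    print(f[k], inst_freq[k, 10])                  # nominal vs corrected (~1234.5)
    ```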

  11. Implementing finite state machines in a computer-based teaching system

    NASA Astrophysics Data System (ADS)

    Hacker, Charles H.; Sitte, Renate

    1999-09-01

    Finite State Machines (FSM) are models for functions commonly implemented in digital circuits such as timers, remote controls, and vending machines. Teaching FSM is core to the curriculum of many university digital electronics or discrete mathematics subjects. Students often have difficulties grasping the theoretical concepts in the design and analysis of FSM. This has prompted the author to develop MS-Windows(TM)-compatible software, WinState, that provides a tutorial-style teaching aid for understanding the mechanisms of FSM. The animated computer screen is ideal for visually conveying the required design and analysis procedures. WinState complements other software for combinatorial logic previously developed by the author, and enhances the existing teaching package by adding sequential logic circuits. WinState enables the construction of a student's own FSM, which can be simulated to test the design for functionality and possible errors.

  12. Dimensionless embedding for nonlinear time series analysis

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; Aihara, Kazuyuki

    2017-09-01

    Recently, infinite-dimensional delay coordinates (InDDeCs) have been proposed for predicting high-dimensional dynamics instead of conventional delay coordinates. Although InDDeCs can realize faster computation and more accurate short-term prediction, it is still not well-known whether InDDeCs can be used in other applications of nonlinear time series analysis in which reconstruction is needed for the underlying dynamics from a scalar time series generated from a dynamical system. Here, we give theoretical support for justifying the use of InDDeCs and provide numerical examples to show that InDDeCs can be used for various applications for obtaining the recurrence plots, correlation dimensions, and maximal Lyapunov exponents, as well as testing directional couplings and extracting slow-driving forces. We demonstrate performance of the InDDeCs using the weather data. Thus, InDDeCs can eventually realize "dimensionless embedding" while we enjoy faster and more reliable computations.
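
    For contrast with InDDeCs, the conventional baseline they generalize can be sketched in a few lines: fixed-dimension delay-coordinate reconstruction followed by a thresholded recurrence plot. The embedding dimension, delay, and threshold below are arbitrary illustrative choices, not values from the paper.

    ```python
    # A sketch of conventional delay-coordinate embedding and a
    # recurrence plot, two of the analyses mentioned above. InDDeCs
    # replace the fixed (m, tau) choice; this shows only the baseline.
    import numpy as np

    def delay_embed(x, m, tau):
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

    t = np.arange(0, 50, 0.05)
    x = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

    emb = delay_embed(x, m=3, tau=10)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    recurrence = dists < 0.3 * dists.max()         # thresholded recurrence plot
    print(emb.shape, recurrence.mean())
    ```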

  13. Analysis on trust influencing factors and trust model from multiple perspectives of online Auction

    NASA Astrophysics Data System (ADS)

    Yu, Wang

    2017-10-01

    Current reputation models largely neglect online auction trading, so they cannot fully reflect the reputation status of users and may cause operability problems. To evaluate user trust in online auctions correctly, a trust computing model based on multiple influencing factors is established. It aims to overcome the inefficiency of current trust computing methods and the limitations of traditional theoretical trust models. The improved model comprehensively considers the trust evaluation factors of three types of participants, according to the different participation modes of online auction users, to improve the accuracy, effectiveness, and robustness of the trust degree. The experiments test the efficiency and performance of the model under different proportions of malicious users, in environments like eBay and the Sporas model. The analysis of the experimental results shows that the model proposed in this paper makes up for the deficiencies of existing models and has better feasibility.

  14. 2016 Energetic Materials Gordon Research Conference and Gordon Research Seminar Research Area 7: Chemical Sciences 7.0 Chemical Sciences (Dr. James K. Parker)

    DTIC Science & Technology

    2016-08-10

    thermal decomposition and mechanical damage of energetics. The program for the meeting included nine oral presentation sessions. ... were synthesis of new materials, performance, advanced diagnostics, experimental techniques, theoretical approaches, and computational models for ...

  15. Modelling of eddy currents related to large angle magnetic suspension test fixture

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.; Foster, Lucas E.

    1994-01-01

    This report presents a preliminary analysis of the mathematical modelling of eddy current effects in a large-gap magnetic suspension system. It is shown that eddy currents can significantly affect the dynamic behavior and control of these systems, but are amenable to measurement and modelling. A theoretical framework is presented, together with a comparison of computed and experimental data related to the Large Angle Magnetic Suspension Test Fixture at NASA Langley Research Center.

  16. Theoretical and experimental studies on vibrational and nonlinear optic properties of guanidinium 3-nitrobenzoate. Differences and similarity between guanidinium 3-nitrobenzoate and guanidinium 4-nitrobenzoate complexes

    NASA Astrophysics Data System (ADS)

    Drozd, Marek

    2018-03-01

    According to literature data, two structures of guanidine with nitrobenzoic acids are known. For guanidinium 4-nitrobenzoate, detailed studies of the X-ray structure and of the vibrational and theoretical properties have been performed; this compound was classified as a second harmonic generator with an efficiency 3.3 times that of KDP, the standard crystal. In contrast, for guanidinium 3-nitrobenzoate only a basic X-ray diffraction study had been performed. Starting from the established crystallographic results, a detailed theoretical investigation of the geometry and vibrational properties was carried out. From these calculations the equilibrium geometry of the investigated molecule was established, and detailed computational studies of the vibrational properties were performed. The theoretical IR and Raman frequencies, intensities, and PED analysis are presented. Additionally, the NBO charges, HOMO and LUMO shapes, and NLO properties of the title crystal were calculated. On the basis of these results the crystal was classified as a second-order NLO generator with higher efficiency than the guanidinium 4-nitrobenzoate compound. The obtained data are compared with experimental crystallographic and vibrational results for the real crystal of guanidinium 3-nitrobenzoate, and the theoretical vibrational spectra are compared with literature calculations for the guanidinium 4-nitrobenzoate compound.

  17. A general method for calculating three-dimensional compressible laminar and turbulent boundary layers on arbitrary wings

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Ramsey, J. A.

    1977-01-01

    The method described utilizes a nonorthogonal coordinate system for boundary-layer calculations. It includes a geometry program that represents the wing analytically, and a velocity program that computes the external velocity components from a given experimental pressure distribution when the external velocity distribution is not computed theoretically. The boundary-layer method is general, however, and can also be used for an external velocity distribution computed theoretically. Several test cases were computed by this method, and the results were checked against other numerical calculations and against experiments when available. A typical computation time (CPU) on an IBM 370/165 computer for one surface of a wing, which roughly consists of 30 spanwise stations and 25 streamwise stations with 30 points across the boundary layer, is less than 30 seconds for an incompressible flow and a little more for a compressible flow.

  18. Methods of automatic nucleotide-sequence analysis. Multicomponent spectrophotometric analysis of mixtures of nucleic acid components by a least-squares procedure

    PubMed Central

    Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.

    1965-01-01

    1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the 'library' of spectra used to fit the experimental curves, have been computed for a number of 'libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
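
    The least-squares step described above reduces to an ordinary linear problem: with the library spectra as columns of a matrix A and a measured mixture spectrum b, solve for the concentration vector c minimizing |Ac - b|. The synthetic Gaussian 'spectra' below are assumptions for illustration; note, as in the paper, that the error coefficients depend only on the library A.

    ```python
    # A sketch of multicomponent spectrophotometric analysis by least
    # squares: recover component concentrations from a mixture spectrum.
    import numpy as np

    wavelengths = np.linspace(230, 300, 71)

    def band(center, width):
        """Toy Gaussian extinction 'spectrum' for one nucleoside."""
        return np.exp(-((wavelengths - center) / width) ** 2)

    A = np.column_stack([band(260, 12), band(252, 10), band(271, 11)])
    c_true = np.array([0.5, 0.3, 0.2])
    b = A @ c_true + 1e-3 * np.random.default_rng(1).normal(size=wavelengths.size)

    c_est, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(c_est)                         # close to [0.5, 0.3, 0.2]

    # Error coefficients depend only on the library A, via inv(A^T A).
    cov_factor = np.linalg.inv(A.T @ A)
    print(np.sqrt(np.diag(cov_factor)))
    ```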

  19. Network meta-analysis, electrical networks and graph theory.

    PubMed

    Rücker, Gerta

    2012-12-01

    Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd.
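
    The electrical analogy described above can be sketched numerically: build the weighted graph Laplacian of the comparison network, take its Moore-Penrose pseudoinverse, read consistent treatment effects off the resulting potentials, and read variances off effective resistances. The four-treatment network and its edge data below are made up for illustration.

    ```python
    # A sketch of the Laplacian/pseudoinverse computation: treatments are
    # nodes, randomized comparisons are edges weighted by inverse variance.
    import numpy as np

    n = 4                                       # treatments A, B, C, D
    edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
    weights = np.array([4.0, 2.0, 3.0, 1.0])    # inverse variances
    observed = np.array([0.5, 0.8, 0.4, -0.2])  # observed effects (j minus i)

    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w

    L_plus = np.linalg.pinv(L)                  # Moore-Penrose pseudoinverse

    # Consistent treatment effects (node potentials) from the weighted data:
    b = np.zeros(n)
    for (i, j), w, d in zip(edges, weights, observed):
        b[i] -= w * d; b[j] += w * d
    potentials = L_plus @ b

    # Variance of the A-vs-D comparison = effective resistance between 0 and 3:
    var_ad = L_plus[0, 0] + L_plus[3, 3] - 2 * L_plus[0, 3]
    print(potentials, var_ad)
    ```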

  20. Spectral analysis, vibrational assignments, NBO analysis, NMR, UV-Vis, hyperpolarizability analysis of 2-aminofluorene by density functional theory.

    PubMed

    Jone Pradeepa, S; Sundaraganesan, N

    2014-05-05

    In the present investigation, a combined experimental and theoretical study of the molecular structure, vibrational analysis and NBO analysis of 2-aminofluorene is reported. The FT-IR spectrum was recorded in the range 4000-400 cm(-1). The FT-Raman spectrum was recorded in the range 4000-50 cm(-1). The molecular geometry, vibrational spectra, and natural bond orbital (NBO) analysis were calculated for 2-aminofluorene using Density Functional Theory (DFT) based on the B3LYP/6-31G(d,p) model chemistry. (13)C and (1)H NMR chemical shifts of 2-aminofluorene were calculated using the GIAO method. The computed vibrational and NMR spectra were compared with the experimental results. The total energy distribution (TED) was derived to deepen the understanding of the contributions of the different vibration modes to each wavenumber. The experimental UV-Vis spectrum was recorded in the region 400-200 nm and correlated with spectra simulated with a suitably solvated B3LYP/6-31G(d,p) model. The HOMO-LUMO energies were computed with the time-dependent DFT approach. The nonlinearity of the title compound was confirmed by hyperpolarizability examination. The Molecular Electrostatic Potential (MEP) was also investigated using theoretical calculations. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. A combined experimental and DFT study of a novel unsymmetrical azine 2-(4-methoxybenzylidene)-1-(1-(4-isobutylphenyl) ethylidene) hydrazine

    NASA Astrophysics Data System (ADS)

    Vijaya, P.; Sankaran, K. R.

    2015-03-01

    A novel unsymmetrical azine 2-(4-methoxybenzylidene)-1-(1-(4-isobutylphenyl) ethylidene) hydrazine (UA) was prepared and characterized by IR, 1H and 13C NMR spectral studies. A 2D - potential energy scan (PES) of p-isobutylacetophenone (IBAP) was the portal to the conformational analysis of UA by density functional theory (DFT) methods using 6-31G(d,p) basis set by Gaussian 03 program. The theoretical IR frequencies were found to be in good agreement with the experimental values. The IR frequencies of UA were analyzed by means of Potential energy Distribution (PED %) calculation using Vibrational Energy Distribution Analysis (VEDA 4) program. The experimental NMR chemical shift values of UA were compared with the theoretical values obtained by DFT method. Nonlinear optical behavior of the unsymmetrical azine is also examined by the theoretically predicted values of dipole moment (μ), polarizability (α0) and first hyperpolarizability (βtot). Stability of the UA molecule has been analyzed using NBO analysis. The electrochemistry of UA studied experimentally by cyclic voltammetry is complemented by the computational analysis of the anionic form of the molecule UA. The determination of various global and local reactivity descriptors in the context of chemical reactivity is also performed and the electrophilicity at the vital atomic sites in UA is revealed. Bader's Atoms in molecules (AIM) theory of UA indicated the presence of intramolecular hydrogen bonding in the molecule. The molecular electrostatic potential (MEP) and HOMO-LUMO orbital analysis are also performed for the molecule UA.

  2. A combined experimental and DFT study of a novel unsymmetrical azine 2-(4-methoxybenzylidene)-1-(1-(4-isobutylphenyl) ethylidene) hydrazine.

    PubMed

    Vijaya, P; Sankaran, K R

    2015-03-05

    A novel unsymmetrical azine 2-(4-methoxybenzylidene)-1-(1-(4-isobutylphenyl) ethylidene) hydrazine (UA) was prepared and characterized by IR, (1)H and (13)C NMR spectral studies. A 2D - potential energy scan (PES) of p-isobutylacetophenone (IBAP) was the portal to the conformational analysis of UA by density functional theory (DFT) methods using 6-31G(d,p) basis set by Gaussian 03 program. The theoretical IR frequencies were found to be in good agreement with the experimental values. The IR frequencies of UA were analyzed by means of Potential energy Distribution (PED %) calculation using Vibrational Energy Distribution Analysis (VEDA 4) program. The experimental NMR chemical shift values of UA were compared with the theoretical values obtained by DFT method. Nonlinear optical behavior of the unsymmetrical azine is also examined by the theoretically predicted values of dipole moment (μ), polarizability (α0) and first hyperpolarizability (βtot). Stability of the UA molecule has been analyzed using NBO analysis. The electrochemistry of UA studied experimentally by cyclic voltammetry is complemented by the computational analysis of the anionic form of the molecule UA. The determination of various global and local reactivity descriptors in the context of chemical reactivity is also performed and the electrophilicity at the vital atomic sites in UA is revealed. Bader's Atoms in molecules (AIM) theory of UA indicated the presence of intramolecular hydrogen bonding in the molecule. The molecular electrostatic potential (MEP) and HOMO-LUMO orbital analysis are also performed for the molecule UA. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Cortical Neural Computation by Discrete Results Hypothesis

    PubMed Central

    Castejon, Carlos; Nuñez, Angel

    2016-01-01

    One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of the cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called “Discrete Results” (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of “Discrete Results” are the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel “Discrete Results” concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast-spiking (FS) interneurons may be a key element in our hypothesis, providing the basis for this computation. PMID:27807408

  4. Cortical Neural Computation by Discrete Results Hypothesis.

    PubMed

    Castejon, Carlos; Nuñez, Angel

    2016-01-01

    One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of the cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called "Discrete Results" (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of "Discrete Results" are the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel "Discrete Results" concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast-spiking (FS) interneurons may be a key element in our hypothesis, providing the basis for this computation.

  5. Analysis of basic clustering algorithms for numerical estimation of statistical averages in biomolecules.

    PubMed

    Anandakrishnan, Ramu; Onufriev, Alexey

    2008-03-01

    In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between error bound and root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
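
    The most basic clustering approximation described above can be sketched on a toy Ising-like chain: the exact Boltzmann average sums over all 2^N microstates, while the clustered version treats each block of spins exactly and ignores the couplings between blocks. The chain length, block size, and temperature below are arbitrary illustrative choices.

    ```python
    # A sketch of the basic clustering approximation: exact average over
    # 2^12 microstates vs. three independent 4-spin blocks (3 x 2^4 states).
    from itertools import product
    import math

    J, BETA, N, BLOCK = 1.0, 0.7, 12, 4

    def average_energy(n_spins, coupling_pairs):
        """Boltzmann-averaged energy of a small spin system, by full enumeration."""
        z = e_avg = 0.0
        for s in product((-1, 1), repeat=n_spins):
            e = -J * sum(s[i] * s[j] for i, j in coupling_pairs)
            w = math.exp(-BETA * e)
            z += w; e_avg += w * e
        return e_avg / z

    # Exact: all nearest-neighbor couplings along the chain.
    exact = average_energy(N, [(i, i + 1) for i in range(N - 1)]) / N

    # Clustered: per-block average, inter-block couplings dropped.
    block = average_energy(BLOCK, [(i, i + 1) for i in range(BLOCK - 1)])
    clustered = (N // BLOCK) * block / N

    print(exact, clustered)   # clustered approximation vs. exact per-site energy
    ```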

  6. Estimation of wing nonlinear aerodynamic characteristics at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Carlson, H. W.; Mack, R. J.

    1980-01-01

    A computational system for estimation of nonlinear aerodynamic characteristics of wings at supersonic speeds was developed and was incorporated in a computer program. This corrected linearized theory method accounts for nonlinearities in the variation of basic pressure loadings with local surface slopes, predicts the degree of attainment of theoretical leading edge thrust, and provides an estimate of detached leading edge vortex loadings that result when the theoretical thrust forces are not fully realized.

  7. Structural, vibrational spectroscopic and quantum chemical studies on indole-3-carboxaldehyde

    NASA Astrophysics Data System (ADS)

    Premkumar, R.; Asath, R. Mohamed; Mathavan, T.; Benial, A. Milton Franklin

    2017-05-01

    The potential energy surface (PES) scan was performed for indole-3-carboxaldehyde (ICA) and the most stable optimized conformer was predicted using DFT/B3LYP method with 6-31G basis set. The vibrational frequencies of ICA were theoretically calculated by the DFT/B3LYP method with cc-pVTZ basis set using Gaussian 09 program. The vibrational spectra were experimentally recorded by Fourier transform-infrared (FT-IR) and Fourier transform-Raman spectrometer (FT-Raman). The computed vibrational frequencies were scaled by scaling factors to yield a good agreement with observed vibrational frequencies. The theoretically calculated and experimentally observed vibrational frequencies were assigned on the basis of potential energy distribution (PED) calculation using VEDA 4.0 program. The molecular interaction, stability and intramolecular charge transfer of ICA were studied using frontier molecular orbitals (FMOs) analysis and Mulliken atomic charge distribution shows the distribution of the atomic charges. The presence of intramolecular charge transfer was studied using natural bond orbital (NBO) analysis.

  8. Theoretical Analysis of Rain Attenuation Probability

    NASA Astrophysics Data System (ADS)

    Roy, Surendra Kr.; Jha, Santosh Kr.; Jha, Lallan

    2007-07-01

    Satellite communication technologies are now highly developed, and high-quality, distance-independent services have expanded over a very wide area. In the system design of the Hokkaido integrated telecommunications (HIT) network, outages of satellite links due to rain attenuation in Ka frequency bands must first be overcome. In this paper a theoretical analysis of rain attenuation probability on a slant path has been made. The proposed formula is based on the Weibull distribution and incorporates recent ITU-R recommendations concerning the necessary rain rate and rain height inputs. The error behaviour of the model was tested against the rain attenuation prediction model recommended by ITU-R for a large number of experiments at different probability levels. The novel slant-path rain attenuation prediction model, compared to the ITU-R one, exhibits a similar behaviour at low time percentages and a better root-mean-square error performance for probability levels above 0.02%. The presented models have the advantage of implementation with little complexity and are considered useful for educational and back-of-the-envelope computations.
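
    The distributional assumption named above can be sketched directly: model the exceedance probability of rain attenuation with a Weibull law, P(A > a) = exp(-(a/scale)^shape), and invert it to obtain the attenuation exceeded for a given fraction of time. The shape and scale values below are assumed for illustration, not the paper's fitted parameters.

    ```python
    # A sketch of a Weibull exceedance model for slant-path rain
    # attenuation; shape/scale are illustrative assumptions.
    import math

    def exceedance(a_db, shape, scale):
        """Fraction of time the attenuation exceeds a_db decibels."""
        return math.exp(-((a_db / scale) ** shape))

    def attenuation_for_probability(p, shape, scale):
        """Invert the exceedance law: attenuation exceeded p of the time."""
        return scale * (-math.log(p)) ** (1.0 / shape)

    shape, scale = 0.8, 1.5            # assumed fit to a slant-path data set
    p = 1e-4                           # 0.01% of an average year
    print(attenuation_for_probability(p, shape, scale), "dB")
    print(exceedance(10.0, shape, scale))   # fraction of time A > 10 dB
    ```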

  9. Critically evaluating the theory and performance of Bayesian analysis of macroevolutionary mixtures

    PubMed Central

    Moore, Brian R.; Höhna, Sebastian; May, Michael R.; Rannala, Bruce; Huelsenbeck, John P.

    2016-01-01

    Bayesian analysis of macroevolutionary mixtures (BAMM) has recently taken the study of lineage diversification by storm. BAMM estimates the diversification-rate parameters (speciation and extinction) for every branch of a study phylogeny and infers the number and location of diversification-rate shifts across branches of a tree. Our evaluation of BAMM reveals two major theoretical errors: (i) the likelihood function (which estimates the model parameters from the data) is incorrect, and (ii) the compound Poisson process prior model (which describes the prior distribution of diversification-rate shifts across branches) is incoherent. Using simulation, we demonstrate that these theoretical issues cause statistical pathologies; posterior estimates of the number of diversification-rate shifts are strongly influenced by the assumed prior, and estimates of diversification-rate parameters are unreliable. Moreover, the inability to correctly compute the likelihood or to correctly specify the prior for rate-variable trees precludes the use of Bayesian approaches for testing hypotheses regarding the number and location of diversification-rate shifts using BAMM. PMID:27512038

  10. Monolithic ceramic analysis using the SCARE program

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.

    1988-01-01

    The Structural Ceramics Analysis and Reliability Evaluation (SCARE) computer program calculates the fast fracture reliability of monolithic ceramic components. The code is a post-processor to the MSC/NASTRAN general purpose finite element program. The SCARE program automatically accepts the MSC/NASTRAN output necessary to compute reliability. This includes element stresses, temperatures, volumes, and areas. The SCARE program computes two-parameter Weibull strength distributions from input fracture data for both volume and surface flaws. The distributions can then be used to calculate the reliability of geometrically complex components subjected to multiaxial stress states. Several fracture criteria and flaw types are available for selection by the user, including out-of-plane crack extension theories. The theoretical basis for the reliability calculations was proposed by Batdorf. These models combine linear elastic fracture mechanics (LEFM) with Weibull statistics to provide a mechanistic failure criterion. Other fracture theories included in SCARE are the normal stress averaging technique and the principle of independent action. The objective of this presentation is to summarize these theories, including their limitations and advantages, and to provide a general description of the SCARE program, along with example problems.
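
    The weakest-link calculation underlying codes of this kind can be sketched as follows: each element contributes a risk of rupture from a two-parameter Weibull volume-flaw model, and the component reliability is the product of the elemental survival probabilities. The stresses, volumes, and Weibull parameters below are made up, and SCARE's actual multiaxial criteria (e.g. Batdorf's) are richer than this uniaxial sketch.

    ```python
    # A sketch of two-parameter Weibull fast-fracture reliability:
    # survival probability of each element is exp(-V * (sigma/sigma_0)^m),
    # and the component survives only if every element survives.
    import math

    M = 10.0          # Weibull modulus (assumed)
    SIGMA_0 = 400.0   # characteristic strength; units chosen so that
                      # the risk-of-rupture integral below is dimensionless

    elements = [      # (max principal stress in MPa, volume in m^3)
        (250.0, 2e-6),
        (180.0, 5e-6),
        (310.0, 1e-6),
    ]

    log_survival = 0.0
    for stress, volume in elements:
        risk = volume * (stress / SIGMA_0) ** M    # risk-of-rupture term
        log_survival += -risk
    reliability = math.exp(log_survival)
    print(f"fast-fracture reliability ~ {reliability:.6f}")
    ```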

  11. Computational carbohydrate chemistry: what theoretical methods can tell us

    PubMed Central

    Woods, Robert J.

    2014-01-01

    Computational methods have had a long history of application to carbohydrate systems and their development in this regard is discussed. The conformational analysis of carbohydrates differs in several ways from that of other biomolecules. Many glycans appear to exhibit numerous conformations coexisting in solution at room temperature and a conformational analysis of a carbohydrate must address both spatial and temporal properties. When solution nuclear magnetic resonance data are used for comparison, the simulation must give rise to ensemble-averaged properties. In contrast, when comparing to experimental data obtained from crystal structures a simulation of a crystal lattice, rather than of an isolated molecule, is appropriate. Molecular dynamics simulations are well suited for such condensed phase modeling. Interactions between carbohydrates and other biological macromolecules are also amenable to computational approaches. Having obtained a three-dimensional structure of the receptor protein, it is possible to model with accuracy the conformation of the carbohydrate in the complex. An example of the application of free energy perturbation simulations to the prediction of carbohydrate-protein binding energies is presented. PMID:9579797

  12. GVIPS Models and Software

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Gendy, Atef; Saleeb, Atef F.; Mark, John; Wilt, Thomas E.

    2007-01-01

    Two reports discuss, respectively, (1) the generalized viscoplasticity with potential structure (GVIPS) class of mathematical models and (2) the Constitutive Material Parameter Estimator (COMPARE) computer program. GVIPS models are constructed within a thermodynamics- and potential-based theoretical framework, wherein one uses internal state variables and derives constitutive equations for both the reversible (elastic) and the irreversible (viscoplastic) behaviors of materials. Because of the underlying potential structure, GVIPS models not only capture a variety of material behaviors but also are very computationally efficient. COMPARE comprises (1) an analysis core and (2) a C++-language subprogram that implements a Windows-based graphical user interface (GUI) for controlling the core. The GUI relieves the user of the sometimes tedious task of preparing data for the analysis core, freeing the user to concentrate on the task of fitting experimental data and ultimately obtaining a set of material parameters. The analysis core consists of three modules: one for GVIPS material models, an analysis module containing a specialized finite-element solution algorithm, and an optimization module. COMPARE solves the problem of finding GVIPS material parameters in the manner of a design-optimization problem in which the parameters are the design variables.

  13. Using text analysis to quantify the similarity and evolution of scientific disciplines

    PubMed Central

    Dias, Laércio; Gerlach, Martin; Scharloth, Joachim; Altmann, Eduardo G

    2018-01-01

    We use an information-theoretic measure of linguistic similarity to investigate the organization and evolution of scientific fields. An analysis of almost 20 M papers from the past three decades reveals that the linguistic similarity is related to, but different from, expert- and citation-based classifications, leading to an improved view of the organization of science. A temporal analysis of the similarity of fields shows that some fields (e.g. computer science) are becoming increasingly central, but that on average the similarity between pairs of disciplines has not changed in recent decades. This suggests that tendencies of convergence (e.g. multi-disciplinarity) and divergence (e.g. specialization) of disciplines are in balance. PMID:29410857
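
    A common information-theoretic similarity of this kind compares the word-frequency distributions of two corpora with the Jensen-Shannon divergence; a minimal sketch (an assumed measure for illustration, not necessarily the paper's exact estimator):

        import numpy as np
        from collections import Counter

        def jensen_shannon_divergence(tokens_a, tokens_b):
            """JSD between the word-frequency distributions of two token lists."""
            ca, cb = Counter(tokens_a), Counter(tokens_b)
            vocab = sorted(set(ca) | set(cb))
            p = np.array([ca[w] for w in vocab], float); p /= p.sum()
            q = np.array([cb[w] for w in vocab], float); q /= q.sum()
            m = 0.5 * (p + q)
            def kl(x, y):
                mask = x > 0
                return np.sum(x[mask] * np.log2(x[mask] / y[mask]))
            return 0.5 * kl(p, m) + 0.5 * kl(q, m)  # in bits; 0 = identical

        print(jensen_shannon_divergence("the graph spectral method".split(),
                                        "the neural network method".split()))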

  14. Cooling of Gas Turbines. 6; Computed Temperature Distribution Through Cross Section of Water-Cooled Turbine Blade

    NASA Technical Reports Server (NTRS)

    Livingood, John N. B.; Sams, Eldon W.

    1947-01-01

    A theoretical analysis of the cross-sectional temperature distribution of a water-cooled turbine blade was made using the relaxation method to solve the differential equation derived from the analysis. The analysis was applied to a specific turbine blade, and the studies included investigations of the accuracy of simple methods for determining the temperature distribution along the mean line of the rear part of the blade, of the possible effect of varying the perimetric distribution of the hot-gas-to-metal heat-transfer coefficient, and of the effect of changing the thermal conductivity of the blade metal for a blade of constant cross-sectional area with two quarter-inch-diameter coolant passages.
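
    The relaxation method referred to here is the classical iterative solution of the steady heat-conduction equation on a grid. A minimal two-dimensional sketch (a rectangular toy cross-section with assumed boundary temperatures, not the blade geometry of the report):

        import numpy as np

        # Toy blade cross-section on a rectangular grid; real geometry is irregular.
        nx, ny = 40, 20
        T = np.full((ny, nx), 400.0)       # initial guess, deg C (assumed)
        T[0, :] = 800.0                    # hot-gas surface (assumed value)
        T[-1, :] = 150.0                   # coolant-passage surface (assumed value)
        T[:, 0] = T[:, -1] = 500.0         # side boundaries (assumed values)

        # Gauss-Seidel relaxation of the interior until the update stalls.
        for sweep in range(10_000):
            delta = 0.0
            for j in range(1, ny - 1):
                for i in range(1, nx - 1):
                    new = 0.25 * (T[j-1, i] + T[j+1, i] + T[j, i-1] + T[j, i+1])
                    delta = max(delta, abs(new - T[j, i]))
                    T[j, i] = new
            if delta < 1e-4:
                break
        print(f"converged after {sweep} sweeps; midline T = {T[ny//2, nx//2]:.1f} C")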

  15. Using text analysis to quantify the similarity and evolution of scientific disciplines.

    PubMed

    Dias, Laércio; Gerlach, Martin; Scharloth, Joachim; Altmann, Eduardo G

    2018-01-01

    We use an information-theoretic measure of linguistic similarity to investigate the organization and evolution of scientific fields. An analysis of almost 20 M papers from the past three decades reveals that the linguistic similarity is related to, but different from, expert- and citation-based classifications, leading to an improved view of the organization of science. A temporal analysis of the similarity of fields shows that some fields (e.g. computer science) are becoming increasingly central, but that on average the similarity between pairs of disciplines has not changed in recent decades. This suggests that tendencies of convergence (e.g. multi-disciplinarity) and divergence (e.g. specialization) of disciplines are in balance.

  16. TADS: A CFD-based turbomachinery analysis and design system with GUI. Volume 1: Method and results

    NASA Technical Reports Server (NTRS)

    Topp, D. A.; Myers, R. A.; Delaney, R. A.

    1995-01-01

    The primary objective of this study was the development of a CFD (Computational Fluid Dynamics) based turbomachinery airfoil analysis and design system, controlled by a GUI (Graphical User Interface). The computer codes resulting from this effort are referred to as TADS (Turbomachinery Analysis and Design System). This document is the Final Report describing the theoretical basis and analytical results from the TADS system, developed under Task 18 of NASA Contract NAS3-25950, ADPAC System Coupling to Blade Analysis & Design System GUI. TADS couples a throughflow solver (ADPAC) with a quasi-3D blade-to-blade solver (RVCQ3D) in an interactive package. Throughflow analysis capability was developed in ADPAC through the addition of blade force and blockage terms to the governing equations. A GUI was developed to simplify user input and automate the many tasks required to perform turbomachinery analysis and design. The coupling of the various programs was done in such a way that alternative solvers or grid generators could be easily incorporated into the TADS framework. Results of aerodynamic calculations using the TADS system are presented for a highly loaded fan, a compressor stator, a low speed turbine blade and a transonic turbine vane.

  17. Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity

    PubMed Central

    Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang

    2013-01-01

    The principles by which networks of neurons compute, and how spike-timing-dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941
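
    The Expectation Maximization principle invoked here can be illustrated in a rate-based caricature: online EM for a mixture of Bernoulli "hidden causes" over binary input vectors, with a soft winner-take-all posterior. This is a generic sketch of the fitting principle, not the paper's spiking STDP rule:

        import numpy as np

        rng = np.random.default_rng(1)
        K, D, eta = 3, 20, 0.05        # hidden causes, input dims, learning rate

        # Synthetic binary "spike pattern" inputs drawn from 3 hidden prototypes.
        protos = rng.random((K, D)) < 0.5
        data = np.array([protos[rng.integers(K)] ^ (rng.random(D) < 0.1)
                         for _ in range(2000)], dtype=float)

        W = rng.normal(0.0, 0.1, (K, D))   # log-odds weights (implicit generative model)
        b = np.zeros(K)                    # log-priors (excitability analogue)

        for x in data:
            # E-step: posterior over causes = soft winner-take-all competition.
            u = W @ x + b
            p = np.exp(u - u.max()); p /= p.sum()
            # M-step (online, stochastic): nudge each cause's Bernoulli model
            # toward the inputs it currently explains.
            W += eta * p[:, None] * (x[None, :] - 1.0 / (1.0 + np.exp(-W)))
            b += eta * (p - np.exp(b) / np.exp(b).sum())

        print("learned prototypes:\n", (W > 0).astype(int))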

  18. Netflow Theory Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozinoski, Radoslav; Winters, William

    2016-01-01

    The purpose of this report is to document the theoretical models utilized by the computer code NETFLOW. This report will focus on the theoretical models used to analyze high Mach number fully compressible transonic flows in piping networks.

  19. Differential geometry based solvation model I: Eulerian formulation

    NASA Astrophysics Data System (ADS)

    Chen, Zhan; Baker, Nathan A.; Wei, G. W.

    2010-11-01

    This paper presents a differential geometry based model for the analysis and computation of the equilibrium property of solvation. Differential geometry theory of surfaces is utilized to define and construct smooth interfaces with good stability and differentiability for use in characterizing the solvent-solute boundaries and in generating continuous dielectric functions across the computational domain. A total free energy functional is constructed to couple polar and nonpolar contributions to the solvation process. Geometric measure theory is employed to rigorously convert a Lagrangian formulation of the surface energy into an Eulerian formulation so as to bring all energy terms onto an equal footing. By optimizing the total free energy functional, we derive the coupled generalized Poisson-Boltzmann equation (GPBE) and generalized geometric flow equation (GGFE) for the electrostatic potential and the construction of realistic solvent-solute boundaries, respectively. By solving the coupled GPBE and GGFE, we obtain the electrostatic potential, the solvent-solute boundary profile, and the smooth dielectric function, and thereby improve the accuracy and stability of implicit solvation calculations. We also design efficient second-order numerical schemes for the solution of the GPBE and GGFE. The matrix system resulting from the discretization of the GPBE is accelerated with appropriate preconditioners. An alternating direction implicit (ADI) scheme is designed to improve the stability of solving the GGFE. Two iterative approaches are designed to solve the coupled system of nonlinear partial differential equations. Extensive numerical experiments are designed to validate the present theoretical model, test computational methods, and optimize numerical algorithms. Example solvation analyses of both small compounds and proteins are carried out to further demonstrate the accuracy, stability, efficiency and robustness of the present new model and numerical approaches. Comparison is given to both experimental and theoretical results in the literature.

  20. Differential geometry based solvation model I: Eulerian formulation

    PubMed Central

    Chen, Zhan; Baker, Nathan A.; Wei, G. W.

    2010-01-01

    This paper presents a differential geometry based model for the analysis and computation of the equilibrium property of solvation. Differential geometry theory of surfaces is utilized to define and construct smooth interfaces with good stability and differentiability for use in characterizing the solvent-solute boundaries and in generating continuous dielectric functions across the computational domain. A total free energy functional is constructed to couple polar and nonpolar contributions to the solvation process. Geometric measure theory is employed to rigorously convert a Lagrangian formulation of the surface energy into an Eulerian formulation so as to bring all energy terms onto an equal footing. By minimizing the total free energy functional, we derive the coupled generalized Poisson-Boltzmann equation (GPBE) and generalized geometric flow equation (GGFE) for the electrostatic potential and the construction of realistic solvent-solute boundaries, respectively. By solving the coupled GPBE and GGFE, we obtain the electrostatic potential, the solvent-solute boundary profile, and the smooth dielectric function, and thereby improve the accuracy and stability of implicit solvation calculations. We also design efficient second-order numerical schemes for the solution of the GPBE and GGFE. The matrix system resulting from the discretization of the GPBE is accelerated with appropriate preconditioners. An alternating direction implicit (ADI) scheme is designed to improve the stability of solving the GGFE. Two iterative approaches are designed to solve the coupled system of nonlinear partial differential equations. Extensive numerical experiments are designed to validate the present theoretical model, test computational methods, and optimize numerical algorithms. Example solvation analyses of both small compounds and proteins are carried out to further demonstrate the accuracy, stability, efficiency and robustness of the present new model and numerical approaches. Comparison is given to both experimental and theoretical results in the literature. PMID:20938489
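
    As a much-reduced illustration of coupling a smooth dielectric profile to the electrostatics, consider a one-dimensional linearized Poisson-Boltzmann equation with a continuously varying dielectric, discretized at second order and relaxed iteratively. All values are illustrative; the papers above solve the full nonlinear GPBE coupled to geometric flow in three dimensions:

        import numpy as np

        # 1D toy domain: solute region near x = 0, solvent outside.
        n, h = 201, 0.05
        x = (np.arange(n) - n // 2) * h
        # Smooth dielectric profile: ~2 inside the solute, ~78 in the solvent.
        eps = 2.0 + 76.0 / (1.0 + np.exp(-(np.abs(x) - 2.0) / 0.2))
        kappa2 = np.where(np.abs(x) > 2.0, 1.0, 0.0)  # screening in solvent only
        rho = np.exp(-x**2 / 0.1)                     # smeared solute charge

        # Conservative 2nd-order discretization of
        #   -d/dx(eps dphi/dx) + kappa2*phi = rho,  phi = 0 at the boundaries,
        # relaxed with Jacobi iteration.
        phi = np.zeros(n)
        eps_p = 0.5 * (eps[1:-1] + eps[2:])   # face-centered dielectric values
        eps_m = 0.5 * (eps[1:-1] + eps[:-2])
        for _ in range(200_000):
            new = (eps_p * phi[2:] + eps_m * phi[:-2] + h**2 * rho[1:-1]) \
                  / (eps_p + eps_m + h**2 * kappa2[1:-1])
            done = np.max(np.abs(new - phi[1:-1])) < 1e-12
            phi[1:-1] = new
            if done:
                break
        print("potential at the solute center:", phi[n // 2])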

  1. SpectralNET – an application for spectral graph analysis and visualization

    PubMed Central

    Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J

    2005-01-01

    Background Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Results Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). Conclusion SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request. PMID:16236170
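
    The Laplacian-eigenvector visualization described above can be reproduced in a few lines: build the graph Laplacian and use the eigenvectors of the smallest nonzero eigenvalues as coordinates. A sketch on a small assumed example graph (illustrative, independent of SpectralNET's implementation):

        import numpy as np

        # Small undirected example graph: an 8-node ring plus one chord.
        n = 8
        edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 4)]
        A = np.zeros((n, n))
        for i, j in edges:
            A[i, j] = A[j, i] = 1.0

        # Combinatorial Laplacian L = D - A (the normalized variant divides
        # each entry by sqrt(d_i * d_j)).
        L = np.diag(A.sum(axis=1)) - A
        vals, vecs = np.linalg.eigh(L)

        # Skip the constant eigenvector; the next two give a 2D spectral layout.
        coords = vecs[:, 1:3]
        for node, (u, v) in enumerate(coords):
            print(f"node {node}: ({u:+.3f}, {v:+.3f})")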

  2. SpectralNET--an application for spectral graph analysis and visualization.

    PubMed

    Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J

    2005-10-19

    Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request.

  3. Synthesis, molecular structure, Hirshfeld surface, spectral investigations and molecular docking study of 3-(5-bromo-2-thienyl)-1-(4-fluorophenyl)-3-acetyl-2-pyrazoline (2) by DFT method

    NASA Astrophysics Data System (ADS)

    Sathish, M.; Meenakshi, G.; Xavier, S.; Sebastian, S.; Periandy, S.; Ahmad, NoorAisyah; Jamalis, Joazaizulfazli; Rosli, MohdMustaqim; Fun, Hoong-Kun

    2018-07-01

    3-(5-Bromo-2-thienyl)-1-(4-fluorophenyl)-3-acetyl-2-pyrazoline (2) (BTFA) was synthesized by condensation of thiophene chalcone (1) with hydrazine hydrate. The compound was characterized by FT-IR and by 1H and 13C NMR. Its crystal structure was determined using the X-ray diffraction technique. The geometric data are compared with the optimized structure of the compound obtained using the B3LYP functional with the 6-311++G(d,p) basis set. The fundamental vibrational modes are assigned using the VEDA software with PED assignments and compared with data obtained from the theoretical methods; the deviations are discussed and analyzed in detail. The intermolecular interactions in the crystal structure were analyzed using Hirshfeld surface and fingerprint analysis. The observed 13C and 1H NMR chemical shifts are compared with values computed by the gauge-independent atomic orbital (GIAO) method at B3LYP/6-311++G(d,p). Electronic and optical properties such as absorption wavelengths, excitation energies, dipole moment and frontier molecular orbital energies are computed with the TD-SCF method at the same theoretical level. The antiviral nature of the molecule is also analyzed, and a docking study of the compound against non-small-cell lung cancer and human collapsin response mediator protein-1 targets exhibits its activity.

  4. An integrated theoretical and experimental investigation of insensitive munition compounds adsorption on cellulose, cellulose triacetate, chitin and chitosan surfaces.

    PubMed

    Gurtowski, Luke A; Griggs, Chris S; Gude, Veera G; Shukla, Manoj K

    2018-02-01

    This manuscript reports results of a combined computational chemistry and batch adsorption investigation of the insensitive munition compounds 2,4-dinitroanisole (DNAN), triaminotrinitrobenzene (TATB), 1,1-diamino-2,2-dinitroethene (FOX-7) and nitroguanidine (NQ), and the traditional munition compound 2,4,6-trinitrotoluene (TNT), on the surfaces of cellulose, cellulose triacetate, chitin and chitosan biopolymers. Cellulose, cellulose triacetate, chitin and chitosan were modeled as trimeric forms of the linear chains of the ⁴C₁ chair conformation of β-D-glucopyranose, its triacetate form, β-N-acetylglucosamine and D-glucosamine, respectively, in the 1→4 linkage. Geometries were optimized with the M062X functional of density functional theory (DFT) using the 6-31G(d,p) basis set, in the gas phase and in bulk water solution using the conductor-like polarizable continuum model (CPCM) approach. The nature of the potential energy surfaces of the optimized geometries was ascertained through harmonic vibrational frequency analysis. The basis set superposition error (BSSE) corrected interaction energies were obtained using the 6-311G(d,p) basis set at the same theoretical level. The computed BSSE in the gas phase was used to correct the interaction energy in bulk water solution. Computed and experimental results regarding the ability of the considered surfaces to adsorb the insensitive munition compounds are discussed. Copyright © 2017. Published by Elsevier B.V.
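
    For reference, the counterpoise recipe behind such BSSE corrections, in its simplest rigid-monomer form, evaluates every term in the full dimer basis (ghost functions placed on the absent partner), all at the complex geometry:

        \Delta E_{\mathrm{int}}^{\mathrm{CP}}
          = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB},

    where the subscript labels the fragment whose energy is computed and the superscript labels the basis in which it is computed.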

  5. Defense of Cyber Infrastructures Against Cyber-Physical Attacks Using Game-Theoretic Models.

    PubMed

    Rao, Nageswara S V; Poole, Stephen W; Ma, Chris Y T; He, Fei; Zhuang, Jun; Yau, David K Y

    2016-04-01

    The operation of cyber infrastructures relies on both cyber and physical components, which are subject to incidental and intentional degradations of different kinds. Within the context of network and computing infrastructures, we study the strategic interactions between an attacker and a defender using game-theoretic models that take into account both cyber and physical components. The attacker and defender optimize their individual utilities, expressed as sums of cost and system terms. First, we consider a Boolean attack-defense model, wherein the cyber and physical subinfrastructures may be attacked and reinforced as individual units. Second, we consider a component attack-defense model wherein their components may be attacked and defended, and the infrastructure requires minimum numbers of both to function. We show that the Nash equilibrium under uniform costs in both cases is computable in polynomial time, and it provides high-level deterministic conditions for the infrastructure survival. When probabilities of successful attack and defense, and of incidental failures, are incorporated into the models, the results favor the attacker but otherwise remain qualitatively similar. This approach has been motivated and validated by our experiences with UltraScience Net infrastructure, which was built to support high-performance network experiments. The analytical results, however, are more general, and we apply them to simplified models of cloud and high-performance computing infrastructures. © 2015 Society for Risk Analysis.
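
    For intuition on the Boolean attack-defense setting, pure-strategy Nash equilibria of a small matrix game can be found by enumerating best responses; a toy sketch with made-up utilities (the paper's utility models, with cost and system terms, are richer):

        import numpy as np

        # Rows: defender reinforces {cyber, physical}; columns: attacker
        # targets {cyber, physical}. All payoff values are illustrative only.
        U_def = np.array([[ 3.0, -2.0],
                          [-4.0,  2.0]])
        U_att = np.array([[-3.0,  2.0],
                          [ 4.0, -2.0]])

        def pure_nash(U_row, U_col):
            """All (row, col) cells where neither player can improve unilaterally."""
            eq = []
            for r in range(U_row.shape[0]):
                for c in range(U_row.shape[1]):
                    if (U_row[r, c] >= U_row[:, c].max()
                            and U_col[r, c] >= U_col[r, :].max()):
                        eq.append((r, c))
            return eq

        # May be empty: attack-defense games often have only mixed equilibria.
        print("pure-strategy equilibria:", pure_nash(U_def, U_att))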

  6. GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-04-01

    Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on the Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of the GPU's outstanding computing features, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
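
    The FTLE value at each grid point is the exponential growth rate of the largest singular value of the flow-map Jacobian, estimated from neighboring trajectories. A minimal CPU sketch for a steady two-dimensional cellular test flow (an assumed stand-in for the multi-body gravitational model; no CUDA specifics shown):

        import numpy as np

        def velocity(t, p):
            """Steady 2D cellular test flow (assumed); p has shape (..., 2)."""
            x, y = p[..., 0], p[..., 1]
            u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
            v =  np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
            return np.stack([u, v], axis=-1)

        def flow_map(p0, t0, t1, steps=200):
            """Advect a grid of points with fixed-step RK4."""
            p, dt, t = p0.copy(), (t1 - t0) / steps, t0
            for _ in range(steps):
                k1 = velocity(t, p)
                k2 = velocity(t + dt/2, p + dt/2 * k1)
                k3 = velocity(t + dt/2, p + dt/2 * k2)
                k4 = velocity(t + dt, p + dt * k3)
                p = p + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
                t += dt
            return p

        # Grid of initial conditions and its advected image over T = 2.
        xs, ys = np.meshgrid(np.linspace(0, 2, 101), np.linspace(0, 1, 51))
        p0 = np.stack([xs, ys], axis=-1)
        pT = flow_map(p0, 0.0, 2.0)

        # FTLE via finite differences of the flow map over the grid spacing.
        dx = xs[0, 1] - xs[0, 0]; dy = ys[1, 0] - ys[0, 0]
        J11 = np.gradient(pT[..., 0], dx, axis=1); J12 = np.gradient(pT[..., 0], dy, axis=0)
        J21 = np.gradient(pT[..., 1], dx, axis=1); J22 = np.gradient(pT[..., 1], dy, axis=0)
        C_max = np.zeros_like(xs)
        for idx in np.ndindex(xs.shape):
            J = np.array([[J11[idx], J12[idx]], [J21[idx], J22[idx]]])
            C_max[idx] = np.linalg.eigvalsh(J.T @ J).max()  # Cauchy-Green tensor
        ftle = np.log(np.sqrt(C_max)) / 2.0                 # (1/T) ln(sigma_max), T = 2
        print("max FTLE on grid:", ftle.max())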

  7. The NASA Ames PAH IR Spectroscopic Database: Computational Version 3.00 with Updated Content and the Introduction of Multiple Scaling Factors

    NASA Astrophysics Data System (ADS)

    Bauschlicher, Charles W., Jr.; Ricca, A.; Boersma, C.; Allamandola, L. J.

    2018-02-01

    Version 3.00 of the library of computed spectra in the NASA Ames PAH IR Spectroscopic Database (PAHdb) is described. Version 3.00 introduces the use of multiple scale factors, instead of the single scaling factor used previously, to align the theoretical harmonic frequencies with the experimental fundamentals. The use of multiple scale factors permits the use of a variety of basis sets; this allows new PAH species to be included in the database, such as those containing oxygen, and yields an improved treatment of strained species and those containing nitrogen. In addition, the computed spectra of 2439 new PAH species have been added. The impact of these changes on the analysis of an astronomical spectrum through database-fitting is considered and compared with a fit using Version 2.00 of the library of computed spectra. Finally, astronomical constraints are defined for the PAH spectral libraries in PAHdb.
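
    The multiple-scale-factor idea can be sketched in a few lines: partition the harmonic modes by type and multiply each by its own factor. The factor values and the partition below are illustrative assumptions, not PAHdb's adopted values:

        # Align computed harmonic frequencies with experimental fundamentals by
        # applying per-region scale factors (illustrative values only).
        scale_factors = {"CH_stretch": 0.958, "other": 0.972}

        def scale(frequencies_cm1):
            """Scale harmonic frequencies (cm^-1); CH stretches lie near 3000 cm^-1."""
            return [f * scale_factors["CH_stretch" if f > 2500.0 else "other"]
                    for f in frequencies_cm1]

        print(scale([3062.0, 1610.0, 890.0]))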

  8. The stability of the contact interface of cylindrical and spherical shock tubes

    NASA Astrophysics Data System (ADS)

    Crittenden, Paul E.; Balachandar, S.

    2018-06-01

    The stability of the contact interface for radial shock tubes is investigated as a model for explosive dispersal. The advection upstream splitting method with velocity and pressure diffusion (AUSM+-up) is used to solve for the radial base flow. To investigate the stability of the resulting contact interface, perturbed governing equations are derived assuming harmonic modes in the transverse directions. The perturbed harmonic flow is solved by assuming an initial disturbance and using a perturbed version of AUSM+-up derived in this paper. The intensity of the perturbation near the contact interface is computed and compared to theoretical results obtained by others. Despite the simplifying assumptions of the theoretical analysis, very good agreement is observed. Not only can the magnitude of the instability be predicted during the initial expansion, but also remarkably the agreement between the numerical and theoretical results can be maintained through the collision between the secondary shock and the contact interface. Since the theoretical results only depend upon the time evolution of the base flow, the stability of various modes could be quickly investigated without explicitly solving a system of partial differential equations for the perturbed flow.

  9. Adventures in the microlensing cloud: Large datasets, eResearch tools, and GPUs

    NASA Astrophysics Data System (ADS)

    Vernardos, G.; Fluke, C. J.

    2014-10-01

    As astronomy enters the petascale data era, astronomers are faced with new challenges relating to storage, access and management of data. A shift from the traditional approach of combining data and analysis at the desktop to the use of remote services, pushing the computation to the data, is now underway. In the field of cosmological gravitational microlensing, future synoptic all-sky surveys are expected to bring the number of multiply imaged quasars from the few tens that are currently known to a few thousands. This inflow of observational data, together with computationally demanding theoretical modeling via the production of microlensing magnification maps, requires a new approach. We present our technical solutions to supporting the GPU-Enabled, High Resolution cosmological MicroLensing parameter survey (GERLUMPH). This extensive dataset for cosmological microlensing modeling comprises over 70 000 individual magnification maps and ~10⁶ related results. We describe our approaches to hosting, organizing, and serving ~30 TB of data and metadata products. We present a set of online analysis tools developed with PHP, JavaScript and WebGL to support access and analysis of GERLUMPH data in a Web browser. We discuss our use of graphics processing units (GPUs) to accelerate data production, and we release the core of the GPU-D direct inverse ray-shooting code (Thompson et al., 2010, 2014) used to generate the magnification maps. All of the GERLUMPH data and tools are available online from http://gerlumph.swin.edu.au. This project made use of gSTAR, the GPU Supercomputer for Theoretical Astrophysical Research.
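
    The core of direct inverse ray-shooting is compact: deflect a dense grid of rays by the summed point-mass deflections and histogram their landing positions on the source plane, where pixel counts trace the magnification. A heavily simplified sketch with a random illustrative lens field (the production GPU-D code is far more elaborate):

        import numpy as np

        rng = np.random.default_rng(2)
        # Random point-mass microlenses in the image plane (Einstein-radius units).
        n_lens = 50
        lens_pos = rng.uniform(-10, 10, (n_lens, 2))
        lens_mass = np.ones(n_lens)

        # Dense grid of rays over the image plane.
        n_side = 1200
        xx, yy = np.meshgrid(np.linspace(-8, 8, n_side), np.linspace(-8, 8, n_side))
        rays = np.stack([xx.ravel(), yy.ravel()], axis=1)

        # Point-mass lens equation: y = x - sum_i m_i (x - x_i) / |x - x_i|^2
        src = rays.copy()
        for (lx, ly), m in zip(lens_pos, lens_mass):
            d = rays - (lx, ly)
            r2 = np.einsum('ij,ij->i', d, d) + 1e-12  # avoid division by zero
            src -= m * d / r2[:, None]

        # Histogram ray landings on the source plane; counts trace magnification.
        bins = np.linspace(-4, 4, 201)
        mag_map, _, _ = np.histogram2d(src[:, 0], src[:, 1], bins=[bins, bins])
        mag_map /= mag_map.mean()            # normalize to mean magnification 1
        print("peak relative magnification:", mag_map.max())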

  10. Linear transforms for Fourier data on the sphere: application to high angular resolution diffusion MRI of the brain.

    PubMed

    Haldar, Justin P; Leahy, Richard M

    2013-05-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. Copyright © 2013 Elsevier Inc. All rights reserved.
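
    In the spherical-harmonic basis the FRT acts diagonally: by the Funk-Hecke theorem each degree-l coefficient is multiplied by 2πP_l(0), which vanishes for odd l. A minimal sketch of this step (the FRACT replaces this kernel with a Funk-Radon-plus-cosine kernel, not shown here):

        import numpy as np
        from scipy.special import eval_legendre

        def frt_coefficients(coeffs, degrees):
            """Apply the Funk-Radon Transform to spherical-harmonic coefficients.

            coeffs  : array of SH coefficients c_{l,m}
            degrees : array of the degree l for each coefficient
            By the Funk-Hecke theorem the FRT is diagonal in the SH basis,
            with eigenvalues 2*pi*P_l(0) (zero for odd l).
            """
            eig = 2.0 * np.pi * eval_legendre(degrees, 0.0)
            return eig * coeffs

        # Example: coefficients up to l = 4 (one arbitrary m per degree shown).
        l = np.array([0, 1, 2, 3, 4])
        c = np.array([1.0, 0.5, 0.3, 0.2, 0.1])
        print(frt_coefficients(c, l))   # odd degrees are annihilated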

  11. Theoretical analysis of HVAC duct hanger systems

    NASA Technical Reports Server (NTRS)

    Miller, R. D.

    1987-01-01

    Several methods are presented which, together, may be used in the analysis of duct hanger systems over a wide range of frequencies. The finite element method (FEM) and component mode synthesis (CMS) method are used for low- to mid-frequency range computations and have been shown to yield reasonably close results. The statistical energy analysis (SEA) method yields predictions which agree with the CMS results for the 800 to 1000 Hz range provided that a sufficient number of modes participate. The CMS approach has been shown to yield valuable insight into the mid-frequency range of the analysis. It has been demonstrated that it is possible to conduct an analysis of a duct/hanger system in a cost-effective way for a wide frequency range, using several methods which overlap for several frequency bands.

  12. MULTIVARIATERESIDUES: A Mathematica package for computing multivariate residues

    NASA Astrophysics Data System (ADS)

    Larsen, Kasper J.; Rietkerk, Robbert

    2018-01-01

    Multivariate residues appear in many different contexts in theoretical physics and algebraic geometry. In theoretical physics, for example, they give the proper definition of generalized-unitarity cuts, and they play a central role in the Grassmannian formulation of the S-matrix by Arkani-Hamed et al. In realistic cases their evaluation can be non-trivial. In this paper we provide a Mathematica package for efficient evaluation of multivariate residues based on methods from computational algebraic geometry.
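
    For orientation, the object being computed is the local (Grothendieck) residue at an isolated common zero ξ of the denominator factors f_1, ..., f_n (standard definition, quoted for reference):

        \operatorname{Res}_{\xi}
        \frac{h(z)\,\mathrm{d}z_{1}\wedge\cdots\wedge\mathrm{d}z_{n}}{f_{1}(z)\cdots f_{n}(z)}
          = \frac{1}{(2\pi i)^{n}}
            \oint_{\Gamma_{\epsilon}}
            \frac{h(z)\,\mathrm{d}z_{1}\cdots\mathrm{d}z_{n}}{f_{1}(z)\cdots f_{n}(z)},
        \qquad
        \Gamma_{\epsilon} = \{\, z : |f_{k}(z)| = \epsilon_{k},\; k = 1,\dots,n \,\}.

    In the nondegenerate case, where the Jacobian J(ξ) = det(∂f_i/∂z_j)(ξ) is nonzero, the residue evaluates to h(ξ)/J(ξ).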

  13. Ti:sapphire - A theoretical assessment for its spectroscopy

    NASA Astrophysics Data System (ADS)

    Da Silva, A.; Boschetto, D.; Rax, J. M.; Chériaux, G.

    2017-03-01

    This article presents a theoretical computation of stimulated emission cross-sections from the known oscillator strength for a broad class of materials (dielectric crystals hosting transition-metal impurity atoms). We apply the approach to Ti:sapphire and check it by computing emission cross-section curves for both π and σ polarizations. We also establish a relationship between oscillator strength and radiative lifetime. Such an approach will enable future parametric studies of Ti:sapphire spectroscopic properties.
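
    The oscillator-strength-to-lifetime link referred to here is, in its textbook vacuum form (medium and local-field corrections omitted), the standard relation between the spontaneous emission rate and the absorption oscillator strength:

        A_{21} = \frac{2\pi e^{2}\nu^{2}}{\epsilon_{0} m_{e} c^{3}}\,
                 \frac{g_{1}}{g_{2}}\, f_{12},
        \qquad
        \tau_{\mathrm{rad}} = \frac{1}{A_{21}},

    where ν is the transition frequency and g_1, g_2 are the degeneracies of the lower and upper levels.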

  14. An Improved Theoretical Aerodynamic Derivatives Computer Program for Sounding Rockets

    NASA Technical Reports Server (NTRS)

    Barrowman, J. S.; Fan, D. N.; Obosu, C. B.; Vira, N. R.; Yang, R. J.

    1979-01-01

    The paper outlines a Theoretical Aerodynamic Derivatives (TAD) computer program for computing the aerodynamics of sounding rockets. TAD outputs include normal force, pitching moment and rolling moment coefficient derivatives as well as center-of-pressure locations as a function of the flight Mach number. TAD is applicable to slender finned axisymmetric vehicles at small angles of attack in subsonic and supersonic flows. TAD improvement efforts include extending Mach number regions of applicability, improving accuracy, and replacement of some numerical integration algorithms with closed-form integrations. Key equations used in TAD are summarized and typical TAD outputs are illustrated for a second-stage Tomahawk configuration.
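
    The backbone of such derivative calculations is the moment-weighted combination of component contributions: given each component's normal-force-coefficient derivative and center of pressure, the vehicle totals follow directly. A generic sketch (hypothetical component values, not TAD's internal formulas):

        # Combine per-component normal-force derivatives C_N_alpha (per rad) and
        # center-of-pressure locations X_cp (m from the nose tip) into vehicle
        # totals. The component values below are illustrative placeholders.
        components = [
            ("nose cone", 2.0, 0.35),
            ("body",      0.3, 1.60),
            ("fins",      8.1, 2.45),
        ]

        cna_total = sum(cna for _, cna, _ in components)
        xcp_total = sum(cna * xcp for _, cna, xcp in components) / cna_total
        print(f"vehicle C_N_alpha = {cna_total:.2f} /rad")
        print(f"vehicle X_cp      = {xcp_total:.3f} m from nose tip")

        # Pitching-moment derivative about a reference point x_ref (m),
        # nondimensionalized by a reference length d (m); negative = stable.
        x_ref, d = 1.20, 0.15
        cma = cna_total * (x_ref - xcp_total) / d
        print(f"C_m_alpha about x_ref = {cma:.2f} /rad")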

  15. Topics on data transmission problem in software definition network

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Liang, Li; Xu, Tianwei; Gan, Jianhou

    2017-08-01

    In conventional computer networks, data transmission between two sites follows the shortest path between the corresponding vertices. In a software-defined network (SDN), however, the network traffic flow at each site and channel must be monitored in a timely manner, and the data transmission path between two sites should account for the current congestion. The essential difference between data transmission theory for conventional networks and for SDNs is therefore that prohibited graph structures must be considered in the SDN setting: these forbidden subgraphs represent sites and channels through which data cannot pass because of serious congestion. Inspired by this theoretical view of available data transmission in SDN, we consider several computational problems from the perspective of graph theory. The results determined in the paper give sufficient conditions for data transmission in SDN in various graph settings.
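
    The computational core of such routing questions is a shortest-path search on the graph with the congested (forbidden) sites and channels removed; a minimal sketch using Dijkstra's algorithm on an assumed toy network:

        import heapq

        def shortest_path_avoiding(graph, src, dst, banned_nodes=(), banned_edges=()):
            """Dijkstra on the subgraph with congested sites/channels removed.

            graph: {node: [(neighbor, weight), ...]} for an undirected network.
            """
            banned_nodes = set(banned_nodes)
            banned_edges = {frozenset(e) for e in banned_edges}
            dist, prev = {src: 0.0}, {}
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    path = [u]
                    while path[-1] != src:
                        path.append(prev[path[-1]])
                    return d, path[::-1]
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph.get(u, ()):
                    if v in banned_nodes or frozenset((u, v)) in banned_edges:
                        continue  # skip forbidden subgraph elements (congestion)
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            return float("inf"), []

        net = {"a": [("b", 1), ("c", 4)], "b": [("a", 1), ("c", 1), ("d", 5)],
               "c": [("a", 4), ("b", 1), ("d", 1)], "d": [("b", 5), ("c", 1)]}
        print(shortest_path_avoiding(net, "a", "d"))                     # via b, c
        print(shortest_path_avoiding(net, "a", "d", banned_nodes={"c"})) # reroutes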

  16. SCOTCH: Secure Counting Of encrypTed genomiC data using a Hybrid approach.

    PubMed

    Chenghong, Wang; Jiang, Yichen; Mohammed, Noman; Chen, Feng; Jiang, Xiaoqian; Al Aziz, Md Momin; Sadat, Md Nazmus; Wang, Shuang

    2017-01-01

    As genomic data are usually large-scale and highly sensitive, it is essential to enable both efficient and secure analysis, by which the data owner can securely delegate both computation and storage to an untrusted public cloud. Counting queries over genotypes are a basic function for many downstream applications in biomedical research (e.g., computing allele frequencies, calculating chi-squared statistics, etc.). Previous solutions show promise for secure counting of outsourced data, but efficiency is still a major limitation for real-world applications. In this paper, we propose a novel hybrid solution that combines a rigorous theoretical model (homomorphic encryption) and the latest hardware-based infrastructure (i.e., Software Guard Extensions) to speed up the computation while preserving the privacy of both data owners and data users. Our results demonstrate efficiency using real data from the Personal Genome Project.
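
    The homomorphic half of such a hybrid can be illustrated with textbook Paillier encryption, whose additive homomorphism lets an untrusted server combine encrypted 0/1 genotype indicators into an encrypted count. A toy sketch with insecure demo-sized keys (real deployments require vetted libraries and full-size moduli; this is not SCOTCH's implementation; Python 3.9+):

        import math, random

        # Textbook Paillier with toy parameters -- insecure, illustration only.
        p, q = 1789, 1867                  # small primes for the demo
        n, n2 = p * q, (p * q) ** 2
        g = n + 1
        lam = math.lcm(p - 1, q - 1)
        mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

        def encrypt(m):
            r = random.randrange(1, n)
            while math.gcd(r, n) != 1:
                r = random.randrange(1, n)
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

        def decrypt(c):
            return ((pow(c, lam, n2) - 1) // n * mu) % n

        # Server side: multiplying ciphertexts adds the hidden 0/1 indicators.
        genotype_matches = [1, 0, 1, 1, 0, 1]   # hypothetical per-record indicators
        ciphertexts = [encrypt(b) for b in genotype_matches]
        encrypted_count = math.prod(ciphertexts) % n2
        print("decrypted count:", decrypt(encrypted_count))  # -> 4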

  17. SCOTCH: Secure Counting Of encrypTed genomiC data using a Hybrid approach

    PubMed Central

    Chenghong, Wang; Jiang, Yichen; Mohammed, Noman; Chen, Feng; Jiang, Xiaoqian; Al Aziz, Md Momin; Sadat, Md Nazmus; Wang, Shuang

    2017-01-01

    As genomic data are usually large-scale and highly sensitive, it is essential to enable both efficient and secure analysis, by which the data owner can securely delegate both computation and storage to an untrusted public cloud. Counting queries over genotypes are a basic function for many downstream applications in biomedical research (e.g., computing allele frequencies, calculating chi-squared statistics, etc.). Previous solutions show promise for secure counting of outsourced data, but efficiency is still a major limitation for real-world applications. In this paper, we propose a novel hybrid solution that combines a rigorous theoretical model (homomorphic encryption) and the latest hardware-based infrastructure (i.e., Software Guard Extensions) to speed up the computation while preserving the privacy of both data owners and data users. Our results demonstrate efficiency using real data from the Personal Genome Project. PMID:29854245

  18. Correlative multiple porosimetries for reservoir sandstones with adoption of a new reference-sample-guided computed-tomographic method.

    PubMed

    Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min

    2016-07-22

    One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods differ in theoretical bases, means of measurement, and causes of measurement errors. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with the adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlation among the different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when accompanying merits such as experimental convenience are taken into account.
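
    The reference-sample idea can be sketched as a calibration of CT gray values against materials of known attenuation before thresholding for pore space. A schematic with synthetic data and hypothetical calibration readings, purely illustrative:

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic CT slice: grains near 1800, pores near 900 (arbitrary units).
        truth_pores = rng.random((256, 256)) < 0.20     # 20% porosity ground truth
        ct = np.where(truth_pores, 900.0, 1800.0) + rng.normal(0, 60, (256, 256))

        # Reference samples scanned alongside: mean CT numbers of known materials.
        ref_air, ref_quartz = 880.0, 1820.0             # assumed calibration readings

        # Calibrate: map CT numbers to a 0 (pore) .. 1 (solid) scale, then threshold.
        solid_fraction = np.clip((ct - ref_air) / (ref_quartz - ref_air), 0.0, 1.0)
        porosity = np.mean(solid_fraction < 0.5)

        print(f"estimated porosity: {porosity:.3f} (truth {truth_pores.mean():.3f})")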

  19. Correlative multiple porosimetries for reservoir sandstones with adoption of a new reference-sample-guided computed-tomographic method

    PubMed Central

    Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min

    2016-01-01

    One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods differ in theoretical bases, means of measurement, and causes of measurement errors. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with the adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlation among the different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when accompanying merits such as experimental convenience are taken into account. PMID:27445105

  20. Light aircraft lift, drag, and moment prediction: A review and analysis

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Summey, D. C.; Smith, N. S.; Carden, R. K.

    1975-01-01

    The historical development of analytical methods for predicting the lift, drag, and pitching moment of complete light aircraft configurations in cruising flight is reviewed. Theoretical methods, based in part on techniques described in the literature and in part on original work, are developed. These methods form the basis for understanding the computer programs given to: (1) compute the lift, drag, and moment of conventional airfoils, (2) extend these two-dimensional characteristics to three dimensions for moderate-to-high aspect ratio unswept wings, (3) plot complete configurations, (4) convert the fuselage geometric data to the correct input format, (5) compute the fuselage lift and drag, (6) compute the lift and moment of symmetrical airfoils to M = 1.0 by a simplified semi-empirical procedure, and (7) compute, in closed form, the pressure distribution over a prolate spheroid at alpha = 0. Comparisons of the predictions with experiment indicate excellent lift and drag agreement for conventional airfoils and wings. Limited comparisons of body-alone drag characteristics yield reasonable agreement. Also included are discussions of interference effects and of techniques for summing the above results to obtain predictions for complete configurations.
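
    Step (2), extending two-dimensional section characteristics to a finite wing, rests on classical lifting-line corrections of the form (standard results, quoted for context):

        a = \frac{a_{0}}{1 + \dfrac{a_{0}}{\pi e\,AR}},
        \qquad
        C_{D_{i}} = \frac{C_{L}^{2}}{\pi e\,AR},

    where a_0 is the section (two-dimensional) lift-curve slope per radian, AR is the wing aspect ratio, and e is the span-efficiency factor.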
