Science.gov

Sample records for methods publications computer

  1. A computational method for drug repositioning using publicly available gene expression data

    PubMed Central

    2015-01-01

    Motivation: The identification of new therapeutic uses of existing drugs, or drug repositioning, offers the possibility of faster drug development, reduced risk, lower cost, and shorter paths to approval. The advent of high-throughput microarray technology has enabled comprehensive monitoring of the transcriptional response associated with various disease states and drug treatments. These data can be used to characterize disease and drug effects and thereby give a measure of the association between a given drug and a disease. Several computational methods have been proposed in the literature that make use of publicly available transcriptional data to reposition drugs against diseases. Method: In this work, we carry out a data mining process using publicly available gene expression data sets associated with a few diseases and drugs, to identify existing drugs that can be repositioned against lung cancer and breast cancer. Results: Three strong candidates for repurposing were identified: Letrozole and GDC-0941 against lung cancer, and Ribavirin against breast cancer. Letrozole and GDC-0941 are drugs currently used in breast cancer treatment, and Ribavirin is used in the treatment of Hepatitis C. PMID:26679199

  2. Exploration of preterm birth rates using the public health exposome database and computational analysis methods.

    PubMed

    Kershenbaum, Anne D; Langston, Michael A; Levine, Robert S; Saxton, Arnold M; Oyana, Tonny J; Kilbourne, Barbara J; Rogers, Gary L; Gittner, Lisaann S; Baktash, Suzanne H; Matthews-Juarez, Patricia; Juarez, Paul D

    2014-12-01

    Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation of high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. The singleton early premature birth rate in counties with a population over 100,000 persons provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including black population proportion, obesity and diabetes, sexually transmitted infection rates, mother's age, income, marriage rates, pollution, and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for ways to intervene and reduce health disparities for preterm births. PMID:25464130
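
    The pipeline this abstract describes (groups of correlated variables resolved into latent factors, then a regression on those factors) can be sketched compactly. The Python fragment below is illustrative only: connected components of a thresholded correlation graph stand in for the paper's paracliques, the first principal component of each group serves as its latent factor, and ordinary least squares fits the final model. All names, thresholds, and data are hypothetical.

```python
# Hypothetical sketch of a paraclique -> latent factor -> regression pipeline.
# Connected components of a thresholded correlation graph are a crude stand-in
# for paracliques; thresholds and data are made up for illustration.
import numpy as np

def correlated_groups(X, r_min=0.8):
    """Group columns of X whose pairwise |correlation| exceeds r_min."""
    R = np.corrcoef(X, rowvar=False)
    n = R.shape[0]
    adj = (np.abs(R) >= r_min) & ~np.eye(n, dtype=bool)
    seen, groups = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(np.flatnonzero(adj[v]))
        groups.append(sorted(comp))
    return groups

def latent_factors(X, groups):
    """First principal component of each variable group (via SVD)."""
    Z = (X - X.mean(0)) / X.std(0)
    factors = []
    for g in groups:
        U, s, _ = np.linalg.svd(Z[:, g], full_matrices=False)
        factors.append(U[:, 0] * s[0])
    return np.column_stack(factors)

# With real county data the groups would be nontrivial; random data gives
# mostly singleton groups, but the pipeline runs end to end either way.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))        # stand-in for 589 county-level variables
y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=120)
F = latent_factors(X, correlated_groups(X))
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(F)), F]), y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```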

  3. Exploration of Preterm Birth Rates Using the Public Health Exposome Database and Computational Analysis Methods

    PubMed Central

    Kershenbaum, Anne D.; Langston, Michael A.; Levine, Robert S.; Saxton, Arnold M.; Oyana, Tonny J.; Kilbourne, Barbara J.; Rogers, Gary L.; Gittner, Lisaann S.; Baktash, Suzanne H.; Matthews-Juarez, Patricia; Juarez, Paul D.

    2014-01-01

    Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation of high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. The singleton early premature birth rate in counties with a population over 100,000 persons provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including black population proportion, obesity and diabetes, sexually transmitted infection rates, mother's age, income, marriage rates, pollution, and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for ways to intervene and reduce health disparities for preterm births. PMID:25464130

  4. BPO crude oil analysis data base user's guide: Methods, publications, computer access, correlations, uses, availability

    SciTech Connect

    Sellers, C.; Fox, B.; Paulz, J.

    1996-03-01

    The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

  5. Publication-quality computer graphics

    SciTech Connect

    Slabbekorn, M.H.; Johnston, R.B. Jr.

    1981-01-01

    A user-friendly graphic software package is being used at Oak Ridge National Laboratory to produce publication-quality computer graphics. Close interaction between the graphic designer and the computer programmer has helped to create a highly flexible computer graphics system. The programmer-oriented environment of computer graphics has been modified to allow the graphic designer freedom to exercise his expertise with lines, form, typography, and color. The resultant product rivals or surpasses work previously done by hand. This presentation of computer-generated graphs, charts, diagrams, and line drawings clearly demonstrates the latitude and versatility of the software when directed by a graphic designer.

  6. Special Publication 500-307 Cloud Computing Service Metrics Description

    E-print Network

    Special Publication 500-307, Cloud Computing Service Metrics Description. NIST Cloud Computing Reference Architecture and Taxonomy Working Group, NIST Cloud Computing Program, Information Technology Laboratory.

  7. Computers in Public Broadcasting: Who, What, Where.

    ERIC Educational Resources Information Center

    Yousuf, M. Osman

    This handbook offers guidance to public broadcasting managers on computer acquisition and development activities. Based on a 1981 survey of planned and current computer uses conducted by the Corporation for Public Broadcasting (CPB) Information Clearinghouse, computer systems in public radio and television broadcasting stations are listed by…

  8. Some Uses of Computers in Rhetoric and Public Address.

    ERIC Educational Resources Information Center

    Clevenger, Theodore, Jr.

    1969-01-01

    The author discusses the impact of the "computer revolution" on the field of rhetoric and public address in terms of the potential applications of computer methods to rhetorical problems. He first discusses the computer as a very fast calculator, giving the example of a study that probably would not have been undertaken if the calculations had had…

  9. Computational Methods for Crashworthiness

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (compiler); Carden, Huey D. (compiler)

    1993-01-01

    Presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Crashworthiness held at Langley Research Center on 2-3 Sep. 1992 are included. The presentations addressed activities in the area of impact dynamics. Workshop attendees represented NASA, the Army and Air Force, the Lawrence Livermore and Sandia National Laboratories, the aircraft and automotive industries, and academia. The workshop objectives were to assess the state of technology in numerical crash simulation and to provide guidelines for future research.

  10. Publication Bias in Methodological Computational Research

    PubMed Central

    Boulesteix, Anne-Laure; Stierle, Veronika; Hapfelmeier, Alexander

    2015-01-01

    The problem of publication bias has long been discussed in research fields such as medicine. There is a consensus that publication bias is a reality and that solutions should be found to reduce it. In methodological computational research, including cancer informatics, publication bias may also be at work. The publication of negative research findings is certainly also a relevant issue, but has attracted very little attention to date. The present paper aims at providing a new formal framework to describe the notion of publication bias in the context of methodological computational research, facilitate and stimulate discussions on this topic, and increase awareness in the scientific community. We report an exemplary pilot study that aims at gaining experience with the collection and analysis of information on unpublished research efforts with respect to publication bias, and we outline the problems encountered. Based on these experiences, we try to formalize the notion of publication bias. PMID:26508827

  11. Computing in Public Administration: Practice and Education.

    ERIC Educational Resources Information Center

    Norris, Donald F.; Thompson, Lyke

    1988-01-01

    Presents a survey of common and leading-edge computer use practices followed by municipal government personnel and the directors of 12 master's degree programs in public administration. Concludes by suggesting directions for future developments, both in public agencies and in the academy. (GEA)

  12. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Method of computing coverage. 80.771 Section 80.771 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method of computing coverage. Compute the...

  13. Public Databases Supporting Computational Toxicology

    EPA Science Inventory

    A major goal of the emerging field of computational toxicology is the development of screening-level models that predict potential toxicity of chemicals from a combination of mechanistic in vitro assay data and chemical structure descriptors. In order to build these models, resea...

  14. Computer Methods in Applied Mechanics and Engineering

    E-print Network

    Yosibash, Zohar

    Comput. Methods Appl. Mech. Engrg. 129 (1996) 349-370. Superconvergent extraction of flux intensity factors and first derivatives from finite element solutions. These are pointwise quantities. This paper presents a superconvergent method for the extraction of these quantities ... intensity factors of elasticity. We generalize this terminology, and refer to all coefficients C ...

  15. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information, predicting activity based on a molecule's similarity or dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, and ligand fingerprint methods, which are necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign, are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from the literature. PMID:24381236
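
    As a concrete illustration of the ligand-based idea (activity predicted from similarity to known actives), the sketch below ranks toy candidates by Tanimoto similarity of binary fingerprints. The fingerprints and molecule names are made up; in practice they would come from a cheminformatics toolkit such as RDKit.

```python
# Illustrative ligand-based similarity screen with toy bit-set fingerprints.
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto coefficient |A & B| / |A | B| for two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

known_active = {1, 4, 7, 9, 12}          # hypothetical reference ligand
candidates = {
    "mol_A": {1, 4, 7, 8, 12},
    "mol_B": {2, 3, 5},
    "mol_C": {1, 4, 9},
}
# Rank candidates by similarity to the known active, most similar first.
ranked = sorted(candidates.items(),
                key=lambda kv: tanimoto(known_active, kv[1]),
                reverse=True)
for name, fp in ranked:
    print(f"{name}: {tanimoto(known_active, fp):.2f}")
```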

  16. Computational Methods Development at Ames

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis and design capabilities. Current thrusts of the Ames research include: 1) methods to enhance and accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents for the above research and speculates on its future course.

  17. Computational Modeling Method for Superalloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John

    1997-01-01

    Computer modeling based on theoretical quantum techniques has been largely inefficient due to limitations of the methods or the computational resources such calculations require, perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys. Together with the experimental results from previous and current research that validate its use for large-scale simulations, it provides the ideal grounds for developing a computationally economical and physically sound procedure for supplementing experimental work, with substantial savings in cost and time.

  18. Systems Science Methods in Public Health

    PubMed Central

    Luke, Douglas A.; Stamatakis, Katherine A.

    2012-01-01

    Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example in preparing for global pandemics. However, systems science study designs and analytic methods remain underutilized and are not widely featured in public health curricula or training. In this review we present an argument for the utility of systems science methods in public health, introduce three important systems science methods (system dynamics, network analysis, and agent-based modeling), and provide three case studies where these methods have been used to answer important public health science questions in the areas of infectious disease, tobacco control, and obesity. PMID:22224885
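
    Of the three methods named above, system dynamics is the easiest to sketch in a few lines. The fragment below integrates a minimal SIR infectious-disease model with forward Euler; the parameter values are illustrative, not drawn from the review.

```python
# Minimal system-dynamics sketch: an SIR epidemic model, forward Euler.
beta, gamma = 0.3, 0.1      # transmission and recovery rates (per day), toy values
S, I, R = 0.99, 0.01, 0.0   # population fractions
dt, days = 0.1, 160

for _ in range(int(days / dt)):
    new_inf = beta * S * I   # flow from susceptible to infected
    new_rec = gamma * I      # flow from infected to recovered
    S -= new_inf * dt
    I += (new_inf - new_rec) * dt
    R += new_rec * dt

print(f"final susceptible={S:.3f} infected={I:.3f} recovered={R:.3f}")
```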

  19. Closing the "Digital Divide": Building a Public Computing Center

    ERIC Educational Resources Information Center

    Krebeck, Aaron

    2010-01-01

    The public computing center offers an economical and environmentally friendly model for providing additional public computer access when and where it is needed. Though not intended to be a replacement for a full-service branch, the public computing center does offer a budget-friendly option for quickly expanding high-demand services into the…

  20. Public participation: more than a method?

    PubMed Central

    Boaz, Annette; Chambers, Mary; Stuttaford, Maria

    2014-01-01

    While it is important to support the development of methods for public participation, we argue that this should not be at the expense of a broader consideration of the role of public participation. We suggest that a rights based approach provides a framework for developing more meaningful approaches that move beyond public participation as synonymous with consultation to value the contribution of lay knowledge to the governance of health systems and health research. PMID:25337604

  1. Cryptography Challenges for Computational Privacy in Public Clouds

    E-print Network

    International Association for Cryptologic Research (IACR)

    Sashank Dara, Cisco Systems. This paper examines the readiness of cryptography for the new generational shift of computing platform, i.e., cloud computing, with the aim of providing insight into the underpinnings of computational privacy and leading to better solutions.

  2. IMS Public Lecture: Are Quantum Computers The Next Generation Of Supercomputers?

    E-print Network

    Stephan, Frank

    IMS Public Lecture: Are Quantum Computers The Next Generation Of Supercomputers? Abstract: Quantum computers are said to outperform all classical computers, even the classical computers of the future. Codes whose security rests on the difficulty of factoring large numbers could be broken on a quantum computer. In this talk, we will see how to make sense ...

  3. 77 FR 4568 - Annual Computational Science Symposium; Public Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-30

    ... SERVICES Food and Drug Administration Annual Computational Science Symposium; Public Conference AGENCY... public conference entitled ``The FDA/PhUSE Annual Computational Science Symposium.'' The purpose of the conference is to help the broader community align and share experiences to advance computational science....

  4. PRINCE GEORGE PUBLIC LIBRARY COMPUTER INSTRUCTIONAL ASSISTANT JOB DESCRIPTION

    E-print Network

    Northern British Columbia, University of

    PRINCE GEORGE PUBLIC LIBRARY, COMPUTER INSTRUCTIONAL ASSISTANT JOB DESCRIPTION. Reports to: Jeff Kozoris, Digital Literacy & Instructional Librarian. Role description: ... ability to teach in group settings. This position involves instruction, research, public communication, accessing ...

  5. SOME APPROXIMATE METHODS FOR COMPUTING ELECTROMAGNETIC FIELDS

    E-print Network

    Torresani, Bruno

    We discuss several approximate methods for computing electromagnetic scattering by objects of complex shape. Depending on the relative size of the scatterer compared to the incident wavelength, different techniques have ...

  6. Nonlinear Piece In Hand Matrix Method for Enhancing Security of Multivariate Public Key Cryptosystems

    E-print Network

    International Association for Cryptologic Research (IACR)

    On the other hand, in most of the multivariate public key cryptosystems proposed so far, the computational ... We develop the concept of the piece in hand matrix (PH matrix, for short), which aims to bring the computational ...

  7. 47 CFR 61.32 - Method of filing publications.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 2010-10-01 false Method of filing publications. 61.32 Section 61.32 Telecommunication...Dominant Carriers § 61.32 Method of filing publications. (a) Publications sent for filing must be addressed to...

  8. Supervised learning for computer vision: Kernel methods & sparse methods

    E-print Network

    Bach, Francis

    Supervised learning for computer vision: Kernel methods & sparse methods. Francis Bach, SIERRA. Slides on machine learning for computer vision: the multiplication of digital media, many different ...

  9. Methods and applications in computational protein design

    E-print Network

    Biddle, Jason Charles

    2010-01-01

    In this thesis, we summarize our work on applications and methods for computational protein design. First, we apply computational protein design to address the problem of degradation in stored proteins. Specifically, we ...

  10. A Computer-Assisted Instruction in Teaching Abstract Statistics to Public Affairs Undergraduates

    ERIC Educational Resources Information Center

    Ozturk, Ali Osman

    2012-01-01

    This article attempts to demonstrate the applicability of a computer-assisted instruction supported with simulated data in teaching abstract statistical concepts to political science and public affairs students in an introductory research methods course. The software is called the Elaboration Model Computer Exercise (EMCE) in that it takes a great…

  11. Evolution as Computation. Evolutionary Theory (accepted for publication)

    E-print Network

    Mayfield, John

    Evolution as Computation. Evolutionary Theory (accepted for publication). By: John E. Mayfield, jemayf@iastate.edu. Key words: evolution, computation, complexity, depth. Running head: Evolution ... of evolution must include life and also non-living processes that change over time in a manner similar ...

  12. NIST Special Publication 250-59 NIST Computer Time Services

    E-print Network

    NIST Special Publication 250-59. NIST Computer Time Services: Internet Time Service (ITS), Automated Computer Time Service (ACTS), and time.gov Web Sites. Judah Levine, Michael A. Lombardi, Andrew N. Novick. Natl. Inst. Stand. Technol. Spec. Publ. 250-59, 77 pages (May 2002). CODEN: NSPUE2. Contents: Chapter 1, Internet Time ...

  13. How You Can Protect Public Access Computers "and" Their Users

    ERIC Educational Resources Information Center

    Huang, Phil

    2007-01-01

    By providing the public with online computing facilities, librarians make available a world of information resources beyond their traditional print materials. Internet-connected computers in libraries greatly enhance the opportunity for patrons to enjoy the benefits of the digital age. Unfortunately, as hackers become more sophisticated and…

  14. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W. (Veguita, NM); Ober, Curtis C. (Los Lunas, NM)

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  15. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Recordation of documents pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... Recordation of documents pertaining to computer shareware and donation of public...

  16. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Recordation of documents pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... Recordation of documents pertaining to computer shareware and donation of public...

  17. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Recordation of documents pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... Recordation of documents pertaining to computer shareware and donation of public...

  18. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Recordation of documents pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... Recordation of documents pertaining to computer shareware and donation of public...

  19. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Recordation of documents pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... Recordation of documents pertaining to computer shareware and donation of public...

  20. Wildlife software: procedures for publication of computer software

    USGS Publications Warehouse

    Samuel, M.D.

    1990-01-01

    Computers and computer software have become an integral part of the practice of wildlife science. Computers now play an important role in teaching, research, and management applications. Because of the specialized nature of wildlife problems, specific computer software is usually required to address a given problem (e.g., home range analysis). This type of software is not usually available from commercial vendors and therefore must be developed by those wildlife professionals with particular skill in computer programming. Current journal publication practices generally prevent a detailed description of computer software associated with new techniques. In addition, peer review of journal articles does not usually include a review of associated computer software. Thus, many wildlife professionals are usually unaware of computer software that would meet their needs or of major improvements in software they commonly use. Indeed, most users of wildlife software learn of new programs or important changes only by word of mouth.

  1. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.
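
    For orientation, the sketch below shows plain single-spin-flip Metropolis sampling for the 2D Ising model, the baseline that the cluster and multicanonical techniques described above are designed to improve on. It is not the authors' algorithm; lattice size and temperature are illustrative.

```python
# Minimal Metropolis sketch for the 2D Ising model (periodic boundaries).
import numpy as np

rng = np.random.default_rng(1)
L, beta, sweeps = 16, 0.44, 200            # lattice size, inverse temperature
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(sweeps * L * L):
    i, j = rng.integers(L, size=2)
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2 * spins[i, j] * nb              # energy change if this spin flips
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i, j] *= -1                  # accept the flip

print("magnetization per site:", spins.mean())
```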

  2. Computational Methods for Biomolecular Electrostatics

    PubMed Central

    Dong, Feng; Olsen, Brett; Baker, Nathan A.

    2008-01-01

    An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951
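
    A toy illustration of the long range the chapter emphasizes: the sketch below sums pairwise Coulomb energies of point charges in a uniform dielectric. The constant and the dielectric treatment are deliberately simplified; production biomolecular solvers (e.g., Poisson-Boltzmann codes) handle solvent and ionic screening far more carefully.

```python
# Pairwise Coulomb energy of point charges in a uniform dielectric (toy model).
import numpy as np

COULOMB_KCAL = 332.06  # kcal*angstrom/(mol*e^2), a commonly used constant

def coulomb_energy(coords, charges, eps=4.0):
    """Sum of q_i*q_j / (eps*r_ij) over all pairs, in kcal/mol."""
    E = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            E += COULOMB_KCAL * charges[i] * charges[j] / (eps * r)
    return E

coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
charges = np.array([-1.0, +1.0, -0.5])
print(f"electrostatic energy: {coulomb_energy(coords, charges):.2f} kcal/mol")
```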

  3. Computational methods for biomolecular electrostatics.

    PubMed

    Dong, Feng; Olsen, Brett; Baker, Nathan A

    2008-01-01

    An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate, and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion, and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems, with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951

  4. Computational Anatomy -Methods and Mathematical Challenges

    E-print Network

    Díaz, Lorenzo J.

    Computational Anatomy - Methods and Mathematical Challenges. Martins Bruveris, EPFL, August 12, 2012. Cites a 2007 PhD thesis, "... déformables pour la reconnaissance de formes et l'anatomie numérique" (deformable ... for shape recognition and digital anatomy).

  5. Computational Chemistry Using Modern Electronic Structure Methods

    ERIC Educational Resources Information Center

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now easily be performed, even for large molecules.

  6. Computational Methods to Model Persistence.

    PubMed

    Vandervelde, Alexandra; Loris, Remy; Danckaert, Jan; Gelens, Lendert

    2016-01-01

    Bacterial persister cells are dormant cells, tolerant to multiple antibiotics, that are involved in several chronic infections. Toxin-antitoxin modules play a significant role in the generation of such persister cells. Toxin-antitoxin modules are small genetic elements, omnipresent in the genomes of bacteria, which code for an intracellular toxin and its neutralizing antitoxin. In the past decade, mathematical modeling has become an important tool to study the regulation of toxin-antitoxin modules and their relation to the emergence of persister cells. Here, we provide an overview of several numerical methods to simulate toxin-antitoxin modules. We cover both deterministic modeling using ordinary differential equations and stochastic modeling using stochastic differential equations and the Gillespie method. Several characteristics of toxin-antitoxin modules such as protein production and degradation, negative autoregulation through DNA binding, toxin-antitoxin complex formation and conditional cooperativity are gradually integrated in these models. Finally, by including growth rate modulation, we link toxin-antitoxin module expression to the generation of persister cells. PMID:26468111
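
    The Gillespie method mentioned above can be illustrated on the simplest ingredient of such models: stochastic production and degradation of a single protein species. The rates below are arbitrary; the full toxin-antitoxin models add DNA binding, complex formation, and conditional cooperativity on top of this kernel.

```python
# Minimal Gillespie (stochastic simulation algorithm) sketch for a
# birth-death process: protein production at rate k_prod, degradation
# at rate k_deg per molecule. Rates are illustrative.
import math, random

random.seed(42)
k_prod, k_deg = 5.0, 0.1
n, t, t_end = 0, 0.0, 200.0

while t < t_end:
    a_prod, a_deg = k_prod, k_deg * n      # reaction propensities
    a_total = a_prod + a_deg
    # Exponential waiting time to the next reaction event.
    t += -math.log(1.0 - random.random()) / a_total
    if random.random() * a_total < a_prod:
        n += 1                             # production event
    else:
        n -= 1                             # degradation event

print("copy number at t=200:", n, "(steady-state mean is k_prod/k_deg = 50)")
```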

  7. Computational Methods for Failure Analysis and Life Prediction

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (compiler); Harris, Charles E. (compiler); Housner, Jerrold M. (compiler); Hopkins, Dale A. (compiler)

    1993-01-01

    This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

  8. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...2013-07-01 2013-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

  9. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...2011-07-01 2011-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

  10. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...2010-07-01 2010-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

  11. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...2012-07-01 2012-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

  12. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...2014-07-01 2014-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

  13. Teaching Practical Public Health Evaluation Methods

    ERIC Educational Resources Information Center

    Davis, Mary V.

    2006-01-01

    Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

  14. Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

    ERIC Educational Resources Information Center

    Jayakar, Krishna; Park, Eun-A

    2012-01-01

    The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

  15. Computational Methods for Rough Classification and Discovery.

    ERIC Educational Resources Information Center

    Bell, D. A.; Guan, J. W.

    1998-01-01

    Rough set theory is a new mathematical tool to deal with vagueness and uncertainty. Computational methods are presented for using rough sets to identify classes in datasets, finding dependencies in relations, and discovering rules which are hidden in databases. The methods are illustrated with a running example from a database of car test results.…

  16. Distributed Data Mining using a Public Resource Computing Framework

    NASA Astrophysics Data System (ADS)

    Cesario, Eugenio; de Caria, Nicola; Mastroianni, Carlo; Talia, Domenico

    The public resource computing paradigm is often used as a successful and low-cost mechanism for the management of several classes of scientific and commercial applications that require the execution of a large number of independent tasks. Public computing frameworks, also known as “Desktop Grids”, exploit the computational power and storage facilities of private computers, or “workers”. Despite the inherently decentralized nature of the applications to which they are devoted, these systems often adopt a centralized mechanism for the assignment of jobs and distribution of input data, as is the case for BOINC, the most popular framework in this realm. We present a decentralized framework that aims at increasing the flexibility and robustness of public computing applications, thanks to two basic features: (i) the adoption of a P2P protocol for dynamically matching the job specifications with the worker characteristics, without relying on centralized resources; (ii) the use of distributed cache servers for an efficient dissemination and reutilization of data files. This framework is exploitable for a wide set of applications. In this work, we describe how a Java prototype of the framework was used to tackle the problem of mining frequent itemsets from a transactional dataset, and show some preliminary yet interesting performance results that prove the efficiency improvements that can derive from the presented architecture.

  17. Computing discharge using the index velocity method

    USGS Publications Warehouse

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings: the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression techniques in which the mean cross-sectional velocity for the standard section is related to the measured index velocity. Most ratings are simple-linear regressions, but more complex ratings may be necessary in some cases. Once the rating is established, validation measurements should be made periodically. Over time, validation measurements may provide additional definition to the rating or result in the creation of a new rating. The computation of discharge is the last step in the index velocity method, and in some ways it is the most straightforward step. This step differs little from the steps used to compute discharge records for stage-discharge gaging stations. The ratings are entered into database software used for records computation, and continuous records of discharge are computed.
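
    The two-rating structure is easy to see in code. The hedged sketch below fits a simple-linear index rating from made-up calibration measurements, pairs it with a hypothetical trapezoidal stage-area rating, and multiplies the outputs to get discharge. None of the numbers come from an actual gaging station.

```python
# Index velocity method sketch: Q = V * A, with V from a simple-linear index
# rating and A from a stage-area rating. All calibration numbers are made up.
import numpy as np

# Calibration pairs: measured mean channel velocity vs. ADVM index velocity (m/s).
v_index_cal = np.array([0.20, 0.45, 0.70, 1.00, 1.30])
v_mean_cal  = np.array([0.17, 0.40, 0.66, 0.95, 1.21])
b, a = np.polyfit(v_index_cal, v_mean_cal, 1)   # slope, intercept of the rating

def stage_area(stage_m):
    """Stage-area rating for a hypothetical trapezoidal standard section."""
    bottom_width, side_slope = 20.0, 2.0
    return stage_m * (bottom_width + side_slope * stage_m)

def discharge(v_index, stage_m):
    V = a + b * v_index          # mean velocity from the index rating
    A = stage_area(stage_m)      # area from the stage-area rating
    return V * A                 # discharge in m^3/s

print(f"Q = {discharge(v_index=0.8, stage_m=1.5):.1f} m^3/s")
```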

  18. Updated Panel-Method Computer Program

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1995-01-01

    Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow, a time-stepping wake model for simulating either steady or unsteady motions, capability for Trefftz-plane computation of induced drag, capability for computation of off-body and on-body streamlines, and capability for computation of boundary-layer parameters along surface streamlines by use of a two-dimensional integral boundary-layer method. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.

  19. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L. (Ames, IA)

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
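
    A minimal sketch of the patent's fixed-time idea: run a scalable workload for a fixed interval and rate the machine by the progress made. The workload here (partial sums of a series) is an arbitrary stand-in for the patent's scalable task set.

```python
# Fixed-time benchmark sketch: the rating is work completed in a fixed interval.
import time

def benchmark(interval_s=1.0):
    deadline = time.perf_counter() + interval_s
    terms, total = 0, 0.0
    while time.perf_counter() < deadline:
        total += 1.0 / (2 * terms + 1) ** 2   # one more term of a series
        terms += 1
    return terms                               # benchmarking rating

print("terms summed in 1 s:", benchmark())
```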

  20. The Contingent Valuation Method in Public Libraries

    ERIC Educational Resources Information Center

    Chung, Hye-Kyung

    2008-01-01

    This study aims to present a new model for measuring the economic value of public libraries, combining the dissonance minimizing (DM) and information bias minimizing (IBM) formats in contingent valuation (CV) surveys. The possible biases that are tied to conventional CV surveys are reviewed. An empirical study is presented to compare the model…

  1. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... GENERAL PROVISIONS § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  2. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... GENERAL PROVISIONS § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  3. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... PROCEDURES GENERAL PROVISIONS § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures...

  4. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... GENERAL PROVISIONS § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  5. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... GENERAL PROVISIONS § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  6. A method to compute periodic sums

    E-print Network

    Gumerov, Nail A

    2013-01-01

    In a number of problems in computational physics, a finite sum of kernel functions centered at $N$ particle locations located in a box in three dimensions must be extended by imposing periodic boundary conditions on box boundaries. Even though the finite sum can be efficiently computed via fast summation algorithms, such as the fast multipole method (FMM), the periodized extension is usually treated via a different algorithm, Ewald summation, accelerated via the fast Fourier transform (FFT). A different approach, which computes this periodized sum using only a black-box finite fast summation algorithm, is presented in this paper. The method splits the periodized sum into two parts. The first, comprising the contribution of all points outside a large sphere enclosing the box, and some of its neighbors, is approximated inside the box by a collection of kernel functions ("sources") placed on the surface of the sphere or using an expansion in terms of spectrally convergent local basis functions. The second part, compri...

  7. Heuristic Methods for Evolutionary Computation

    E-print Network

    Michalewicz, Zbigniew

    In other words, evolutionary techniques are stochastic algorithms whose search methods model natural phenomena: genetic inheritance and the Darwinian strife for survival. Any evolutionary algorithm applied ... Key words: evolutionary computation, genetic algorithms, infeasible individuals. ... During the last two decades ...

  8. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (editor); Housner, Jerrold M. (editor); Tanner, John A. (editor); Hayduk, Robert J. (editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  9. Efficient Methods to Compute Genomic Predictions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

  10. Shifted power method for computing tensor eigenvalues.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-07-01

    Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
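
    The update rule is short enough to sketch directly. The numpy fragment below implements the SS-HOPM iteration for an order-3 symmetric tensor, x ← normalize(Ax^2 + αx) with λ = x^T(Ax^2); the shift α, tolerance, and test tensor are illustrative choices, not the paper's (per the paper, a sufficiently large shift guarantees convergence).

```python
# SS-HOPM sketch for an order-3 symmetric tensor (shift and tolerance are toy).
import numpy as np

def ss_hopm(A, alpha=6.0, tol=1e-10, max_iter=500):
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(max_iter):
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)   # (A x^{m-1})_i for m = 3
        x_new = Ax2 + alpha * x                  # shifted update
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = x @ np.einsum('ijk,j,k->i', A, x, x)   # eigenvalue at the fixed point
    return lam, x

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4, 4))
# Symmetrize over all six axis permutations.
A = (T + T.transpose(1, 0, 2) + T.transpose(2, 1, 0) + T.transpose(0, 2, 1)
     + T.transpose(1, 2, 0) + T.transpose(2, 0, 1)) / 6
lam, x = ss_hopm(A)
print("eigenvalue:", lam)
```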

  11. Shifted power method for computing tensor eigenpairs.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

    Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.

  12. COMPUTING DISCRETE LOGARITHMS WITH THE PARALLELIZED KANGAROO METHOD

    E-print Network

    Bernstein, Daniel

    Edlyn Teske. Abstract: The Pollard kangaroo method computes discrete logarithms in arbitrary cyclic groups. It is applied ... This makes the kangaroo method the most powerful method to solve the discrete logarithm problem ...
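
    A textbook serial kangaroo (the paper's subject is the parallelized variant) can be sketched as follows: to solve g^x = h (mod p) with x known to lie in [a, b], a tame kangaroo walks from g^b recording its path, and a wild kangaroo walks from h until it lands on that path. The group, jump table, and walk lengths below are toy choices, and the method is probabilistic.

```python
# Serial Pollard kangaroo sketch for discrete logs in a small prime-field group.
def kangaroo(g, h, p, a, b, k=8):
    jumps = [2 ** i for i in range(k)]           # deterministic jump table
    J = lambda y: jumps[y % k]                   # jump size depends on position
    # Tame kangaroo: start at g^b, record every visited point and its distance.
    tame, y, d = {}, pow(g, b, p), 0
    for _ in range(8 * int((b - a) ** 0.5) + 8 * k):
        tame[y] = d
        s = J(y)
        y, d = y * pow(g, s, p) % p, d + s
    # Wild kangaroo: start at h = g^x, walk until it lands on the tame path.
    z, w = h % p, 0
    while w <= (b - a) + d:
        if z in tame:
            x = (b + tame[z] - w) % (p - 1)      # g assumed primitive mod p
            if pow(g, x, p) == h:
                return x
        s = J(z)
        z, w = z * pow(g, s, p) % p, w + s
    return None                                  # missed; retry with new jumps

p, g = 1019, 2                # toy prime; 2 is a primitive root mod 1019
h = pow(g, 700, p)
print(kangaroo(g, h, p, 600, 800))   # usually recovers 700
```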

  13. Survey of Public IaaS Cloud Computing API

    NASA Astrophysics Data System (ADS)

    Yamato, Yoji; Moriya, Takaaki; Ogawa, Takeshi; Akahani, Junichi

    Recently, Cloud computing has spread rapidly and many Cloud providers have started their Cloud services. One of the problems with Cloud computing is "Cloud provider lock-in" for users. Cloud computing management APIs, such as those for ordering or provisioning, differ between Cloud providers, so users need to study and implement new APIs when they change Cloud providers. OGF and DMTF have started discussions on the standardization of Cloud computing APIs, but there is no standard yet. In this technical note, to clarify what APIs Cloud providers should provide, we study common APIs for Cloud computing. We survey and compare Cloud computing APIs such as Rackspace Cloud Server, Sun Cloud, GoGrid, ElasticHosts, Amazon EC2, and FlexiScale, which are currently provided as public IaaS Cloud APIs in the market. From the survey, the common APIs should support a REST access style and provide account management, virtual server management, storage management, network management, and resource usage management capabilities. We also show an example of OSS that provides these common APIs, compared to the OSS of normal hosting services.

  14. Interior Point Methods for Computing Optimal Designs

    E-print Network

    Lu, Zhaosong

    2010-01-01

    In this paper we study interior point (IP) methods for solving optimal design problems. In particular, we propose a primal IP method for solving the problems with general convex optimality criteria and establish its global convergence. In addition, we reformulate the problems with A-, D- and E-criterion into linear or log-determinant semidefinite programs (SDPs) and apply standard primal-dual IP solvers such as SDPT3 [21,25] to solve the resulting SDPs. We also compare the IP methods with the widely used multiplicative algorithm introduced by Silvey et al. [18]. The computational results show that the IP methods generally outperform the multiplicative algorithm both in speed and solution quality. Moreover, our primal IP method theoretically converges for general convex optimal design problems while the multiplicative algorithm is only known to converge under some assumptions.
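
    For contrast with the IP approach, the multiplicative algorithm referenced above is only a few lines of numpy for the D-criterion: w_i <- w_i * (a_i^T M(w)^{-1} a_i) / m, where M(w) = sum_i w_i a_i a_i^T is the information matrix. The candidate design points below are random stand-ins; this is a sketch of the baseline method, not the paper's IP solver.

```python
# Multiplicative algorithm (Silvey et al. style) for D-optimal design weights.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 5))            # 30 candidate design points in R^5
n, m = A.shape
w = np.full(n, 1.0 / n)                 # start from the uniform design

for _ in range(500):
    M = A.T @ (w[:, None] * A)          # information matrix sum_i w_i a_i a_i^T
    d = np.einsum('ij,jk,ik->i', A, np.linalg.inv(M), A)   # a_i^T M^-1 a_i
    # Since sum_i w_i d_i = trace(I_m) = m, the update keeps the weights
    # nonnegative and summing to one.
    w *= d / m

print("largest design weights:", np.round(np.sort(w)[-5:], 3))
```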

  15. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report aimed at designing the architecture and studying the performance of a parallel computing environment for Monte Carlo simulation for particle therapy planning, using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed approximately 28 times faster speed than single-thread architecture, combined with improved stability. A study of methods for optimizing system operations also indicated lower cost. PMID:23877155

  16. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

  17. Computational methods for ideal compressible flow

    NASA Technical Reports Server (NTRS)

    Vanleer, B.

    1983-01-01

    Conservative dissipative difference schemes for computing one-dimensional flow are introduced, and the recognition and representation of flow discontinuities are discussed. Multidimensional methods are outlined. Second-order finite volume schemes are introduced. Conversion of difference schemes for a single linear convection equation into schemes for the hyperbolic system of the nonlinear conservation laws of ideal compressible flow is explained. Approximate Riemann solvers are presented. Monotone initial-value interpolation, as well as limiters, switches, and artificial dissipation, are considered.

  18. A computational method for viscous incompressible flows

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Chang, J. L. C.

    1984-01-01

    An implicit, finite-difference procedure for numerically solving viscous incompressible flows is presented. The pressure-field solution is based on the pseudocompressibility method in which a time-derivative pressure term is introduced into the mass-conservation equation to form a set of hyperbolic equations. The pressure-wave propagation and the spreading of the viscous effect is investigated using simple test problems. Computed results for external and internal flows are presented to verify the present method which has proved to be very robust in simulating incompressible flows.
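
    The pseudocompressibility construction referenced above fits in a single pair of equations. A standard form of the artificial-compressibility system is shown below; the exact placement and scaling of the parameter β varies between authors, so treat this as a representative sketch rather than the paper's notation.

```latex
% Pseudocompressibility (artificial compressibility) system: a pressure
% time-derivative is added to mass conservation, making the set hyperbolic.
\frac{\partial p}{\partial \tau} + \beta \, \nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial \tau}
  + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\nabla p + \nu \nabla^{2} \mathbf{u}
```

    At a steady state the pseudo-time derivative of p vanishes, so the incompressibility constraint ∇·u = 0 is recovered exactly.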

  19. Computations of entropy bounds: Multidimensional geometric methods

    SciTech Connect

    Makaruk, H.E.

    1998-02-01

    The entropy bound, a constructive upper bound on the number of bits needed to solve a dichotomy, is represented by the quotient of two multidimensional solid volumes. Minimizing this upper bound requires exact calculation of the volume of this quotient. Three methods for exactly computing the volume of a given nD solid are presented: (1) a general method for calculating any nD volume by slicing it into volumes of decreasing dimension; (2) a method applying an appropriate curvilinear coordinate system, for volumes bounded by symmetrical curvilinear hypersurfaces (spheres, cones, hyperboloids, ellipsoids, cylinders, etc.); and (3) an algorithm for dividing any nD complex into simplices and computing the volumes of the simplices, supplemented by a general formula for the volume of an nD simplex. These mathematical methods enable exact calculation of the volume of any complicated multidimensional solid. They allow for the calculation of the minimal volume and lead to tighter bounds on the needed number of bits.
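
    Method (3) rests on the classical determinant formula for the volume of an nD simplex. A minimal sketch of that formula in Python (an illustration of the cited formula, not the author's code):

        import numpy as np
        from math import factorial

        def simplex_volume(vertices):
            """Volume of an n-dimensional simplex from its n+1 vertices.

            vertices: array of shape (n+1, n). Uses the determinant
            formula V = |det(v_1 - v_0, ..., v_n - v_0)| / n!.
            """
            v = np.asarray(vertices, dtype=float)
            edges = v[1:] - v[0]      # n edge vectors spanning the simplex
            return abs(np.linalg.det(edges)) / factorial(v.shape[1])

        # Example: the unit right simplex in 3D has volume 1/6
        print(simplex_volume([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))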

  20. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
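
    One concrete flavor of the state-space idea is to obtain the steady-state covariance of a stochastic pointing model from a Lyapunov equation and read the rms value off analytically, with no frequency-domain quadrature. The sketch below illustrates only that pattern; the matrices are illustrative placeholders, and the full Sirlin-San Martin-Lucke jitter definition adds a sampling-window weighting that is omitted here:

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        # Pointing dynamics dx = A x dt + B dw (white noise), output y = C x.
        A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # lightly damped oscillator
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])

        # Steady-state covariance P solves A P + P A^T + B B^T = 0
        P = solve_continuous_lyapunov(A, -B @ B.T)
        rms_pointing = np.sqrt(C @ P @ C.T).item()  # rms of the pointing output
        print(rms_pointing)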

  1. Accelerated matrix element method with parallel computing

    NASA Astrophysics Data System (ADS)

    Schouten, D.; DeAbreu, A.; Stelzer, B.

    2015-07-01

    The matrix element method utilizes ab initio calculations of probability densities as powerful discriminants for processes of interest in experimental particle physics. The method has already been used successfully at previous and current collider experiments. However, the computational complexity of this method for final states with many particles and degrees of freedom sets it at a disadvantage compared to supervised classification methods such as decision trees, k nearest-neighbor, or neural networks. This note presents a concrete implementation of the matrix element technique using graphics processing units. Due to the intrinsic parallelizability of multidimensional integration, dramatic speedups can be readily achieved, which makes the matrix element technique viable for general usage at collider experiments.
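
    The parallelism the note exploits is that of Monte Carlo phase-space integration: every sample point can be evaluated independently. A minimal CPU-side sketch of that pattern (vectorized evaluation standing in for GPU threads; the toy integrand is hypothetical, not a real matrix element):

        import numpy as np

        def mc_integrate(f, lo, hi, n=1_000_000, seed=0):
            """Plain Monte Carlo estimate of the integral of f over a box.

            f takes an (n, d) array of sample points and returns n values;
            evaluating all points in one batched call is the embarrassingly
            parallel structure a GPU implementation maps onto threads.
            """
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = lo + (hi - lo) * rng.random((n, lo.size))
            return np.prod(hi - lo) * f(x).mean()

        # Toy "likelihood": a Gaussian over a 3D phase space
        f = lambda x: np.exp(-0.5 * (x ** 2).sum(axis=1))
        print(mc_integrate(f, [-5, -5, -5], [5, 5, 5]))  # ~ (2*pi)**1.5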

  2. Numerical methods for problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Mead, Jodi Lorraine

    1998-12-01

    A goal of computational aeroacoustics is the accurate calculation of noise from a jet in the far field. This work concerns the numerical aspects of accurately calculating acoustic waves over large distances and long time. More specifically, the stability, efficiency, accuracy, dispersion and dissipation in spatial discretizations, time stepping schemes, and absorbing boundaries for the direct solution of wave propagation problems are determined. Efficient finite difference methods developed by Tam and Webb, which minimize dispersion and dissipation, are commonly used for the spatial and temporal discretization. Alternatively, high order pseudospectral methods can be made more efficient by using the grid transformation introduced by Kosloff and Tal-Ezer. Work in this dissertation confirms that the grid transformation introduced by Kosloff and Tal-Ezer is not spectrally accurate because, in the limit, the grid transformation forces zero derivatives at the boundaries. If a small number of grid points are used, it is shown that approximations with the Chebyshev pseudospectral method with the Kosloff and Tal-Ezer grid transformation are as accurate as with the Chebyshev pseudospectral method. This result is based on the analysis of the phase and amplitude errors of these methods, and their use for the solution of a benchmark problem in computational aeroacoustics. For the grid transformed Chebyshev method with a small number of grid points it is, however, more appropriate to compare its accuracy with that of high-order finite difference methods. This comparison, at an accuracy of 10^-3 for a benchmark problem in computational aeroacoustics, is performed for the grid transformed Chebyshev method and the fourth order finite difference method of Tam. Solutions with the finite difference method are as accurate as, and the finite difference method is more efficient than, the Chebyshev pseudospectral method with the grid transformation. The efficiency of the Chebyshev pseudospectral method is further improved by developing Runge-Kutta methods for the temporal discretization which maximize imaginary stability intervals. Two new Runge-Kutta methods, which allow time steps almost twice as large as the maximal order schemes, while holding dissipation and dispersion fixed, are developed. In the process of studying dispersion and dissipation, it is determined that maximizing dispersion minimizes dissipation, and vice versa. In order to determine accurate and efficient absorbing boundary conditions, absorbing layers are studied and compared with one-way wave equations. The matched layer technique for Maxwell equations is equivalent to the absorbing layer technique for the acoustic wave equation introduced by Kosloff and Kosloff. The numerical implementation of the perfectly matched layer for the acoustic wave equation with a large damping parameter results in only a small portion of the wave transmitting into the absorbing layer, and a large portion of the wave reflecting back into the domain. The perfectly matched layer is implemented on a single domain for the solution of the second order wave equation, and when implemented in this manner shows no advantage over the matched layer. Solutions of the second order wave equation, with the absorbing boundary condition imposed either by the matched layer or by the one-way wave equations, are compared. The comparison shows no advantage of the matched layer over the one-way wave equation for the absorbing boundary condition. Hence there is no benefit to be gained by using the matched layer, which necessarily increases the size of the computational domain.
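
    The Kosloff and Tal-Ezer transformation analyzed in the dissertation maps the Chebyshev collocation points toward a more uniform spacing. A short sketch of the standard map (the parameter value is arbitrary, chosen for illustration):

        import numpy as np

        def kte_grid(n, alpha=0.99):
            """Chebyshev-Gauss-Lobatto points mapped by the Kosloff--Tal-Ezer
            transformation x = arcsin(alpha * xi) / arcsin(alpha).

            alpha -> 1 stretches the grid toward uniform spacing, relaxing
            the O(1/N^2) explicit time-step restriction; as the dissertation
            notes, this limit also sacrifices spectral accuracy. alpha -> 0
            recovers the untransformed Chebyshev grid.
            """
            xi = np.cos(np.pi * np.arange(n + 1) / n)
            return np.arcsin(alpha * xi) / np.arcsin(alpha)

        print(kte_grid(8))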

  3. An analytic method to compute star cluster luminosity statistics

    NASA Astrophysics Data System (ADS)

    da Silva, Robert L.; Krumholz, Mark R.; Fumagalli, Michele; Fall, S. Michael

    2014-03-01

    The luminosity distribution of the brightest star clusters in a population of galaxies encodes critical pieces of information about how clusters form, evolve and disperse, and whether and how these processes depend on the large-scale galactic environment. However, extracting constraints on models from these data is challenging, in part because comparisons between theory and observation have traditionally required computationally intensive Monte Carlo methods to generate mock data that can be compared to observations. We introduce a new method that circumvents this limitation by allowing analytic computation of cluster order statistics, i.e. the luminosity distribution of the Nth most luminous cluster in a population. Our method is flexible and requires few assumptions, allowing for parametrized variations in the initial cluster mass function and its upper and lower cutoffs, variations in the cluster age distribution, stellar evolution and dust extinction, as well as observational uncertainties in both the properties of star clusters and their underlying host galaxies. The method is fast enough to make it feasible for the first time to use Markov chain Monte Carlo methods to search parameter space to find best-fitting values for the parameters describing cluster formation and disruption, and to obtain rigorous confidence intervals on the inferred values. We implement our method in a software package called the Cluster Luminosity Order-Statistic Code, which we have made publicly available.
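
    The order statistics the method computes have a simple closed form for independent draws: the Nth most luminous of n clusters lies below a luminosity ell exactly when fewer than N clusters exceed ell, which is a binomial tail. A sketch of that identity (a standalone illustration, not the authors' publicly released code):

        from math import comb

        def cdf_kth_brightest(F_ell, n, k):
            """P(k-th most luminous of n iid clusters is fainter than ell),
            given F_ell = F(ell), the single-cluster luminosity CDF at ell."""
            p_exceed = 1.0 - F_ell
            return sum(comb(n, j) * p_exceed**j * F_ell**(n - j)
                       for j in range(k))

        # 500 clusters, CDF evaluated at a luminosity reached by 0.5% of them
        print(cdf_kth_brightest(F_ell=0.995, n=500, k=1))  # brightest
        print(cdf_kth_brightest(F_ell=0.995, n=500, k=5))  # 5th brightest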

  4. 76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ...Notice of Public Meeting--Cloud Computing Forum & Workshop IV AGENCY...SUMMARY: NIST announces the Cloud Computing Forum & Workshop IV to be held...the U.S. Government (USG) Cloud Computing Technology Roadmap...

  5. 77 FR 26509 - Notice of Public Meeting-Cloud Computing Forum & Workshop V

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-04

    ...Notice of Public Meeting--Cloud Computing Forum & Workshop V AGENCY: National...SUMMARY: NIST announces the Cloud Computing Forum & Workshop V to be held...the U.S. Government (USG) Cloud Computing Technology Roadmap...

  6. Delamination detection using methods of computational intelligence

    NASA Astrophysics Data System (ADS)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.
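
    The dataset-reduction step can be pictured concretely: cluster the FE-generated training samples with k-means and train the surrogate on cluster representatives, so far fewer expensive samples anchor the approximation. A toy sketch of that pattern (hypothetical data and a plain MLP; the paper's Bayesian regularization and memetic search are omitted):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, (5000, 3))    # e.g. normalized frequency shifts
        y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=5000)

        # Reduce 5000 samples to 200 k-means representatives
        km = KMeans(n_clusters=200, n_init=10, random_state=0).fit(X)
        y_centers = np.array([y[km.labels_ == c].mean() for c in range(200)])

        # Fit the ANN surrogate on the reduced set; evaluation is then cheap
        ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                           random_state=0).fit(km.cluster_centers_, y_centers)
        print(ann.predict([[0.5, 0.5, 0.5]]))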

  7. Computational studies of sialyllactones: methods and uses.

    PubMed

    Parrill, A L; Mamuya, N; Dolata, D P; Gervay, J

    1997-06-01

    N-Acetylneuraminic acid (1) is a common sugar in many biological recognition processes. Neuraminidase enzymes recognize and cleave terminal sialic acids from cell surfaces. Viral entry into host cells requires neuraminidase activity, thus inhibition of neuraminidase is a useful strategy for development of drugs for viral infections. A recent crystal structure for influenza viral neuraminidase with sialic acid bound shows that the sialic acid is in a boat conformation [Prot Struct Funct Genet 14: 327 (1992)]. Our studies seek to determine if structural pre-organization can be achieved through the use of sialyllactones. Determination of whether sialyllactones are pre-organized in a binding conformation requires conformational analysis. Our inability to find a systematic study comparing the results obtained by various computational methods for carbohydrate modeling led us to compare two different conformational analysis techniques, four different force fields, and three different solvent models. The computational models were compared based on their ability to reproduce experimental coupling constants for sialic acid, sialyl-1,4-lactone, and sialyl-1,7-lactone derivatives. This study has shown that the MM3 force field, using the implicit solvent model for water implemented in Macromodel, best reproduces the experimental coupling constants. The low-energy conformations generated by this combination of computational methods are pre-organized toward conformations which fit well into the active site of neuraminidase. PMID:9249154

  8. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  9. Review of Computational Stirling Analysis Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

    Nuclear thermal to electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in its current designs could be better understood. However, they are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of Stirling loss understanding may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra Hi-Fi technique, is presented in detail.

  10. Soft Computing Methods for Disulfide Connectivity Prediction

    PubMed Central

    Márquez-Chamorro, Alfonso E.; Aguilar-Ruiz, Jesús S.

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods. PMID:26523116

  11. Predicting the Number of Public Computer Terminals Needed for an On-Line Catalog: A Queuing Theory Approach.

    ERIC Educational Resources Information Center

    Knox, A. Whitney; Miller, Bruce A.

    1980-01-01

    Describes a method for estimating the number of cathode ray tube terminals needed for public use of an online library catalog. Authors claim method could also be used to estimate needed numbers of microform readers for a computer output microform (COM) catalog. Formulae are included. (Author/JD)
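
    The article's formulae are not reproduced in this record, but the standard machinery for such an estimate is the M/M/c queue and the Erlang C delay probability. A minimal sketch under that assumption (function names and service targets here are illustrative, not the authors'):

        from math import factorial

        def erlang_c(c, a):
            """Probability an arriving patron must wait, for an M/M/c queue
            with offered load a = arrival_rate * mean_session_time (erlangs)."""
            rho = a / c
            if rho >= 1.0:
                return 1.0
            top = a**c / factorial(c)
            series = sum(a**k / factorial(k) for k in range(c))
            return top / ((1 - rho) * series + top)

        def terminals_needed(arrivals_per_hr, session_min, max_wait_prob=0.2):
            """Smallest terminal count keeping P(wait) below a target."""
            a = arrivals_per_hr * session_min / 60.0
            c = max(1, int(a) + 1)           # stability requires c > a
            while erlang_c(c, a) > max_wait_prob:
                c += 1
            return c

        print(terminals_needed(arrivals_per_hr=30, session_min=10))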

  12. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
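
    Since the record names Genetic Algorithms as one of the two evolutionary engines, a bare-bones real-coded GA illustrates the loop the retrieval wraps around a spectral fitness function. Everything here is a generic sketch (a toy quadratic mismatch in place of observed-minus-synthetic spectra), not NASA's implementation:

        import numpy as np

        def ga_minimize(fitness, bounds, pop=60, gens=200, sigma=0.1, seed=0):
            """Tournament selection, blend crossover, Gaussian mutation."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, float).T
            x = lo + (hi - lo) * rng.random((pop, lo.size))
            for _ in range(gens):
                f = np.array([fitness(ind) for ind in x])
                i, j = rng.integers(0, pop, (2, pop))
                parents = x[np.where(f[i] < f[j], i, j)]  # tournament winners
                w = rng.random((pop, lo.size))
                kids = w * parents + (1 - w) * parents[rng.permutation(pop)]
                kids += sigma * (hi - lo) * rng.normal(size=kids.shape)
                x = np.clip(kids, lo, hi)
            f = np.array([fitness(ind) for ind in x])
            return x[f.argmin()], f.min()

        # Toy retrieval: recover two "trace gas" parameters from a target
        target = np.array([0.3, 1.7])
        best, err = ga_minimize(lambda p: ((p - target) ** 2).sum(),
                                bounds=[(0, 1), (0, 3)])
        print(best, err)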

  13. Public Experiments and their Analysis with the Replication Method

    NASA Astrophysics Data System (ADS)

    Heering, Peter

    2007-06-01

    One of those who failed to establish himself as a natural philosopher in 18th-century Paris was the future revolutionary Jean Paul Marat. He not only published several monographs on heat, optics and electricity, in which he attempted to characterise his work as purely empirical, but also tried to establish himself as a public lecturer. From the analysis of his experiments using the replication method, it became obvious that the written descriptions are missing several relevant aspects of the experiments. In my paper, I discuss the experiences made in analysing these experiments and suggest possible relations between these publications and the public demonstrations.

  14. Designing and Reporting on Computational Experiments with Heuristic Methods

    E-print Network

    Barr, Richard

    This report discusses the design of computational experiments to test heuristic methods and provides guidelines for reporting on heuristics, full disclosure of experimental conditions, and integrity in and reproducibility of the reported results.

  15. Computational electromagnetic methods for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear system of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel high-permittivity and low-frequency stable internally combined volume-surface IE method was developed. The method not only applies to the analysis of high-permittivity objects, but is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3.0 times less volume than Figure-8 coils. Uncertainty quantification (UQ): The location/volume/depth of the stimulated region during TMS is often strongly affected by variability in the position and orientation of TMS coils, as well as anatomical differences between patients. A surrogate model-assisted UQ framework was developed and used to statistically characterize TMS depression therapy. The framework identifies key parameters that strongly affect TMS fields, and partially explains variations in TMS treatment responses.
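
    The "factor once, then solve fast" structure of the direct FD solver can be illustrated with an off-the-shelf sparse LU: the discretization matrix is factored a single time, after which each new right-hand side (e.g. a new coil position) costs only triangular substitutions. This stand-in uses a generic 2D Laplacian, not the thesis solver:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import splu

        n = 200
        lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
        A = sp.kronsum(lap1d, lap1d).tocsc()   # 2D Laplacian, n*n unknowns

        lu = splu(A)               # expensive factorization, performed once
        for k in range(3):         # repeated solves are then cheap
            rhs = np.random.default_rng(k).random(n * n)  # e.g. new source
            x = lu.solve(rhs)
            print(x.max())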

  16. Computational predictive methods for fracture and fatigue

    NASA Technical Reports Server (NTRS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-01-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion-fatigue problems.

  17. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 2 2010-07-01 2010-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  18. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 2 2013-07-01 2013-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  19. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 2 2011-07-01 2011-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  20. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 2 2014-07-01 2014-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  1. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 2 2012-07-01 2012-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  2. Modules and methods for all photonic computing

    DOEpatents

    Schultz, David R. (Knoxville, TN); Ma, Chao Hung (Oak Ridge, TN)

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

  3. Computational simulation methods for composite fracture mechanics

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1988-01-01

    Structural integrity, durability, and damage tolerance of advanced composites are assessed by studying damage initiation at various scales (micro, macro, and global) and accumulation and growth leading to global failure, quantitatively and qualitatively. In addition, various fracture toughness parameters associated with a typical damage and its growth must be determined. Computational structural analysis codes to aid the composite design engineer in performing these tasks were developed. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites given a prescribed damage. The general purpose finite element code MSC/NASTRAN was used to simulate the interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.

  4. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
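
    A minimal, non-overlapping cousin of the substructuring idea is block Jacobi: invert the operator restricted to each subdomain and apply the pieces independently (hence in parallel). The sketch below applies it as a stationary iteration on a 1D Poisson problem; the paper's overlapping Schwarz with mixed discretizations and Newton outer iteration is richer than this:

        import numpy as np

        def block_jacobi_precond(A, nblocks):
            """Return a function applying the block-Jacobi M^{-1}:
            each diagonal subdomain block of A is inverted separately."""
            cuts = np.array_split(np.arange(A.shape[0]), nblocks)
            inv_blocks = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in cuts]
            def apply(r):
                z = np.zeros_like(r)
                for idx, Binv in zip(cuts, inv_blocks):
                    z[idx] = Binv @ r[idx]   # independent subdomain solves
                return z
            return apply

        n = 64
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Poisson
        b = np.ones(n)
        Minv = block_jacobi_precond(A, nblocks=4)
        x = np.zeros(n)
        for _ in range(400):                 # preconditioned Richardson
            x += Minv(b - A @ x)
        print(np.linalg.norm(b - A @ x))     # residual shrinks steadily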

  5. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  6. 76 FR 67418 - Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ...Publication 500-293, US Government Cloud Computing Technology Roadmap, Release...Publication 500-293, US Government Cloud Computing Technology Roadmap, Release...accelerate their adoption of cloud computing. The roadmap has been...

  7. Computational Methods for Electron-Atom Collisions

    NASA Astrophysics Data System (ADS)

    Bartschat, Klaus

    2011-10-01

    In recent years, much progress has been achieved in calculating reliable cross-section data for electron scattering from atoms and ions, in particular quasi-one and quasi-two electron systems such as H, He, the alkalis, and the alkaline-earth metals. Until recently, however, accurate calculations of electron collisions with more complex targets, such as the heavy noble gases Ne-Xe, have remained a significant challenge to theory. We will give an overview of the computational methods presently used for ab initio electron-atom collision calculations, with particular emphasis on their strengths and weaknesses, range of applicability, and expected accuracy. In particular, we will illustrate with a few examples how the B-spline R-matrix (BSR) method with non-orthogonal orbitals has been able to dramatically improve the quality of theoretical datasets for oscillator strengths and in particular for electron collisions with the heavy noble gases. This work was performed in collaboration with Oleg Zatsarinny. It is supported by the United States National Science Foundation under PHY-0757755 and PHY-0903818, and the TeraGrid allocation TG-PHY090031.

  8. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...2014-04-01 false Method and timing for real-time public reporting. 43.3 Section 43...TRADING COMMISSION (CONTINUED) REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a)...

  9. ComputEL 2014 2014 Workshop on the Use of Computational Methods in the

    E-print Network

    On the other hand, it severely limits the ability of computational linguists to test their methods on the full range of human languages. There is significant potential in collaboration between computational linguists (and other computer scientists) and documentary linguists…

  10. Computational Evaluation of the Traceback Method

    ERIC Educational Resources Information Center

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  11. Awareness of Accessibility Barriers in Computer-Based Instructional Materials and Faculty Demographics at South Dakota Public Universities

    ERIC Educational Resources Information Center

    Olson, Christopher

    2013-01-01

    Advances in technology and course delivery methods have enabled persons with disabilities to enroll in higher education at an increasing rate. Federal regulations state persons with disabilities must be granted equal access to the information contained in computer-based instructional materials, but faculty at the six public universities in South Dakota…

  12. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  13. Publics in Practice: Ubiquitous Computing at a Shelter for Homeless Mothers

    E-print Network

    Edwards, Keith

    Our system connects mobile phones, a shared display, and a Web application to support communication and organizational coordination at a shelter for homeless mothers. Author keywords: constructed publics, homeless, urban computing, longitudinal…

  14. Public health surveillance: historical origins, methods and evaluation.

    PubMed Central

    Declich, S.; Carter, A. O.

    1994-01-01

    In the last three decades, disease surveillance has grown into a complete discipline, quite distinct from epidemiology. This expansion into a separate scientific area within public health has not been accompanied by parallel growth in the literature about its principles and methods. The development of the fundamental concepts of surveillance systems provides a basis on which to build a better understanding of the subject. In addition, the concepts have practical value as they can be used in designing new systems as well as understanding or evaluating currently operating systems. This article reviews the principles of surveillance, beginning with a historical survey of the roots and evolution of surveillance, and discusses the goals of public health surveillance. Methods for data collection, data analysis, interpretation, and dissemination are presented, together with proposed procedures for evaluating and improving a surveillance system. Finally, some points to be considered in establishing a new surveillance system are presented. PMID:8205649

  15. An Immersed Boundary Method for Computing Anisotropic Permeability of

    E-print Network

    Al Hanbali, Ahmad

    Presentation: An Immersed Boundary Method for Computing Anisotropic Permeability of Structured Porous Media, PhD-TW Colloquium, Neuchâtel, Switzerland, June 11, 2009. The indexed extract preserves only slide headings (e.g. "remarks & outlook").

  16. Saving lives: a computer simulation game for public education about emergencies

    SciTech Connect

    Morentz, J.W.

    1985-01-01

    One facet of the Information Revolution in which the nation finds itself involves the utilization of computers, video systems, and a variety of telecommunications capabilities by those who must cope with emergency situations. Such technologies possess a significant potential for performing emergency public education and transmitting key information that is essential for survival. An "Emergency Public Information Competitive Challenge Grant," under the aegis of the Federal Emergency Management Agency (FEMA), has sponsored an effort to use computer technology - both large, time-sharing systems and small personal computers - to develop computer games which will help teach techniques of emergency management to the public at large. 24 references.

  17. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

    EPA Science Inventory

    The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

  18. Computational methods in sequence and structure prediction

    NASA Astrophysics Data System (ADS)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks as the following: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our result shows there exists a putative cis-regulatory element "AC(C/G)TAC(C)" in the upstream regions of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes, and this element might also be the binding site for the MYB class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1 deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of 876 Arabidopsis genes; and (b) exhaustive scanning of all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, called "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to their occurrence within the genome. (c) We will introduce a new general purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework. With this framework we have developed a software package which is capable of designing novel protein structures at the atomic resolution. This software package allows us to perform protein structure design with a flexible backbone. The backbone flexibility includes loop region relaxation as well as a secondary structure collective mode relaxation scheme. (Abstract shortened by UMI.)
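
    The enrichment test described for the ABRE elements is, in essence, a comparison of a motif's frequency inside the 876-gene set against its genome-wide frequency, for which a hypergeometric tail probability is the standard tool. A sketch under that assumption (the counts below are hypothetical, not the dissertation's values):

        from scipy.stats import hypergeom

        def motif_enrichment(hits_in_set, set_size, hits_genome, genome_size):
            """P-value that a motif occurs in at least hits_in_set of the
            set_size upstream regions, given its genome-wide frequency."""
            # sf(k-1) gives P(X >= hits_in_set) for the hypergeometric X
            return hypergeom.sf(hits_in_set - 1, genome_size,
                                hits_genome, set_size)

        print(motif_enrichment(hits_in_set=214, set_size=876,
                               hits_genome=2900, genome_size=27000))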

  19. Atomistic Method Applied to Computational Modeling of Surface Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Abel, Phillip B.

    2000-01-01

    The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. The computational tool, which is based on the BFS (Bozzolo, Ferrante, and Smith) method for the calculation of the energetics, consists of a small number of simple PC-based computer codes that deal with the different aspects of surface alloy formation. Two analysis modes are available within this package. The first mode provides an atom-by-atom description of real and virtual stages during the process of surface alloying, based on the construction of catalogues of configurations where each configuration describes one possible atomic distribution. BFS analysis of this catalogue provides information on accessible states, possible ordering patterns, and details of island formation or film growth. More importantly, it provides insight into the evolution of the system. Software developed by the Computational Materials Group allows for the study of an arbitrary number of elements forming surface alloys, including an arbitrary number of surface atomic layers. The second mode involves large-scale temperature-dependent computer simulations that use the BFS method for the energetics and provide information on the dynamic processes during surface alloying. These simulations require the implementation of Monte-Carlo-based codes with high efficiency within current workstation environments. This methodology capitalizes on the advantages of the BFS method: there are no restrictions on the number or type of elements or on the type of crystallographic structure considered. This removes any restrictions in the definition of the configuration catalogues used in the analytical calculations, thus allowing for the study of arbitrary ordering patterns, ultimately leading to the actual surface alloy structure. Moreover, the Monte Carlo numerical technique used for the large-scale simulations allows for a detailed visualization of the simulated process, the main advantage of this type of analysis being the ability to understand the underlying features that drive these processes. Because of the simplicity of the BFS method for the energetics used in these calculations, a detailed atom-by-atom analysis can be performed at any…

  20. Checklist and Pollard Walk butterfly survey methods on public lands

    USGS Publications Warehouse

    Royer, R.A.; Austin, J.E.; Newton, W.E.

    1998-01-01

    Checklist and Pollard Walk butterfly survey methods were contemporaneously applied to seven public sites in North Dakota during the summer of 1995. Results were compared for effect of method and site on total number of butterflies and total number of species detected per hour. Checklist searching produced significantly more butterfly detections per hour than Pollard Walks at all sites. Number of species detected per hour did not differ significantly either among sites or between methods. Many species were detected by only one method, and at most sites generalist and invader species were more likely to be observed during checklist searches than during Pollard Walks. Results indicate that checklist surveys are a more efficient means for initial determination of a species list for a site, whereas for long-term monitoring the Pollard Walk is more practical and statistically manageable. Pollard Walk transects are thus recommended once a prairie butterfly fauna has been defined for a site by checklist surveys.

  1. Computational complexity for the two-point block method

    NASA Astrophysics Data System (ADS)

    See, Phang Pei; Majid, Zanariah Abdul

    2014-12-01

    In this paper, we discuss and compare the computational complexity of the two-point block method and the one-point method of Adams type. The computational complexity of both methods is determined from the number of arithmetic operations performed and is expressed in O(n). These two methods are used to solve two-point second order boundary value problems directly and are implemented using a variable step size strategy adapted with the multiple shooting technique via a three-step iterative method. Two numerical examples are tested. The results show that the computational complexity of these methods gives a reliable estimate of their cost in terms of execution time. We conclude that the two-point block method has better computational performance compared to the one-point method as the total number of steps grows larger.

  2. COMPUTATIONAL METHODS FOR PREDICTING TRANSMEMBRANE ALPHA HELICES

    E-print Network

    Computational Molecular Biology final project, December 6th, 2002. Introduction: protein crystal structures … there is even a program on the Web called EVA at http://cubic.bioc.columbia.edu/eva

  3. Computational Anatomy, Object Matching, and the Level Set Method

    E-print Network

    Ferguson, Thomas S.

    We present a new framework for warping pairs of overlapping and non-overlapping objects for object matching in computational anatomy; the connection to infinite-dimensional group actions is discussed. Computational anatomy [1, 2] is an emerging…

  4. DEVELOPING METHODS FOR COMPUTER PROGRAMMING BY MUSICAL PERFORMANCE AND COMPOSITION

    E-print Network

    Miranda, Eduardo Reck

    There has been successful work in sonifying computer program code to help debugging. This paper investigates the reverse process, allowing music to be used to write computer programs. Such an approach would be less language…

  5. Methods, Metrics and Motivation for a Green Computer Science Program

    E-print Network

    Way, Thomas

    Although the vision of a truly paperless office [11] has yet to be realized, it certainly began a movement toward Green Computing. Computer science programs are uniquely positioned to promote greater awareness of Green Computing, using the academic setting…

  6. Computational aeroacoustics: Its methods and applications

    NASA Astrophysics Data System (ADS)

    Zheng, Shi

    The first part of this thesis deals with the methodology of computational aeroacoustics (CAA). It is shown that although the overall accuracy of a broadband optimized upwind scheme can be improved to some degree, a scheme that is accurate everywhere in a wide range is not possible because increasing the accuracy for large wavenumbers is always at the expense of decreasing that for smaller wavenumbers. Partly to avoid such a dilemma, optimized multi-component schemes are proposed that are superior to optimized broadband schemes for a sound field with dominant wavenumbers. The Fourier analysis shows that even for broadband waves an optimized central multi-component scheme is at least comparable to an optimized central broadband scheme. Numerical implementation of the impedance boundary condition in the time domain is a unique and challenging topic in CAA. A benchmark problem is proposed for such implementation and its analytical solution is derived. A CAA code using Tam and Auriault's formulation of the broadband time-domain impedance boundary condition accurately reproduces the analytical solution. For the duct environment, the code also accurately predicts the analytical solution of a semi-infinite impedance duct problem and the experimental data from the NASA Langley Flow Impedance Tube Facility. The second part of the thesis presents applications of the developed CAA codes. A time-domain method is formulated to separate the instability waves from the acoustic waves of the linearized Euler equations in a critical sheared mean flow. Its effectiveness is demonstrated with the CAA code solving a test problem. Other applications are concerned with optimization using the CAA codes. A noise prediction and optimization system for turbofan engine inlet duct design is developed and applied in three scenarios: liner impedance optimization, duct geometry optimization and liner layout optimization. The results show that the system is effective in finding design variable values in favor of a given objective. In a different context of optimization, a conceptual design for adaptive noise control is developed. It consists of a liner with controllable impedance and an expert system realized with an optimizer coupled with the CAA code. The expert system is shown to be able to find impedance properties that minimize the difference between the current and the desired acoustic fields.
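
    The dispersion and dissipation analysis that runs through the first part is usually phrased in terms of a scheme's modified wavenumber: apply the stencil to a Fourier mode and compare the effective wavenumber with the exact one. A small sketch of that diagnostic for a standard fourth-order central first-derivative stencil (an illustration of the analysis technique, not the dissertation's optimized schemes):

        import numpy as np

        def modified_wavenumber(coeffs, offsets, k):
            """Effective wavenumber k* of a first-derivative stencil
            (1/h) * sum_j c_j u(x + j h), at scaled wavenumbers k = kappa*h.
            The gap between k* and k is the scheme's dispersion error."""
            return sum(c * np.exp(1j * j * k)
                       for c, j in zip(coeffs, offsets)).imag

        k = np.linspace(0.01, np.pi, 5)
        c4 = [1/12, -2/3, 0, 2/3, -1/12]       # 4th-order central stencil
        print(np.c_[k, modified_wavenumber(c4, [-2, -1, 0, 1, 2], k)])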

  7. Public participation GIS: a method for identifying ecosystems services

    USGS Publications Warehouse

    Brown, Greg; Montag, Jessica; Lyon, Katie

    2012-01-01

    This study evaluated the use of an Internet-based public participation geographic information system (PPGIS) to identify ecosystem services in Grand County, Colorado. Specific research objectives were to examine the distribution of ecosystem services, identify the characteristics of participants in the study, explore potential relationships between ecosystem services and land use and land cover (LULC) classifications, and assess the methodological strengths and weakness of the PPGIS approach for identifying ecosystem services. Key findings include: (1) Cultural ecosystem service opportunities were easiest to identify while supporting and regulatory services most challenging, (2) participants were highly educated, knowledgeable about nature and science, and have a strong connection to the outdoors, (3) some LULC classifications were logically and spatially associated with ecosystem services, and (4) despite limitations, the PPGIS method demonstrates potential for identifying ecosystem services to augment expert judgment and to inform public or environmental policy decisions regarding land use trade-offs.

  8. A comparison of computational methods for identifying virulence factors.

    PubMed

    Zheng, Lu-Lu; Li, Yi-Xue; Ding, Juan; Guo, Xiao-Kui; Feng, Kai-Yan; Wang, Ya-Jun; Hu, Le-Le; Cai, Yu-Dong; Hao, Pei; Chou, Kuo-Chen

    2012-01-01

    Bacterial pathogens continue to threaten public health worldwide today. Identification of bacterial virulence factors can help to find novel drug/vaccine targets against pathogenicity. It can also help to reveal the mechanisms of the related diseases at the molecular level. With the explosive growth in protein sequences generated in the postgenomic age, it is highly desired to develop computational methods for rapidly and effectively identifying virulence factors according to their sequence information alone. In this study, based on the protein-protein interaction networks from the STRING database, a novel network-based method was proposed for identifying the virulence factors in the proteomes of UPEC 536, UPEC CFT073, P. aeruginosa PAO1, L. pneumophila Philadelphia 1, C. jejuni NCTC 11168 and M. tuberculosis H37Rv. Evaluated on the same benchmark datasets derived from the aforementioned species, the identification accuracies achieved by the network-based method were around 0.9, significantly higher than those by the sequence-based methods such as BLAST, feature selection and VirulentPred. Further analysis showed that the functional associations such as the gene neighborhood and co-occurrence were the primary associations between these virulence factors in the STRING database. The high success rates indicate that the network-based method is quite promising. The novel approach holds high potential for identifying virulence factors in many other various organisms as well because it can be easily extended to identify the virulence factors in many other bacterial species, as long as the relevant significant statistical data are available for them. PMID:22880014
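
    A minimal stand-in for the network-based idea is guilt-by-association scoring: rank each protein by the confidence-weighted fraction of its interaction partners already annotated as virulence factors. The sketch below uses a hypothetical toy network with STRING-like edge weights; the paper's actual method is more involved:

        def neighbor_vote_scores(adjacency, known_virulence):
            """adjacency: {protein: {neighbor: confidence_weight}};
            known_virulence: set of annotated virulence factors.
            Returns a guilt-by-association score per protein."""
            scores = {}
            for prot, nbrs in adjacency.items():
                total = sum(nbrs.values())
                hits = sum(w for n, w in nbrs.items() if n in known_virulence)
                scores[prot] = hits / total if total else 0.0
            return scores

        net = {"p1": {"p2": 0.9, "p3": 0.4}, "p2": {"p1": 0.9, "p4": 0.7},
               "p3": {"p1": 0.4}, "p4": {"p2": 0.7}}
        print(neighbor_vote_scores(net, known_virulence={"p2", "p4"}))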

  9. Preprint accepted for publication in Computers and Education Computer-Assisted Assignments in a Large Physics Class

    E-print Network

    M. Thoennessen and M. J. Harrison, Department of Physics & Astronomy and National… was used in a large introductory physics class for the first time. The students rated the system extremely…

  10. Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing

    E-print Network

    International Association for Cryptologic Research (IACR)

    Cloud Computing has been envisioned as the next-generation architecture… the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third…

  11. 77 FR 26509 - Notice of Public Meeting-Cloud Computing Forum & Workshop V

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-04

    ... National Institute of Standards and Technology Notice of Public Meeting--Cloud Computing Forum & Workshop V... announces the Cloud Computing Forum & Workshop V to be held on Tuesday, Wednesday and Thursday, June 5, 6... provide information on the U.S. Government (USG) Cloud Computing Technology Roadmap initiative....

  12. 76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ... National Institute of Standards and Technology Notice of Public Meeting--Cloud Computing Forum & Workshop...: NIST announces the Cloud Computing Forum & Workshop IV to be held on November 2, 3 and 4, 2011. This workshop will provide information on the U.S. Government (USG) Cloud Computing Technology...

  13. 77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-18

    ... National Institute of Standards and Technology Notice of Public Meeting--Cloud Computing and Big Data Forum...) announces a Cloud Computing and Big Data Forum and Workshop to be held on Tuesday, January 15, Wednesday... workshop. The NIST Cloud Computing and Big Data Forum and Workshop will bring together leaders...

  14. 77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-18

    ...Public Meeting--Cloud Computing and Big Data Forum and Workshop AGENCY: National...announces a Cloud Computing and Big Data Forum and Workshop to be held on Tuesday...workshop. The NIST Cloud Computing and Big Data Forum and Workshop will bring...

  15. 12 CFR 227.25 - Unfair balance computation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...2010-01-01 2010-01-01 false Unfair balance computation method. 227.25 Section...Account Practices Rule § 227.25 Unfair balance computation method. (a) General rule...bank must not impose finance charges on balances on a consumer credit card account...

  16. Designing and Reporting on Computational Experiments with Heuristic Methods

    E-print Network

    Barr, Richard

    Designing and Reporting on Computational Experiments with Heuristic Methods. Richard S. Barr. This report discusses the design of computational experiments to test heuristic methods and provides guidelines for the reporting of heuristics, full disclosure of experimental conditions, and integrity in and reproducibility of…

  17. A Fourier pseudospectral method for some computational aeroacoustics problems

    E-print Network

    Huang, Xun

    A Fourier pseudospectral method for some computational aeroacoustics problems. Xun Huang and Xin…, Southampton, SO17 1BJ, UK. Abstract: A Fourier pseudospectral time-domain method is applied to wave propagation problems pertinent to computational aeroacoustics. The original algorithm of the Fourier pseudospectral…

  18. 12 CFR 227.25 - Unfair balance computation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... under 12 CFR 226.12 or 12 CFR 226.13; or (2) Adjustments to finance charges as a result of the return of... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Unfair balance computation method. 227.25... Practices Rule § 227.25 Unfair balance computation method. (a) General rule. Except as provided in...

  19. Finite Element Computations for a Conservative Level Set Method

    E-print Network

    Frey, Pascal

    Finite Element Computations for a Conservative Level Set Method Applied to Two-Phase Stokes Flow. Dag Lindbo. Master of Science Thesis, Stockholm, Sweden, 2006. Contents fragments: 2.2 The finite element method; 2.3 Finite…

  20. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

  1. Computational Methods for Analyzing Health News Coverage

    ERIC Educational Resources Information Center

    McFarlane, Delano J.

    2011-01-01

    Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…

  2. Bioinformatics Workshop #2 Computational Methods for Rational

    E-print Network

    Ronquist, Fredrik

    Fall 2006, School of Computational Science (SCS). Author and Instructor: Steven M. Thompson. Fragments: an alignment of the Human Papilloma Virus major capsid protein L1, used for type and strain differentiation; …to cutting-edge forensic pathology techniques, PCR is being used to analyze tinier concentrations of DNA than…

  3. Computing with DNA. From: Methods in Molecular Biology, vol. 132: Bioinformatics Methods and Protocols

    E-print Network

    Kari, Lila

    From: Methods in Molecular Biology, vol. 132: Bioinformatics Methods and Protocols. …the tools of molecular biology to solve a difficult computational problem. Adleman's experiment solved an instance of… The main idea was the encoding of data in DNA strands and the use of tools from molecular biology…

  4. COMSAC: Computational Methods for Stability and Control. Part 1

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: A NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?

  5. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher-quality gene clustering patterns than most other clustering methods. However, few gene order computing methods are available, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO), and their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and the squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with either the GA or the ACO method for AD microarray data. Conclusion Compared with the Pearson distance and the Euclidean distance, the squared Euclidean distance generated the best-quality gene order computed by the GA and ACO methods. PMID:23369541
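
    For reference, the three distance formulas compared in this record take only a few lines each; the sketch below evaluates them for a pair of made-up expression profiles.

        # The three distances compared in the study, applied to toy profiles.
        import numpy as np

        def pearson_distance(x, y):
            return 1.0 - np.corrcoef(x, y)[0, 1]  # 1 - correlation coefficient

        def euclidean_distance(x, y):
            return np.linalg.norm(x - y)

        def squared_euclidean_distance(x, y):
            return float(np.sum((x - y) ** 2))

        x = np.array([2.1, 0.5, 3.3, 1.0])
        y = np.array([1.9, 0.7, 2.8, 1.4])
        for d in (pearson_distance, euclidean_distance, squared_euclidean_distance):
            print(d.__name__, round(float(d(x, y)), 4))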

  6. Computational methods for internal flows with emphasis on turbomachinery

    NASA Technical Reports Server (NTRS)

    Mcnally, W. D.; Sockol, P. M.

    1981-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes: inviscid methods, which deal specifically with turbomachinery applications, and viscous methods, which deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  7. Soft computing methods in design of superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1995-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modeled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  8. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.
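
    A minimal sketch of the two-stage design pattern described in the two records above: a surrogate model predicting the attack parameter K(sub a) is searched by a simple genetic algorithm. The function ka_surrogate below is only a stand-in for the trained neural network, and all GA settings are illustrative.

        # Toy GA minimizing a stand-in for the trained Ka surrogate model.
        import random

        def ka_surrogate(x):
            # placeholder for the neural network Ka = f(chemistry, temperature)
            return sum((xi - 0.3) ** 2 for xi in x)

        def genetic_minimize(f, dim=5, pop=40, gens=60, mut=0.1):
            population = [[random.random() for _ in range(dim)] for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=f)
                parents = population[: pop // 2]        # truncation selection
                children = []
                for _ in range(pop - len(parents)):
                    a, b = random.sample(parents, 2)
                    child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # crossover
                    child = [min(1.0, max(0.0, c + random.gauss(0, mut)))
                             for c in child]            # bounded mutation
                    children.append(child)
                population = parents + children
            return min(population, key=f)

        best = genetic_minimize(ka_surrogate)
        print("optimized composition:", [round(v, 3) for v in best])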

  9. Computational methods for physical mapping of chromosomes

    SciTech Connect

    Torney, D.C.; Schenk, K.R. ); Whittaker, C.C. Los Alamos National Lab., NM ); White, S.W. )

    1990-01-01

    A standard technique for mapping a chromosome is to randomly select pieces, to use restriction enzymes to cut these pieces into fragments, and then to use the fragments for estimating the probability of overlap of these pieces. Typically, the order of the fragments within a piece is not determined, and the observed fragment data from each pair of pieces must be permuted N1 × N2 ways to evaluate the probability of overlap, N1 and N2 being the observed number of fragments in the two selected pieces. We will describe computational approaches used to substantially reduce the computational complexity of the calculation of overlap probability from fragment data. Presently, about 10^-4 CPU seconds on one processor of an IBM 3090 is required for calculation of overlap probability from the fragment data of two randomly selected pieces, with an average of ten fragments per piece. A parallel version has been written using IBM clustered FORTRAN. Parallel measurements for 1, 6, and 12 processors will be presented. This approach has proven promising in the mapping of chromosome 16 at Los Alamos National Laboratory. We will also describe other computational challenges presented by physical mapping. 4 refs., 4 figs., 1 tab.

  10. Statistical and Computational Methods for Genetic Diseases: An Overview

    PubMed Central

    Di Taranto, Maria Donata

    2015-01-01

    The identification of the causes of genetic diseases has been carried out by several approaches of increasing complexity. Innovation in genetic methodologies has led to the production of large amounts of data that need the support of statistical and computational methods to be correctly processed. The aim of this paper is to provide an overview of statistical and computational methods, paying particular attention to methods for sequence analysis and for complex diseases. PMID:26106440

  11. Computer-Aided Dispatch System as a Decision Making Tool in Public and Private Sectors

    E-print Network

    Lee, I-Jen

    We describe in detail seven distinct areas in both public and private sectors in which a real-time computer-aided dispatch system is applicable to the allocation of scarce resources. Characteristics of a real-time ...

  12. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, college information construction mainly involves the construction of college networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing, and grid computing: data are stored in the cloud, and software and services are placed in the cloud and built on top of various standards and protocols, so that they can be accessed through all kinds of equipment. This article introduces cloud computing and its functions, then analyzes the existing problems of college network resource management; cloud computing technology and methods are then applied to the construction of a college information sharing platform.

  13. Universal Tailored Access: Automating Setup of Public and Classroom Computers.

    ERIC Educational Resources Information Center

    Whittaker, Stephen G.; Young, Ted; Toth-Cohen, Susan

    2002-01-01

    This article describes a setup smart access card that enables users with visual impairments to customize magnifiers and screen readers on computers by loading the floppy disk into the computer and finding and pressing two successive keys. A trial with four elderly users found instruction took about 15 minutes. (Contains 3 references.) (CR)

  14. Publicly Auditable Secure Multi-Party Computation* Carsten Baum

    E-print Network

    International Association for Cryptologic Research (IACR)

    …zero-knowledge proofs during the computation. A recent trend in MPC protocols is to have a more expensive preprocessing phase that allows the use of only cheap information-theoretic primitives in the actual computation. Unfortunately…

  15. Computer Competencies for All Educators in North Carolina Public Schools.

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh.

    To assist school systems in establishing computer competencies for inservice teacher training and personnel hiring guidelines, the North Carolina State Board of Education in 1985 approved the recommendations of a state task force, and identified three levels of computer competencies for teachers (K-12), i.e., competencies needed by all educators,…

  16. Caller behaviour classification using computational intelligence methods.

    PubMed

    Patel, Pretesh B; Marwala, Tshilidzi

    2010-02-01

    A classification system that accurately categorizes caller interaction within Interactive Voice Response systems is essential in determining caller behaviour. Field and call performance classifiers for a pay beneficiary application are developed. Genetic Algorithms, Multi-Layer Perceptron neural networks, Radial Basis Function neural networks, Fuzzy Inference Systems and Support Vector Machine computational intelligence techniques were considered in this research, and classifiers with accuracy values greater than 90% were developed. The preferred models for the fields 'Say amount' and 'Say confirmation' and for call performance classification are ensembles of classifiers, whereas Multi-Layer Perceptron classifiers performed best for the 'Say account' and 'Select beneficiary' fields. PMID:20180256

  17. Developing a multimodal biometric authentication system using soft computing methods.

    PubMed

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision. PMID:25502384

  18. The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers

    ERIC Educational Resources Information Center

    DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert

    2013-01-01

    Public libraries play an important part in the development of a community. Today, they are seen as more than store houses of books; they are also responsible for the dissemination of online, and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for internet access. Using a…

  19. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  20. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  1. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  2. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  3. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  4. European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS Computational Fluid Dynamics Conference 2001

    E-print Network

    Müller,Bernhard

    …DIFFERENCE METHOD FOR LOW MACH NUMBER AEROACOUSTICS. Bernhard Müller, Department of Scientific Computing. …is analogous to integration by parts in the continuous energy estimate. They have been applied to linear…

  5. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.

  6. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...

  7. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...

  8. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...

  9. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...

  10. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...

  11. 78 FR 54453 - Notice of Public Meeting-Intersection of Cloud Computing and Mobility Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ...Public Meeting--Intersection of Cloud Computing and Mobility Forum and Workshop...INFORMATION: NIST hosted six prior Cloud Computing Forum & Workshop events in May...Government's experience with cloud computing, report on the status of...

  12. VerSum: Verifiable Computations over Large Public Logs

    E-print Network

    van den Hooff, Jelle

    VerSum allows lightweight clients to outsource expensive computations over large and frequently changing data structures, such as the Bitcoin or Namecoin blockchains, or a Certificate Transparency log. VerSum clients ensure ...

  13. Novel Methods for Communicating Plasma Science to the General Public

    NASA Astrophysics Data System (ADS)

    Zwicker, Andrew; Merali, Aliya; Wissel, S. A.; Delooper, John

    2012-10-01

    The broader implications of Plasma Science remain an elusive topic that the general public rarely discusses, despite their relevance to energy, the environment, and technology. Recently, we have looked beyond print media for methods to reach large numbers of people in creative and informative ways. These have included video, art, images, and music. For example, our submission to the ``What is a Flame?'' contest was ranked in the top 15 out of 800 submissions. Images of plasmas have won 3 out of 5 of the Princeton University ``Art of Science'' competitions. We use a plasma speaker to teach students of all ages about sound generation and plasma physics. We report on the details of each of these and on future videos and animations under development.

  14. A shooting method for computing Lagrangian invariant tori Alejandro Luque

    E-print Network

    Villanueva, Jordi

    A shooting method for computing Lagrangian invariant tori. Alejandro Luque, Instituto de Ciencias… …in the literature, we address the problem in terms of the computation of a single point on the torus. In this way we extend… …in a discrete FPU model and in the vicinity of the Lagrangian equilibrium points of a Restricted Three Body Problem.

  15. Brain-Computer Interface Overview, methods and opportunities

    E-print Network

    Marlin, Benjamin

    Brain-Computer Interface: Overview, methods and opportunities. Emtiyaz (Emt), CS, UBC. Slide fragments include demonstration videos (http://www.youtube.com/watch?v=NIG47YgndP8, http://www.youtube.com/watch?v=qCSSBEXBCbY) and motivating applications: military applications, locked-in syndrome, and amyotrophic lateral sclerosis.

  16. SAR/QSAR methods in public health practice

    SciTech Connect

    Demchuk, Eugene Ruiz, Patricia; Chou, Selene; Fowler, Bruce A.

    2011-07-15

    Methods of (Quantitative) Structure-Activity Relationship ((Q)SAR) modeling play an important and active role in ATSDR programs in support of the Agency mission to protect human populations from exposure to environmental contaminants. They are used for cross-chemical extrapolation to complement the traditional toxicological approach when chemical-specific information is unavailable. SAR and QSAR methods are used to investigate adverse health effects and exposure levels, bioavailability, and pharmacokinetic properties of hazardous chemical compounds. They are applied as a part of an integrated systematic approach in the development of Health Guidance Values (HGVs), such as ATSDR Minimal Risk Levels, which are used to protect populations exposed to toxic chemicals at hazardous waste sites. (Q)SAR analyses are incorporated into ATSDR documents (such as the toxicological profiles and chemical-specific health consultations) to support environmental health assessments, prioritization of environmental chemical hazards, and to improve study design, when filling the priority data needs (PDNs) as mandated by Congress, in instances when experimental information is insufficient. These cases are illustrated by several examples, which explain how ATSDR applies (Q)SAR methods in public health practice.

  17. Platform-independent method for computer aided schematic drawings

    DOEpatents

    Vell, Jeffrey L. (Slingerlands, NY); Siganporia, Darius M. (Clifton Park, NY); Levy, Arthur J. (Fort Lauderdale, FL)

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  18. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    An iterative computer-aided procedure was developed that provides for the identification of boiler transfer functions using frequency response data. The method uses these data to obtain a satisfactory transfer function for both high and low vapor exit quality data.

  19. 2.093 Computer Methods in Dynamics, Fall 2002

    E-print Network

    Bathe, Klaus-Jürgen

    Formulation of finite element methods for analysis of dynamic problems in solids, structures, fluid mechanics, and heat transfer. Computer calculation of matrices and numerical solution of equilibrium equations by direct ...

  20. Adding It Up: Is Computer Use Associated with Higher Achievement in Public Elementary Mathematics Classrooms?

    ERIC Educational Resources Information Center

    Kao, Linda Lee

    2009-01-01

    Despite support for technology in schools, there is little evidence indicating whether using computers in public elementary mathematics classrooms is associated with improved outcomes for students. This exploratory study examined data from the Early Childhood Longitudinal Study, investigating whether students' frequency of computer use was related…

  1. The Battle to Secure Our Public Access Computers

    ERIC Educational Resources Information Center

    Sendze, Monique

    2006-01-01

    Securing public access workstations should be a significant part of any library's network and information-security strategy because of the sensitive information patrons enter on these workstations. As the IT manager for the Johnson County Library in Kansas City, Kan., this author is challenged to make sure that thousands of patrons get the access…

  2. Public involvement in multi-objective water level regulation development projects-evaluating the applicability of public involvement methods

    SciTech Connect

    Vaentaenen, Ari . E-mail: armiva@utu.fi; Marttunen, Mika . E-mail: Mika.Marttunen@ymparisto.fi

    2005-04-15

    Public involvement is a process that involves the public in the decision making of an organization, for example a municipality or a corporation. It has developed into a widely accepted and recommended policy in environment-altering projects. The EU Water Framework Directive (WFD) came into force in 2000 and stresses the importance of public involvement in composing river basin management plans. Therefore, the need to develop public involvement methods for different situations and circumstances is evident. This paper describes how various public involvement methods have been applied in a development project involving the most heavily regulated lake in Finland. The objective of the project was to assess the positive and negative impacts of regulation and to find possibilities for alleviating the adverse impacts on recreational use and the aquatic ecosystem. An exceptional effort was made towards public involvement, which was closely connected to planning and decision making. The applied methods were (1) steering group work, (2) survey, (3) dialogue, (4) theme interviews, (5) public meeting and (6) workshops. The information gathered using these methods was utilized in different stages of the project, e.g., in identifying the regulation impacts, comparing alternatives and compiling the recommendations for regulation development. After describing our case and the results from the applied public involvement methods, we will discuss our experiences and the feedback from the public. We will also critically evaluate our own success in coping with public involvement challenges. In addition to that, we present general recommendations for dealing with these problematic issues based on our experiences, which provide new insights for applying various public involvement methods in multi-objective decision making projects.

  3. Transonic Flow Computations Using Nonlinear Potential Methods

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.

  4. Numerical and Computational Methods Spring 2005

    E-print Network

    Wulff, Claudia

    Timetable fragments: lectures 12.00-13.00 in LTM (weeks 1-4, 6-11) and in 10 AA 04 (week 5); Monday 16.00-17.00 and Thursday 17.00-18.00 in 10 AA 04; labs in APLab 3 & 2 (week 1), APLab 3 & 4 (weeks 3-5), LTM (weeks 2, 6-11). Maple coursework counts 15%, with exercise sheets due Monday 7th March at 12.00 in LTM; Numerical Methods coursework counts 10…

  5. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation

    PubMed Central

    Hu, Hongzhi; Mao, Huajuan; Hu, Xiaohua; Hu, Feng; Sun, Xuemin; Jing, Zaiping; Duan, Yunsuo

    2015-01-01

    Because of their extensive social influence, public health emergencies have attracted great attention in today's society. Booming social networks are becoming the main information dissemination platform for such events and have raised high concerns in emergency management, where a good prediction of information dissemination in social networks is necessary for estimating an event's social impact and devising a proper strategy. However, information dissemination is largely affected by complex interactive activities and group behaviors in social networks, and existing methods and models are limited in achieving a satisfactory prediction result due to open, changeable social connections and uncertain information processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. In order to obtain better information dissemination prediction in social networks, this paper proposes an intelligent computation method under the framework of TDF (Theory-Data-Feedback) based on an ACP simulation system, which was successfully applied to the analysis of the A (H1N1) flu emergency. PMID:26609303

  6. Computational Methods in Quantum Field Theory

    E-print Network

    Kurt Langfeld

    2007-11-19

    After a brief introduction to the statistical description of data, these lecture notes focus on quantum field theories as they emerge from lattice models in the critical limit. For the simulation of these lattice models, Markov chain Monte-Carlo methods are widely used. We discuss the heat bath algorithm and the more modern cluster algorithms. The Ising model is used as a concrete illustration of important concepts such as the correspondence between a theory of branes and quantum field theory or the duality map between strong and weak couplings. The notes then discuss the inclusion of gauge symmetries in lattice models and, in particular, the continuum limit in which quantum Yang-Mills theories arise.

  7. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system. 15 figs.

  8. Consensus methods: review of original methods and their main alternatives used in public health

    PubMed Central

    Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid

    2008-01-01

    Summary Background Consensus-based studies are increasingly used as decision-making methods, for they have lower production cost than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles and methods of the four main methods, Delphi, nominal group, consensus development conference and RAND/UCLA, their use as it appears in peer-reviewed publications and validation studies published in the healthcare literature. Methods A bibliographic search was performed in Pubmed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal and Francis. Keywords, headings and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri, and used in the literature search. A search with the same terms and expressions was performed on Internet using the website Google Scholar. Results All methods, precisely described in the literature, are based on common basic principles such as definition of subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described. Few validation studies have been implemented. Not implementing these basic principles and failing to describe the methods used to reach the consensus were both frequent reasons contributing to raise suspicion regarding the validity of consensus methods. Conclusion When it is applied to a new domain with important consequences in terms of decision making, a consensus method should be first validated. PMID:19013039

  9. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  10. Computational methods to identify new antibacterial targets.

    PubMed

    McPhillie, Martin J; Cain, Ricky M; Narramore, Sarah; Fishwick, Colin W G; Simmons, Katie J

    2015-01-01

    The development of resistance to all current antibiotics in the clinic means there is an urgent unmet need for novel antibacterial agents with new modes of action. One of the best ways of finding these is to identify new essential bacterial enzymes to target. The advent of a number of in silico tools has aided classical methods of discovering new antibacterial targets, and these programs are the subject of this review. Many of these tools apply a cheminformatic approach, utilizing the structural information of either ligand or protein, chemogenomic databases, and docking algorithms to identify putative antibacterial targets. Considering the wealth of potential drug targets identified from genomic research, these approaches are perfectly placed to mine this rich resource and complement drug discovery programs. PMID:24974974

  11. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris (Palo Alto, CA); Hanrahan, Patrick (Portola Valley, CA)

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
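
    A loose pandas analogue of the idea (all column names invented): the dataset is queried and aggregated according to a specification, and each level of the dimension hierarchy (year, then quarter) is mapped to a different component of the resulting plot.

        # Two hierarchy levels mapped to two components of a tabular "plot".
        import pandas as pd

        sales = pd.DataFrame({
            "year":    [2008, 2008, 2009, 2009],  # first hierarchy level
            "quarter": ["Q1", "Q2", "Q1", "Q2"],  # second hierarchy level
            "revenue": [110, 95, 130, 120],       # the measure
        })

        # Query per the specification: aggregate the measure at (year, quarter);
        # rows (year) and columns (quarter) play the roles of plot components.
        view = sales.groupby(["year", "quarter"])["revenue"].sum().unstack("quarter")
        print(view)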

  12. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
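
    Since the stated relation is a plain sum, a direct transcription makes it concrete; the input figures below are illustrative only.

        # future conditions = maintenance cost + modernization factor + backlog factor
        def future_facility_conditions(maintenance_cost, modernization_factor,
                                       backlog_factor):
            return maintenance_cost + modernization_factor + backlog_factor

        # time-period-specific inputs (made-up numbers)
        print(future_facility_conditions(1.2e6, 3.5e5, 8.0e5))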

  13. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...2012-04-01 false Method and timing for real-time public reporting. 43.3 Section 43...COMMODITY FUTURES TRADING COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a)...

  14. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...2013-04-01 false Method and timing for real-time public reporting. 43.3 Section 43...COMMODITY FUTURES TRADING COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a)...

  15. Publication Bias in the Computer Science Education Research Literature

    E-print Network

    From a random sample of 352 recent computer science education articles, we reviewed the 38 empirical articles that used inferential statistical analyses. We found that (a) the proportion of articles reporting primarily research… and (b) an article's having a female first author was a strong predictor of an article's having…

  16. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method for linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of validating the application of this new methodology to fracture mechanics problems by computing demonstration problems and comparing the computed stress intensity factors to analytical results.

  17. Taught Course Centre Short Course "Computational Methods for Uncertainty Quantification"

    E-print Network

    Scheichl, Robert

    Taught Course Centre Short Course "Computational Methods for Uncertainty Quantification". Robert Scheichl. Exercise fragments: …to avoid stability problems with the explicit Euler method; compare the cost to achieve a certain tolerance…; formulate a simple model problem that encapsulates the essential question; what type of uncertainty is it…

  18. Calculating PI Using Historical Methods and Your Personal Computer.

    ERIC Educational Resources Information Center

    Mandell, Alan

    1989-01-01

    Provides a software program for determining PI to the 15th place after the decimal. Explores the history of determining the value of PI, from Archimedes to present computer methods. Investigates Wallis's, Leibniz's, and Buffon's methods. Written for Tandy GW-BASIC (IBM compatible) with 384K. Suggestions for Apple II's are given. (MVL)
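
    Two of the historical methods the article investigates are easy to reproduce; the sketch below evaluates Leibniz's series pi/4 = 1 - 1/3 + 1/5 - ... and Wallis's product pi/2 = (2*2)/(1*3) * (4*4)/(3*5) * ..., both of which converge slowly.

        # Historical approximations of pi: Leibniz's series and Wallis's product.
        def leibniz_pi(terms):
            return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

        def wallis_pi(terms):
            prod = 1.0
            for k in range(1, terms + 1):
                prod *= (2 * k) * (2 * k) / ((2 * k - 1) * (2 * k + 1))
            return 2 * prod

        print(leibniz_pi(100_000), wallis_pi(100_000))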

  19. Method for implementation of recursive hierarchical segmentation on parallel computers

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.
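
    A schematic sketch of the recursion control described, assuming a 1-D "image" and a trivial stand-in segmentation step: division recurses until the bottom level, running in parallel above the intermediate level and serially below it. The convergence-check level is omitted and all names are hypothetical.

        # Recursion that switches from parallel to serial at a set level.
        from concurrent.futures import ThreadPoolExecutor

        def segment(section):
            return [section]  # stand-in for the actual segmentation step

        def recursive_segment(img, level=0, bottom=3, intermediate=1):
            if level >= bottom:              # stop dividing: segment this piece
                return segment(img)
            half = len(img) // 2
            pieces = (img[:half], img[half:])
            recurse = lambda p: recursive_segment(p, level + 1, bottom, intermediate)
            if level < intermediate:         # parallel implementation
                with ThreadPoolExecutor(max_workers=2) as pool:
                    results = list(pool.map(recurse, pieces))
            else:                            # serial implementation
                results = [recurse(p) for p in pieces]
            return [r for res in results for r in res]

        print(recursive_segment(list(range(16))))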

  20. Lecture Notes in Computer Science 4800 Commenced Publication in 1973

    E-print Network

    Dershowitz, Nachum

    Dedicated to Boris (Boaz) Trakhtenbrot… Library of Congress Control Number: 2008920893. CR Subject Classification (1998): F.1, F.2.1-2, F.4.1, F.3, D.2.4, D.2-3, I.2.2. LNCS Sublibrary: SL 1 - Theoretical Computer Science and General Issues.

  1. A Comparative Assessment of Computer Literacy of Private and Public Secondary School Students in Lagos State, Nigeria

    ERIC Educational Resources Information Center

    Osunwusi, Adeyinka Olumuyiwa; Abifarin, Michael Segun

    2013-01-01

    The aim of this study was to conduct a comparative assessment of computer literacy of private and public secondary school students. Although the definition of computer literacy varies widely, this study treated computer literacy in terms of access to, and use of, computers and the internet, basic knowledge and skills required to use computers and…

  2. IACMM Israel Association for Computational Methods in Mechanics Israel Symposium on Computational Mechanics (ISCM-29)

    E-print Network

    Adler, Joan

    "­ IACMM ­ Israel Association for Computational Methods in Mechanics 29th Israel Symposium.ac.technion.aerodyne@givolid ! TECHNION - Israel Institute of Technology Faculty of Aerospace Engineering and Faculty of Mechanical:40( ,,,,'''' ,,,, ''''---- -POD )12:05( **** ,,,, **** ,,,, 12121212::::33330000 Israel Association

  3. A stochastic method for computing hadronic matrix elements

    DOE PAGESBeta

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; Drach, Vincent; Jansen, Karl; Hadjiyiannakou, Kyriakos; Renner, Dru B.

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions that is an alternative to the typically used sequential method, offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.

  4. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one more processors fails.

  5. Proposed congestion control method for cloud computing environments

    E-print Network

    Kuribayashi, Shin-ichi

    2012-01-01

    As cloud computing services rapidly expand their customer base, it has become important to share cloud resources so as to provide them economically. In cloud computing services, multiple types of resources, such as processing ability, bandwidth and storage, need to be allocated simultaneously. If there is a surge of requests, competition arises between these requests for the use of cloud resources, which disrupts service; it is therefore necessary to consider measures to avoid or relieve congestion in cloud computing environments. This paper proposes a new congestion control method for cloud computing environments that reduces the size of the resource allocated to requests for the congested resource type, instead of restricting all service requests as in existing networks. The paper then proposes user service specifications for the proposed congestion control method, and clarifies the algorithm for deciding the optimal size of the resource reduction, based on the load offered to the system. …
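
    A loose sketch of the stated idea, not the paper's algorithm: when the load on one resource type crosses a congestion threshold, the granted amount of that type is scaled down rather than the whole request being rejected. The threshold and shrink factor are invented.

        # Shrink only the congested resource type instead of rejecting requests.
        def admit(request, capacity, used, threshold=0.8, shrink=0.5):
            granted = {}
            for rtype, amount in request.items():
                load = used[rtype] / capacity[rtype]
                granted[rtype] = amount * (shrink if load > threshold else 1.0)
                used[rtype] += granted[rtype]
            return granted

        capacity = {"cpu": 100.0, "bandwidth": 100.0, "storage": 100.0}
        used = {"cpu": 85.0, "bandwidth": 40.0, "storage": 10.0}  # cpu congested
        print(admit({"cpu": 4.0, "bandwidth": 10.0, "storage": 5.0}, capacity, used))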

  6. Managing expectations when publishing tools and methods for computational proteomics.

    PubMed

    Martens, Lennart; Kohlbacher, Oliver; Weintraub, Susan T

    2015-05-01

    Computational tools are pivotal in proteomics because they are crucial for identification, quantification, and statistical assessment of data. The gateway to finding the best choice of a tool or approach for a particular problem is frequently journal articles, yet there is often an overwhelming variety of options that makes it hard to decide on the best solution. This is particularly difficult for nonexperts in bioinformatics. The maturity, reliability, and performance of tools can vary widely because publications may appear at different stages of development. A novel idea might merit early publication despite only offering proof-of-principle, while it may take years before a tool can be considered mature, and by that time it might be difficult for a new publication to be accepted because of a perceived lack of novelty. After discussions with members of the computational mass spectrometry community, we describe here proposed recommendations for organization of informatics manuscripts as a way to set the expectations of readers (and reviewers) through three different manuscript types that are based on existing journal designations. Brief Communications are short reports describing novel computational approaches where the implementation is not necessarily production-ready. Research Articles present both a novel idea and mature implementation that has been suitably benchmarked. Application Notes focus on a mature and tested tool or concept and need not be novel but should offer advancement from improved quality, ease of use, and/or implementation. Organizing computational proteomics contributions into these three manuscript types will facilitate the review process and will also enable readers to identify the maturity and applicability of the tool for their own workflows. PMID:25764342

  7. Data analysis through interactive computer animation method (DATICAM)

    SciTech Connect

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process.

  8. Customizing computational methods for visual analytics with big data.

    PubMed

    Choo, Jaegul; Park, Haesun

    2013-01-01

    The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data. PMID:24808056
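
    One of the two customizations mentioned, iteration-level interactive visualization, amounts to exposing an iterative method's intermediate state after every iteration so a front end can render it without waiting for convergence; the generator below sketches this for a toy k-means loop.

        # Yield partial results each iteration for immediate visualization.
        import numpy as np

        def kmeans_iterations(X, k, iters=10, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
                centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                                    else centers[j] for j in range(k)])
                yield labels, centers  # hand the partial result to the renderer

        X = np.random.default_rng(1).standard_normal((200, 2))
        for i, (labels, centers) in enumerate(kmeans_iterations(X, 3)):
            print(f"iteration {i}: centers = {np.round(centers.ravel(), 2)}")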

  9. Automatic detection of lung nodules in computed tomography images: training and validation of algorithms using public research databases

    NASA Astrophysics Data System (ADS)

    Camarlinghi, Niccolò

    2013-09-01

    Lung cancer is one of the main public health issues in developed countries. Lung cancer typically manifests itself as non-calcified pulmonary nodules that can be detected by reading lung Computed Tomography (CT) images. To assist radiologists in reading images, researchers started, a decade ago, the development of Computer Aided Detection (CAD) methods capable of detecting lung nodules. In this work, a CAD composed of two subprocedures is presented: one devoted to the identification of parenchymal nodules, and one devoted to the identification of nodules attached to the pleura surface. Both are upgrades of two methods previously presented as the Voxel Based Neural Approach (VBNA) CAD. The novelty of this paper consists in the massive training using the public research Lung Image Database Consortium (LIDC) database and in the implementation of new features for classification with respect to the original VBNA method. Finally, the proposed CAD is blindly validated on the ANODE09 dataset. The result of the validation is a score of 0.393, which corresponds to the average sensitivity of the CAD computed at seven predefined false positive rates: 1/8, 1/4, 1/2, 1, 2, 4, and 8 FP/CT.

  10. New computational methods and algorithms for semiconductor science and nanotechnology

    NASA Astrophysics Data System (ADS)

    Gamoke, Benjamin C.

    The design and implementation of sophisticated computational methods and algorithms are critical for solving problems in nanotechnology and semiconductor science. Two key methods will be described that overcome challenges in contemporary surface science. The first method focuses on accurately cancelling interactions in a molecular system, such as when modeling adsorbates on periodic surfaces at low coverages, a problem for which current methodologies are computationally inefficient. The second method pertains to the accurate calculation of core-ionization energies observed in X-ray photoelectron spectroscopy. This development enables the assignment of peaks in X-ray photoelectron spectra, which in turn can determine the chemical composition and bonding environment of surface species. Finally, illustrative surface-adsorbate and gas-phase studies using the developed methods will also be featured.

  11. On computer-intensive simulation and estimation methods for rare-event analysis in epidemic models.

    PubMed

    Clémençon, Stéphan; Cousien, Anthony; Felipe, Miraine Dávila; Tran, Viet Chi

    2015-12-10

    This article focuses, in the context of epidemic models, on rare events that may correspond to crisis situations from the perspective of public health. In general, no closed analytic form for their occurrence probabilities is available, and crude Monte Carlo procedures fail. We show how recent intensive computer simulation techniques, such as interacting branching particle methods, can be used for estimation purposes, as well as for generating model paths that correspond to realizations of such events. Applications of these simulation-based methods to several epidemic models fitted from real datasets are also considered and discussed thoroughly. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26242476
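
    The structure of such rare-event estimators can be conveyed with a generic fixed-effort multilevel splitting sketch on a toy subcritical SIR model; the rates, levels, and population size below are invented, and this is not the authors' interacting branching particle algorithm.

        import numpy as np

        rng = np.random.default_rng(0)

        def sir_until(level, state, beta=0.1, gamma=0.2, N=1000):
            # Embedded jump chain of a Gillespie SIR model: only hitting
            # probabilities matter here, so holding times are skipped.
            S, I = state
            while 0 < I < level:
                p_inf = beta * S / (beta * S + gamma * N)
                if rng.random() < p_inf:
                    S, I = S - 1, I + 1
                else:
                    I -= 1
            return I >= level, (S, I)

        def splitting_estimate(levels, n_traj=200, init=(995, 5)):
            # P(max I >= levels[-1]) as a product over stages of the
            # fraction of trajectories that reach the next level before
            # extinction, restarting each stage from the crossing states.
            states, prob = [init] * n_traj, 1.0
            for L in levels:
                hits = []
                for s in states:
                    reached, s_end = sir_until(L, s)
                    if reached:
                        hits.append(s_end)
                if not hits:
                    return 0.0
                prob *= len(hits) / n_traj
                # resample to restore a fixed population size
                states = [hits[rng.integers(len(hits))]
                          for _ in range(n_traj)]
            return prob

        print(splitting_estimate(levels=[7, 9, 11, 13, 15]))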

  12. Software for computing eigenvalue bounds for iterative subspace matrix methods

    NASA Astrophysics Data System (ADS)

    Shepard, Ron; Minkoff, Michael; Zhou, Yunkai

    2005-07-01

    This paper describes software for computing eigenvalue bounds for the standard and generalized Hermitian eigenvalue problems as described in [Y. Zhou, R. Shepard, M. Minkoff, Computing eigenvalue bounds for iterative subspace matrix methods, Comput. Phys. Comm. 167 (2005) 90-102]. The software discussed in this manuscript applies to any subspace method, including Lanczos, Davidson, SPAM, Generalized Davidson Inverse Iteration, Jacobi-Davidson, and the Generalized Jacobi-Davidson methods, and it is applicable to either outer or inner eigenvalues. It can be applied during the subspace iterations to truncate the iterative process and avoid unnecessary effort when converging specific eigenvalues to a required target accuracy, and it can be applied to the final set of Ritz values to assess the accuracy of the converged results.

    Program summary:
    Title of program: SUBROUTINE BOUNDS_OPT
    Catalogue identifier: ADVE
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVE
    Computers: any computer that supports a Fortran 90 compiler
    Operating systems: any operating system that supports a Fortran 90 compiler
    Programming language: Standard Fortran 90
    High-speed storage required: 5m+5 working-precision and 2m+7 integer words for m Ritz values
    No. of bits in a word: the floating-point working precision is parameterized with the symbolic constant WP
    No. of lines in distributed program, including test data, etc.: 2452
    No. of bytes in distributed program, including test data, etc.: 281,543
    Distribution format: tar.gz
    Nature of physical problem: the computational solution of eigenvalue problems using iterative subspace methods has widespread applications in the physical sciences and engineering as well as other areas of mathematical modeling (economics, social sciences, etc.). Assessing the accuracy of computed solutions is a fundamental problem, important in order to provide the modeler with information about the reliability of the computational results. Such applications include using these bounds to terminate the iterative procedure at specified accuracy limits.
    Method of solution: the Ritz values and their residual norms are computed and used as input for the procedure. While knowledge of the exact eigenvalues is not required, it is required that the Ritz values are isolated from the exact eigenvalues outside of the Ritz spectrum and that there are no skipped eigenvalues within the Ritz spectrum. Using a multipass refinement approach, upper and lower bounds are computed for each Ritz value.
    Typical running time: while typical applications would deal with m<20, for m=100000 the running time is 0.12 s on an Apple PowerBook.
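
    The crude bounds that such a refinement procedure starts from are simple to compute: for a Hermitian matrix, each interval [theta - ||r||, theta + ||r||] around a Ritz value theta with residual norm ||r|| contains at least one exact eigenvalue. A minimal NumPy sketch of these starting bounds (not the distributed Fortran 90 routine):

        import numpy as np

        def crude_ritz_bounds(A, V):
            # A: Hermitian matrix; V: orthonormal basis of the subspace.
            H = V.conj().T @ A @ V           # Rayleigh quotient matrix
            theta, Y = np.linalg.eigh(H)     # Ritz values and vectors
            X = V @ Y                        # Ritz vectors in full space
            rnorm = np.linalg.norm(A @ X - X * theta, axis=0)
            return theta - rnorm, theta, theta + rnorm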

  13. Curriculum modules, software laboratories, and an inexpensive hardware platform for teaching computational methods to undergraduate computer science students

    NASA Astrophysics Data System (ADS)

    Peck, Charles Franklin

    Computational methods are increasingly important to 21st century research and education; bioinformatics and climate change are just two examples of this trend. In this context computer scientists play an important role, facilitating the development and use of the methods and tools used to support computationally-based approaches. The undergraduate curriculum in computer science is one place where computational tools and methods can be introduced to facilitate the development of appropriately prepared computer scientists. To facilitate the evolution of the pedagogy, this dissertation identifies, develops, and organizes curriculum materials, software laboratories, and the reference design for an inexpensive portable cluster computer, all of which are specifically designed to support the teaching of computational methods to undergraduate computer science students. Keywords. computational science, computational thinking, computer science, undergraduate curriculum.

  14. Public open space, physical activity, urban design and public health: Concepts, methods and research agenda.

    PubMed

    Koohsari, Mohammad Javad; Mavoa, Suzanne; Villanueva, Karen; Sugiyama, Takemi; Badland, Hannah; Kaczynski, Andrew T; Owen, Neville; Giles-Corti, Billie

    2015-05-01

    Public open spaces such as parks and green spaces are key built environment elements within neighbourhoods for encouraging a variety of physical activity behaviours. Over the past decade, there has been a burgeoning number of active living research studies examining the influence of public open space on physical activity. However, the evidence shows mixed associations between different aspects of public open space (e.g., proximity, size, quality) and physical activity. These inconsistencies hinder the development of specific evidence-based guidelines for urban designers and policy-makers for (re)designing public open space to encourage physical activity. This paper aims to move this research agenda forward, by identifying key conceptual and methodological issues that may contribute to inconsistencies in research examining relations between public open space and physical activity. PMID:25779691

  15. Computational Methods for Structural Mechanics and Dynamics, part 1

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (editor); Housner, Jerrold M. (editor); Tanner, John A. (editor); Hayduk, Robert J. (editor)

    1989-01-01

    The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

  16. A Higher Order Iterative Method for Computing the Drazin Inverse

    PubMed Central

    Soleymani, F.; Stanimirović, Predrag S.

    2013-01-01

    A method with high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method could be used for finding the Drazin inverse. The application of the scheme on large sparse test matrices alongside the use in preconditioning of linear system of equations will be presented to clarify the contribution of the paper. PMID:24222747
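
    For orientation, the classical second-order member of this family of iterations is the Newton-Schulz scheme X <- X(2I - AX); the paper's method is a higher-order relative, and the sketch below shows only this textbook baseline.

        import numpy as np

        def newton_schulz_inverse(A, tol=1e-12, max_iter=100):
            # Convergent starting guess: X0 = A^H / (||A||_1 * ||A||_inf)
            X = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
            I = np.eye(A.shape[0])
            for _ in range(max_iter):
                R = I - A @ X
                if np.linalg.norm(R) < tol:
                    break
                X = X @ (I + R)              # equivalent to X(2I - AX)
            return X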

  17. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  18. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Zbigniew (Shoreham, NY); Falkowski, Paul (Stony Brook, NY)

    1990-01-01

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.

  19. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  20. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2003-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  1. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Z.; Falkowski, P.

    1990-07-17

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.

  2. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    ERIC Educational Resources Information Center

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  3. Learning Computational Methods for Partial Differential Equations from the Web

    E-print Network

    Jaun, André

    André Jaun et al., KTH; web page: http://pde.fusion.kth.se. A course on computational methods for partial differential equations taught over the web, tested with postgraduate students from remote universities. Short video…

  4. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
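
    For readers new to the multigrid concept, the toy sketch below runs a one-dimensional V-cycle for -u'' = f with weighted-Jacobi smoothing; it illustrates only the coarse-grid correction idea and has nothing of the Proteus implementation in it.

        import numpy as np

        def residual(u, f, h):
            # r = f - A u for the 1-D Laplacian with zero Dirichlet BCs
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
            return r

        def smooth(u, f, h, sweeps=3, omega=2/3):
            # weighted-Jacobi relaxation sweeps
            for _ in range(sweeps):
                v = u.copy()
                v[1:-1] = ((1 - omega) * u[1:-1]
                           + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1]))
                u = v
            return u

        def vcycle(u, f, h):
            # one V-cycle for -u'' = f on a grid of 2**k + 1 points
            u = smooth(u, f, h)
            if len(u) <= 3:                        # coarsest grid: 1 unknown,
                return smooth(u, f, h, sweeps=50)  # relax until converged
            r = residual(u, f, h)
            rc = r[::2].copy()                     # restrict by injection
            ec = vcycle(np.zeros_like(rc), rc, 2 * h)
            e = np.zeros_like(u)                   # prolong by interpolation
            e[::2] = ec
            e[1::2] = 0.5 * (ec[:-1] + ec[1:])
            return smooth(u + e, f, h)

        # usage: ten V-cycles on -u'' = pi^2 sin(pi x); exact u = sin(pi x)
        n = 129
        x = np.linspace(0.0, 1.0, n)
        f = np.pi**2 * np.sin(np.pi * x)
        u = np.zeros(n)
        for _ in range(10):
            u = vcycle(u, f, 1.0 / (n - 1))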

  5. EQUILIBRIUM AND NONEQUILIBRIUM FOUNDATIONS OF FREE ENERGY COMPUTATIONAL METHODS

    SciTech Connect

    C. JARZYNSKI

    2001-03-01

    Statistical mechanics provides a rigorous framework for the numerical estimation of free energy differences in complex systems such as biomolecules. This paper presents a brief review of the statistical mechanical identities underlying a number of techniques for computing free energy differences. Both equilibrium and nonequilibrium methods are covered.
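
    The best-known nonequilibrium identity in this family is the Jarzynski equality, exp(-dF/kT) = <exp(-W/kT)>, which turns work values W measured over repeated nonequilibrium realizations into a free-energy difference dF. A minimal estimator sketch, using a log-sum-exp for numerical stability:

        import numpy as np

        def jarzynski_free_energy(work, kT=1.0):
            # dF = -kT log( (1/N) sum_i exp(-W_i / kT) ), stabilized by
            # factoring out the smallest work value.
            w = np.asarray(work) / kT
            m = w.min()
            return kT * (np.log(len(w)) + m
                         - np.log(np.sum(np.exp(-(w - m)))))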

  6. Arabic Computational Morphology: Knowledge-Based and Empirical Methods

    E-print Network

    Abdelhadi Soudi, Antal van den Bosch, et al. The book addresses the various complexities of Arabic morphology, most of which previously had little implementation associated with them, through knowledge-based and empirical approaches, respectively. Finally, Part IV (Chapters 12-15) demonstrates how Arabic morphology…

  7. Interactive method for computation of viscous flow with recirculation

    NASA Technical Reports Server (NTRS)

    Brandeis, J.; Rom, J.

    1981-01-01

    An interactive method is proposed for the solution of two-dimensional, laminar flow fields with identifiable regions of recirculation, such as the shear-layer-driven cavity flow. The method treats the flow field as composed of two regions, with an appropriate mathematical model adopted for each region. The shear layer is computed by the compressible boundary layer equations, and the slowly recirculating flow by the incompressible Navier-Stokes equations. The flow field is solved iteratively by matching the local solutions in the two regions. For this purpose a new matching method utilizing an overlap between the two computational regions is developed, and shown to be most satisfactory. Matching of the two velocity components, as well as of the change in velocity with respect to depth, is accomplished using the present approach, and the stagnation points corresponding to separation and reattachment of the dividing streamline are computed as part of the interactive solution. The interactive method is applied to the test problem of a shear-layer-driven cavity. The computational results are used to show the validity and applicability of the present approach.

  8. Interval methods for computing various refinements of Nash equilibria

    E-print Network

    Sainudiin, Raazesh

    Bartlomiej Jacek Kubica. To find a Nash equilibrium, and especially all equilibria, for continuous games is a hard task. A "second phase" deletes points that are not Nash equilibria, using 0th-order tools; no simple approach…

  9. pyro: Python-based tutorial for computational methods for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.
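
    The flavor of such a tutorial is easy to convey with the simplest scheme it covers: a first-order upwind step for linear advection on a periodic grid. This is a sketch in the same spirit, not pyro's actual API.

        import numpy as np

        def advect_upwind(a, u, dx, dt, nsteps):
            # a_t + u a_x = 0 with u > 0; stable for c = u dt / dx <= 1
            c = u * dt / dx
            for _ in range(nsteps):
                a = a - c * (a - np.roll(a, 1))  # periodic upwind difference
            return a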

  10. A Spectral Time-Domain Method for Computational Electrodynamics

    E-print Network

    Lambers, James

    James V. Lambers. The method addresses time-dependent systems of partial differential equations such as Maxwell's equations, considered on the cube; taking the curl of both sides decouples the vector fields Ê and Ĥ and yields separate equations for each…

  11. Selection and Integration of a Computer Simulation for Public Budgeting and Finance (PBS 116).

    ERIC Educational Resources Information Center

    Banas, Ed Jr.

    1998-01-01

    Describes the development of a course on public budgeting and finance, which integrated the use of SimCity Classic, a computer-simulation software, with traditional lecture, guest speakers, and collaborative-learning activities. Explains the rationale for the course design and discusses the results from the first semester of teaching the course.…

  12. Publications: Forrest M. Hoffman and William W. Hargrove. Cluster computing: Linux taken to the extreme

    E-print Network

    Hoffman, Forrest M.

    Publications:
    Forrest M. Hoffman and William W. Hargrove. Cluster computing: Linux taken to the extreme. Linux Magazine, 1(1):56-59, 1999.
    Forrest M. Hoffman. Concepts in Beowulfery. Linux Magazine, 4(1):40-41, January 2002a.
    Forrest M. Hoffman. Configuring a Beowulf Cluster. Linux Magazine…

  13. Publications: Forrest M. Hoffman and William W. Hargrove. Cluster computing: Linux taken to the extreme

    E-print Network

    Hoffman, Forrest M.

    Publications:
    Forrest M. Hoffman and William W. Hargrove. Cluster computing: Linux taken to the extreme. Linux Magazine, 1(1):56-59, 1999.
    Forrest M. Hoffman. Concepts in Beowulfery. Linux Magazine, 4(1):40-41, January 2002a.
    Forrest M. Hoffman. Configuring a Beowulf Cluster. Linux Magazine, 4(2):42-45, February…

  14. Advanced Telecommunications and Computer Technologies in Georgia Public Elementary School Library Media Centers.

    ERIC Educational Resources Information Center

    Rogers, Jackie L.

    The purpose of this study was to determine what recent progress had been made in Georgia public elementary school library media centers regarding access to advanced telecommunications and computer technologies as a result of special funding. A questionnaire addressed the following areas: automation and networking of the school library media center…

  15. VerSum: Verifiable Computations over Large Public Logs Jelle van den Hooff

    E-print Network

    VERSUM enables verifiable computations over large public logs, such as blockchains or a Certificate Transparency log. VERSUM clients ensure that the output is correct by comparing publicly available logs, whose validity is guaranteed. The logs are large (e.g., the Bitcoin blockchain), and to run computations over these logs requires…

  16. On Closure Under Stuttering (under consideration for publication in Formal Aspects of Computing)

    E-print Network

    Chechik, Marsha

    Keywords: LTL, model-checking, closure under stuttering, event-based systems, specification and verification. One of the essential properties of LTL formulas is closure under stuttering…

  17. Learning From Engineering and Computer Science About Communicating The Field To The Public

    NASA Astrophysics Data System (ADS)

    Moore, S. L.; Tucek, K.

    2014-12-01

    The engineering and computer science community has taken the lead in actively informing the public about their discipline, including its societal contributions and career opportunities. These efforts have been intensified in regards to informing underrepresented populations in STEM about engineering and computer science. Are there lessons to be learned by the geoscience community in communicating the societal impacts and career opportunities in the geosciences, especially in regards to broadening participation and meeting Next Generation Science Standards? An estimated 35 percent increase in the number of geoscientist jobs in the United States forecast for the period between 2008 and 2018, combined with majority populations becoming minority populations, makes it imperative that we improve how we increase the public's understanding of the geosciences and how we present our message to targeted populations. This talk will look at recommendations from the National Academy of Engineering's Changing the Conversation: Messages for Improving the Public Understanding of Engineering, and communication strategies by organizations such as Code.org, to highlight practices that the geoscience community can adopt to increase public awareness of the societal contributions of the geosciences, the career opportunities in the geosciences, and the importance of the geosciences in the Next Generation Science Standards. An effort to communicate geoscience to the public, Earth is Calling, will be compared and contrasted with these efforts, and used as an example of how geological societies and other organizations can engage the general public and targeted groups about the geosciences.

  18. Variational-moment method for computing magnetohydrodynamic equilibria

    NASA Astrophysics Data System (ADS)

    Lao, L. L.

    1983-08-01

    A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method was since generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.

  19. Computer-aided methods of determining thyristor thermal transients

    SciTech Connect

    Lu, E.; Bronner, G.

    1988-08-01

    An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs.
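
    Of the listed alternatives, the convolution integral method is the most compact to illustrate: the junction temperature rise is the convolution of the power-loss profile with the time derivative of the transient thermal impedance. A sketch assuming both curves are sampled on the same time step (the function name and sampling layout are invented):

        import numpy as np

        def junction_temperature_rise(p, zth, dt):
            # dT(t) = integral over tau of P(tau) * dZth/dt(t - tau) dtau,
            # with p in W, zth in K/W, dt in s; returns temperature rise in K.
            dzth = np.gradient(zth, dt)
            return np.convolve(p, dzth)[:len(p)] * dt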

  20. Computing the crystal growth rate by the interface pinning method.

    PubMed

    Pedersen, Ulf R; Hummel, Felix; Dellago, Christoph

    2015-01-28

    An essential parameter for crystal growth is the kinetic coefficient given by the proportionality between supercooling and average growth velocity. Here, we show that this coefficient can be computed in a single equilibrium simulation using the interface pinning method where two-phase configurations are stabilized by adding a spring-like bias field coupling to an order-parameter that discriminates between the two phases. Crystal growth is a Smoluchowski process and the crystal growth rate can, therefore, be computed from the terminal exponential relaxation of the order parameter. The approach is investigated in detail for the Lennard-Jones model. We find that the kinetic coefficient scales as the inverse square-root of temperature along the high temperature part of the melting line. The practical usability of the method is demonstrated by computing the kinetic coefficient of the elements Na and Si from first principles. A generalized version of the method may be used for computing the rates of crystal nucleation or other rare events. PMID:25637966
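
    Extracting the rate from the terminal exponential relaxation is a small fitting problem; the sketch below assumes sampled order-parameter data and omits the model-specific prefactors that convert the fitted rate into the kinetic coefficient.

        import numpy as np
        from scipy.optimize import curve_fit

        def relaxation_rate(t, q, q_eq):
            # fit q(t) = q_eq + (q0 - q_eq) exp(-k t) and return the rate k
            decay = lambda t, q0, k: q_eq + (q0 - q_eq) * np.exp(-k * t)
            (q0, k), _ = curve_fit(decay, t, q, p0=(q[0], 1.0 / t[-1]))
            return k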

  1. An effective method for computing the noise in biochemical networks

    PubMed Central

    Zhang, Jiajun; Nie, Qing; He, Miao; Zhou, Tianshou

    2013-01-01

    We present a simple yet effective method, based on power series expansion, for computing exact binomial moments that can in turn be used to compute steady-state probability distributions as well as the noise in linear or nonlinear biochemical reaction networks. When the method is applied to representative reaction networks such as the ON-OFF models of gene expression, gene models of promoter progression, gene auto-regulatory models, and common signaling motifs, exact formulae for computing the intensities of noise in the species of interest or the steady-state distributions are given analytically. Interestingly, we find that positive (negative) feedback does not enlarge (reduce) noise as claimed in previous works but has a counter-intuitive effect, and that the multi-OFF (or ON) mechanism always attenuates the noise in contrast to the common ON-OFF mechanism and can modulate the noise to the lowest level independently of the mRNA mean. In addition to its power in deriving analytical expressions for distributions and noise, our method is programmable and has apparent advantages in reducing computational cost. PMID:23464139

  2. Digital data storage systems, computers, and data verification methods

    DOEpatents

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
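
    The core of the idea reduces to hashing the same portion of the database at two moments in time and comparing digests. A minimal sketch with SHA-256 (the patent text does not prescribe a particular hash function; names here are illustrative):

        import hashlib

        def snapshot_hash(records):
            # records: iterable of byte strings from the watched portion
            h = hashlib.sha256()
            for rec in records:
                h.update(rec)
            return h.hexdigest()

        first = snapshot_hash([b"row1", b"row2"])
        # ... time passes, the dynamic database may change ...
        second = snapshot_hash([b"row1", b"row2"])
        assert first == second, "watched portion of the database changed"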

  3. A bibliography on finite element and related methods analysis in reactor physics computations (1971--1997)

    SciTech Connect

    Carpenter, D.C.

    1998-01-01

    This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references were published in scientific journals, conference proceedings, technical reports, theses/dissertations, and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references in the bibliography are sorted alphabetically by the first author's name, with a secondary sort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis); this area could be expanded. The author is aware of several other references (conferences, theses/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.

  4. Pedagogical Methods of Teaching "Women in Public Speaking."

    ERIC Educational Resources Information Center

    Pederson, Lucille M.

    A course on women in public speaking, developed at the University of Cincinnati, focuses on the rhetoric of selected women who have been involved in various movements and causes in the United States in the twentieth century. Women studied include educator Mary McLeod Bethune, Congresswoman Jeannette Rankin, suffragette Carrie Chapman Catt, Helen…

  5. "Equal Educational Opportunity": Alternative Financing Methods for Public Education.

    ERIC Educational Resources Information Center

    Akin, John S.

    This paper traces the evolution of state-local public education finance systems to the present; examines the prevalent foundation system of finance; discusses the "Serrano" decision and its implications for foundation systems; and, after an examination of three possible new approaches, recommends an education finance system. The first of the new…

  6. Public Experiments and Their Analysis with the Replication Method

    ERIC Educational Resources Information Center

    Heering, Peter

    2007-01-01

    One of those who failed to establish himself as a natural philosopher in 18th-century Paris was the future revolutionary Jean Paul Marat. He not only published several monographs on heat, optics and electricity, in which he attempted to characterise his work as purely empirical, but also tried to establish himself as a public lecturer.…

  7. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
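
    The judged-best serial combination is easy to reproduce with SciPy on a stand-in sparse system; note that this sketch uses plain ILU rather than the block-ILU variant discussed in the study, and the matrix is random rather than a reacting-flow Jacobian.

        import numpy as np
        from scipy.sparse import eye, random as sprandom
        from scipy.sparse.linalg import LinearOperator, gmres, spilu

        n = 500   # stand-in for one Newton-step Jacobian system
        A = (sprandom(n, n, density=0.01, random_state=0)
             + 10.0 * eye(n)).tocsc()
        b = np.ones(n)

        ilu = spilu(A, fill_factor=2.0)               # incomplete LU factors
        M = LinearOperator((n, n), matvec=ilu.solve)  # preconditioner action
        x, info = gmres(A, b, M=M, rtol=1e-8)         # older SciPy: tol=1e-8
        print("converged" if info == 0 else f"info={info}")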

  8. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    SciTech Connect

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report presents results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels that may be needed to achieve convergence.

  9. Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS.

    PubMed

    Jalali, Arash; Olabode, Olusegun A; Bell, Christopher M

    2012-01-01

    As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities. PMID:23569644

  10. Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS

    PubMed Central

    Jalali, Arash; Olabode, Olusegun A.; Bell, Christopher M.

    2012-01-01

    As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities. PMID:23569644

  11. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  12. Computational methods in metabolic engineering for strain design.

    PubMed

    Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L

    2015-08-01

    Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native, heterologous, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve the runtime performances have also been developed, which allow for more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms. PMID:25576846
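
    A representative computation underlying many of these strain-design tools is flux-balance analysis: maximize a production flux subject to steady-state mass balance S v = 0 and flux bounds. A toy sketch with an invented two-metabolite network, not any specific tool from this review:

        import numpy as np
        from scipy.optimize import linprog

        # reactions: v0 uptake -> A, v1 A -> B, v2 B -> product, v3 A drain
        S = np.array([[1, -1,  0, -1],   # mass balance for metabolite A
                      [0,  1, -1,  0]])  # mass balance for metabolite B
        bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped
        c = np.zeros(4)
        c[2] = -1.0                      # maximize v2 (linprog minimizes)
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print("max product flux:", -res.fun, "fluxes:", res.x)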

  13. What happens when someone talks in public to an audience they know to be entirely computer generated?

    E-print Network

    Slater, Mel

    We designed a virtual public speaking scenario, followed by an experimental study. In this work we wanted to examine responses to public speaking as compared to more general social interactions. A public speaking scenario involves specific stylized…

  14. A hierarchical method for molecular docking using cloud computing.

    PubMed

    Kang, Ling; Guo, Quan; Wang, Xicheng

    2012-11-01

    Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small molecular databases using cloud computing is proposed. This method is called the hierarchical method for molecular docking and can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on the different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs. PMID:23017886

  15. GRACE: Public Health Recovery Methods following an Environmental Disaster

    PubMed Central

    Svendsen, ER; Whittle, N; Wright, L; McKeown, RE; Sprayberry, K; Heim, M; Caldwell, R; Gibson, JJ; Vena, J.

    2014-01-01

    Different approaches are necessary when Community Based Participatory Research (CBPR) of environmental illness is initiated after an environmental disaster within a community. Often such events are viewed as golden scientific opportunities to do epidemiological studies. However, we believe that in such circumstances, community engagement and empowerment needs to be integrated into the public health service efforts in order for both those and any science to be successful, with special care being taken to address the immediate health needs of the community first rather than the pressing needs to answer important scientific questions. We will demonstrate how we have simultaneously provided valuable public health service, embedded generalizable scientific knowledge, and built a successful foundation for supplemental CBPR through our on-going recovery work after the chlorine gas disaster in Graniteville, South Carolina. PMID:20439226

  16. Practical methods to improve the development of computational software

    SciTech Connect

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-07-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best-practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  17. Computer processing improves hydraulics optimization with new methods

    SciTech Connect

    Gavignet, A.A.; Wick, C.J.

    1987-12-01

    In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
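
    The paper's core numerical step, fitting measured rheograms with polynomials and deriving flow quantities from the coefficients, looks roughly as follows; the data points are illustrative, not the reported measurements.

        import numpy as np

        shear_rate = np.array([5., 10., 50., 100., 300., 600., 1000.])  # 1/s
        shear_stress = np.array([4., 6., 14., 21., 38., 55., 74.])      # Pa

        coeffs = np.polyfit(shear_rate, shear_stress, deg=3)
        tau = np.poly1d(coeffs)                  # stress as function of rate
        apparent_viscosity = tau(170.0) / 170.0  # Pa*s at a rate of interest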

  18. Characterization of Meta-Materials Using Computational Electromagnetic Methods

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Shin, Joon

    2005-01-01

    An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, a Finite Element Methodology (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane-wave incidence. For efficient computation of the reflection and transmission through a meta-material over a wide frequency band, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicholson-Ross method and genetic algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.

  19. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
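
    The underlying estimator is easy to sketch: sample from a density shifted toward the failure region and correct with the likelihood ratio. The sketch below uses a fixed mean shift for a standard-normal problem, whereas the proposed AIS adapts the sampling density incrementally; the limit state and numbers are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def is_failure_probability(g, shift, n=100_000):
            # P(g(X) <= 0) for X ~ N(0, I): sample from N(shift, I) and
            # reweight by phi(x)/phi(x - shift) = exp(-x.c + c.c/2)
            x = rng.standard_normal((n, len(shift))) + shift
            w = np.exp(-x @ shift + 0.5 * shift @ shift)
            return float(np.mean(w * (g(x) <= 0.0)))

        # limit state: failure when x1 + x2 > 5 (rare under the nominal law)
        g = lambda x: 5.0 - x[:, 0] - x[:, 1]
        print(is_failure_probability(g, shift=np.array([2.5, 2.5])))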

  20. Computational biology in the cloud: methods and new insights from computing at scale.

    PubMed

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available. PMID:23424149

  1. Assessing computational methods of cis-regulatory module prediction.

    PubMed

    Su, Jing; Teichmann, Sarah A; Down, Thomas A

    2010-01-01

    Computational methods attempting to identify instances of cis-regulatory modules (CRMs) in the genome face a challenging problem of searching for potentially interacting transcription factor binding sites while knowledge of the specific interactions involved remains limited. Without a comprehensive comparison of their performance, the reliability and accuracy of these tools remains unclear. Faced with a large number of different tools that address this problem, we summarized and categorized them based on search strategy and input data requirements. Twelve representative methods were chosen and applied to predict CRMs from the Drosophila CRM database REDfly, and across the human ENCODE regions. Our results show that the optimal choice of method varies depending on species and composition of the sequences in question. When discriminating CRMs from non-coding regions, those methods considering evolutionary conservation have a stronger predictive power than methods designed to be run on a single genome. Different CRM representations and search strategies rely on different CRM properties, and different methods can complement one another. For example, some favour homotypical clusters of binding sites, while others perform best on short CRMs. Furthermore, most methods appear to be sensitive to the composition and structure of the genome to which they are applied. We analyze the principal features that distinguish the methods that performed well, identify weaknesses leading to poor performance, and provide a guide for users. We also propose key considerations for the development and evaluation of future CRM-prediction methods. PMID:21152003

  2. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1971-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to make the locus of points generated by a transfer function resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, some examples and some results from a study of a set of data, consisting of measurements of the inlet impedance of a single-tube forced-flow boiler with inserts, are given.
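
    A minimal version of the fitting idea, assuming a first-order-plus-dead-time model G(jw) = K exp(-jwL) / (1 + jwT) and a plain least-squares penalty; the paper's penalized performance measure and boiler models are more elaborate, and the function name here is invented.

        import numpy as np
        from scipy.optimize import least_squares

        def fit_first_order_lag(omega, response):
            # omega: rad/s; response: measured complex frequency response
            def resid(p):
                K, T, L = p
                G = K * np.exp(-1j * omega * L) / (1 + 1j * omega * T)
                err = G - response
                return np.concatenate([err.real, err.imag])
            return least_squares(resid, x0=[1.0, 1.0, 0.1],
                                 bounds=([0, 0, 0], np.inf)).x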

  3. On a method computing transient wave propagation in ionospheric regions

    NASA Technical Reports Server (NTRS)

    Gray, K. G.; Bowhill, S. A.

    1978-01-01

    A consequence of an exoatmospheric nuclear burst is an electromagnetic pulse (EMP) radiated from it. In a region far enough away from the burst, where nonlinear effects can be ignored, the EMP can be represented by a large-amplitude, narrow-time-width plane-wave pulse. If the ionosphere lies between the origin and destination of the EMP, frequency dispersion can cause significant changes in the original pulse upon reception. A method of computing these dispersive effects on transient wave propagation is summarized. The method described differs from standard transform techniques and provides physical insight into the transient wave process. The method, although exact, can be used to approximate the early-time transient response of an ionospheric region by a simple integration, with only explicit knowledge of the electron density, electron collision frequency, and electron gyrofrequency required. As an illustration, the method is applied to a simple example and contrasted with the corresponding transform solution.

  4. Computation of multi-material interactions using point method

    SciTech Connect

    Zhang, Duan Z; Ma, Xia; Giguere, Paul T

    2009-01-01

    Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are applied to problems of solid deformation, state variables such as stress and damage need to be advected, causing significant numerical diffusion error. When Lagrangian methods are applied to problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method Eulerian meshes stay fixed and Lagrangian particles move through the Eulerian meshes during the material deformation. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun. v. 87, p. 236) provides a mathematical foundation for an improved version, the material point method (MPM), of the PIC method. The unique advantages of the MPM have led to many attempts to apply the method to problems involving the interaction of different materials, such as fluid-structure interactions. These are multiphase flow or multimaterial deformation problems, in which pressures, material densities, and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the same scheme used in Eulerian methods for multiphase flows is applied to calculate the pressure. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to higher order of accuracy, in the sense of weak solutions of the continuity equations. Numerical examples are given to demonstrate the new scheme.

  5. Secure encapsulation and publication of biological services in the cloud computing environment.

    PubMed

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    A scheme for the secure encapsulation and publication of bioinformatics software products based on web services is presented, realizing basic bioinformatics functions in the cloud computing environment. In the encapsulation phase, the workflow and functions of the bioinformatics software are analyzed, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed, and functions such as remote job submission and job status query are implemented using the GRAM components. The services of the bioinformatics software are thereby published to remote users. Finally, a basic prototype system of the biological cloud is achieved. PMID:24078906

  6. Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

    PubMed Central

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    A scheme for the secure encapsulation and publication of bioinformatics software products based on web services is presented, realizing basic bioinformatics functions in the cloud computing environment. In the encapsulation phase, the workflow and functions of the bioinformatics software are analyzed, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed, and functions such as remote job submission and job status query are implemented using the GRAM components. The services of the bioinformatics software are thereby published to remote users. Finally, a basic prototype system of the biological cloud is achieved. PMID:24078906

  7. Adapting Methods of Evaluation to Publications Used in Admissions.

    ERIC Educational Resources Information Center

    Bradham, Jo Allen

    1980-01-01

    Suggests ways to adapt five methods of evaluation to recruitment literature. Methods discussed are: (1) evaluation as measurement; (2) evaluation as the assessment of congruence between objectives and achievement; (3) evaluation as professional judgement; (4) evaluation as decision-maker; and (5) evaluation as comprehensive or goal-free…

  8. Investigation of the "Convince Me" Computer Environment as a Tool for Critical Argumentation about Public Policy Issues

    ERIC Educational Resources Information Center

    Adams, Stephen T.

    2003-01-01

    The "Convince Me" computer environment supports critical thinking by allowing users to create and evaluate computer-based representations of arguments. This study investigates theoretical and design considerations pertinent to using "Convince Me" as an educational tool to support reasoning about public policy issues. Among computer environments…

  9. 76 FR 67418 - Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ... Government Cloud Computing Technology Roadmap, Release 1.0 (Draft) AGENCY: National Institute of Standards... first draft of Special Publication 500-293, US Government Cloud Computing Technology Roadmap, Release 1... technology for U.S. Government (USG) agencies to accelerate their adoption of cloud computing. The...

  10. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    SciTech Connect

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is key to many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels in disease, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmonic solar cells and biological ion channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high order numerical methods for solving the Maxwell equations, including high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, divergence-free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmonic solar cells. To treat the long range electrostatic interactions in ion channels, we have developed an image-charge-based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model speeds up molecular dynamics simulations of transport in biological ion channels.

  11. Review methods for image segmentation from computed tomography images

    SciTech Connect

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-12-04

    Image segmentation is a challenging process: accuracy, automation, and robustness are difficult to achieve, especially in medical images. Many segmentation methods can be applied to medical images, but not all are suitable. For medical purposes, the aims of image segmentation are to study anatomical structure, identify regions of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have characteristics that affect the ability to visualize anatomic structures and pathologic features, such as image blurring and visual noise. The details of each method, its strengths, and the problems it incurs are defined and explained. Knowing the suitable segmentation method is necessary to obtain accurate segmentation, and this paper can serve as a guide for researchers choosing a segmentation method for CT images.
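
    As a concrete baseline, here is a minimal sketch of one of the most common segmentation approaches such reviews cover: global thresholding, with the threshold chosen by Otsu's criterion. The synthetic slice is an illustrative stand-in for real CT data, which would normally be loaded from DICOM files.

```python
# Sketch: global-threshold segmentation of a 2-D CT slice using Otsu's method
# (choose the threshold that maximizes the between-class variance).
import numpy as np

def otsu_threshold(image, bins=256):
    hist, edges = np.histogram(image, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # class-0 pixel counts per threshold
    w1 = w0[-1] - w0                           # class-1 pixel counts
    sum0 = np.cumsum(hist * centers)
    mu0 = sum0 / np.maximum(w0, 1)             # class means (guard against /0)
    mu1 = (sum0[-1] - sum0) / np.maximum(w1, 1)
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

rng = np.random.default_rng(0)
slice_ = rng.normal(50, 10, (128, 128))        # noisy "background" tissue
slice_[40:80, 40:80] += 100                    # brighter region of interest
mask = slice_ > otsu_threshold(slice_)
print("segmented fraction: %.3f" % mask.mean())
```

    Thresholding illustrates the trade-offs discussed in the review: it is fast and automatic, but the blurring and visual noise typical of CT quickly push practitioners toward region-, edge-, or model-based methods.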

  12. Optimum threshold selection method of centroid computation for Gaussian spot

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2015-10-01

    Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. To improve the accuracy, thresholding before centroid computation is unavoidable, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Two optimum-threshold selection methods are introduced: TmCoG, which uses m% of the maximum intensity of the spot as the threshold, and TkCoG, which uses μn + kσn as the threshold, where μn and σn are the mean value and standard deviation of the background noise. First, the impact of each on the detection error under various SNR conditions is simulated to determine how to choose k or m; then the two methods are compared. According to the simulation results, TmCoG selects a more accurate threshold than TkCoG and yields a lower detection error.
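
    A minimal sketch of thresholded center-of-gravity centroiding with both rules follows; the spot size, noise level, and parameter values (m = 20%, k = 3) are illustrative assumptions.

```python
# Sketch: CoG centroiding of a noisy Gaussian spot with TmCoG and TkCoG thresholds.
import numpy as np

rng = np.random.default_rng(1)
n = 64
y, x = np.mgrid[0:n, 0:n]
spot = 100 * np.exp(-((x - 30.3) ** 2 + (y - 33.7) ** 2) / (2 * 3.0 ** 2))
frame = spot + rng.normal(0.0, 2.0, (n, n))    # additive detector/background noise

def cog(img, threshold):
    # Subtract the threshold and clip, then take the intensity-weighted centroid
    w = np.where(img > threshold, img - threshold, 0.0)
    return (w * x).sum() / w.sum(), (w * y).sum() / w.sum()

t_m = 0.2 * frame.max()                        # TmCoG: m% of the peak intensity
mu_n, sigma_n = 0.0, 2.0                       # background-noise stats, assumed known
t_k = mu_n + 3.0 * sigma_n                     # TkCoG: mu_n + k * sigma_n
print("TmCoG estimate:", cog(frame, t_m))      # true center is (30.3, 33.7)
print("TkCoG estimate:", cog(frame, t_k))
```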

  13. Evolutionary computational methods to predict oral bioavailability QSPRs.

    PubMed

    Bains, William; Gilbert, Richard; Sviridenko, Lilya; Gascon, Jose-Miguel; Scoffin, Robert; Birchall, Kris; Harvey, Inman; Caldwell, John

    2002-01-01

    This review discusses evolutionary and adaptive methods for predicting oral bioavailability (OB) from chemical structure. Genetic Programming (GP), a specific form of evolutionary computing, is compared with some other advanced computational methods for OB prediction. The results show that classifying drugs into 'high' and 'low' OB classes on the basis of their structure alone is solvable, and initial models are already producing output that would be useful for pharmaceutical research. The results also suggest that quantitative prediction of OB will be tractable. Critical aspects of the solution will involve the use of techniques that can: (i) handle problems with a very large number of variables (high dimensionality); (ii) cope with 'noisy' data; and (iii) implement binary choices to sub-classify molecules whose behavior is qualitatively different. Detailed quantitative predictions will emerge from more refined models that are hybrids derived from mechanistic models of the biology of oral absorption and the power of advanced computing techniques to predict the behavior of the components of those models in silico. PMID:11865672

  14. Approximation method to compute domain related integrals in structural studies

    NASA Astrophysics Data System (ADS)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2015-11-01

    Various engineering calculations use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which direct calculation relations exist; in strength of materials, for instance, the bending moment may be computed at discrete points by graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the area of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of this work is to present our studies on the calculation of integrals over transverse-section domains, computer-aided solutions, and a generalizing method. The aim of our research is to create general computer-based methods for the calculations arising in structural studies. Thus, we define a Boolean algebra which operates on 'simple' shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of each 'simple' shape (-1 for shapes to be subtracted). By 'simple' or 'basic' shape we mean either a shape for which direct calculation relations exist, or a domain whose boundary is approximated by known functions, with the corresponding calculation carried out by an algorithm. The 'basic' shapes are linked to the calculation of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of 'basic' shapes include rectangles, ellipses, and domains whose boundaries are approximated by spline functions. Domain triangulation methods suggested that another 'basic' shape to consider is the triangle. The subsequent phase was to deduce exact relations for the integrals associated with transverse-section problems: we use a virtual rectangle framing the triangle, which generates supplementary right-angled triangles. The sign of the rectangle and the signs of the supplementary triangles are conditioned by the sign of the initial triangle. In this way, a triangle in general position, for which we have direct calculation relations, may be used to generate the discretization of any domain in the integrals associated with transverse sections. A significant consequence of the paper is the opportunity to create modern computer-aided engineering applications for structural studies which combine an applied mathematics background, modern informatics technologies, and advanced computing techniques such as parallelized calculation.
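
    The triangle case can be made concrete with a short sketch: a polygonal cross-section is decomposed into triangles fanned from one vertex, each with a signed area, so the section's area and first moments (and hence its centroid) follow from direct relations. The rectangle at the end is an illustrative test section.

```python
# Sketch: signed-triangle decomposition of a polygonal cross-section.
# Signed areas make the fan decomposition valid for any simple polygon.
from typing import List, Tuple

def section_properties(poly: List[Tuple[float, float]]):
    x0, y0 = poly[0]
    area = qx = qy = 0.0
    for (x1, y1), (x2, y2) in zip(poly[1:], poly[2:]):
        a = 0.5 * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))  # signed area
        cx, cy = (x0 + x1 + x2) / 3.0, (y0 + y1 + y2) / 3.0        # triangle centroid
        area += a
        qx += a * cy        # first moment about the x-axis
        qy += a * cx        # first moment about the y-axis
    return area, qx, qy

# 2 x 1 rectangle: area 2, centroid (1.0, 0.5)
area, qx, qy = section_properties([(0, 0), (2, 0), (2, 1), (0, 1)])
print(area, qy / area, qx / area)
```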

  15. Informed public choices for low-carbon electricity portfolios using a computer decision tool.

    PubMed

    Mayer, Lauren A Fleishman; Bruine de Bruin, Wändi; Morgan, M Granger

    2014-04-01

    Reducing CO2 emissions from the electricity sector will likely require policies that encourage the widespread deployment of a diverse mix of low-carbon electricity generation technologies. Public discourse informs such policies. To make informed decisions and to productively engage in public discourse, citizens need to understand the trade-offs between electricity technologies proposed for widespread deployment. Building on previous paper-and-pencil studies, we developed a computer tool that aimed to help nonexperts make informed decisions about the challenges faced in achieving a low-carbon energy future. We report on an initial usability study of this interactive computer tool. After providing participants with comparative and balanced information about 10 electricity technologies, we asked them to design a low-carbon electricity portfolio. Participants used the interactive computer tool, which constrained portfolio designs to be realistic and yield low CO2 emissions. As they changed their portfolios, the tool updated information about projected CO2 emissions, electricity costs, and specific environmental impacts. As in the previous paper-and-pencil studies, most participants designed diverse portfolios that included energy efficiency, nuclear, coal with carbon capture and sequestration, natural gas, and wind. Our results suggest that participants understood the tool and used it consistently. The tool may be downloaded from http://cedmcenter.org/tools-for-cedm/informing-the-public-about-low-carbon-technologies/ . PMID:24564708

  16. Research Support in Hungary: Optimization and Operations Research Methods for Machine Scheduling, LED Public Lighting, and Microsimulation in Public Transportation

    E-print Network

    Balázs, Bánhelyi

    Slide presentation outlining optimization and operations research support in Hungary, with sections on machine scheduling, LED public lighting, and microsimulation in public transportation.

  17. A Computational Method for Identifying Yeast Cell Cycle Transcription Factors.

    PubMed

    Wu, Wei-Sheng

    2016-01-01

    The eukaryotic cell cycle is a complex process and is precisely regulated at many levels. Many genes specific to the cell cycle are regulated transcriptionally and are expressed just before they are needed. To understand the cell cycle process, it is important to identify the cell cycle transcription factors (TFs) that regulate the expression of cell cycle-regulated genes. Here, we describe a computational method to identify cell cycle TFs in yeast by integrating current ChIP-chip, mutant, transcription factor-binding site (TFBS), and cell cycle gene expression data. For each identified cell cycle TF, our method also assigned specific cell cycle phases in which the TF functions and identified the time lag for the TF to exert regulatory effects on its target genes. Moreover, our method can identify novel cell cycle-regulated genes as a by-product. PMID:26254926

  18. Comparison of different simulation methods for multiplane computer generated holograms

    NASA Astrophysics Data System (ADS)

    Kämpfe, Thomas; Hudelist, Florian; Waddie, Andrew J.; Taghizadeh, Mohammad R.; Kley, Ernst-Bernhard; Tunnermann, Andreas

    2008-04-01

    Computer generated holograms (CGH) are used to transform an incoming light distribution into a desired output. Recently, multi-plane CGHs have become of interest, since they allow well-known design methods for thin CGHs to be combined with the unique properties of thick holograms. Iterative methods like the iterative Fourier transform algorithm (IFTA) require an operator that transforms a required optical function into an actual physical structure (e.g. a height structure). Commonly the thin element approximation (TEA) is used for this purpose. Together with the angular spectrum of plane waves (ASPW), it has also been successfully used in the case of multi-plane CGHs. Of course, due to the approximations inherent in TEA, it can only be applied above a certain feature size. In this contribution we give a first comparison of the TEA & ASPW approach with simulation results from the Fourier modal method (FMM) for the example of one-dimensional, pattern-generating, multi-plane CGHs.
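
    For reference, here is a minimal sketch of the IFTA design loop for a single thin phase-only CGH, the building block that the multi-plane designs extend; the target pattern and iteration count are illustrative assumptions.

```python
# Sketch: iterative Fourier transform algorithm (Gerchberg-Saxton style)
# for a thin, phase-only computer generated hologram.
import numpy as np

N, iters = 64, 50
target = np.zeros((N, N))
target[28:36, 20:44] = 1.0                     # desired far-field amplitude pattern
target /= np.linalg.norm(target)

field = np.exp(2j * np.pi * np.random.rand(N, N))   # random initial phase
for _ in range(iters):
    far = np.fft.fft2(field)
    far = target * np.exp(1j * np.angle(far))  # impose the target amplitude
    field = np.fft.ifft2(far)
    field = np.exp(1j * np.angle(field))       # enforce the phase-only constraint

phase = np.angle(field)                        # the designed CGH phase profile
print("phase range:", phase.min(), phase.max())
```

    Under TEA, the resulting phase profile would then be converted to a physical height structure; the question studied here is how far that approximation can be trusted as features shrink and multiple planes are stacked.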

  19. On implicit Runge-Kutta methods for parallel computations

    NASA Technical Reports Server (NTRS)

    Keeling, Stephen L.

    1987-01-01

    Implicit Runge-Kutta methods which are well suited for parallel computations are characterized. Such methods are, first of all, those for which the associated rational approximation to the exponential has distinct poles; these are called multiply implicit (MIRK) methods. Also, because of the so-called order reduction phenomenon, there is reason to require that these poles be real. It is then proved that a necessary condition for a q-stage, real MIRK to be A0-stable with maximal order q + 1 is that q = 1, 2, 3, or 5. Nevertheless, it is shown that for every positive integer q, there exists a q-stage, real MIRK which is I-stable with order q. Finally, some useful examples of algebraically stable MIRKs are given.

  20. A computationally light classification method for mobile wellness platforms.

    PubMed

    Könönen, Ville; Mäntyjärvi, Jani; Similä, Heidi; Pärkkä, Juha; Ermes, Miikka

    2008-01-01

    The core of activity recognition in mobile wellness devices is a classification engine which maps observations from sensors to estimated classes. A vast number of classification algorithms exist in the machine learning literature for this purpose. Unfortunately, the computational and space requirements of these methods are often too high for current mobile devices. In this paper we study a simple linear classifier and, using the SFS and SFFS feature selection methods, automatically find a suitable set of features for it. The results show that the simple classifier performs comparably to the more complex nonlinear k-Nearest Neighbor classifier, which shows great potential for implementing the classifier in small mobile wellness devices. PMID:19162872
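
    A minimal sketch of sequential forward selection (SFS) wrapped around a simple linear classifier follows; scikit-learn and the synthetic data set are stand-ins for the authors' sensor features.

```python
# Sketch: SFS feature selection with a linear classifier scored by cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
while remaining:
    # Try adding each remaining feature; keep the one that helps most
    scores = {f: cross_val_score(RidgeClassifier(), X[:, selected + [f]], y,
                                 cv=5).mean() for f in remaining}
    f_best = max(scores, key=scores.get)
    if scores[f_best] <= best_score:
        break                                   # no further improvement: stop
    best_score = scores[f_best]
    selected.append(f_best)
    remaining.remove(f_best)

print("selected features:", selected, "cv accuracy: %.3f" % best_score)
```

    SFFS extends this loop with a backtracking step that can drop a previously chosen feature whenever doing so improves the score.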

  1. COMSAC: Computational Methods for Stability and Control. Part 2

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  2. PREFACE: Theory, Modelling and Computational methods for Semiconductors

    NASA Astrophysics Data System (ADS)

    Migliorato, Max; Probert, Matt

    2010-04-01

    These conference proceedings contain the written papers of the contributions presented at the 2nd International Conference on Theory, Modelling and Computational Methods for Semiconductors. The conference was held at St Williams College, York, UK on 13th-15th Jan 2010; the previous conference in this series took place in 2008 at the University of Manchester, UK. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in semiconductor science and technology, where there is substantial potential for time saving in R&D. The development of high speed computer architectures is finally allowing the routine use of accurate methods for calculating the structural, thermodynamic, vibrational and electronic properties of semiconductors and their heterostructures. This workshop ran for three days, with the objective of bringing together UK and international leading experts in the theory of group IV, III-V and II-VI semiconductors together with postdocs and students in the early stages of their careers. The first day focused on providing an introduction and overview of this vast field, aimed particularly at students at this influential point in their careers. We would like to thank all participants for their contribution to the conference programme and these proceedings. We would also like to acknowledge the financial support of the Institute of Physics (Computational Physics group and Semiconductor Physics group), the UK Car-Parrinello Consortium, Accelrys (distributors of Materials Studio) and Quantumwise (distributors of Atomistix). The Editors. Acknowledgements: Conference Organising Committee: Dr Matt Probert (University of York) and Dr Max Migliorato (University of Manchester); Programme Committee: Dr Marco Califano (University of Leeds), Dr Jacob Gavartin (Accelrys Ltd, Cambridge), Dr Stanko Tomic (STFC Daresbury Laboratory), Dr Gabi Slavcheva (Imperial College London). Proceedings edited and compiled by Dr Max Migliorato and Dr Matt Probert.

  3. Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Reliable noise prediction capabilities are essential to enable novel fuel-efficient open rotor designs that can meet community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity at which they are frequently employed for specific real-world applications within NASA. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by highly complex geometries. This is a first-of-its-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the peculiarities of applying the immersed boundary method to this moving boundary problem, we provide a detailed aeroacoustic analysis of the noise generation mechanisms encountered in the open rotor flow. The simulation data are compared to available experimental data and to other computational results employing more conventional CFD methods. The noise generation mechanisms are analyzed using spectral analysis, proper orthogonal decomposition and the causality method.

  4. A multigrid nonoscillatory method for computing high speed flows

    NASA Technical Reports Server (NTRS)

    Li, C. P.; Shieh, T. H.

    1993-01-01

    A multigrid method using different smoothers has been developed to solve the Euler equations discretized by a nonoscillatory scheme up to fourth order accuracy. The best smoothing property is provided by a five-stage Runge-Kutta technique with optimized coefficients, yet the most efficient smoother is a backward Euler technique in factored and diagonalized form. The single-grid solution for a hypersonic, viscous conic flow is in excellent agreement with the solution obtained by the third order MUSCL and Roe's method. Mach 8 inviscid flow computations for a complete entry probe have shown that the accuracy is at least as good as that of the symmetric TVD scheme of Yee and Harten. The implicit multigrid method is four times more efficient than the explicit multigrid technique and 3.5 times faster than the single-grid implicit technique. For a Mach 8.7 inviscid flow over a blunt delta wing at 30 deg incidence, the CPU reduction factor from the three-level multigrid computation is 2.2 on a grid of 37 x 41 x 73 nodes.

  5. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. The method involves unwrapping and re-slicing the data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differs from other re-slicing solutions in its complete automation and its advanced processing and analysis capabilities.
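
    The unwrapping step can be sketched compactly: assuming a known cylinder center and radius range, each Cartesian slice is resampled onto polar coordinates so that the cylinder wall becomes a flat sheet (the NASA tool derives these parameters automatically via edge detection).

```python
# Sketch: unwrap one CT slice from Cartesian (x, y) to cylindrical (r, theta)
# coordinates by bilinear resampling. Center and radii are assumed known here.
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_slice(slice_2d, center, r_min, r_max, n_theta=360, n_r=64):
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_min, r_max, n_r)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    rows = cy + rr * np.sin(tt)               # sample positions in the slice
    cols = cx + rr * np.cos(tt)
    return map_coordinates(slice_2d, [rows, cols], order=1)

# Synthetic slice: a bright ring, i.e. a tube wall seen in cross-section
y, x = np.mgrid[0:256, 0:256]
r = np.hypot(x - 128, y - 128)
slice_2d = ((r > 80) & (r < 90)).astype(float)
sheet = unwrap_slice(slice_2d, (128, 128), 70, 100)
print(sheet.shape)                             # (n_r, n_theta): one unwrapped sheet
```

    Stacking such sheets over all slices yields the vertical 2-D views described above.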

  6. A literature review of neck pain associated with computer use: public health implications

    PubMed Central

    Green, Bart N

    2008-01-01

    Prolonged use of computers during daily work activities and recreation is often cited as a cause of neck pain. This review of the literature identifies public health aspects of neck pain as associated with computer use. While some retrospective studies support the hypothesis that frequent computer operation is associated with neck pain, few prospective studies reveal causal relationships. Many risk factors are identified in the literature. Primary prevention strategies have largely been confined to addressing environmental exposure to ergonomic risk factors, since to date, no clear cause for this work-related neck pain has been acknowledged. Future research should include identifying causes of work related neck pain so that appropriate primary prevention strategies may be developed and to make policy recommendations pertaining to prevention. PMID:18769599

  7. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple to implement numerically. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.

  8. Method and apparatus for managing transactions with connected computers

    DOEpatents

    Goldsmith, Steven Y. (Albuquerque, NM); Phillips, Laurence R. (Corrales, NM); Spires, Shannon V. (Albuquerque, NM)

    2003-01-01

    The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.

  9. Computational Studies of Protein Aggregation: Methods and Applications

    NASA Astrophysics Data System (ADS)

    Morriss-Andrews, Alex; Shea, Joan-Emma

    2015-04-01

    Protein aggregation involves the self-assembly of normally soluble proteins into large supramolecular assemblies. The typical end product of aggregation is the amyloid fibril, an extended structure enriched in β-sheet content. The aggregation process has been linked to a number of diseases, most notably Alzheimer's disease, but fibril formation can also play a functional role in certain organisms. This review focuses on theoretical studies of the process of fibril formation, with an emphasis on the computational models and methods commonly used to tackle this problem.

  10. Fan Flutter Computations Using the Harmonic Balance Method

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Thomas, Jeffrey P.; Reddy, T.S.R.

    2009-01-01

    An experimental forward-swept fan encountered flutter at part-speed conditions during wind tunnel testing. A new propulsion aeroelasticity code, based on a computational fluid dynamics (CFD) approach, was used to model the aeroelastic behavior of this fan. This three-dimensional code models the unsteady flowfield due to blade vibrations using a harmonic balance method to solve the Navier-Stokes equations. This paper describes the flutter calculations and compares the results to experimental measurements and to previous results from a time-accurate propulsion aeroelasticity code.

  11. Computational methods for improving thermal imaging for consumer devices

    NASA Astrophysics Data System (ADS)

    Lynch, Colm N.; Devaney, Nicholas; Drimbarean, Alexandru

    2015-05-01

    In consumer imaging, the spatial resolution of thermal microbolometer arrays is limited by the large physical size of the individual detector elements. This also limits the number of pixels per image. If thermal sensors are to find a place in consumer imaging, as the newly released FLIR One would suggest, this resolution issue must be addressed. Our work focuses on improving the output quality of low resolution thermal cameras through computational means. The method we propose utilises sub-pixel shifts and temporal variations in the scene, using information from thermal and visible channels. Results from simulations and lab experiments are presented.

  12. Reforming the Social Studies Methods Course. SSEC Publication No. 155.

    ERIC Educational Resources Information Center

    Patrick, John J.

    Numerous criticisms of college social studies methods courses have generated various reform efforts. Three of these reforms are examined, including competency-based teacher education, the value analysis approach to teacher education, and the human relations approach to teacher education. Competency-based courses develop among future teachers…

  13. Library Orientation Methods, Mental Maps, and Public Services Planning.

    ERIC Educational Resources Information Center

    Ridgeway, Trish

    Two library orientation methods, a self-guided cassette walking tour and a slide-tape program, were administered to 202 freshmen students to determine if moving through the library increased students' ability to develop a mental map of the library. An effort was made to ensure that the two orientation programs were equivalent. Results from the 148…

  14. Demand for public transport services: Integrating qualitative and quantitative methods

    E-print Network

    Bierlaire, Michel

    This research is in the context of a mode choice study in Switzerland. … The integration of the latent variables requires qualitative methods to be able to come up with an initial set … powerful transport mode choice model at hand. The research is carried out in the context of a collaborative…

  15. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  16. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel method computational fluid dynamic (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  17. An experiment in hurricane track prediction using parallel computing methods

    NASA Technical Reports Server (NTRS)

    Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

    1994-01-01

    The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.
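
    One of the elliptic sub-problems can be illustrated directly: recovering the streamfunction from the forecast vorticity by solving a Poisson equation. The sketch below uses plain Jacobi iteration on a toy grid; the paper's point is that block cyclic reduction solves the same kind of system far faster, especially in parallel.

```python
# Sketch: solve laplacian(psi) = zeta for the streamfunction psi by Jacobi
# iteration, then recover the nondivergent wind. Grid size, vortex strength,
# and iteration count are toy values.
import numpy as np

n, h = 64, 100e3                               # 64x64 grid, 100 km spacing
zeta = np.zeros((n, n))
zeta[28:36, 28:36] = 1e-4                      # idealized vortex (1/s)

psi = np.zeros_like(zeta)
for _ in range(2000):                          # Jacobi sweeps; psi = 0 on the boundary
    psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                              psi[1:-1, 2:] + psi[1:-1, :-2]
                              - h * h * zeta[1:-1, 1:-1])

# Nondivergent wind: u = -dpsi/dy, v = dpsi/dx (first axis taken as y)
u = -(psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * h)
v = (psi[1:-1, 2:] - psi[1:-1, :-2]) / (2 * h)
print("max wind speed: %.1f m/s" % np.hypot(u, v).max())
```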

  18. A hybrid method for the parallel computation of Green's functions

    SciTech Connect

    Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric

    2009-08-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
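
    The serial kernel being parallelized can be written down compactly: the classical forward/backward recurrence for the diagonal blocks of the inverse of a block-tridiagonal matrix. Dense NumPy blocks below stand in for the sparse Hamiltonian blocks of a real transport code.

```python
# Sketch: diagonal blocks of the inverse of a block-tridiagonal matrix via the
# classical recursive (left-connected) Green's function recurrences.
import numpy as np

def diagonal_blocks_of_inverse(diag, lower, upper):
    """diag[i] = A[i,i]; lower[i] = A[i+1,i]; upper[i] = A[i,i+1]."""
    inv, n = np.linalg.inv, len(diag)
    # Forward sweep: left-connected Green's functions
    gL = [inv(diag[0])]
    for i in range(1, n):
        gL.append(inv(diag[i] - lower[i - 1] @ gL[i - 1] @ upper[i - 1]))
    # Backward sweep: true diagonal blocks G[i,i] of the inverse
    G = [None] * n
    G[-1] = gL[-1]
    for i in range(n - 2, -1, -1):
        G[i] = gL[i] + gL[i] @ upper[i] @ G[i + 1] @ lower[i] @ gL[i]
    return G

rng = np.random.default_rng(0)
b, nb = 2, 4                                    # block size, number of blocks
diag = [rng.random((b, b)) + 3 * np.eye(b) for _ in range(nb)]
lower = [0.1 * rng.random((b, b)) for _ in range(nb - 1)]
upper = [0.1 * rng.random((b, b)) for _ in range(nb - 1)]
print(np.round(diagonal_blocks_of_inverse(diag, lower, upper)[0], 3))
```

    Because each sweep depends on the previous step's result, this recurrence is inherently serial, which is exactly why the paper reformulates the problem with Schur complements and cyclic reduction.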

  19. Computational methods for ab initio detection of microRNAs.

    PubMed

    Allmer, Jens; Yousef, Malik

    2012-01-01

    MicroRNAs are small RNA sequences of 18-24 nucleotides in length, which serve as templates to drive post-transcriptional gene silencing. The canonical microRNA pathway starts with transcription from DNA and is followed by processing via the microprocessor complex, yielding a hairpin structure, which is then exported into the cytosol, where it is processed by Dicer and incorporated into the RNA-induced silencing complex. All of these biogenesis steps add to the overall specificity of miRNA production and effect. Unfortunately, their modes of action are just beginning to be elucidated, and computational prediction algorithms therefore cannot model the process but are usually forced to employ machine learning approaches. This work focuses on ab initio prediction methods throughout; homology-based miRNA detection methods are therefore not discussed. Current ab initio prediction algorithms, their ties to data mining, and their prediction accuracy are detailed. PMID:23087705

  20. Parallel computation of multigroup reactivity coefficient using iterative method

    SciTech Connect

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-09

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of Fission Product Molybdenum (FPM) targets. FPM targets form a stainless steel tube containing superimposed layers of high-enriched uranium, and the tube is irradiated to obtain fission products; the fission material is widely used in kits in nuclear medicine. Irradiating FPM tubes in the reactor core can interfere with its performance, one such disturbance coming from changes in flux or reactivity. It is therefore necessary to develop a method for calculating safety margins for the configuration changes occurring during the life of the reactor, and making the code faster becomes an absolute necessity. The neutron safety margin for the research reactor can be re-evaluated without modification through the calculation of the reactivity of the reactor, which is an advantage of the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions and uranium contents. This model requires complex computation, and several parallel algorithms with iterative methods have been developed for solving large sparse matrix systems. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficient. This research developed a code for the reactivity calculation, one element of safety analysis, with parallel processing; the calculation can be done more quickly and efficiently by utilizing the parallel processing capability of a multicore computer. The code was applied to safety limit calculations for irradiated FPM targets with increasing uranium content.
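
    The power iteration mentioned above is the standard way to extract the dominant eigenvalue (the criticality) of the discretized system; here is a minimal sketch, with a small random nonnegative matrix standing in for the actual multigroup diffusion/fission operator.

```python
# Sketch: power iteration for the dominant eigenvalue and eigenvector.
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=10000):
    x = np.ones(A.shape[0])
    k = 0.0
    for _ in range(max_iter):
        y = A @ x
        k_new = np.linalg.norm(y)              # growth factor of the normalized iterate
        x = y / k_new
        if abs(k_new - k) < tol:
            break
        k = k_new
    return k, x

rng = np.random.default_rng(0)
A = rng.random((50, 50))                       # nonnegative stand-in operator
k_eff, mode = power_iteration(A)
print("dominant eigenvalue: %.6f" % k_eff)
```

    In the reactor code, the matrix-vector product is the expensive step of each iteration, and it is this product that is distributed across the cores of the multicore machine.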

  1. Publicity.

    ERIC Educational Resources Information Center

    Chisholm, Joan

    Publicity for preschool cooperatives is described. Publicity helps produce financial support for preschool cooperatives. It may take the form of posters, brochures, newsletters, open house, newspaper coverage, and radio and television. Word of mouth and general good will in the community are the best avenues of publicity that a cooperative nursery…

  2. A fast phase space method for computing creeping rays

    SciTech Connect

    Motamed, Mohammad. E-mail: mohamad@nada.kth.se; Runborg, Olof. E-mail: olofr@nada.kth.se

    2006-11-20

    Creeping rays can give an important contribution to the solution of medium to high frequency scattering problems. They are generated at the shadow lines of the illuminated scatterer by grazing incident rays and propagate along geodesics on the scatterer surface, continuously shedding diffracted rays in their tangential direction. In this paper, we show how the ray propagation problem can be formulated as a partial differential equation (PDE) in a three-dimensional phase space. To solve the PDE we use a fast marching method. The PDE solution contains information about all possible creeping rays. This information includes the phase and amplitude of the field, which are extracted by a fast post-processing. Computationally, the cost of solving the PDE is less than tracing all rays individually by solving a system of ordinary differential equations. We consider an application to mono-static radar cross section problems where creeping rays from all illumination angles must be computed. The numerical results of the fast phase space method and a comparison with the results of ray tracing are presented.

  3. Matching wind turbine rotors and loads: computational methods for designers

    SciTech Connect

    Seale, J.B.

    1983-04-01

    This report provides a comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications. The user must supply: (1) turbine aerodynamic efficiency as a function of tipspeed ratio; (2) mechanical load torque as a function of rotation speed; (3) useful delivered power as a function of incoming mechanical power; (4) site average windspeed and, for maximum accuracy, distribution data. The description of the data includes governing limits consistent with the capacities of components. The report develops a step-by-step method for converting the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) a decision is made on how turbine power is to be governed (it may self-govern) to ensure the safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics come into play to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach: computer-generated families of coefficient curves provide data for algebraic scaling formulas. The method leads not only to energy predictions but also to insight into the processes being modeled. Direct use of a computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
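
    The core matching calculation lends itself to a short sketch: given an illustrative Cp(tip-speed ratio) curve and a pump-like load torque law, find, for each windspeed, the rotation speed at which rotor torque balances load torque, and report the power delivered there. All curve shapes and constants below are assumptions for illustration.

```python
# Sketch: match a wind turbine rotor to a load by balancing torques.
import numpy as np

rho, R = 1.225, 5.0                            # air density (kg/m^3), rotor radius (m)
A = np.pi * R ** 2

def cp(lam):
    # Toy aerodynamic efficiency curve peaking near tip-speed ratio 6
    return np.clip(0.45 - 0.015 * (lam - 6.0) ** 2, 0.0, None)

def rotor_power(omega, v):
    return 0.5 * rho * A * cp(omega * R / v) * v ** 3

def load_torque(omega):
    return 40.0 * omega ** 2                   # centrifugal-pump-like load: T ~ omega^2

for v in (4.0, 6.0, 8.0, 10.0):
    omegas = np.linspace(0.1, 30.0, 3000)
    rotor_torque = rotor_power(omegas, v) / omegas
    i = np.argmin(np.abs(rotor_torque - load_torque(omegas)))
    print("v = %4.1f m/s: omega = %5.2f rad/s, P = %7.1f W"
          % (v, omegas[i], rotor_power(omegas[i], v)))
```

    Repeating this balance over the site's windspeed distribution and weighting by the hours spent at each speed gives the long-term energy output of step (4).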

  4. An Overview of a Decade of Journal Publications about Culture and Human-Computer Interaction (HCI)

    NASA Astrophysics Data System (ADS)

    Clemmensen, Torkil; Roese, Kerstin

    In this paper, we analyze the concept of human-computer interaction in cultural and national contexts. Building and extending upon the framework for understanding research in usability and culture by Honold [3], we give an overview of publications on culture and HCI between 1998 and 2008, with a narrow focus on high-level journal publications only. The purpose is to review current practice in how cultural HCI issues are studied and to analyse problems with the measures and interpretations of these studies. We find that Hofstede's cultural dimensions have been the dominating model of culture, that participants have been picked because they could speak English, and that most studies have been large-scale quantitative studies. In order to balance this situation, we recommend that more researchers and practitioners do qualitative, empirical work studies.

  5. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...2010-04-01 false Methods of computing depreciation. 1.167(b)-0 Section 1...167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable...consistently applied method of computing depreciation may be used or continued in use...

  6. Computational modeling of multicellular constructs with the material point method.

    PubMed

    Guilkey, James E; Hoying, James B; Weiss, Jeffrey A

    2006-01-01

    Computational modeling of the mechanics of cells and multicellular constructs with standard numerical discretization techniques such as the finite element (FE) method is complicated by the complex geometry, material properties and boundary conditions that are associated with such systems. The objectives of this research were to apply the material point method (MPM), a meshless method, to the modeling of vascularized constructs by adapting the algorithm to accurately handle quasi-static, large deformation mechanics, and to apply the modified MPM algorithm to large-scale simulations using a discretization that was obtained directly from volumetric confocal image data. The standard implicit time integration algorithm for MPM was modified to allow the background computational grid to remain fixed with respect to the spatial distribution of material points during the analysis. This algorithm was used to simulate the 3D mechanics of a vascularized scaffold under tension, consisting of growing microvascular fragments embedded in a collagen gel, by discretizing the construct with over 13.6 million material points. Baseline 3D simulations demonstrated that the modified MPM algorithm was both more accurate and more robust than the standard MPM algorithm. Scaling studies demonstrated the ability of the parallel code to scale to 200 processors. Optimal discretization was established for the simulations of the mechanics of vascularized scaffolds by examining stress distributions and reaction forces. Sensitivity studies demonstrated that the reaction force during simulated extension was highly sensitive to the modulus of the microvessels, despite the fact that they comprised only 10.4% of the volume of the total sample. In contrast, the reaction force was relatively insensitive to the effective Poisson's ratio of the entire sample. These results suggest that the MPM simulations could form the basis for estimating the modulus of the embedded microvessels through a parameter estimation scheme. Because of the generality and robustness of the modified MPM algorithm, the relative ease of generating spatial discretizations from volumetric image data, and the ability of the parallel computational implementation to scale to large processor counts, it is anticipated that this modeling approach may be extended to many other applications, including the analysis of other multicellular constructs and investigations of cell mechanics. PMID:16095601

  7. Inverse Problem Methods as a Public Health Tool in Pneumococcal Vaccination

    E-print Network

    These methods are applied to the study of pneumococcal vaccination strategies, a relevant example which poses many … vaccine policies through the estimation of parameters if vaccine history is recorded along with infection …

  8. Methods of Conserving Heating Energy Utilized in Thirty-One Public School Systems.

    ERIC Educational Resources Information Center

    Davis, Kathy Eggers

    The Memphis City School System was notified by Memphis Light, Gas, and Water that it was necessary to reduce its consumption of natural gas during the winter of 1975-76. A survey was developed and sent to 44 large public school systems to determine which methods of heating energy conservation were used most frequently and which methods were most…

  9. Nonlinear Piece In Hand Perturbation Vector Method for Enhancing Security of Multivariate Public Key Cryptosystems

    E-print Network

    International Association for Cryptologic Research (IACR)

    … of MPKCs is becoming one of the main themes of this area. The piece in hand (PH, for short) matrix method … The piece in hand (PH) is a general …

  10. Computational methods for the detection of cis-regulatory modules.

    PubMed

    Van Loo, Peter; Marynen, Peter

    2009-09-01

    Metazoan transcription regulation occurs through the concerted action of multiple transcription factors that bind co-operatively to cis-regulatory modules (CRMs). The annotation of these key regulators of transcription is lagging far behind the annotation of the transcriptome itself. Here, we give an overview of existing computational methods to detect these CRMs in metazoan genomes. We subdivide these methods into three classes: CRM scanners screen sequences for CRMs based on predefined models that often consist of multiple position weight matrices (PWMs). CRM builders construct models of similar CRMs controlling a set of co-regulated or co-expressed genes. CRM genome screeners screen sequences or complete genomes for CRMs as homotypic or heterotypic clusters of binding sites for any combination of transcription factors. We believe that CRM scanners are currently the most advanced methods, although their applicability is limited. Finally, we argue that CRM builders that make use of PWM libraries will benefit greatly from future advances and will prove to be most instrumental for the annotation of regulatory regions in metazoan genomes. PMID:19498042
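
    The core operation behind CRM scanners can be sketched in a few lines: slide a position weight matrix along the sequence and report windows whose log-odds score clears a threshold. The toy PWM, threshold, and sequence are illustrative assumptions; real scanners combine many such PWM hits into module-level scores.

```python
# Sketch: sliding-window PWM scan with log-odds scoring against a uniform background.
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}
# Toy 4-position motif (rows A, C, G, T; columns are motif positions)
pwm = np.array([[0.80, 0.10, 0.05, 0.05],
                [0.10, 0.70, 0.10, 0.10],
                [0.05, 0.10, 0.70, 0.10],
                [0.05, 0.10, 0.15, 0.75]])
log_odds = np.log2(pwm / 0.25)                 # uniform background model

def scan(seq, threshold=3.0):
    w = log_odds.shape[1]
    hits = []
    for i in range(len(seq) - w + 1):
        score = sum(log_odds[BASES[b], j] for j, b in enumerate(seq[i:i + w]))
        if score >= threshold:
            hits.append((i, round(float(score), 2)))
    return hits

print(scan("TTACGTAACGTTACGT"))                # (start index, score) for each hit
```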

  11. Inter-Domain Redundancy Path Computation Methods Based on PCE

    NASA Astrophysics Data System (ADS)

    Hayashi, Rie; Oki, Eiji; Shiomoto, Kohei

    This paper evaluates three inter-domain redundancy path computation methods based on PCE (Path Computation Element). Some inter-domain paths carry traffic that must be assured of high-quality, high-reliability transfer, such as telephony over IP and premium virtual private networks (VPNs). It is therefore important to set up inter-domain redundancy paths, i.e., primary and secondary paths. The first scheme utilizes an existing protocol and the basic PCE implementation; it does not need any extension or modification. In the second scheme, PCEs make a virtual shortest path tree (VSPT) considering the candidates of primary paths that have corresponding secondary paths. The goal is to reduce blocking probability; corresponding secondary paths may be found more often after a primary path is decided, and no protocol extension is necessary. In the third scheme, PCEs make a VSPT considering all candidates of primary and secondary paths. Blocking probability is further decreased since all possible candidates are located, and the sum of primary and secondary path cost is reduced by choosing the pair with minimum cost among all path pairs. Numerical evaluations show that the second and third schemes offer only a few percent reduction in blocking probability and path pair total cost, while the overheads imposed by protocol revision and the increased amount of calculation and information to be exchanged are large. This suggests that the first scheme, the most basic and simple one, is the best choice.

  12. Novel computational methods to design protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Zhou, Alice Qinhua; O'Hern, Corey; Regan, Lynne

    2014-03-01

    Despite the abundance of structural data, we still cannot accurately predict the structural and energetic changes resulting from mutations at protein interfaces. The inadequacy of current computational approaches to the analysis and design of protein-protein interactions has hampered the development of novel therapeutic and diagnostic agents. In this work, we apply a simple physical model that includes only a minimal set of geometrical constraints, excluded volume, and attractive van der Waals interactions to 1) rank the binding affinity of mutants of tetratricopeptide repeat proteins with their cognate peptides, 2) rank the energetics of binding of small designed proteins to the hydrophobic stem region of the influenza hemagglutinin protein, and 3) predict the stability of T4 lysozyme and staphylococcal nuclease mutants. This work will not only lead to a fundamental understanding of protein-protein interactions, but also to the development of efficient computational methods to rationally design protein interfaces with tunable specificity and affinity, and numerous applications in biomedicine. NSF DMR-1006537, PHY-1019147, Raymond and Beverly Sackler Institute for Biological, Physical and Engineering Sciences, and Howard Hughes Medical Institute.

  13. Computational carbohydrate chemistry: what theoretical methods can tell us.

    PubMed

    Woods, R J

    1998-03-01

    Computational methods have had a long history of application to carbohydrate systems and their development in this regard is discussed. The conformational analysis of carbohydrates differs in several ways from that of other biomolecules. Many glycans appear to exhibit numerous conformations coexisting in solution at room temperature and a conformational analysis of a carbohydrate must address both spatial and temporal properties. When solution nuclear magnetic resonance data are used for comparison, the simulation must give rise to ensemble-averaged properties. In contrast, when comparing to experimental data obtained from crystal structures a simulation of a crystal lattice, rather than of an isolated molecule, is appropriate. Molecular dynamics simulations are well suited for such condensed phase modeling. Interactions between carbohydrates and other biological macromolecules are also amenable to computational approaches. Having obtained a three-dimensional structure of the receptor protein, it is possible to model with accuracy the conformation of the carbohydrate in the complex. An example of the application of free energy perturbation simulations to the prediction of carbohydrate-protein binding energies is presented. PMID:9579797

  14. Computational methods for the verification of adaptive control systems

    NASA Astrophysics Data System (ADS)

    Prasanth, Ravi K.; Boskovic, Jovan; Mehra, Raman K.

    2004-08-01

    Intelligent and adaptive control systems will significantly challenge current verification and validation (V&V) processes, tools, and methods for flight certification. Although traditional certification practices have produced safe and reliable flight systems, they will not be cost effective for next-generation autonomous unmanned air vehicles (UAVs) due to inherent size and complexity increases from added functionality. Affordable V&V of intelligent control systems is by far the most important challenge in the development of UAVs faced by both the commercial and military aerospace industry in the United States. This paper presents a formal modeling framework for a class of adaptive control systems and an associated computational scheme. The class of systems considered includes neural network-based flight control systems and vehicle health management systems. This class of systems, and indeed all adaptive systems, are hybrid systems whose continuum dynamics are nonlinear. Our computational procedure is iterative, and each iteration has two sequential steps. The first step is to derive an approximating finite-state automaton whose behaviors contain the behaviors of the hybrid system. The second step is to check whether the language accepted by the approximating automaton is empty (emptiness checking). The iterations are terminated if the language accepted is empty; otherwise, the approximation is refined and the iteration is continued. This procedure will never produce an "error-free" certificate when the actual system contains errors, which is an important requirement in V&V of safety-critical systems.
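
    The iterate-abstract-check-refine loop can be rendered as a small skeleton (the names and the graph encoding of the automaton are assumptions, and the hard parts, constructing and refining a sound finite-state over-approximation of the hybrid dynamics, are left as callables):

```python
from collections import deque

def reachable(transitions, init, targets):
    # BFS emptiness check: is any error state reachable in the automaton?
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if s in targets:
            return True
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

def verify(abstract, refine, max_iters=20):
    # abstract() returns (transitions, init, error_states) for the current
    # finite-state over-approximation; refine() splits abstract states and
    # returns False when no further refinement is possible.
    for _ in range(max_iters):
        transitions, init, errors = abstract()
        if not reachable(transitions, init, errors):
            return "verified"        # over-approximation is error-free
        if not refine():
            return "inconclusive"    # possibly a genuine error
    return "inconclusive"
```

    A "verified" verdict is trustworthy because the automaton's language contains the hybrid system's behaviors; a non-empty abstract error language may be spurious, which is why the loop refines instead of reporting failure.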

  15. Methods and computer readable medium for improved radiotherapy dosimetry planning

    DOEpatents

    Wessol, Daniel E.; Frandsen, Michael W.; Wheeler, Floyd J.; Nigg, David W.

    2005-11-15

    Methods and computer readable media are disclosed for ultimately developing a dosimetry plan for a treatment volume irradiated during radiation therapy with a radiation source concentrated internally within a patient or incident from an external beam. The dosimetry plan is available in near "real-time" because of the novel geometric model construction of the treatment volume which in turn allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of alpha, beta or gamma emissions emanating from an internal radiation source during various radiotherapies, such as brachytherapy or targeted radionuclide therapy, or they are exemplary representations of high-energy photons, electrons, protons or other ionizing particles incident on the treatment volume from an external source. In a preferred embodiment, a medical image of a treatment volume irradiated during radiotherapy having a plurality of pixels of information is obtained.

  16. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  17. Helping Students Soar to Success on Computers: An Investigation of the Soar Study Method for Computer-Based Learning

    ERIC Educational Resources Information Center

    Jairam, Dharmananda; Kiewra, Kenneth A.

    2010-01-01

    This study used self-report and observation techniques to investigate how students study computer-based materials. In addition, it examined if a study method called SOAR can facilitate computer-based learning. SOAR is an acronym that stands for the method's 4 theoretically driven and empirically supported components: select (S), organize (O),…

  18. Spine head calcium as a measure of summed postsynaptic activity for driving synaptic plasticity

    E-print Network

    Graham, Bruce

    Prepublication version; accepted for publication in Neural Computation, 2014. University of Glasgow, Glasgow, G12 8QB, U.K. Running title: Spine head calcium driving synaptic plasticity. The authors use a computational model of a hippocampal CA1 pyramidal cell to demonstrate that spine head calcium

  19. A Critical Review of Computer-Assisted Learning in Public Health via the Internet, 1999-2008

    ERIC Educational Resources Information Center

    Corda, Kirsten W.; Polacek, Georgia N. L. J.

    2009-01-01

    Computers and the internet have been utilized as viable avenues for public health education delivery. Yet the effectiveness, e.g., behavior change, from use of these tools has been limited. Previous reviews have focused on single health topics such as smoking cessation and weight loss. This review broadens the scope to consider computer-assisted…

  20. Specification architecture illustrated in a communications context (pre-publication version)

    E-print Network

    Turner, Ken

    Kenneth J. Turner. Specification architecture illustrated in a communications context (pre-publication version). Computer Networks and ISDN Systems, 29(4):397-411, March 1997. Department of Computing Science and Mathematics.

  1. A stoichiometric calibration method for dual energy computed tomography.

    PubMed

    Bourque, Alexandra E; Carrier, Jean-François; Bouchard, Hugo

    2014-04-21

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic reconstruction algorithm (filtered back projection). With a more advanced method (sinogram affirmed iterative technique), the values become 1.0 mm, 0.5 mm and 0.4 mm for protons, helium and carbon ions, respectively. These results allow one to conclude that the present adaptation of the stoichiometric calibration yields a highly accurate method for characterizing tissue with DECT for ion beam therapy and potentially for photon beam therapy. PMID:24694786
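
    As a greatly simplified sketch of the dual-energy inversion idea (not the paper's formalism: the two-parameter attenuation model, the exponent, and all names below are assumptions), one can calibrate per-spectrum parameters on inserts of known composition and then solve a measured low/high pair for ED and EAN:

```python
import numpy as np

N = 3.3  # illustrative EAN exponent (assumption)

def fit_energy_params(rho_e, Z, u_meas):
    # Least-squares fit of u = rho_e * (a + b * Z**N) for one spectrum,
    # using calibration inserts with known electron density and EAN.
    A = np.column_stack([rho_e, rho_e * Z**N])
    (a, b), *_ = np.linalg.lstsq(A, u_meas, rcond=None)
    return a, b

def invert_dect(uL, uH, pL, pH, z_lo=1.0, z_hi=30.0, tol=1e-10):
    # Under this model uL/uH depends only on Z, so solve for Z by
    # bisection (assumes a sign change over [z_lo, z_hi]), then recover
    # the electron density from the low-energy measurement.
    f = lambda p, z: p[0] + p[1] * z**N
    g = lambda z: uL * f(pH, z) - uH * f(pL, z)
    lo, hi = z_lo, z_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    Z = 0.5 * (lo + hi)
    return uL / f(pL, Z), Z   # (rho_e, EAN)
```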

  2. A stoichiometric calibration method for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

    2014-04-01

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic reconstruction algorithm (filtered back projection). With a more advanced method (sinogram affirmed iterative technique), the values become 1.0 mm, 0.5 mm and 0.4 mm for protons, helium and carbon ions, respectively. These results allow one to conclude that the present adaptation of the stoichiometric calibration yields a highly accurate method for characterizing tissue with DECT for ion beam therapy and potentially for photon beam therapy.

  3. Interactive computer methods for generating mineral-resource maps

    USGS Publications Warehouse

    Calkins, James Alfred; Crosby, A.S.; Huffman, T.E.; Clark, A.L.; Mason, G.T.; Bascle, R.J.

    1980-01-01

    Inasmuch as maps are a basic tool of geologists, the U.S. Geological Survey's CRIB (Computerized Resources Information Bank) was constructed so that the data it contains can be used to generate mineral-resource maps. However, by the standard methods used (batch processing and off-line plotting), the production of a finished map commonly takes 2-3 weeks. To produce computer-generated maps more rapidly, cheaply, and easily, and also to provide an effective demonstration tool, we have devised two related methods for plotting maps as alternatives to conventional batch methods. These methods are: 1. Quick-Plot, an interactive program whose output appears on a CRT (cathode-ray-tube) device, and 2. the Interactive CAM (Cartographic Automatic Mapping system), which combines batch and interactive runs. The output of the Interactive CAM system is final-compilation (not camera-ready) paper copy. Both methods are designed to use data from the CRIB file in conjunction with a map-plotting program. Quick-Plot retrieves a user-selected subset of data from the CRIB file, immediately produces an image of the desired area on a CRT device, and plots data points according to a limited set of user-selected symbols. This method is useful for immediate evaluation of the map and for demonstrating how trial maps can be made quickly. The Interactive CAM system links the output of an interactive CRIB retrieval to a modified version of the CAM program, which runs in the batch mode and stores plotting instructions on a disk rather than on a tape. The disk can be accessed by a CRT, and thus the user can view and evaluate the map output on a CRT immediately after a batch run, without waiting 1-3 days for an off-line plot. The user can, therefore, do most of the layout and design work in a relatively short time by use of the CRT, before generating a plot tape and having the map plotted on an off-line plotter.

  4. Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture

    DOEpatents

    Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2011-10-11

    Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.

  5. Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.

    2011-01-01

    This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…

  6. Standard practice for digital imaging and communication nondestructive evaluation (DICONDE) for computed radiography (CR) test methods

    E-print Network

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This practice facilitates the interoperability of computed radiography (CR) imaging and data acquisition equipment by specifying image data transfer and archival storage methods in commonly accepted terms. This practice is intended to be used in conjunction with Practice E2339 on Digital Imaging and Communication in Nondestructive Evaluation (DICONDE). Practice E2339 defines an industrial adaptation of the NEMA Standards Publication titled Digital Imaging and Communications in Medicine (DICOM, see http://medical.nema.org), an international standard for image data acquisition, review, storage and archival storage. The goal of Practice E2339, commonly referred to as DICONDE, is to provide a standard that facilitates the display and analysis of NDE results on any system conforming to the DICONDE standard. Toward that end, Practice E2339 provides a data dictionary and a set of information modules that are applicable to all NDE modalities. This practice supplements Practice E2339 by providing information objec...

  7. Publicity and public relations

    NASA Technical Reports Server (NTRS)

    Fosha, Charles E.

    1990-01-01

    This paper addresses approaches to using publicity and public relations to meet the goals of the NASA Space Grant College. Methods universities and colleges can use to publicize space activities are presented.

  8. PHY 3301H -- COMPUTATIONAL METHODS IN THE PHYSICAL SCIENCES

    E-print Network

    Buldyrev, Sergey

    -521-43108-5; Computational Physics, by J. M. Thijssen, ISBN 0-521-57588-5. Supplementary texts: The C Programming Language. Grading: periodic computational projects 40%; final computer project 20%. Course summary: in this hands-on course students will also acquire very useful programming skills, including Mathematica, C, and C++. Prerequisites: Year

  9. Special computer-aided computed tomography (CT) volume measurement and comparison method for pulmonary tuberculosis (TB)

    PubMed Central

    Liu, Jingming; Sun, Zhaogang; Xie, Ruming; Gao, Mengqiu; Li, Chuanyou

    2015-01-01

    The computed tomography (CT) manifestations in pulmonary tuberculosis (PTB) patients are complex and could not previously be quantitatively evaluated. We aimed to establish a new method to objectively measure the lung injury level in PTB by thoracic CT and to make quantitative comparisons. In this retrospective study, a total of 360 adults were selected and divided into four groups according to their CT manifestations and medical history: a Normal group, a PTB group, a PTB with diabetes mellitus (DM) group, and a Death caused by PTB group. Five additional patients who had serial CT scans were chosen for a preliminary longitudinal analysis. We established a new computer-aided CT volume measurement and comparison method for PTB patients (CACTV-PTB) which measured lung volume (LV) and thoracic volume (TV). RLT was calculated as the ratio of LV to TV, and comparisons were performed among the different groups. Standardized RLT (SRLT) was used in the longitudinal analysis among different patients. In the Normal group, LV and TV were positively correlated in a linear regression (Y = -0.5 + 0.46X, R² = 0.796, P < 0.01). RLT values were significantly different among the four groups (Normal: 0.40 ± 0.05, PTB: 0.37 ± 0.08, PTB+DM: 0.34 ± 0.06, Death: 0.23 ± 0.04). The curves of SRLT values from different patients shared the same starting point and could be compared directly. Utilizing the novel objective method CACTV-PTB makes it possible to compare the severity and dynamic change among different PTB patients. Our early experience also suggests that lung injury is more severe in the PTB+DM group than in the PTB group.
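
    Once lung and thorax are segmented, the ratio itself is elementary; a minimal sketch (binary masks and names assumed, not the authors' pipeline, and the standardization step for SRLT is omitted):

```python
import numpy as np

def rlt_from_masks(lung_mask, thorax_mask, voxel_volume_ml):
    # RLT = lung volume / thoracic volume; each volume is the count of
    # segmented voxels times the volume of a single voxel.
    lv = float(lung_mask.sum()) * voxel_volume_ml
    tv = float(thorax_mask.sum()) * voxel_volume_ml
    return lv / tv
```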

  10. Recent developments in the Green's function method. [for aerodynamics computer program

    NASA Technical Reports Server (NTRS)

    Tseng, K.; Puglise, J. A.; Morino, L.

    1977-01-01

    A recent computational development on the Green's function method (the method used in the computer program SOUSSA: Steady, Oscillatory and Unsteady Subsonic and Supersonic Aerodynamics) is presented. A scheme consisting of combined numerical (Gaussian quadrature) and analytical procedures for the evaluation of the source and doublet integrals used in the program is presented. This combination results in 80 to 90% reduction in computer time.
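
    The numerical half of such a combined scheme is standard Gauss-Legendre quadrature mapped onto each panel; a minimal sketch (the analytic handling of near-singular source and doublet kernels, which such a program pairs with this, is not shown):

```python
import numpy as np

def panel_integral(f, a, b, n=8):
    # n-point Gauss-Legendre rule mapped from [-1, 1] onto the panel [a, b];
    # f must accept a NumPy array of evaluation points.
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)
    return 0.5 * (b - a) * float(np.dot(w, f(t)))

print(panel_integral(np.sin, 0.0, np.pi))  # ~2.0
```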

  11. Traditional Method versus Computer-Aided Instruction Method in Teaching BASIC Programming to Vocational High School Students.

    ERIC Educational Resources Information Center

    Koohang, Alex A.

    The purpose of this study was to investigate the effectiveness of computer-aided instruction as compared with the traditional lecture method of cognitive learning of new curriculum materials. It was hypothesized that students instructed by the computer-aided instruction method would gain higher knowledge of the subject matter in terms of cognitive…

  12. Non-unitary probabilistic quantum computing circuit and method

    NASA Technical Reports Server (NTRS)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
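
    A small numerical simulation of this repeat-until-success idea (a generic unitary-dilation construction, not the patented circuit; all names and the example operator are assumptions):

```python
import numpy as np
from scipy.linalg import sqrtm

def dilation_unitary(M):
    # Embed a non-unitary M (spectral norm <= 1) into a unitary on the
    # system plus one ancilla qubit: ancilla outcome 0 heralds M.
    n = M.shape[0]
    I = np.eye(n)
    D1 = sqrtm(I - M @ M.conj().T)
    D2 = sqrtm(I - M.conj().T @ M)
    return np.block([[M, D1], [D2, -M.conj().T]])

def probabilistic_apply(M, state, rng, max_tries=100):
    # Repeat until the (simulated) ancilla measurement succeeds.
    U = dilation_unitary(M)
    n = len(state)
    for _ in range(max_tries):
        full = U @ np.concatenate([state, np.zeros(n)])
        top, bottom = full[:n], full[n:]
        p = np.linalg.norm(top) ** 2
        if rng.random() < p:                      # success: M was applied
            return top / np.linalg.norm(top)
        state = bottom / np.linalg.norm(bottom)   # failure branch; retry
    raise RuntimeError("no success within max_tries")

rng = np.random.default_rng(0)
M = np.array([[0.6, 0.0], [0.3, 0.5]])            # non-unitary, norm <= 1
print(probabilistic_apply(M, np.array([1.0, 0.0]), rng))
```

    On each failure the system collapses onto the failure branch and the dilation is applied again with a fresh ancilla, mirroring the retry loop described above.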

  13. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

    SciTech Connect

    Luttman, A.

    2012-03-30

    The main long-term goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis of the observed spatiotemporal evolution is performed using an objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis that proceeds from the observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer for the problem is computed. An application to oceanic flow based on sea surface temperature is presented.

  14. Method for computing flowing two-fluid plasma equilibria

    NASA Astrophysics Data System (ADS)

    Steinhauer, Loren

    2004-11-01

    An algorithm is constructed for computing the flowing equilibria of a two-fluid plasma in two dimensions. The method relies on a successive-overrelaxation technique applied to the coupled pair of differential equations for a flowing two-fluid plasma. These have the form L₁ψ = f₀(n, ψ, Y) + ε⁻²f₁(n, ψ, Y), L₂Y = g₀(n, ψ, Y) + ε⁻²g₁(n, ψ, Y), where ψ(r,z) is the magnetic surface variable (cylindrical coordinates), Y(r,z) is the ion flow surface variable, ε << 1 is the two-fluid parameter (ratio of the ion skin depth to plasma size), n is the density, and L₁, L₂ are second-order operators (the former is proportional to the familiar Grad-Shafranov operator). The system is closed by a Bernoulli equation for the density. Two special measures are employed in solving this system. (1) Since the equations are stiff (by virtue of the 1/ε² factors on the right side), the ion surface variable is expanded as Y = ψ + εY₁ + ... to convert to a pair of "unstiff" equations. (2) The respective separatrix surfaces ψ(r,z) = 0 and Y(r,z) = 0 are prespecified; the field and flow outside the separatrix (vertical field region) are then solved by an extension procedure. The shape of the separatrix can be adjusted to get a reasonable boundary condition in the external field region. This work is supported by USDOE.
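
    The successive-overrelaxation kernel at the heart of such a solver is simple; below is a generic sketch for a 5-point Laplacian model problem on a unit-spaced grid (the actual operators include the Grad-Shafranov terms, and the nonlinear right-hand sides are re-evaluated between sweeps, none of which is shown):

```python
import numpy as np

def sor(rhs, psi, omega=1.8, tol=1e-8, max_sweeps=5000):
    # Solves the 5-point discrete Laplacian(psi) = rhs with Dirichlet
    # boundary values taken from the initial psi array (modified in place).
    for _ in range(max_sweeps):
        err = 0.0
        for i in range(1, psi.shape[0] - 1):
            for j in range(1, psi.shape[1] - 1):
                gs = 0.25 * (psi[i + 1, j] + psi[i - 1, j]
                             + psi[i, j + 1] + psi[i, j - 1] - rhs[i, j])
                err = max(err, abs(gs - psi[i, j]))
                psi[i, j] += omega * (gs - psi[i, j])   # over-relaxed update
        if err < tol:
            break
    return psi
```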

  15. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  16. 77 FR 22326 - Privacy Act of 1974, as Amended by Public Law 100-503; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-13

    ...by Public Law 100-503; Notice of a Computer Matching Program AGENCY: Office of Financial...Information System (PARIS) notice of a computer matching program between the Department...amended by Public Law 100-503, the Computer Matching and Privacy Protection Act...

  17. Methods of Language Assessment: A Survey of California Public School Clinicians.

    ERIC Educational Resources Information Center

    Wilson, Kristine S.; And Others

    1991-01-01

    Public school speech-language clinicians (n=266) in California were surveyed regarding methods for assessing the language of children ages 4-9. Results are discussed in terms of formal and informal expressive and receptive language assessment and ways in which new assessment tools are identified and incorporated. (Author/JDD)

  18. Educating the Public About Health: A Planning Guide. Health Planning Methods and Technology Series.

    ERIC Educational Resources Information Center

    Sullivan, Daniel

    A comprehensive overview of major issues involved in educating the public about health, with emphasis on methods and approaches designed to foster community participation in health planning, is presented in this guide. It is intended to provide those engaged in health education program development with ideas for use in planning,…

  19. 16.901 Computational Methods in Aerospace Engineering, Spring 2003

    E-print Network

    Darmofal, David L.

    Introduction to computational techniques arising in aerospace engineering. Applications drawn from aerospace structures, aerodynamics, dynamics and control, and aerospace systems. Techniques include: numerical integration ...

  20. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
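
    A minimal modern sketch of the same idea, assuming an ideal-gas mixture at standard pressure (the function names and solver choice are assumptions, not the article's program):

```python
import numpy as np
from scipy.optimize import minimize

def equilibrium(n0, mu0_over_RT, A):
    # Minimize G/RT = sum_i n_i * (mu0_i/RT + ln(n_i / n_total)) subject
    # to elemental balance A @ n = A @ n0, with n_i > 0.
    def g(n):
        n = np.clip(n, 1e-12, None)
        return float(n @ (mu0_over_RT + np.log(n / n.sum())))
    cons = {"type": "eq", "fun": lambda n: A @ n - A @ n0}
    bounds = [(1e-12, None)] * len(n0)
    return minimize(g, n0, bounds=bounds, constraints=cons,
                    method="SLSQP").x
```

    Each row of A counts one element across the species, so the equality constraint is exactly the material-balance condition mentioned above.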

  1. A method for computing the leading-edge suction in a higher-order panel method

    NASA Technical Reports Server (NTRS)

    Ehlers, F. E.; Manro, M. E.

    1984-01-01

    Experimental data show that the phenomenon of a separation-induced leading-edge vortex is influenced by the wing thickness and the shape of the leading edge. Both thickness and leading-edge shape (rounded rather than pointed) delay the formation of a vortex. Existing computer programs used to predict the effect of a leading-edge vortex do not include a procedure for determining whether or not a vortex actually exists. Studies under NASA Contract NAS1-15678 have shown that the vortex development can be predicted by using the relationship between the leading-edge suction coefficient and the parabolic nose drag. The linear-theory program FLEXSTAB was used to calculate the leading-edge suction coefficient. This report describes the development of a method for calculating leading-edge suction using the capabilities of the higher-order panel methods (exact boundary conditions). For a two-dimensional case, numerical methods were developed using the doublet strength and downwash distribution along the chord. A Gaussian quadrature formula that directly incorporates the logarithmic singularity in the downwash distribution, at all panel edges, was found to be the best method.

  2. Students' Attitudes towards Control Methods in Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    Hintze, Hanne; And Others

    1988-01-01

    Describes study designed to investigate dental students' attitudes toward computer-assisted teaching as applied in programs for oral radiology in Denmark. Programs using personal computers and slide projectors with varying degrees of learner and teacher control are described, and differences in attitudes between male and female students are…

  3. The reminiscence bump in autobiographical memory and for public events: A comparison across different cueing methods.

    PubMed

    Koppel, Jonathan; Berntsen, Dorthe

    2016-01-01

    The reminiscence bump has been found for both autobiographical memories and memories of public events. However, there have been few comparisons of the bump across each type of event. In the current study, therefore, we compared the bump for autobiographical memories versus the bump for memories of public events. We did so between-subjects, through two cueing methods administered within-subjects, the cue word method and the important memories method. For word-cued memories, we found a similar bump from ages 5 to 19 for both types of memories. However, the bump was more pronounced for autobiographical memories. For most important memories, we found a bump from ages 20 to 29 in autobiographical memory, but little discernible age pattern for public events. Rather, specific public events (e.g., the Fall of the Berlin Wall) dominated recall, producing a chronological distribution characterised by spikes in citations according to the years these events occurred. Follow-up analyses suggested that the bump in most important autobiographical memories was a function of the cultural life script. Our findings did not yield support for any of the dominant existing accounts of the bump as underlying the bump in word-cued memories. PMID:25529327

  4. ULO Course Learning Outcome, Assessment Method, and Pedagogy (COMM441)

    E-print Network

    Barrash, Warren

    [Flattened assessment table for COMM441, with columns ULO / Course Learning Outcome / Assessment Method / Pedagogy. Outcomes 02-01, 02-02, and 03-01 each build on what public speaking entails, emphasizing extemporaneous style and proficient speaking about a specific topic.]

  5. 3D modeling method for computer animation based on a modified weak structured light method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design, and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, and, on the other hand, precision is not as critical a factor in that situation. In this paper, a new, cheap 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source, and a straight stick rotating on a fixed axis. In an ordinary weak-structured-light configuration, one or two reference planes are required, and the shadows on these planes must be tracked during the scanning process, which destroys the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is expanded considerably. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints to get a full description of the object, and, after a series of operations, a NURBS surface model is generated in the end. A complex toy bear is used to verify the efficiency of the method, and errors range from 0.7783 mm to 1.4326 mm compared with the ground-truth measurement.

  6. Public Key Cryptography

    E-print Network

    Simonson, Shai

    Stonehill College. Introduction: cryptographic methods are a part of every computer scientist's education. In public-key cryptography, also called trapdoor or one-way cryptography, the encoding scheme is public, yet the decoding scheme remains secret

  7. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for representing modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general-education and worldview functions of computer science call for additional research on the…

  8. Astronomical refraction: Computational methods for all zenith angles

    NASA Technical Reports Server (NTRS)

    Auer, L. H.; Standish, E. M.

    2000-01-01

    It is shown that the problem of computing astronomical refraction for any value of the zenith angle may be reduced to a simple, nonsingular, numerical quadrature when the proper choice is made for the independent variable of integration.

  9. Computational Methods for Learning Bayesian Networks from High-Throughput

    E-print Network

    Subramanian, Devika

    Department of Computer Science, Rice University, 6100 Main St., Houston, TX 77005. E-mail: devika@rice. With high-throughput measurements of gene and protein expression levels, there is increasing interest in learning the underlying relationships between

  10. Computer-intensive methods in traffic safety research.

    PubMed

    Stanislaw, Harold

    2002-01-01

    The analysis of traffic safety data archives has improved markedly with the development of procedures that are heavily dependent upon computers. Three such procedures are described here. The first procedure involves using computers to assist in the identification and correction of invalid data. The second procedure makes greater computational demands, and involves using computerized algorithms to fill in the "gaps" that typically occur in archival data when information regarding key variables is not available. The third and most computer-intensive procedure involves using data mining techniques to search archives for interesting and important relationships between variables. These procedures are illustrated using examples from data archives that describe the characteristics of traffic accidents in the USA and Australia. PMID:12189106

  11. Research on Assessment Methods for Urban Public Transport Development in China

    PubMed Central

    Zou, Linghong; Guo, Hongwei

    2014-01-01

    In recent years, with the rapid increase in urban population, urban travel demands in Chinese cities have been increasing dramatically. As a result, developing comprehensive urban transport systems becomes an inevitable choice to meet the growing urban travel demands. In urban transport systems, public transport plays the leading role in promoting sustainable urban development. This paper aims to establish an assessment index system for the development level of urban public transport consisting of a target layer, a criterion layer, and an index layer. A review of existing literature shows that methods used in evaluating urban public transport structure are predominantly qualitative. To overcome this shortcoming, a fuzzy mathematics method is used to describe qualitative issues quantitatively, and AHP (analytic hierarchy process) is used to quantify experts' subjective judgments. The assessment model is established based on the fuzzy AHP. The weight of each index is determined through the AHP and the degree of membership of each index through the fuzzy assessment method, yielding the fuzzy synthetic assessment matrix. Finally, a case study is conducted to verify the rationality and practicability of the assessment system and the proposed assessment method. PMID:25530756
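
    The computational core described, eigenvector weights from AHP combined with a fuzzy membership matrix, is compact; a rough sketch (matrix sizes and values are illustrative assumptions):

```python
import numpy as np

def ahp_weights(pairwise):
    # Weights = normalized principal eigenvector of the pairwise matrix.
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def fuzzy_synthetic(weights, membership):
    # membership[i, j]: degree to which index i belongs to grade j;
    # the synthetic assessment is the weight-blended grade vector.
    b = weights @ membership
    return b / b.sum()

# Three indices compared pairwise; four assessment grades.
P = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
R = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.2, 0.4, 0.3, 0.1],
              [0.1, 0.3, 0.4, 0.2]])
print(fuzzy_synthetic(ahp_weights(P), R))
```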

  12. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  13. Methods and Prospects for Human Computer Performance of Popular Music

    E-print Network

    Dannenberg, Roger B.

    Department of Electrical Engineering, New York, NY. Abstract: Computers are often used in popular music, either in complete control of pre-recorded or sequenced music where musicians follow the computer's drums or click

  14. Iterative Correction Methods for the Beam Hardening Artifacts in Computed Tomography

    E-print Network

    Faridani, Adel

    The pioneers of CT were awarded the Nobel Prize in Physiology or Medicine in 1979 for their respective contributions to the development of CT [3][5]. We implement each method, using previous work done by M. Alarfaj [1]. X-ray computed tomography (CT) is a form

  15. FINITE VOLUME METHODS APPLIED TO THE COMPUTATIONAL MODELLING OF WELDING PHENOMENA

    E-print Network

    Taylor, Gary

    ABSTRACT: This paper presents the computational modelling of welding phenomena within a versatile numerical framework spanning Computational Fluid Dynamics (CFD) and Computational Solid Mechanics (CSM). With regard to the CFD modelling of the weld pool fluid dynamics, heat

  16. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  17. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    NASA Technical Reports Server (NTRS)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

  18. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  19. Comparisons of Two Viscous Models for Vortex Methods in Parallel Computation

    NASA Astrophysics Data System (ADS)

    Lee, Sang Hwan; Jin, Dong Sik; Yoon, Jin Sup

    A parallel implementation of vortex methods dealing with unsteady viscous flows on a distributed computing environment through Parallel Virtual Machine (PVM) is reported in this paper. We test the recently developed diffusion schemes of vortex methods. We directly compare the particle strength exchange method with the vorticity distribution method in terms of their accuracy and computational efficiency. Comparisons between both viscous models described are presented for the impulsively started flows past a circular cylinder at Reynolds number 60. We also present the comparisons of both methods in their parallel computation efficiency and speed-up ratio.

  20. Publications: Forrest M. Hoffman and William W. Hargrove. Cluster computing: Linux taken to the extreme. Linux

    E-print Network

    Hoffman, Forrest M.

    Publications: Forrest M. Hoffman and William W. Hargrove. Cluster computing: Linux taken to the extreme. Linux Magazine, 1(1):56-59, 1999. Forrest M. Hoffman. Concepts in Beowulfery. Linux Magazine, 4(1):40-41, January 2002a. Forrest M. Hoffman. Configuring a Beowulf Cluster. Linux Magazine, 4(2):42-45, February

  1. 78 FR 54453 - Notice of Public Meeting-Intersection of Cloud Computing and Mobility Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... National Institute of Standards and Technology Notice of Public Meeting--Intersection of Cloud Computing... Technology (NIST) announces the Intersection of Cloud and Mobility Forum and Workshop to be held on Tuesday... breakout sessions held each day. The NIST Intersection of Cloud and Mobility Forum and Workshop will...

  2. SOURCE WATER PROTECTION OF PUBLIC DRINKING WATER WELLS: COMPUTER MODELING OF ZONES CONTRIBUTING RECHARGE TO PUMPING WELLS

    EPA Science Inventory

    Computer technology to assist states, tribes, and clients in the design of wellhead and source water protection areas for public water supply wells is being developed through two distinct subtasks: (Subtask 1) developing a web-based wellhead decision support system, WellHEDSS, t...

  3. ICRP Publication 116—the first ICRP/ICRU application of the male and female adult reference computational phantoms

    NASA Astrophysics Data System (ADS)

    Petoussi-Henss, Nina; Bolch, Wesley E.; Eckerman, Keith F.; Endo, Akira; Hertel, Nolan; Hunt, John; Menzel, Hans G.; Pelliccioni, Maurizio; Schlattl, Helmut; Zankl, Maria

    2014-09-01

    ICRP Publication 116 on ‘Conversion coefficients for radiological protection quantities for external radiation exposures’, provides fluence-to-dose conversion coefficients for organ-absorbed doses and effective dose for various types of external exposures (ICRP 2010 ICRP Publication 116). The publication supersedes the ICRP Publication 74 (ICRP 1996 ICRP Publication 74, ICRU 1998 ICRU Report 57), including new particle types and expanding the energy ranges considered. The coefficients were calculated using the ICRP/ICRU computational phantoms (ICRP 2009 ICRP Publication 110) representing the reference adult male and reference adult female (ICRP 2002 ICRP Publication 89), together with a variety of Monte Carlo codes simulating the radiation transport in the body. Idealized whole-body irradiation from unidirectional and rotational parallel beams as well as isotropic irradiation was considered for a large variety of incident radiations and energy ranges. Comparison of the effective doses with operational quantities revealed that the latter quantities continue to provide a good approximation of effective dose for photons, neutrons and electrons for the ‘conventional’ energy ranges considered previously (ICRP 1996, ICRU 1998), but not at the higher energies of ICRP Publication 116.

  4. A Review of Data Quality Assessment Methods for Public Health Information Systems

    PubMed Central

    Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping

    2014-01-01

    High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. The relevant study was identified in major databases and well-known institutional websites. We found the dimension of data was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interview and documentation review. The limitations of the reviewed studies included inattentiveness to data use and data collection process, inconsistency in the definition of attributes of data quality, failure to address data users’ concerns and a lack of systematic procedures in data quality assessment. This review study is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research efforts should be given to assess the quality of data use and the quality of data collection process. PMID:24830450

  5. A general method for the computation of probabilities in systems of first order chemical reactions

    E-print Network

    Djuriæ, Petar M.

    A general method is presented for the computation of molecular population distributions in a system of first-order chemical reactions. The method models the chemical reactions in a stochastic way rather than with the traditional differential equations
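
    For first-order networks each molecule evolves independently, so the marginal species probabilities obey a linear master equation; one standard route (shown here as an illustration, not necessarily the paper's method) is the matrix exponential of the rate generator:

```python
import numpy as np
from scipy.linalg import expm

def species_probabilities(Q, p0, t):
    # Q[i, j] = rate of converting species i to j (i != j); diagonal
    # entries make rows sum to zero. The row vector of probabilities
    # evolves as p(t) = p0 @ expm(Q * t).
    return p0 @ expm(Q * t)

# Reversible isomerization A <-> B with rates k1 = 2.0 and k2 = 1.0.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
print(species_probabilities(Q, np.array([1.0, 0.0]), 10.0))  # ~[1/3, 2/3]
```

    With N independent molecules, the joint population distribution is then multinomial over these single-molecule probabilities.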

  6. An Integrated Path Integral and Free-Energy Perturbation-Umbrella Sampling Method for Computing

    E-print Network

    Minnesota, University of

    An integrated path integral and free-energy perturbation-umbrella sampling (PI-FEP/UM) method for computing kinetic isotope effects is presented; the computation is achieved by coupled free-energy perturbation and umbrella sampling for reactions involving

  7. Anderson Acceleration of the Alternating Projections Method for Computing the Nearest Correlation Matrix

    E-print Network

    Higham, Nicholas J.

    The alternating projections method for computing the nearest correlation matrix can require a large number of iterations to converge to within a given tolerance. We show that Anderson acceleration
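
    For reference, a minimal unaccelerated version of the underlying fixed-point iteration (Dykstra-corrected alternating projections in the style of Higham's algorithm; tolerances and names are assumptions) is sketched below. Anderson acceleration extrapolates over the history of these iterates rather than changing the projections themselves:

```python
import numpy as np

def nearest_correlation(A, tol=1e-8, max_iters=1000):
    # Alternate between projecting onto the PSD cone and onto the
    # unit-diagonal subspace, with Dykstra's correction on the PSD step.
    Y, dS = A.copy(), np.zeros_like(A)
    for _ in range(max_iters):
        R = Y - dS
        vals, vecs = np.linalg.eigh((R + R.T) / 2)
        X = vecs @ np.diag(np.clip(vals, 0.0, None)) @ vecs.T  # PSD projection
        dS = X - R
        Y_new = X.copy()
        np.fill_diagonal(Y_new, 1.0)                           # unit diagonal
        if np.linalg.norm(Y_new - Y, "fro") <= tol * max(1.0, np.linalg.norm(Y_new, "fro")):
            return Y_new
        Y = Y_new
    return Y
```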

  8. An efficient and spectrally accurate numerical method for computing dynamics of rotating Bose-Einstein condensates

    E-print Network

    Bao, Weizhu

    By applying a time-splitting technique for decoupling the nonlinearity and properly using the alternating direction technique, an efficient and spectrally accurate numerical method for computing the dynamics of rotating Bose-Einstein condensates (BEC) in two

  9. Fast Computational Methods for Reservoir Flow Models

    E-print Network

    Chen, Teng; Gewecke, Nicholas; Li, Zhen; Rubiano, Andrea; Shuttleworth, Robert; Yang, Bo; Zhong, Xinghui. August 23, 2009. Numerical reservoir

  10. A finite element method for the computation of transonic flow past airfoils

    NASA Technical Reports Server (NTRS)

    Eberle, A.

    1980-01-01

    A finite element method for the computation of the transonic flow with shocks past airfoils is presented using the artificial viscosity concept for the local supersonic regime. Generally, the classic element types do not meet the accuracy requirements of advanced numerical aerodynamics requiring special attention to the choice of an appropriate element. A series of computed pressure distributions exhibits the usefulness of the method.

  11. COMPUTER MODELS AND METHODS FOR A DISABLED ACCESS ANALYSIS DESIGN ENVIRONMENT

    E-print Network

    Stanford University

    A dissertation. This research develops computer models and methods providing designers with analysis of usability constraints under a disabled access code, the Americans with Disabilities Act Accessibility Guidelines (ADAAG). The research

  12. Progress Towards Computational Method for Circulation Control Airfoils

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rumsey, C. L.; Anders, S. G.

    2005-01-01

    The compressible Reynolds-averaged Navier-Stokes equations are solved for circulation control airfoil flows. Numerical solutions are computed with both structured and unstructured grid solvers. Several turbulence models are considered, including the Spalart-Allmaras model with and without curvature corrections, the shear stress transport model of Menter, and the k-enstrophy model. Circulation control flows with jet momentum coefficients of 0.03, 0.10, and 0.226 are considered. Comparisons are made between computed and experimental pressure distributions, velocity profiles, Reynolds stress profiles, and streamline patterns. Including curvature effects yields the closest agreement with the measured data.

  13. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  14. The ijk forms of factorization methods. I - Vector computers

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1988-01-01

    This paper gives a detailed exposition of the 'ijk forms' of LU and Choleski factorization. Several aspects of these different organizations are discussed, and their properties on vector computers are compared. Extensions of the ijk formalism to other algorithms are also given.
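
    Two of the forms, rendered as array code for concreteness (illustrative sketches of the loop orderings, without pivoting, not the paper's implementations):

```python
import numpy as np

def lu_kij(A):
    # "kij" (outer-product) form of LU without pivoting: at step k the
    # trailing submatrix is updated by a rank-1 outer product.
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        A[k+1:, k] /= A[k, k]                      # multipliers (column of L)
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return A                                        # L (unit lower) and U packed

def lu_jki(A):
    # "jki" (column-oriented) form: column j absorbs the updates from all
    # previous columns before its own multipliers are formed.
    A = A.astype(float).copy()
    n = A.shape[0]
    for j in range(n):
        for k in range(j):
            A[k+1:, j] -= A[k+1:, k] * A[k, j]
        if j < n - 1:
            A[j+1:, j] /= A[j, j]
    return A
```

    The kij form updates the trailing submatrix with a rank-1 outer product at each step, which vectorizes naturally over long rows; the jki form touches one column at a time, which suits column-major storage.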

  15. Simple computer method provides contours for radiological images

    NASA Technical Reports Server (NTRS)

    Newell, J. D.; Keller, R. A.; Baily, N. A.

    1975-01-01

    The computer is provided with information concerning boundaries in the total image. The gradient of each point in the digitized image is calculated with the aid of a threshold technique; then a set of algorithms is invoked, designed to reduce the number of gradient elements and to retain only the major ones for definition of the contour.
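
    In modern array terms the gradient-plus-threshold step is brief; a rough sketch (central differences stand in for whatever gradient operator was used, and the element-reduction algorithms are omitted):

```python
import numpy as np

def contour_points(image, threshold):
    # Central-difference gradient magnitude; points whose gradient exceeds
    # the threshold are kept as candidate contour elements.
    gy, gx = np.gradient(image.astype(float))
    return np.argwhere(np.hypot(gx, gy) > threshold)
```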

  16. New Methods of Mobile Computing: From Smartphones to Smart Education

    ERIC Educational Resources Information Center

    Sykes, Edward R.

    2014-01-01

    Every aspect of our daily lives has been touched by the ubiquitous nature of mobile devices. We have experienced an exponential growth of mobile computing--a trend that seems to have no limit. This paper provides a report on the findings of a recent offering of an iPhone Application Development course at Sheridan College, Ontario, Canada. It…

  17. Computational Methods for Atmospheric Science, ATS607 Colorado State University

    E-print Network

    Collett Jr., Jeffrey L.

    ://pierce.atmos.colostate.edu. Office hours: during the lab classes or by appointment. Teaching assistant: Landan Macdonald. The course gives Atmospheric Science graduate students a range of computer-programming skills to enhance their research. Topics (not necessarily in this order): basic syntax; plotting; file input/output (text, Excel, NetCDF); plotting over maps

  18. Behavior Research Methods, Instruments, and Computers

    E-print Network

    Duchowski, Andrew T.

    Eye tracking applications are surveyed in the following domains: neuroscience, psychology, industrial engineering and human factors, and marketing. Early work was associated with the behaviorist movement in experimental psychology; a later period (ca. 1970-1998) was marked by improvements in eye movement recording during experiments. Bolstered by advancements in computational power and richness of graphical displays

  19. [Computation method for optimization of recipes for protein content].

    PubMed

    Kovalev, N I; Karzeva, N J; Fiterer, V O

    1987-01-01

    The authors propose a calculated protein utilization coefficient. This coefficient considers the difference between the utilization rates of the proteins contained in the mixture and their amino-acid composition. The proposed formula allows calculation by computer. The data obtained show high correlations with the results obtained by biological tests with Tetrahymena cultures. PMID:3431579

  20. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    ERIC Educational Resources Information Center

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  1. Computed radiography imaging plates and associated methods of manufacture

    DOEpatents

    Henry, Nathaniel F.; Moses, Alex K.

    2015-08-18

    Computed radiography imaging plates incorporating an intensifying material that is coupled to or intermixed with the phosphor layer, allowing electrons and/or low energy x-rays to impart their energy on the phosphor layer, while decreasing internal scattering and increasing resolution. The radiation needed to perform radiography can also be reduced as a result.

  2. Small Scale Distance Education; "The Personal (Computer) Touch"; Tutorial Methods for TMA's Using a Computer.

    ERIC Educational Resources Information Center

    Fritsch, Helmut; And Others

    1989-01-01

    The authors present reports of current research on distance education at the FernUniversitat in West Germany. Fritsch discusses adapting distance education techniques for small classes. Kuffner describes procedures for providing feedback to students using personalized computer-generated letters. Klute discusses using a computer with tutorial…

  3. Fast computation method for a Fresnel hologram using three-dimensional affine transformations in real space.

    PubMed

    Sakata, Hironobu; Sakamoto, Yuji

    2009-12-01

    Calculating computer-generated holograms takes a tremendous amount of computation time. We propose a fast method for calculating object lights for Fresnel holograms without the use of a Fourier transform. This method generates object lights of variously shaped patches from a basic object light for a fixed-shape patch by using three-dimensional affine transforms. It can thus calculate holograms that display complex objects including patches of various shapes. Computer simulations and optical experiments demonstrate the effectiveness of this method. The results show that it performs twice as fast as a method that uses a Fourier transform. PMID:19956293

  4. Shock Capturing, Level Sets and PDE Based Methods in Computer Vision and Image Processing: A Review of Osher's Contributions

    E-print Network

    Ferguson, Thomas S.

    Shock capturing, level sets, and PDE-based methods in computer vision and image processing: a review of Osher's contributions ... discontinuous solutions to Hamilton-Jacobi equations ... in image processing and computer vision. Key words: shock capturing method, level set method, computer ...

  5. Application of traditional CFD methods to nonlinear computational aeroacoustics problems

    NASA Technical Reports Server (NTRS)

    Chyczewski, Thomas S.; Long, Lyle N.

    1995-01-01

    This paper describes an implementation of a high order finite difference technique and its application to the category 2 problems of the ICASE/LaRC Workshop on Computational Aeroacoustics (CAA). Essentially, a popular Computational Fluid Dynamics (CFD) approach (central differencing, Runge-Kutta time integration and artificial dissipation) is modified to handle aeroacoustic problems. The changes include increasing the order of the spatial differencing to sixth order and modifying the artificial dissipation so that it does not significantly contaminate the wave solution. All of the results were obtained from the CM5 located at the Numerical Aerodynamic Simulation Laboratory. It was coded in CMFortran (very similar to HPF), using programming techniques developed for communication intensive large stencils, and ran very efficiently.
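
    For context, a sixth-order central first derivative of the kind mentioned above uses a seven-point stencil with weights (-1, 9, -45, 0, 45, -9, 1)/60h. A minimal sketch (mine, not the workshop code; boundary points are left untouched):

      import numpy as np

      def d1_sixth_order(f, h):
          # Sixth-order central first derivative at interior points; the three
          # points on each boundary would need one-sided closures in practice.
          df = np.zeros_like(f, dtype=float)
          df[3:-3] = (-f[:-6] + 9 * f[1:-5] - 45 * f[2:-4]
                      + 45 * f[4:-2] - 9 * f[5:-1] + f[6:]) / (60.0 * h)
          return df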

  6. Computational Methods for the Analysis of Array Comparative Genomic Hybridization

    PubMed Central

    Chari, Raj; Lockwood, William W.; Lam, Wan L.

    2006-01-01

    Array comparative genomic hybridization (array CGH) is a technique for assaying the copy number status of cancer genomes. The widespread use of this technology has led to a rapid accumulation of high throughput data, which in turn has prompted the development of computational strategies for the analysis of array CGH data. Here we explain the principles behind array image processing, data visualization and genomic profile analysis, review currently available software packages, and raise considerations for future software development. PMID:17992253

  7. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    SciTech Connect

    Alex J. Dragt

    2012-08-31

    Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  8. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    SciTech Connect

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria; Zhou, Ning; Huang, Zhenyu

    2012-07-26

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases better than, that of the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.

  9. Frequency response modeling and control of flexible structures: Computational methods

    NASA Technical Reports Server (NTRS)

    Bennett, William H.

    1989-01-01

    The dynamics of vibrations in flexible structures can be conveniently modeled in terms of frequency response models. For structural control such models capture the distributed parameter dynamics of the elastic structural response as an irrational transfer function. For most flexible structures arising in aerospace applications the irrational transfer functions which arise are of a special class of pseudo-meromorphic functions which have only a finite number of right half plane poles. Computational algorithms are demonstrated for design of multiloop control laws for such models based on optimal Wiener-Hopf control of the frequency responses. The algorithms employ a sampled-data representation of irrational transfer functions which is particularly attractive for numerical computation. One key algorithm for the solution of the optimal control problem is the spectral factorization of an irrational transfer function. The basis for the spectral factorization algorithm is highlighted together with associated computational issues arising in optimal regulator design. Options for implementation of wide band vibration control for flexible structures based on the sampled-data frequency response models are also highlighted. A simple flexible structure control example is considered to demonstrate the combined frequency response modeling and control algorithms.

  10. Computational Systems Biology in Cancer: Modeling Methods and Applications

    PubMed Central

    Materi, Wayne; Wishart, David S.

    2007-01-01

    In recent years it has become clear that carcinogenesis is a complex process, both at the molecular and cellular levels. Understanding the origins, growth and spread of cancer therefore requires an integrated or system-wide approach. Computational systems biology is an emerging sub-discipline in systems biology that utilizes the wealth of data from genomic, proteomic and metabolomic studies to build computer simulations of intra- and intercellular processes. Several useful descriptive and predictive models of the origin, growth and spread of cancers have been developed in an effort to better understand the disease and potential therapeutic approaches. In this review we describe and assess the practical and theoretical underpinnings of commonly-used modeling approaches, including ordinary and partial differential equations, Petri nets, cellular automata, agent-based models and hybrid systems. A number of computer-based formalisms have been implemented to improve the accessibility of the various approaches to researchers whose primary interest lies outside of model development. We discuss several of these and describe how they have led to novel insights into tumor genesis, growth, apoptosis, vascularization and therapy. PMID:19936081

  11. Analysis of multigrid methods on massively parallel computers: Architectural implications

    NASA Technical Reports Server (NTRS)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10(exp 6) and 10(exp 9), respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
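
    For readers unfamiliar with the algorithm being modeled, a compact one-dimensional V-cycle looks like the following sketch (a generic illustration under simplifying assumptions, not the authors' parallel implementation):

      import numpy as np

      def v_cycle(u, f, h, n_smooth=3):
          # One V-cycle for -u'' = f on a uniform grid with Dirichlet values
          # held in u[0] and u[-1]; grid size should be 2**k + 1.
          def smooth(u, sweeps):
              for _ in range(sweeps):   # weighted-Jacobi smoothing, omega = 2/3
                  u[1:-1] += (2.0 / 3.0) * (
                      0.5 * (u[2:] + u[:-2] + h * h * f[1:-1]) - u[1:-1])
              return u

          u = smooth(u, n_smooth)
          if u.size <= 3:
              return u
          r = np.zeros_like(u)          # residual r = f - A u
          r[1:-1] = f[1:-1] + (u[2:] - 2 * u[1:-1] + u[:-2]) / (h * h)
          e_coarse = v_cycle(np.zeros_like(r[::2]), r[::2].copy(), 2 * h, n_smooth)
          e = np.zeros_like(u)          # prolong correction by linear interpolation
          e[::2] = e_coarse
          e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
          return smooth(u + e, n_smooth)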

  12. Computational Method for Electrical Potential and Other Field Problems

    ERIC Educational Resources Information Center

    Hastings, David A.

    1975-01-01

    Proposes the finite differences relaxation method as a teaching tool in secondary and university level courses discussing electrical potential, temperature distribution in a region, and similar problems. Outlines the theory and operating procedures of the method, and discusses examples of teaching applications, including possible laboratory…
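
    The method is easy to demonstrate in a few lines; a sketch of Jacobi relaxation for Laplace's equation follows (an illustration of the general technique, not the article's own materials):

      import numpy as np

      def relax_potential(v, fixed, n_iter=500):
          # Repeatedly replace every free point by the average of its four
          # neighbours; `fixed` is a boolean mask of electrodes and boundary
          # cells whose potential is held constant (include the outer frame in
          # `fixed` so the np.roll wrap-around never influences the result).
          v = v.astype(float).copy()
          for _ in range(n_iter):
              avg = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                            np.roll(v, 1, 1) + np.roll(v, -1, 1))
              v = np.where(fixed, v, avg)
          return v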

  13. Computing the effective diffusivity using a spectral method

    E-print Network

    2001-05-03

    equation using a Fourier–Chebyshev spectral method. ... [3]. In the finite difference or finite element methods, the local property of a material is assumed ... in a microstructure by solving the time-dependent diffusion ... therefore the fact that grain boundary diffusivity Dgb is ... (∇·D(r)∇) is spectrally equivalent to the Laplacian operator.

  14. Computation of separated flow on a ramp using the space marching conservative supra-characteristics method

    NASA Technical Reports Server (NTRS)

    Stookesberry, D. C.; Tannehill, J. C.

    1986-01-01

    Steady, hypersonic viscous flows over compression corners with streamwise separation have been computed using the space marching Conservative Supra-Characteristics Method (CSCM-S) of Lombard. The CSCM-S method permits stable space marching of the parabolized Navier-Stokes (PNS) equations through large separated flow regions. The present method has been used to compute surface pressure, heat transfer, and skin friction coefficients for two compression corner cases studied experimentally by Holden and Moselle. The computed results compare favorably with previous Navier-Stokes results and the experimental data. The present method has also been compared with the conventional Beam-Warming scheme for solving the PNS equations. Comparisons are made for accuracy, computer time, and computer storage.

  15. BindingDB in 2015: A public database for medicinal chemistry, computational chemistry and systems pharmacology.

    PubMed

    Gilson, Michael K; Liu, Tiqing; Baitaluk, Michael; Nicola, George; Hwang, Linda; Chong, Jenny

    2016-01-01

    BindingDB, www.bindingdb.org, is a publicly accessible database of experimental protein-small molecule interaction data. Its collection of over a million data entries derives primarily from scientific articles and, increasingly, US patents. BindingDB provides many ways to browse and search for data of interest, including an advanced search tool, which can cross searches of multiple query types, including text, chemical structure, protein sequence and numerical affinities. The PDB and PubMed provide links to data in BindingDB, and vice versa; and BindingDB provides links to pathway information, the ZINC catalog of available compounds, and other resources. The BindingDB website offers specialized tools that take advantage of its large data collection, including ones to generate hypotheses for the protein targets bound by a bioactive compound, and for the compounds bound by a new protein of known sequence; and virtual compound screening by maximal chemical similarity, binary kernel discrimination, and support vector machine methods. Specialized data sets are also available, such as binding data for hundreds of congeneric series of ligands, drawn from BindingDB and organized for use in validating drug design methods. BindingDB offers several forms of programmatic access, and comes with extensive background material and documentation. Here, we provide the first update of BindingDB since 2007, focusing on new and unique features and highlighting directions of importance to the field as a whole. PMID:26481362

  16. BindingDB in 2015: A public database for medicinal chemistry, computational chemistry and systems pharmacology

    PubMed Central

    Gilson, Michael K.; Liu, Tiqing; Baitaluk, Michael; Nicola, George; Hwang, Linda; Chong, Jenny

    2016-01-01

    BindingDB, www.bindingdb.org, is a publicly accessible database of experimental protein-small molecule interaction data. Its collection of over a million data entries derives primarily from scientific articles and, increasingly, US patents. BindingDB provides many ways to browse and search for data of interest, including an advanced search tool, which can cross searches of multiple query types, including text, chemical structure, protein sequence and numerical affinities. The PDB and PubMed provide links to data in BindingDB, and vice versa; and BindingDB provides links to pathway information, the ZINC catalog of available compounds, and other resources. The BindingDB website offers specialized tools that take advantage of its large data collection, including ones to generate hypotheses for the protein targets bound by a bioactive compound, and for the compounds bound by a new protein of known sequence; and virtual compound screening by maximal chemical similarity, binary kernel discrimination, and support vector machine methods. Specialized data sets are also available, such as binding data for hundreds of congeneric series of ligands, drawn from BindingDB and organized for use in validating drug design methods. BindingDB offers several forms of programmatic access, and comes with extensive background material and documentation. Here, we provide the first update of BindingDB since 2007, focusing on new and unique features and highlighting directions of importance to the field as a whole. PMID:26481362

  17. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results are included.

  18. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions

    SciTech Connect

    Dragt, A.J.; Gluckstern, R.L.

    1990-11-01

    The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high-frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides.

  19. A BOTTOM VELOCITY COMPUTATION METHOD FOR ESTIMATING BED VARIATION IN A CHANNEL WITH SUBMERGED GROINS

    NASA Astrophysics Data System (ADS)

    Uchida, Tatsuhiko; Fukuoka, Shoji

    A practical numerical bed variation method is required for evaluating various functions of groins installed in rivers. The conventional depth integrated (2D) computation method, which has been used for practical computations of flood flows and bed variations in rivers, is inadequate for local scouring around hydraulic structures due to 3D flow. However, the use of full 3D turbulence flow computation methods is not common for floods in rivers. Therefore, a refined depth integrated model is required for practical use. In this paper, the Bottom Velocity Computation (BVC) method is proposed to evaluate the velocity acting on sediment particles effectively and compute bed variation in a channel. In the BVC method, depth-integrated horizontal vorticity and water surface velocity equations are computed simultaneously with shallow water equations and a depth averaged turbulence energy transport equation. To compute bed variation around submerged groins, evaluation methods for non-equilibrium bed load and bed tractive force using bottom velocity are presented. The applicability of the method is discussed through comparisons with the laboratory experimental results of flows and bed variations in a channel with submerged groins.

  20. Computational Methods for Continuum Models of Platelet Aggregation

    NASA Astrophysics Data System (ADS)

    Wang, Nien-Tzu; Fogelson, Aaron L.

    1999-05-01

    Platelet aggregation plays an important role in blood clotting. Robust numerical methods for simulating the behavior of Fogelson's continuum models of platelet aggregation have been developed which in particular involve a hybrid finite-difference and spectral method for the models' link evolution equation. This partial differential equation involves four spatial dimensions and time. The new methods are used to begin investigating the influence of new chemically induced activation, link formation, and shear-induced link breaking in determining when aggregates develop sufficient strength to remain intact and when they are broken apart by fluid stresses.

  1. Computational methods for high-throughput pooled genetic experiments

    E-print Network

    Edwards, Matthew Douglas

    2011-01-01

    Advances in high-throughput DNA sequencing have created new avenues of attack for classical genetics problems. This thesis develops and applies principled methods for analyzing DNA sequencing data from multiple pools of ...

  2. Comparison of computation methods for CBM production performance 

    E-print Network

    Mora, Carlos A.

    2009-06-02

    methane production is somewhat complicated and has led to numerous methods of approximating production performance. Many CBM reservoirs go through a dewatering period before significant gas production occurs. With dewatering, desorption of gas...

  3. Computational Intelligence Methods For Rule-Based Data Understanding

    E-print Network

    Setiono, Rudy

    in statistics, linear discrimination methods, support vector machines, and multilayered perceptron (MLP) neural networks ... symbolic knowledge and refine the resulting knowledge-based expert systems ... Black-box statistical ... unacceptable risks for medical, industrial, and financial applications. Reasoning with logical rules ...

  4. Introduction to Computational Methods for Stability and Control (COMSAC)

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Fremaux, C. Michael; Chambers, Joseph R.

    2004-01-01

    This Symposium is intended to bring together the often distinct cultures of the Stability and Control (S&C) community and the Computational Fluid Dynamics (CFD) community. The COMSAC program is itself a new effort by NASA Langley to accelerate the application of high end CFD methodologies to the demanding job of predicting stability and control characteristics of aircraft. This talk is intended to set the stage for needing a program like COMSAC. It is not intended to give details of the program itself. The topics include: 1) S&C Challenges; 2) Aero prediction methodology; 3) CFD applications; 4) NASA COMSAC planning; 5) Objectives of symposium; and 6) Closing remarks.

  5. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide-field imaging interferometry. For each point in a two-dimensional detector array over the field of view of an image, the method gathers a first interferogram from a first detector and a second interferogram from a second detector, modulates a path-length for a signal from the image associated with the first interferogram in the first detector, overlays first data from the modulated first detector and second data from the second detector, and tracks the modulation at every point of the detector array comprising the two detectors. The method then generates a wide-field data cube based on the overlaid first and second data for each point, and can generate an image from the wide-field data cube.

  6. The Voronoi Implicit Interface Method for computing multiphase physics

    PubMed Central

    Saye, Robert I.; Sethian, James A.

    2011-01-01

    We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces. PMID:22106269

  7. A new method to compute standard-weight equations that reduces length-related bias

    USGS Publications Warehouse

    Gerow, K.G.; Anderson-Sprecher, R. C.; Hubert, W.A.

    2005-01-01

    We propose a new method for developing standard-weight (Ws) equations for use in the computation of relative weight (Wr) because the regression line-percentile (RLP) method often leads to length-related biases in Ws equations. We studied the structural properties of Ws equations developed by the RLP method through simulations, identified reasons for biases, and compared Ws equations computed by the RLP method and the new method. The new method is similar to the RLP method but is based on means of measured weights rather than on means of weights predicted from regression models. The new method also models curvilinear Ws relationships not accounted for by the RLP method. For some length-classes in some species, the relative weights computed from Ws equations developed by the new method were more than 20 Wr units different from those using Ws equations developed by the RLP method. We recommend assessment of published Ws equations developed by the RLP method for length-related bias and use of the new method for computing new Ws equations when bias is identified. © Copyright by the American Fisheries Society 2005.

  8. Simple and fast cosine approximation method for computer-generated hologram calculation.

    PubMed

    Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ito, Tomoyoshi

    2015-12-14

    The cosine function is a heavy computational operation in computer-generated hologram (CGH) calculation; therefore, it is usually implemented by substitution methods such as a look-up table. However, the computational load and required memory space of such methods are still large. In this study, we propose a simple and fast cosine function approximation method for CGH calculation. As a result, we succeeded in creating CGHs of sufficient quality while making the calculation up to 1.6 times faster than a look-up-table implementation of the cosine function on a CPU. PMID:26699035
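
    The paper's exact approximation is not reproduced in the abstract; as an indication of the general idea, one widely used low-order polynomial approximation looks like this (an assumption for illustration, not the authors' formula):

      import math

      def fast_cos(x):
          # cos(x) = sin(x + pi/2); range-reduce, then apply the classic
          # parabolic sine approximation plus one refinement pass
          # (absolute error roughly 1e-3, far cheaper than large tables).
          x = x + math.pi / 2.0
          x = (x + math.pi) % (2.0 * math.pi) - math.pi   # reduce to [-pi, pi)
          y = (4.0 / math.pi) * x - (4.0 / math.pi ** 2) * x * abs(x)
          return 0.225 * (y * abs(y) - y) + y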

  9. Methods of legitimation: how ethics committees decide which reasons count in public policy decision-making.

    PubMed

    Edwards, Kyle T

    2014-07-01

    In recent years, liberal democratic societies have struggled with the question of how best to balance expertise and democratic participation in the regulation of emerging technologies. This study aims to explain how national deliberative ethics committees handle the practical tension between scientific expertise, ethical expertise, expert patient input, and lay public input by explaining two institutions' processes for determining the legitimacy or illegitimacy of reasons in public policy decision-making: that of the United Kingdom's Human Fertilisation and Embryology Authority (HFEA) and the United States' American Society for Reproductive Medicine (ASRM). The articulation of these 'methods of legitimation' draws on 13 in-depth interviews with HFEA and ASRM members and staff conducted in January and February 2012 in London and over Skype, as well as observation of an HFEA deliberation. This study finds that these two institutions employ different methods in rendering certain arguments legitimate and others illegitimate: while the HFEA attempts to 'balance' competing reasons but ultimately legitimizes arguments based on health and welfare concerns, the ASRM seeks to 'filter' out arguments that challenge reproductive autonomy. The notably different structures and missions of each institution may explain these divergent approaches, as may what Sheila Jasanoff (2005) terms the distinctive 'civic epistemologies' of the US and the UK. Significantly for policy makers designing such deliberative committees, each method differs substantially from that explicitly or implicitly endorsed by the institution. PMID:24833251

  10. A Survey of LR-Parsing Methods The Graph Method For Computing Fixed Points

    E-print Network

    Gallier, Jean

    with a brief presentation of LR(1) parsers. 1. LR(0)-Characteristic Automata. The purpose of ... to obtain an SLR(k) or an LALR(k) parser from an LR(0) parser is the computation of lookahead sets ... FIRST, FOLLOW, and LALR(1) lookahead sets.

  11. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
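
    The abstract does not spell out the solver; for orientation, a standard iterative scheme for the continuous algebraic Riccati equation is the Newton-Kleinman iteration, sketched below under the assumption that an initial stabilizing gain K0 is available (not necessarily the report's method):

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      def newton_kleinman(A, B, Q, R, K0, tol=1e-10, max_iter=50):
          # Solves A'P + PA - P B R^{-1} B' P + Q = 0 by repeated Lyapunov
          # solves; K0 must stabilize A - B @ K0.
          K, P_old = K0, None
          for _ in range(max_iter):
              Acl = A - B @ K
              # Lyapunov step: Acl' P + P Acl = -(Q + K' R K)
              P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
              K = np.linalg.solve(R, B.T @ P)          # updated feedback gain
              if P_old is not None and np.linalg.norm(P - P_old) < tol:
                  break
              P_old = P
          return P, K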

  12. Leading Computational Methods on Scalar and Vector HEC Platforms

    SciTech Connect

    Oliker, Leonid; Carter, Jonathan; Wehner, Michael; Canning, Andrew; Ethier, Stephane; Mirin, Arthur; Bala, Govindasamy; Parks, David; Worley, Patrick H; Kitawaki, Shigemune; Tsuda, Yoshinori

    2005-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems have the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: atmospheric modeling (CAM), magnetic fusion (GTC), plasma physics (LBMHD3D), and material science (PARATEC). We compare the performance of three leading commodity-based superscalar platforms, utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors, with modern parallel vector systems: the Cray X1, Earth Simulator (ES), and the NEC SX-8; additionally, we examine the performance of CAM on the recently-released Cray X1E. Our research team was the first international group to conduct a performance evaluation study at the Earth Simulator Center; remote ES access is not available. Our work builds on our previous efforts [16, 17] and makes several significant contributions: the first reported vector performance results for CAM simulations utilizing a finite-volume dynamical core on a high-resolution atmospheric grid; a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26 Tflop/s on 4800 ES processors; and the largest PARATEC cell size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.

  13. Private and Public Sector Enterprise Resource Planning System Post-Implementation Practices: A Comparative Mixed Method Investigation

    ERIC Educational Resources Information Center

    Bachman, Charles A.

    2010-01-01

    While private sector organizations have implemented enterprise resource planning (ERP) systems since the mid 1990s, ERP implementations within the public sector lagged by several years. This research conducted a mixed method, comparative assessment of post "go-live" ERP implementations between public and private sector organization. Based on a…

  14. Methods Used to Assess the Susceptibility to Contamination of Transient, Non-Community Public Ground-Water Supplies in Indiana

    USGS Publications Warehouse

    Arihood, Leslie D.; Cohen, David A.

    2006-01-01

    The Safe Drinking Water Act of 1974, as amended in 1996, gave each State the responsibility of developing a Source-Water Assessment Plan (SWAP) that is designed to protect public-water supplies from contamination. Each SWAP must include three elements: (1) a delineation of the source-water protection area, (2) an inventory of potential sources of contaminants within the area, and (3) a determination of the susceptibility of the public-water supply to contamination from the inventoried sources. The Indiana Department of Environmental Management (IDEM) was responsible for preparing a SWAP for all public-water supplies in Indiana, including about 2,400 small public ground-water supplies that are designated transient, non-community (TNC) supplies. In cooperation with IDEM, the U.S. Geological Survey compiled information on conditions near the TNC supplies and helped IDEM complete source-water assessments for each TNC supply. The delineation of a source-water protection area (called the assessment area) for each TNC ground-water supply was defined by IDEM as a circular area enclosed by a 300-foot radius centered at the TNC supply well. Contaminants of concern (COCs) were defined by IDEM as any of the 90 contaminants for which the U.S. Environmental Protection Agency has established primary drinking-water standards. Two of these, nitrate as nitrogen and total coliform bacteria, are Indiana State-regulated contaminants for TNC water supplies. IDEM representatives identified potential point and nonpoint sources of COCs within the assessment area, and computer database retrievals were used to identify potential point sources of COCs in the area outside the assessment area. Two types of methods, subjective and subjective hybrid, were used in the SWAP to determine susceptibility to contamination. Subjective methods involve decisions based upon professional judgment, prior experience, and (or) the application of a fundamental understanding of processes without the collection and analysis of data for a specific condition. Subjective hybrid methods combine subjective methods with quantitative hydrologic analyses. The subjective methods included an inventory of potential sources and associated contaminants, and a qualitative description of the inherent susceptibility of the area around the TNC supply. The description relies on a classification of the hydrogeologic and geomorphic characteristics of the general area around the TNC supply in terms of its surficial geology, regional aquifer system, the occurrence of fine- and coarse-grained geologic materials above the screen of the TNC well, and the potential for infiltration of contaminants. The subjective hybrid method combined the results of a logistic regression analysis with a subjective analysis of susceptibility and a subjective set of definitions that classify the thickness of fine-grained geologic materials above the screen of a TNC well in terms of impedance to vertical flow. The logistic regression determined the probability of elevated concentrations of nitrate as nitrogen (greater than or equal to 3 milligrams per liter) in ground water associated with specific thicknesses of fine-grained geologic materials above the screen of a TNC well. In this report, fine-grained geologic materials are referred to as a geologic barrier that generally impedes vertical flow through an aquifer.
A geologic barrier was defined to be thin for fine-grained materials between 0 and 45 feet thick, moderate for materials between 45 and 75 feet thick, and thick if the fine-grained materials were greater than 75 feet thick. A flow chart was used to determine the susceptibility rating for each TNC supply. The flow chart indicated a susceptibility rating using (1) concentrations of nitrate as nitrogen and total coliform bacteria reported from routine compliance monitoring of the TNC supply, (2) the presence or absence of potential sources of regulated contaminants (nitrate as nitrogen and coliform bacteria) ...

  15. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  16. Computer capillaroscopy as a new cardiological diagnostics method

    NASA Astrophysics Data System (ADS)

    Gurfinkel, Youri I.; Korol, Oleg A.; Kufal, George E.

    1998-04-01

    The blood flow in capillary vessels plays an important role in sustaining the vital activity of the human organism. The computerized capillaroscope is used for the investigation of nailfold (eponychium) capillary blood flow. An important advantage of the instrument is the possibility of performing non-invasive investigations, i.e., without damage to skin or vessels and causing no pain or unpleasant sensations. The high-class equipment and software allow direct observation of capillary blood flow dynamics on a computer screen at a 700-1300 times magnification. For the first time in clinical practice, it has become possible to precisely measure the speed of capillary blood flow, as well as the frequency of aggregate formation (blood particles glued together into clots). In addition, provision is made for automatic measurement of capillary size and wall thickness and automatic recording of blood aggregate images for further visual study, documentation, and electronic database management.

  17. Computation of Spectroscopic Factors with the Coupled-Cluster Method

    SciTech Connect

    Jensen, O.; Hagen, Gaute; Papenbrock, T.; Dean, David Jarvis; Vaagen, J. S.

    2010-01-01

    We present a calculation of spectroscopic factors within coupled-cluster theory. Our derivation of algebraic equations for the one-body overlap functions is based on coupled-cluster equation-of-motion solutions for the ground and excited states of the doubly magic nucleus with mass number A and the odd-mass neighbor with mass A-1. As a proof-of-principle calculation, we consider ^{16}O and the odd neighbors ^{15}O and ^{15}N, and compute the spectroscopic factor for nucleon removal from ^{16}O. We employ a renormalized low-momentum interaction of the V_{low-k} type derived from a chiral interaction at next-to-next-to-next-to-leading order. We study the sensitivity of our results by variation of the momentum cutoff, and then discuss the treatment of the center of mass.

  18. Eliciting road traffic injuries cost among Iranian drivers’ public vehicles using willingness to pay method

    PubMed Central

    Ainy, Elaheh; Soori, Hamid; Ganjali, Mojtaba; Baghfalaki, Taban

    2015-01-01

    Background and Aim: To allocate resources at the national level and to ensure road safety with economic efficiency in mind, cost calculation can help determine the size of the problem and demonstrate the economic benefits of preventing such injuries. This study was carried out to elicit the cost of traffic injuries among Iranian drivers of public vehicles. Materials and Methods: In a cross-sectional study, 410 drivers of public vehicles were randomly selected from all such drivers in the city of Tehran, Iran. The research questionnaire was prepared based on the standard willingness to pay (WTP) method (stated preference (SP), contingent valuation (CV), and revealed preference (RP) models). Data were collected along with a scenario for vehicle drivers. Inclusion criteria were having at least a high school education and being in the age range of 18 to 65 years old. Final analysis of willingness to pay was carried out using the Weibull model. Results: Mean WTP was 3,337,130 IRR among drivers of public vehicles. The statistical value of life was estimated at 118,222,552,601,648 IRR based on 4,694 driver deaths, which was equivalent to 3,940,751,753 $ at the free-market rate of 30,000 IRR per dollar (purchasing power parity). Injury cost was 108,376,366,437,500 IRR, equivalent to 3,612,545,548 $. In total, injury and death costs came to 226,606,472,346,449 IRR, equivalent to 7,553,549,078 $. Moreover, in 2013 the cost of traffic injuries among the drivers of public vehicles constituted 1.25% of gross national income, which was 604,300,000,000 $. WTP had a significant relationship with gender, daily payment, more payment for time reduction, more pay for less traffic, and minibus drivers. Conclusion: The cost of traffic injuries among drivers of public vehicles amounted to 1.25% of gross national income, which is considerable; minibus drivers had less perception of risk reduction than others. PMID:26157655

  19. Method and Apparatus for Computed Imaging Backscatter Radiography

    NASA Technical Reports Server (NTRS)

    Shedlock, Daniel (Inventor); Meng, Christopher (Inventor); Sabri, Nissia (Inventor); Dugan, Edward T. (Inventor); Jacobs, Alan M. (Inventor)

    2013-01-01

    Systems and methods of x-ray backscatter radiography are provided. A single-sided, non-destructive imaging technique utilizing x-ray radiation to image subsurface features is disclosed, capable of scanning a region using a fan beam aperture and gathering data using rotational motion.

  20. Limitations of the current methods used to compute meteors orbits

    NASA Astrophysics Data System (ADS)

    Egal, A.; Gural, P.; Vaubaillon, J.; Colas, F.; Thuillot, W.

    2015-10-01

    The Cameras for BEtter Resolution NETwork (CABERNET) project aims to provide the most accurate meteoroid orbits achievable working with digital recordings of night sky imagery. The level of performance obtained is governed by the technical attributes of the collection systems and by having both accurate and robust data processing. The technical challenges have been met by employing three cameras, each with a field of view of 40°x26° and a spatial (angular) resolution of 0.01°/pixel. The single image snapshots of meteors achieve temporal discrimination along the track through the use of an electronic shutter coupled to the cameras, operating at a sample rate between 100Hz and 200Hz. The numerical processing of meteor trajectories has already been explored by many authors. This included an examination of the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-fit parameterization method published by Gural (2012). After a comparison of these three techniques, we chose to implement Gural's method, employing several non-linear minimization techniques and trying to match the modeling as closely as possible to the basic data measured, i.e. the meteor space-time positions in the sequence of images. This approach results in a more precise and reliable way to determine both the meteor trajectory and velocity through the atmosphere.

  1. International Conference on Computational Methods Marine Engineering MARINE 2005

    E-print Network

    Löhner, Rainald

    Keywords: projection schemes, VOF, level set, FEM, CFD. Abstract: A Volume of Fluid (VOF) technique has been developed and implemented. The VOF technique is validated against the classic dam-break problem, as well as a series of 2-D cases ... the present CFD method is capable of simulating violent free surface flows with strong nonlinear behaviour.

  2. Submitted to Comput. Methods Appl. Mech. Eng. AN OVERVIEW OF ...

    E-print Network

    2005-02-17

    into three classes, namely the pressure-correction methods, the ... an open, connected, bounded domain Ω ⊂ R^d (d = 2 or 3) with a sufficiently smooth boundary ∂Ω ... and solving the projection step as a weak Poisson problem, the discrete field u_h^{k+1} ... we have performed convergence tests using P2/P1 finite elements.

  3. Using Computers in Relation to Learning Climate in CLIL Method

    ERIC Educational Resources Information Center

    Binterová, Helena; Komínková, Olga

    2013-01-01

    The main purpose of the work is to present a successful implementation of CLIL method in Mathematics lessons in elementary schools. Nowadays at all types of schools (elementary schools, high schools and universities) all over the world every school subject tends to be taught in a foreign language. In 2003, a document called Action plan for…

  4. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in the digital processing of two-dimensional computed tomography images is to identify the contours of the component elements. This paper presents the joint work of specialists in medicine, applied mathematics, and computer science on developing new algorithms and methods in medical 2D and 3D imagery.

  5. A Boundary Integral Method for Computing Elastic Moment Tensors for Ellipses and Ellipsoids

    E-print Network

    Kang, Hyeonbae

    elastic composites. In this paper, we compute the elastic moment tensors for ellipses and ellipsoids ... of perturbations of the elastic energy [11, 10] and in models of the effective properties of dilute elastic ...

  6. Computational Methods for Estimation in the Modeling of Nonlinear Elastomers

    E-print Network

    ... to model nonlinear dynamics in elastomers. Our efforts include the development of computational techniques ... vibration suppression devices. Materials such as elastomers, rubber-like composites typically filled ...

  7. Binding energies of tyrosine kinase inhibitors: Error assessment of computational methods for imatinib and nilotinib binding.

    PubMed

    Fong, Clifford W

    2015-10-01

    The binding energies of imatinib and nilotinib to tyrosine kinase have been determined by quantum mechanical (QM) computations, and compared with literature binding energy studies using molecular mechanics (MM). The potential errors in the computational methods include several critical factors. PMID:26025598

  8. A new computational method for cable theory problems Bulin J. Cao and L. F. Abbott

    E-print Network

    Abbott, Laurence

    Introduction. Cable theory is the primary tool used to relate the geometric form of a neuron to its electrical function (1-3). The basic problem of cable theory is to compute the membrane potential everywhere ...
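
    As a concrete illustration of that basic problem (a generic finite-difference sketch, not the authors' method, which the paper develops differently), the passive cable equation can be stepped in time as follows:

      import numpy as np

      def cable_step(V, dt, dx, lam, tau, I_inj):
          # One explicit Euler step of tau*dV/dt = lam**2 * d2V/dx2 - V + I_inj
          # with sealed (zero-flux) ends; lam and tau are the space and time
          # constants, I_inj the normalized injected current per compartment.
          d2V = np.empty_like(V)
          d2V[1:-1] = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dx ** 2
          d2V[0] = 2.0 * (V[1] - V[0]) / dx ** 2
          d2V[-1] = 2.0 * (V[-2] - V[-1]) / dx ** 2
          return V + (dt / tau) * (lam ** 2 * d2V - V + I_inj)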

  9. MHRDRing Z-Pinches and Related Geometries: Four Decades of Computational Modeling Using Still Unconventional Methods

    SciTech Connect

    Lindemuth, Irvin R.

    2009-01-21

    For approximately four decades, Z-pinches and related geometries have been computationally modeled using unique Alternating Direction Implicit (ADI) numerical methods. Computational results have provided illuminating and often provocative interpretations of experimental results. A number of past and continuing applications are reviewed and discussed.
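
    For readers unfamiliar with ADI, the sketch below shows the textbook Peaceman-Rachford splitting for 2-D diffusion (a generic illustration only; the codes discussed above solve far more general magnetohydrodynamic systems):

      import numpy as np

      def thomas(a, b, c, d):
          # Tridiagonal solve (sub-, main-, super-diagonals a, b, c; a[0] unused).
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      def adi_step(u, r):
          # One Peaceman-Rachford step for u_t = u_xx + u_yy, zero Dirichlet
          # boundaries; r = dt / (2 * dx**2). Implicit in x, then implicit in y.
          n, m = u.shape
          half = u.copy()
          for j in range(1, m - 1):
              rhs = u[1:-1, j] + r * (u[1:-1, j + 1] - 2 * u[1:-1, j] + u[1:-1, j - 1])
              k = n - 2
              half[1:-1, j] = thomas(np.full(k, -r), np.full(k, 1 + 2 * r),
                                     np.full(k, -r), rhs)
          out = half.copy()
          for i in range(1, n - 1):
              rhs = half[i, 1:-1] + r * (half[i + 1, 1:-1] - 2 * half[i, 1:-1]
                                         + half[i - 1, 1:-1])
              k = m - 2
              out[i, 1:-1] = thomas(np.full(k, -r), np.full(k, 1 + 2 * r),
                                    np.full(k, -r), rhs)
          return out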

  10. Logical Methods in Computer Science Vol. 9(4:13)2013, pp. 1-26

    E-print Network

    Doyen, Laurent

    Logical Methods in Computer Science, Vol. 9(4:13), 2013, pp. 1-26. www.lmcs-online.org. Submitted Nov. 14, 2012; published Nov. 14, 2013. Avoiding Shared Clocks in Networks of Timed Automata, S. Balaguer and T. Chatain. DOI: 10.2168/LMCS-9(4:13)2013.

  11. CLOUD COMPUTING TECHNOLOGIES PROGRAM An eleven-week in-depth program in the principles, methods, and technologies of Cloud Computing

    E-print Network

    Schaefer, Marcus

    CLOUD COMPUTING TECHNOLOGIES PROGRAM An eleven-week in-depth program in the principles, methods, and technologies of Cloud Computing DePaul University's Cloud Computing Technologies Program provides a broad understanding of the different leading Cloud Computing technologies. The program is designed to quickly educate

  12. Under consideration for publication in Formal Aspects of Computing Deriving Dense Linear Algebra

    E-print Network

    Batory, Don

    1. Introduction. Linear algebra libraries reside at the bottom of the scientific computing food chain ... matrix operations (like matrix-matrix multiplication). The loop steps through matrices with block sizes chosen so ...
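
    The blocking idea in that last fragment is easy to show concretely; a minimal cache-blocked matrix multiply (my sketch, not the paper's derived code):

      import numpy as np

      def blocked_matmul(A, B, bs=64):
          # Step through the matrices in bs-by-bs tiles so each tile stays in
          # cache while it is reused; NumPy slicing handles ragged edge tiles.
          n, kdim = A.shape
          kdim2, m = B.shape
          assert kdim == kdim2
          C = np.zeros((n, m))
          for i in range(0, n, bs):
              for j in range(0, m, bs):
                  for k in range(0, kdim, bs):
                      C[i:i + bs, j:j + bs] += A[i:i + bs, k:k + bs] @ B[k:k + bs, j:j + bs]
          return C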

  13. Novel methods in computational analysis and design of protein-protein interactions : applications to phosphoregulated interactions

    E-print Network

    Joughin, Brian Alan

    2007-01-01

    This thesis presents a number of novel computational methods for the analysis and design of protein-protein complexes, and their application to the study of the interactions of phosphopeptides with phosphopeptide-binding ...

  14. A numerical method for the computation of profile loss of turbine blades

    NASA Astrophysics Data System (ADS)

    Chandraker, A. L.

    Two schemes are presented for computing the profile loss of turbine blades. The first, a generalized 'loss-correlation' scheme based on a set of semiempirical expressions, is an extension of the Ainley and Mathieson (1951) method. It predicts the profile loss closer to experimental results than existing similar schemes and is easy to implement on a small computing machine. The second, a numerical 'loss-analysis' method based on the combination of potential flow field analysis in the cascade core and turbulent boundary layer flow analysis in the boundary region, can be used to compute the surface loading, boundary layer characteristics, and thereby the loss coefficient for axial flow turbine blades. In contrast to the loss-correlation methods, no separate corrections for Reynolds and Mach numbers are needed, and no empirical corrections for the effect of curvature are incorporated. The surface loading and energy loss computed by this method agreed well with the available experimental results.

  15. Application of Computer-Assisted Learning Methods in the Teaching of Chemical Spectroscopy.

    ERIC Educational Resources Information Center

    Ayscough, P. B.; And Others

    1979-01-01

    Discusses the application of computer-assisted learning methods to the interpretation of infrared, nuclear magnetic resonance, and mass spectra; and outlines extensions into the area of integrated spectroscopy. (Author/CMV)

  16. A unified moving grid gas-kinetic method in Eulerian space for viscous flow computation

    E-print Network

    Xu, Kun

    boundaries, such as the dam-break problem and airfoil oscillations. In order to further increase the robustness ... on developing computational fluid dynamics (CFD) methods based on the above two coordinate systems ...

  17. Principled computational methods for the validation discovery of genetic regulatory networks

    E-print Network

    Hartemink, Alexander J. (Alexander John), 1972-

    2001-01-01

    As molecular biology continues to evolve in the direction of high-throughput collection of data, it has become increasingly necessary to develop computational methods for analyzing observed data that are at once both ...

  18. Computational Method for Drug Target Search and Application in Drug Discovery

    E-print Network

    Chen, Yuzong

    Ligand-protein inverse docking has recently been introduced as a computer method for identification of potential protein targets of a drug. A protein structure database is searched to find proteins to which a drug can bind ...

  19. Computational studies of hydrogen storage materials and the development of related methods

    E-print Network

    Mueller, Timothy Keith

    2007-01-01

    Computational methods, including density functional theory and the cluster expansion formalism, are used to study materials for hydrogen storage. The storage of molecular hydrogen in the metal-organic framework with formula ...

  20. Computational methods for efficient nuclear data management in Monte Carlo neutron transport simulations

    E-print Network

    Walsh, Jonathan A. (Jonathan Alan)

    2014-01-01

    This thesis presents the development and analysis of computational methods for efficiently accessing and utilizing nuclear data in Monte Carlo neutron transport code simulations. Using the OpenMC code, profiling studies ...

  1. A Method to Implement Direct Anonymous Attestation Department of Computer Science and Engineering

    E-print Network

    International Association for Cryptologic Research (IACR)

    ..., software integrity attestation, etc. More introduction about TPMs and trusted computing platforms can be found in ... The paper is organized as follows. Section 2 analyzes the characteristics of TPMs and our method. Section 3 reviews some definitions ...

  2. A Method to Implement Direct Anonymous Attestation Department of Computer Science and Engineering

    E-print Network

    International Association for Cryptologic Research (IACR)

    ..., software integrity attestation, etc. More introduction about TPMs and trusted computing platforms can be found in ... The paper is organized as follows. Section 2 analyzes the characteristics of TPMs and our method. Section 3 reviews some ...

  3. Factors Affecting Texas Farm Commodity Prices and Index Computation Methods, 1910-58. 

    E-print Network

    Strong, G. B.; Kincannon, J. A.

    1959-01-01

    ... AND INDEX COMPUTATION METHODS, 1910-58. Texas Agricultural Experiment Station, College Station, Texas. Index of Prices Received by Texas Farmers ... Selection of Commodities ...

  4. COMPUTATIONAL METHODS FOR STUDYING THE INTERACTION BETWEEN POLYCYCLIC AROMATIC HYDROCARBONS AND BIOLOGICAL MACROMOLECULES

    EPA Science Inventory

    Computational Methods for Studying the Interaction between Polycyclic Aromatic Hydrocarbons and Biological Macromolecules .

    The mechanisms for the processes that result in significant biological activity of PAHs depend on the interaction of these molecules or their metabol...

  5. Computational methods for constructing protein structure models from 3D electron microscopy maps

    E-print Network

    Kihara, Daisuke

    Computational methods for constructing protein structure models from 3D electron microscopy maps. Available online 21 June 2013. Keywords: electron microscopy; structure fitting; macromolecular structure modeling. 1. Introduction: Electron density maps from cryo-electron microscopy (cryo-EM) ...

  6. Multi-centred mixed-methods PEPFAR HIV care & support public health evaluation: study protocol

    PubMed Central

    2010-01-01

    Background A public health response is essential to meet the multidimensional needs of patients and families affected by HIV disease in sub-Saharan Africa. In order to appraise current provision of HIV care and support in East Africa, and to provide evidence-based direction to future care programming, a Public Health Evaluation was commissioned by the PEPFAR programme of the US Government. Methods/Design This paper describes the 2-Phase international mixed methods study protocol utilising longitudinal outcome measurement, surveys, patient and family qualitative interviews and focus groups, staff qualitative interviews, health economics and document analysis. Aim 1) To describe the nature and scope of HIV care and support in two African countries, including the types of facilities available, clients seen, and availability of specific components of care [Study Phase 1]. Aim 2) To determine patient health outcomes over time and principal cost drivers [Study Phase 2]. The study objectives are as follows. 1) To undertake a cross-sectional survey of service configuration and activity by sampling 10% of the facilities being funded by PEPFAR to provide HIV care and support in Kenya and Uganda (Phase 1) in order to describe care currently provided, including pharmacy drug reviews to determine availability and supply of essential drugs in HIV management. 2) To conduct patient focus group discussions at each of these (Phase 1) to determine care received. 3) To undertake a longitudinal prospective study of 1200 patients who are newly diagnosed with HIV or patients with HIV who present with a new problem attending PEPFAR care and support services. Data collection includes self-reported quality of life, core palliative outcomes and components of care received (Phase 2). 4) To conduct qualitative interviews with staff, patients and carers in order to explore and understand service issues and care provision in more depth (Phase 2). 5) To undertake document analysis to appraise the clinical care procedures at each facility (Phase 2). 6) To determine principal cost drivers including staff, overhead and laboratory costs (Phase 2). Discussion This novel mixed methods protocol will permit transparent presentation of subsequent dataset results in publication, and offers a substantive model of protocol design to measure and integrate key activities and outcomes that underpin a public health approach to disease management in a low-income setting. PMID:20920241

  7. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
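
    The pipeline described here (seventeen features, linear discriminant analysis, then a four-layer back-propagation network with a 70/20/10 split) maps directly onto standard tools. Below is a minimal sketch using scikit-learn; the random placeholder arrays stand in for the paper's image-derived features, so the feature values, layer sizes, and random seed are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of the classification pipeline, assuming a feature matrix X
# (n_samples x 17: seven color + ten morphological features) and integer
# labels y for the seven grain varieties. Feature extraction from kernel
# images is omitted; placeholders are used instead.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2800, 17))        # placeholder for extracted features
y = rng.integers(0, 7, size=2800)      # placeholder for variety labels

# LDA reduces the 17 features to at most (n_classes - 1) = 6 discriminants.
lda = LinearDiscriminantAnalysis(n_components=6)
X_lda = lda.fit_transform(X, y)

# 70% training, 20% validation, 10% testing, as in the paper.
X_train, X_rest, y_train, y_rest = train_test_split(X_lda, y, train_size=0.7)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=1/3)

# Two hidden layers give a four-layer network (input, two hidden, output).
net = MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=2000)
net.fit(X_train, y_train)
print("validation accuracy:", net.score(X_val, y_val))
print("test accuracy:", net.score(X_test, y_test))
```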

  8. Method for measuring the public's appreciation and knowledge of bank notes

    NASA Astrophysics Data System (ADS)

    de Heij, Hans A. M.

    2002-04-01

    No matter how sophisticated a banknote's security features are, they are only effective if the public uses them. Surveys conducted by the De Nederlandsche Bank (the Dutch central bank, hereinafter: DNB) in the period 1989-1999 have shown that: the more people like a banknote, the more they know about it, including its security features; there is a positive correlation between the appreciation of a banknote (beautiful or ugly) and the knowledge of its security features, its picture and text elements; hardly anybody from the general public knows more than 4 security features by heart, which is why the number of security features for the public should be confined to a maximum of 4; the average number of security features known to a Dutchman was about 1.7 in 1999; over the years, the awareness of banknote security features gradually increased from 1.03 in 1983 to 1.7 in 1999, as a result of new banknote design and information campaigns. In 1999, DNB conducted its last opinion poll on NLG-notes. After the introduction of the euro banknotes on 1 January 2002, a new era of measurements will start. It is DNB's intention to apply the same method for the euro notes as it used for the NLG-notes, as this will permit: a comparison of the results of surveys on Dutch banknotes with those of surveys on the new euro notes (NLG) x (EUR); a comparison between the results of similar surveys conducted in other euro countries: (EUR1)x(EUR2). Furthermore, it will enable third parties to compare their banknote model XXX with the euro: (XXX)x(EUR). This article deals with the survey and the results regarding the NLG-notes and is, moreover, intended as an invitation to use the survey method described.

  9. Robust, efficient computational methods for axially symmetric optical aspheres.

    PubMed

    Forbes, G W

    2010-09-13

    Whether in design or the various stages of fabrication and testing, an effective representation of an asphere's shape is critical. Some algorithms are given for implementing tailored polynomials that are ideally suited to these needs. With minimal coding, these results allow a recently introduced orthogonal polynomial basis to be employed to arbitrary orders. Interestingly, these robust and efficient methods are enabled by the introduction of an auxiliary polynomial basis. PMID:20940865

  10. Development of supersonic computational aerodynamic program using panel method

    NASA Technical Reports Server (NTRS)

    Maruyama, Y.; Akishita, S.; Nakamura, A.

    1987-01-01

    An aerodynamic program for steady supersonic linearized potential flow using a higher order panel method was developed. The boundary surface is divided into planar triangular panels, on each of which a linearly varying doublet and a constant or linearly varying source are distributed. The source and doublet distributions over the panel assemblies are determined by their strengths at nodal points, which are placed at the vertices of the panels for linear distributions or on each panel for constant distributions.

  11. Singularity computations. [finite element methods for elastoplastic flow

    NASA Technical Reports Server (NTRS)

    Swedlow, J. L.

    1978-01-01

    Direct descriptions of the structure of a singularity would describe the radial and angular distributions of the field quantities as explicitly as practicable along with some measure of the intensity of the singularity. This paper discusses such an approach based on recent development of numerical methods for elastoplastic flow. Attention is restricted to problems where one variable or set of variables is finite at the origin of the singularity but a second set is not.

  12. Population density methods for stochastic neurons with realistic synaptic kinetics: firing rate dynamics and fast computational methods.

    PubMed

    Apfaltrer, Felix; Ly, Cheng; Tranchina, Daniel

    2006-12-01

    An outstanding problem in computational neuroscience is how to use population density function (PDF) methods to model neural networks with realistic synaptic kinetics in a computationally efficient manner. We explore an application of two-dimensional (2-D) PDF methods to simulating electrical activity in networks of excitatory integrate-and-fire neurons. We formulate a pair of coupled partial differential-integral equations describing the evolution of PDFs for neurons in non-refractory and refractory pools. The population firing rate is given by the total flux of probability across the threshold voltage. We use an operator-splitting method to reduce computation time. We report on speed and accuracy of PDF results and compare them to those from direct Monte Carlo simulations. We compute temporal frequency response functions for the transduction from the rate of postsynaptic input to population firing rate, and examine its dependence on background synaptic input rate. The behaviors in the 1-D and 2-D cases--corresponding to instantaneous and non-instantaneous synaptic kinetics, respectively--differ markedly from those for a somewhat different transduction: from injected current input to population firing rate output (Brunel et al. 2001; Fourcaud & Brunel 2002). We extend our method by adding inhibitory input, consider a 3-D to 2-D dimension reduction method, demonstrate its limitations, and suggest directions for future study. PMID:17162461
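
    The paper benchmarks its PDF solver against direct Monte Carlo simulation of integrate-and-fire populations. A minimal sketch of such a direct simulation (instantaneous synaptic kinetics, i.e., the 1-D case) follows; all parameter values are chosen for illustration and are not taken from the paper.

```python
# Direct Monte Carlo baseline: a population of leaky integrate-and-fire
# neurons driven by Poisson excitatory input with instantaneous jumps.
# Every numerical parameter below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(3)
n, dt, t_max = 5000, 1e-4, 1.0             # neurons, time step (s), duration (s)
tau_m, v_th, v_reset = 0.02, 1.0, 0.0      # membrane time constant, threshold, reset
rate_in, w = 900.0, 0.05                   # Poisson input rate (Hz), jump per spike

v = np.zeros(n)
rates = []
for _ in range(int(t_max / dt)):
    v -= dt * v / tau_m                           # leak toward rest (0)
    v += w * rng.poisson(rate_in * dt, size=n)    # excitatory synaptic jumps
    fired = v >= v_th
    v[fired] = v_reset
    rates.append(fired.sum() / (n * dt))          # instantaneous population rate (Hz)

print(f"mean population rate over last 0.5 s: {np.mean(rates[5000:]):.1f} Hz")
```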

  13. Theoretical studies of potential energy surfaces and computational methods

    SciTech Connect

    Shepard, R.

    1993-12-01

    This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. This difficult challenge is met with general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions, are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.

  14. Theoretical studies of potential energy surfaces and computational methods.

    SciTech Connect

    Shepard, R.

    2006-01-01

    This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces (PES) involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. Most of our work focuses on general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of molecular geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.

  15. Using Mixed Methods and Collaboration to Evaluate an Education and Public Outreach Program (Invited)

    NASA Astrophysics Data System (ADS)

    Shebby, S.; Shipp, S. S.

    2013-12-01

    Traditional indicators (such as the number of participants or Likert-type ratings of participant perceptions) are often used to provide stakeholders with basic information about program outputs and to justify funding decisions. However, use of qualitative methods can strengthen the reliability of these data and provide stakeholders with more meaningful information about program challenges, successes, and ultimate impacts (Stern, Stame, Mayne, Forss, Davis & Befani, 2012). In this session, presenters will discuss how they used a mixed methods evaluation to determine the impact of an education and public outreach (EPO) program. EPO efforts were intended to foster more effective, sustainable, and efficient utilization of science discoveries and learning experiences through three main goals 1) increase engagement and support by leveraging of resources, expertise, and best practices; 2) organize a portfolio of resources for accessibility, connectivity, and strategic growth; and 3) develop an infrastructure to support coordination. The evaluation team used a mixed methods design to conduct the evaluation. Presenters will first discuss five potential benefits of mixed methods designs: triangulation of findings, development, complementarity, initiation, and value diversity (Greene, Caracelli & Graham, 1989). They will next demonstrate how a 'mix' of methods, including artifact collection, surveys, interviews, focus groups, and vignettes, was included in the EPO project's evaluation design, providing specific examples of how alignment between the program theory and the evaluation plan was best achieved with a mixed methods approach. The presentation will also include an overview of different mixed methods approaches and information about important considerations when using a mixed methods design, such as selection of data collection methods and sources, and the timing and weighting of quantitative and qualitative methods (Creswell, 2003). Ultimately, this presentation will provide insight into how a mixed methods approach was used to provide stakeholders with important information about progress toward program goals. Creswell, J.W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage. Greene, J. C., Caracelli, V. J., & Graham, W. D. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11(3), 255-274. Stern, E; Stame, N; Mayne, J; Forss, K; Davis, R & Befani, B (2012) Broadening the range of designs and methods for impact evaluation. Department for International Development.

  16. 47 CFR 90.483 - Permissible methods and requirements of interconnecting private and public systems of...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...private and public systems of communications. 90.483 Section 90.483 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY...private and public systems of communications. Interconnection...

  17. Computational Biology Methods for Characterization of Pluripotent Cells.

    PubMed

    Araúzo-Bravo, Marcos J

    2016-01-01

    Pluripotent cells are a powerful tool for regenerative medicine and drug discovery. Several techniques have been developed to induce pluripotency, or to extract pluripotent cells from different tissues and biological fluids. However, the characterization of pluripotency requires tedious, expensive, time-consuming, and not always reliable wet-lab experiments; thus, an easy, standard quality-control protocol of pluripotency assessment remains to be established. High-throughput techniques can help here; in particular, gene expression microarrays have become a complementary technique for cellular characterization. Research has shown that comparing transcriptomics data against an Embryonic Stem Cell (ESC) reference is a good approach to assessing pluripotency. Under the premise that the best protocol is computer software source code, here I propose and explain, line by line, a software protocol coded in R-Bioconductor for pluripotency assessment based on comparing the transcriptomics data of candidate cells with an ESC reference. I provide advice on experimental design, warnings about possible pitfalls, and guidance for interpreting results. PMID:26141313
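
    The chapter's protocol is written in R/Bioconductor; purely to illustrate the underlying idea (correlating a candidate line's transcriptome with an ESC reference), here is a Python sketch. The synthetic profiles and the 0.9 decision threshold are assumptions for illustration, not values from the chapter.

```python
# Sketch of transcriptome comparison against an ESC reference. The profiles
# below are synthetic placeholders; real use would load normalized microarray
# expression values for genes shared between the candidate and the reference.
import numpy as np
from scipy.stats import spearmanr

def pluripotency_score(sample_log_expr, esc_log_expr):
    """Spearman correlation between log-expression profiles over shared genes."""
    rho, _ = spearmanr(sample_log_expr, esc_log_expr)
    return rho

rng = np.random.default_rng(1)
esc = rng.lognormal(size=5000)                         # placeholder ESC reference profile
candidate = esc * rng.lognormal(sigma=0.2, size=5000)  # placeholder candidate cell line

score = pluripotency_score(np.log2(candidate + 1), np.log2(esc + 1))
verdict = "pluripotent-like" if score > 0.9 else "not pluripotent-like"  # 0.9: illustrative cutoff
print(f"similarity to ESC reference: {score:.3f} -> {verdict}")
```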

  18. Pragmatic approaches to using computational methods to predict xenobiotic metabolism.

    PubMed

    Piechota, Przemyslaw; Cronin, Mark T D; Hewitt, Mark; Madden, Judith C

    2013-06-24

    In this study the performance of a selection of computational models for the prediction of metabolites and/or sites of metabolism was investigated. These included models incorporated in the MetaPrint2D-React, Meteor, and SMARTCyp software. The algorithms were assessed using two data sets: one a homogeneous data set of 28 Non-Steroidal Anti-Inflammatory Drugs (NSAIDs) and paracetamol (DS1) and the second a diverse data set of 30 top-selling drugs (DS2). The prediction of metabolites for the diverse data set (DS2) was better than for the more homogeneous DS1 for each model, indicating that some areas of chemical space may be better represented than others in the data used to develop and train the models. The study also identified compounds for which none of the packages could predict metabolites, again indicating areas of chemical space where more information is needed. Pragmatic approaches to using metabolism prediction software have also been proposed based on the results described here. These approaches include using cutoff values instead of restrictive reasoning settings in Meteor to reduce the output with little loss of sensitivity and for directing metabolite prediction by preselection based on likely sites of metabolism. PMID:23718189

  19. NMR quantum computing: applying theoretical methods to designing enhanced systems.

    PubMed

    Mawhinney, Robert C; Schreckenbach, Georg

    2004-10-01

    Density functional theory results for chemical shifts and spin-spin coupling constants are presented for compounds currently used in NMR quantum computing experiments. Specific design criteria were examined and numerical guidelines were assessed. Using a field strength of 7.0 T, protons require a coupling constant of 4 Hz with a chemical shift separation of 0.3 ppm, whereas carbon needs a coupling constant of 25 Hz for a chemical shift difference of 10 ppm, based on the minimal coupling approximation. Using these guidelines, it was determined that 2,3-dibromothiophene is limited to only two qubits; the three qubit system bromotrifluoroethene could be expanded to five qubits and the three qubit system 2,3-dibromopropanoic acid could also be used as a six qubit system. An examination of substituent effects showed that judiciously choosing specific groups could increase the number of available qubits by removing rotational degeneracies in addition to introducing specific conformational preferences that could increase (or decrease) the magnitude of the couplings. The introduction of one site of unsaturation can lead to a marked improvement in spectroscopic properties, even increasing the number of active nuclei. PMID:15366045
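
    The numerical guidelines quoted above lend themselves to a small checker. The sketch below encodes them directly, treating the quoted values as minimum thresholds at 7.0 T; that reading, and the example spin pairs, are our assumptions rather than anything stated beyond the abstract.

```python
# Checker for the abstract's guidelines at 7.0 T: proton pairs need
# J >= 4 Hz with >= 0.3 ppm separation; carbon pairs need J >= 25 Hz
# with >= 10 ppm separation. Treating these as minimum thresholds is
# an assumption about how the guidelines are meant to be applied.

THRESHOLDS = {          # nucleus -> (min J in Hz, min chemical-shift separation in ppm)
    "1H": (4.0, 0.3),
    "13C": (25.0, 10.0),
}

def usable_qubit_pair(nucleus: str, j_hz: float, delta_ppm: float) -> bool:
    """Return True if a spin pair meets the coupling/shift-separation guidelines."""
    min_j, min_sep = THRESHOLDS[nucleus]
    return j_hz >= min_j and delta_ppm >= min_sep

# Example: a proton pair with J = 7 Hz separated by 1.2 ppm qualifies.
print(usable_qubit_pair("1H", j_hz=7.0, delta_ppm=1.2))     # True
print(usable_qubit_pair("13C", j_hz=12.0, delta_ppm=15.0))  # False: J too small
```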

  20. Scenario-based design: A method for connecting information system design with public health operations and emergency management

    PubMed Central

    Reeder, Blaine; Turner, Anne M

    2011-01-01

    Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Methods: Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Results: Interview analysis identified twenty-five information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create twenty-five scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. Conclusion: The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design. PMID:21807120

  1. A single cell based model (Computational and Mathematical Methods in Medicine, Vol. 8, No. 1, pp. 51-69, 2007)

    E-print Network

    Dillon, Robert H.

    A single cell based model ... The definitive version was published in Computational and Mathematical Methods in Medicine, Volume 8, Issue 1, pp. 51-69, 2007.

  2. 26 CFR 1.9001-1 - Change from retirement to straight-line method of computing depreciation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...straight-line method of computing depreciation. 1.9001-1 Section 1.9001-1...straight-line method of computing depreciation. (a) In general. The...the allowance of deductions for the depreciation of those roadway assets...

  3. Computation of the diffracted field of a toothed occulter by the semi-infinite rectangle method.

    PubMed

    Sun, Mingzhe; Zhang, Hongxin; Bu, Heyang; Wang, Xiaoxun; Ma, Junlin; Lu, Zhenwu

    2013-10-01

    To observe the solar corona, stray light in the coronagraph, arising primarily from an external occulter and diaphragm illuminated directly by the Sun, should be strongly suppressed. A toothed occulter and diaphragm can be used to suppress stray light because they diffract much less light in the central area than a circular disk. This study develops a method of computing the light diffracted by a toothed occulter and diaphragm, obtaining the optimum shape using this method. To prove the method's feasibility, the diffracted fields of circular and rectangular disks are computed and compared with those calculated by a conventional method. PMID:24322869

  4. Computing the principal eigenelements of some linear operators using a branching Monte Carlo method

    SciTech Connect

    Lejay, Antoine; Maire, Sylvain

    2008-12-01

    In earlier work, we developed a Monte Carlo method to compute the principal eigenvalue of linear operators, which was based on the simulation of exit times. In this paper, we generalize this approach by showing how to use a branching method to improve the efficacy of simulating large exit times for the purpose of computing eigenvalues. Furthermore, we show that this new method provides a natural estimation of the first eigenfunction of the adjoint operator. Numerical examples of this method are given for the Laplace operator and a homogeneous neutron transport operator.
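
    To make the exit-time idea concrete (without the paper's branching refinement), here is a sketch for the simplest case: for standard Brownian motion on (0, 1), the survival probability decays like exp(-lambda_1 t), where lambda_1 = pi^2/2 is the principal Dirichlet eigenvalue of the generator (1/2) d^2/dx^2. All numerical parameters below are illustrative.

```python
# Estimate the principal eigenvalue from simulated exit times: run many
# Euler-discretized Brownian paths on (0, 1) and fit the exponential decay
# rate of the survival curve. This is the plain exit-time method, not the
# paper's branching variant.
import numpy as np

rng = np.random.default_rng(2)
n_paths, dt, t_max = 20000, 1e-4, 0.8
n_steps = int(t_max / dt)

x = np.full(n_paths, 0.5)                 # start all paths mid-interval
alive = np.ones(n_paths, dtype=bool)
survivors = np.empty(n_steps)

for k in range(n_steps):
    x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    alive &= (x > 0.0) & (x < 1.0)        # paths die when they exit (0, 1)
    survivors[k] = alive.sum()

# Fit the exponential tail of the survival curve between t1 = 0.3 and t2 = 0.7.
t = dt * np.arange(1, n_steps + 1)
i1, i2 = np.searchsorted(t, [0.3, 0.7])
lam = (np.log(survivors[i1]) - np.log(survivors[i2])) / (t[i2] - t[i1])
print(f"estimated lambda_1 = {lam:.3f}  (exact pi^2/2 = {np.pi**2 / 2:.3f})")
```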

  5. A rigid motion correction method for helical computed tomography (CT)

    NASA Astrophysics Data System (ADS)

    Kim, J.-H.; Nuyts, J.; Kyme, A.; Kuncic, Z.; Fulton, R.

    2015-03-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data.

  6. Computational Methods for Stability and Control (COMSAC): The Time Has Come

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.

    2005-01-01

    Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours make these applications ever more viable. This paper summarizes the efforts of four organizations to utilize high-end CFD tools to address the challenges of the stability and control arena. General motivation and the backdrop for these efforts are summarized, along with examples of current applications.

  7. Computer methods for ITER-like materials LIBS diagnostics

    NASA Astrophysics Data System (ADS)

    Łepek, Michał; Gąsior, Paweł

    2014-11-01

    Recent developments in Laser-Induced Breakdown Spectroscopy (LIBS) have made it the most promising candidate for future diagnostic applications characterizing the deposited materials in the International Thermonuclear Experimental Reactor (ITER), which is currently under construction. In this article the basics of LIBS are briefly discussed and software for spectrum analysis is presented. The software's main function is to analyze measured spectra for the presence of particular element lines. Some results of the program's operation are presented: correct results are obtained for graphite and aluminum, although identification of tungsten lines remains a problem because of their low intensity and hence the low signal-to-noise ratio of the measured signal. In the second part, artificial neural networks (ANNs) are proposed as the next step for LIBS spectra analysis. The idea focuses on a multilayer perceptron (MLP) with backpropagation learning. The potential of ANNs for data processing has been demonstrated through applications in several LIBS-related domains, e.g. differentiating ancient Greek ceramics (discussed). The idea is to apply an ANN to determine the presence of W, Al and C on ITER-like plasma-facing materials.

  8. A combined direct/inverse three-dimensional transonic wing design method for vector computers

    NASA Technical Reports Server (NTRS)

    Weed, R. A.; Carlson, L. A.; Anderson, W. K.

    1984-01-01

    A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

  9. Data-Driven Computational Methods for Materials Characterization, Classification, and Discovery

    NASA Astrophysics Data System (ADS)

    Meredig, Bryce

    Many major technological challenges facing contemporary society, in fields from energy to medicine, contain within them a materials discovery requirement. While, historically, these discoveries emerged from intuition and experimentation in the laboratory, modern computational methods and hardware hold the promise to dramatically accelerate materials discovery efforts. However, a number of key questions must be answered in order for computation to approach its full potential in new materials development. This thesis explores some of these questions, including: 1) How can we ensure that computational methods are amenable to as broad a range of materials as possible? 2) How can computational techniques assist experimental materials characterization? 3) Can computation readily predict properties indicative of real-world materials performance? 4) How do we glean actionable insights from the vast stores of data that computational methods generate? and 5) Can we lift some of the burdensome requirements for computational study of compounds that are entirely uncharacterized experimentally? In addressing these points, we turn frequently to concepts from statistics, computer science, and applied mathematics to shed new light on traditional topics in materials science, and offer a data-driven approach to steps in materials discovery.

  10. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    SciTech Connect

    Hadj Salah, S. Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-15

    An optimization method based on a physical analysis of the temperature profile and of the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. The method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density values of less than 5% with respect to an exact solution, while the computation time is about 10 orders of magnitude less. This method is complemented by a spectral method based on rearranging the line profiles. Results for a Lorentzian profile demonstrate a relative error of less than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

  11. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    NASA Astrophysics Data System (ADS)

    Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-01

    An optimization method based on a physical analysis of the temperature profile and of the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. The method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density values of less than 5% with respect to an exact solution, while the computation time is about 10 orders of magnitude less. This method is complemented by a spectral method based on rearranging the line profiles. Results for a Lorentzian profile demonstrate a relative error of less than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

  12. A parallel finite-difference method for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Swisshelm, Julie M.

    1989-01-01

    A finite-difference scheme for solving complex three-dimensional aerodynamic flow on parallel-processing supercomputers is presented. The method consists of a basic flow solver with multigrid convergence acceleration, embedded grid refinements, and a zonal equation scheme. Multitasking and vectorization have been incorporated into the algorithm. Results obtained include multiprocessed flow simulations from the Cray X-MP and Cray-2. Speedups as high as 3.3 for the two-dimensional case and 3.5 for segments of the three-dimensional case have been achieved on the Cray-2. The entire solver attained a factor of 2.7 improvement over its unitasked version on the Cray-2. The performance of the parallel algorithm on each machine is analyzed.

  13. THE LOS ALAMOS SUPERNOVA LIGHT-CURVE PROJECT: COMPUTATIONAL METHODS

    SciTech Connect

    Frey, Lucille H.; Even, Wesley; Hungerford, Aimee L.; Whalen, Daniel J.; Fryer, Chris L.; Fontes, Christopher J.; Colgan, James

    2013-02-15

    We have entered the era of explosive transient astronomy, in which current and upcoming real-time surveys such as the Large Synoptic Survey Telescope, the Palomar Transient Factory, and the Panoramic Survey Telescope and Rapid Response System will detect supernovae in unprecedented numbers. Future telescopes such as the James Webb Space Telescope may discover supernovae from the earliest stars in the universe and reveal their masses. The observational signatures of these astrophysical transients are the key to unveiling their central engines, the environments in which they occur, and to what precision they will pinpoint cosmic acceleration and the nature of dark energy. We present a new method for modeling supernova light curves and spectra with the radiation hydrodynamics code RAGE coupled with detailed monochromatic opacities in the SPECTRUM code. We include a suite of tests that demonstrate how the improved physics and opacities are indispensable to modeling shock breakout and light curves when radiation and matter are tightly coupled.

  14. The Los Alamos Supernova Light-curve Project: Computational Methods

    NASA Astrophysics Data System (ADS)

    Frey, Lucille H.; Even, Wesley; Whalen, Daniel J.; Fryer, Chris L.; Hungerford, Aimee L.; Fontes, Christopher J.; Colgan, James

    2013-02-01

    We have entered the era of explosive transient astronomy, in which current and upcoming real-time surveys such as the Large Synoptic Survey Telescope, the Palomar Transient Factory, and the Panoramic Survey Telescope and Rapid Response System will detect supernovae in unprecedented numbers. Future telescopes such as the James Webb Space Telescope may discover supernovae from the earliest stars in the universe and reveal their masses. The observational signatures of these astrophysical transients are the key to unveiling their central engines, the environments in which they occur, and to what precision they will pinpoint cosmic acceleration and the nature of dark energy. We present a new method for modeling supernova light curves and spectra with the radiation hydrodynamics code RAGE coupled with detailed monochromatic opacities in the SPECTRUM code. We include a suite of tests that demonstrate how the improved physics and opacities are indispensable to modeling shock breakout and light curves when radiation and matter are tightly coupled.

  15. Simplified methods for computing total sediment discharge with the modified Einstein procedure

    USGS Publications Warehouse

    Colby, Bruce R.; Hubbell, David Wellington

    1961-01-01

    A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.

  16. Applying computational methods to interpret experimental results in tribology and enantioselective catalysis

    NASA Astrophysics Data System (ADS)

    Garvey, Michael T.

    Computational methods are rapidly becoming a mainstay in the field of chemistry. Advances in computational methods (both theory and implementation), increasing availability of computational resources and the advancement of parallel computing are some of the major forces driving this trend. It is now possible to perform density functional theory (DFT) calculations with chemical accuracy for model systems that can be interrogated experimentally. This allows computational methods to supplement or complement experimental methods. There are even cases where DFT calculations can give insight into processes and interactions that cannot be interrogated directly by current experimental methods. This work presents several examples of the application of computational methods to the interpretation and analysis of experimentally obtained results. First, tribological systems were investigated primarily with full-potential linearized augmented plane wave (FLAPW) method DFT calculations. Second, small organic molecules adsorbed on Pd(111) were studied using projector-augmented wave (PAW) method DFT calculations and scanning tunneling microscopy (STM) image simulations to investigate molecular interactions involved in enantioselective heterogeneous catalysis. A method for calculating pressure-dependent shear properties of model boundary-layer lubricants is demonstrated. The calculated values are compared with experimentally obtained results. For the case of methyl pyruvate adsorbed on Pd(111), DFT-calculated adsorption energies and structures are used along with STM simulations to identify species observed by STM imaging. A previously unobserved enol species is discovered to be present along with the expected keto species. The information about methyl pyruvate species on Pd(111) is combined with previously published studies of S-alpha-(1-naphthyl)-ethylamine (NEA) to understand the nature of their interaction upon coadsorption on Pd(111). DFT calculated structures and energies are used to identify potential docking complexes and STM simulations are compared to the experimental STM images.

  17. A Cognition-Based Method to Ease the Computational Load for an Extended Kalman Filter

    PubMed Central

    Li, Yanpeng; Li, Xiang; Deng, Bin; Wang, Hongqiang; Qin, Yuliang

    2014-01-01

    The extended Kalman filter (EKF) is the nonlinear counterpart of the Kalman filter (KF). It is a useful parameter estimation method when the observation model and/or the state transition model is not a linear function. However, the EKF's computational requirements are a burden on the system. With the help of cognition-based design and the Taylor expansion method, a novel algorithm is proposed to ease the computational load of the EKF in azimuth prediction and localization under a nonlinear observation model. Where nonlinear functions and matrix inversions arise, the method retains only the major components of the Taylor expansion (chosen according to current performance and the performance requirements). As a result, the computational load is greatly lowered while performance is maintained. Simulation results show that the proposed measure delivers filtering output with precision similar to that of the regular EKF, at a substantially lower computational load. PMID:25479332
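
    For orientation, the baseline the authors accelerate is the standard EKF update under a nonlinear azimuth observation. The sketch below shows that baseline with a bearing-only measurement model; the paper's actual contribution, the cognition-based selection of dominant Taylor terms, is not reproduced, and all state and noise values are illustrative.

```python
# Standard EKF measurement update with h(x) = atan2(y, x), the azimuth of a
# 2-D position. This is the unaccelerated baseline, shown for reference only.
import numpy as np

def ekf_update(x, P, z, R):
    """One EKF update for a bearing-only observation of a 2-D position state."""
    px, py = x[0], x[1]
    z_pred = np.arctan2(py, px)
    r2 = px**2 + py**2
    H = np.array([[-py / r2, px / r2]])     # Jacobian of h at the current state
    S = H @ P @ H.T + R                     # innovation covariance (1x1)
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain (2x1)
    x_new = x + (K @ np.atleast_1d(z - z_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

x, P = np.array([10.0, 5.0]), np.eye(2)     # illustrative prior state and covariance
z, R = 0.5, np.array([[0.01]])              # measured azimuth (rad), measurement noise
x, P = ekf_update(x, P, z, R)
print(x)
```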

  18. A Review of Computational Methods in Materials Science: Examples from Shock-Wave and Polymer Physics

    PubMed Central

    Steinhauser, Martin O.; Hiermaier, Stefan

    2009-01-01

    This review discusses several computational methods used on different length and time scales for the simulation of material behavior. First, the importance of physical modeling and its relation to computer simulation on multiscales is discussed. Then, computational methods used on different scales are shortly reviewed, before we focus on the molecular dynamics (MD) method. Here we survey in a tutorial-like fashion some key issues including several MD optimization techniques. Thereafter, computational examples for the capabilities of numerical simulations in materials research are discussed. We focus on recent results of shock wave simulations of a solid which are based on two different modeling approaches and we discuss their respective assets and drawbacks with a view to their application on multiscales. Then, the prospects of computer simulations on the molecular length scale using coarse-grained MD methods are covered by means of examples pertaining to complex topological polymer structures including star-polymers, biomacromolecules such as polyelectrolytes and polymers with intrinsic stiffness. This review ends by highlighting new emerging interdisciplinary applications of computational methods in the field of medical engineering where the application of concepts of polymer physics and of shock waves to biological systems holds a lot of promise for improving medical applications such as extracorporeal shock wave lithotripsy or tumor treatment. PMID:20054467
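
    As a concrete reference point for the MD method the review surveys, here is a minimal velocity Verlet integration of two Lennard-Jones particles in reduced units (epsilon = sigma = m = 1). Production codes add the neighbor lists, cutoffs, thermostats, and optimization techniques the review discusses; all numbers below are illustrative.

```python
# Velocity Verlet integration of a Lennard-Jones pair in reduced units.
import numpy as np

def lj_forces(pos):
    """Pairwise Lennard-Jones forces for a small set of particles."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            inv6 = 1.0 / r2**3
            fmag = 24.0 * inv6 * (2.0 * inv6 - 1.0) / r2   # (-dU/dr) / r
            f[i] += fmag * rij
            f[j] -= fmag * rij
    return f

dt = 1e-3
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
f = lj_forces(pos)
for _ in range(1000):          # velocity Verlet loop (mass = 1)
    vel += 0.5 * dt * f
    pos += dt * vel
    f = lj_forces(pos)
    vel += 0.5 * dt * f
print(pos[1] - pos[0])         # the pair oscillates about the r = 2**(1/6) minimum
```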

  19. A review of computational methods in materials science: examples from shock-wave and polymer physics.

    PubMed

    Steinhauser, Martin O; Hiermaier, Stefan

    2009-12-01

    This review discusses several computational methods used on different length and time scales for the simulation of material behavior. First, the importance of physical modeling and its relation to computer simulation on multiscales is discussed. Then, computational methods used on different scales are shortly reviewed, before we focus on the molecular dynamics (MD) method. Here we survey in a tutorial-like fashion some key issues including several MD optimization techniques. Thereafter, computational examples for the capabilities of numerical simulations in materials research are discussed. We focus on recent results of shock wave simulations of a solid which are based on two different modeling approaches and we discuss their respective assets and drawbacks with a view to their application on multiscales. Then, the prospects of computer simulations on the molecular length scale using coarse-grained MD methods are covered by means of examples pertaining to complex topological polymer structures including star-polymers, biomacromolecules such as polyelectrolytes and polymers with intrinsic stiffness. This review ends by highlighting new emerging interdisciplinary applications of computational methods in the field of medical engineering where the application of concepts of polymer physics and of shock waves to biological systems holds a lot of promise for improving medical applications such as extracorporeal shock wave lithotripsy or tumor treatment. PMID:20054467

  20. Computational methods for microfluidic microscopy and phase-space imaging

    NASA Astrophysics Data System (ADS)

    Pegard, Nicolas Christian Richard

    Modern optical devices are made by assembling separate components such as lenses, objectives, and cameras. Traditionally, each part is optimized separately, even though the trade-offs typically limit the performance of the system overall. This component-based approach is particularly unfit to solve the new challenges brought by modern biology: 3D imaging, in vivo environments, and high sample throughput. In the first part of this thesis, we introduce a general method to design integrated optical systems. The laws of wave propagation, the performance of available technology, as well as other design parameters are combined as constraints into a single optimization problem. The solution provides qualitative design rules to improve optical systems as well as quantitative task-specific methods to minimize loss of information. Our results have applications in optical data storage, holography, and microscopy. The second part of this dissertation presents a direct application. We propose a more efficient design for wide-field microscopy with coherent light, based on double transmission through the sample. Historically, speckle noise and aberrations caused by undesired interferences have made coherent illumination unpopular for imaging. We were able to dramatically reduce speckle noise and unwanted interferences using optimized holographic wavefront reconstruction. The resulting microscope not only yields clear coherent images with low aberration---even in thick samples---but also increases contrast and enables optical filtering and in-depth sectioning. In the third part, we develop new imaging techniques that better respond to the needs of modern biology research through implementing optical design optimization. Using a 4D phase-space distribution, we first represent the state and propagation of incoherent light. We then introduce an additional degree of freedom by putting samples in motion in a microfluidic channel, increasing image diversity. From there, we develop a design that is minimally invasive yet optimizes the transfer of information from sample to detector. This optimization best responds to the desired imaging application. We present three microfluidic devices which can all be implemented as a compact add-on device for commercial microscopes. The first is a flow-scanning structured illumination microfluidic microscopy device demonstrating enhanced resolution in 2D. The second is a method for 3D deconvolution microscopy with a tilted channel to acquire and deconvolve gradually defocused images. Finally, we demonstrate optical projection microscopic tomography with simultaneous phase and intensity imaging capabilities in 3D by combining flow-scanning and optical acquisition in phase space. Experimental results utilize yeast cells as well as live C. elegans. In the fourth part, we show that optical system optimization also has non-imaging applications such as solar cell engineering. Instead of looking for an optical setup that maximizes the transfer of information, we implement inexpensive surface wrinkles and folds in the layered structure of organic solar cells and optimize their surface density. This strategy enhances light trapping and further improves the electric conversion of solar energy.

  1. Tracking Replicability as a Method of Post-Publication Open Evaluation

    PubMed Central

    Hartshorne, Joshua K.; Schachner, Adena

    2011-01-01

    Recent reports have suggested that many published results are unreliable. To increase the reliability and accuracy of published papers, multiple changes have been proposed, such as changes in statistical methods. We support such reforms. However, we believe that the incentive structure of scientific publishing must change for such reforms to be successful. Under the current system, the quality of individual scientists is judged on the basis of their number of publications and citations, with journals similarly judged via numbers of citations. Neither of these measures takes into account the replicability of the published findings, as false or controversial results are often particularly widely cited. We propose tracking replications as a means of post-publication evaluation, both to help researchers identify reliable findings and to incentivize the publication of reliable results. Tracking replications requires a database linking published studies that replicate one another. As any such database is limited by the number of replication attempts published, we propose establishing an open-access journal dedicated to publishing replication attempts. Data quality of both the database and the affiliated journal would be ensured through a combination of crowd-sourcing and peer review. As reports in the database are aggregated, ultimately it will be possible to calculate replicability scores, which may be used alongside citation counts to evaluate the quality of work published in individual journals. In this paper, we lay out a detailed description of how this system could be implemented, including mechanisms for compiling the information, ensuring data quality, and incentivizing the research community to participate. PMID:22403538
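
    The abstract proposes aggregating replication reports into replicability scores but does not fix a formula. The sketch below uses the fraction of successful attempts with add-one smoothing; that choice, and the study identifiers, are purely illustrative assumptions.

```python
# Illustrative aggregation of a replication-attempt database into per-study
# replicability scores. The smoothing formula is an assumption, not the
# paper's definition.
from collections import defaultdict

# (original_study, replication_succeeded) records, as the database might link them
attempts = [
    ("smith2010", True), ("smith2010", True), ("smith2010", False),
    ("jones2012", False), ("jones2012", False),
]

tally = defaultdict(lambda: [0, 0])        # study -> [successes, attempts]
for study, ok in attempts:
    tally[study][0] += int(ok)
    tally[study][1] += 1

for study, (succ, total) in tally.items():
    score = (succ + 1) / (total + 2)       # add-one smoothed replicability score
    print(f"{study}: {succ}/{total} successful replications -> score {score:.2f}")
```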

  2. A Computationally Efficient Meshless Local Petrov-Galerkin Method for Axisymmetric Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Chen, T.

    2003-01-01

    The Meshless Local Petrov-Galerkin (MLPG) method is one of the recently developed element-free methods. The method is convenient and can produce accurate results with continuous secondary variables, but is more computationally expensive than the finite element method. To overcome this disadvantage, a simple Heaviside test function is chosen. The computational effort is significantly reduced by eliminating the domain integral for the axisymmetric potential problems and by simplifying the domain integral for the axisymmetric elasticity problems. The method is evaluated through several patch tests for axisymmetric problems and example problems for which the exact solutions are available. The present method yielded very accurate solutions. The sensitivity of several parameters of the method is also studied.

  3. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    SciTech Connect

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness.
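
    The least-squares reconstruction ingredient can be illustrated in one dimension: given piecewise-constant (P0) cell data, a least-squares fit over the neighboring cells recovers an in-cell slope, the same mechanism the RDG family applies one polynomial order higher (P1 to P2). The sketch below is this 1-D toy under that stated simplification, not the authors' scheme.

```python
# 1-D least-squares slope reconstruction from neighboring cell values.
import numpy as np

x = np.linspace(0.0, 1.0, 11)                 # cell faces -> 10 uniform cells
xc = 0.5 * (x[:-1] + x[1:])                   # cell centers
h = x[1] - x[0]
u = np.sin(2 * np.pi * xc)                    # point samples standing in for cell data

slopes = np.zeros_like(u)
for i in range(1, len(u) - 1):
    # Least-squares fit of u_j - u_i ~ s * (xc_j - xc_i) over neighbors j = i-1, i+1.
    dx = np.array([-h, h])
    du = np.array([u[i - 1] - u[i], u[i + 1] - u[i]])
    slopes[i] = (dx @ du) / (dx @ dx)         # normal-equation solution for s

exact = 2 * np.pi * np.cos(2 * np.pi * xc)
print(f"max slope error on interior cells: {np.max(np.abs(slopes[1:-1] - exact[1:-1])):.4f}")
```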

  4. A shifting paradigm: Teachers' beliefs and methods for fostering ecological literacy in two public charter schools

    NASA Astrophysics Data System (ADS)

    Sterling, Evan P.

    Ecological literacy is measured by a person's ability to understand the natural systems that make life on earth possible and how to live in accordance with those systems. The emergence of the pedagogies of place- and community-based education during the past two decades provides a possible avenue for fostering ecological literacy in schools. This thesis explores the following research questions: 1) How is ecological literacy fostered in two Alaskan public charter schools? 2) What are teachers' beliefs in these two schools about the way children and youth develop ecological literacy? 3) What are effective teaching methods and what are the challenges in engaging students in ecological literacy? Semi-structured interviews were conducted with six K--12 teachers in two public charter schools in Alaska in order to investigate these questions, and relevant examples of student work were collected for study as well. Qualitative data analysis revealed several emergent themes: the need for real-world connections to curriculum; the necessity of time spent outdoors at a young age; the long-term and holistic nature of ecological literacy development; and the importance of family and community role models in developing connections with the natural world. Based upon the research findings, several recommendations are made to support the efforts of teachers in these schools and elsewhere for fostering ecological literacy in children and youth.

  5. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods for two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issues of adaptive finite element methods; the new methodology is validated by computing demonstration problems and comparing the resulting stress intensity factors with analytical results.

  6. A distributed vortex method for computing the vortex field of a missile

    NASA Technical Reports Server (NTRS)

    Barger, R. L.

    1978-01-01

    Vortex sheet development in the flow field of a missile was investigated by approximating the sheets in the cross-flow plane with short straight-line segments having distributed vorticity. In contrast with the method that represents the sheets as lines of discrete vortices, this distributed vortex method produced calculations with a high degree of computational stability.
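
    The building block of such a method is the velocity induced at a field point by a single straight segment carrying uniformly distributed vorticity. The complex-variable sketch below is a minimal two-dimensional illustration of that building block under standard potential-flow conventions, not the report's formulation; gamma (circulation per unit length) and the endpoint coordinates are assumed inputs.

        import numpy as np

        def panel_velocity(z, z1, z2, gamma):
            # Velocity induced at complex field point z by a straight segment
            # from z1 to z2 carrying uniform vorticity density gamma
            # (circulation per unit length).  Returns u + 1j*v.
            L = abs(z2 - z1)                 # segment length
            t = (z2 - z1) / L                # unit tangent (rotation factor)
            zeta = (z - z1) / t              # field point in segment coordinates
            w = -1j * gamma / (2 * np.pi) * np.log(zeta / (zeta - L))  # u - i*v, local frame
            return np.conj(w * np.conj(t))   # rotate back and return u + i*v

    Far from the segment this expression reduces to a point vortex of circulation gamma*L, while the induced velocity stays bounded near the segment away from its endpoints; that boundedness is what gives distributed-vorticity segments better computational stability than closely spaced discrete point vortices.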

  7. Methods, systems, and computer program products for network firewall policy optimization

    DOEpatents

    Fulp, Errin W. (Winston-Salem, NC); Tarsa, Stephen J. (Duxbury, MA)

    2011-10-18

    Methods, systems, and computer program products for firewall policy optimization are disclosed. According to one method, a firewall policy including an ordered list of firewall rules is defined. For each rule, a probability indicating a likelihood of receiving a packet matching the rule is determined. The rules are sorted in order of non-increasing probability in a manner that preserves the firewall policy.
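
    The essential constraint in such a sort is that only rules no packet can match simultaneously may be reordered; intersecting rules must keep their original relative order, or the policy changes meaning. The sketch below illustrates that constraint with a bubble-style sort; it is a minimal illustration, not the patented algorithm, and the 'match' sets standing in for real field-wise range checks are hypothetical.

        def intersects(a, b):
            # Hypothetical stand-in: rules here carry explicit sets of matchable
            # packet keys; real rules would intersect address, port, and
            # protocol ranges field by field.
            return bool(a['match'] & b['match'])

        def optimize_policy(rules):
            # Bubble toward non-increasing match probability, swapping adjacent
            # rules only when they are disjoint, so every intersecting pair
            # keeps its relative order and the policy semantics are preserved.
            rules = list(rules)
            changed = True
            while changed:
                changed = False
                for i in range(len(rules) - 1):
                    a, b = rules[i], rules[i + 1]
                    if b['prob'] > a['prob'] and not intersects(a, b):
                        rules[i], rules[i + 1] = b, a
                        changed = True
            return rules

        policy = [
            {'name': 'r1', 'prob': 0.1, 'match': {'tcp:80'}},
            {'name': 'r2', 'prob': 0.7, 'match': {'udp:53'}},
            {'name': 'r3', 'prob': 0.2, 'match': {'tcp:80', 'tcp:443'}},
        ]
        print([r['name'] for r in optimize_policy(policy)])  # ['r2', 'r1', 'r3']

    Frequently matched rules float toward the front, cutting the average number of comparisons per packet, while r1 and r3, which both match tcp:80 traffic, never change order.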

  8. How to select basis sets and computational methods for carbohydrate modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In the last decade there have been significant improvements not only in computer hardware but also in the development of quantum mechanical methods. This makes it more feasible to study large carbohydrate molecules via quantum mechanical methods, whereas in the past studies of carbohydrates were restricted to em...

  9. Latent Class Models for Diary Method Data: Parameter Estimation by Local Computations

    ERIC Educational Resources Information Center

    Rijmen, Frank; Vansteelandt, Kristof; De Boeck, Paul

    2008-01-01

    The increasing use of diary methods calls for the development of appropriate statistical methods. For the resulting panel data, latent Markov models can be used to model both individual differences and temporal dynamics. The computational burden associated with these models can be overcome by exploiting the conditional independence relations…
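
    The conditional independence being exploited is what makes forward-type recursions work: the likelihood can be accumulated one time point at a time instead of summing over all K^T latent state paths. A minimal scaled forward recursion for a basic latent Markov model is sketched below; the parameter names are illustrative, and the models in the article (with covariates and multiple response processes) are richer than this.

        import numpy as np

        def forward_loglik(pi, A, B, obs):
            # pi : (K,)  initial latent-state probabilities
            # A  : (K,K) transitions, A[i, j] = P(state j at t+1 | state i at t)
            # B  : (K,M) emissions,   B[k, m] = P(observing m | state k)
            # obs: sequence of observed category indices
            alpha = pi * B[:, obs[0]]
            c = alpha.sum()
            loglik = np.log(c)
            alpha /= c                       # rescale to prevent underflow
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                c = alpha.sum()
                loglik += np.log(c)
                alpha /= c
            return loglik

    The cost is O(T*K^2) per diary sequence, which is what keeps maximum-likelihood estimation tractable as the number of time points grows.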

  10. Ab initio modeling of carbohydrates: on the proper selection of computational methods and basis sets

    Technology Transfer Automated Retrieval System (TEKTRAN)

    With the development of faster computer hardware and quantum mechanical software it has become more feasible to study large carbohydrate molecules via quantum mechanical methods. In the past, studies of carbohydrates were restricted to empirical/semiempirical methods and Hartree Fock. In the last ...

  11. A Comparison of Computational Aeroacoustic Prediction Methods for Transonic Rotor Noise

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Lyrintzis, Anastasios; Koutsavdis, Evangelos K.

    1996-01-01

    This paper compares two methods for predicting transonic rotor noise for helicopters in hover and forward flight. Both methods rely on a computational fluid dynamics (CFD) solution as input to predict the acoustic near and far fields. For this work, the same full-potential rotor code has been used to compute the CFD solution for both acoustic methods. The first method employs the acoustic analogy as embodied in the Ffowcs Williams-Hawkings (FW-H) equation, including the quadrupole term. The second method uses a rotating Kirchhoff formulation. Computed results from both methods are compared with one another and with experimental data for both hover and advancing rotor cases. The results are quite good for all cases tested. The sensitivity of both methods to CFD grid resolution and to the choice of the integration surface/volume is investigated. The computational requirements of both methods are comparable; in both cases these requirements are much less than those of the CFD solution.

  12. Implementation of a Fan-Like Triangulation for the CPA Method to compute Lyapunov Functions

    E-print Network

    Hafstein, Sigurður Freyr

    A key issue in the computation of Continuous and Piecewise Affine (CPA) Lyapunov functions for nonlinear systems is the generation of a suitable triangulation. Recently, the CPA method was revised by using more advanced triangulations, and it was proved that it can...

  13. A Convex Speech Extraction Model and Fast Computation by the Split Bregman Method

    E-print Network

    Ferguson, Thomas S.

    Meng Yu, Wenye Ma, Jack Xin, and Stanley Osher. A fast speech extraction (FSE) method is presented, using convex optimization made possible by pause detection of the speech sources. Sparse unmixing filters...

  14. Mixing-Plane Method for Flutter Computation in Multi-stage Turbomachines

    E-print Network

    Liu, Feng

    The mixing-plane method for calculating the three-dimensional flow through multistage turbomachinery is used in the computation. ... the modern desire to make turbomachinery blade rows both lighter and better ... three-dimensional turbomachinery blade rows has been maturing for about twenty years. While these methods may not be in standard ...

  15. To appear in: Optimal Control Applications and Methods. COMPUTATIONS AND TIME-OPTIMAL CONTROLS

    E-print Network

    Kaya, Yalcin

    ... of constant-input arcs is used to get from an initial point to the target, and an optimization procedure ... The method is shown to be fast by making comparisons with a general optimal control software package ...

  16. Computer program offers new method for constructing periodic orbits in nonlinear dynamical systems

    NASA Technical Reports Server (NTRS)

    Bennett, A. G.; Hanafy, L. M.; Palmore, J. I.

    1968-01-01

    A computer program uses an iterative method to construct a sequence of approximately periodic orbits that converges to a precisely periodic dynamical solution in the limit. The method used is a modification of the generalized Newton-Raphson algorithm used in analyzing two-point boundary problems.
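
    The same idea is straightforward to reproduce with modern tools: pose the periodicity condition as a two-point boundary problem in the initial state and the period, and drive the return-map residual to zero with a Newton-type solver. The sketch below does this for the van der Pol oscillator; scipy's fsolve (a quasi-Newton hybrid) stands in for the program's modified generalized Newton-Raphson algorithm, and the oscillator, parameter value, and starting guesses are all assumptions made for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import fsolve

        MU = 1.0  # van der Pol damping parameter (arbitrary choice)

        def vdp(t, s):
            x, y = s
            return [y, MU * (1.0 - x * x) * y - x]

        def residual(p):
            # Shooting residual: integrate from (x0, 0) for time T and require
            # the trajectory to return to its starting point (periodicity).
            x0, T = p
            sol = solve_ivp(vdp, (0.0, T), [x0, 0.0], rtol=1e-10, atol=1e-12)
            return sol.y[:, -1] - np.array([x0, 0.0])

        x0, T = fsolve(residual, [2.0, 6.5])   # rough guesses for amplitude, period
        print(f"periodic orbit: x0 = {x0:.6f}, T = {T:.6f}")

    Each Newton-style correction refines the whole orbit at once, which is the sense in which the iterates are approximately periodic orbits converging to the precise dynamical solution.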

  17. NEW SIMULTANEOUS GENERALIZED SCHUR DECOMPOSITION METHODS FOR THE COMPUTATION OF THE CANONICAL POLYADIC DECOMPOSITION

    E-print Network

    Simultaneous generalized Schur decomposition (SGSD) methods for the computation of the canonical polyadic (CP) decomposition have been proposed. The original SGSD method requires that all three matrix factors of the CP ...
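
    For a concrete point of comparison, the standard workhorse for computing a CP decomposition is alternating least squares (ALS). The sketch below is that plain ALS baseline in Python, not the SGSD algorithm of the paper; the rank, initialization, and iteration count are arbitrary illustrative choices.

        import numpy as np

        def cp_als(X, R, iters=100):
            # Rank-R canonical polyadic decomposition of a 3-way tensor by plain
            # alternating least squares: fix two factor matrices, solve a linear
            # least-squares problem for the third, and cycle.
            rng = np.random.default_rng(0)
            dims = X.shape
            A = [rng.standard_normal((d, R)) for d in dims]
            # Mode-n unfoldings; remaining axes flattened in increasing order.
            unf = [np.moveaxis(X, n, 0).reshape(dims[n], -1) for n in range(3)]
            for _ in range(iters):
                for n in range(3):
                    o1, o2 = (A[m] for m in range(3) if m != n)
                    kr = (o1[:, None, :] * o2[None, :, :]).reshape(-1, R)  # Khatri-Rao
                    gram = (o1.T @ o1) * (o2.T @ o2)                       # equals kr.T @ kr
                    A[n] = unf[n] @ kr @ np.linalg.pinv(gram)
            return A

        # Exact rank-2 test tensor: the fit error should shrink toward zero.
        rng = np.random.default_rng(1)
        F = [rng.standard_normal((d, 2)) for d in (4, 5, 6)]
        X = np.einsum('ir,jr,kr->ijk', *F)
        A = cp_als(X, 2)
        print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', *A)))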

  18. A Lanczos eigenvalue method on a parallel computer. [for large complex space structure free vibration analysis

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.; Fulton, Robert E.

    1987-01-01

    Eigenvalue analysis of complex structures is a computationally intensive task which can benefit significantly from new and emerging parallel computers. This study reports on a parallel-computer implementation of the Lanczos method for free vibration analysis. The approach used here subdivides the major Lanczos calculation tasks into subtasks and introduces parallelism down to the subtask level, in operations such as matrix decomposition and forward/backward substitution. The method was implemented on a commercial parallel computer, and results were obtained for a long flexible space structure. While parallel computing efficiency is problem and computer dependent, the efficiency for the Lanczos method was good for a moderate number of processors on the test problem. The greatest reduction in time was realized for the decomposition of the stiffness matrix, a calculation which took 70 percent of the time in the sequential program and 25 percent of the time on eight processors. For a sample calculation of the twenty lowest frequencies of a 486-degree-of-freedom problem, the total sequential computing time was reduced by almost a factor of ten using 16 processors.
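
    The serial kernel being parallelized is compact enough to show. Below is a minimal m-step Lanczos iteration for a symmetric matrix; the eigenvalues (Ritz values) of the small tridiagonal matrix approximate the extreme eigenvalues of A. This is an illustrative sketch with no reorthogonalization and no shift-invert transformation (which a free-vibration code targeting the lowest frequencies would add); in the structural setting, the operator applied at each step is where the matrix decomposition and forward/backward substitutions discussed above enter.

        import numpy as np

        def lanczos(A, v0, m):
            # m-step Lanczos: build an orthonormal Krylov basis V and a symmetric
            # tridiagonal T = V.T @ A @ V whose extreme eigenvalues approximate
            # those of A.
            n = len(v0)
            V = np.zeros((n, m))
            alpha, beta = np.zeros(m), np.zeros(m - 1)
            V[:, 0] = v0 / np.linalg.norm(v0)
            for j in range(m):
                w = A @ V[:, j]                  # the expensive, parallelizable step
                alpha[j] = V[:, j] @ w
                w -= alpha[j] * V[:, j]
                if j > 0:
                    w -= beta[j - 1] * V[:, j - 1]
                if j < m - 1:
                    beta[j] = np.linalg.norm(w)
                    V[:, j + 1] = w / beta[j]
            return V, np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((300, 300)); A = A + A.T
        V, T = lanczos(A, rng.standard_normal(300), 50)
        print(np.linalg.eigvalsh(T)[-3:])   # compare with np.linalg.eigvalsh(A)[-3:]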

  19. Numerical Methods for the Computation of the Confluent and Gauss Hypergeometric Functions

    E-print Network

    John W. Pearson; Sheehan Olver; Mason A. Porter

    2015-08-28

    The two most commonly used hypergeometric functions are the confluent hypergeometric function and the Gauss hypergeometric function. We review the available techniques for accurate, fast, and reliable computation of these two hypergeometric functions in different parameter and variable regimes. The methods that we investigate include Taylor and asymptotic series computations, Gauss-Jacobi quadrature, numerical solution of differential equations, recurrence relations, and others. We discuss the results of numerical experiments used to determine the best methods, in practice, for each parameter and variable regime considered. We provide 'roadmaps' with our recommendation for which methods should be used in each situation.
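
    As a flavor of the simplest regime, direct Taylor summation works well for moderate |z|: successive terms obey a one-step recurrence, so no factorials or Pochhammer symbols are formed explicitly. The sketch below is a minimal version with an assumed tolerance and term cap; the survey's point is precisely that other regimes (large |z| or large parameters) call for the other methods listed.

        import numpy as np

        def hyp1f1_taylor(a, b, z, tol=1e-15, max_terms=500):
            # 1F1(a; b; z) by direct Taylor summation.  Term recurrence:
            # t_{k+1} = t_k * (a + k) * z / ((b + k) * (k + 1)), with t_0 = 1.
            term, total = 1.0, 1.0
            for k in range(max_terms):
                term *= (a + k) * z / ((b + k) * (k + 1))
                total += term
                if abs(term) < tol * abs(total):
                    return total
            raise RuntimeError("series not converged; switch regime/method")

        # Closed-form check: 1F1(1; 2; z) = (exp(z) - 1) / z.
        z = 0.7
        print(hyp1f1_taylor(1.0, 2.0, z), (np.exp(z) - 1.0) / z)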

  20. Business Architecture Development at Public Administration - Insights from Government EA Method Engineering Project in Finland

    NASA Astrophysics Data System (ADS)

    Valtonen, Katariina; Leppänen, Mauri

    Governments worldwide are concerned with the efficient production of services for their customers. To improve the quality of services and to make service production more efficient, information and communication technology (ICT) is widely exploited in public administration (PA). Succeeding in this exploitation calls for large-scale planning that embraces issues from the strategic to the technological level. In this planning the notion of enterprise architecture (EA) is commonly applied. One of the sub-architectures of EA is business architecture (BA). BA planning is challenging in PA due to the large number of stakeholders, the wide set of customers, and the solid, hierarchical structures of organizations. To support EA planning in Finland, a project to engineer a government EA (GEA) method was launched. In this chapter, we analyze the discussions and outputs of the project workshops and reflect the issues that emerged against the current e-government literature. We bring forth insights into, and suggestions for, government BA and its development.