Science.gov

Sample records for methods publications computer

  1. A computational method for drug repositioning using publicly available gene expression data

    PubMed Central

    2015-01-01

    Motivation: The identification of new therapeutic uses of existing drugs, or drug repositioning, offers the possibility of faster drug development, reduced risk, lower cost, and shorter paths to approval. The advent of high-throughput microarray technology has enabled comprehensive monitoring of the transcriptional responses associated with various disease states and drug treatments. These data can be used to characterize disease and drug effects and thereby give a measure of the association between a given drug and a disease. Several computational methods have been proposed in the literature that use publicly available transcriptional data to reposition drugs against diseases. Method: In this work, we carry out a data mining process using publicly available gene expression data sets associated with a few diseases and drugs, to identify existing drugs that can be repositioned against lung cancer and breast cancer. Results: Three strong candidates for repurposing have been identified: Letrozole and GDC-0941 against lung cancer, and Ribavirin against breast cancer. Letrozole and GDC-0941 are drugs currently used in breast cancer treatment, and Ribavirin is used in the treatment of Hepatitis C. PMID:26679199
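
    A hedged sketch of the kind of drug-disease association scoring this abstract describes: compare a disease expression signature against a drug-induced signature over the same gene set and flag strong anti-correlation as a repositioning candidate. This is a generic signature-reversal illustration, not the paper's actual pipeline; the fold-change values are invented.

      # Illustrative signature-reversal scoring (not the paper's exact method).
      import numpy as np
      from scipy.stats import spearmanr

      def reversal_score(disease_sig, drug_sig):
          # Spearman correlation over the same ordered gene set; strongly
          # negative scores suggest the drug may reverse the disease state.
          rho, _ = spearmanr(disease_sig, drug_sig)
          return rho

      # Toy log fold-changes for five genes (hypothetical values).
      disease = np.array([2.1, -1.3, 0.8, -0.5, 1.7])   # disease vs. healthy
      drug = np.array([-1.8, 1.1, -0.6, 0.4, -1.5])     # drug vs. vehicle
      print(f"reversal score = {reversal_score(disease, drug):.2f}")  # near -1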

  2. Exploration of Preterm Birth Rates Using the Public Health Exposome Database and Computational Analysis Methods

    PubMed Central

    Kershenbaum, Anne D.; Langston, Michael A.; Levine, Robert S.; Saxton, Arnold M.; Oyana, Tonny J.; Kilbourne, Barbara J.; Rogers, Gary L.; Gittner, Lisaann S.; Baktash, Suzanne H.; Matthews-Juarez, Patricia; Juarez, Paul D.

    2014-01-01

    Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation in high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. The singleton early premature birth rate in counties with populations over 100,000 persons provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including black population proportion, obesity and diabetes, sexually transmitted infection rates, mother's age, income, marriage rates, pollution, and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for ways to intervene and reduce health disparities for preterm births. PMID:25464130
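
    The analysis pipeline this abstract outlines (correlate variables, extract a dense subgraph, collapse it to a latent factor, regress the outcome) can be sketched compactly. The snippet below is a simplified stand-in: a crude correlation-threshold grouping replaces the paper's paraclique extraction, and all data are synthetic.

      # Simplified stand-in for the paracliques-to-regression pipeline.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 10))                  # 200 counties x 10 variables
      X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)  # make variables 0-2 correlated
      X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=200)
      y = 0.8 * X[:, 0] + 0.3 * X[:, 5] + rng.normal(size=200)  # outcome proxy

      corr = np.corrcoef(X, rowvar=False)
      group = [j for j in range(10) if corr[0, j] > 0.8]  # crude "paraclique"

      # Latent factor = first principal component of the grouped variables.
      sub = X[:, group] - X[:, group].mean(axis=0)
      _, _, vt = np.linalg.svd(sub, full_matrices=False)
      factor = sub @ vt[0]

      # Regress the outcome on the factor plus a remaining variable.
      design = np.column_stack([np.ones(200), factor, X[:, 5]])
      beta, *_ = np.linalg.lstsq(design, y, rcond=None)
      resid = y - design @ beta
      print(f"R-squared = {1 - resid.var() / y.var():.3f}")
      # Large |residuals| would flag counties with unusual rates.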

  3. Exploration of preterm birth rates using the public health exposome database and computational analysis methods.

    PubMed

    Kershenbaum, Anne D; Langston, Michael A; Levine, Robert S; Saxton, Arnold M; Oyana, Tonny J; Kilbourne, Barbara J; Rogers, Gary L; Gittner, Lisaann S; Baktash, Suzanne H; Matthews-Juarez, Patricia; Juarez, Paul D

    2014-12-01

    Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation in high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. The singleton early premature birth rate in counties with populations over 100,000 persons provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including black population proportion, obesity and diabetes, sexually transmitted infection rates, mother's age, income, marriage rates, pollution, and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for ways to intervene and reduce health disparities for preterm births. PMID:25464130

  4. BPO crude oil analysis data base user's guide: Methods, publications, computer access, correlations, uses, availability

    SciTech Connect

    Sellers, C.; Fox, B.; Paulz, J.

    1996-03-01

    The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

  5. Computer Science and Technology Publications. NBS Publications List 84.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.

    This bibliography lists publications of the Institute for Computer Sciences and Technology of the National Bureau of Standards. Publications are listed by subject in the areas of computer security, computer networking, and automation technology. Sections list publications of: (1) current Federal Information Processing Standards; (2) computer…

  6. Computers in Public Broadcasting: Who, What, Where.

    ERIC Educational Resources Information Center

    Yousuf, M. Osman

    This handbook offers guidance to public broadcasting managers on computer acquisition and development activities. Based on a 1981 survey of planned and current computer uses conducted by the Corporation for Public Broadcasting (CPB) Information Clearinghouse, computer systems in public radio and television broadcasting stations are listed by…

  7. Computer Center: A Diversity of Publications on Educational Computing.

    ERIC Educational Resources Information Center

    Crovello, Theodore J.

    1983-01-01

    Title, author(s), publisher, price, and review are provided for nine computer oriented publications. These include books related to the use of microcomputers in educational settings, computer assisted instruction, LOGO programming language, and the role of the computer in schools as a tutor, tool, and tutee. (JN)

  8. Computational Methods for Crashworthiness

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Carden, Huey D. (Compiler)

    1993-01-01

    Presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Crashworthiness held at Langley Research Center on 2-3 Sep. 1992 are included. The presentations addressed activities in the area of impact dynamics. Workshop attendees represented NASA, the Army and Air Force, the Lawrence Livermore and Sandia National Laboratories, the aircraft and automotive industries, and academia. The workshop objectives were to assess the state of the technology in the numerical simulation of crashes and to provide guidelines for future research.

  9. Computer intensive statistical methods

    NASA Astrophysics Data System (ADS)

    Yakowitz, S.

    The special session “Computer-Intensive Statistical Methods” was held in morning and afternoon parts at the 1985 AGU Fall Meeting in San Francisco, Calif. Its mission was to provide a forum for hydrologists and statisticians who are active in bringing unconventional, algorithmically oriented statistical techniques to bear on problems of hydrology. Statistician Emanuel Parzen (Texas A&M University, College Station, Tex.) opened the session by relating recent developments in quantile estimation methods and showing how properties of such methods can be used to advantage to categorize runoff data previously analyzed by I. Rodriguez-Iturbe (Universidad Simon Bolivar, Caracas, Venezuela). Statistician Eugene Schuster (University of Texas, El Paso) discussed recent developments in nonparametric density estimation which enlarge the framework for convenient incorporation of prior and ancillary information. These extensions were motivated by peak annual flow analysis. Mathematician D. Myers (University of Arizona, Tucson) gave a brief overview of “kriging” and outlined some recently developed methodology.

  10. Publication Bias in Methodological Computational Research

    PubMed Central

    Boulesteix, Anne-Laure; Stierle, Veronika; Hapfelmeier, Alexander

    2015-01-01

    The problem of publication bias has long been discussed in research fields such as medicine. There is a consensus that publication bias is a reality and that solutions should be found to reduce it. In methodological computational research, including cancer informatics, publication bias may also be at work. The publication of negative research findings is certainly also a relevant issue, but it has attracted very little attention to date. The present paper aims at providing a new formal framework to describe the notion of publication bias in the context of methodological computational research, facilitate and stimulate discussions on this topic, and increase awareness in the scientific community. We report an exemplary pilot study that aims at gaining experience with the collection and analysis of information on unpublished research efforts with respect to publication bias, and we outline the encountered problems. Based on these experiences, we try to formalize the notion of publication bias. PMID:26508827

  11. Public Databases Supporting Computational Toxicology

    EPA Science Inventory

    A major goal of the emerging field of computational toxicology is the development of screening-level models that predict potential toxicity of chemicals from a combination of mechanistic in vitro assay data and chemical structure descriptors. In order to build these models, resea...

  12. Research Methods in Public Relations.

    ERIC Educational Resources Information Center

    Belvin, Robert; Botan, Carl

    In response to the need for research methods training in the public relations undergraduate curriculum, this paper identifies the range of possible formats for a public relations research methods course (analyzing strengths and weaknesses for each) and recommends a hybrid format. The paper then identifies and compares different goals for research…

  13. Satellite orbit computation methods

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Mathematical and algorithmic techniques for the solution of problems in satellite dynamics were developed, along with solutions for satellite orbit motion. Dynamical analyses of shuttle on-orbit operations were conducted. Computer software routines for use in shuttle mission planning were developed and analyzed, while mathematical models of atmospheric density were formulated.

  14. Public Relations, Computers, and Election Success.

    ERIC Educational Resources Information Center

    Banach, William J.; Westley, Lawrence

    This paper describes a successful financial election campaign that used a combination of computer technology and public relations techniques. Analysis, determination of needs, development of strategy, organization, finance, communication, and evaluation are given as the steps to be taken for a successful school financial campaign. The authors…

  15. Computational methods working group

    SciTech Connect

    Gabriel, T. A.

    1997-09-01

    During the Cold Moderator Workshop several working groups were established including one to discuss calculational methods. The charge for this working group was to identify problems in theory, data, program execution, etc., and to suggest solutions considering both deterministic and stochastic methods including acceleration procedures.

  16. Acquisition of Computing Literacy on Shared Public Computers: Children and the "Hole in the Wall"

    ERIC Educational Resources Information Center

    Mitra, Sugata; Dangwal, Ritu; Chatterjee, Shiffon; Jha, Swati; Bisht, Ravinder S.; Kapur, Preeti

    2005-01-01

    Earlier work, often referred to as the "hole in the wall" experiments, has shown that groups of children can learn to use public computers on their own. This paper presents the method and results of an experiment conducted to investigate whether such unsupervised group learning in shared public spaces is universal. The experiment was conducted

  17. Acquisition of Computing Literacy on Shared Public Computers: Children and the "Hole in the Wall"

    ERIC Educational Resources Information Center

    Mitra, Sugata; Dangwal, Ritu; Chatterjee, Shiffon; Jha, Swati; Bisht, Ravinder S.; Kapur, Preeti

    2005-01-01

    Earlier work, often referred to as the "hole in the wall" experiments, has shown that groups of children can learn to use public computers on their own. This paper presents the method and results of an experiment conducted to investigate whether such unsupervised group learning in shared public spaces is universal. The experiment was conducted…

  18. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    47 CFR Part 80 (Stations in the Maritime Services), Standards for Computing Public Coast Station VHF Coverage, § 80.771 Method of computing coverage: Compute the +17 dBu contour as follows: (a) Determine the effective...

  19. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    47 CFR Part 80 (Stations in the Maritime Services), Standards for Computing Public Coast Station VHF Coverage, § 80.771 Method of computing coverage: Compute the +17 dBu contour as follows: (a) Determine the effective...

  20. ANNOTATED BIBLIOGRAPHY OF RAND PUBLICATIONS IN COMPUTATIONAL LINGUISTICS.

    ERIC Educational Resources Information Center

    HAYS, DAVID G.; AND OTHERS

    This revised annotated bibliography lists 143 RAND publications in computational linguistics, including such areas as linguistic research methods, studies on the Russian and English languages, information retrieval, psycholinguistics, and character readers. Entries on the Russian language are further organized as analyses of texts and glossaries,…

  1. Computational Methods Development at Ames

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis and design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents for the above research and speculates on its future course.

  2. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign, are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from the literature. PMID:24381236

  3. Closing the "Digital Divide": Building a Public Computing Center

    ERIC Educational Resources Information Center

    Krebeck, Aaron

    2010-01-01

    The public computing center offers an economical and environmentally friendly model for providing additional public computer access when and where it is needed. Though not intended to be a replacement for a full-service branch, the public computing center does offer a budget-friendly option for quickly expanding high-demand services into the…

  4. Methods for computing color anaglyphs

    NASA Astrophysics Data System (ADS)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computational technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs, including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
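
    The per-pixel optimization this abstract describes can be illustrated in miniature. Below is a hedged sketch, not the paper's implementation: the 3x3 filter/display matrices and the crude cube-root color transform are invented stand-ins for the real spectral data and the CIE L*a*b* conversion.

      # Sketch of per-pixel anaglyph computation as nonlinear least squares
      # in a uniform color space. Matrices are illustrative stand-ins.
      import numpy as np
      from scipy.optimize import least_squares

      M_LEFT = np.array([[0.4, 0.3, 0.2],    # display RGB -> perceived tristimulus
                         [0.2, 0.6, 0.1],    # through the left (red) filter
                         [0.0, 0.1, 0.3]])
      M_RIGHT = np.array([[0.1, 0.0, 0.0],   # through the right (cyan) filter
                          [0.1, 0.5, 0.2],
                          [0.0, 0.2, 0.7]])

      def to_lab(xyz):
          # Crude L*a*b*-like opponent transform for the sketch only.
          f = np.cbrt(np.clip(xyz, 1e-6, None))
          return np.array([116*f[1] - 16, 500*(f[0] - f[1]), 200*(f[1] - f[2])])

      def residuals(rgb, target_left, target_right):
          # Color distance of one anaglyph pixel, seen through each filter,
          # against the desired left- and right-eye colors.
          left = to_lab(M_LEFT @ rgb) - to_lab(M_LEFT @ target_left)
          right = to_lab(M_RIGHT @ rgb) - to_lab(M_RIGHT @ target_right)
          return np.concatenate([left, right])

      pixel_left, pixel_right = np.array([0.8, 0.2, 0.2]), np.array([0.2, 0.2, 0.8])
      sol = least_squares(residuals, x0=np.full(3, 0.5), bounds=(0, 1),
                          args=(pixel_left, pixel_right))
      print("anaglyph pixel RGB:", sol.x.round(3))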

  5. Cepstral methods in computational vision

    NASA Astrophysics Data System (ADS)

    Bandari, Esfandiar; Little, James J.

    1993-05-01

    Many computational vision routines can be regarded as recognition and retrieval of echoes in space or time. Cepstral analysis is a powerful nonlinear adaptive signal processing methodology widely used in many areas such as echo retrieval and removal, speech processing and phoneme chunking, radar and sonar processing, seismology, medicine, image deblurring and restoration, and signal recovery. The aims of this paper are: (1) to provide a brief mathematical and historical review of cepstral techniques; (2) to introduce computational and performance improvements to the power and differential cepstrum for use in the detection of echoes, and to provide a comparison between these methods and traditional cepstral techniques; (3) to apply cepstrum to visual tasks such as motion analysis and trinocular vision; and (4) to draw a brief comparison between cepstrum and other matching techniques. The computational and performance improvements introduced in this paper can be applied in other areas that frequently utilize cepstrum.

  6. Computational methods for stellarator configurations

    NASA Astrophysics Data System (ADS)

    Betancourt, O.

    This project had two main objectives. The first one was to continue to develop computational methods for the study of three dimensional magnetic confinement configurations. The second one was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three dimensional nonlinear effects in Tokamak configurations, these self consistent computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.

  7. Computational methods for stellarator configurations

    SciTech Connect

    Betancourt, O.

    1992-01-01

    This project had two main objectives. The first one was to continue to develop computational methods for the study of three dimensional magnetic confinement configurations. The second one was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three dimensional nonlinear effects in Tokamak configurations, these self consistent computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.

  8. Geometric methods in quantum computation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun

    Recent advances in the physical sciences and engineering have created great hopes for new computational paradigms and substrates. One such new approach is the quantum computer, which holds the promise of enhanced computational power. Analogous to the way a classical computer is built from electrical circuits containing wires and logic gates, a quantum computer is built from quantum circuits containing quantum wires and elementary quantum gates to transport and manipulate quantum information. Therefore, design of quantum gates and quantum circuits is a prerequisite for any real application of quantum computation. In this dissertation we apply geometric control methods from differential geometry and Lie group representation theory to analyze the properties of quantum gates and to design optimal quantum circuits. Using the Cartan decomposition and the Weyl group, we show that the geometric structure of nonlocal two-qubit gates is a 3-torus. After further reducing the symmetry, the geometric representation of nonlocal gates is seen to be conveniently visualized as a tetrahedron. Each point in this tetrahedron except on the base corresponds to a different equivalence class of nonlocal gates. This geometric representation is one of the cornerstones for the discussion on quantum computation in this dissertation. We investigate the properties of those two-qubit operations that can generate maximal entanglement. It is an astonishing finding that if we randomly choose a two-qubit operation, the probability that we obtain a perfect entangler is exactly one half. We prove that given a two-body interaction Hamiltonian, it is always possible to explicitly construct a quantum circuit for exact simulation of any arbitrary nonlocal two-qubit gate by turning on the two-body interaction at most three times, together with at most four local gates. We also provide an analytic approach to construct a universal quantum circuit from any entangling gate supplemented with local gates. Closed-form solutions have been derived for each step in this explicit construction procedure. Moreover, the minimum upper bound is found for constructing a universal quantum circuit from any Controlled-Unitary gate. A near-optimal explicit construction of universal quantum circuits from a given Controlled-Unitary is provided. For the Controlled-NOT and Double-CNOT gate, we then develop simple analytic ways to construct universal quantum circuits with exactly three applications, which is the least possible for these gates. We further discover a new quantum gate (named B gate) that achieves the desired universality with a minimal number of gates. Optimal implementation of single-qubit quantum gates is also investigated. Finally, as a real physical application, a constructive way to implement any arbitrary two-qubit operation on a spin electronics system is discussed.

  9. Systems Science Methods in Public Health

    PubMed Central

    Luke, Douglas A.; Stamatakis, Katherine A.

    2012-01-01

    Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example in preparing for global pandemics. However, systems science study designs and analytic methods remain underutilized and are not widely featured in public health curricula or training. In this review we present an argument for the utility of systems science methods in public health, introduce three important systems science methods (system dynamics, network analysis, and agent-based modeling), and provide three case studies where these methods have been used to answer important public health science questions in the areas of infectious disease, tobacco control, and obesity. PMID:22224885
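
    Of the three methods named in this review, system dynamics is the easiest to show in miniature. Below is a standard SIR compartmental model of infectious disease spread, the textbook example of the approach; the rate parameters are illustrative, not taken from the paper.

      # Minimal system-dynamics example: an SIR epidemic model.
      import numpy as np
      from scipy.integrate import solve_ivp

      def sir(t, y, beta, gamma):
          s, i, r = y
          return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

      beta, gamma = 0.3, 0.1   # transmission and recovery rates (per day), assumed
      sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0],
                      args=(beta, gamma), dense_output=True)
      s, i, r = sol.sol(np.linspace(0, 160, 5))
      print("infected fraction over time:", i.round(3))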

  10. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-01

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods. PMID:26549429

  11. Computational methods for stealth design

    SciTech Connect

    Cable, V.P.

    1992-08-01

    A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.

  12. Optimization Methods for Computer Animation.

    ERIC Educational Resources Information Center

    Donkin, John Caldwell

    Emphasizing the importance of economy and efficiency in the production of computer animation, this master's thesis outlines methodologies that can be used to develop animated sequences with the highest quality images for the least expenditure. It is assumed that if computer animators are to be able to fully exploit the available resources, they…

  13. Computers and Public Policy. Proceedings of the Symposium Man and the Computer.

    ERIC Educational Resources Information Center

    Oden, Teresa, Ed.; Thompson, Christine, Ed.

    Experts from the fields of law, business, government, and research were invited to a symposium sponsored by Dartmouth College to examine public policies which are challenged by the advent of computer technology. Eleven papers were delivered addressing such critical social issues related to computing and public policies as the man-computer…

  14. How You Can Protect Public Access Computers "and" Their Users

    ERIC Educational Resources Information Center

    Huang, Phil

    2007-01-01

    By providing the public with online computing facilities, librarians make available a world of information resources beyond their traditional print materials. Internet-connected computers in libraries greatly enhance the opportunity for patrons to enjoy the benefits of the digital age. Unfortunately, as hackers become more sophisticated and…

  15. Wildlife software: procedures for publication of computer software

    USGS Publications Warehouse

    Samuel, M.D.

    1990-01-01

    Computers and computer software have become an integral part of the practice of wildlife science. Computers now play an important role in teaching, research, and management applications. Because of the specialized nature of wildlife problems, specific computer software is usually required to address a given problem (e.g., home range analysis). This type of software is not usually available from commercial vendors and therefore must be developed by those wildlife professionals with particular skill in computer programming. Current journal publication practices generally prevent a detailed description of computer software associated with new techniques. In addition, peer review of journal articles does not usually include a review of associated computer software. Thus, many wildlife professionals are usually unaware of computer software that would meet their needs or of major improvements in software they commonly use. Indeed most users of wildlife software learn of new programs or important changes only by word of mouth.

  16. A Computer-Assisted Instruction in Teaching Abstract Statistics to Public Affairs Undergraduates

    ERIC Educational Resources Information Center

    Ozturk, Ali Osman

    2012-01-01

    This article attempts to demonstrate the applicability of computer-assisted instruction supported with simulated data in teaching abstract statistical concepts to political science and public affairs students in an introductory research methods course. The software is called the Elaboration Model Computer Exercise (EMCE) in that it takes a great…

  17. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  18. Computational Methods in Nanostructure Design

    NASA Astrophysics Data System (ADS)

    Bellesia, Giovanni; Lampoudi, Sotiria; Shea, Joan-Emma

    Self-assembling peptides can serve as building blocks for novel biomaterials. Replica exchange molecular dynamics simulations are a powerful means to probe the conformational space of these peptides. We discuss the theoretical foundations of this enhanced sampling method and its use in biomolecular simulations. We then apply this method to determine the monomeric conformations of the Alzheimer amyloid-β(12-28) peptide that can serve as initiation sites for aggregation.

  19. Computational methods for biomolecular electrostatics.

    PubMed

    Dong, Feng; Olsen, Brett; Baker, Nathan A

    2008-01-01

    An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate, and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion, and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems, with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951

  20. Computational Methods for Biomolecular Electrostatics

    PubMed Central

    Dong, Feng; Olsen, Brett; Baker, Nathan A.

    2008-01-01

    An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951

  1. Computational Methods to Model Persistence.

    PubMed

    Vandervelde, Alexandra; Loris, Remy; Danckaert, Jan; Gelens, Lendert

    2016-01-01

    Bacterial persister cells are dormant cells, tolerant to multiple antibiotics, that are involved in several chronic infections. Toxin-antitoxin modules play a significant role in the generation of such persister cells. Toxin-antitoxin modules are small genetic elements, omnipresent in the genomes of bacteria, which code for an intracellular toxin and its neutralizing antitoxin. In the past decade, mathematical modeling has become an important tool to study the regulation of toxin-antitoxin modules and their relation to the emergence of persister cells. Here, we provide an overview of several numerical methods to simulate toxin-antitoxin modules. We cover both deterministic modeling using ordinary differential equations and stochastic modeling using stochastic differential equations and the Gillespie method. Several characteristics of toxin-antitoxin modules such as protein production and degradation, negative autoregulation through DNA binding, toxin-antitoxin complex formation and conditional cooperativity are gradually integrated in these models. Finally, by including growth rate modulation, we link toxin-antitoxin module expression to the generation of persister cells. PMID:26468111
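
    As a concrete taste of the stochastic side of these models, here is a minimal Gillespie simulation of protein production and degradation, the simplest building block of the toxin-antitoxin models the chapter describes; the rate constants are illustrative only.

      # Minimal Gillespie stochastic simulation: protein production/degradation.
      import numpy as np

      rng = np.random.default_rng(1)
      k_prod, k_deg = 5.0, 0.1   # production (1/min) and degradation (1/min per molecule)

      t, n, t_end = 0.0, 0, 200.0
      while t < t_end:
          rates = np.array([k_prod, k_deg * n])
          total = rates.sum()
          t += rng.exponential(1.0 / total)     # waiting time to next reaction
          if rng.random() < rates[0] / total:   # choose which reaction fires
              n += 1                            # production event
          else:
              n -= 1                            # degradation event

      print("final copy number:", n)            # fluctuates around k_prod/k_deg = 50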

  2. Computational Methods for Ideal Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Kercher, Andrew D.

    Numerical schemes for ideal magnetohydrodynamics (MHD) are widely used for modeling space weather and astrophysical flows. They are designed to resolve the different waves that propagate through a magnetohydrodynamic fluid, namely, the fast, Alfven, slow, and entropy waves. Numerical schemes for ideal magnetohydrodynamics that are based on the standard finite volume (FV) discretization exhibit pseudo-convergence, in which non-regular waves disappear only after heavy grid refinement. A method is described for obtaining solutions for coplanar and near-coplanar cases that consist of only regular waves, independent of grid refinement. The method, referred to as Compound Wave Modification (CWM), involves removing the flux associated with non-regular structures and can be used for simulations in two and three dimensions because it does not require explicitly tracking an Alfven wave. For a near-coplanar case, and for grids with 2^13 points or fewer, we find root-mean-square errors (RMSEs) that are as much as 6 times smaller. For the coplanar case, in which non-regular structures will exist at all levels of grid refinement for standard FV schemes, the RMSE is as much as 25 times smaller. A multidimensional ideal MHD code has been implemented for simulations on graphics processing units (GPUs). Performance measurements were conducted for both the NVIDIA GeForce GTX Titan and the Intel Xeon E5645 processor. The GPU is shown to perform one to two orders of magnitude faster than the CPU when using a single core, and two to three times faster than the CPU run in parallel with OpenMP. Performance comparisons are made for two methods of storing data on the GPU. The first approach stores data as an Array of Structures (AoS), e.g., a point coordinate array of size 3 x n is iterated over. The second approach stores data as a Structure of Arrays (SoA), e.g., three separate arrays of size n are iterated over simultaneously. For an AoS, coalescing does not occur, reducing memory efficiency. All results are given for Cartesian grids, but the algorithms are implemented for general geometry on unstructured grids.

  3. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.

  4. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral methods in numerical calculations stems from the attractive approximation properties of orthogonal polynomial expansions.
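
    The transform method credited above to Orszag and to Eliasen, Machenauer, and Rasmussen is easy to sketch: differentiate in Fourier space, then form nonlinear products in physical space. A minimal periodic example (not from the paper):

      # Transform-method sketch: spectral derivative on a periodic grid,
      # with the nonlinear product u*u_x formed in physical space.
      import numpy as np

      n = 64
      x = 2 * np.pi * np.arange(n) / n
      u = np.sin(x)

      k = np.fft.fftfreq(n, d=1.0/n) * 1j             # wavenumbers i*k
      u_x = np.real(np.fft.ifft(k * np.fft.fft(u)))   # derivative in Fourier space

      nonlinear = u * u_x                             # product in physical space
      print(np.max(np.abs(u_x - np.cos(x))))          # ~1e-14: spectral accuracy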

  5. Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

    ERIC Educational Resources Information Center

    Jayakar, Krishna; Park, Eun-A

    2012-01-01

    The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

  6. Computational Chemistry Using Modern Electronic Structure Methods

    ERIC Educational Resources Information Center

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now be easily performed even for large molecules.

  7. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  8. Updated Panel-Method Computer Program

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1995-01-01

    Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow, a time-stepping wake model for simulating either steady or unsteady motions, a capability for Trefftz-plane computation of induced drag, a capability for computation of off-body and on-body streamlines, and a capability for computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.

  9. Computing discharge using the index velocity method

    USGS Publications Warehouse

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression techniques in which the mean cross-sectional velocity for the standard section is related to the measured index velocity. Most ratings are simple linear regressions, but more complex ratings may be necessary in some cases. Once the rating is established, validation measurements should be made periodically. Over time, validation measurements may provide additional definition to the rating or result in the creation of a new rating. The computation of discharge is the last step in the index velocity method, and in some ways it is the most straightforward step. This step differs little from the steps used to compute discharge records for stage-discharge gaging stations. The ratings are entered into database software used for records computation, and continuous records of discharge are computed.
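
    The two-rating structure described here reduces to a few lines of arithmetic. The sketch below fits a linear index rating, pairs it with a toy stage-area rating for a rectangular section, and multiplies the outputs; all numbers are synthetic, not USGS data.

      # Index velocity method sketch: Q = V * A from two ratings.
      import numpy as np

      # Calibration measurements: ADVM index velocity vs. mean channel
      # velocity from discharge measurements (m/s), synthetic values.
      v_index = np.array([0.10, 0.25, 0.40, 0.60, 0.85])
      v_mean = np.array([0.12, 0.28, 0.47, 0.68, 0.95])
      slope, intercept = np.polyfit(v_index, v_mean, 1)   # simple linear rating

      def area_from_stage(stage_m):
          # Toy stage-area rating for a rectangular standard cross section.
          width = 20.0            # channel width (m), assumed
          return width * stage_m

      # Continuous record computation at one time step:
      stage, v_idx = 1.8, 0.52    # sensed stage (m) and index velocity (m/s)
      V = slope * v_idx + intercept   # mean channel velocity from index rating
      A = area_from_stage(stage)      # area from stage-area rating
      Q = V * A                       # discharge (m^3/s)
      print(f"V={V:.2f} m/s, A={A:.1f} m^2, Q={Q:.1f} m^3/s")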

  10. Computational stoning method for surface defect detection

    NASA Astrophysics Data System (ADS)

    Ma, Ninshu; Zhu, Xinhai

    2013-12-01

    Surface defects on outer panels of automotive bodies must be controlled in order to improve surface quality. The detection and quantitative evaluation of surface defects are quite difficult because the deflection of surface defects is very small. One detection method used in factories is the stoning method, in which a stone block is moved over the surface of a stamped panel. The authors developed a computational stoning method to detect surface lows, based on a geometric contact algorithm between a stone block and a stamped panel. If the surface is convex, the stone block always contacts the convex surface of the stamped panel and the contact gap between them is zero. If there is a surface low, the stone block does not contact the surface, and the contact gap can be computed with the contact algorithm. Convex surface defects can also be detected by applying the computational stoning method to the back surface of a stamped panel. By performing two-way stoning computations from both the normal surface and the back surface, not only the depth of surface lows but also the height of convex surface defects can be detected. Surface lows and convex defects can also be detected along multiple directions. Surface defects on the handle emboss of outer panels were accurately detected using the computational stoning method and compared with the real shape. Very good accuracy was obtained.
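
    A one-dimensional sketch of the contact idea: slide a rigid straightedge over a surface profile and report the gap under it; a positive gap marks a surface low, and running the same test on the negated profile finds convex defects. This is a simplified stand-in for the paper's stone-block contact algorithm, with synthetic data.

      # Contact-gap sketch: gap between a profile and its upper convex hull
      # (the straightedge's resting line), via a monotone-chain hull.
      import numpy as np

      def upper_hull_gap(x, z):
          hull = []                     # indices of upper-hull points
          for i in range(len(x)):
              while len(hull) >= 2:
                  x1, z1 = x[hull[-2]], z[hull[-2]]
                  x2, z2 = x[hull[-1]], z[hull[-1]]
                  # Drop the last hull point if it lies on or below the chord
                  # from hull[-2] to the current point (keeps the upper hull).
                  if (z2 - z1) * (x[i] - x1) <= (z[i] - z1) * (x2 - x1):
                      hull.pop()
                  else:
                      break
              hull.append(i)
          return np.interp(x, x[hull], z[hull]) - z   # hull minus surface

      x = np.linspace(0, 100, 501)                    # mm along the panel
      z = 0.001 * x - 0.004 * np.exp(-((x - 60) ** 2) / 20)  # slope + shallow low
      gap = upper_hull_gap(x, z)
      print(f"max surface-low depth: {gap.max()*1000:.1f} micrometres "
            f"at x = {x[gap.argmax()]:.0f} mm")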

  11. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
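
    The patent's fixed-time idea can be mocked up in a few lines: give the machine a fixed interval, let it work through a scalable task set at ever-finer resolution, and rate it by the resolution reached. The trapezoid-rule workload below is an illustrative stand-in, not the patented task set.

      # Fixed-time benchmarking sketch: rate = finest resolution completed
      # within the allotted interval.
      import math
      import time

      def benchmark(interval_s=1.0):
          deadline = time.perf_counter() + interval_s
          n = 2
          while time.perf_counter() < deadline:
              # Task at the next degree of resolution: integrate sin(x)
              # on [0, pi] with n trapezoids (exact answer is 2).
              h = math.pi / n
              total = sum(math.sin(i * h) for i in range(1, n)) * h
              n *= 2
          return n // 2, total

      rating, answer = benchmark()
      print(f"resolution reached: {rating} panels (integral = {answer:.6f})")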

  12. Leg stiffness measures depend on computational method.

    PubMed

    Hébert-Losier, Kim; Eriksson, Anders

    2014-01-01

    Leg stiffness is often computed from ground reaction force (GRF) registrations of vertical hops to estimate the force-resisting capacity of the lower extremity during ground contact, with leg stiffness values incorporated in a spring-mass model to describe human motion. Individual biomechanical characteristics, including leg stiffness, were investigated in 40 healthy males. Our aim is to report and discuss the use of 13 different computational methods for evaluating leg stiffness from a double-legged repetitive hopping task, using only GRF registrations. Four approximations for the velocity integration constant were combined with three mathematical expressions, giving 12 methods for computing stiffness using double integration. One frequency-based method that considered ground contact times was also trialled. The 13 methods thus defined were used to compute stiffness in four extreme cases: the stiffest, most compliant, most consistent, and most variable subjects. All methods provided different stiffness measures for a given individual, but the between-method variations in stiffness were consistent across the four atypical subjects. The frequency-based method apparently overestimated the actual stiffness values, whereas the double-integration measures were more consistent. In double integration, the choice of the integration constant and the mathematical expression considerably affected stiffness values, as variations during hopping were more or less emphasized. Stating a zero centre-of-mass position at take-off gave more consistent results, and taking a weighted average of the force or displacement curve was more forgiving of variations in performance. In any case, stiffness values should always be accompanied by a detailed description of their evaluation methods, as our results demonstrated that computational methods affect calculated stiffness. PMID:24188972
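
    One double-integration variant of the kind compared in the paper can be sketched as follows: obtain acceleration from the vertical GRF, integrate twice, choose the integration constant so the centre-of-mass position is zero at take-off (one of the four constants the abstract mentions), and divide peak force by peak displacement. The half-sine GRF below is synthetic; the paper's exact expressions differ.

      # Leg stiffness from GRF via double integration (illustrative variant).
      import numpy as np

      m, g = 75.0, 9.81                     # body mass (kg) and gravity (m/s^2)
      t = np.linspace(0, 0.3, 301)          # contact phase (s)
      grf = 2.5 * m * g * np.sin(np.pi * t / 0.3)   # toy half-sine GRF (N)

      a = grf / m - g                       # centre-of-mass acceleration
      v0 = np.concatenate([[0], np.cumsum((a[1:] + a[:-1]) / 2 * np.diff(t))])
      y0 = np.concatenate([[0], np.cumsum((v0[1:] + v0[:-1]) / 2 * np.diff(t))])

      c = -y0[-1] / t[-1]                   # constant enforcing y(take-off) = 0
      y = y0 + c * t                        # centre-of-mass displacement

      k = grf.max() / abs(y.min())          # leg stiffness (N/m)
      print(f"leg stiffness = {k/1000:.1f} kN/m")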

  13. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    37 CFR Part 201 (General Provisions), § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  14. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    37 CFR Part 201 (General Provisions), § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures...

  15. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    37 CFR Part 201 (General Provisions), § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  16. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    37 CFR Part 201 (General Provisions), § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  17. Survey of Public IaaS Cloud Computing API

    NASA Astrophysics Data System (ADS)

    Yamato, Yoji; Moriya, Takaaki; Ogawa, Takeshi; Akahani, Junichi

    Recently, Cloud computing has spread rapidly, and many Cloud providers have started their Cloud services. One of the problems with Cloud computing is “Cloud provider lock-in” for users. In practice, Cloud computing management APIs such as ordering or provisioning differ for each Cloud provider, so users need to study and implement new APIs when they change Cloud providers. OGF and DMTF have started discussions on the standardization of Cloud computing APIs, but there is no standard yet. In this technical note, to clarify what APIs Cloud providers should provide, we study common APIs for Cloud computing. We survey and compare Cloud computing APIs such as Rackspace Cloud Server, Sun Cloud, GoGrid, ElasticHosts, Amazon EC2, and FlexiScale, which are currently provided as public IaaS Cloud APIs in the market. From the survey, the common APIs should support a REST access style and provide account management, virtual server management, storage management, network management, and resource usage management capabilities. We also show an example of OSS that provides these common APIs, compared to the OSS of normal hosting services.

  18. Distributed sequence alignment applications for the public computing architecture.

    PubMed

    Pellicer, S; Chen, G; Chan, K C C; Pan, Y

    2008-03-01

    The public computing architecture shows promise as a platform for solving fundamental problems in bioinformatics such as global gene sequence alignment and data mining with tools such as the basic local alignment search tool (BLAST). Our implementation of these two problems on the Berkeley open infrastructure for network computing (BOINC) platform demonstrates a runtime reduction factor of 1.15 for sequence alignment and 16.76 for BLAST. While the runtime reduction factor of the global gene sequence alignment application is modest, this value is based on a theoretical sequential runtime extrapolated from the calculation of a smaller problem. Because this runtime is extrapolated from running the calculation in memory, the theoretical sequential runtime would require 37.3 GB of memory on a single system. With this in mind, the BOINC implementation not only offers the reduced runtime, but also the aggregation of the available memory of all participant nodes. If an actual sequential run of the problem were compared, a more drastic reduction in the runtime would be seen due to the additional secondary storage I/O overhead of a practical system. Despite the limitations of the public computing architecture, most notably in communication overhead, it represents a practical platform for grid- and cluster-scale bioinformatics computations today and shows great potential for future implementations. PMID:18334454

  19. Semiempirical methods for computing turbulent flows

    NASA Technical Reports Server (NTRS)

    Belov, I. A.; Ginzburg, I. P.

    1986-01-01

    Two semiempirical theories which provide a basis for determining the turbulent friction and heat exchange near a wall are presented: (1) the Prandtl-Karman theory, and (2) the theory utilizing an equation for the energy of turbulent pulsations. A comparison is made between exact numerical methods and approximate integral methods for computing the turbulent boundary layers in the presence of pressure, blowing, or suction gradients. Using the turbulent flow around a plate as an example, it is shown that, when computing turbulent flows with external turbulence, it is preferable to construct a turbulence model based on the equation for energy of turbulent pulsations.

  20. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  1. Computational Methods for Failure Analysis and Life Prediction

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Harris, Charles E. (Compiler); Housner, Jerrold M. (Compiler); Hopkins, Dale A. (Compiler)

    1993-01-01

    This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

  2. Soft computing methods for geoidal height transformation

    NASA Astrophysics Data System (ADS)

    Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.

    2009-07-01

    Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.
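    A minimal sketch of the ANN variant of such an approximation, assuming scikit-learn and synthetic training points (latitude and longitude mapped to geoid height); the paper's own ANFIS and modified-ANN models are more elaborate.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # synthetic stand-in for GPS/levelling data: scattered (lat, lon) -> N
        rng = np.random.default_rng(0)
        X = rng.uniform([40.0, 28.0], [42.0, 30.0], size=(200, 2))
        y = (36.0 + 0.8*(X[:, 0] - 41.0) - 1.2*(X[:, 1] - 29.0)
             + 0.05*rng.standard_normal(200))

        model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                             random_state=0).fit(X, y)
        N_hat = model.predict([[41.1, 29.0]])   # geoid height at a new point (m)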

  3. Efficient Methods to Compute Genomic Predictions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

  4. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  5. Applying Human Computation Methods to Information Science

    ERIC Educational Resources Information Center

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  6. Shifted power method for computing tensor eigenvalues.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-07-01

    Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^{m-1} = λx subject to ‖x‖ = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
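    A minimal NumPy sketch of SS-HOPM for an order-3 symmetric tensor, following the shifted update x ← (Ax² + αx)/‖Ax² + αx‖; the paper's guidance on choosing the shift α and its fixed-point analysis are omitted here.

        import numpy as np
        from itertools import permutations

        def ss_hopm(A, alpha=2.0, tol=1e-10, max_iter=500, seed=0):
            """Seek an eigenpair A x^2 = lam * x with ||x|| = 1 (m = 3)."""
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(A.shape[0])
            x /= np.linalg.norm(x)
            lam = np.einsum('ijk,i,j,k->', A, x, x, x)
            for _ in range(max_iter):
                x = np.einsum('ijk,j,k->i', A, x, x) + alpha * x  # shifted step
                x /= np.linalg.norm(x)
                lam_new = np.einsum('ijk,i,j,k->', A, x, x, x)
                if abs(lam_new - lam) < tol:
                    break
                lam = lam_new
            return lam_new, x

        # usage: symmetrize a random tensor, then iterate
        T = np.random.default_rng(1).standard_normal((4, 4, 4))
        A = sum(np.transpose(T, p) for p in permutations(range(3))) / 6.0
        lam, x = ss_hopm(A)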

  7. Referees Often Miss Obvious Errors in Computer and Electronic Publications

    NASA Astrophysics Data System (ADS)

    de Gloucester, Paul Colin

    2013-05-01

    Misconduct is extensive and damaging. So-called science is prevalent, and articles resulting from it are often cited in other publications. This can have damaging consequences for society and for science. The present work includes a scientometric study of 350 articles (published by the Association for Computing Machinery; Elsevier; The Institute of Electrical and Electronics Engineers, Inc.; John Wiley; Springer; Taylor & Francis; and World Scientific Publishing Co.). At least 85.4% of the articles are found to be incongruous. Authors cite inherently self-contradictory articles more than valid articles. Incorrect informational cascades ruin the literature's signal-to-noise ratio even for uncomplicated cases.

  8. Experience of public procurement of Open Compute servers

    NASA Astrophysics Data System (ADS)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

    The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal of developing servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusion was that the savings are sufficient to motivate an attempt to procure a large-scale installation. One objective is to evaluate whether the OCP market is sufficiently mature and broad to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  9. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

  10. Computational Methods for MOF/Polymer Membranes.

    PubMed

    Erucar, Ilknur; Keskin, Seda

    2016-04-01

    Metal-organic framework (MOF)/polymer mixed matrix membranes (MMMs) have received significant interest in the last decade. MOFs are incorporated into polymers to make MMMs that exhibit improved gas permeability and selectivity compared with pure polymer membranes. The fundamental challenge in this area is to choose the appropriate MOF/polymer combinations for a gas separation of interest. Even if a single polymer is considered, there are thousands of MOFs that could potentially be used as fillers in MMMs. As a result, there has been a large demand for computational studies that can accurately predict the gas separation performance of MOF/polymer MMMs prior to experiments. We have developed computational approaches to assess gas separation potentials of MOF/polymer MMMs and used them to identify the most promising MOF/polymer pairs. In this Personal Account, we aim to provide a critical overview of current computational methods for modeling MOF/polymer MMMs. We give our perspective on the background, successes, and failures that led to developments in this area and discuss the opportunities and challenges of using computational methods for MOF/polymer MMMs. PMID:26842308
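    The account does not reproduce its models, but a common first approximation in this literature is the Maxwell model for the effective permeability of a filler/polymer composite; the sketch below is that textbook formula, not necessarily the authors' method, and the numbers are made up.

        def maxwell_permeability(P_c, P_d, phi_d):
            """Maxwell model for a mixed matrix membrane (dilute filler).
            P_c: polymer-phase permeability, P_d: MOF-phase permeability,
            phi_d: filler volume fraction."""
            return P_c * ((P_d + 2*P_c - 2*phi_d*(P_c - P_d))
                          / (P_d + 2*P_c + phi_d*(P_c - P_d)))

        # illustrative CO2/CH4 selectivity of a hypothetical MMM at 20% loading
        P_co2 = maxwell_permeability(P_c=10.0, P_d=2000.0, phi_d=0.2)
        P_ch4 = maxwell_permeability(P_c=0.25, P_d=100.0, phi_d=0.2)
        selectivity = P_co2 / P_ch4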

  11. Parallel computer methods for eigenvalue extraction

    NASA Technical Reports Server (NTRS)

    Akl, Fred

    1988-01-01

    A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is used in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. The advantages of this new algorithm in parallel computer architecture are discussed.

  12. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  13. Teacher Perspectives on the Current State of Computer Technology Integration into the Public School Classroom

    ERIC Educational Resources Information Center

    Zuniga, Ramiro

    2009-01-01

    Since the introduction of computers into the public school arena over forty years ago, educators have been convinced that the integration of computer technology into the public school classroom will transform education. Joining educators are state and federal governments. Public schools and others involved in the process of computer technology…

  14. Computations of entropy bounds: Multidimensional geometric methods

    SciTech Connect

    Makaruk, H.E.

    1998-02-01

    The entropy bound, a constructive upper bound on the number of bits needed to solve a dichotomy, is represented by the quotient of two multidimensional solid volumes. Minimizing this upper bound requires exact calculation of the volume of this quotient. Three methods for exactly computing the volume of a given nD solid are presented: (1) a general method for calculating any nD volume by slicing it into volumes of decreasing dimension; (2) a method applying an appropriate curvilinear coordinate system, for volumes bounded by symmetrical curvilinear hypersurfaces (spheres, cones, hyperboloids, ellipsoids, cylinders, etc.); and (3) an algorithm for dividing any nD complex into simplices and computing the volumes of the simplices, supplemented by a general formula for the volume of an nD simplex. These mathematical methods enable exact calculation of the volume of any complicated multidimensional solid. They allow calculation of the minimal volume and lead to tighter bounds on the needed number of bits.
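    For method (3), the volume of an nD simplex follows from a determinant; a minimal sketch of that formula (presumably the general formula the abstract refers to):

        import numpy as np
        from math import factorial

        def simplex_volume(vertices):
            """Volume of an n-simplex from its n+1 vertices (rows):
            V = |det(v1 - v0, ..., vn - v0)| / n!"""
            v = np.asarray(vertices, dtype=float)
            n = v.shape[0] - 1
            return abs(np.linalg.det(v[1:] - v[0])) / factorial(n)

        # the unit 3-simplex (tetrahedron) has volume 1/6
        print(simplex_volume([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))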

  15. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
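    The abstract does not reproduce Bayard's expressions, but the flavor of a state-space computation can be sketched: for a stable linear system driven by white noise, the steady-state covariance solves a Lyapunov equation, so an rms output requires no frequency-domain integration. The example below is a generic illustration under that assumption, not the paper's exact formulation.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def rms_output(A, B, C):
            # steady-state covariance P of dx = A x dt + B dW solves
            # A P + P A^T + B B^T = 0; rms of y = C x is sqrt(C P C^T)
            P = solve_continuous_lyapunov(A, -B @ B.T)
            return (C @ P @ C.T).item() ** 0.5

        # lightly damped pointing mode driven by disturbance noise (made-up numbers)
        A = np.array([[0.0, 1.0], [-4.0, -0.2]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        jitter_rms = rms_output(A, B, C)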

  16. Accelerated matrix element method with parallel computing

    NASA Astrophysics Data System (ADS)

    Schouten, D.; DeAbreu, A.; Stelzer, B.

    2015-07-01

    The matrix element method utilizes ab initio calculations of probability densities as powerful discriminants for processes of interest in experimental particle physics. The method has already been used successfully at previous and current collider experiments. However, the computational complexity of this method for final states with many particles and degrees of freedom sets it at a disadvantage compared to supervised classification methods such as decision trees, k nearest-neighbor, or neural networks. This note presents a concrete implementation of the matrix element technique using graphics processing units. Due to the intrinsic parallelizability of multidimensional integration, dramatic speedups can be readily achieved, which makes the matrix element technique viable for general usage at collider experiments.

  17. Probabilistic Computational Methods in Structural Failure Analysis

    NASA Astrophysics Data System (ADS)

    Krejsa, Martin; Kralik, Juraj

    2015-12-01

    Probabilistic methods are used in engineering where a computational model contains random variables, each of which carries uncertainty. Typical sources of uncertainty are material properties, production and/or assembly inaccuracies in the geometry, and the environment in which the structure is located. The paper focuses on methods for calculating failure probabilities in structural failure and reliability analysis, with special attention to a newly developed probabilistic method, Direct Optimized Probabilistic Calculation (DOProC), which is highly efficient in terms of calculation time and solution accuracy. The novelty of the proposed method lies in an optimized numerical integration that does not require any simulation technique. The algorithm has been implemented in software applications and has been used several times in probabilistic tasks and probabilistic reliability assessments.
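    DOProC itself is not publicly specified in this record, but its core idea, obtaining a failure probability by direct numerical integration of discretized distributions rather than by simulation, can be sketched for the classic resistance-load margin R - S; the distributions below are illustrative.

        import numpy as np

        # failure probability P(R < S) for independent resistance R ~ N(40, 4)
        # and load S ~ N(25, 5), via p_f = integral of f_S(s) * F_R(s) ds
        x = np.linspace(0.0, 60.0, 2001)
        dx = x[1] - x[0]
        fR = np.exp(-0.5*((x - 40.0)/4.0)**2) / (4.0*np.sqrt(2*np.pi))
        fS = np.exp(-0.5*((x - 25.0)/5.0)**2) / (5.0*np.sqrt(2*np.pi))
        FR = np.cumsum(fR) * dx                  # discretized CDF of R
        p_f = np.sum(fS * FR) * dx               # ~0.0096, no sampling involved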

  19. Teaching Practical Public Health Evaluation Methods

    ERIC Educational Resources Information Center

    Davis, Mary V.

    2006-01-01

    Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

  20. Numerical methods for problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Mead, Jodi Lorraine

    1998-12-01

    A goal of computational aeroacoustics is the accurate calculation of noise from a jet in the far field. This work concerns the numerical aspects of accurately calculating acoustic waves over large distances and long times. More specifically, the stability, efficiency, accuracy, dispersion and dissipation in spatial discretizations, time stepping schemes, and absorbing boundaries for the direct solution of wave propagation problems are determined. Efficient finite difference methods developed by Tam and Webb, which minimize dispersion and dissipation, are commonly used for the spatial and temporal discretization. Alternatively, high order pseudospectral methods can be made more efficient by using the grid transformation introduced by Kosloff and Tal-Ezer. Work in this dissertation confirms that the grid transformation introduced by Kosloff and Tal-Ezer is not spectrally accurate because, in the limit, the grid transformation forces zero derivatives at the boundaries. If a small number of grid points are used, it is shown that approximations with the Chebyshev pseudospectral method with the Kosloff and Tal-Ezer grid transformation are as accurate as with the Chebyshev pseudospectral method. This result is based on the analysis of the phase and amplitude errors of these methods, and their use for the solution of a benchmark problem in computational aeroacoustics. For the grid transformed Chebyshev method with a small number of grid points it is, however, more appropriate to compare its accuracy with that of high-order finite difference methods. This comparison, for an accuracy of 10^-3 on a benchmark problem in computational aeroacoustics, is performed for the grid transformed Chebyshev method and the fourth order finite difference method of Tam. Solutions with the finite difference method are as accurate as, and the finite difference method is more efficient than, the Chebyshev pseudospectral method with the grid transformation. The efficiency of the Chebyshev pseudospectral method is further improved by developing Runge-Kutta methods for the temporal discretization which maximize imaginary stability intervals. Two new Runge-Kutta methods, which allow time steps almost twice as large as the maximal order schemes while holding dissipation and dispersion fixed, are developed. In the process of studying dispersion and dissipation, it is determined that maximizing dispersion minimizes dissipation, and vice versa. In order to determine accurate and efficient absorbing boundary conditions, absorbing layers are studied and compared with one-way wave equations. The matched layer technique for Maxwell equations is equivalent to the absorbing layer technique for the acoustic wave equation introduced by Kosloff and Kosloff. The numerical implementation of the perfectly matched layer for the acoustic wave equation with a large damping parameter results in a small portion of the wave transmitting into the absorbing layer. A large damping parameter also results in a large portion of the wave reflecting back into the domain. The perfectly matched layer is implemented on a single domain for the solution of the second order wave equation, and when implemented in this manner shows no advantage over the matched layer. Solutions of the second order wave equation, with the absorbing boundary condition imposed either by the matched layer or by the one-way wave equations, are compared. The comparison shows no advantage of the matched layer over the one-way wave equation for the absorbing boundary condition. Hence there is no benefit to be gained by using the matched layer, which necessarily increases the size of the computational domain.

  1. Delamination detection using methods of computational intelligence

    NASA Astrophysics Data System (ADS)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable for preventing potential catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e., changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets; K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and robustness of the approach.
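    A minimal sketch of the inverse-modeling step, assuming scikit-learn and synthetic data in place of the paper's FE-generated training sets: measured shifts of the first few natural frequencies in, delamination parameters out.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # synthetic stand-in for FE data: rows are shifts of 4 natural
        # frequencies; targets are (location, size) of the delamination
        rng = np.random.default_rng(0)
        targets = rng.uniform([0.1, 0.01], [0.9, 0.20], size=(300, 2))
        shifts = np.column_stack([
            targets[:, 1] * np.sin(np.pi * k * targets[:, 0])
            for k in range(1, 5)
        ]) + 0.001 * rng.standard_normal((300, 4))

        inverse_model = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=5000,
                                     random_state=0).fit(shifts, targets)
        loc, size = inverse_model.predict(shifts[:1])[0]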

  2. 47 CFR 61.32 - Method of filing publications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Method of filing publications. 61.32 Section 61.32 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) TARIFFS General Rules for Dominant Carriers § 61.32 Method of filing publications. (a) Publications...

  3. Interpolation method in simple computed tomography scanner

    NASA Astrophysics Data System (ADS)

    Wiguna, Gede A.

    2015-03-01

    A method for sinogram data interpolation based on the sinusoidal pattern in computed tomography has been developed. Sampled sinograms were acquired at angular scanning intervals of 5°, 10°, and 20°. Each resulting sinogram was then interpolated following the sinusoidal pattern to produce a complete full-scan sinogram, as if it had been sampled at 1°. A formal summation convolved filtered back projection was then applied to each sinogram to yield a cross-sectional image. This method successfully interpolated a limited number of projections into a complete sinogram. It works for simple and homogeneous objects. However, for objects with highly varying physical properties, e.g. linear attenuation coefficient values, the interpolation strategy needs further refinement to produce good images.

  4. Review of Computational Stirling Analysis Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

    Nuclear thermal-to-electric power conversion carries the promise of longer-duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in current designs could be better understood. However, these engines are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of understanding of Stirling losses may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra Hi-Fi technique, is presented in detail.

  5. Implicit methods for computing chemically reacting flow

    NASA Technical Reports Server (NTRS)

    Li, C. P.

    1986-01-01

    The backward Euler scheme was used to solve a large system of inviscid flow and chemical rate equations in three spatial coordinates. The flow equations were integrated simultaneously in time by a conventional ADI factorization technique, then the species equations were solved by either simultaneous or successive techniques. The methods were evaluated in their efficiency and robustness for a hypersonic flow problem involving an aerobrake configuration. It was found that both implicit methods can effectively reduce the stiffness associated with the chemical production term and that the successive solution for the species was as stable as the simultaneous solution. The latter method is more economical because the computation time varies linearly with the number of species.
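    A minimal sketch of the implicit treatment credited here with taming the stiff chemical production term: one backward Euler step for dy/dt = f(y) solved by Newton iteration (the ADI factorization used for the flow equations is omitted).

        import numpy as np

        def backward_euler_step(f, jac, y, dt, iters=10, tol=1e-12):
            """Solve y_new = y + dt * f(y_new) by Newton iteration."""
            y_new = y.copy()
            for _ in range(iters):
                r = y_new - y - dt * f(y_new)            # implicit residual
                J = np.eye(len(y)) - dt * jac(y_new)     # residual Jacobian
                dy = np.linalg.solve(J, -r)
                y_new += dy
                if np.linalg.norm(dy) < tol:
                    break
            return y_new

        # toy stiff source term: fast relaxation of y0 toward y1
        f = lambda y: np.array([-1000.0*(y[0] - y[1]), -0.1*y[1]])
        jac = lambda y: np.array([[-1000.0, 1000.0], [0.0, -0.1]])
        y1 = backward_euler_step(f, jac, np.array([1.0, 0.5]), dt=0.1)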

  6. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.

  7. Comparison of Methods of Height Anomaly Computation

    NASA Astrophysics Data System (ADS)

    Mazurova, E.; Lapshin, A.; Menshova, A.

    2012-04-01

    As of today, accurate determination of height anomaly is one of the most difficult problems of geodesy, even with the continuing refinement of mathematical methods and computing capabilities. The most effective methods of height anomaly computation are based on discrete linear transformations, such as the Fast Fourier Transform (FFT), the Short-Time Fourier Transform (STFT), and the Fast Wavelet Transform (FWT). The main drawback of the classical FFT is weak localization in the time domain. If it is necessary to determine the time interval over which a frequency is present, the STFT is used; it allows one to detect the presence of any frequency signal and the interval of its presence, which expands the possibilities of the method in comparison with the classical Fourier Transform. However, subject to Heisenberg's uncertainty principle, it is impossible to tell precisely which frequency signal is present at a given moment of time (one can speak only of a range of frequencies), and it is impossible to tell at precisely what moment of time a frequency signal is present (one can speak only of a time span). A wavelet transform reduces the influence of the Heisenberg uncertainty principle on the resulting time-and-frequency representation of the signal: low frequencies are represented in more detail with respect to time, and high frequencies with respect to frequency. The paper summarizes the results of height anomaly calculations done by the FFT, STFT, and FWT methods and presents 3-D models of the calculation results. Key words: Fast Fourier Transform (FFT), Short-Time Fourier Transform (STFT), Fast Wavelet Transform (FWT), Heisenberg's uncertainty principle.

  8. Review: Computer Methods in Membrane Biomechanics.

    PubMed

    Humphrey, J. D.

    1998-01-01

    The purpose of this paper is twofold: first, to review analytical, experimental, and numerical methods for studying the nonlinear, pseudoelastic behavior of membranes of interest in biomechanics, and second, to present illustrative examples from the literature for a variety of biomembranes (e.g., skin, pericardium, pleura, aneurysms, and cells) as well as elastomeric membranes used in balloon catheters and new cell stretching tests. Although a membrane approach affords great simplifications in comparison to the three-dimensional theory of nonlinear elasticity, associated problems are still challenging. Computer-based methods are essential, therefore, for performing the requisite experiments, analyzing data, and solving boundary and initial value problems. Emphasis is on stable equilibria although material instabilities and elastodynamics are discussed. PMID:11264803

  9. Monte Carlo methods on advanced computer architectures

    SciTech Connect

    Martin, W.R.

    1991-12-31

    Monte Carlo methods describe a wide class of computational methods that utilize random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or may be used to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo came about during the Manhattan Project to describe the new mathematical methods being developed, which had some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods and is the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods, and references to other applications of Monte Carlo can be found in that excellent book. For the remainder of this paper, the term Monte Carlo will be synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.
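    As a one-line illustration of the first, non-stochastic use mentioned above, a Monte Carlo estimate of a definite integral:

        import numpy as np

        # estimate the integral of exp(-x^2) over [0, 1] by averaging the
        # integrand at uniform random samples
        rng = np.random.default_rng(0)
        x = rng.random(1_000_000)
        estimate = np.exp(-x**2).mean()   # ~0.7468; exact is sqrt(pi)/2 * erf(1)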

  11. The Contingent Valuation Method in Public Libraries

    ERIC Educational Resources Information Center

    Chung, Hye-Kyung

    2008-01-01

    This study aims to present a new model measuring the economic value of public libraries, combining the dissonance minimizing (DM) and information bias minimizing (IBM) format in the contingent valuation (CV) surveys. The possible biases which are tied to the conventional CV surveys are reviewed. An empirical study is presented to compare the model…

  12. Computational methods for optical molecular imaging

    PubMed Central

    Chen, Duan; Wei, Guo-Wei; Cong, Wen-Xiang; Wang, Ge

    2010-01-01

    A new computational technique, the matched interface and boundary (MIB) method, is presented to model photon propagation in biological tissue for optical molecular imaging. Optical properties differ significantly between organs of small animals, resulting in discontinuous coefficients in the diffusion equation model. The complex organ shapes of small animals induce singularities in the geometric model as well. The MIB method is designed as a dimension-splitting approach that decomposes a multidimensional interface problem into one-dimensional ones. The methodology simplifies the topological relations near an interface and is able to handle discontinuous coefficients and complex interfaces with geometric singularities. In the present MIB method, both the interface jump condition and the photon flux jump conditions are rigorously enforced at the interface location by using only the lowest-order jump conditions. The solution near the interface is smoothly extended across the interface so that central finite difference schemes can be employed without loss of accuracy. A wide range of numerical experiments are carried out to validate the proposed MIB method. Second-order convergence is maintained in all benchmark problems, and fourth-order convergence is also demonstrated for some three-dimensional problems. The robustness of the proposed method with respect to the strength of the linear term of the diffusion equation is also examined. The performance of the present approach is compared with that of the standard finite element method. The numerical study indicates that the proposed method is a potentially efficient and robust approach for optical molecular imaging. PMID:20485461

  13. Computational electromagnetic methods for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel high-permittivity and low-frequency stable internally combined volume-surface IE method was developed. The method not only applies to the analysis of high-permittivity objects, but is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil arrays stimulate 2.4 cm into the head while stimulating 3.0 times less volume than Figure-8 coils. Uncertainty quantification (UQ): The location/volume/depth of the stimulated region during TMS is often strongly affected by variability in the position and orientation of TMS coils, as well as anatomical differences between patients. A surrogate model-assisted UQ framework was developed and used to statistically characterize TMS depression therapy. The framework identifies key parameters that strongly affect TMS fields, and partially explains variations in TMS treatment responses.

  14. Computational predictive methods for fracture and fatigue

    NASA Technical Reports Server (NTRS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-01-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures insure that damages developed during service remain below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulk heads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  16. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 2 2010-07-01 2010-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  17. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 2 2012-07-01 2012-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  18. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 2 2011-07-01 2011-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  19. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 2 2014-07-01 2014-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  20. Survey of the Computer Users of the Upper Arlington Public Library.

    ERIC Educational Resources Information Center

    Tsardoulias, L. Sevim

    The Computer Services Department of the Upper Arlington Public Library in Franklin County, Ohio, provides microcomputers for public use, including IBM compatible and Macintosh computers, a laser printer, and dot-matrix printers. Circulation statistics provide data regarding the frequency and amount of computer use, but these statistics indicate…

  1. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 2 2013-07-01 2013-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  2. Computational simulation methods for composite fracture mechanics

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1988-01-01

    Structural integrity, durability, and damage tolerance of advanced composites are assessed by studying damage initiation at various scales (micro, macro, and global) and accumulation and growth leading to global failure, quantitatively and qualitatively. In addition, various fracture toughness parameters associated with a typical damage and its growth must be determined. Computational structural analysis codes to aid the composite design engineer in performing these tasks were developed. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites given a prescribed damage. The general purpose finite element code MSC/NASTRAN was used to simulate the interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.

  3. Symbolic Substitution Methods For Optical Computing

    NASA Astrophysics Data System (ADS)

    Murdocca, M. J.; Huang, A.

    1989-02-01

    Symbolic substitution is a method of computing based on parallel binary pattern replacement that can be implemented with simple optical components and regular free-space interconnection schemes. A two-dimensional pattern is searched for in parallel in an array and is replaced with another pattern. Pattern transformation rules can be applied sequentially or in parallel to realize complex functions. When the substitution space is modified to be log₂ N connected for N binary spots, and masks are allowed to customize the system, optical digital circuits using symbolic substitution for network interconnects can be made nearly as efficient, in terms of gate count and circuit depth, as conventional arbitrary interconnection schemes allow. We describe an optical setup that requires no more than a fan-in and fan-out of two, using optically nonlinear logic devices and a free-space interconnection scheme based on symbolic substitution.

  4. Method and system for cardiac computed tomography

    SciTech Connect

    Harell, G.S.; Morehouse, C.C.; Seppi, E.J.

    1980-01-08

    A system and method are set forth enabling reconstruction of images of desired ''frozen action'' cross-sections of the heart or of other bodily organs or similar objects undergoing cyclic displacements. Utilizing a computed tomography scanning apparatus, data are acquired during one or more full rotational cycles and suitably stored. The data corresponding to the various angular projections can then be correlated with the desired portion of the object's cyclical motion by means of a reference signal associated with the motion, such as one derived from an electrocardiogram, where the heart is the object of interest. Data taking can also be limited to only the times when the desired portion of the cyclical motion is occurring. A sequential presentation of a plurality of the frozen-action cross-sections provides a motion picture of the moving object.

  5. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  6. Modules and methods for all photonic computing

    DOEpatents

    Schultz, David R.; Ma, Chao Hung

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

  8. Computational Evaluation of the Traceback Method

    ERIC Educational Resources Information Center

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  9. User's guide to SAC, a computer program for computing discharge by slope-area method

    USGS Publications Warehouse

    Fulford, Janice M.

    1994-01-01

    This user's guide contains information on using the slope-area program, SAC. SAC can be used to compute peak flood discharges from measurements of high-water marks along a stream reach. The slope-area method used by the program is the U.S. Geological Survey (USGS) procedure presented in Techniques of Water-Resources Investigations of the U.S. Geological Survey, book 3, chapter A2, "Measurement of Peak Discharge by the Slope-Area Method." The program uses input files whose formats are compatible with those used by the water-surface profile program (WSPRO) described in Federal Highway Administration publication FHWA-IP-89-027. The guide briefly describes the slope-area method, documents the input requirements and the output produced, and demonstrates use of SAC.
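    SAC implements the USGS procedure cited above; as a hedged illustration of the underlying hydraulics only (not SAC's code), a single-section Manning computation in US customary units:

        # Manning's equation, US customary units: Q = (1.49/n) A R^(2/3) S^(1/2)
        def manning_discharge(n, area_ft2, hyd_radius_ft, slope):
            return (1.49 / n) * area_ft2 * hyd_radius_ft ** (2.0/3.0) * slope ** 0.5

        # illustrative channel: n = 0.035, A = 850 ft^2, R = 6.2 ft, S = 0.0015
        Q_cfs = manning_discharge(0.035, 850.0, 6.2, 0.0015)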

  10. Predicting the Number of Public Computer Terminals Needed for an On-Line Catalog: A Queuing Theory Approach.

    ERIC Educational Resources Information Center

    Knox, A. Whitney; Miller, Bruce A.

    1980-01-01

    Describes a method for estimating the number of cathode ray tube terminals needed for public use of an online library catalog. Authors claim method could also be used to estimate needed numbers of microform readers for a computer output microform (COM) catalog. Formulae are included. (Author/JD)
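    The article's formulae are not reproduced in this record; one plausible reading of such a queuing approach is M/M/c sizing with the Erlang C waiting probability, sketched below with made-up traffic figures.

        from math import factorial

        def erlang_c(c, a):
            """P(wait) in an M/M/c queue with offered load a erlangs (a < c)."""
            tail = a**c / factorial(c) * c / (c - a)
            return tail / (sum(a**k / factorial(k) for k in range(c)) + tail)

        a = 4.2                      # e.g. 42 users/hour with 6-minute sessions
        c = int(a) + 1
        while erlang_c(c, a) > 0.2:  # size for <20% of users having to wait
            c += 1
        # c is the estimated number of terminals needed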

  11. Computing Potential Assessment in Atlanta Public Schools Education. Report Number 2.

    ERIC Educational Resources Information Center

    Cobbs, Henry L., Jr.; Wilmoth, James Noel

    The Computing Potential in Atlanta Public School Education (CPAPSE) was developed to determine teacher attitudes about computing potential as an instructional tool and to compare current practice with potential computing applications to determine the degree to which computer resources are being used in grades 2, 3, and 4. During the last week of

  12. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

    This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. It is divided into two parts: the first discusses a new active-illumination depth sensing modality, while the second discusses a passive-illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage of this method is that the illumination and imaging axes can be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full-system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

  13. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

    EPA Science Inventory

    The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

  14. Radiological Protection in Cone Beam Computed Tomography (CBCT). ICRP Publication 129.

    PubMed

    Rehani, M M; Gupta, R; Bartling, S; Sharp, G C; Pauwels, R; Berris, T; Boone, J M

    2015-07-01

    The objective of this publication is to provide guidance on radiological protection in the new technology of cone beam computed tomography (CBCT). Publications 87 and 102 dealt with patient dose management in computed tomography (CT) and multi-detector CT. The new applications of CBCT and the associated radiological protection issues are substantially different from those of conventional CT. The perception that CBCT involves lower doses was only true in initial applications. CBCT is now used widely by specialists who have little or no training in radiological protection. This publication provides recommendations on radiation dose management directed at different stakeholders, and covers principles of radiological protection, training, and quality assurance aspects. Advice on appropriate use of CBCT needs to be made widely available. Advice on optimisation of protection when using CBCT equipment needs to be strengthened, particularly with respect to the use of newer features of the equipment. Manufacturers should standardise radiation dose displays on CBCT equipment to assist users in optimisation of protection and comparisons of performance. Additional challenges to radiological protection are introduced when CBCT-capable equipment is used for both fluoroscopy and tomography during the same procedure. Standardised methods need to be established for tracking and reporting of patient radiation doses from these procedures. The recommendations provided in this publication may evolve in the future as CBCT equipment and applications evolve. As with previous ICRP publications, the Commission hopes that imaging professionals, medical physicists, and manufacturers will use the guidelines and recommendations provided in this publication for implementation of the Commission's principle of optimisation of protection of patients and medical workers, with the objective of keeping exposures as low as reasonably achievable, taking into account economic and societal factors, and consistent with achieving the necessary medical outcomes. PMID:26116562

  15. Computational methods in sequence and structure prediction

    NASA Astrophysics Data System (ADS)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our results show that there exists a putative cis-regulatory element "AC(C/G)TAC(C)" upstream of these enzyme genes. We propose that this cis-regulatory element is responsible for the genetic regulation of these three enzymes, and that it might also be the binding site for the MYB-class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning for the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of the 876 Arabidopsis genes, and (b) exhaustively scanning for all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, the "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to its occurrence within the genome. (c) We will introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework. With this framework we have developed a software package which is capable of designing novel protein structures at atomic resolution. This software package allows us to perform protein structure design with a flexible backbone. The backbone flexibility includes loop region relaxation as well as a secondary structure collective mode relaxation scheme. (Abstract shortened by UMI.)
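
    The exhaustive motif scan described in (b) is easy to sketch. A toy version (hypothetical sequences; the dissertation's analysis additionally handles degenerate motifs, window lengths of 6-10 bp, and statistical significance testing):

      def kmer_counts(seqs, k):
          """Count occurrences of every k-mer across a set of sequences."""
          counts = {}
          for s in seqs:
              for i in range(len(s) - k + 1):
                  w = s[i:i + k]
                  counts[w] = counts.get(w, 0) + 1
          return counts

      def enrichment(foreground, background, k):
          """Per-k-mer frequency ratio: co-regulated upstream set vs. background."""
          fg, bg = kmer_counts(foreground, k), kmer_counts(background, k)
          fg_n, bg_n = sum(fg.values()), sum(bg.values())
          return {w: (fg[w] / fg_n) / (bg[w] / bg_n) for w in fg if w in bg}

      # Toy upstream regions (hypothetical): co-regulated genes vs. genome sample.
      fg = ["ACGTACGTACGT", "TTACGTACGTAA"]
      bg = ["ACGTGGGTTTAC", "CGCGCGATATAT", "ACGTACGGGTTT"]
      print(sorted(enrichment(fg, bg, 6).items(), key=lambda kv: -kv[1])[:3])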

  16. Effectiveness of Teaching Methods: Computer Literacy of End-Users.

    ERIC Educational Resources Information Center

    Gattiker, Urs E.; And Others

    Computer literacy has been identified as one of the most important factors for the effective use of computer-based technology in the workplace. Managers need to know the most efficient methods available to teach computer skills to their employees in a short time. Such methods need to be suitable for all employees, whether academically gifted or…

  17. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  18. Saving lives: a computer simulation game for public education about emergencies

    SciTech Connect

    Morentz, J.W.

    1985-01-01

    One facet of the Information Revolution in which the nation finds itself involves the utilization of computers, video systems, and a variety of telecommunications capabilities by those who must cope with emergency situations. Such technologies possess a significant potential for performing emergency public education and transmitting key information that is essential for survival. An "Emergency Public Information Competitive Challenge Grant," under the aegis of the Federal Emergency Management Agency (FEMA), has sponsored an effort to use computer technology - both large, time-sharing systems and small personal computers - to develop computer games which will help teach techniques of emergency management to the public at large. 24 references.

  19. Programs for Use in Teaching Research Methods for Small Computers

    ERIC Educational Resources Information Center

    Halley, Fred S.

    1975-01-01

    Description of Sociology Library (SOLIB), presented as a package of computer programs designed for smaller computers used in research methods courses and by students performing independent research. (Author/ND)

  20. Accreditation: A Method for Evaluating Public Park and Recreation Systems.

    ERIC Educational Resources Information Center

    Twardzik, Louis F.

    1987-01-01

    This article considers the concept of accreditation as a proper method of evaluating public park and recreation systems. Arguments for accreditation are presented, and the system used to evaluate college park and recreation curricula and administration is described. (MT)

  1. Universal Tailored Access: Automating Setup of Public and Classroom Computers.

    ERIC Educational Resources Information Center

    Whittaker, Stephen G.; Young, Ted; Toth-Cohen, Susan

    2002-01-01

    This article describes a setup smart access card that enables users with visual impairments to customize magnifiers and screen readers on computers by loading the floppy disk into the computer and finding and pressing two successive keys. A trial with four elderly users found instruction took about 15 minutes. (Contains 3 references.) (CR)

  2. The method of public morality versus the method of principlism.

    PubMed

    Green, R M; Gert, B; Clouser, K D

    1993-10-01

    Two years ago in two articles in a thematic issue of this journal the three of us engaged in a critique of principlism. In a subsequent issue, B. Andrew Lustig defended aspects of principlism we had criticized and argued against our own account of morality. Our reply to Lustig's critique is also in two parts, corresponding with his own. Our first part shows how Lustig's criticisms are seriously misdirected. Our second and philosophically more important part picks up on Lustig's challenge to us to show that our account of morality is more adequate than principlism. In particular we show that recognition of morality as public and systematic enables us to provide a far better description of morality than does principlism. This explains why we adopt the label "Dartmouth Descriptivism." PMID:8138741

  3. Numerical computation of polynomial zeros by means of Aberth's method

    NASA Astrophysics Data System (ADS)

    Bini, Dario

    1996-02-01

    An algorithm for computing polynomial zeros, based on Aberth's method, is presented. The starting approximations are chosen by means of a suitable application of Rouché's theorem. More precisely, an integer q ≥ 1 and a set of annuli A_i, i = 1, ..., q, in the complex plane, are determined together with the number k_i of zeros of the polynomial contained in each annulus A_i. As starting approximations we choose k_i complex numbers lying on a suitable circle contained in the annulus A_i, for i = 1, ..., q. The computation of Newton's correction is performed in such a way that overflow situations are removed. A suitable stop condition, based on a rigorous backward rounding error analysis, guarantees that the computed approximations are the exact zeros of a "nearby" polynomial. This implies the backward stability of our algorithm. We provide a Fortran 77 implementation of the algorithm which is robust against overflow and allows us to deal with polynomials of any degree, not necessarily monic, whose zeros and coefficients are representable as floating point numbers. In all the tests performed with more than 1000 polynomials having degrees from 10 up to 25,600 and randomly generated coefficients, the Fortran 77 implementation of our algorithm computed approximations to all the zeros within the relative precision allowed by the classical conditioning theorems with 11.1 average iterations. In the worst case the number of iterations needed has been at most 17. Comparisons with available public domain software and with the algorithm PA16AD of Harwell are performed and show the effectiveness of our approach. A multiprecision implementation in MATHEMATICA is presented together with the results of the numerical tests performed.
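
    The Aberth iteration at the heart of the algorithm is compact. A NumPy sketch (illustrative: starting points are placed naively on a single circle, whereas Bini's Fortran 77 code places them on circles inside the Rouché-derived annuli and guards against overflow):

      import numpy as np

      def aberth(coeffs, tol=1e-12, max_iter=100):
          """Simultaneous Aberth iteration for all zeros of a polynomial.

          coeffs are highest-degree-first, as in numpy.polyval."""
          p = np.asarray(coeffs, dtype=complex)
          dp = np.polyder(p)
          n = len(p) - 1
          r = 1 + max(abs(p[1:] / p[0]))           # crude inclusion radius
          z = r * np.exp(2j * np.pi * (np.arange(n) + 0.5) / n)
          for _ in range(max_iter):
              newton = np.polyval(p, z) / np.polyval(dp, z)
              # pairwise repulsion term of the Aberth correction
              rep = np.array([np.sum(1.0 / (z[k] - np.delete(z, k)))
                              for k in range(n)])
              step = newton / (1.0 - newton * rep)
              z -= step
              if np.max(np.abs(step)) < tol:
                  break
          return z

      print(np.sort_complex(aberth([1, -6, 11, -6])))   # zeros of x^3-6x^2+11x-6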

  4. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

  5. Awareness of Accessibility Barriers in Computer-Based Instructional Materials and Faculty Demographics at South Dakota Public Universities

    ERIC Educational Resources Information Center

    Olson, Christopher

    2013-01-01

    Advances in technology and course delivery methods have enabled persons with disabilities to enroll in higher education at an increasing rate. Federal regulations state persons with disabilities must be granted equal access to the information contained in computer-based instructional materials, but faculty at the six public universities in South…

  6. A method of billing third generation computer users

    NASA Technical Reports Server (NTRS)

    Anderson, P. N.; Hyter, D. R.

    1973-01-01

    A method is presented for charging users for the processing of their applications on third-generation digital computer systems. For background purposes, problems and goals in billing on third-generation systems are discussed. Detailed formulas are derived based on expected utilization and computer component cost. These formulas are then applied to a specific computer system (UNIVAC 1108). The method, although possessing some weaknesses, is presented as a definite improvement over the use of second-generation billing methods.

  7. Excellence in Computational Biology and Informatics — EDRN Public Portal

    Cancer.gov

    9th Early Detection Research Network (EDRN) Scientific Workshop. Excellence in Computational Biology and Informatics: Sponsored by the EDRN Data Sharing Subcommittee Moderator: Daniel Crichton, M.S., NASA Jet Propulsion Laboratory

  8. Computational Methods for Analyzing Health News Coverage

    ERIC Educational Resources Information Center

    McFarlane, Delano J.

    2011-01-01

    Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…

  10. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  11. Soft computing methods in design of superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1995-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modeled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  12. Computational methods and opportunities for phosphorylation network medicine

    PubMed Central

    Chen, Yian Ann; Eschrich, Steven A.

    2014-01-01

    Protein phosphorylation, one of the most ubiquitous post-translational modifications (PTM) of proteins, is known to play an essential role in cell signaling and regulation. With the increasing understanding of the complexity and redundancy of cell signaling, there is a growing recognition that targeting the entire network or system could be a necessary and advantageous strategy for treating cancer. Protein kinases, the proteins that add a phosphate group to the substrate proteins during phosphorylation events, have become one of the largest groups of ‘druggable’ targets in cancer therapeutics in recent years. Kinase inhibitors are being regularly used in clinics for cancer treatment. This therapeutic paradigm shift in cancer research is partly due to the generation and availability of high-dimensional proteomics data. Generation of this data, in turn, is enabled by increased use of mass-spectrometry (MS)-based or other high-throughput proteomics platforms as well as companion public databases and computational tools. This review briefly summarizes the current state and progress on phosphoproteomics identification, quantification, and platform related characteristics. We review existing database resources, computational tools, methods for phosphorylation network inference, and ultimately demonstrate the connection to therapeutics. Finally, many research opportunities exist for bioinformaticians or biostatisticians based on developments and limitations of the current and emerging technologies. PMID:25530950

  13. Funding Methods for Public Higher Education in the SREB States.

    ERIC Educational Resources Information Center

    Caruthers, J. Kent; Marks, Joseph L.

    This report provides background information for discussion of major higher education finance issues and options. Terminology for comparing funding methods for public higher education across states is introduced. An overview of the evolution of the objectives of funding methods over time is provided, and detailed profiles of the major…

  14. Computational complexity for the two-point block method

    NASA Astrophysics Data System (ADS)

    See, Phang Pei; Majid, Zanariah Abdul

    2014-12-01

    In this paper, we discuss and compare the computational complexity of a two-point block method and a one-point method of Adams type. The computational complexity of both methods is determined based on the number of arithmetic operations performed and is expressed in O(n). These two methods are used to solve a two-point second-order boundary value problem directly, implemented with a variable step size strategy adapted to the multiple shooting technique via a three-step iterative method. Two numerical examples are tested. The results show that the operation counts give a reliable estimate of the cost of these methods in terms of execution time. We conclude that the two-point block method has better computational performance compared to the one-point method, as the total number of steps is larger for the latter.

  15. The Battle to Secure Our Public Access Computers

    ERIC Educational Resources Information Center

    Sendze, Monique

    2006-01-01

    Securing public access workstations should be a significant part of any library's network and information-security strategy because of the sensitive information patrons enter on these workstations. As the IT manager for the Johnson County Library in Kansas City, Kan., this author is challenged to make sure that thousands of patrons get the access…

  16. Public Experiments and their Analysis with the Replication Method

    NASA Astrophysics Data System (ADS)

    Heering, Peter

    2007-06-01

    One of those who failed to establish himself as a natural philosopher in 18th century Paris was the future revolutionary Jean Paul Marat. He not only published several monographs on heat, optics and electricity, in which he attempted to characterise his work as purely empirical, but also tried to establish himself as a public lecturer. From the analysis of his experiments using the replication method it became obvious that the written descriptions omit several relevant aspects of the experiments. In this paper, I discuss the experience gained in analysing these experiments and suggest possible relations between these publications and the public demonstrations.

  17. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  18. The ACLS Survey of Scholars: Views on Publications, Computers, Libraries.

    ERIC Educational Resources Information Center

    Morton, Herbert C.; Price, Anne Jamieson

    1986-01-01

    Reviews results of a survey by the American Council of Learned Societies (ACLS) of 3,835 scholars in the humanities and social sciences who are working both in colleges and universities and outside the academic community. Areas highlighted include professional reading, authorship patterns, computer use, and library use. (LRW)

  19. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  20. Three parallel computation methods for structural vibration analysis

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf; Bostic, Susan; Patrick, Merrell; Mahajan, Umesh; Ma, Shing

    1988-01-01

    The Lanczos (1950), multisectioning, and subspace iteration sequential methods for vibration analysis, presently used as the bases for three parallel algorithms, are shown by three example problems to maintain reasonable accuracy in the computation of vibration frequencies. Significant computation time reductions are obtained as the number of processors increases. An analysis is made of the performance of each method in order to characterize relative strengths and weaknesses, as well as to identify those parameters that most strongly affect computation efficiency.

  1. Ecological validity and the study of publics: The case for organic public engagement methods.

    PubMed

    Gehrke, Pat J

    2014-01-01

    This essay argues for a method of public engagement grounded in the criteria of ecological validity. Motivated by what Hammersley called the responsibility that comes with intellectual authority, "to seek, as far as possible, to ensure the validity of their conclusions and to participate in rational debate about those conclusions" (1993: 29), organic public engagement follows the empirical turn in citizenship theory and in rhetorical studies of actually existing publics. Rather than shaping citizens into either the compliant subjects of the cynical view or the deliberatively disciplined subjects of the idealist view, organic public engagement instead takes Asen's advice that "we should ask: how do people enact citizenship?" (2004: 191). In short, organic engagement methods engage publics in the places where they already exist and through those discourses and social practices by which they enact their status as publics. Such engagements can generate practical middle-range theories that facilitate future actions and decisions that are attentive to the local ecologies of diverse publics. PMID:23887250

  2. Overview of computational structural methods for modern military aircraft

    NASA Technical Reports Server (NTRS)

    Kudva, J. N.

    1992-01-01

    Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

  3. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  4. Classical versus Computer Algebra Methods in Elementary Geometry

    ERIC Educational Resources Information Center

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra, like Groebner bases of ideals and elimination of variables, make it possible to solve complex, elementary and non-elementary problems of geometry which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…
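
    A toy instance of the elimination-of-variables idea, proving that the diagonals of a parallelogram bisect each other (a sketch using SymPy's symbolic solver in place of an explicit Groebner-basis computation; the coordinates are the standard free-parameter setup):

      from sympy import symbols, solve, simplify

      u1, u2, u3, x1, x2 = symbols('u1 u2 u3 x1 x2')

      # Parallelogram A=(0,0), B=(u1,0), C=(u2,u3), D=(u1+u2,u3).
      # Hypotheses: P=(x1,x2) lies on both diagonals AD and BC.
      h1 = x2 * (u1 + u2) - x1 * u3             # P on diagonal AD
      h2 = x2 * (u2 - u1) - (x1 - u1) * u3      # P on diagonal BC

      P = solve([h1, h2], [x1, x2], dict=True)[0]

      # Theses: P is the midpoint of both diagonals.
      g1 = 2 * x1 - (u1 + u2)
      g2 = 2 * x2 - u3
      print(simplify(g1.subs(P)), simplify(g2.subs(P)))   # both reduce to 0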

  5. 12 CFR 227.25 - Unfair balance computation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... under 12 CFR 226.12 or 12 CFR 226.13; or (2) Adjustments to finance charges as a result of the return of... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Unfair balance computation method. 227.25... Practices Rule § 227.25 Unfair balance computation method. (a) General rule. Except as provided in...

  6. Lattice gas methods for computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1995-01-01

    This paper presents the lattice gas solution to the category 1 problems of the ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics. The first and second problems were solved for Delta t = Delta x = 1, and additionally the second problem was solved for Delta t = 1/4 and Delta x = 1/2. The results are striking: even for these large time and space grids the lattice gas numerical solutions are almost indistinguishable from the analytical solutions. A simple bug in the Mathematica code was found in the solutions submitted for comparison, and the comparison plots shown at the end of this volume show the bug. An Appendix to the present paper shows an example lattice gas solution with and without the bug.

  7. COMSAC: Computational Methods for Stability and Control. Part 1

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: A NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?

  8. [Teaching quantitative methods in public health: the EHESP experience].

    PubMed

    Grimaud, Olivier; Astagneau, Pascal; Desvarieux, Moïse; Chambaud, Laurent

    2014-01-01

    Many scientific disciplines, including epidemiology and biostatistics, are used in the field of public health. These quantitative sciences are fundamental tools necessary for the practice of future professionals. What then should be the minimum quantitative sciences training, common to all future public health professionals? By comparing the teaching models developed in Columbia University and those in the National School of Public Health in France, the authors recognize the need to adapt teaching to the specific competencies required for each profession. They insist that all public health professionals, whatever their future career, should be familiar with quantitative methods in order to ensure that decision-making is based on a reflective and critical use of quantitative analysis. PMID:25629671

  9. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, the information construction of colleges mainly involves the construction of college networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing, and grid computing, in which data are stored in the cloud and software and services are placed in the cloud, built on top of various standards and protocols, so that they can be accessed through all kinds of equipment. This article introduces cloud computing and its functions, then analyzes the existing problems of college network resource management; cloud computing technology and methods are then applied to the construction of a college information sharing platform.

  10. Transonic Flow Computations Using Nonlinear Potential Methods

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. The impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed, and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.

  11. Original computer method for the experimental data processing in photoelasticity

    NASA Astrophysics Data System (ADS)

    Oanta, Emil M.; Panait, Cornel; Barhalescu, Mihaela; Sabau, Adrian; Dumitrache, Constantin; Dascalescu, Anca-Elena

    2015-02-01

    Optical methods in experimental mechanics are important because their results are accurate and may be used both for full-field interpretation and for analysis of the local rapid variation of the stresses produced by stress concentrators. Researchers have conceived several graphical, analytical and numerical methods for experimental data reduction. The paper presents an original computer method employed to compute the analytic functions of the isostatics, using the pattern of isoclinics of a photoelastic model or coating. The resulting software instrument may be included in hybrid models consisting of analytical, numerical and experimental studies. The computer-based integration of the results of these studies offers a higher level of understanding of the phenomena. A thorough examination of the sources of inaccuracy of this computer-based numerical method was done, and the conclusions were tested using the original computer code which implements the algorithm.

  13. The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers

    ERIC Educational Resources Information Center

    DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert

    2013-01-01

    Public libraries play an important part in the development of a community. Today, they are seen as more than store houses of books; they are also responsible for the dissemination of online, and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for internet access. Using a…

  15. Statistical and Computational Methods for Genetic Diseases: An Overview.

    PubMed

    Camastra, Francesco; Di Taranto, Maria Donata; Staiano, Antonino

    2015-01-01

    The identification of causes of genetic diseases has been carried out by several approaches with increasing complexity. Innovation of genetic methodologies leads to the production of large amounts of data that needs the support of statistical and computational methods to be correctly processed. The aim of the paper is to provide an overview of statistical and computational methods paying attention to methods for the sequence analysis and complex diseases. PMID:26106440

  16. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  17. Analytical and numerical methods; advanced computer concepts

    SciTech Connect

    Lax, P D

    1991-03-01

    This past year, two projects have been completed and a new one is under way. First, in joint work with R. Kohn, we developed a numerical algorithm to study the blowup of solutions to equations with certain similarity transformations. In the second project, the adaptive mesh refinement code of Berger and Colella for shock hydrodynamic calculations has been parallelized, and numerical studies using two different shared memory machines have been done. My current effort is directed toward the development of Cartesian mesh methods to solve PDEs with complicated geometries. Most of the coming year will be spent on this project, which is joint work with Prof. Randy Leveque at the University of Washington in Seattle.

  18. Who's in the Queue? A Demographic Analysis of Public Access Computer Users and Uses in U.S. Public Libraries. Research Brief Number 4

    ERIC Educational Resources Information Center

    Manjarrez, Carlos A.; Schoembs, Kyle

    2011-01-01

    Over the past decade, policy discussions about public access computing in libraries have focused on the role that these institutions play in bridging the digital divide. In these discussions, public access computing services are generally targeted at individuals who either cannot afford a computer and Internet access, or have never received formal…

  19. Small Towns and Small Computers: Can a Match Be Made? A Public Policy Seminar.

    ERIC Educational Resources Information Center

    National Association of Towns and Townships, Washington, DC.

    A public policy seminar discussed how to match small towns and small computers. James K. Coyne, Special Assistant to the President and Director of the White House Office of Private Sector Initiatives, offered opening remarks and described a database system developed by his office to link organizations and communities with small computers to…

  20. 77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-18

    ... open to the general public. NIST invites organizations to display posters and participate as exhibitors..., especially Cloud Computing and Big Data community stakeholders, to participate in this event with a poster... academic, industry, and standards developing organizations to display posters related to Cloud Computing...

  1. Computer-Based National Information Systems. Technology and Public Policy Issues.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    A general introduction to computer based national information systems, and the context and basis for future studies are provided in this report. Chapter One, the introduction, summarizes computers and information systems and their relation to society, the structure of information policy issues, and public policy issues. Chapter Two describes the…

  2. Observations on the Use of Computer and Broadcast Television Technology in One Public Elementary School.

    ERIC Educational Resources Information Center

    Hoge, John Douglas

    This paper provides participant observations regarding the use of computer and broadcast television technology at a suburban public elementary school in Athens, Georgia during the 1995-1996 school year. The paper describes the hardware and software available in the school, and the use and misuse of computers and broadcast television in the…

  3. The Diffusion of Evaluation Methods among Public Relations Practitioners.

    ERIC Educational Resources Information Center

    Dozier, David M.

    A study explored the relationships between public relations practitioners' organizational roles and the type of evaluation methods they used on the job. Based on factor analysis of role data obtained from an earlier study, four organizational roles were defined and ranked: communication manager, media relations specialist, communication liaison,…

  4. Method for transferring data from an unsecured computer to a secured computer

    DOEpatents

    Nilsen, Curt A.

    1997-01-01

    A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
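
    The essence of the claimed method fits in a few lines. A software-only sketch (hypothetical function name; the patented method drives a hardware warning device and checks errors on each of the two transmit/receive passes separately):

      def secure_receive(first: bytes, second: bytes) -> bytes:
          """Accept data only if the two independent transmissions agree."""
          if first != second:
              # the patented method would trigger the warning device here
              raise RuntimeError("WARNING: error introduced between computers")
          return first

      # The secured side trusts the payload only when both copies match.
      print(secure_receive(b"sensor log 0042", b"sensor log 0042"))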

  5. Multiscale methods for computational RNA enzymology

    PubMed Central

    Panteva, Maria T.; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K.; Kuechler, Erich R.; Giambaşu, George M.; Lee, Tai-Sung; York, Darrin M.

    2016-01-01

    RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multi-dimensional “problem space” of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues and bond breaking/forming in the chemical steps of the reaction. The goal of this article is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics (MD) simulations, reference interaction site model (RISM) calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics (HREMD) and quantum mechanical/molecular mechanical (QM/MM) simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme (HDVr) and RNase A. PMID:25726472

  6. Computational methods for internal flows with emphasis on turbomachinery

    NASA Technical Reports Server (NTRS)

    Mcnally, W. D.; Sockol, P. M.

    1981-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. The viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  7. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
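
    The query-then-populate step can be illustrated with an ordinary data frame. A sketch (hypothetical column names and specification; the patent's specification language and rendering are richer than a groupby):

      import pandas as pd

      # Dimension hierarchy year -> quarter; "revenue" is the measure.
      sales = pd.DataFrame({
          "year":    [2009, 2009, 2009, 2010, 2010, 2010],
          "quarter": ["Q1", "Q2", "Q3", "Q1", "Q2", "Q3"],
          "revenue": [120, 135, 150, 160, 170, 185],
      })

      # The specification maps the two hierarchy levels to plot components.
      spec = {"outer": "year", "inner": "quarter", "measure": "revenue"}

      plot_data = (sales.groupby([spec["outer"], spec["inner"]])[spec["measure"]]
                        .sum())
      print(plot_data)   # populate the plot panes from this result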

  8. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  9. Low-Rank Incremental Methods for Computing Dominant Singular Subspaces

    SciTech Connect

    Baker, Christopher G; Gallivan, Dr. Kyle A; Van Dooren, Dr. Paul

    2012-01-01

    Computing the singular values and vectors of a matrix is a crucial kernel in numerous scientific and industrial applications. As such, numerous methods have been proposed to handle this problem in a computationally efficient way. This paper considers a family of methods for incrementally computing the dominant SVD of a large matrix A. Specifically, we describe a unification of a number of previously disparate methods for approximating the dominant SVD via a single pass through A. We tie the behavior of these methods to that of a class of optimization-based iterative eigensolvers on A'*A. An iterative procedure is proposed which allows the computation of an accurate dominant SVD via multiple passes through A. We present an analysis of the convergence of this iteration, and provide empirical demonstration of the proposed method on both synthetic and benchmark data.
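
    The single-pass flavor of these methods can be sketched with a rank-truncated update. A NumPy illustration (a generic low-rank incremental SVD, not the paper's unified formulation; the greedy truncation makes the result an approximation of the dominant SVD):

      import numpy as np

      def incremental_svd(columns, rank):
          """One pass over the columns of A, tracking a rank-k dominant SVD."""
          U = S = None
          for a in columns:
              a = np.asarray(a, dtype=float).reshape(-1, 1)
              if U is None:
                  S = np.array([np.linalg.norm(a)])
                  U = a / S[0]
                  continue
              proj = U.T @ a
              resid = a - U @ proj
              rnorm = np.linalg.norm(resid)
              q = resid / rnorm if rnorm > 1e-12 else np.zeros_like(a)
              # small augmented matrix whose SVD refreshes the factors
              K = np.block([[np.diag(S), proj],
                            [np.zeros((1, len(S))), np.array([[rnorm]])]])
              Uk, Sk, _ = np.linalg.svd(K)
              U, S = (np.hstack([U, q]) @ Uk)[:, :rank], Sk[:rank]
          return U, S

      A = np.random.default_rng(0).standard_normal((100, 20))
      U, S = incremental_svd(A.T, rank=2)     # feed columns of A one at a time
      print(S, np.linalg.svd(A, compute_uv=False)[:2])   # roughly comparable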

  10. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  11. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  12. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  13. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  14. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  15. A Quantitative Method for Estimating Probable Public Costs of Hurricanes.

    PubMed

    BOSWELL; DEYLE; SMITH; BAKER

    1999-04-01

    A method is presented for estimating probable public costs resulting from damage caused by hurricanes, measured as local government expenditures approved for reimbursement under the Stafford Act Section 406 Public Assistance Program. The method employs a multivariate model developed through multiple regression analysis of an array of independent variables that measure meteorological, socioeconomic, and physical conditions related to the landfall of hurricanes within a local government jurisdiction. From the regression analysis we chose a log-log (base 10) model that explains 74% of the variance in the expenditure data using population and wind speed as predictors. We illustrate application of the method for a local jurisdiction, Lee County, Florida, USA. The results show that potential public costs range from $4.7 million for a category 1 hurricane with winds of 137 kilometers per hour (85 miles per hour) to $130 million for a category 5 hurricane with winds of 265 kilometers per hour (165 miles per hour). Based on these figures, we estimate expected annual public costs of $2.3 million. These cost estimates: (1) provide useful guidance for anticipating the magnitude of the federal, state, and local expenditures that would be required for the array of possible hurricanes that could affect that jurisdiction; (2) allow policy makers to assess the implications of alternative federal and state policies for providing public assistance to jurisdictions that experience hurricane damage; and (3) provide information needed to develop a contingency fund or other financial mechanism for assuring that the community has sufficient funds available to meet its obligations. KEY WORDS: Hurricane; Public costs; Local government; Disaster recovery; Disaster response; Florida; Stafford Act PMID:9950698
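
    The fitted model has the form log10(cost) = b0 + b1*log10(population) + b2*log10(wind speed). A sketch of how such a model is applied (the coefficients below are hypothetical placeholders, not the paper's fitted values):

      import numpy as np

      def predicted_cost(population, wind_mph, b0, b1, b2):
          """Log-log (base 10) public-cost model of the form used above."""
          return 10 ** (b0 + b1 * np.log10(population) + b2 * np.log10(wind_mph))

      # Hypothetical coefficients for illustration only.
      b0, b1, b2 = -6.0, 1.0, 4.0
      print(predicted_cost(450_000, 85, b0, b1, b2))    # category-1 scenario
      print(predicted_cost(450_000, 165, b0, b1, b2))   # category-5 scenario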

  16. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
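
    Differential evolution, one of the two engines named above, is short enough to sketch. A minimal DE/rand/1/bin optimizer (illustrative; the ECM runs populations of full engineering simulators on parallel clusters rather than a toy cost function):

      import numpy as np

      def differential_evolution(cost, bounds, pop=30, F=0.8, CR=0.9,
                                 gens=200, seed=0):
          """Minimal DE/rand/1/bin minimizer over box bounds."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          x = rng.uniform(lo, hi, size=(pop, len(lo)))
          f = np.array([cost(v) for v in x])
          for _ in range(gens):
              for i in range(pop):
                  a, b, c = x[rng.choice(pop, 3, replace=False)]
                  mutant = np.clip(a + F * (b - c), lo, hi)
                  trial = np.where(rng.random(len(lo)) < CR, mutant, x[i])
                  ft = cost(trial)
                  if ft < f[i]:            # greedy selection
                      x[i], f[i] = trial, ft
          return x[np.argmin(f)], f.min()

      # Toy "simulator": recover the known optimum of a quadratic bowl.
      best, value = differential_evolution(lambda v: np.sum((v - 1.5) ** 2),
                                           bounds=[(-5, 5)] * 3)
      print(best, value)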

  17. Methods for computing comet core temperatures.

    PubMed

    McKay, C P; Squyres, S W; Reynolds, R T

    1986-01-01

    General analytic expressions are derived that relate the surface temperature to the temperature deep within the nucleus of a spherically symmetric layered comet in thermal equilibrium. The relation between the average surface temperature and the mean temperature at great depths depends entirely on the temperature dependence of the thermal conductivity. The core temperature is given by the inverse of the anti-derivative of the thermal conductivity, with respect to temperature, operating on the average value of the anti-derivative of the thermal conductivity evaluated at the surface temperature. Using these expressions, detailed numerical models of the surface temperature of comets can be used to directly estimate the core temperature. For the special, albeit unphysical, case of an isothermal, low-conductivity comet nucleus, without sublimation, the core temperature can be determined analytically. To illustrate the dependence of core temperature on eccentricity, this simple case is solved assuming that the temperature dependence of the thermal conductivity is given by that of crystalline ice. For an eccentricity of approximately 0.5, the core temperature obtained is 3% colder than the corresponding value obtained assuming constant thermal conductivity and is 11% colder than the result of Klinger's (1981) formula. This method is also applied to a detailed numerical model with a complicated nonintegrable thermal conductivity. PMID:11542053
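
    In symbols, one reading of the relation stated above (a sketch of the notation, with k the thermal conductivity, T_s the surface temperature, and angle brackets the orbital/surface average; not necessarily the paper's notation):

      F(T) = \int_0^{T} k(T')\,dT', \qquad
      F(T_{\mathrm{core}}) = \langle F(T_{\mathrm{s}}) \rangle, \qquad
      T_{\mathrm{core}} = F^{-1}\!\left( \langle F(T_{\mathrm{s}}) \rangle \right)

    For constant k this collapses to T_core = <T_s>, so the eccentricity effect described above enters only through the temperature dependence of the conductivity.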

  18. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    ERIC Educational Resources Information Center

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  19. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation

    PubMed Central

    Hu, Hongzhi; Mao, Huajuan; Hu, Xiaohua; Hu, Feng; Sun, Xuemin; Jing, Zaiping; Duan, Yunsuo

    2015-01-01

    Due to its extensive social influence, public health emergency has attracted great attention in today's society. The booming social network is becoming a main information dissemination platform for those events and has raised great concern in emergency management, in which a good prediction of information dissemination in social networks is necessary for estimating an event's social impacts and making a proper strategy. However, information dissemination is largely affected by complex interactive activities and group behaviors in social networks; the existing methods and models are limited in achieving satisfactory prediction results due to open, changeable social connections and uncertain information processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. In order to obtain better information dissemination prediction in social networks, this paper proposes an intelligent computation method under the framework of TDF (Theory-Data-Feedback) based on an ACP simulation system, which was successfully applied to the analysis of the A (H1N1) flu emergency. PMID:26609303

  20. A typology of health marketing research methods--combining public relations methods with organizational concern.

    PubMed

    Rotarius, Timothy; Wan, Thomas T H; Liberman, Aaron

    2007-01-01

    Research plays a critical role throughout virtually every conduit of the health services industry. The key terms of research, public relations, and organizational interests are discussed. Combining public relations as a strategic methodology with the organizational concern as a factor, a typology of four different research methods emerges. These four health marketing research methods are: investigative, strategic, informative, and verification. The implications of these distinct and contrasting research methods are examined. PMID:19042536

  1. The Importance of Computer Science for Public Health Training: An Opportunity and Call to Action

    PubMed Central

    Christie, Gillian; Yach, Derek; El-Sayed, Abdulrahman M

    2016-01-01

    A century ago, the Welch-Rose Report established a public health education system in the United States. Since then, the system has evolved to address emerging health needs and integrate new technologies. Today, personalized health technologies generate large amounts of data. Emerging computer science techniques, such as machine learning, present an opportunity to extract insights from these data that could help identify high-risk individuals and tailor health interventions and recommendations. As these technologies play a larger role in health promotion, collaboration between the public health and technology communities will become the norm. Offering public health trainees coursework in computer science alongside traditional public health disciplines will facilitate this evolution, improving public health’s capacity to harness these technologies to improve population health. PMID:27227145

  2. Platform-independent method for computer aided schematic drawings

    DOEpatents

    Vell, Jeffrey L.; Siganporia, Darius M.; Levy, Arthur J.

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  3. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    An iterative, computer-aided procedure was developed for identifying boiler transfer functions from frequency response data. The method yields satisfactory transfer functions for both high and low vapor exit quality data.
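
    The record does not give the algorithm itself; the following is a minimal sketch of the general idea (fitting a parametric transfer function to frequency response samples by iterative least squares), with an assumed first-order-plus-dead-time model structure:

    ```python
    # Minimal sketch: identify a transfer function from frequency response data
    # by iterative least squares. The first-order-plus-dead-time model and all
    # names here are illustrative assumptions, not the report's procedure.
    import numpy as np
    from scipy.optimize import least_squares

    def model_response(params, w):
        """G(jw) = K * exp(-j*w*theta) / (1 + j*w*tau)."""
        K, tau, theta = params
        return K * np.exp(-1j * w * theta) / (1.0 + 1j * w * tau)

    def residuals(params, w, g_meas):
        # Stack real and imaginary parts so the fit matches gain and phase together.
        err = model_response(params, w) - g_meas
        return np.concatenate([err.real, err.imag])

    # Synthetic "measured" frequency response standing in for boiler test data.
    rng = np.random.default_rng(0)
    w = np.logspace(-2, 1, 40)
    g_meas = model_response([2.0, 5.0, 1.0], w)
    g_meas = g_meas + 0.01 * (rng.standard_normal(w.size) + 1j * rng.standard_normal(w.size))

    fit = least_squares(residuals, x0=[1.0, 1.0, 0.5], args=(w, g_meas))
    print("estimated K, tau, theta:", fit.x)
    ```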

  4. Parallel methods for eigenvalue computations on hybrid multiprocessing architectures

    SciTech Connect

    Natarajan, R.

    1988-01-01

    This paper discusses the parallel implementation of eigenvalue computations on a hybrid multiprocessing architecture such as the IBM RP3 (a proposed 512-way parallel computer under development at the IBM T.J. Watson Center). The authors discuss two algorithms that are especially suited to parallel computers. They present a new parallel implementation of the Martin-Wilkinson algorithm for the reduction of the generalized problem to the standard form, as well as a priority-based task-scheduling algorithm for the Sturm bisection method used to compute the eigenvalues of the reduced tridiagonal matrix. Upper bounds on the performance of these algorithms on the RP3 are provided using results obtained from simulation.
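
    A serial sketch of the Sturm bisection idea named above, for a symmetric tridiagonal matrix; the paper's priority-based parallel task scheduling is not reproduced:

    ```python
    # Sturm bisection for a symmetric tridiagonal matrix (diagonal a,
    # off-diagonal b): count negative pivots of the LDL^T factorization of
    # T - x*I to find how many eigenvalues lie below x, then bisect.
    import numpy as np

    def count_below(a, b, x):
        """Number of eigenvalues of the tridiagonal matrix strictly below x."""
        count, d = 0, 1.0
        for i in range(len(a)):
            off = b[i - 1] ** 2 if i > 0 else 0.0
            d = a[i] - x - off / d
            if d == 0.0:          # avoid division by zero on the next step
                d = -1e-300
            if d < 0.0:
                count += 1
        return count

    def kth_eigenvalue(a, b, k, tol=1e-12):
        """Bisection for the k-th smallest eigenvalue (k = 0, 1, ...)."""
        # Gershgorin bounds enclose the whole spectrum.
        pad = np.abs(np.concatenate([[0.0], b])) + np.abs(np.concatenate([b, [0.0]]))
        lo, hi = float(np.min(a - pad)), float(np.max(a + pad))
        while hi - lo > tol * max(1.0, abs(lo), abs(hi)):
            mid = 0.5 * (lo + hi)
            if count_below(a, b, mid) <= k:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    a = np.array([2.0, 2.0, 2.0, 2.0])   # e.g. a small 1-D Laplacian
    b = np.array([-1.0, -1.0, -1.0])
    print([kth_eigenvalue(a, b, k) for k in range(4)])
    ```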

  5. Key management of the double random-phase-encoding method using public-key encryption

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2010-03-01

    Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image has been encrypted by using the double random-phase-encoding method with the extended fractional Fourier transform. The key of the encryption process has been encoded by using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key has then been transmitted to the receiver side along with the encrypted image. In the decryption process, first the encoded key has been decrypted using the secret key, and then the encrypted image has been decrypted by using the retrieved key parameters. The proposed technique has an advantage over the plain double random-phase-encoding method because the problem associated with the transmission of the key has been eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
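
    A minimal sketch of classical Fourier-domain double random-phase encoding; the paper's extended fractional Fourier transform and the RSA wrapping of the keys are omitted here, and random arrays stand in for the phase keys:

    ```python
    # Classical double random-phase encoding (DRPE): one random phase mask in
    # the input plane, one in the Fourier plane. Exact keys invert the encoding.
    import numpy as np

    def drpe_encrypt(img, key1, key2):
        m1 = np.exp(2j * np.pi * key1)   # input-plane phase mask
        m2 = np.exp(2j * np.pi * key2)   # Fourier-plane phase mask
        return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

    def drpe_decrypt(cipher, key1, key2):
        m1 = np.exp(2j * np.pi * key1)
        m2 = np.exp(2j * np.pi * key2)
        # undo each step in reverse: conjugate masks have unit modulus
        return np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1)

    rng = np.random.default_rng(0)
    img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
    key1, key2 = rng.random((64, 64)), rng.random((64, 64))

    cipher = drpe_encrypt(img, key1, key2)
    recovered = drpe_decrypt(cipher, key1, key2)
    print(np.allclose(recovered.real, img))   # True
    ```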

  6. Computer Simulation Methods for Defect Configurations and Nanoscale Structures

    SciTech Connect

    Gao, Fei

    2010-01-01

    This chapter will describe general computer simulation methods, including ab initio calculations, molecular dynamics, and the kinetic Monte Carlo method, and their applications to the calculation of defect configurations in various materials (metals, ceramics, and oxides) and the simulation of nanoscale structures due to ion-solid interactions. The multiscale theory, modeling, and simulation techniques (covering both time and length scales) will be emphasized, and comparisons between computer simulation results and experimental observations will be made.
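
    A generic residence-time (BKL/Gillespie-style) kinetic Monte Carlo step, as a toy stand-in; the event list and rates are illustrative, not from the chapter:

    ```python
    # Residence-time kinetic Monte Carlo: advance the clock by an exponentially
    # distributed waiting time, then pick an event with probability proportional
    # to its rate. The two "defect hop" events and rates are made up for the demo.
    import math, random

    def kmc_run(rates, steps, seed=0):
        """rates: event name -> rate (1/s). Returns elapsed time and event counts."""
        rnd = random.Random(seed)
        total = sum(rates.values())
        t = 0.0
        counts = {name: 0 for name in rates}
        for _ in range(steps):
            # exponentially distributed residence time before the next event
            t += -math.log(1.0 - rnd.random()) / total
            # choose an event with probability proportional to its rate
            r, acc = rnd.random() * total, 0.0
            for name, rate in rates.items():
                acc += rate
                if r < acc:
                    counts[name] += 1
                    break
        return t, counts

    print(kmc_run({"vacancy_hop": 1.0e6, "interstitial_hop": 4.0e6}, steps=100000))
    ```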

  7. Atomistic Method Applied to Computational Modeling of Surface Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Abel, Phillip B.

    2000-01-01

    The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys, and when, and even if, they form, are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. The computational tool, which is based on the BFS (Bozzolo, Ferrante, and Smith) method for the calculation of the energetics, consists of a small number of simple PC-based computer codes that deal with the different aspects of surface alloy formation. Two analysis modes are available within this package. The first mode provides an atom-by-atom description of real and virtual stages during the process of surface alloying, based on the construction of catalogues of configurations where each configuration describes one possible atomic distribution. BFS analysis of this catalogue provides information on accessible states, possible ordering patterns, and details of island formation or film growth. More importantly, it provides insight into the evolution of the system. Software developed by the Computational Materials Group allows for the study of an arbitrary number of elements forming surface alloys, including an arbitrary number of surface atomic layers. The second mode involves large-scale temperature-dependent computer simulations that use the BFS method for the energetics and provide information on the dynamic processes during surface alloying. These simulations require the implementation of Monte-Carlo-based codes with high efficiency within current workstation environments. This methodology capitalizes on the advantages of the BFS method: there are no restrictions on the number or type of elements or on the type of crystallographic structure considered. This removes any restrictions in the definition of the configuration catalogues used in the analytical calculations, thus allowing for the study of arbitrary ordering patterns, ultimately leading to the actual surface alloy structure. Moreover, the Monte Carlo numerical technique used for the large-scale simulations allows for a detailed visualization of the simulated process, the main advantage of this type of analysis being the ability to understand the underlying features that drive these processes. Because of the simplicity of the BFS method for the energetics used in these calculations, a detailed atom-by-atom analysis can be performed at any point in the simulation, providing necessary insight into the details of the process. The main objective of this research program is to develop a tool to guide experimenters in understanding and interpreting often unexpected results in alloy formation experiments. By reducing the computational effort without losing physical accuracy, we expect that powerful simulation tools will be developed in the immediate future, which will allow material scientists to easily visualize and analyze processes at a level not achievable experimentally.

  8. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.

  9. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system. 15 figs.

  10. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.

  11. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  12. Public health surveillance: historical origins, methods and evaluation.

    PubMed Central

    Declich, S.; Carter, A. O.

    1994-01-01

    In the last three decades, disease surveillance has grown into a complete discipline, quite distinct from epidemiology. This expansion into a separate scientific area within public health has not been accompanied by parallel growth in the literature about its principles and methods. The development of the fundamental concepts of surveillance systems provides a basis on which to build a better understanding of the subject. In addition, the concepts have practical value as they can be used in designing new systems as well as understanding or evaluating currently operating systems. This article reviews the principles of surveillance, beginning with a historical survey of the roots and evolution of surveillance, and discusses the goals of public health surveillance. Methods for data collection, data analysis, interpretation, and dissemination are presented, together with proposed procedures for evaluating and improving a surveillance system. Finally, some points to be considered in establishing a new surveillance system are presented. PMID:8205649

  13. A Comparative Assessment of Computer Literacy of Private and Public Secondary School Students in Lagos State, Nigeria

    ERIC Educational Resources Information Center

    Osunwusi, Adeyinka Olumuyiwa; Abifarin, Michael Segun

    2013-01-01

    The aim of this study was to conduct a comparative assessment of computer literacy of private and public secondary school students. Although the definition of computer literacy varies widely, this study treated computer literacy in terms of access to, and use of, computers and the internet, basic knowledge and skills required to use computers and…

  14. Comparison of methods for computing streamflow statistics for Pennsylvania streams

    USGS Publications Warehouse

    Ehlke, Marla H.; Reed, Lloyd A.

    1999-01-01

    Methods for computing streamflow statistics intended for use on ungaged locations on Pennsylvania streams are presented and compared to frequency distributions of gaged streamflow data. The streamflow statistics used in the comparisons include the 7-day 10-year low flow, 50-year flood flow, and the 100-year flood flow; additional statistics are presented. Streamflow statistics for gaged locations on streams in Pennsylvania were computed using three methods for the comparisons: 1) Log-Pearson type III frequency distribution (Log-Pearson) of continuous-record streamflow data, 2) regional regression equations developed by the U.S. Geological Survey in 1982 (WRI 82-21), and 3) regional regression equations developed by the Pennsylvania State University in 1981 (PSU-IV). Log-Pearson distribution was considered the reference method for evaluation of the regional regression equations. Low-flow statistics were computed using the Log-Pearson distribution and WRI 82-21, whereas flood-flow statistics were computed using all three methods. The urban adjustment for PSU-IV was modified from the recommended computation to exclude Philadelphia and the surrounding areas (region 1) from the adjustment. Adjustments for storage area for PSU-IV were also slightly modified. A comparison of the 7-day 10-year low flow computed from Log-Pearson distribution and WRI 82-21 showed that the methods produced significantly different values for about 7 percent of the state. The same methods produced 50-year and 100-year flood flows that were significantly different for about 24 percent of the state. Flood-flow statistics computed using Log-Pearson distribution and PSU-IV were not significantly different in any regions of the state. These findings are based on a statistical comparison using the t-test on signed ranks and graphical methods.
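
    For illustration, a minimal Log-Pearson Type III quantile computation on synthetic annual peaks; the report's full procedure (regional regression equations, weighted skew, outlier handling) is not reproduced:

    ```python
    # Minimal Log-Pearson Type III flood quantile: fit log10 of annual peaks and
    # read off the 1-in-T quantile. Bulletin 17B refinements used in practice
    # (weighted/regional skew, outlier tests) are omitted; the peaks are synthetic.
    import numpy as np
    from scipy import stats

    peaks = np.array([3200., 1800., 4100., 2500., 900., 5200., 1500.,
                      2800., 3600., 2100., 4700., 1300., 2400., 3900.])  # cfs
    logq = np.log10(peaks)
    skew = stats.skew(logq, bias=False)

    def lp3_flow(T):
        """Discharge with annual exceedance probability 1/T."""
        q = stats.pearson3.ppf(1.0 - 1.0 / T, skew,
                               loc=logq.mean(), scale=logq.std(ddof=1))
        return 10.0 ** q

    for T in (2, 50, 100):
        print(f"{T}-year flood: {lp3_flow(T):.0f} cfs")
    ```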

  15. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    ERIC Educational Resources Information Center

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  16. Checklist and Pollard Walk butterfly survey methods on public lands

    USGS Publications Warehouse

    Royer, R.A.; Austin, J.E.; Newton, W.E.

    1998-01-01

    Checklist and Pollard Walk butterfly survey methods were contemporaneously applied to seven public sites in North Dakota during the summer of 1995. Results were compared for effect of method and site on total number of butterflies and total number of species detected per hour. Checklist searching produced significantly more butterfly detections per hour than Pollard Walks at all sites. Number of species detected per hour did not differ significantly either among sites or between methods. Many species were detected by only one method, and at most sites generalist and invader species were more likely to be observed during checklist searches than during Pollard Walks. Results indicate that checklist surveys are a more efficient means for initial determination of a species list for a site, whereas for long-term monitoring the Pollard Walk is more practical and statistically manageable. Pollard Walk transects are thus recommended once a prairie butterfly fauna has been defined for a site by checklist surveys.

  17. Method for implementation of recursive hierarchical segmentation on parallel computers

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.
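
    A toy sketch of the recursion-level control described above: a quad-split recursion that stops dividing at a bottom level and switches from parallel to serial execution at an intermediate level. All names, level values, and the stand-in "segmentation" are assumptions, not the patented algorithm:

    ```python
    # Quad-split recursion with a BOTTOM_LEVEL where division stops and an
    # INTERMEDIATE_LEVEL where the parallel phase hands off to serial recursion.
    # The leaf "segmentation" just records tiles; it is a placeholder.
    from concurrent.futures import ProcessPoolExecutor

    BOTTOM_LEVEL = 3        # where recursive division stops
    INTERMEDIATE_LEVEL = 1  # where parallel recursion becomes serial

    def split(region):
        x, y, size = region
        h = size // 2
        return [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]

    def segment(region, level):
        if level == BOTTOM_LEVEL:
            return [region]                    # leaf tile: segment directly
        children = split(region)
        if level < INTERMEDIATE_LEVEL:         # parallel phase
            with ProcessPoolExecutor() as pool:
                results = pool.map(segment, children, [level + 1] * 4)
        else:                                  # serial phase
            results = (segment(c, level + 1) for c in children)
        return [tile for sub in results for tile in sub]

    if __name__ == "__main__":
        tiles = segment((0, 0, 512), level=0)
        print(len(tiles), "tiles")             # 4**3 = 64
    ```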

  18. Calculating PI Using Historical Methods and Your Personal Computer.

    ERIC Educational Resources Information Center

    Mandell, Alan

    1989-01-01

    Provides a software program for determining PI to the 15th place after the decimal. Explores the history of determining the value of PI from Archimedes to present computer methods. Investigates Wallis's, Leibniz's, and Buffon's methods. Written for Tandy GW-BASIC (IBM compatible) with 384K. Suggestions for Apple IIs are given. (MVL)
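
    The article's program is written in GW-BASIC; a Python rendering of the three named historical methods, for illustration:

    ```python
    # Three historical estimates of pi: Wallis's product, Leibniz's series,
    # and Buffon's needle (Monte Carlo, needle length equal to line spacing).
    import math, random

    def wallis(n):
        """pi/2 = prod over k of 4k^2 / (4k^2 - 1)."""
        p = 1.0
        for k in range(1, n + 1):
            p *= 4.0 * k * k / (4.0 * k * k - 1.0)
        return 2.0 * p

    def leibniz(n):
        """pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
        return 4.0 * sum((-1.0) ** k / (2 * k + 1) for k in range(n))

    def buffon(n, seed=42):
        """P(cross) = 2/pi for needle length = spacing, so pi ~ 2n/hits."""
        rnd, hits = random.Random(seed), 0
        for _ in range(n):
            center = rnd.uniform(0.0, 0.5)           # distance to nearest line
            angle = rnd.uniform(0.0, math.pi / 2.0)  # needle angle to the lines
            if center <= 0.5 * math.sin(angle):
                hits += 1
        return 2.0 * n / hits

    print(wallis(100000), leibniz(100000), buffon(100000))
    ```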

  19. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  1. Computational methods for structural load and resistance modeling

    NASA Technical Reports Server (NTRS)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given, as well as several illustrative examples verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results clearly show the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
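
    A sketch of the Monte Carlo verification step mentioned above, estimating the failure probability for a toy limit state g = R - S with lognormal load and resistance; the AMV+ iteration itself is not shown, and the distributions are illustrative:

    ```python
    # Crude Monte Carlo estimate of P(g(X) < 0) for the limit state g = R - S,
    # with resistance R and load S lognormal. Parameters are made up for the demo.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    resistance = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)
    load = rng.lognormal(mean=np.log(200.0), sigma=0.25, size=n)

    g = resistance - load              # failure when g < 0
    pf = np.mean(g < 0.0)
    se = np.sqrt(pf * (1.0 - pf) / n)  # binomial standard error of the estimate
    print(f"P_f = {pf:.2e} +/- {se:.1e}")
    ```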

  2. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.

  3. Integrating Publicly Available Data to Generate Computationally Predicted Adverse Outcome Pathways for Fatty Liver.

    PubMed

    Bell, Shannon M; Angrish, Michelle M; Wood, Charles E; Edwards, Stephen W

    2016-04-01

    New in vitro testing strategies make it possible to design testing batteries for large numbers of environmental chemicals. Full utilization of the results requires knowledge of the underlying biological networks and the adverse outcome pathways (AOPs) that describe the route from early molecular perturbations to an adverse outcome. Curation of a formal AOP is a time-intensive process and a rate-limiting step to designing these test batteries. Here, we describe a method for integrating publicly available data in order to generate computationally predicted AOP (cpAOP) scaffolds, which can be leveraged by domain experts to shorten the time for formal AOP development. A network-based workflow was used to facilitate the integration of multiple data types to generate cpAOPs. Edges between graph entities were identified through direct experimental or literature information, or computationally inferred using frequent itemset mining. Data from the TG-GATEs and ToxCast programs were used to channel large-scale toxicogenomics information into a cpAOP network (cpAOPnet) of over 20,000 relationships describing connections between chemical treatments, phenotypes, and perturbed pathways as measured by differential gene expression and high-throughput screening targets. The resulting fatty liver cpAOPnet is available as a resource to the community. Subnetworks of cpAOPs for a reference chemical (carbon tetrachloride, CCl4) and outcome (fatty liver) were compared with published mechanistic descriptions. In both cases, the computational approaches approximated the manually curated AOPs. The cpAOPnet can be used for accelerating expert-curated AOP development and to identify pathway targets that lack genomic markers or high-throughput screening tests. It can also facilitate identification of key events for designing test batteries and for classification and grouping of chemicals for follow up testing. PMID:26895641

  4. Public participation GIS: a method for identifying ecosystems services

    USGS Publications Warehouse

    Brown, Greg; Montag, Jessica; Lyon, Katie

    2012-01-01

    This study evaluated the use of an Internet-based public participation geographic information system (PPGIS) to identify ecosystem services in Grand County, Colorado. Specific research objectives were to examine the distribution of ecosystem services, identify the characteristics of participants in the study, explore potential relationships between ecosystem services and land use and land cover (LULC) classifications, and assess the methodological strengths and weaknesses of the PPGIS approach for identifying ecosystem services. Key findings include: (1) Cultural ecosystem service opportunities were easiest to identify while supporting and regulatory services most challenging, (2) participants were highly educated, knowledgeable about nature and science, and have a strong connection to the outdoors, (3) some LULC classifications were logically and spatially associated with ecosystem services, and (4) despite limitations, the PPGIS method demonstrates potential for identifying ecosystem services to augment expert judgment and to inform public or environmental policy decisions regarding land use trade-offs.

  5. Data analysis through interactive computer animation method (DATICAM)

    SciTech Connect

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG&G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process.

  6. A stochastic method for computing hadronic matrix elements

    DOE PAGES Beta

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; Drach, Vincent; Jansen, Karl; Hadjiyiannakou, Kyriakos; Renner, Dru B.

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions that is an alternative to the typically used sequential method and offers more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume, and we find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.
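
    The generic idea behind such stochastic estimators can be illustrated with Z2 noise vectors, for which the expectation of eta eta^T is the identity, so traces and similar contractions are estimated by averaging eta^T A eta. This is a toy stand-in and does not reproduce the paper's lattice machinery:

    ```python
    # Hutchinson-style stochastic trace estimation with Z2 noise vectors:
    # E[eta eta^T] = I implies E[eta^T A eta] = tr(A).
    import numpy as np

    rng = np.random.default_rng(7)
    n = 200
    A = rng.standard_normal((n, n))   # generic test matrix

    def stochastic_trace(A, n_noise):
        total = 0.0
        for _ in range(n_noise):
            eta = rng.choice([-1.0, 1.0], size=A.shape[0])   # Z2 noise vector
            total += eta @ A @ eta
        return total / n_noise

    print("exact trace:     ", np.trace(A))
    print("stochastic (500):", stochastic_trace(A, 500))
    ```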

  7. A fast sweeping method for computing geodesics on triangular manifolds.

    PubMed

    Xu, Song-Gang; Zhang, Yun-Xiang; Yong, Jun-Hai

    2010-02-01

    A wide range of applications in computer intelligence and computer graphics require computing geodesics accurately and efficiently. The fast marching method (FMM) is widely used to solve this problem; its complexity is O(N log N), where N is the total number of nodes on the manifold. A fast sweeping method (FSM) is proposed and applied on arbitrary triangular manifolds, reducing the complexity to O(N). By traversing the undirected graph, four orderings are built to produce two groups of interfering waves, which cover all directions of characteristics. The correctness of this method is proved by analyzing the coverage of characteristics. The convergence and error estimation are also presented. PMID:20075455
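
    A sketch of fast sweeping on a regular 2-D grid for the eikonal equation |grad u| = 1, where four alternating sweep orderings cover all characteristic directions; the paper's triangular-manifold version is not reproduced:

    ```python
    # Fast sweeping for |grad u| = 1 (distance from a source) on a uniform grid:
    # Gauss-Seidel passes in four sweep orderings, each with the standard
    # two-neighbor local update.
    import numpy as np

    def fast_sweep(n, h, source, n_passes=4):
        u = np.full((n, n), 1e10)
        u[source] = 0.0
        orders = [(range(n), range(n)),
                  (range(n - 1, -1, -1), range(n)),
                  (range(n), range(n - 1, -1, -1)),
                  (range(n - 1, -1, -1), range(n - 1, -1, -1))]
        for _ in range(n_passes):
            for xs, ys in orders:
                for i in xs:
                    for j in ys:
                        a = min(u[max(i - 1, 0), j], u[min(i + 1, n - 1), j])
                        b = min(u[i, max(j - 1, 0)], u[i, min(j + 1, n - 1)])
                        if abs(a - b) >= h:     # information from one direction
                            cand = min(a, b) + h
                        else:                   # two-directional update
                            cand = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                        u[i, j] = min(u[i, j], cand)
        return u

    u = fast_sweep(n=101, h=0.01, source=(50, 50))
    print(u[50, 90])   # exact distance along the axis is 0.4
    ```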

  8. Fully consistent CFD methods for incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kolmogorov, D. K.; Shen, W. Z.; Sørensen, N. N.; Sørensen, J. N.

    2014-06-01

    Collocated-grid CFD methods are nowadays among the most efficient tools for computing the flow past wind turbines. To ensure the robustness of these methods, special attention must be paid to the well-known problem of pressure-velocity coupling. To ensure pressure-velocity coupling on collocated grids, many commercial codes use the so-called momentum interpolation method of Rhie and Chow [1]. The method and some of its widespread modifications are known to yield converged solutions that depend on the time step. In this paper the magnitude of this dependence is shown to contribute about 0.5% of the total error in a typical turbulent flow computation. Nevertheless, if coarse grids are used, the standard interpolation methods show much stronger inconsistent behavior. To overcome the problem, a recently developed interpolation method, which is independent of the time step, is used. It is shown that, in comparison to another time-step-independent method, this method can enhance the convergence rate of the SIMPLEC algorithm by up to 25%. The method is verified using turbulent flow computations around a NACA 64618 airfoil and the roll-up of a shear layer, which may appear in a wind turbine wake.

  9. Geometrical MTF computation method based on the irradiance model

    NASA Astrophysics Data System (ADS)

    Lin, P.-D.; Liu, C.-S.

    2011-01-01

    The Modulation Transfer Function (MTF) is a measure of an optical system's ability to transfer contrast from the specimen to the image plane at a specific resolution. It can be computed either numerically by geometrical optics or measured experimentally by imaging a knife edge or a bar-target pattern of varying spatial frequency. Previously, MTF accuracy was generally affected by the size of the mesh on the image plane. This paper presents a new MTF computation method based on the irradiance model, without counting the number of rays hitting each grid. To verify the method, the MTF in the sagittal and meridional directions of an axis-symmetrical optical system is computed by both the ray-counting and the proposed methods. It is found that the grid size meshed on the image plane significantly affects the MTF of the ray-counting method, sometimes with significantly negative results. The proposed irradiance method is immune to issues of grid size. The CPU computation time for the two methods is approximately the same.

  10. Software for computing eigenvalue bounds for iterative subspace matrix methods

    NASA Astrophysics Data System (ADS)

    Shepard, Ron; Minkoff, Michael; Zhou, Yunkai

    2005-07-01

    This paper describes software for computing eigenvalue bounds for the standard and generalized hermitian eigenvalue problem as described in [Y. Zhou, R. Shepard, M. Minkoff, Computing eigenvalue bounds for iterative subspace matrix methods, Comput. Phys. Comm. 167 (2005) 90-102]. The software discussed in this manuscript applies to any subspace method, including Lanczos, Davidson, SPAM, Generalized Davidson Inverse Iteration, Jacobi-Davidson, and the Generalized Jacobi-Davidson methods, and it is applicable to either outer or inner eigenvalues. This software can be applied during the subspace iterations in order to truncate the iterative process and to avoid unnecessary effort when converging specific eigenvalues to a required target accuracy, and it can be applied to the final set of Ritz values to assess the accuracy of the converged results. Program summary: Title of program: SUBROUTINE BOUNDS_OPT. Catalogue identifier: ADVE. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVE. Computers: any computer that supports a Fortran 90 compiler. Operating systems: any operating system that supports a Fortran 90 compiler. Programming language: standard Fortran 90. High-speed storage required: 5m+5 working-precision words and 2m+7 integer words for m Ritz values. No. of bits in a word: the floating-point working precision is parameterized with the symbolic constant WP. No. of lines in distributed program, including test data, etc.: 2452. No. of bytes in distributed program, including test data, etc.: 281 543. Distribution format: tar.gz. Nature of physical problem: the computational solution of eigenvalue problems using iterative subspace methods has widespread applications in the physical sciences and engineering, as well as other areas of mathematical modeling (economics, social sciences, etc.). Assessing the accuracy of such computed solutions is a fundamental problem, since it provides the modeler with information about the reliability of the computational results; such bounds can also be used to terminate the iterative procedure at specified accuracy limits. Method of solution: the Ritz values and their residual norms are computed and used as input for the procedure. While knowledge of the exact eigenvalues is not required, it is required that the Ritz values are isolated from the exact eigenvalues outside of the Ritz spectrum and that there are no skipped eigenvalues within the Ritz spectrum. Using a multipass refinement approach, upper and lower bounds are computed for each Ritz value. Typical running time: while typical applications would deal with m<20, for m=100000 the running time is 0.12 s on an Apple PowerBook.
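
    The simplest a posteriori bound behind such software: for a Hermitian matrix A, each Ritz pair (theta, y) with residual r = Ay - theta*y guarantees an eigenvalue of A within ||r|| of theta. The package's multipass refinement tightens this considerably; only the basic bound is sketched here:

    ```python
    # Rayleigh-Ritz on a random subspace, then the residual-norm bound:
    # for Hermitian A, some eigenvalue lies in [theta - ||r||, theta + ||r||].
    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 400, 20
    A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)   # Hermitian test matrix

    V, _ = np.linalg.qr(rng.standard_normal((n, m)))       # orthonormal subspace
    H = V.T @ A @ V
    theta, S = np.linalg.eigh(H)       # Ritz values and subspace eigenvectors
    Y = V @ S                          # Ritz vectors in the full space

    for k in (0, m - 1):               # extremal Ritz pairs
        r = A @ Y[:, k] - theta[k] * Y[:, k]
        print(f"Ritz value {theta[k]: .4f}: eigenvalue within +/- {np.linalg.norm(r):.2e}")
    ```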

  11. Computational methods for the analysis of primate mobile elements

    PubMed Central

    Cordaux, Richard; Sen, Shurjo K.; Konkel, Miriam K.; Batzer, Mark A.

    2010-01-01

    Transposable elements (TE), defined as discrete pieces of DNA that can move from one site to another within genomes, represent significant components of eukaryotic genomes, including primates. Comparative genome-wide analyses have revealed the considerable structural and functional impact of TE families on primate genomes. Insights into this impact have come in part from the development of computational methods that allow detailed and reliable identification, annotation, and evolutionary analyses of the many TE families that populate primate genomes. Here, we present an overview of these computational methods and describe efficient data mining strategies for providing a comprehensive picture of TE biology in newly available genome sequences. PMID:20238080

  12. The continuous slope-area method for computing event hydrographs

    USGS Publications Warehouse

    Smith, Christopher F.; Cordova, Jeffrey T.; Wiele, Stephen M.

    2010-01-01

    The continuous slope-area (CSA) method expands the slope-area method of computing peak discharge to a complete flow event. Continuously recording pressure transducers installed at three or more cross sections provide water-surface slopes and stage during an event that can be used with cross-section surveys and estimates of channel roughness to compute a continuous discharge hydrograph. The CSA method has been made feasible by the availability of low-cost recording pressure transducers that provide a continuous record of stage. The CSA method was implemented on the Babocomari River in Arizona in 2002 to monitor streamflow in the channel reach by installing eight pressure transducers in four cross sections within the reach. Continuous discharge hydrographs were constructed from five streamflow events during 2002-2006. Results from this study indicate that the CSA method can be used to obtain continuous hydrographs and rating curves can be generated from streamflow events.
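
    The core slope-area computation is Manning's equation applied to a transducer-derived water-surface slope. A minimal sketch with illustrative geometry and roughness values follows; the full CSA procedure (multiple cross sections, velocity-head and expansion/contraction corrections) is omitted:

    ```python
    # Manning's equation in US customary units: Q = (1.486/n) * A * R^(2/3) * S^(1/2).
    # Cross-section geometry, roughness, and stages below are made up for the demo.
    def manning_discharge(area_ft2, wetted_perimeter_ft, slope, n_rough):
        radius = area_ft2 / wetted_perimeter_ft          # hydraulic radius R
        return (1.486 / n_rough) * area_ft2 * radius ** (2.0 / 3.0) * slope ** 0.5

    # Water-surface slope from two pressure transducers 300 ft apart
    stage_up, stage_down, reach_len = 4.82, 4.37, 300.0  # ft
    slope = (stage_up - stage_down) / reach_len

    print(f"Q = {manning_discharge(185.0, 62.0, slope, 0.035):.0f} cfs")
    ```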

  13. New computational methods and algorithms for semiconductor science and nanotechnology

    NASA Astrophysics Data System (ADS)

    Gamoke, Benjamin C.

    The design and implementation of sophisticated computational methods and algorithms are critical to solve problems in nanotechnology and semiconductor science. Two key methods will be described to overcome challenges in contemporary surface science. The first method will focus on accurately cancelling interactions in a molecular system, such as modeling adsorbates on periodic surfaces at low coverages, a problem for which current methodologies are computationally inefficient. The second method pertains to the accurate calculation of core-ionization energies through X-ray photoelectron spectroscopy. The development can provide assignment of peaks in X-ray photoelectron spectra, which can determine the chemical composition and bonding environment of surface species. Finally, illustrative surface-adsorbate and gas-phase studies using the developed methods will also be featured.

  14. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. PMID:25698513
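
    One standard primary performance metric for LR methods is the log-likelihood-ratio cost Cllr, which penalizes both poor discrimination and poor calibration; it is shown here as a generic illustration, not necessarily the paper's exact set of performance characteristics:

    ```python
    # Log-likelihood-ratio cost:
    # Cllr = 0.5 * ( mean over same-source LRs of log2(1 + 1/LR)
    #              + mean over different-source LRs of log2(1 + LR) ).
    import numpy as np

    def cllr(lr_same, lr_diff):
        """lr_same: LRs for same-source pairs; lr_diff: LRs for different-source pairs."""
        same = np.mean(np.log2(1.0 + 1.0 / np.asarray(lr_same)))
        diff = np.mean(np.log2(1.0 + np.asarray(lr_diff)))
        return 0.5 * (same + diff)

    # Well-calibrated, discriminating LRs give Cllr < 1; LR = 1 everywhere gives 1.
    print(cllr(lr_same=[20.0, 8.0, 150.0, 3.0], lr_diff=[0.05, 0.3, 0.01, 0.6]))
    ```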

  15. Practical Use of Computationally Frugal Model Analysis Methods.

    PubMed

    Hill, Mary C; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2016-03-01

    Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics makes it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring 1000s, 10,000s, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and to obtain greater scientific insight from ongoing and future modeling efforts. PMID:25810333

  16. Selection and Integration of a Computer Simulation for Public Budgeting and Finance (PBS 116).

    ERIC Educational Resources Information Center

    Banas, Ed Jr.

    1998-01-01

    Describes the development of a course on public budgeting and finance, which integrated the use of SimCity Classic, a computer-simulation software, with traditional lecture, guest speakers, and collaborative-learning activities. Explains the rationale for the course design and discusses the results from the first semester of teaching the course.…

  17. Advanced Telecommunications and Computer Technologies in Georgia Public Elementary School Library Media Centers.

    ERIC Educational Resources Information Center

    Rogers, Jackie L.

    The purpose of this study was to determine what recent progress had been made in Georgia public elementary school library media centers regarding access to advanced telecommunications and computer technologies as a result of special funding. A questionnaire addressed the following areas: automation and networking of the school library media center…

  18. The Ever-Present Demand for Public Computing Resources. CDS Spotlight

    ERIC Educational Resources Information Center

    Pirani, Judith A.

    2014-01-01

    This Core Data Service (CDS) Spotlight focuses on public computing resources, including lab/cluster workstations in buildings, virtual lab/cluster workstations, kiosks, laptop and tablet checkout programs, and workstation access in unscheduled classrooms. The findings are derived from 758 CDS 2012 participating institutions. A dataset of 529…

  19. 77 FR 26509 - Notice of Public Meeting-Cloud Computing Forum & Workshop V

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-04

    ... interoperability, portability and security in cloud computing. This event is open to the public. In addition, NIST... standards for data portability, cloud interoperability, and security. The workshops' goals were to engage with industry to accelerate the development of cloud standards for interoperability, portability,...

  20. Curriculum modules, software laboratories, and an inexpensive hardware platform for teaching computational methods to undergraduate computer science students

    NASA Astrophysics Data System (ADS)

    Peck, Charles Franklin

    Computational methods are increasingly important to 21st century research and education; bioinformatics and climate change are just two examples of this trend. In this context computer scientists play an important role, facilitating the development and use of the methods and tools used to support computationally-based approaches. The undergraduate curriculum in computer science is one place where computational tools and methods can be introduced to facilitate the development of appropriately prepared computer scientists. To facilitate the evolution of the pedagogy, this dissertation identifies, develops, and organizes curriculum materials, software laboratories, and the reference design for an inexpensive portable cluster computer, all of which are specifically designed to support the teaching of computational methods to undergraduate computer science students. Keywords: computational science, computational thinking, computer science, undergraduate curriculum.

  1. Automatic detection of lung nodules in computed tomography images: training and validation of algorithms using public research databases

    NASA Astrophysics Data System (ADS)

    Camarlinghi, Niccolò

    2013-09-01

    Lung cancer is one of the main public health issues in developed countries. Lung cancer typically manifests itself as non-calcified pulmonary nodules that can be detected by reading lung Computed Tomography (CT) images. To assist radiologists in reading images, researchers started, a decade ago, the development of Computer Aided Detection (CAD) methods capable of detecting lung nodules. In this work, a CAD composed of two subprocedures is presented: one devoted to the identification of parenchymal nodules, and one devoted to the identification of nodules attached to the pleura surface. Both are upgrades of two methods previously presented as the Voxel Based Neural Approach (VBNA) CAD. The novelty of this paper consists in the massive training using the public research Lung Image Database Consortium (LIDC) database and in the implementation of new features for classification with respect to the original VBNA method. Finally, the proposed CAD is blindly validated on the ANODE09 dataset. The result of the validation is a score of 0.393, which corresponds to the average sensitivity of the CAD computed at seven predefined false positive rates: 1/8, 1/4, 1/2, 1, 2, 4, and 8 FP/CT.

  2. Learning From Engineering and Computer Science About Communicating The Field To The Public

    NASA Astrophysics Data System (ADS)

    Moore, S. L.; Tucek, K.

    2014-12-01

    The engineering and computer science community has taken the lead in actively informing the public about its discipline, including its societal contributions and career opportunities. These efforts have been intensified with regard to informing underrepresented populations in STEM about engineering and computer science. Are there lessons to be learned by the geoscience community in communicating the societal impacts and career opportunities in the geosciences, especially in regards to broadening participation and meeting Next Generation Science Standards? An estimated 35 percent increase in the number of geoscientist jobs in the United States forecast for the period between 2008 and 2018, combined with majority populations becoming minority populations, makes it imperative that we improve how we increase the public's understanding of the geosciences and how we present our message to targeted populations. This talk will look at recommendations from the National Academy of Engineering's Changing the Conversation: Messages for Improving the Public Understanding of Engineering, and communication strategies by organizations such as Code.org, to highlight practices that the geoscience community can adopt to increase public awareness of the societal contributions of the geosciences, the career opportunities in the geosciences, and the importance of the geosciences in the Next Generation Science Standards. An effort to communicate geoscience to the public, Earth is Calling, will be compared and contrasted to these efforts, and used as an example of how geological societies and other organizations can engage the general public and targeted groups about the geosciences.

  3. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Z.; Falkowski, P.

    1990-07-17

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.

  4. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Zbigniew; Falkowski, Paul

    1990-01-01

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.

  5. A fast semidirect method for computing transonic aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Martin, E. D.

    1975-01-01

    A fast, semidirect, iterative computational method, previously introduced for finite-difference solution of subsonic and slightly supercritical flow over airfoils, is extended both to apply to strongly supercritical conditions and to include full second-order accuracy in computing inviscid flows over airfoils. The nonlinear small-disturbance equations are solved iteratively by a direct, linear, elliptic solver. General, fully conservative, type-dependent difference equations are formulated, including parabolic- and shock-point transition operators that provide consistency with the integral conservation laws. These equations specialize to either first-order or to fully second-order-accurate equations. Various free parameters are evaluated for rapid convergence of the first-order scheme. Resulting pressure distributions and computing times are compared with the improved Murman-Cole line-relaxation method.

  6. Computational Methods for CLIP-seq Data Processing

    PubMed Central

    Reyes-Herrera, Paula H; Ficarra, Elisa

    2014-01-01

    RNA-binding proteins (RBPs) are at the core of post-transcriptional regulation and thus of gene expression control at the RNA level. One of the principal challenges in the field of gene expression regulation is to understand RBPs' mechanisms of action. As a result of the recent evolution of experimental techniques, it is now possible to obtain the RNA regions recognized by RBPs on a transcriptome-wide scale. In fact, CLIP-seq protocols use the joint action of CLIP (crosslinking immunoprecipitation) and high-throughput sequencing to recover the transcriptome-wide set of interaction regions for a particular protein. Nevertheless, computational methods are necessary to process CLIP-seq experimental data and are a key to advancement in the understanding of gene regulatory mechanisms. Considering the importance of computational methods in this area, we present a review of the current status of computational approaches used and proposed for CLIP-seq data. PMID:25336930

  7. Computational Methods for CLIP-seq Data Processing.

    PubMed

    Reyes-Herrera, Paula H; Ficarra, Elisa

    2014-01-01

    RNA-binding proteins (RBPs) are at the core of post-transcriptional regulation and thus of gene expression control at the RNA level. One of the principal challenges in the field of gene expression regulation is to understand RBPs' mechanisms of action. As a result of the recent evolution of experimental techniques, it is now possible to obtain the RNA regions recognized by RBPs on a transcriptome-wide scale. In fact, CLIP-seq protocols use the joint action of CLIP (crosslinking immunoprecipitation) and high-throughput sequencing to recover the transcriptome-wide set of interaction regions for a particular protein. Nevertheless, computational methods are necessary to process CLIP-seq experimental data and are a key to advancement in the understanding of gene regulatory mechanisms. Considering the importance of computational methods in this area, we present a review of the current status of computational approaches used and proposed for CLIP-seq data. PMID:25336930

  8. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  9. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2003-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  10. A computational method for automated characterization of genetic components.

    PubMed

    Yordanov, Boyan; Dalchau, Neil; Grant, Paul K; Pedersen, Michael; Emmott, Stephen; Haseloff, Jim; Phillips, Andrew

    2014-08-15

    The ability to design and construct synthetic biological systems with predictable behavior could enable significant advances in medical treatment, agricultural sustainability, and bioenergy production. However, to reach a stage where such systems can be reliably designed from biological components, integrated experimental and computational techniques that enable robust component characterization are needed. In this paper we present a computational method for the automated characterization of genetic components. Our method exploits a recently developed multichannel experimental protocol and integrates bacterial growth modeling, Bayesian parameter estimation, and model selection, together with data processing steps that are amenable to automation. We implement the method within the Genetic Engineering of Cells modeling and design environment, which enables both characterization and design to be integrated within a common software framework. To demonstrate the application of the method, we quantitatively characterize a synthetic receiver device that responds to the 3-oxohexanoyl-homoserine lactone signal, across a range of experimental conditions. PMID:24628037

  11. pyro: Python-based tutorial for computational methods for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.
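
    In the same spirit, the simplest member of the solver families such a tutorial covers is first-order upwind advection in 1-D; this is the textbook scheme, not pyro's API:

    ```python
    # First-order upwind step for 1-D linear advection u_t + a*u_x = 0 with
    # periodic boundaries: u_i <- u_i - c*(u_i - u_{i-1}), c = a*dt/dx.
    import numpy as np

    def upwind_advect(u, a, dx, dt, steps):
        c = a * dt / dx                       # CFL number; stable for 0 <= c <= 1
        assert 0.0 <= c <= 1.0
        for _ in range(steps):
            u = u - c * (u - np.roll(u, 1))   # np.roll gives the periodic u_{i-1}
        return u

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.exp(-200.0 * (x - 0.3) ** 2)      # Gaussian pulse
    u = upwind_advect(u0.copy(), a=1.0, dx=x[1] - x[0], dt=0.004, steps=100)
    print("pulse peak now near x =", x[np.argmax(u)])   # advected by a*t = 0.4
    ```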

  12. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.

  13. Decluttering Methods for Computer-Generated Graphic Displays

    NASA Technical Reports Server (NTRS)

    Schultz, E. Eugene, Jr.

    1986-01-01

    Symbol simplification and contrasting enhance a viewer's ability to detect a particular symbol. The report describes experiments designed to indicate how various decluttering methods affect viewers' abilities to distinguish essential from nonessential features on computer-generated graphic displays. Results indicate that partial removal of nonessential graphic features through symbol simplification is as effective in decluttering as total removal of nonessential graphic features.

  14. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  15. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Ramos, Jeremy (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes measuring an environmental condition representative of an environment and measuring an on-board processing system's sensitivity to the measured condition. Based in part on the measured environmental condition, it is determined whether to reconfigure the fault tolerance of the on-board processing system, and the fault tolerance may be reconfigured accordingly.
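    As an illustration only, here is a minimal Python sketch of the kind of decision logic the abstract describes; the function name, modes, and threshold are hypothetical, not taken from the patent.

    ```python
    def reconfigure_fault_tolerance(env_level: float, sensitivity: float,
                                    threshold: float = 1.0) -> str:
        """Choose a fault-tolerance mode from a measured environmental
        condition (e.g., a radiation level) and the on-board processor's
        measured sensitivity to that condition."""
        risk = env_level * sensitivity
        # Reconfigure to a more conservative mode only when the risk is high.
        return "triple-modular-redundancy" if risk > threshold else "duplex"

    # Example: a harsh environment and a sensitive processor trigger reconfiguration.
    print(reconfigure_fault_tolerance(env_level=2.5, sensitivity=0.6))
    ```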

  16. Stress intensity estimates by a computer assisted photoelastic method

    NASA Technical Reports Server (NTRS)

    Smith, C. W.

    1977-01-01

    Following an introductory history, the frozen stress photoelastic method is reviewed together with analytical and experimental aspects of cracks in photoelastic models. Analytical foundations are then presented upon which a computer assisted frozen stress photoelastic technique is based for extracting estimates of stress intensity factors from three-dimensional cracked body problems. The use of the method is demonstrated for two currently important three-dimensional crack problems.

  17. Public consultation. Up and ATAM (aims, timing, audience, method).

    PubMed

    Khan, U

    1998-04-30

    Although the NHS has some shining examples of public and user involvement, many still view it as an optional extra. Policy makers need to adopt a broader strategy for involving users, carers, staff and the wider public. Badly done public consultation will cause problems for policy makers, alienate participants and fuel public cynicism. PMID:10180417

  18. [Public health systems and methods of their financing].

    PubMed

    Kim, S V

    2001-01-01

    A correlation between type of the state as regards public consciousness (authoritarian, liberal, democratic) and type of public health is disclosed. The type of public health determines the ways of its financing (centralized management, tariff regulation, and free prices) and forms of regulation of financial flows in public health. PMID:11593814

  19. Three-dimensional cardiac computational modelling: methods, features and applications.

    PubMed

    Lopez-Perez, Alejandro; Sebastian, Rafael; Ferrero, Jose M

    2015-01-01

    The combination of computational models and biophysical simulations can help to interpret an array of experimental data and contribute to the understanding, diagnosis and treatment of complex diseases such as cardiac arrhythmias. For this reason, three-dimensional (3D) cardiac computational modelling is currently a rising field of research. The advance of medical imaging technology over the last decades has allowed the evolution from generic to patient-specific 3D cardiac models that faithfully represent the anatomy and different cardiac features of a given living subject. Here we analyse sixty representative 3D cardiac computational models developed and published during the last fifty years, describing their information sources, features, development methods and online availability. This paper also reviews the necessary components to build a 3D computational model of the heart aimed at biophysical simulation, paying special attention to cardiac electrophysiology (EP), and the existing approaches to incorporate those components. We assess the challenges associated with the different steps of the building process, from the processing of raw clinical or biological data to the final application, including image segmentation, inclusion of substructures and meshing, among others. We briefly outline the personalisation approaches that are currently available in 3D cardiac computational modelling. Finally, we present examples of several specific applications, mainly related to cardiac EP simulation and model-based image analysis, showing the potential usefulness of 3D cardiac computational modelling in clinical environments as a tool to aid in the prevention, diagnosis and treatment of cardiac diseases. PMID:25928297

  20. Analysis and optimization of cyclic methods in orbit computation

    NASA Technical Reports Server (NTRS)

    Pierce, S.

    1973-01-01

    The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.

  1. Computational biology in the cloud: methods and new insights from computing at scale.

    PubMed

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available. PMID:23424149

  2. Computational Methods for Structural Mechanics and Dynamics, part 1

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

  3. Public library computer training for older adults to access high-quality Internet health information.

    PubMed

    Xie, Bo; Bugg, Julie M

    2009-09-01

    An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at a state university. One hundred and thirty-one older adults aged 54-89 participated in the study between September 2007 and July 2008. Key findings include: a) participants had overwhelmingly positive perceptions of the training program; b) after learning about two NIH websites (http://nihseniorhealth.gov and http://medlineplus.gov) from the training, many participants started using these online resources to find high quality health and medical information and, further, to guide their decision-making regarding a health- or medically-related matter; and c) computer anxiety significantly decreased (p < .001) while computer interest and efficacy significantly increased (p = .001 and p < .001, respectively) from pre- to post-training, suggesting statistically significant improvements in computer attitudes between pre- and post-training. The findings have implications for public libraries, LIS academic programs, and other organizations interested in providing similar programs in their communities. PMID:20161649

  4. Public library computer training for older adults to access high-quality Internet health information

    PubMed Central

    Xie, Bo; Bugg, Julie M.

    2010-01-01

    An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at a state university. One hundred and thirty-one older adults aged 54–89 participated in the study between September 2007 and July 2008. Key findings include: a) participants had overwhelmingly positive perceptions of the training program; b) after learning about two NIH websites (http://nihseniorhealth.gov and http://medlineplus.gov) from the training, many participants started using these online resources to find high quality health and medical information and, further, to guide their decision-making regarding a health- or medically-related matter; and c) computer anxiety significantly decreased (p < .001) while computer interest and efficacy significantly increased (p = .001 and p < .001, respectively) from pre- to post-training, suggesting statistically significant improvements in computer attitudes between pre- and post-training. The findings have implications for public libraries, LIS academic programs, and other organizations interested in providing similar programs in their communities. PMID:20161649

  5. Digital data storage systems, computers, and data verification methods

    DOEpatents

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
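    A minimal Python sketch of the verification idea described in the first claim, using SHA-256 (the patent does not name a specific hash function, and the row layout here is hypothetical):

    ```python
    import hashlib

    def snapshot_hash(rows) -> str:
        """Hash the digital data stored within a portion of a dynamic
        database, modeled here as an iterable of rows."""
        h = hashlib.sha256()
        for row in rows:
            h.update(repr(row).encode("utf-8"))
        return h.hexdigest()

    # First hash, from the portion at an initial moment in time.
    first = snapshot_hash([("id1", "alpha"), ("id2", "beta")])
    # Second hash, from the same portion at a subsequent moment in time.
    second = snapshot_hash([("id1", "alpha"), ("id2", "beta")])
    # Comparing the two hashes verifies whether the portion was modified.
    print("unmodified" if first == second else "modified")
    ```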

  6. The ensemble switch method for computing interfacial tensions

    SciTech Connect

    Schmitz, Fabian; Virnau, Peter

    2015-04-14

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.

  7. A computationally efficient particle-simulation method suited to vector-computer architectures

    SciTech Connect

    McDonald, J.D.

    1990-01-01

    Recent interest in a National Aero-Space Plane (NASP) and various Aero-assisted Space Transfer Vehicles (ASTVs) presents the need for a greater understanding of high-speed rarefied flight conditions. Particle simulation techniques such as the Direct Simulation Monte Carlo (DSMC) method are well suited to such problems, but the high cost of computation limits the application of the methods to two-dimensional or very simple three-dimensional problems. This research re-examines the algorithmic structure of existing particle simulation methods and re-structures them to allow efficient implementation on vector-oriented supercomputers. A brief overview of the DSMC method and the Cray-2 vector computer architecture are provided, and the elements of the DSMC method that inhibit substantial vectorization are identified. One such element is the collision selection algorithm. A complete reformulation of underlying kinetic theory shows that this may be efficiently vectorized for general gas mixtures. The mechanics of collisions are vectorizable in the DSMC method, but several optimizations are suggested that greatly enhance performance. This thesis also proposes a new mechanism for the exchange of energy between vibration and other energy modes. The developed scheme makes use of quantized vibrational states and is used in place of the Borgnakke-Larsen model. Finally, a simplified representation of physical space and boundary conditions is utilized to further reduce the computational cost of the developed method. Comparisons to solutions obtained from the DSMC method for the relaxation of internal energy modes in a homogeneous gas, as well as single- and multiple-species shock wave profiles, are presented. Additionally, a large scale simulation of the flow about the proposed Aeroassisted Flight Experiment (AFE) vehicle is included as an example of the new computational capability of the developed particle simulation method.

  8. Computing the crystal growth rate by the interface pinning method

    NASA Astrophysics Data System (ADS)

    Pedersen, Ulf R.; Hummel, Felix; Dellago, Christoph

    2015-01-01

    An essential parameter for crystal growth is the kinetic coefficient given by the proportionality between supercooling and average growth velocity. Here, we show that this coefficient can be computed in a single equilibrium simulation using the interface pinning method where two-phase configurations are stabilized by adding a spring-like bias field coupling to an order-parameter that discriminates between the two phases. Crystal growth is a Smoluchowski process and the crystal growth rate can, therefore, be computed from the terminal exponential relaxation of the order parameter. The approach is investigated in detail for the Lennard-Jones model. We find that the kinetic coefficient scales as the inverse square-root of temperature along the high temperature part of the melting line. The practical usability of the method is demonstrated by computing the kinetic coefficient of the elements Na and Si from first principles. A generalized version of the method may be used for computing the rates of crystal nucleation or other rare events.
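    For reference, the spring-like bias field described above is commonly written as a harmonic term added to the potential energy; a sketch of that form, with κ the spring constant, Q the order parameter discriminating between the two phases, and a the anchor point:

    ```latex
    U'(\mathbf{R}) = U(\mathbf{R}) + \frac{\kappa}{2}\,\bigl[Q(\mathbf{R}) - a\bigr]^{2}
    ```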

  9. Methods for the computation of detailed geoids and their accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.; Rummel, R.

    1975-01-01

    Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.

  10. Variational-moment method for computing magnetohydrodynamic equilibria

    SciTech Connect

    Lao, L.L.

    1983-08-01

    A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method since has been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.
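    As a sketch of the spectral representation mentioned above (assuming the standard inverse-coordinate form; the exact series used in a given formulation may differ), the flux-surface shapes are expanded in Fourier series such as:

    ```latex
    R(\rho,\theta) = \sum_{m=0}^{M} R_m(\rho)\cos(m\theta), \qquad
    Z(\rho,\theta) = \sum_{m=1}^{M} Z_m(\rho)\sin(m\theta)
    ```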

  11. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Incorrect estimations are then removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
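    A minimal Python sketch of the two-stage idea (harmonic grouping followed by peak picking); the RTFI front end is omitted, and the array shapes, harmonic count, and threshold are all assumptions:

    ```python
    import numpy as np

    def pitch_energy_spectrum(energy: np.ndarray, n_harmonics: int = 5) -> np.ndarray:
        """Group spectral energy at integer multiples of each candidate pitch
        bin (a simplified stand-in for the RTFI harmonic-grouping step)."""
        n = len(energy)
        scores = np.zeros(n)
        for f0 in range(1, n):
            harmonics = list(range(f0, n, f0))[:n_harmonics]
            scores[f0] = sum(energy[h] for h in harmonics)
        return scores

    def pick_peaks(scores: np.ndarray, threshold: float) -> list:
        """Preliminary pitch estimates: local maxima above a threshold."""
        return [i for i in range(1, len(scores) - 1)
                if scores[i] > threshold
                and scores[i] >= scores[i - 1]
                and scores[i] >= scores[i + 1]]

    spectrum = np.zeros(100)
    spectrum[[10, 20, 30, 40]] = 1.0          # a toy note with f0 at bin 10
    print(pick_peaks(pitch_energy_spectrum(spectrum), threshold=2.0))
    ```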

  12. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

    The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for such problems.

  13. Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS.

    PubMed

    Jalali, Arash; Olabode, Olusegun A; Bell, Christopher M

    2012-01-01

    As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities. PMID:23569644

  14. Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS

    PubMed Central

    Jalali, Arash; Olabode, Olusegun A.; Bell, Christopher M.

    2012-01-01

    As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities. PMID:23569644

  15. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm was proposed. First, the original secret image was encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending one pixel into 2×2 pixels, and each element in M1 and M2' was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying the public-key encryption algorithm, key distribution was facilitated; moreover, compared with the image hiding method based on optical interference, the proposed method may achieve higher robustness by exploiting the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
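    A minimal numpy sketch of the embedding and blind-extraction arithmetic described above; random arrays stand in for the host image and the masks M1 and M2', and the coefficient value is an assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    host = rng.integers(0, 256, size=(64, 64)).astype(float)
    m1 = rng.random((64, 64))    # stand-in for phase-only mask M1
    m2p = rng.random((64, 64))   # stand-in for the RSA-encrypted mask M2'
    alpha = 4.0                  # superimposition coefficient (assumed value)

    # Enlarge the host: each pixel becomes a 2x2 block of identical values.
    stego = np.kron(host, np.ones((2, 2)))
    stego[0::2, 0::2] += alpha * m1    # add M1 to one element of each block
    stego[1::2, 1::2] -= alpha * m2p   # subtract M2' from another element

    # Blind extraction: unmodified copies of each host pixel remain in the
    # block, so the masks are recovered without the original host image.
    m1_rec = (stego[0::2, 0::2] - stego[0::2, 1::2]) / alpha
    m2p_rec = (stego[1::2, 0::2] - stego[1::2, 1::2]) / alpha
    print(np.allclose(m1_rec, m1), np.allclose(m2p_rec, m2p))
    ```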

  16. Computer-aided methods of determining thyristor thermal transients

    SciTech Connect

    Lu, E.; Bronner, G.

    1988-08-01

    An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs.
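    As a sketch of the convolution-integral alternative mentioned above (the thermal impedance curve and load pulse below are invented for illustration, not PPPL data):

    ```python
    import numpy as np

    dt = 0.01                     # time step, s
    t = np.arange(0.0, 5.0, dt)

    # Transient thermal impedance Zth(t) in K/W, modeled here as a sum of
    # exponentials of the kind fitted to manufacturer or measured curves.
    zth = 0.05 * (1 - np.exp(-t / 0.1)) + 0.15 * (1 - np.exp(-t / 1.0))

    # Power loss waveform P(t) in W: a 1 s rectangular pulse of 100 W.
    p = np.where(t < 1.0, 100.0, 0.0)

    # Junction temperature rise by convolution with dZth/dt:
    # dT(t) = integral of P(tau) * Zth'(t - tau) dtau
    dzdt = np.gradient(zth, dt)
    temp_rise = np.convolve(p, dzdt)[: len(t)] * dt
    print(f"peak temperature rise: {temp_rise.max():.2f} K")
    ```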

  17. Novel Methods for Communicating Plasma Science to the General Public

    NASA Astrophysics Data System (ADS)

    Zwicker, Andrew; Merali, Aliya; Wissel, S. A.; Delooper, John

    2012-10-01

    The broader implications of Plasma Science remain an elusive topic that the general public rarely discusses, regardless of their relevance to energy, the environment, and technology. Recently, we have looked beyond print media for methods to reach large numbers of people in creative and informative ways. These have included video, art, images, and music. For example, our submission to the ``What is a Flame?'' contest was ranked in the top 15 out of 800 submissions. Images of plasmas have won 3 out of 5 of the Princeton University ``Art of Science'' competitions. We use a plasma speaker to teach students of all ages about sound generation and plasma physics. We report on the details of each of these, as well as future videos and animations under development.

  18. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  19. SAR/QSAR methods in public health practice

    SciTech Connect

    Demchuk, Eugene; Ruiz, Patricia; Chou, Selene; Fowler, Bruce A.

    2011-07-15

    Methods of (Quantitative) Structure-Activity Relationship ((Q)SAR) modeling play an important and active role in ATSDR programs in support of the Agency mission to protect human populations from exposure to environmental contaminants. They are used for cross-chemical extrapolation to complement the traditional toxicological approach when chemical-specific information is unavailable. SAR and QSAR methods are used to investigate adverse health effects and exposure levels, bioavailability, and pharmacokinetic properties of hazardous chemical compounds. They are applied as a part of an integrated systematic approach in the development of Health Guidance Values (HGVs), such as ATSDR Minimal Risk Levels, which are used to protect populations exposed to toxic chemicals at hazardous waste sites. (Q)SAR analyses are incorporated into ATSDR documents (such as the toxicological profiles and chemical-specific health consultations) to support environmental health assessments, prioritization of environmental chemical hazards, and to improve study design, when filling the priority data needs (PDNs) as mandated by Congress, in instances when experimental information is insufficient. These cases are illustrated by several examples, which explain how ATSDR applies (Q)SAR methods in public health practice.

  20. Applications of meshless methods for damage computations with finite strains

    NASA Astrophysics Data System (ADS)

    Pan, Xiaofei; Yuan, Huang

    2009-06-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied within the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified with experimental data. Computational results reveal that damage which initiates in the interior of specimens extends to the exterior and causes fracture of the specimens, and that damage evolution is fast relative to the whole tensile loading process. The EFG method provides a more stable and robust numerical solution compared with the FEM analysis.

  1. Approximate Quantum Mechanical Methods for Rate Computation in Complex Systems

    NASA Astrophysics Data System (ADS)

    Schwartz, Steven D.

    The last 20 years have seen qualitative leaps in the complexity of chemical reactions that have been studied using theoretical methods. While methodologies for small molecule scattering are still of great importance and under active development [1], two important trends have allowed the theoretical study of the rates of reaction in complex molecules, condensed phase systems, and biological systems. First, there has been the explicit recognition that the type of state to state information obtained by rigorous scattering theory is not only not possible for complex systems, but more importantly, not meaningful. Thus, methodologies have been developed that compute averaged rate data directly from a Hamiltonian. Perhaps the most influential of these approaches has been the correlation function formalisms developed by Bill Miller et al. [2]. While these formal expressions for rate theories are certainly not the only correlation function descriptions of quantum rates [3, 4], these expressions of rates directly in terms of evolution operators, and in their coordinate space representations as Feynman Propagators, have lent themselves beautifully to complex systems because many of the approximation methods that have been devised are for Feynman propagator computation. This fact brings us to the second contributor to the blossoming of these approximate methods, the development of a wide variety of approximate mathematical methods to compute the time evolution of quantum systems. Thus the marriage of these mathematical developments has created the necessary powerful tools needed to probe systems of complexity unimagined just a few decades ago.
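    For context, the correlation function rate expressions referenced above are often quoted in the following form (a standard statement of the flux-flux autocorrelation formula, reproduced here for orientation rather than from this chapter):

    ```latex
    k(T)\,Q_r(T) = \int_0^{\infty} C_{ff}(t)\,dt, \qquad
    C_{ff}(t) = \operatorname{Tr}\!\left[\hat{F}\, e^{i\hat{H}t_c^{*}/\hbar}\, \hat{F}\, e^{-i\hat{H}t_c/\hbar}\right],
    \quad t_c = t - \tfrac{i\hbar\beta}{2}
    ```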

  2. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
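    A minimal SciPy sketch of the combination judged best above, GMRES with (block-)ILU preconditioning; a generic sparse test matrix stands in for the reacting-flow Jacobian:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Generic sparse test matrix standing in for a Newton-step Jacobian.
    n = 100
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Incomplete-LU factorization used as a preconditioner, analogous in
    # spirit to the block-ILU preconditioning discussed in the study.
    ilu = spla.spilu(A)
    M = spla.LinearOperator(A.shape, ilu.solve)

    x, info = spla.gmres(A, b, M=M)
    print("converged" if info == 0 else f"gmres returned info={info}")
    ```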

  3. Practical methods to improve the development of computational software

    SciTech Connect

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-07-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  4. Computing eigenvalues occurring in continuation methods with the Jacobi-Davidson QZ method

    SciTech Connect

    Dorsselaer, J.L.M. van

    1997-12-01

    This paper discusses how the Jacobi-Davidson QZ method can be used to compute the eigenvalues that arise in applications of continuation methods. A Rayleigh-Benard problem is used as an example to demonstrate the efficiency of the Jacobi-Davidson QZ method.

  5. Design and Analysis of Computational Methods for Structural Acoustics

    NASA Astrophysics Data System (ADS)

    Grosh, Karl

    The application of finite element methods to problems in structural acoustics (the vibration of an elastic structure coupled to an acoustic medium) is considered. New methods are developed which yield dramatic improvement in accuracy over the standard Galerkin finite element approach. The goal of the new methods is to decrease the computational burden required to achieve a desired accuracy level at a particular frequency thereby enabling larger scale, higher frequency computations for a given platform. A new class of finite element methods, Galerkin Generalized Least-Squares (GGLS) methods, are developed and applied to model the in vacuo and fluid-loaded vibration response of Reissner-Mindlin plates. Through judicious selection of the design parameters inherent to GGLS methods, this formulation provides a consistent framework for enhancing the accuracy of finite elements. An optimal GGLS method is designed such that the complex wave-number finite element dispersion relations are identical to the analytic relations. Complex wave-number dispersion analysis and numerical experiments demonstrate the dramatic superiority of the new optimal method over the standard finite element approach for coupled and uncoupled plate vibrations. The new method provides for a dramatic decrease in discretization requirements over previous methods. The canonical problem of a baffled, fluid-loaded, finite cylindrical shell is also studied. The finite element formulation for this problem is developed and the results are compared to an analytic solution based on an expansion of the displacement using in vacuo mode shapes. A novel high resolution parameter estimation technique, based on Prony's method, is used to obtain the complex wave-number dispersion relations for the finite structure. The finite element dispersion relations enable the analyst to pinpoint the source of errors and form discretization rules. The stationary phase approximation is used to obtain the dependence of the far field pressure on the surface displacement. This analysis allows for the study of the propagation of errors into the far field as well as the determination of important mechanisms of sound radiation.

  6. Advanced Computational Aeroacoustics Methods for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane (Technical Monitor); Tam, Christopher

    2003-01-01

    Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are, invariably, used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative. Dispersive and dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated, and stability can be improved by adopting an upwinding strategy.

  7. Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

    PubMed Central

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906

  8. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    SciTech Connect

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  9. Computer processing improves hydraulics optimization with new methods

    SciTech Connect

    Gavignet, A.A.; Wick, C.J.

    1987-12-01

    In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
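    A small numpy sketch of the curve-fitting step described above; the rheogram points are invented for illustration, where real wellsite software would fit measured data:

    ```python
    import numpy as np

    # Hypothetical rheogram: shear rate (1/s) vs. shear stress (Pa)
    # measured for a drilling fluid over a wide range of shear rates.
    shear_rate = np.array([5.1, 10.2, 170.0, 340.0, 511.0, 1022.0])
    shear_stress = np.array([3.2, 4.5, 14.8, 21.6, 27.9, 41.3])

    # Fit the flow curve with a polynomial (here in log-log coordinates);
    # friction-loss and settling-velocity correlations can then be written
    # as functions of these polynomial coefficients.
    coeffs = np.polyfit(np.log(shear_rate), np.log(shear_stress), deg=2)
    print(coeffs)
    ```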

  10. Characterization of Meta-Materials Using Computational Electromagnetic Methods

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Shin, Joon

    2005-01-01

    An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, the Finite Element Methodology (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane wave incidence. For efficient computation of the reflection and transmission through a meta-material over a wide frequency band, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicholson-Ross method and genetic algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from the knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.

  11. Implementation of an ADI method on parallel computers

    SciTech Connect

    Fatoohi, R.A.; Grosch, C.E.

    1987-06-01

    In this paper the implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are the MPP, an SIMD machine with 16-Kbit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the Flex/32 and Cray/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally conclusions are presented. 10 references.
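    For reference, a self-contained Python statement of the tridiagonal Gaussian elimination (Thomas algorithm) used on the Flex/32 and Cray/2; this is the textbook algorithm, not code from the study:

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system by Gaussian elimination.
        a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1),
        d: right-hand side (n). Returns the solution vector x (n)."""
        n = len(b)
        cp = np.empty(n - 1)
        dp = np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):                 # forward elimination
            m = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / m
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):        # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Example: a small diffusion-like system; the solution is [2, 3, 3, 2].
    x = thomas(np.full(3, -1.0), np.full(4, 2.0), np.full(3, -1.0), np.ones(4))
    print(x)
    ```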

  12. Implementation of an ADI method on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    The implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the FLEX/32 and CRAY/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  13. Implementation of an ADI method on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    In this paper the implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are the MPP, an SIMD machine with 16-Kbit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the Flex/32 and Cray/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally conclusions are presented.

  14. Computational methods. [Calculation of dynamic loading to offshore platforms

    SciTech Connect

    Maeda, H. . Inst. of Industrial Science)

    1993-02-01

    With regard to computational methods for hydrodynamic forces, the identification of marine hydrodynamics in offshore technology is discussed first. General computational methods, the state of the art, and uncertainties in offshore flow problems are then presented, in which developed, developing and undeveloped problems are categorized; future work follows. Marine hydrodynamics consists of water-surface and underwater fluid dynamics. Marine hydrodynamics covers not only hydrodynamics proper but also aerodynamics, such as wind load or current-wave-wind interaction; hydrodynamics such as cavitation and underwater noise; multi-phase flow, such as two-phase flow in pipes, air bubbles in water, or surface and internal waves; and magneto-hydrodynamics, such as propulsion due to superconductivity. Among them, two key words are singled out for the identification of marine hydrodynamics in offshore technology: free surface and vortex shedding.

  15. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
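    A minimal Python sketch of importance sampling for a small failure probability; a fixed sampling shift stands in for the adaptive, incremental updating of the sampling domain that the AIS method performs, and the limit-state function is hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def g(x):
        """Hypothetical limit state: failure when g(x) < 0."""
        return 6.0 - x.sum(axis=1)

    # Sample from normals shifted toward an approximate failure region
    # rather than from the original standard normals.
    n, dim, shift = 100_000, 2, 3.0
    x = rng.normal(loc=shift, scale=1.0, size=(n, dim))

    # Weight = joint PDF of the original variables / sampling PDF
    # (normalization constants cancel for these two normal densities).
    log_w = -0.5 * (x ** 2).sum(axis=1) + 0.5 * ((x - shift) ** 2).sum(axis=1)
    pf = np.mean((g(x) < 0) * np.exp(log_w))
    print(f"estimated failure probability: {pf:.3e}")
    ```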

  16. A hierarchical method for molecular docking using cloud computing.

    PubMed

    Kang, Ling; Guo, Quan; Wang, Xicheng

    2012-11-01

    Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small molecular databases using cloud computing is proposed. This method is called the hierarchical method for molecular docking and can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on the different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs. PMID:23017886

  17. Improved diffraction computation with a hybrid C-RCWA-method

    NASA Astrophysics Data System (ADS)

    Bischoff, Joerg

    2009-03-01

    The Rigorous Coupled Wave Approach (RCWA) is acknowledged as a well established diffraction simulation method in electro-magnetic computing. Its two most essential applications in the semiconductor industry are in optical scatterometry and optical lithography simulation. In scatterometry, it is the standard technique to simulate spectra or diffraction responses for gratings to be characterized. In optical lithography simulation, it is an effective alternative to supplement or even to replace the FDTD for the calculation of light diffraction from thick masks as well as from wafer topographies. Unfortunately, the RCWA shows some serious disadvantages, particularly for the modelling of grating profiles with shallow slopes and of multilayer stacks with many layers, such as extreme UV masks with a large number of quarter-wave layers. Here, the slicing may become a nightmare and the computation costs may increase dramatically. Moreover, the accuracy suffers due to the inadequate staircase approximation of the slicing in conjunction with the boundary conditions in TM polarization. On the other hand, the Chandezon Method (C-Method) solves all these problems in a very elegant way; however, it fails for binary patterns or gratings with very steep profiles, where the RCWA works excellently. Therefore, we suggest a combination of both methods as plug-ins in the same scattering matrix coupling frame. The improved performance and the advantages of this hybrid C-RCWA-Method over the individual methods are shown with some relevant examples.

  18. On computer-intensive simulation and estimation methods for rare-event analysis in epidemic models.

    PubMed

    Clémençon, Stéphan; Cousien, Anthony; Felipe, Miraine Dávila; Tran, Viet Chi

    2015-12-10

    This article focuses, in the context of epidemic models, on rare events that may possibly correspond to crisis situations from the perspective of public health. In general, no close analytic form for their occurrence probabilities is available, and crude Monte Carlo procedures fail. We show how recent intensive computer simulation techniques, such as interacting branching particle methods, can be used for estimation purposes, as well as for generating model paths that correspond to realizations of such events. Applications of these simulation-based methods to several epidemic models fitted from real datasets are also considered and discussed thoroughly. PMID:26242476

  19. Informed public choices for low-carbon electricity portfolios using a computer decision tool.

    PubMed

    Mayer, Lauren A Fleishman; Bruine de Bruin, Wändi; Morgan, M Granger

    2014-04-01

    Reducing CO2 emissions from the electricity sector will likely require policies that encourage the widespread deployment of a diverse mix of low-carbon electricity generation technologies. Public discourse informs such policies. To make informed decisions and to productively engage in public discourse, citizens need to understand the trade-offs between electricity technologies proposed for widespread deployment. Building on previous paper-and-pencil studies, we developed a computer tool that aimed to help nonexperts make informed decisions about the challenges faced in achieving a low-carbon energy future. We report on an initial usability study of this interactive computer tool. After providing participants with comparative and balanced information about 10 electricity technologies, we asked them to design a low-carbon electricity portfolio. Participants used the interactive computer tool, which constrained portfolio designs to be realistic and yield low CO2 emissions. As they changed their portfolios, the tool updated information about projected CO2 emissions, electricity costs, and specific environmental impacts. As in the previous paper-and-pencil studies, most participants designed diverse portfolios that included energy efficiency, nuclear, coal with carbon capture and sequestration, natural gas, and wind. Our results suggest that participants understood the tool and used it consistently. The tool may be downloaded from http://cedmcenter.org/tools-for-cedm/informing-the-public-about-low-carbon-technologies/ . PMID:24564708

  20. Computational Catalysis Using the Artificial Force Induced Reaction Method.

    PubMed

    Sameera, W M C; Maeda, Satoshi; Morokuma, Keiji

    2016-04-19

    The artificial force induced reaction (AFIR) method in the global reaction route mapping (GRRM) strategy is an automatic approach to explore all important reaction paths of complex reactions. Most traditional methods in computational catalysis require guess reaction paths. On the other hand, the AFIR approach locates local minima (LMs) and transition states (TSs) of reaction paths without a guess, and therefore finds unanticipated as well as anticipated reaction paths. The AFIR method has been applied to multicomponent organic reactions, such as the aldol reaction, Passerini reaction, Biginelli reaction, and phase-transfer catalysis. In the presence of several reactants, many equilibrium structures are possible, leading to a number of reaction pathways. The AFIR method in the GRRM strategy determines all of the important equilibrium structures and subsequent reaction paths systematically. As the AFIR search is fully automatic, exhaustive trial-and-error and guess-and-check processes by the user can be eliminated. At the same time, the AFIR search is systematic, and therefore a more accurate and comprehensive description of the reaction mechanism can be determined. The AFIR method has been used for the study of full catalytic cycles and reaction steps in transition metal catalysis, such as cobalt-catalyzed hydroformylation and iron-catalyzed carbon-carbon bond formation reactions in aqueous media. Some AFIR applications have targeted the selectivity-determining step of transition-metal-catalyzed asymmetric reactions, including stereoselective water-tolerant lanthanide Lewis acid-catalyzed Mukaiyama aldol reactions. In terms of establishing the selectivity of a reaction, systematic sampling of the transition states is critical. In this direction, AFIR is very useful for performing a systematic and automatic determination of TSs. Given a comprehensive description of the transition states, the selectivity of the reaction can be calculated more accurately. For relatively large molecular systems, the computational cost of AFIR searches can be reduced by using the ONIOM(QM:QM) or ONIOM(QM:MM) methods. In common practice, density functional theory (DFT) with a relatively small basis set is used for the high-level calculation, while a semiempirical approach or a force field description is used for the low-level calculation. After approximate LMs and TSs are determined, standard computational methods (e.g., DFT with a large basis set) are used for the full molecular system to determine the true LMs and TSs and to rationalize the reaction mechanism and selectivity of the catalytic reaction. The examples in this Account demonstrate that the AFIR method is a powerful approach for accurate prediction of the reaction mechanisms and selectivities of complex catalytic reactions. Therefore, the AFIR approach in the GRRM strategy is very useful for computational catalysis. PMID:27023677
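
    The essence of AFIR, adding an artificial force term to the potential energy and following the minimum of the modified surface so that fragments are pushed over reaction barriers, can be illustrated on a one-dimensional toy potential. The potential, the force-parameter range and the naive descent below are invented purely for illustration; the real method works on full ab initio surfaces with a weighted sum of interatomic distances:

```python
import numpy as np

def energy(r):
    """Toy 1D potential: product well at r = 1, approach barrier near r = 2."""
    return -5.0 * np.exp(-8.0 * (r - 1.0) ** 2) + 3.0 * np.exp(-6.0 * (r - 2.0) ** 2)

def afir_path(r0=3.5, alphas=np.linspace(0.25, 8.0, 32), step=1e-3):
    """Minimize F(r) = E(r) + alpha*r for growing artificial force alpha,
    restarting each time from the previous minimizer (AFIR-style walk).
    The highest E(r) met along the way is a crude barrier estimate."""
    r, visited = r0, [r0]
    for a in alphas:
        f = lambda x: energy(x) + a * x
        while True:                      # naive nearest-local-minimum descent
            if f(r - step) < f(r):
                r -= step
            elif f(r + step) < f(r):
                r += step
            else:
                break
            visited.append(r)
    e = [energy(v) for v in visited]
    i = int(np.argmax(e))
    return visited[i], e[i]              # approximate TS location and energy

print(afir_path())   # ~ (2.0, 3.0) for this toy surface
```

    Once the artificial force exceeds the steepest slope of the barrier, the walker slides into the product well; the maximum true energy along the visited path then brackets the transition state, which is afterwards refined by standard optimizations in the real workflow.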

  1. Integration of viscous effects into inviscid computational methods

    NASA Technical Reports Server (NTRS)

    Katz, Joseph

    1990-01-01

    A variety of practical fluid dynamic problems related to the low-speed, high Reynolds number flow over aircraft and ground vehicles fall in a category where some simplified mathematical models become applicable. This provides the fluid dynamicists with a more economical computational tool, compared to the alternative solution of the Navier Stokes equations. The objective was to provide a brief survey of some of the viscous boundary layer solution methods and to propose a method for coupling between the inviscid outer flow and the viscous boundary layer solutions. Results of this survey and details of the viscous/inviscid flow coupling efforts are presented.

  2. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1971-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single-tube forced-flow boiler with inserts are given.
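
    In the same spirit, though not the paper's exact performance measure or minimizer, a transfer function can be fitted to frequency-response data by nonlinear least squares with a penalty that discourages unstable pole configurations:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "measured" frequency response of an unknown second-order system
w = np.logspace(-1, 2, 60)                       # rad/s
s = 1j * w
true = 5.0 / (s**2 + 2.0 * s + 4.0)
meas = true * (1 + 0.02 * np.random.randn(w.size))

def residuals(p):
    b0, a1, a0 = p
    model = b0 / (s**2 + a1 * s + a0)
    err = np.concatenate([(model - meas).real, (model - meas).imag])
    # Penalize coefficient signs that would put poles in the right half-plane
    penalty = 100.0 * max(0.0, -a1) + 100.0 * max(0.0, -a0)
    return np.append(err, penalty)

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0])
print("identified b0, a1, a0:", fit.x)            # ~ [5, 2, 4]
```

    Trying candidate model structures then amounts to swapping the parameterization inside `residuals` and re-running the fit until the modeled locus matches the measured one.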

  3. Public involvement in multi-objective water level regulation development projects-evaluating the applicability of public involvement methods

    SciTech Connect

    Väntänen, Ari. E-mail: armiva@utu.fi; Marttunen, Mika. E-mail: Mika.Marttunen@ymparisto.fi

    2005-04-15

    Public involvement is a process that involves the public in the decision making of an organization, for example a municipality or a corporation. It has developed into a widely accepted and recommended policy in environment-altering projects. The EU Water Framework Directive (WFD) came into force in 2000 and stresses the importance of public involvement in composing river basin management plans. Therefore, the need to develop public involvement methods for different situations and circumstances is evident. This paper describes how various public involvement methods have been applied in a development project involving the most heavily regulated lake in Finland. The objective of the project was to assess the positive and negative impacts of regulation and to find possibilities for alleviating the adverse impacts on recreational use and the aquatic ecosystem. An exceptional effort was made towards public involvement, which was closely connected to planning and decision making. The applied methods were (1) steering group work, (2) survey, (3) dialogue, (4) theme interviews, (5) public meeting and (6) workshops. The information gathered using these methods was utilized in different stages of the project, e.g., in identifying the regulation impacts, comparing alternatives and compiling the recommendations for regulation development. After describing our case and the results from the applied public involvement methods, we will discuss our experiences and the feedback from the public. We will also critically evaluate our own success in coping with public involvement challenges. In addition, we present general recommendations for dealing with these problematic issues based on our experiences, which provide new insights for applying various public involvement methods in multi-objective decision making projects.

  4. A literature review of neck pain associated with computer use: public health implications

    PubMed Central

    Green, Bart N

    2008-01-01

    Prolonged use of computers during daily work activities and recreation is often cited as a cause of neck pain. This review of the literature identifies public health aspects of neck pain as associated with computer use. While some retrospective studies support the hypothesis that frequent computer operation is associated with neck pain, few prospective studies reveal causal relationships. Many risk factors are identified in the literature. Primary prevention strategies have largely been confined to addressing environmental exposure to ergonomic risk factors, since to date, no clear cause for this work-related neck pain has been acknowledged. Future research should include identifying causes of work-related neck pain, so that appropriate primary prevention strategies may be developed and policy recommendations pertaining to prevention may be made. PMID:18769599

  5. Comparison of different methods for shielding design in computed tomography.

    PubMed

    Ciraj-Bjelac, O; Arandjic, D; Kosutic, D

    2011-09-01

    The purpose of this work is to compare different methods for shielding design calculations in computed tomography (CT). The BIR-IPEM (British Institute of Radiology and Institute of Physics and Engineering in Medicine) and NCRP (National Council on Radiation Protection) methods were used for shielding thickness calculation. Scattered dose levels and calculated barrier thicknesses were also compared with those obtained by scatter dose measurements in the vicinity of a dedicated CT unit. The minimal requirement for protective barriers based on the BIR-IPEM method ranged between 1.1 and 1.4 mm of lead, demonstrating underestimation of up to 20 % and overestimation of up to 30 % when compared with thicknesses based on measured dose levels. For the NCRP method, calculated thicknesses were 33 % higher (27-42 %). BIR-IPEM-based results were comparable with values based on scattered dose measurements, while results obtained using the NCRP methodology demonstrated an overestimation of the minimal required barrier thickness. PMID:21743070
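
    Both calculation routes ultimately reduce to finding the lead thickness whose broad-beam transmission matches the required barrier transmission. A sketch under the widely used Archer transmission model is given below; the fitting coefficients and dose figures are placeholders, not the published BIR-IPEM or NCRP values, which must be used for a real design:

```python
import math

# Archer broad-beam model: B(x) = [(1 + b/a) exp(a*g*x) - b/a]^(-1/g)
# Coefficients for lead at CT scatter energies are PLACEHOLDERS here.
a, b, g = 2.2, 5.7, 0.55          # illustrative per-mm Archer coefficients

def thickness_for_transmission(B):
    """Invert the Archer equation for the barrier thickness x (mm lead)."""
    return (1.0 / (a * g)) * math.log((B**(-g) + b / a) / (1.0 + b / a))

# Required transmission: design goal P over unshielded weekly scatter air
# kerma K at the barrier location (both numbers purely illustrative).
P, K = 0.02, 0.8                   # mGy/week
B = P / K
print(f"required transmission {B:.3g} -> {thickness_for_transmission(B):.2f} mm Pb")
```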

  6. On a method computing transient wave propagation in ionospheric regions

    NASA Technical Reports Server (NTRS)

    Gray, K. G.; Bowhill, S. A.

    1978-01-01

    A consequence of an exoatmospheric nuclear burst is an electromagnetic pulse (EMP) radiated from it. In a region far enough away from the burst, where nonlinear effects can be ignored, the EMP can be represented by a large-amplitude narrow-time-width plane-wave pulse. If the ionosphere lies between the origin and destination of the EMP, frequency dispersion can cause significant changes in the original pulse upon reception. A method of computing these dispersive effects of transient wave propagation is summarized. The method described is different from the standard transform techniques and provides physical insight into the transient wave process. The method, although exact, can be used to approximate the early-time transient response of an ionospheric region by a simple integration, with only explicit knowledge of the electron density, electron collision frequency, and electron gyrofrequency required. As an illustration of the method, it is applied to a simple example and contrasted with the corresponding transform solution.

  7. INTERVAL SAMPLING METHODS AND MEASUREMENT ERROR: A COMPUTER SIMULATION

    PubMed Central

    Wirth, Oliver; Slaven, James; Taylor, Matthew A.

    2015-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method’s inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. PMID:24127380
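
    A condensed version of such a simulation, scoring the same random event stream with the three interval methods and comparing against the true duration proportion (all parameters illustrative), is sketched below:

```python
import numpy as np

rng = np.random.default_rng(1)
T, width = 600.0, 10.0                       # observation period, interval width (s)
edges = np.arange(0.0, T + width, width)

# Random target events: uniform onsets, exponential durations (illustrative).
onsets = rng.uniform(0, T, 30)
offsets = np.minimum(onsets + rng.exponential(4.0, 30), T)

def active(t):
    """True where any event is occurring at time(s) t."""
    t = np.atleast_1d(t)
    return ((onsets[None, :] <= t[:, None]) & (offsets[None, :] >= t[:, None])).any(1)

grid = np.linspace(0, T, 60001)
true_prop = active(grid).mean()              # "ground truth" duration proportion

mts = np.mean([active(e)[0] for e in edges[1:]])              # momentary time sampling
pir = np.mean([active(np.linspace(a, b, 50)).any()            # partial-interval recording
               for a, b in zip(edges[:-1], edges[1:])])
wir = np.mean([active(np.linspace(a, b, 50)).all()            # whole-interval recording
               for a, b in zip(edges[:-1], edges[1:])])

print(f"true {true_prop:.3f}  MTS {mts:.3f}  PIR {pir:.3f} (over)  WIR {wir:.3f} (under)")
```

    Repeating the experiment across combinations of interval width and event duration reproduces the familiar pattern: partial-interval recording overestimates, whole-interval recording underestimates, and momentary time sampling is roughly unbiased.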

  8. Graphics processing unit acceleration of computational electromagnetic methods

    NASA Astrophysics Data System (ADS)

    Inman, Matthew

    The use of Graphics Processing Units (GPUs) for scientific applications has been evolving and expanding for the past decade. GPUs provide an alternative to the CPU in the creation and execution of the numerical codes that are often relied upon to perform simulations in computational electromagnetics. While originally designed purely to display graphics on the user's monitor, GPUs today are essentially powerful floating point co-processors that can be programmed not only to render complex graphics, but also to perform the complex mathematical calculations often encountered in scientific computing. Currently the GPUs being produced often contain hundreds of separate cores able to access large amounts of high-speed dedicated memory. By utilizing the power offered by such a specialized processor, it is possible to drastically speed up the calculations required in computational electromagnetics. This increase in speed allows for the use of GPU-based simulations in a variety of situations in which computational time has heretofore been a limiting factor, such as educational courses. Teaching electromagnetics often relies upon simple example problems because of the simulation times needed to analyze more complex ones. The use of GPU-based simulations will be shown to allow demonstrations of more advanced problems than previously possible by adapting the methods for use on the GPU. Modules will be developed for a wide variety of teaching situations, utilizing the speed of the GPU to demonstrate various techniques and ideas previously unrealizable.

  9. Evolutionary computational methods to predict oral bioavailability QSPRs.

    PubMed

    Bains, William; Gilbert, Richard; Sviridenko, Lilya; Gascon, Jose-Miguel; Scoffin, Robert; Birchall, Kris; Harvey, Inman; Caldwell, John

    2002-01-01

    This review discusses evolutionary and adaptive methods for predicting oral bioavailability (OB) from chemical structure. Genetic Programming (GP), a specific form of evolutionary computing, is compared with some other advanced computational methods for OB prediction. The results show that classifying drugs into 'high' and 'low' OB classes on the basis of their structure alone is a solvable problem, and initial models are already producing output that would be useful for pharmaceutical research. The results also suggest that quantitative prediction of OB will be tractable. Critical aspects of the solution will involve the use of techniques that can: (i) handle problems with a very large number of variables (high dimensionality); (ii) cope with 'noisy' data; and (iii) implement binary choices to sub-classify molecules whose behavior is qualitatively different. Detailed quantitative predictions will emerge from more refined models that are hybrids derived from mechanistic models of the biology of oral absorption and the power of advanced computing techniques to predict the behavior of the components of those models in silico. PMID:11865672

  10. Approximation method to compute domain related integrals in structural studies

    NASA Astrophysics Data System (ADS)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2015-11-01

    Various engineering calculi use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which direct calculus relations exist; for instance, in strength of materials the bending moment may be computed at discrete points by graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the area of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of this work is to present our studies on the calculus of integrals over transverse section domains, computer-aided solutions and a generalizing method. The aim of our research is to create general computer-based methods to execute the calculi in structural studies. Thus, we define a Boolean algebra which operates on 'simple'-shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every 'simple' shape (-1 for the shapes to be subtracted). By a 'simple' or 'basic' shape we mean either a shape for which direct calculus relations exist, or a domain whose frontier is approximated by known functions, the corresponding calculus being carried out by an algorithm. The 'basic' shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, rectangles, ellipses and domains whose frontiers are approximated by spline functions were included in the libraries of 'basic' shapes. The domain triangularization methods suggested the triangle as another 'basic' shape. The subsequent phase was to deduce the exact relations for the calculus of the integrals associated with transverse section problems. Thus, we use a virtual rectangle framing the triangle, which generates supplementary right-angled triangles. The sign of the rectangle and the signs of the supplementary triangles are conditioned by the sign of the initial triangle. In this way, a generally located triangle for which we have direct calculus relations may be used to generate the discretization of any domain in transverse-section-associated integrals. A significant consequence of the paper is the opportunity to create modern computer-aided engineering applications for structural studies which use an intelligent applied mathematics background, modern informatics technologies and advanced computing techniques, such as parallelization of the calculus.
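
    The signed 'basic'-shape algebra can be made concrete with Green's-theorem formulas for polygonal shapes; the sketch below (ours, not the authors' library) composes a transverse section as a list of signed polygons and accumulates the section integrals:

```python
import numpy as np

def polygon_props(verts):
    """Area, first moments (Sx = int y dA, Sy = int x dA) and Ixx = int y^2 dA
    of a counter-clockwise polygon, via Green's theorem (exact for straight edges)."""
    x, y = np.asarray(verts, float).T
    x1, y1 = np.roll(x, -1), np.roll(y, -1)
    c = x * y1 - x1 * y                       # shoelace cross terms
    A   = c.sum() / 2.0
    Sy  = ((x + x1) * c).sum() / 6.0          # int x dA
    Sx  = ((y + y1) * c).sum() / 6.0          # int y dA
    Ixx = ((y**2 + y * y1 + y1**2) * c).sum() / 12.0
    return A, Sx, Sy, Ixx

def section(shapes):
    """Compose signed 'basic' shapes: shapes = [(sign, vertices), ...]."""
    totals = np.zeros(4)
    for sign, verts in shapes:
        totals += sign * np.array(polygon_props(verts))
    return dict(zip(("A", "Sx", "Sy", "Ixx"), totals))

# Rectangle 4 x 2 with a 1 x 1 square hole: add the rectangle, subtract the hole.
rect = [(0, 0), (4, 0), (4, 2), (0, 2)]
hole = [(1.5, 0.5), (2.5, 0.5), (2.5, 1.5), (1.5, 1.5)]
print(section([(+1, rect), (-1, hole)]))      # A = 8 - 1 = 7, etc.
```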

  11. A numerical method to compute interior transmission eigenvalues

    NASA Astrophysics Data System (ADS)

    Kleefeld, Andreas

    2013-10-01

    In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view, existing methods are very expensive and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated close together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than that of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber-Krahn type inequalities for larger transmission eigenvalues that are not yet available.
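
    The flavor of the contour-integral approach can be conveyed with Beyn's algorithm for a generic holomorphic eigenvalue problem T(z)v = 0; this is a toy sketch of that generic technique, not the paper's boundary-integral-equation implementation:

```python
import numpy as np

def beyn(T, center, radius, n, m=8, N=64, tol=1e-8):
    """Beyn contour-integral method: eigenvalues of the holomorphic n x n
    matrix family T(z) lying inside the circle |z - center| = radius."""
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n, m))
    A0 = np.zeros((n, m), complex)
    A1 = np.zeros((n, m), complex)
    for k in range(N):                        # trapezoidal rule on the circle
        z = center + radius * np.exp(2j * np.pi * k / N)
        dz = radius * np.exp(2j * np.pi * k / N) * (2j * np.pi / N)
        X = np.linalg.solve(T(z), V)          # T(z)^{-1} V
        A0 += X * dz
        A1 += z * X * dz
    A0 /= 2j * np.pi
    A1 /= 2j * np.pi
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))           # numerical rank = eigenvalue count
    U, s, Wh = U[:, :r], s[:r], Wh[:r]
    B = U.conj().T @ A1 @ Wh.conj().T / s     # reduced linear eigenproblem
    return np.linalg.eigvals(B)

# Toy check: for T(z) = A - z*I, the eigenvalues of A inside are recovered.
A = np.diag([1.0, 2.0, 5.0])
print(beyn(lambda z: A - z * np.eye(3), center=1.5, radius=1.0, n=3))
```

    Because only solves with T(z) on the contour are needed, the same machinery accepts a boundary-integral discretization of the transmission problem in place of the toy matrix family, which is the paper's setting.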

  12. New developments in the multiscale hybrid energy density computational method

    NASA Astrophysics Data System (ADS)

    Min, Sun; Shanying, Wang; Dianwu, Wang; Chongyu, Wang

    2016-01-01

    Further developments in the hybrid multiscale energy density method are proposed on the basis of our previous papers. The key points are as follows. (i) A theoretical method for the determination of the weight parameter in the energy coupling equation of the transition region in the multiscale model is given via the construction of underdetermined equations. (ii) By applying the developed mathematical method, the weight parameters have been determined and used to treat some problems in homogeneous charge density systems, which are directly related to multiscale science. (iii) A theoretical algorithm has also been presented for treating non-homogeneous systems of charge density. The key to these theoretical computational methods is the decomposition of the electrostatic energy in the total energy of density functional theory for probing the spanning characteristic at the atomic scale, layer by layer, by which the choice of chemical elements and the defect complex effect can be understood deeply. (iv) The numerical computational program and design have also been presented. Project supported by the National Basic Research Program of China (Grant No. 2011CB606402) and the National Natural Science Foundation of China (Grant No. 51071091).

  13. Computation of multi-material interactions using point method

    SciTech Connect

    Zhang, Duan Z; Ma, Xia; Giguere, Paul T

    2009-01-01

    Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are used for problems of solid deformation, state variables such as stress and damage need to be advected, causing significant numerical diffusion error. When Lagrangian methods are used for problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error, and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method, Eulerian meshes stay fixed and Lagrangian particles move through the Eulerian meshes during the material deformation. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun. v. 87, pp. 236) provides a mathematical foundation for an improved version of the PIC method, the material point method (MPM). The unique advantages of the MPM have led to many attempts to apply the method to problems involving the interaction of different materials, such as fluid-structure interactions. These problems are multiphase flow or multimaterial deformation problems, in which pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the same scheme used in Eulerian methods for multiphase flows is used to calculate the pressure. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to a higher order of accuracy in the sense of weak solutions for the continuity equations. Numerical examples are given to demonstrate the new scheme.
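
    The structure of an MPM update (particles scatter mass and momentum to a fixed grid, the grid momentum equation is advanced, and particles gather back velocity and strain) is visible in a one-dimensional elastic-bar sketch; the material values and discretization choices below are illustrative toys, not the cited scheme:

```python
import numpy as np

# 1D MPM: axial vibration of an elastic bar fixed at both ends (toy setup).
E, rho, L, ncell, ppc = 100.0, 1.0, 1.0, 20, 2
dx = L / ncell
xp = (np.arange(ncell * ppc) + 0.5) * (dx / ppc)    # material point positions
mp = np.full(xp.size, rho * L / xp.size)            # point masses
vp = 0.1 * np.sin(np.pi * xp)                       # initial velocity field
sp = np.zeros_like(xp)                              # axial stress per point
dt = 0.2 * dx / np.sqrt(E / rho)

for step in range(500):
    i = np.minimum((xp / dx).astype(int), ncell - 1)
    wr = xp / dx - i                                # linear hat-function weights
    wl = 1.0 - wr
    mg = np.zeros(ncell + 1); pg = np.zeros(ncell + 1); fg = np.zeros(ncell + 1)
    # particle -> grid scatter: mass, momentum, internal force -V_p s_p dN/dx
    np.add.at(mg, i, wl * mp);      np.add.at(mg, i + 1, wr * mp)
    np.add.at(pg, i, wl * mp * vp); np.add.at(pg, i + 1, wr * mp * vp)
    Vp = mp / rho
    np.add.at(fg, i, sp * Vp / dx); np.add.at(fg, i + 1, -sp * Vp / dx)
    mg = np.maximum(mg, 1e-12)
    vg_old = pg / mg
    vg = vg_old + dt * fg / mg                      # grid momentum update
    vg[0] = vg[-1] = vg_old[0] = vg_old[-1] = 0.0   # fixed-end boundary condition
    dvg = vg - vg_old
    # grid -> particle gather: FLIP velocity increment, position, stress update
    vp += wl * dvg[i] + wr * dvg[i + 1]
    xp += dt * (wl * vg[i] + wr * vg[i + 1])
    sp += E * dt * (vg[i + 1] - vg[i]) / dx         # elastic stress from strain rate

print("kinetic energy after 500 steps:", 0.5 * np.sum(mp * vp ** 2))
```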

  14. An analytical method for computing atomic contact areas in biomolecules.

    PubMed

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc. PMID:22965816
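
    The elementary geometric ingredient, the spherical cap that an intersecting neighbor cuts on the surface of atom i, has a closed form. The sketch below computes raw cap areas only; it deliberately omits the spherical Laguerre Voronoi partitioning the paper uses to avoid double-counting regions covered by several neighbors:

```python
import numpy as np

def cap_area(ci, ri, cj, rj):
    """Area of the cap that sphere j cuts on the surface of sphere i.
    Degenerate cases (disjoint, nested, concentric) return 0 for simplicity."""
    d = np.linalg.norm(np.asarray(cj, float) - np.asarray(ci, float))
    if d >= ri + rj or d + min(ri, rj) <= max(ri, rj) or d == 0.0:
        return 0.0
    # signed distance from center i to the plane of the intersection circle
    x = (d * d + ri * ri - rj * rj) / (2.0 * d)
    h = ri - x                         # cap height on sphere i
    return 2.0 * np.pi * ri * h        # lateral area of a spherical cap

# Two unit spheres at distance 1.2: each cuts a cap on the other.
print(cap_area((0, 0, 0), 1.0, (1.2, 0, 0), 1.0))
```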

  15. COMSAC: Computational Methods for Stability and Control. Part 2

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  16. Consensus methods: review of original methods and their main alternatives used in public health

    PubMed Central

    Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid

    2008-01-01

    Background: Consensus-based studies are increasingly used as decision-making methods, for they have a lower production cost than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles and methods of the four main methods (Delphi, nominal group, consensus development conference and RAND/UCLA) and their use as it appears in peer-reviewed publications and validation studies published in the healthcare literature. Methods: A bibliographic search was performed in Pubmed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal and Francis. Keywords, headings and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri and used in the literature search. A search with the same terms and expressions was performed on the Internet using Google Scholar. Results: All methods, precisely described in the literature, are based on common basic principles such as definition of the subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described, but few validation studies have been implemented. Failure to implement these basic principles and failure to describe the methods used to reach the consensus were both frequent shortcomings that raise suspicion regarding the validity of consensus methods. Conclusion: When applied to a new domain with important consequences for decision making, a consensus method should first be validated. PMID:19013039

  17. PREFACE: Theory, Modelling and Computational methods for Semiconductors

    NASA Astrophysics Data System (ADS)

    Migliorato, Max; Probert, Matt

    2010-04-01

    These conference proceedings contain the written papers of the contributions presented at the 2nd International Conference on: Theory, Modelling and Computational methods for Semiconductors. The conference was held at the St Williams College, York, UK on 13th-15th Jan 2010. The previous conference in this series took place in 2008 at the University of Manchester, UK. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in Semiconductor science and technology, where there is a substantial potential for time saving in R&D. The development of high speed computer architectures is finally allowing the routine use of accurate methods for calculating the structural, thermodynamic, vibrational and electronic properties of semiconductors and their heterostructures. This workshop ran for three days, with the objective of bringing together UK and international leading experts in the field of theory of group IV, III-V and II-VI semiconductors together with postdocs and students in the early stages of their careers. The first day focused on providing an introduction and overview of this vast field, aimed particularly at students at this influential point in their careers. We would like to thank all participants for their contribution to the conference programme and these proceedings. We would also like to acknowledge the financial support from the Institute of Physics (Computational Physics group and Semiconductor Physics group), the UK Car-Parrinello Consortium, Accelrys (distributors of Materials Studio) and Quantumwise (distributors of Atomistix). The Editors Acknowledgements Conference Organising Committee: Dr Matt Probert (University of York) and Dr Max Migliorato (University of Manchester) Programme Committee: Dr Marco Califano (University of Leeds), Dr Jacob Gavartin (Accelrys Ltd, Cambridge), Dr Stanko Tomic (STFC Daresbury Laboratory), Dr Gabi Slavcheva (Imperial College London) Proceedings edited and compiled by Dr Max Migliorato and Dr Matt Probert

  18. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications that compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to the training of multi-layer perceptron neural networks for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm, probably the most efficient gradient-based method in terms of speed and success rate. The Annapolis River catchment was selected as the study area due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and the presence of river ice, conditions which make flow forecasting more troublesome. The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural network training turns out to be superior to the other Evolutionary Computation-based algorithms. The Levenberg-Marquardt optimization must be considered the most efficient due to its speed; its drawback, the possibility of becoming stuck in a poor local optimum, can be overcome by applying a multi-start approach.
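
    A compact illustration of the classical DE/rand/1/bin scheme training a one-hidden-layer perceptron on a synthetic regression target; the data, network size and hyperparameters below are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))                      # synthetic inputs
y = np.sin(X @ np.array([2.0, -1.0, 0.5]))            # synthetic target

H = 6                                                  # hidden units
dim = 3 * H + H + H + 1                                # weights + biases

def mse(theta):
    W1 = theta[:3*H].reshape(3, H); b1 = theta[3*H:4*H]
    w2 = theta[4*H:5*H];            b2 = theta[-1]
    pred = np.tanh(X @ W1 + b1) @ w2 + b2
    return np.mean((pred - y) ** 2)

# DE/rand/1/bin with greedy selection
NP, F, CR = 40, 0.7, 0.9
pop = rng.uniform(-1, 1, (NP, dim))
cost = np.array([mse(p) for p in pop])
for gen in range(300):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                # at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        tc = mse(trial)
        if tc <= cost[i]:
            pop[i], cost[i] = trial, tc
print("best training MSE:", cost.min())
```

    Gradient-free population updates like this cope with the many local minima of the network's error surface, at the cost of many more function evaluations per improvement than Levenberg-Marquardt needs, which is the trade-off the study quantifies.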

  19. On implicit Runge-Kutta methods for parallel computations

    NASA Technical Reports Server (NTRS)

    Keeling, Stephen L.

    1987-01-01

    Implicit Runge-Kutta methods which are well-suited for parallel computations are characterized. It is claimed that such methods are, first of all, those for which the associated rational approximation to the exponential has distinct poles, and these are called multiply implicit (MIRK) methods. Also, because of the so-called order reduction phenomenon, there is reason to require that these poles be real. Then, it is proved that a necessary condition for a q-stage, real MIRK to be A₀-stable with maximal order q + 1 is that q = 1, 2, 3, or 5. Nevertheless, it is shown that for every positive integer q, there exists a q-stage, real MIRK which is I-stable with order q. Finally, some useful examples of algebraically stable MIRKs are given.
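
    The pole condition is easy to check numerically: the stability function R(z) of an implicit Runge-Kutta method with nonsingular Butcher matrix A has its poles at the reciprocals of the eigenvalues of A, so distinct real poles amount to distinct real eigenvalues of A. A small sketch (the two example tableaux are standard; the helper names are ours):

```python
import numpy as np

def stability_poles(A):
    """Poles of the stability function of an IRK method with Butcher matrix A:
    R(z) has poles where det(I - z A) = 0, i.e. at z = 1/eig(A).
    Assumes A is nonsingular (fully implicit method)."""
    return 1.0 / np.linalg.eigvals(np.asarray(A, float))

def parallel_friendly(A, tol=1e-10):
    """Distinct real poles -> the stage equations decouple across processors."""
    p = stability_poles(A)
    real = np.all(np.abs(p.imag) < tol)
    off_diag = ~np.eye(len(p), dtype=bool)
    distinct = np.min(np.abs(p[:, None] - p[None, :])[off_diag]) > tol
    return real and distinct

gauss2 = [[1/4, 1/4 - np.sqrt(3)/6],          # 2-stage Gauss: complex pole pair
          [1/4 + np.sqrt(3)/6, 1/4]]
dirk2 = [[1/3, 0.0],                           # 2-stage DIRK, unequal diagonal
         [1.0, 1/2]]
print(parallel_friendly(gauss2), parallel_friendly(dirk2))   # False True
```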

  20. Implicit Runge-Kutta methods for parallel computations

    SciTech Connect

    Keeling, S.L.

    1987-09-01

    Implicit Runge-Kutta methods which are well-suited for parallel computations are characterized. It is claimed that such methods are, first of all, those for which the associated rational approximation to the exponential has distinct poles, and these are called multiply implicit (MIRK) methods. Also, because of the so-called order reduction phenomenon, there is reason to require that these poles be real. Then, it is proved that a necessary condition for a q-stage, real MIRK to be A₀-stable with maximal order q + 1 is that q = 1, 2, 3, or 5. Nevertheless, it is shown that for every positive integer q, there exists a q-stage, real MIRK which is I-stable with order q. Finally, some useful examples of algebraically stable MIRKs are given.

  1. Review methods for image segmentation from computed tomography images

    SciTech Connect

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-12-04

    Image segmentation is a challenging process, in which accuracy, automation and robustness are difficult to achieve, especially for medical images. Many segmentation methods can be applied to medical images, but not all of them are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume in order to follow tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of each method, its strengths and the problems incurred are defined and explained. It is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can serve as a guide for researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.

  2. Review methods for image segmentation from computed tomography images

    NASA Astrophysics Data System (ADS)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-12-01

    Image segmentation is a challenging process, in which accuracy, automation and robustness are difficult to achieve, especially for medical images. Many segmentation methods can be applied to medical images, but not all of them are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume in order to follow tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of each method, its strengths and the problems incurred are defined and explained. It is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can serve as a guide for researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.

  3. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    SciTech Connect

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for diseases, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high-order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long-range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computations. We have accomplished these objectives by developing high-order numerical methods for solving the Maxwell equations, such as high-order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, divergence-free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long-range electrostatic interactions in ion channels, we have developed an image-charge-based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up molecular dynamics simulations of transport in biological ion channels.

  4. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...) Compliance with 17 CFR part 49. Any registered swap data repository that accepts and publicly disseminates...-time public reporting. 43.3 Section 43.3 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting....

  5. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) Compliance with 17 CFR part 49. Any registered swap data repository that accepts and publicly disseminates...-time public reporting. 43.3 Section 43.3 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION (CONTINUED) REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting....

  6. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...) Compliance with 17 CFR part 49. Any registered swap data repository that accepts and publicly disseminates...-time public reporting. 43.3 Section 43.3 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting....

  7. Radiation Transport Computation in Stochastic Media: Method and Application

    NASA Astrophysics Data System (ADS)

    Liang, Chao

    Stochastic media, characterized by the stochastic distribution of inclusions in a background medium, are typical radiation transport media encountered in natural or engineering systems. In the community of radiation transport computation, there is a constant demand for accurate and efficient methods that can account for the nature of the stochastic distribution. In this dissertation, we focus on methodology development for radiation transport computation applied to neutronic analyses of nuclear reactor designs characterized by the stochastic distribution of particle fuel. Reactor concepts employing a fuel design consisting of a random heterogeneous mixture of fissile material and non-fissile moderator are constantly proposed. Key physical quantities such as core criticality and power distribution, reactivity control design parameters, depletion and fuel burn-up need to be carefully evaluated. In order to meet these practical requirements, we first need to develop accurate and fast computational methods that can effectively account for the stochastic nature of the double heterogeneity configuration. A Monte Carlo based method, the Chord Length Sampling (CLS) method, is considered a promising method for analyzing such TRISO-type fueled reactors. Although the CLS method was proposed more than two decades ago and much research has been conducted to enhance its applicability, further efforts are still needed to address some key research gaps. (1) There is a general lack of thorough investigation of the factors that give rise to the inaccuracy of the CLS method reported by many researchers. The accuracy of the CLS method depends on the optical and geometric properties of the system, and in some specific scenarios considerable inaccuracies have been reported. However, no study has provided a clear interpretation of the reasons responsible for the inaccuracy in the reported scenarios, and no correction methods have been proposed or developed to improve the accuracy of CLS across all applied scenarios. (2) The previous CLS method only deals with the on-the-fly sampling of fuel particles in analyzing TRISO-type fueled reactors. Within the fuel particle, which consists of a fuel kernel and a coating, conventional Monte Carlo simulation applies. This strategy may not achieve the highest computational efficiency, since extra simulation time is spent tracking neutrons in the coating region, which has a negligible neutronic effect on the overall reactor core performance. This suggests a possible strategy to further increase the computational efficiency by directly sampling fuel kernels on-the-fly in the CLS simulations. To test the new strategy, a new model of the chord length distribution function is needed, which requires new research effort to develop and test. (3) Previous evaluations and applications of the CLS method have been limited to single-type, single-size fuel particle systems, i.e. only one type of fuel particle with constant size is assumed in the fuel zone, which is the case for typical VHTR designs. In practice, however, two or more types of TRISO fuel particles may be loaded in the same fuel zone for different application purposes; e.g., fissile and fertile fuel particles are used together for transmutation purposes in some reactors. Moreover, the fuel particle size may not be constant and can vary within a range. A typical design containing such fuel particles can be found in the FSV reactor. Therefore, it is desirable to develop a new computational model to treat multi-type, poly-sized particle systems in the neutronic analysis. This requires extending the current CLS method to sample on-the-fly not only the location of the fuel particle, but also its type and size, so that it can be applied to a broad range of reactor designs in neutronic analyses. New sampling functions need to be developed for the extended on-the-fly sampling strategy. This Ph.D. dissertation addressed these research gaps by (1) performing a thorough investigation of the boundary effect that gives rise to the inaccuracy of the CLS method; (2) formulating a new chord distribution model that allows CLS to directly sample fuel kernels instead of fuel particles on-the-fly in the CLS simulations (with this new model, good accuracy is retained in predicting neutronics while at least a 70% gain in time efficiency is achieved in practical applications); and (3) developing new sampling functions for additional on-the-fly sampling of fuel particle type and size, in order to apply CLS to multi-type, poly-sized fuel particle systems. By accomplishing these, the CLS method is extended to a broad range of applications in neutronics computations. (Abstract shortened by UMI.)
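
    The kernel of the CLS idea, replacing explicit sphere geometry with on-the-fly sampling of the distance to the next fuel sphere, fits in a few lines. A minimal sketch, assuming the standard mean-matrix-chord expression for randomly dispersed spheres and ignoring coatings, absorption and scattering physics entirely:

```python
import numpy as np

rng = np.random.default_rng(2)
r, f = 0.025, 0.3                            # kernel radius (cm), packing fraction
lam_matrix = (4.0 / 3.0) * r * (1.0 - f) / f # mean matrix chord between spheres

def cls_track(total_len=1e3):
    """Fly a ray through the stochastic medium: exponential matrix chords,
    sphere chords drawn from the uniform-ray chord pdf l / (2 r^2)."""
    s, in_fuel = 0.0, 0.0
    while s < total_len:
        gap = rng.exponential(lam_matrix)        # distance to next sphere
        chord = 2.0 * r * np.sqrt(rng.random())  # inverse-CDF chord sample
        s += gap + chord
        in_fuel += chord
    return in_fuel / s

print("sampled fuel path fraction:", cls_track(), "vs volume fraction", f)
```

    The sampled path fraction reproduces the fuel volume fraction, which is the consistency check that makes on-the-fly sampling a faithful surrogate for explicit geometry; the dissertation's extensions add kernel-only, multi-type and poly-sized sampling functions on top of this skeleton.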

  8. A multigrid nonoscillatory method for computing high speed flows

    NASA Technical Reports Server (NTRS)

    Li, C. P.; Shieh, T. H.

    1993-01-01

    A multigrid method using different smoothers has been developed to solve the Euler equations discretized by a nonoscillatory scheme of up to fourth order accuracy. The best smoothing property is provided by a five-stage Runge-Kutta technique with optimized coefficients, yet the most efficient smoother is a backward Euler technique in factored and diagonalized form. The single-grid solution for a hypersonic, viscous conic flow is in excellent agreement with the solution obtained by the third order MUSCL and Roe's method. Mach 8 inviscid flow computations for a complete entry probe have shown that the accuracy is at least as good as that of the symmetric TVD scheme of Yee and Harten. The implicit multigrid method is four times more efficient than the explicit multigrid technique and 3.5 times faster than the single-grid implicit technique. For a Mach 8.7 inviscid flow over a blunt delta wing at 30 deg incidence, the CPU reduction factor from the three-level multigrid computation is 2.2 on a grid of 37 x 41 x 73 nodes.
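
    Though the paper concerns the Euler equations, the smoothing plus coarse-grid-correction structure shared by any such multigrid method is easiest to see on a 1D Poisson model problem; the textbook V-cycle sketch below uses weighted Jacobi in place of the Runge-Kutta and backward-Euler smoothers discussed above:

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2/3):
    """Weighted-Jacobi smoother for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] += w * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1] - 2*u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    return r

def vcycle(u, f, h):
    n = u.size - 1
    u = jacobi(u, f, h)                       # pre-smooth
    if n > 2:
        r = residual(u, f, h)
        rc = r[::2].copy()                    # restrict (injection, for brevity)
        ec = vcycle(np.zeros(n//2 + 1), rc, 2*h)
        e = np.interp(np.arange(n + 1) / n,
                      np.arange(n//2 + 1) / (n//2), ec)
        u += e                                # prolongate and correct
    return jacobi(u, f, h)                    # post-smooth

n = 128; h = 1.0 / n
x = np.linspace(0, 1, n + 1)
f = np.pi**2 * np.sin(np.pi * x)              # exact solution: sin(pi x)
u = np.zeros(n + 1)
for cycle in range(10):
    u = vcycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```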

  9. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction in addition to volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and under proper conditions, is FULLY AUTOMATED and requires no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself in total from other possible re-slicing software solutions due to complete automation and advanced processing and analysis capabilities.
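
    The underlying unwrap-and-re-slice operation (not NASA's released code, whose surface edge detection and automation are more sophisticated) can be sketched by resampling the volume onto a cylindrical grid, so constant-radius slices become flat 2-D sheets:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_cylinder(vol, center, r_max, n_theta=720, n_r=100):
    """Resample vol[z, y, x] onto (z, r, theta); a constant-radius sheet
    unwrapped[:, r0, :] can then be viewed as a flat 2-D image."""
    nz = vol.shape[0]
    thetas = np.linspace(0, 2*np.pi, n_theta, endpoint=False)
    radii = np.linspace(0, r_max, n_r)
    tt, rr = np.meshgrid(thetas, radii)                 # (n_r, n_theta)
    ys = center[0] + rr * np.sin(tt)
    xs = center[1] + rr * np.cos(tt)
    out = np.empty((nz, n_r, n_theta), float)
    for z in range(nz):                                 # bilinear interpolation
        out[z] = map_coordinates(vol[z].astype(float), [ys, xs], order=1)
    return out

# Synthetic test: a bright ring at radius 30 voxels becomes a flat sheet.
zz, yy, xx = np.mgrid[0:8, -48:48, -48:48]
vol = (np.abs(np.hypot(yy, xx) - 30) < 1.5).astype(float)
sheets = unwrap_cylinder(vol, center=(48, 48), r_max=45)
print(sheets.shape, sheets[:, 66, :].mean())            # r index ~ 30/45*99 = 66
```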

  10. Analysis of flavonoids: tandem mass spectrometry, computational methods, and NMR.

    PubMed

    March, Raymond; Brodbelt, Jennifer

    2008-12-01

    Due to the increasing understanding of the health benefits and chemopreventive properties of flavonoids, there continues to be significant effort dedicated to improved analytical methods for characterizing the structures of flavonoids and monitoring their levels in fruits and vegetables, as well as developing new approaches for mapping the interactions of flavonoids with biological molecules. Tandem mass spectrometry (MS/MS), particularly in conjunction with liquid chromatography (LC), is the dominant technique that has been pursued for elucidation of flavonoids. Metal complexation strategies have proven to be especially promising for enhancing the ionization of flavonoids and yielding key diagnostic product ions for differentiation of isomers. Of particular value is the addition of a chromophoric ligand to allow the application of infrared (IR) multiphoton dissociation as an alternative to collision-induced dissociation (CID) for the differentiation of isomers. CID, including energy-resolved methods, and nuclear magnetic resonance (NMR) have also been utilized widely for structural characterization of numerous classes of flavonoids and development of structure/activity relationships. The gas-phase ion chemistry of flavonoids is an active area of research, particularly when combined with accurate mass measurement for distinguishing between isobaric ions. Applications of a variety of ab initio and chemical computation methods to the study of flavonoids have been reported, and the results of computations of ion and molecular structures have been shown together with computations of atomic charges and ion fragmentation. Unambiguous ion structures are rarely obtained using MS alone. Thus, it is necessary to combine MS with spectroscopic techniques such as ultraviolet (UV) and NMR to achieve this objective. The application of NMR data to the mass spectrometric examination of flavonoids is discussed. PMID:18855332

  11. A bibliography on finite element and related methods analysis in reactor physics computations (1971--1997)

    SciTech Connect

    Carpenter, D.C.

    1998-01-01

    This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, theses/dissertations and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name, with a subsort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, theses/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.

  12. Computational Studies of Protein Aggregation: Methods and Applications

    NASA Astrophysics Data System (ADS)

    Morriss-Andrews, Alex; Shea, Joan-Emma

    2015-04-01

    Protein aggregation involves the self-assembly of normally soluble proteins into large supramolecular assemblies. The typical end product of aggregation is the amyloid fibril, an extended structure enriched in β-sheet content. The aggregation process has been linked to a number of diseases, most notably Alzheimer's disease, but fibril formation can also play a functional role in certain organisms. This review focuses on theoretical studies of the process of fibril formation, with an emphasis on the computational models and methods commonly used to tackle this problem.

  13. Fan Flutter Computations Using the Harmonic Balance Method

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Thomas, Jeffrey P.; Reddy, T.S.R.

    2009-01-01

    An experimental forward-swept fan encountered flutter at part-speed conditions during wind tunnel testing. A new propulsion aeroelasticity code, based on a computational fluid dynamics (CFD) approach, was used to model the aeroelastic behavior of this fan. This three-dimensional code models the unsteady flowfield due to blade vibrations using a harmonic balance method to solve the Navier-Stokes equations. This paper describes the flutter calculations and compares the results to experimental measurements and previous results from a time-accurate propulsion aeroelasticity code.
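
    The harmonic balance idea, assuming a truncated Fourier series for the periodic unsteady solution and solving algebraically for its coefficients, is easiest to demonstrate on a single-degree-of-freedom Duffing oscillator rather than the Navier-Stokes equations; all parameter values below are illustrative:

```python
import numpy as np
from scipy.optimize import fsolve

# Duffing: x'' + d x' + a x + b x^3 = F cos(w t); seek the periodic response.
d, a, b, F, w = 0.2, 1.0, 0.5, 1.0, 1.2
NH = 5                                        # number of harmonics retained
k = np.arange(1, NH + 1)
t = np.linspace(0, 2*np.pi/w, 2*NH + 1, endpoint=False)   # collocation times

def x_and_derivs(c):
    a0, ac, bs = c[0], c[1:NH+1], c[NH+1:]
    cos = np.cos(np.outer(t, k*w)); sin = np.sin(np.outer(t, k*w))
    x   = a0 + cos @ ac + sin @ bs
    xd  = -sin @ (k*w*ac) + cos @ (k*w*bs)
    xdd = -cos @ ((k*w)**2 * ac) - sin @ ((k*w)**2 * bs)
    return x, xd, xdd

def residual(c):
    x, xd, xdd = x_and_derivs(c)
    return xdd + d*xd + a*x + b*x**3 - F*np.cos(w*t)

c0 = np.zeros(2*NH + 1); c0[1] = 0.5          # small initial guess
c = fsolve(residual, c0)
print("fundamental amplitude:", np.hypot(c[1], c[NH+1]))
```

    The payoff mirrors the CFD case: one algebraic solve for the coefficients replaces marching through many transient periods of a time-accurate simulation.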

  14. Assessment of nonequilibrium radiation computation methods for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Sharma, Surendra

    1993-01-01

    The present understanding of shock-layer radiation in the low density regime, as appropriate to hypersonic vehicles, is surveyed. Based on the relative importance of electron excitation and radiation transport, the hypersonic flows are divided into three groups: weakly ionized, moderately ionized, and highly ionized flows. In the light of this division, the existing laboratory and flight data are scrutinized. Finally, an assessment of the nonequilibrium radiation computation methods for the three regimes in hypersonic flows is presented. The assessment is conducted by comparing experimental data against the values predicted by the physical model.

  15. Computational methods for improving thermal imaging for consumer devices

    NASA Astrophysics Data System (ADS)

    Lynch, Colm N.; Devaney, Nicholas; Drimbarean, Alexandru

    2015-05-01

    In consumer imaging, the spatial resolution of thermal microbolometer arrays is limited by the large physical size of the individual detector elements. This also limits the number of pixels per image. If thermal sensors are to find a place in consumer imaging, as the newly released FLIR One would suggest, this resolution issue must be addressed. Our work focuses on improving the output quality of low resolution thermal cameras through computational means. The method we propose utilises sub-pixel shifts and temporal variations in the scene, using information from thermal and visible channels. Results from simulations and lab experiments are presented.
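
    One common computational route of the kind described, fusing a stack of sub-pixel-shifted low-resolution frames onto a finer grid, is shift-and-add super-resolution. The sketch below is a generic illustration, not necessarily the authors' pipeline, and assumes the shifts are already known (in practice they must be estimated by registration):

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Fuse low-res frames with known sub-pixel shifts onto a scale-x finer
    grid by accumulation and averaging (zero-order shift-and-add)."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h) * scale + round(dy * scale)) % (h * scale)
        xs = (np.arange(w) * scale + round(dx * scale)) % (w * scale)
        acc[np.ix_(ys, xs)] += img
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)

# Toy demo: four half-pixel-shifted copies of a scene (periodic shifts).
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]   # in LR pixels
frames = [np.roll(truth, (-round(dy*2), -round(dx*2)), axis=(0, 1))[::2, ::2]
          for dy, dx in shifts]
sr = shift_and_add(frames, shifts, scale=2)
print("reconstruction error:", np.abs(sr - truth).max())     # ~0 in this toy case
```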

  16. Interspike interval method to compute speech signals from neural firing

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe

    1998-03-01

    Auditory perception neurons, also called inner hair cells (IHCs), transform the mechanical movements of the basilar membrane into electrical impulses. The impulse coding of the neurons is the main information carrier in the auditory process and is the basis for improvements to cochlear implants as well as for low-rate, high-quality speech processing and compression. This paper shows how to compute the speech signal from the neural firing based on the analysis of the interspike interval histogram. This new approach solves problems which other standard analysis methods do not solve sufficiently well.

  17. Numerical methods and computers used in elastohydrodynamic lubrication

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Tripp, J. H.

    1982-01-01

    Some of the methods of obtaining approximate numerical solutions to boundary value problems that arise in elastohydrodynamic lubrication are reviewed. The highlights of four general approaches (direct, inverse, quasi-inverse, and Newton-Raphson) are sketched. Advantages and disadvantages of these approaches are presented along with a flow chart showing some of the details of each. The basic question of numerical stability of the elastohydrodynamic lubrication solutions, especially in the pressure spike region, is considered. Computers used to solve this important class of lubrication problems are briefly described, with emphasis on supercomputers.

  18. Efficient field-computer file transfer methods for suboptimum conditions

    SciTech Connect

    Sisk, L.B.

    1991-07-01

    This paper describes a project to upgrade the file-transmission capabilities of a system using modem-linked PC's to acquire production data in remote oilfields and subsequently transfer these data to an area production office for further processing. The method initially specified for accomplishing this task failed repeatedly under adverse conditions. After the modems and file-transfer software were replaced, communications became much more reliable, and the time required to transfer files was reduced by orders of magnitude, resulting in a corresponding reduction in telecommunications costs. The technology described is applicable to any computer-based system doing file transfers, particularly in areas with suboptimum telephone networks.

  19. A new method to compute lunisolar perturbations in satellite motions

    NASA Technical Reports Server (NTRS)

    Kozai, Y.

    1973-01-01

    A new method to compute lunisolar perturbations in satellite motion is proposed. The disturbing function is expressed by the orbital elements of the satellite and the geocentric polar coordinates of the moon and the sun. The secular and long periodic perturbations are derived by numerical integrations, and the short periodic perturbations are derived analytically. The perturbations due to the tides can be included in the same way. In the Appendix, the motion of the orbital plane for a synchronous satellite is discussed; it is concluded that the inclination cannot stay below 7 deg.
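
    The disturbing function in such a theory derives from the standard third-body perturbing acceleration, the difference between the third body's attraction on the satellite and its attraction on the Earth; a direct evaluation is a few lines (constants and geometry illustrative):

```python
import numpy as np

MU_MOON = 4.9028e3   # GM of the Moon, km^3/s^2

def third_body_accel(r_sat, r_body, mu=MU_MOON):
    """Perturbing acceleration on a satellite at geocentric r_sat (km) from a
    third body at geocentric r_body: direct term minus indirect (Earth) term."""
    d = r_body - r_sat
    return mu * (d / np.linalg.norm(d)**3 - r_body / np.linalg.norm(r_body)**3)

# GEO satellite with the Moon along +x (geometry purely illustrative)
r_sat = np.array([42164.0, 0.0, 0.0])
r_moon = np.array([384400.0, 0.0, 0.0])
print(third_body_accel(r_sat, r_moon), "km/s^2")
```

    Averaging this acceleration, expressed through the disturbing function, over the fast orbital angles is what yields the secular and long-periodic perturbations the paper integrates numerically.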

  20. Method and apparatus for managing transactions with connected computers

    DOEpatents

    Goldsmith, Steven Y.; Phillips, Laurence R.; Spires, Shannon V.

    2003-01-01

    The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.

  1. Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Reliable noise prediction capabilities are essential to enable novel fuel-efficient open rotor designs that can meet community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity at which they are frequently employed for real-world applications within NASA. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by highly complex geometries. This is a first-of-a-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the peculiarities of applying the immersed boundary method to this moving boundary problem, we provide a detailed aeroacoustic analysis of the noise generation mechanisms encountered in the open rotor flow. The simulation data are compared to available experimental data and to other computational results employing more conventional CFD methods. The noise generation mechanisms are analyzed employing spectral analysis, proper orthogonal decomposition and the causality method.
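
    Of the three analysis tools named above, proper orthogonal decomposition is the easiest to sketch compactly. Below is a generic snapshot-POD via the SVD; the snapshot layout and variable names are assumptions for illustration, not details of the paper's analysis.

        import numpy as np

        def pod_modes(snapshots, n_modes):
            """snapshots : (n_points, n_snapshots) array, one flow field per column.

            Returns the leading spatial modes and their relative energies."""
            mean = snapshots.mean(axis=1, keepdims=True)
            X = snapshots - mean                     # fluctuating part
            U, s, _ = np.linalg.svd(X, full_matrices=False)
            energy = s**2 / np.sum(s**2)             # modal energy fractions
            return U[:, :n_modes], energy[:n_modes]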

  2. Computational methods for studying G protein-coupled receptors (GPCRs).

    PubMed

    Kaczor, Agnieszka A; Rutkowska, Ewelina; Bartuzi, Damian; Targowska-Duda, Katarzyna M; Matosiuk, Dariusz; Selent, Jana

    2016-01-01

    The functioning of GPCRs is classically described by the ternary complex model as the interplay of three basic components: a receptor, an agonist, and a G protein. According to this model, receptor activation results from an interaction with an agonist, which translates into the activation of a particular G protein in the intracellular compartment that, in turn, is able to initiate particular signaling cascades. Extensive studies on GPCRs have led to new findings which open unexplored and exciting possibilities for drug design and for safer and more effective treatments with GPCR-targeting drugs. These include the discovery of novel signaling mechanisms such as ligand promiscuity resulting in multitarget ligands and signaling cross-talks, allosteric modulation, biased agonism, and the formation of receptor homo- and heterodimers and oligomers, all of which can be efficiently studied with computational methods. Computer-aided drug design techniques can reduce the cost of drug development by up to 50%. In particular, structure- and ligand-based virtual screening techniques are a valuable tool for identifying new leads and have been shown to be especially efficient for GPCRs in comparison to water-soluble proteins. Modern computer-aided approaches can be helpful for the discovery of compounds with designed affinity profiles. Furthermore, homology modeling, facilitated by a growing number of available templates, as well as molecular docking, supported by sophisticated techniques of molecular dynamics and quantitative structure-activity relationship models, are an excellent source of information about drug-receptor interactions at the molecular level. PMID:26928552

  3. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel-method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions of some computational issues associated with the simulation of S&C tests and selected results from the numerous surface grid resolution studies performed during the course of the study.

  4. An experiment in hurricane track prediction using parallel computing methods

    NASA Technical Reports Server (NTRS)

    Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

    1994-01-01

    The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.
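
    Both elliptic steps mentioned above reduce to Poisson solves of the form laplacian(psi) = zeta. A compact FFT-based sketch on a doubly periodic grid is shown purely to illustrate the streamfunction-recovery step; the paper itself solves its elliptic systems by accelerated block cyclic reduction, not FFTs, and the grid assumptions here are illustrative.

        import numpy as np

        def streamfunction_from_vorticity(zeta, dx):
            """Solve laplacian(psi) = zeta on a doubly periodic grid via FFT."""
            ny, nx = zeta.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
            KX, KY = np.meshgrid(kx, ky)
            k2 = KX**2 + KY**2
            k2[0, 0] = 1.0                    # avoid dividing the mean mode by zero
            psi_hat = -np.fft.fft2(zeta) / k2
            psi_hat[0, 0] = 0.0               # pin down the arbitrary constant
            return np.real(np.fft.ifft2(psi_hat))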

  5. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions, and the third is solved by the method of Computational Aeroacoustics (CAA); these are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.

  6. Computational method for reducing variance with Affymetrix microarrays

    PubMed Central

    2002-01-01

    Background Affymetrix microarrays are used by many laboratories to generate gene expression profiles. Generally, only large differences (> 1.7-fold) between conditions have been reported. Computational methods to reduce inter-array variability might be of value when attempting to detect smaller differences. We examined whether inter-array variability could be reduced by using data based on the Affymetrix algorithm for pairwise comparisons between arrays (ratio method) rather than data based on the algorithm for analysis of individual arrays (signal method). Six HG-U95A arrays that probed mRNA from young (21–31 yr old) human muscle were compared with six arrays that probed mRNA from older (62–77 yr old) muscle. Results Differences in mean expression levels of young and old subjects were small, rarely > 1.5-fold. The mean within-group coefficient of variation for 4629 mRNAs expressed in muscle was 20% according to the ratio method and 25% according to the signal method. The ratio method yielded more differences according to t-tests (124 vs. 98 differences at P < 0.01), rank sum tests (107 vs. 85 differences at P < 0.01), and the Significance Analysis of Microarrays method (124 vs. 56 differences with false detection rate < 20%; 20 vs. 0 differences with false detection rate < 5%). The ratio method also improved consistency between results of the initial scan and results of the antibody-enhanced scan. Conclusion The ratio method reduces inter-array variance and thereby enhances statistical power. PMID:12204100
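
    The two summary statistics at the heart of the comparison are easy to reproduce. A sketch under the assumption that expression values arrive as (subjects x genes) arrays; the thresholds mirror those quoted above, and the function names are made up for illustration.

        import numpy as np
        from scipy import stats

        def within_group_cv(expr):
            """Per-gene coefficient of variation in percent; expr is (n_subjects, n_genes)."""
            return 100.0 * expr.std(axis=0, ddof=1) / expr.mean(axis=0)

        def differential_genes(young, old, alpha=0.01):
            """Indices of genes differing between groups, by per-gene two-sample t-tests."""
            t, p = stats.ttest_ind(young, old, axis=0)
            return np.where(p < alpha)[0]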

  7. On Computer Algebra Generation of Symplectic Integrator Methods

    NASA Astrophysics Data System (ADS)

    Murison, M. A.; Chambers, J. E.

    1999-09-01

    Most symplectic integrators used in solar-system dynamics are second-order in the time step tau. Typically, the Hamiltonian is divided into a Keplerian piece H_A and a smaller perturbative component H_B. We can take advantage of the disparity in relative magnitude of these components to define a second small parameter, epsilon = |H_B|/|H_A| << 1, and use this to obtain a 'partially' higher-order method. Adopting a Lie series approach, one can, for a given order-N method, examine the tau^(N+1), tau^(N+2), etc. error terms. Each of the 2^k - 2 subterms of the coefficient of the tau^k error term has an associated factor of epsilon raised to a power ranging from linear to k-1. By including adjustable parameters in each evolution operator exp(tau{*, H_A}) or exp(tau{*, H_B}) in the trial method (composed of a combination of these operators) that approximates the true Hamiltonian evolution operator exp(tau{*, H_A + H_B}), one can in principle eliminate specified subterms in specified error terms. For example, a second-order method chosen to eliminate the tau^3 subterms linear in epsilon can, depending on the magnitude of epsilon, produce a quasi-third-order method. In practice this process boils down to generating and then solving systems of nonlinear polynomial equations particular to the trial method. A computer algebra program has been developed that automates the generation and solution of the equations that result from requesting a specified method of order N. This task is tedious due to the noncommutative algebra involved in the series expansions and subsequent algebraic manipulations, but computers are well suited to handling such tedium. Once a method, or set of equivalent methods, has been found, the program then generates and solves a second set of equations for parameter solutions whereby subterms of specified powers in epsilon are eliminated for successive tau^(N+1), tau^(N+2), etc. terms in the overall error expression. The project has, in these initial stages, been at least partially successful. Experiences and results to date will be presented.
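
    For orientation, the kick-drift-kick composition below is the kind of second-order method whose tau^3 error subterms the paper's computer algebra manipulates. The split H = H_A + H_B is illustrated with a plain kinetic/potential separation, an assumption made for simplicity; in solar-system work H_A would be the Keplerian part.

        import numpy as np

        def leapfrog_step(q, p, tau, grad_V):
            """One kick-drift-kick step for H = p**2/2 + V(q); local error O(tau**3)."""
            p = p - 0.5 * tau * grad_V(q)   # half kick:  exp((tau/2){*, H_B})
            q = q + tau * p                 # full drift: exp(tau{*, H_A})
            p = p - 0.5 * tau * grad_V(q)   # half kick
            return q, p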

  8. An Overview of a Decade of Journal Publications about Culture and Human-Computer Interaction (HCI)

    NASA Astrophysics Data System (ADS)

    Clemmensen, Torkil; Roese, Kerstin

    In this paper, we analyze the concept of human-computer interaction in cultural and national contexts. Building and extending upon the framework for understanding research in usability and culture by Honold [3], we give an overview of publications in culture and HCI between 1998 and 2008, with a narrow focus on high-level journal publications only. The purpose is to review current practice in how cultural HCI issues are studied, and to analyse problems with the measures and interpretation of these studies. We find that Hofstede's cultural dimensions have been the dominant model of culture, that participants have typically been picked because they could speak English, and that most studies have been large-scale quantitative studies. In order to balance this situation, we recommend that more researchers and practitioners conduct qualitative, empirical work.

  9. Establishing an international computer network for research and teaching in public health and epidemiology.

    PubMed

    Ostbye, T; Bojan, F; Rennert, G; Hurlen, P; Garner, B

    1991-01-01

    Most universities and major research institutions in North America, Western Europe and around the Pacific are connected via computer communication networks. The authors have used these networks' accessible, low-cost electronic mail system to develop a network of public health researchers and teachers. Current and potential uses of this network are discussed. These networks can not only facilitate international cooperation within public health; they also make it possible to conduct international collaborative research projects that would be too cumbersome and time-consuming to initiate and conduct without this communication facility. One participant from Hungary has been able to participate in the network by using telefax, which has some drawbacks compared to electronic mail. In this era of rapid change in Eastern Europe, we urge that electronic communication be made freely available to colleagues in Eastern Europe. PMID:2026221

  10. Computing thermal Wigner densities with the phase integration method

    SciTech Connect

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time-dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation between the momentum and coordinate parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  11. A computational design method for transonic turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Dulikravich, D. S.

    1982-01-01

    This paper describes a systematic computational procedure for finding the configuration changes necessary to make the flow past turbomachinery cascades, channels and nozzles shock-free at prescribed transonic operating conditions. The method is based on a finite area transonic analysis technique and the fictitious gas approach. This design scheme has two major areas of application. First, it can be used for the design of supercritical cascades, with applications mainly in compressor blade design. Second, it provides subsonic inlet shapes, including sonic surfaces with suitable initial data, for the design of supersonic (accelerated) exits, such as nozzles and turbine cascade shapes. This fast, accurate and economical method, with a proven potential for application to three-dimensional flows, is illustrated by some design examples.

  12. A computer method for the automatic reduction of spectroscopic data.

    PubMed

    Ditzel, E F; Giddings, L E

    1967-12-01

    A computer program, written in Fortran IV and for use with an associated spectral comparator, has been developed at the Naval Research Laboratory for the purpose of automatically reducing spectroscopic data. A Datex digitizing magnetic tape recorder in conjunction with a modified Jarrell-Ash microphotometer allows the reading of spectral information from a photographic plate at the rate of twenty-five data pairs per second. Spectra of local interest analyzed by this method are (1) absorption, (2) emission, (3) plasma type, obtained from time-resolved spectroscopic techniques, and (4) solar echellegrams obtained from rocket probings of the upper atmosphere. Markedly useful features of the program are its capabilities of (a) recognizing spectral peaks against a background of variable density, and (b) obtaining absolute values for the radiance or irradiance. An essential characteristic of the method is the saving of significant amounts of time in the reduction of photographic spectroscopic data. PMID:20062364
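
    A modern sketch of the peak-recognition step (capability (a) above): subtract a slowly varying baseline standing in for the variable plate background, then detect prominent maxima. The median-filter window and prominence threshold are illustrative assumptions, not parameters of the 1967 program.

        import numpy as np
        from scipy.ndimage import median_filter
        from scipy.signal import find_peaks

        def spectral_peaks(density, window=101, prominence=0.05):
            """Locate spectral peaks above a slowly varying background density."""
            baseline = median_filter(density, size=window)
            peaks, props = find_peaks(density - baseline, prominence=prominence)
            return peaks, props["prominences"]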

  13. Computational analysis of methods for reduction of induced drag

    NASA Technical Reports Server (NTRS)

    Janus, J. M.; Chatterjee, Animesh; Cave, Chris

    1993-01-01

    The purpose of this effort was to perform a computational flow analysis of a design concept centered around induced drag reduction and tip-vortex energy recovery. The flow model solves the unsteady three-dimensional Euler equations, discretized as a finite-volume method, utilizing a high-resolution approximate Riemann solver for cell interface flux definitions. The numerical scheme is an approximately-factored block LU implicit Newton iterative-refinement method. Multiblock domain decomposition is used to partition the field into an ordered arrangement of blocks. Three configurations are analyzed: a baseline fuselage-wing, a fuselage-wing-nacelle, and a fuselage-wing-nacelle-propfan. Aerodynamic force coefficients, propfan performance coefficients, and flowfield maps are used to qualitatively assess design efficacy. Where appropriate, comparisons are made with available experimental data.

  14. An inviscid computational method for tactical missile configurations

    NASA Astrophysics Data System (ADS)

    Wardlaw, A. B., Jr.; Solomon, J. M.; Baltakis, F. P.; Hackerman, L. B.

    1981-05-01

    A finite difference method suitable for design calculations of finned bodies is described. Efficient numerical calculations are achieved using a thin-fin approximation which neglects fin thickness but retains a correct description of the fin surface slope. The resulting algorithm is suitable for treating relatively thin, straight fins with sharp edges. Methods for treating the fin leading and trailing edges are described which depend on the Mach number of the flow normal to the edge. The computed surface pressures are compared to experimental measurements taken on cruciform configurations with supersonic leading and trailing edges and to a swept wing-body with detached leading-edge shocks. Calculated forces and moments on a body-wing-tail configuration with subsonic leading edges are also compared to experiment. Body-alone configurations are studied using a Kutta condition to generate a lee-side vortex.

  15. Parallel computation of meshless methods for explicit dynamic analysis.

    SciTech Connect

    Danielson, K. T.; Hao, S.; Liu, W. K.; Uras, R. A.; Li, S.; Reactor Engineering; Northwestern Univ.; Waterways Experiment Station

    2000-03-10

    A parallel computational implementation of modern meshless methods is presented for explicit dynamic analysis. The procedures are demonstrated by application of the Reproducing Kernel Particle Method (RKPM). Aspects of a coarse grain parallel paradigm are detailed for a Lagrangian formulation using model partitioning. Integration points are uniquely defined on separate processors and particle definitions are duplicated, as necessary, so that all support particles for each point are defined locally on the corresponding processor. Several partitioning schemes are considered and a reduced graph-based procedure is presented. Partitioning issues are discussed and procedures to accommodate essential boundary conditions in parallel are presented. Explicit MPI message passing statements are used for all communications among partitions on different processors. The effectiveness of the procedure is demonstrated by highly deformable inelastic example problems.

  16. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    This section of the Code of Federal Regulations (29 CFR 779.342, under the subpart on computing annual dollar volume and combination of exemptions) sets out the methods of computing the annual volume of sales of an establishment: the annual gross volume of sales means the gross receipts from all sales of the establishment during a 12-month period.

  17. Dosimetry methods for multi-detector computed tomography.

    PubMed

    Gancheva, M; Dyakov, I; Vassileva, J; Avramova-Cholakova, S; Taseva, D

    2015-07-01

    The aim of this study is to compare four dosimetry methods for wide-beam multi-detector computed tomography (MDCT) in terms of the computed tomography dose index free in air (CTDI free-in-air) and the CTDI measured in a phantom (CTDI phantom). The study was performed with an Aquilion One 320-detector row CT (Toshiba), an Ingenuity 64-detector row CT (Philips) and an Aquilion 64 64-detector row CT (Toshiba). In addition to the standard dosimetry, three other dosimetry methods were also applied. The first method, suggested by the International Electrotechnical Commission (IEC) for MDCT, includes free-in-air measurements with a standard 100-mm CT pencil ion chamber, stepped through the X-ray beam along the z-axis at intervals equal to its sensitive length. Two cases were studied: with an integration length of 200 mm and with a standard polymethyl methacrylate (PMMA) dosimetry phantom. The second approach comprised measurements with a twice-longer phantom and two 100-mm chambers positioned and fixed against each other, forming a detection length of 200 mm. As a third method, phantom measurements were performed to study the real dose profile along the z-axis using thermoluminescent detectors; a fabricated cylindrical PMMA tube with a total length of 300 mm containing LiF detectors was used. CTDI free-in-air measured with an integration length of 300 mm for the 160-mm-wide beam was 194 % higher than the same quantity measured using the standard method. For an integration length of 200 mm, the difference was 18 % for the 40-mm-wide beam and 14 % for the 32-mm-wide beam in comparison with the standard CTDI measurement. For phantom measurements, the IEC method resulted in differences of 41 % for the 160-mm beam width, 19 % for the 40-mm beam width and 18 % for the 32-mm beam width compared with the method for CTDI vol. CTDI values from direct measurement in the phantom central hole with two chambers differ by 20 % from the values calculated by the IEC method. Dose profiles for beam widths of 40, 32 and 16 mm are presented, together with analysis and conclusions. PMID:25889607
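
    All of the CTDI variants compared above are integrals of the single-rotation dose profile D(z) over a stated integration length, normalized by the nominal beam width. A sketch of that bookkeeping; the profile array is an assumed input (e.g. from the LiF measurements), and the function name is made up.

        import numpy as np

        def ctdi(z_mm, dose_profile, beam_width_mm, integration_length_mm):
            """CTDI = (1 / beam width) * integral of D(z) over the integration length."""
            half = integration_length_mm / 2.0
            mask = np.abs(z_mm) <= half
            return np.trapz(dose_profile[mask], z_mm[mask]) / beam_width_mm

        # e.g. comparing the standard 100 mm integration with a 300 mm one
        # for the 160 mm wide beam:
        #   ctdi(z, D, 160.0, 100.0)  vs.  ctdi(z, D, 160.0, 300.0)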

  18. Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Reliable noise prediction capabilities are essential to enable novel fuel-efficient open rotor designs that can meet community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity where more and more complex flow problems can be tackled with this approach. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by a contra-rotating open rotor. This is a first-of-a-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the methodology of applying the immersed boundary method to this moving boundary problem, we provide a detailed validation of the aeroacoustic analysis approach employing the Launch Ascent and Vehicle Aerodynamics (LAVA) solver. Two free-stream Mach numbers, M=0.2 and M=0.78, are considered in this analysis, corresponding to the nominal take-off and cruise flow conditions. The simulation data are compared to available experimental data and to other computational results employing more conventional CFD methods. Spectral analysis is used to determine the dominant wave propagation pattern in the acoustic near-field.

  19. A fast phase space method for computing creeping rays

    SciTech Connect

    Motamed, Mohammad, E-mail: mohamad@nada.kth.se; Runborg, Olof, E-mail: olofr@nada.kth.se

    2006-11-20

    Creeping rays can give an important contribution to the solution of medium to high frequency scattering problems. They are generated at the shadow lines of the illuminated scatterer by grazing incident rays and propagate along geodesics on the scatterer surface, continuously shedding diffracted rays in their tangential direction. In this paper, we show how the ray propagation problem can be formulated as a partial differential equation (PDE) in a three-dimensional phase space. To solve the PDE we use a fast marching method. The PDE solution contains information about all possible creeping rays. This information includes the phase and amplitude of the field, which are extracted by a fast post-processing. Computationally, the cost of solving the PDE is less than tracing all rays individually by solving a system of ordinary differential equations. We consider an application to mono-static radar cross section problems where creeping rays from all illumination angles must be computed. The numerical results of the fast phase space method and a comparison with the results of ray tracing are presented.

  20. Matrix element method for high performance computing platforms

    NASA Astrophysics Data System (ADS)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    A lot of effort has been devoted by the ATLAS and CMS teams to improving the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to cope with the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS dataset at a moderate cost. In this article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of six high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfactory metric for the upcoming Run 2. Future work will consist of finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.

  1. Parallel computation of multigroup reactivity coefficient using iterative method

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of Fission Product Molybdenum (FPM) targets. FPM targets take the form of stainless steel tubes carrying layers of high-enriched uranium, and the tubes are irradiated to obtain fission products, which are widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core, however, can interfere with core performance; one such disturbance comes from changes in flux or reactivity. A method is therefore needed for evaluating safety margins under the configuration changes that occur over the life of the reactor, and making the code fast becomes an absolute necessity. An advantage of using a perturbation method is that the neutron safety margin for the research reactor can be re-evaluated without repeating the full reactivity calculation. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions and uranium contents. This model is computationally demanding, and several parallel algorithms with iterative methods have been developed for solving the resulting large sparse matrix systems. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation, one element of the safety analysis, was developed with parallel processing; the calculation can be done more quickly and efficiently by utilizing the multiple cores of a multicore computer. The code was applied to the calculation of safety limits for irradiated FPM targets with increasing uranium content.
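
    A serial sketch of the red-black ordering named above, applied to a model 2D Poisson-type problem; in the reactor code the same colouring is what lets the two half-sweeps of the multigroup diffusion solve run in parallel across cores. The grid handling and iteration count here are assumptions for illustration.

        import numpy as np

        def red_black_gauss_seidel(phi, source, h, n_iter=100):
            """Gauss-Seidel sweeps in red-black order for laplacian(phi) = source."""
            for _ in range(n_iter):
                for colour in (0, 1):
                    # All points of one colour depend only on the other colour,
                    # so each half-sweep is safely parallelizable.
                    for i in range(1, phi.shape[0] - 1):
                        for j in range(1, phi.shape[1] - 1):
                            if (i + j) % 2 == colour:
                                phi[i, j] = 0.25 * (phi[i + 1, j] + phi[i - 1, j]
                                                    + phi[i, j + 1] + phi[i, j - 1]
                                                    - h * h * source[i, j])
            return phi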

  2. Parallel computation of multigroup reactivity coefficient using iterative method

    SciTech Connect

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-09

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of Fission Product Molybdenum (FPM) targets. FPM targets take the form of stainless steel tubes carrying layers of high-enriched uranium, and the tubes are irradiated to obtain fission products, which are widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core, however, can interfere with core performance; one such disturbance comes from changes in flux or reactivity. A method is therefore needed for evaluating safety margins under the configuration changes that occur over the life of the reactor, and making the code fast becomes an absolute necessity. An advantage of using a perturbation method is that the neutron safety margin for the research reactor can be re-evaluated without repeating the full reactivity calculation. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions and uranium contents. This model is computationally demanding, and several parallel algorithms with iterative methods have been developed for solving the resulting large sparse matrix systems. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation, one element of the safety analysis, was developed with parallel processing; the calculation can be done more quickly and efficiently by utilizing the multiple cores of a multicore computer. The code was applied to the calculation of safety limits for irradiated FPM targets with increasing uranium content.

  3. Computational method for calligraphic style representation and classification

    NASA Astrophysics Data System (ADS)

    Zhang, Xiafen; Nagy, George

    2015-09-01

    A large collection of reproductions of calligraphy on paper was scanned into images to enable web access for both the academic community and the public. Calligraphic paper digitization technology is mature, but technology for segmentation, character coding, style classification, and identification of calligraphy is lacking. Therefore, computational tools for the classification and quantification of calligraphic style are proposed and demonstrated on a statistically characterized corpus. A subset of 259 historical page images is segmented into 8719 individual character images. Calligraphic style is revealed and quantified by visual attributes (i.e., appearance features) of character images sampled from historical works. A style space is defined with the features of five main classical styles as basis vectors. Cross-validated error rates of 10% to 40% are reported for conventional and conservative sampling into training/test sets and for same-work voting with a range of voter participation. Beyond its immediate applicability to education and scholarship, this research lays the foundation for style-based calligraphic forgery detection and for the discovery of latent calligraphic groups induced by mentor-student relationships.

  4. Fundamental studies in hypersonic aeroelasticity using computational methods

    NASA Astrophysics Data System (ADS)

    Thuruthimattam, Biju James

    This dissertation describes the aeroelastic analysis of a generic hypersonic vehicle using methods in computational aeroelasticity. This objective is achieved by first considering the behavior of a representative configuration, namely a two degree-of-freedom typical cross-section, followed by that of a three-dimensional model of the generic vehicle, operating at very high Mach numbers. The typical cross-section of a hypersonic vehicle is represented by a double-wedge cross-section, having pitch and plunge degrees of freedom. The flutter boundaries of the typical cross-section are first generated using third-order piston theory, to serve as a basis for comparison with the refined calculations. Prior to the refined calculations, the time-step requirements for the reliable computation of the unsteady airloads using Euler and Navier-Stokes aerodynamics are identified. Computational aeroelastic response results are used to obtain frequency and damping characteristics, which are compared with those from piston theory solutions for a variety of flight conditions. A parametric study of offsets, wedge angles, and static angle of attack is conducted. All the solutions are fairly close below the flutter boundary, and differences between the various models increase as the flutter boundary is approached. For this geometry, differences between viscous and inviscid aeroelastic behavior are not substantial. The effects of aerodynamic heating on the aeroelastic behavior of the typical cross-section are incorporated in an approximate manner, by considering the response of a heated wing. Results indicate that aerodynamic heating reduces aeroelastic stability. This analysis was extended to a generic hypersonic vehicle, restrained such that the rigid-body degrees of freedom are absent. The aeroelastic stability boundaries of the canted fin alone were calculated using third-order piston theory. The stability boundaries for the generic vehicle were calculated at different altitudes using piston theory for comparison. The flutter boundaries obtained using first-order piston theory were found to be much higher than those calculated using third-order piston theory. The computational aeroelastic response of the complete vehicle using Euler aerodynamics was found to predict a significantly higher flutter boundary than third-order piston theory, due to substantial three-dimensional flow effects. Also, both methods predicted an increase in the flutter boundary with increasing altitude.

  5. Matching wind turbine rotors and loads: computational methods for designers

    SciTech Connect

    Seale, J.B.

    1983-04-01

    This report provides a comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications. The user must supply: (1) turbine aerodynamic efficiency as a function of tip-speed ratio; (2) mechanical load torque as a function of rotation speed; (3) useful delivered power as a function of incoming mechanical power; (4) site average windspeed and, for maximum accuracy, distribution data. The description of the data includes governing limits consistent with the capacities of components. The report develops a step-by-step method for converting the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) a decision is made on how turbine power is to be governed (it may self-govern) to ensure the safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics come into play to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach: computer-generated families of coefficient curves provide data for algebraic scaling formulas. The method leads not only to energy predictions, but also to insight into the processes being modeled. Direct use of a computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
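
    Step (1) of the method amounts to evaluating P = 0.5 * rho * A * Cp(lambda) * v**3 with the tip-speed ratio lambda = omega * R / v taken from the supplied efficiency curve. A minimal sketch, where cp_of_lambda stands for the user-supplied turbine data (input 1 above) and the remaining names are illustrative:

        import numpy as np

        def turbine_power(v, cp_of_lambda, rotor_radius, omega, rho=1.225):
            """Shaft power at windspeed v (m/s) for a rotor turning at omega (rad/s)."""
            area = np.pi * rotor_radius**2
            tsr = omega * rotor_radius / np.maximum(v, 1e-9)  # tip-speed ratio
            return 0.5 * rho * area * cp_of_lambda(tsr) * v**3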

  6. Computer Anxiety and Students' Preferred Feedback Methods in EFL Writing

    ERIC Educational Resources Information Center

    Matsumura, Shoichi; Hann, George

    2004-01-01

    Computer-mediated instruction plays a significant role in foreign language education. The incorporation of computer technology into the classroom has also been accompanied by an increasing number of students who experience anxiety when interacting with computers. This study examined the effects of computer anxiety on students' choice of feedback…

  7. Graphical Methods: A Review of Current Methods and Computer Hardware and Software. Technical Report No. 27.

    ERIC Educational Resources Information Center

    Bessey, Barbara L.; And Others

    Graphical methods for displaying data, as well as available computer software and hardware, are reviewed. The authors have emphasized the types of graphs which are most relevant to the needs of the National Center for Education Statistics (NCES) and its readers. The following types of graphs are described: tabulations, stem-and-leaf displays,…

  8. Publications

    Cancer.gov

    Information about NCI publications, including PDQ cancer information for patients and health professionals, patient-education publications, fact sheets, dictionaries, NCI blogs and newsletters, and major reports.

  9. Computational modeling of multicellular constructs with the material point method.

    PubMed

    Guilkey, James E; Hoying, James B; Weiss, Jeffrey A

    2006-01-01

    Computational modeling of the mechanics of cells and multicellular constructs with standard numerical discretization techniques such as the finite element (FE) method is complicated by the complex geometry, material properties and boundary conditions that are associated with such systems. The objectives of this research were to apply the material point method (MPM), a meshless method, to the modeling of vascularized constructs by adapting the algorithm to accurately handle quasi-static, large deformation mechanics, and to apply the modified MPM algorithm to large-scale simulations using a discretization that was obtained directly from volumetric confocal image data. The standard implicit time integration algorithm for MPM was modified to allow the background computational grid to remain fixed with respect to the spatial distribution of material points during the analysis. This algorithm was used to simulate the 3D mechanics of a vascularized scaffold under tension, consisting of growing microvascular fragments embedded in a collagen gel, by discretizing the construct with over 13.6 million material points. Baseline 3D simulations demonstrated that the modified MPM algorithm was both more accurate and more robust than the standard MPM algorithm. Scaling studies demonstrated the ability of the parallel code to scale to 200 processors. Optimal discretization was established for the simulations of the mechanics of vascularized scaffolds by examining stress distributions and reaction forces. Sensitivity studies demonstrated that the reaction force during simulated extension was highly sensitive to the modulus of the microvessels, despite the fact that they comprised only 10.4% of the volume of the total sample. In contrast, the reaction force was relatively insensitive to the effective Poisson's ratio of the entire sample. These results suggest that the MPM simulations could form the basis for estimating the modulus of the embedded microvessels through a parameter estimation scheme. Because of the generality and robustness of the modified MPM algorithm, the relative ease of generating spatial discretizations from volumetric image data, and the ability of the parallel computational implementation to scale to large processor counts, it is anticipated that this modeling approach may be extended to many other applications, including the analysis of other multicellular constructs and investigations of cell mechanics. PMID:16095601

  10. Novel computational methods to design protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Zhou, Alice Qinhua; O'Hern, Corey; Regan, Lynne

    2014-03-01

    Despite the abundance of structural data, we still cannot accurately predict the structural and energetic changes resulting from mutations at protein interfaces. The inadequacy of current computational approaches to the analysis and design of protein-protein interactions has hampered the development of novel therapeutic and diagnostic agents. In this work, we apply a simple physical model that includes only a minimal set of geometrical constraints, excluded volume, and attractive van der Waals interactions to 1) rank the binding affinity of mutants of tetratricopeptide repeat proteins with their cognate peptides, 2) rank the energetics of binding of small designed proteins to the hydrophobic stem region of the influenza hemagglutinin protein, and 3) predict the stability of T4 lysozyme and staphylococcal nuclease mutants. This work will not only lead to a fundamental understanding of protein-protein interactions, but also to the development of efficient computational methods to rationally design protein interfaces with tunable specificity and affinity, and numerous applications in biomedicine. NSF DMR-1006537, PHY-1019147, Raymond and Beverly Sackler Institute for Biological, Physical and Engineering Sciences, and Howard Hughes Medical Institute.

  11. Computational methods for the verification of adaptive control systems

    NASA Astrophysics Data System (ADS)

    Prasanth, Ravi K.; Boskovic, Jovan; Mehra, Raman K.

    2004-08-01

    Intelligent and adaptive control systems will significantly challenge current verification and validation (V&V) processes, tools, and methods for flight certification. Although traditional certification practices have produced safe and reliable flight systems, they will not be cost-effective for next-generation autonomous unmanned air vehicles (UAVs) due to the inherent increases in size and complexity from added functionality. Affordable V&V of intelligent control systems is by far the most important challenge in the development of UAVs faced by both the commercial and military aerospace industries in the United States. This paper presents a formal modeling framework for a class of adaptive control systems and an associated computational scheme. The class of systems considered includes neural network-based flight control systems and vehicle health management systems. This class of systems, and indeed all adaptive systems, are hybrid systems whose continuum dynamics are nonlinear. Our computational procedure is iterative, and each iteration has two sequential steps. The first step is to derive an approximating finite-state automaton whose behaviors contain the behaviors of the hybrid system. The second step is to check whether the language accepted by the approximating automaton is empty (emptiness checking). The iterations are terminated if the accepted language is empty; otherwise, the approximation is refined and the iteration is continued. This procedure will never produce an "error-free" certificate when the actual system contains errors, which is an important requirement in the V&V of safety-critical systems.
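
    In outline, the two-step iteration reads as below; the three callables are placeholders standing in for the paper's constructions, not a real verification API.

        def verify_by_refinement(hybrid_system, make_abstraction, refine,
                                 max_iters=50):
            """Abstract, check language emptiness, refine; stop on an empty language."""
            abstraction = make_abstraction(hybrid_system)
            for _ in range(max_iters):
                if abstraction.accepted_language_is_empty():
                    return "verified"        # over-approximation admits no behaviours
                abstraction = refine(abstraction)
            return "inconclusive"            # refinement budget exhausted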

  12. A comprehensive method for optical-emission computed tomography

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew; Bowsher, James; Roper, Justin; Oliver, Tim; Dewhirst, Mark; Oldham, Mark

    2010-07-01

    Optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT) are recent techniques with potential for high-resolution, multi-faceted 3D imaging of structure and function in unsectioned tissue samples up to 1-4 cc. Quantitative imaging of the 3D fluorophore distribution (e.g. GFP) using optical-ECT is challenging due to attenuation present within the sample: uncorrected reconstructed images appear hotter near the edges than at the center. A similar effect is seen in SPECT/PET imaging, although an important difference is that attenuation occurs for both emission and excitation photons. This work presents a way to implement not only the emission attenuation correction utilized in SPECT, but also the excitation attenuation correction and source strength modeling which are unique to optical-ECT. The performance of the correction methods was investigated using a cylindrical gelatin phantom whose central region was filled with a known distribution of attenuation and fluorophores. Uncorrected and corrected reconstructions were compared to a sectioned slice of the phantom imaged using a fluorescent dissecting microscope. Significant attenuation artifacts were observed in uncorrected images, which appeared up to 80% less intense in the central regions due to attenuation and an assumed uniform light source. The corrected reconstruction showed agreement with the verification image throughout, with only slight variations (~5%). Final experiments demonstrate the correction in tissue as applied to a tumor with constitutive RFP.
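
    The core of both corrections is a Beer-Lambert transmission factor along each ray, applied once for the excitation light reaching a voxel and once for the emitted light leaving it. A minimal sketch; the sampling scheme and variable names are assumptions, not the paper's implementation.

        import numpy as np

        def transmission(mu_along_ray, ds):
            """exp(-integral of mu dl) for attenuation samples spaced ds apart."""
            return np.exp(-np.sum(mu_along_ray) * ds)

        # corrected = measured / (transmission(mu_excitation, ds) *
        #                         transmission(mu_emission, ds))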

  13. 76 FR 67418 - Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ...The National Institute of Standards and Technology (NIST) publishes this notice to seek public comments on the first draft of Special Publication 500-293, US Government Cloud Computing Technology Roadmap, Release 1.0 (Draft). This document is intended to be the mechanism to define and communicate interoperability, portability, and security requirement priorities that must be met in terms of...

  14. Public Experiments and Their Analysis with the Replication Method

    ERIC Educational Resources Information Center

    Heering, Peter

    2007-01-01

    One of those who failed to establish himself as a natural philosopher in 18th-century Paris was the future revolutionary Jean Paul Marat. He not only published several monographs on heat, optics and electricity, in which he attempted to characterise his work as being purely empirical, but also tried to establish himself as a public lecturer.…

  15. "Equal Educational Opportunity": Alternative Financing Methods for Public Education.

    ERIC Educational Resources Information Center

    Akin, John S.

    This paper traces the evolution of state-local public education finance systems to the present; examines the prevalent foundation system of finance; discusses the "Serrano" decision and its implications for foundation systems; and, after an examination of three possible new approaches, recommends an education finance system. The first of the new…

  16. Pedagogical Methods of Teaching "Women in Public Speaking."

    ERIC Educational Resources Information Center

    Pederson, Lucille M.

    A course on women in public speaking, developed at the University of Cincinnati, focuses on the rhetoric of selected women who have been involved in various movements and causes in the United States in the twentieth century. Women studied include educator Mary McLeod Bethune, Congresswoman Jeannette Rankin, suffragette Carrie Chapman Catt, Helen…

  17. Public Experiments and Their Analysis with the Replication Method

    ERIC Educational Resources Information Center

    Heering, Peter

    2007-01-01

    One of those who failed to establish himself as a natural philosopher in 18th-century Paris was the future revolutionary Jean Paul Marat. He not only published several monographs on heat, optics and electricity, in which he attempted to characterise his work as being purely empirical, but also tried to establish himself as a public lecturer.

  18. Presenting an Environmental Analysis to the Public: An Interactive Computer Based Approach.

    NASA Astrophysics Data System (ADS)

    Stauffer, P.; Hopkins, J.; Birdsell, K.; Hollis, D.

    2001-12-01

    The Los Alamos National Laboratory (LANL) Environmental Restoration (ER) Project is currently involved in the clean-up of many legacy waste sites associated with work performed in the past at LANL. A growing part of the ER mission is to involve the public in the processes of monitoring, remediation, and stewardship. The challenge of presenting complex environmental analysis to the public is addressed via an educational exercise that uses web-based applications to allow interactive learning from a home computer. The presentation begins with discussions of the site history, regulations, and basic facts about VOCs. Measured concentrations of vapor-phase VOCs are shown on figures which clearly relate the plume to features of concern such as the water table and nearby surface facilities. Nature and extent are demonstrated with an animation that visually shows the relationship of the vapor-phase VOC plume to the monitoring boreholes. Simulations of VOC vapor transport are described and compared to data. Conclusions based on the data and modeling complete the exercise. We hope to use this type of educational tool in the future to provide the public with the knowledge they need to become more proactive in the process of remediating legacy waste sites.

  19. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  20. Methods and computer readable medium for improved radiotherapy dosimetry planning

    DOEpatents

    Wessol, Daniel E.; Frandsen, Michael W.; Wheeler, Floyd J.; Nigg, David W.

    2005-11-15

    Methods and computer readable media are disclosed for ultimately developing a dosimetry plan for a treatment volume irradiated during radiation therapy with a radiation source concentrated internally within a patient or incident from an external beam. The dosimetry plan is available in near "real-time" because of the novel geometric model construction of the treatment volume which in turn allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of alpha, beta or gamma emissions emanating from an internal radiation source during various radiotherapies, such as brachytherapy or targeted radionuclide therapy, or they are exemplary representations of high-energy photons, electrons, protons or other ionizing particles incident on the treatment volume from an external source. In a preferred embodiment, a medical image of a treatment volume irradiated during radiotherapy having a plurality of pixels of information is obtained.

  1. Modern wing flutter analysis by computational fluid dynamics methods

    NASA Technical Reports Server (NTRS)

    Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.

    1988-01-01

    The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data, which is a first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.

  2. Modern wing flutter analysis by computational fluid dynamics methods

    NASA Technical Reports Server (NTRS)

    Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.

    1987-01-01

    The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data, which is a first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.

  3. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    SciTech Connect

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-09-20

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL, among others) has increased recently as a result of their potential use in criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in determining the applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper describes the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

  4. Computational and experimental methods to decipher the epigenetic code

    PubMed Central

    de Pretis, Stefano; Pelizzola, Mattia

    2014-01-01

    A multi-layered set of epigenetic marks, including post-translational modifications of histones and methylation of DNA, is finely tuned to define the epigenetic state of chromatin in any given cell type under specific conditions. Knowledge about the combinations of epigenetic marks occurring in the genomes of different cell types under various conditions is increasing rapidly. Computational methods have been developed for the identification of these states, unraveling the combinatorial nature of epigenetic marks and their association with genomic functional elements and transcriptional states. Nevertheless, the precise rules defining the interplay between all these marks remain poorly characterized. In this perspective we review the current state of this research field, illustrating the power and the limitations of current approaches. Finally, we sketch future avenues of research, illustrating how the adoption of specific experimental designs coupled with available experimental approaches could be critical for significant progress in this area. PMID:25295054

  5. Computational method for transmission eigenvalues for a spherically stratified medium.

    PubMed

    Cheng, Xiaoliang; Yang, Jing

    2015-07-01

    We consider a computational method for the interior transmission eigenvalue problem that arises in acoustic and electromagnetic scattering. The transmission eigenvalues contain useful information about some physical properties, such as the index of refraction. Rather than the existence and spectral properties of the transmission eigenvalues, we focus on their numerical calculation, especially for spherically stratified media in R^3. Due to the nonlinearity and the special structure of the interior transmission eigenvalue problem, few numerical methods exist to date. First, we reduce the problem to a second-order ordinary differential equation. Then, we apply the Hermite finite element to the weak formulation of the equation. With proper rewriting of the matrix-vector form, we change the original nonlinear eigenvalue problem into a quadratic eigenvalue problem, which can be written as a linear system and solved by the eigs function in MATLAB. This numerical method is fast, effective, and can calculate as many transmission eigenvalues as needed at a time. PMID:26367151
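
    The linearization step above, turning the quadratic eigenvalue problem into a linear one, is standard and easy to reproduce. Below is a minimal Python/SciPy sketch of a companion linearization for (lam^2 M + lam C + K) x = 0, the dense analogue of the MATLAB eigs call the authors describe; matrix names and sizes are illustrative, not taken from the paper.

        import numpy as np
        from scipy.linalg import eig

        def solve_qep(M, C, K):
            # Companion linearization of (lam^2 M + lam C + K) x = 0:
            # solve A z = lam B z with z = [x; lam*x].
            n = M.shape[0]
            Z, I = np.zeros((n, n)), np.eye(n)
            A = np.block([[Z, I], [-K, -C]])
            B = np.block([[I, Z], [Z, M]])
            lam, V = eig(A, B)      # dense generalized eigensolver
            return lam, V[:n, :]    # the x-part of each eigenvector

    For large sparse discretizations one would instead hand A and B to scipy.sparse.linalg.eigs, the direct counterpart of MATLAB's eigs.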

  6. Optimal pulse design in quantum control: A unified computational method

    PubMed Central

    Li, Jr-Shin; Ruths, Justin; Yu, Tsyr-Yan; Arthanari, Haribabu; Wagner, Gerhard

    2011-01-01

    Many key aspects of control of quantum systems involve manipulating a large quantum ensemble exhibiting variation in the value of parameters characterizing the system dynamics. Developing electromagnetic pulses to produce a desired evolution in the presence of such variation is a fundamental and challenging problem in this research area. We present such robust pulse designs as an optimal control problem of a continuum of bilinear systems with a common control function. We map this control problem of infinite dimension to a problem of polynomial approximation employing tools from geometric control theory. We then adopt this new notion and develop a unified computational method for optimal pulse design using ideas from pseudospectral approximations, by which a continuous-time optimal control problem of pulse design can be discretized to a constrained optimization problem with spectral accuracy. Furthermore, this is a highly flexible and efficient numerical method that requires low order of discretization and yields inherently smooth solutions. We demonstrate this method by designing effective broadband π/2 and π pulses with reduced rf energy and pulse duration, which show significant sensitivity enhancement at the edge of the spectrum over conventional pulses in 1D and 2D NMR spectroscopy experiments. PMID:21245345

  7. Matching wind turbine rotors and loads: Computational methods for designers

    NASA Astrophysics Data System (ADS)

    Seale, J. B.

    1983-04-01

    A comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications is reported. A method was developed to convert the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) it is decided how turbine power is to be governed to ensure the safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics are used to predict long-term energy output. Most systems can be treated with a graph-and-calculator approach. The method leads to energy predictions, and to insight into the modeled processes. A computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
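
    Step (4) of the method, folding the power curve into wind statistics to predict long-term energy, amounts to integrating the power curve against a wind-speed distribution. A minimal Python sketch follows; the cubic power curve and the Weibull parameters are hypothetical placeholders rather than values from the study.

        import numpy as np

        def power_kw(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=50.0):
            # Toy turbine power curve: cubic rise from cut-in to rated, then flat.
            frac = np.clip((v - v_in) / (v_rated - v_in), 0.0, 1.0)
            return np.where((v >= v_in) & (v <= v_out), p_rated * frac**3, 0.0)

        def annual_energy_kwh(k=2.0, c=7.0, hours=8760.0, n=3001):
            # Expected annual energy: E[P] * hours, wind speed ~ Weibull(k, c).
            v = np.linspace(0.0, 30.0, n)
            pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)
            dv = v[1] - v[0]
            return hours * np.sum(power_kw(v) * pdf) * dv

        print(annual_energy_kwh())  # rough yearly yield [kWh] for the toy system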

  8. Computational methods for the detection of cis-regulatory modules.

    PubMed

    Van Loo, Peter; Marynen, Peter

    2009-09-01

    Metazoan transcription regulation occurs through the concerted action of multiple transcription factors that bind co-operatively to cis-regulatory modules (CRMs). The annotation of these key regulators of transcription is lagging far behind the annotation of the transcriptome itself. Here, we give an overview of existing computational methods to detect these CRMs in metazoan genomes. We subdivide these methods into three classes: CRM scanners screen sequences for CRMs based on predefined models that often consist of multiple position weight matrices (PWMs). CRM builders construct models of similar CRMs controlling a set of co-regulated or co-expressed genes. CRM genome screeners screen sequences or complete genomes for CRMs as homotypic or heterotypic clusters of binding sites for any combination of transcription factors. We believe that CRM scanners are currently the most advanced methods, although their applicability is limited. Finally, we argue that CRM builders that make use of PWM libraries will benefit greatly from future advances and will prove to be most instrumental for the annotation of regulatory regions in metazoan genomes. PMID:19498042
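
    To make the PWM-based scanning idea concrete, here is a minimal Python sketch that scores every window of a DNA sequence against a log-odds PWM; the count matrix, pseudocount, and uniform background are all hypothetical choices. A heterotypic screener in the sense above would combine several such score tracks and look for windows where high-scoring sites for different factors cluster.

        import numpy as np

        BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

        def log_odds_pwm(counts, background=0.25, pseudocount=1.0):
            # counts: 4 x L matrix of observed base counts per motif position.
            freqs = (counts + pseudocount) / (counts + pseudocount).sum(axis=0)
            return np.log2(freqs / background)

        def scan(sequence, pwm):
            # Score each window of length L; peaks mark candidate binding sites.
            L = pwm.shape[1]
            idx = np.array([BASES[b] for b in sequence])  # ACGT-only sequence
            cols = np.arange(L)
            return np.array([pwm[idx[i:i + L], cols].sum()
                             for i in range(len(idx) - L + 1)])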

  9. Automatic heart positioning method in computed tomography scout images.

    PubMed

    Li, Hong; Liu, Kaihua; Sun, Hang; Bao, Nan; Wang, Xu; Tian, Shi; Qi, Shouliang; Kang, Yan

    2014-01-01

    Computed tomography (CT) radiation dose can be reduced significantly by region of interest (ROI) CT scanning. Automatically positioning the heart in CT scout images is an essential step in realizing ROI CT scans of the heart. This paper proposes a fully automatic heart positioning method for CT scout images, covering both the anteroposterior (A-P) scout image and the lateral scout image. The key steps are determining the feature points of the heart and obtaining part of the heart boundary on the A-P scout image, then transforming that part of the boundary into the polar coordinate system and obtaining the whole boundary of the heart by slant elliptic equation curve fitting. For heart positioning on the lateral image, the top and bottom boundaries obtained from the A-P image can be inherited. The proposed method was tested on a clinical routine dataset of 30 cases (30 A-P scout images and 30 lateral scout images). Experimental results show that 26 cases of the dataset achieved a very good positioning result of the heart in both the A-P scout image and the lateral scout image. The method may be helpful for ROI CT scans of the heart. PMID:25227037
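
    The curve-fitting step can be illustrated with a generic least-squares conic fit; the paper's slant elliptic equation fit in polar coordinates is a variant of the same idea. The sketch below recovers conic coefficients from boundary points via an SVD null vector; it fits a general conic, so a production implementation would add an ellipse-specific constraint such as Fitzgibbon's.

        import numpy as np

        def fit_conic(x, y):
            # Fit a x^2 + b xy + c y^2 + d x + e y + f = 0 to boundary points
            # by taking the smallest right singular vector of the design matrix.
            D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
            _, _, Vt = np.linalg.svd(D)
            return Vt[-1]  # (a, b, c, d, e, f), defined up to scale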

  10. 78 FR 54453 - Notice of Public Meeting-Intersection of Cloud Computing and Mobility Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... forum and workshop are open to the general public. NIST invites organizations to display posters and... Computing or Mobility work at an exhibit table or with a poster. The first 25 organizations requesting an exhibit table or a poster display related to Cloud Computing & Mobility will be accepted for both...

  11. A Critical Review of Computer-Assisted Learning in Public Health via the Internet, 1999-2008

    ERIC Educational Resources Information Center

    Corda, Kirsten W.; Polacek, Georgia N. L. J.

    2009-01-01

    Computers and the internet have been utilized as viable avenues for public health education delivery. Yet the effectiveness, e.g., behavior change, from use of these tools has been limited. Previous reviews have focused on single health topics such as smoking cessation and weight loss. This review broadens the scope to consider computer-assisted…

  12. Developing a personal computer-based data visualization system using public domain software

    NASA Astrophysics Data System (ADS)

    Chen, Philip C.

    1999-03-01

    The current research investigates the possibility of developing a computing-visualization system using a public domain software system built on a personal computer. Visualization Toolkit (VTK) is available on UNIX and PC platforms. VTK uses C++ to build an executable. It has abundant programming classes/objects that are contained in the system library. Users can also develop their own classes/objects in addition to those existing in the class library, and can develop applications with any of the C++, Tcl/Tk, and JAVA environments. The present research shows how a data visualization system can be developed with VTK running on a personal computer. The topics include: execution efficiency; visual object quality; availability of the user interface design; and the feasibility of a VTK-based World Wide Web data visualization system. The present research features a case study showing how to use VTK to visualize meteorological data with techniques including iso-surfaces, volume rendering, vector display, and composite analysis. The study also shows how the VTK outline, axes, and two-dimensional annotation text and title enhance the data presentation. The present research also demonstrates how VTK works in an internet environment while accessing an executable through a JAVA application program in a webpage.

  13. Helping Students Soar to Success on Computers: An Investigation of the Soar Study Method for Computer-Based Learning

    ERIC Educational Resources Information Center

    Jairam, Dharmananda; Kiewra, Kenneth A.

    2010-01-01

    This study used self-report and observation techniques to investigate how students study computer-based materials. In addition, it examined if a study method called SOAR can facilitate computer-based learning. SOAR is an acronym that stands for the method's 4 theoretically driven and empirically supported components: select (S), organize (O),…

  14. 29 CFR 779.266 - Methods of computing annual volume of sales or business.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Methods of computing annual volume of sales or business... Apply; Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No computations of annual gross dollar volume are necessary to determine...

  15. 29 CFR 779.266 - Methods of computing annual volume of sales or business.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Methods of computing annual volume of sales or business... Apply; Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No computations of annual gross dollar volume are necessary to determine...

  16. 29 CFR 779.266 - Methods of computing annual volume of sales or business.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Methods of computing annual volume of sales or business... Apply; Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No computations of annual gross dollar volume are necessary to determine...

  17. 29 CFR 779.266 - Methods of computing annual volume of sales or business.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Methods of computing annual volume of sales or business... Apply; Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of... lieu of calendar quarters in computing the annual volume. Once either basis has been adopted it must...

  18. GRACE: Public Health Recovery Methods following an Environmental Disaster

    PubMed Central

    Svendsen, ER; Whittle, N; Wright, L; McKeown, RE; Sprayberry, K; Heim, M; Caldwell, R; Gibson, JJ; Vena, J.

    2014-01-01

    Different approaches are necessary when Community Based Participatory Research (CBPR) of environmental illness is initiated after an environmental disaster within a community. Often such events are viewed as golden scientific opportunities to do epidemiological studies. However, we believe that in such circumstances, community engagement and empowerment need to be integrated into the public health service efforts in order for both the service efforts and any science to be successful, with special care taken to address the immediate health needs of the community first rather than the pressing need to answer important scientific questions. We will demonstrate how we have simultaneously provided valuable public health service, embedded generalizable scientific knowledge, and built a successful foundation for supplemental CBPR through our on-going recovery work after the chlorine gas disaster in Graniteville, South Carolina. PMID:20439226

  19. Pesticides and public health: integrated methods of mosquito management.

    PubMed Central

    Rose, R. I.

    2001-01-01

    Pesticides have a role in public health as part of sustainable integrated mosquito management. Other components of such management include surveillance, source reduction or prevention, biological control, repellents, traps, and pesticide-resistance management. We assess the future use of mosquito control pesticides in view of niche markets, incentives for new product development, Environmental Protection Agency registration, the Food Quality Protection Act, and improved pest management strategies for mosquito control. PMID:11266290

  20. Development of computational methods for heavy lift launch vehicles

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Ryan, James S.

    1993-01-01

    The research effort has been focused on the development of an advanced flow solver for complex viscous turbulent flows with shock waves. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. A new computer program named CENS3D has been developed for viscous turbulent flows with discontinuities. Details of the code are described in Appendix A and Appendix B. With the developments of the numerical algorithm and dissipation model, the simulation of three-dimensional viscous compressible flows has become more efficient and accurate. The results of the research are expected to yield a direct impact on the design process of future liquid fueled launch systems.

  1. A stoichiometric calibration method for dual energy computed tomography.

    PubMed

    Bourque, Alexandra E; Carrier, Jean-François; Bouchard, Hugo

    2014-04-21

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic reconstruction algorithm (filtered back projection). With a more advanced method (sinogram affirmed iterative technique), the values become 1.0 mm, 0.5 mm and 0.4 mm for protons, helium and carbon ions, respectively. These results allow one to conclude that the present adaptation of the stoichiometric calibration yields a highly accurate method for characterizing tissue with DECT for ion beam therapy and potentially for photon beam therapy. PMID:24694786

  2. A stoichiometric calibration method for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

    2014-04-01

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic reconstruction algorithm (filtered back projection). With a more advanced method (sinogram affirmed iterative technique), the values become 1.0 mm, 0.5 mm and 0.4 mm for protons, helium and carbon ions, respectively. These results allow one to conclude that the present adaptation of the stoichiometric calibration yields a highly accurate method for characterizing tissue with DECT for ion beam therapy and potentially for photon beam therapy.
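
    The link between (ED, EAN) and ion stopping power runs through the Bethe formula: once an I-value is assigned from the EAN fit, the stopping-power ratio to water follows from the relative electron density and a ratio of stopping numbers. The Python sketch below is schematic only; the sample I-values, the water I-value of 75 eV, and the omission of shell and density corrections are simplifying assumptions, not the paper's calibration.

        import numpy as np

        ME_C2 = 0.510999e6   # electron rest energy [eV]
        MP_C2 = 938.272e6    # proton rest energy [eV]
        I_WATER = 75.0       # assumed mean excitation energy of water [eV]

        def proton_spr(rho_e_rel, I_eV, T_MeV=216.0):
            # Stopping-power ratio to water from relative electron density and
            # I-value, via the ratio of Bethe stopping numbers (constants cancel).
            T = T_MeV * 1e6
            gamma = 1.0 + T / MP_C2
            beta2 = 1.0 - 1.0 / gamma**2
            L = lambda I: np.log(2.0 * ME_C2 * beta2 / (I * (1.0 - beta2))) - beta2
            return rho_e_rel * L(I_eV) / L(I_WATER)

        print(proton_spr(1.05, 80.0))  # e.g., a soft-tissue-like voxel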

  3. A FAST NEW PUBLIC CODE FOR COMPUTING PHOTON ORBITS IN A KERR SPACETIME

    SciTech Connect

    Dexter, Jason; Agol, Eric

    2009-05-10

    Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. We present a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semianalytically for the first time. We discuss the method, its implementation in a freely available FORTRAN code, and its application to toy problems from the literature.

  4. A Fast New Public Code for Computing Photon Orbits in a Kerr Spacetime

    NASA Astrophysics Data System (ADS)

    Dexter, Jason; Agol, Eric

    2009-05-01

    Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. We present a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semianalytically for the first time. We discuss the method, its implementation in a freely available FORTRAN code, and its application to toy problems from the literature.
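
    Carlson's symmetric elliptic integrals, to which the geodesic equations are reduced, are available in SciPy (1.8 or later), so the basic reduction idea can be checked in a few lines: for example, Legendre's complete integral K(m) equals Carlson's R_F(0, 1-m, 1).

        import numpy as np
        from scipy.special import elliprf, ellipk  # elliprf needs SciPy >= 1.8

        m = 0.7
        K_carlson = elliprf(0.0, 1.0 - m, 1.0)  # Carlson form of Legendre's K(m)
        assert np.isclose(K_carlson, ellipk(m))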

  5. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

    SciTech Connect

    Luttman, A.

    2012-03-30

    The main long-term goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed by means of an objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from observations and priors. A mathematical formulation is set up for computing the flow field that minimizes this objective. An application to oceanic flow based on sea surface temperature is presented.
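
    The record is terse, but "variational optimization to compute vector fields from measured data" is the setting of classical optical flow. As a stand-in illustration (not the authors' formulation), here is a minimal Horn-Schunck solver, which minimizes a brightness-constancy residual plus a smoothness prior by Jacobi iteration; applied to successive sea surface temperature images it yields an apparent surface velocity field.

        import numpy as np
        from scipy.ndimage import convolve

        def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
            # Estimate flow (u, v) between float images I1, I2 by minimizing the
            # brightness-constancy residual plus alpha^2 * smoothness (HS 1981).
            kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
            ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
            kt = 0.25 * np.ones((2, 2))
            Ix = convolve(I1, kx) + convolve(I2, kx)   # spatial derivatives
            Iy = convolve(I1, ky) + convolve(I2, ky)
            It = convolve(I2, kt) - convolve(I1, kt)   # temporal derivative
            avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
            u = np.zeros_like(I1)
            v = np.zeros_like(I1)
            for _ in range(n_iter):
                ub, vb = convolve(u, avg), convolve(v, avg)
                common = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
                u, v = ub - Ix * common, vb - Iy * common
            return u, v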

  6. Non-unitary probabilistic quantum computing circuit and method

    NASA Technical Reports Server (NTRS)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
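
    One standard way to realize such a circuit numerically is to embed the (suitably rescaled) non-unitary operator in a unitary on a doubled space via a Halmos dilation and postselect on the ancilla, repeating on failure just as described. The following Python sketch is a generic illustration of that repeat-until-success pattern, not the patented circuit itself.

        import numpy as np
        from scipy.linalg import sqrtm

        def dilate(A):
            # Halmos dilation: for a contraction A (||A|| <= 1), the block matrix
            # [[A, (I-AA*)^1/2], [(I-A*A)^1/2, -A*]] is unitary.
            n = A.shape[0]
            I = np.eye(n)
            return np.block([[A, sqrtm(I - A @ A.conj().T)],
                             [sqrtm(I - A.conj().T @ A), -A.conj().T]])

        def apply_probabilistically(A, psi, rng, max_tries=100):
            # Measure the ancilla; on success the data register carries A|psi>
            # (up to normalization), on failure retry with the collapsed state.
            U = dilate(A / np.linalg.norm(A, 2))  # rescale so ||A|| <= 1
            n = psi.size
            state = psi / np.linalg.norm(psi)
            for _ in range(max_tries):
                out = U @ np.concatenate([state, np.zeros(n, dtype=complex)])
                p0 = np.linalg.norm(out[:n]) ** 2   # prob. of ancilla outcome 0
                if rng.random() < p0:
                    return out[:n] / np.sqrt(p0)    # success branch
                state = out[n:] / np.sqrt(1.0 - p0) # failure: collapsed state
            raise RuntimeError("no success within max_tries")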

  7. Computational Methods for Domain Partitioning of Protein Structures

    NASA Astrophysics Data System (ADS)

    Veretnik, Stella; Shindyalov, Ilya

    Analysis of protein structures typically begins with decomposition of the structure into more basic units, called "structural domains". The underlying goal is to reduce a complex protein structure to a set of simpler yet structurally meaningful units, each of which can be analyzed independently. Structural semi-independence of domains is their hallmark: domains often have compact structure and can fold or function independently. Domains can undergo so-called "domain shuffling" when they reappear in different combinations in different proteins, thus implementing different biological functions (Doolittle, 1995). Proteins can then be conceived as being built of such basic blocks: some, especially small proteins, usually consist of just one domain, while other proteins possess a more complex architecture containing multiple domains. Therefore, the methods for partitioning a structure into domains are of critical importance: their outcome defines the set of basic units upon which structural classifications are built and evolutionary analysis is performed. This is especially true nowadays in the era of structural genomics. Today there are many methods that decompose the structure into domains: some of them are manual (i.e., based on human judgment), others are semiautomatic, and still others are completely automatic (based on algorithms implemented as software). Overall there is a high level of consistency and robustness in the process of partitioning a structure into domains (for 80% of proteins), at least for structures where domain location is obvious. The picture is less bright when we consider proteins with more complex architectures: neither human experts nor computational methods can reach consistent partitioning in many such cases. This is a rather accurate reflection of biological phenomena in general: since domains are formed by different mechanisms, it is nearly impossible to come up with a set of well-defined rules that captures all observed cases.

  8. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of...

  9. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 2 2011-04-01 2011-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of...

  10. Publications.

    ERIC Educational Resources Information Center

    Aviation/Space, 1980

    1980-01-01

    Presents a variety of publications available from government and nongovernment sources. The government publications are from the Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) and are designed for educators, students, and the public. (Author/SA)

  11. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for introducing modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general-education and worldview functions of computer science call for additional research on the…

  12. Formulations and computational methods for contact problems in solid mechanics

    NASA Astrophysics Data System (ADS)

    Mirar, Anand Ramchandra

    2000-11-01

    A study of existing formulations and computational methods for contact problems is conducted. The purpose is to gain insights into the solution procedures and pinpoint their limitations so that alternate procedures can be developed. Three such procedures based on the augmented Lagrangian method (ALM) are proposed. Small-scale benchmark problems are solved analytically as well as numerically to study the existing and proposed methods. The variational inequality formulation for frictionless contact is studied using the two bar truss-wall problem in a closed form. Sub-differential formulation is investigated using the spring-wall contact and the truss-wall friction problems. A two-phase analytical procedure is developed for solving the truss-wall frictional contact benchmark problem. The variational equality formulation for contact problems is studied using the penalty method along with the Newton-Raphson procedure. Limitations of such procedures, mainly due to their dependence on the user defined parameters (i.e., the penalty values and the number of time steps), are identified. Based on the study it is concluded that alternate formulations need to be developed. Frictionless contact formulation is developed using the basic concepts of ALM from optimization theory. A new frictional contact formulation (ALM1) is then developed employing ALM. Automatic penalty update procedure is used to eliminate dependence of the solution on the penalty values. Dependence of the solution on the number of time steps in the existing as well as ALM1 formulations is attributed to a flaw in the return mapping procedure for friction. Another new frictional contact formulation (ALM2) is developed to eliminate the dependence of solution on the number of time steps along with the penalty values. Effectiveness of ALM2 is demonstrated by solving the two bar and five bar truss-wall problems. The solutions are compared with the analytical and existing formulations. Design sensitivity analysis of frictional contact problems is also studied and potential advantages of ALM2 over the existing formulations to obtain the sensitivity coefficients are identified. Finally, future directions of the research and conclusions are given.
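
    The spring-wall benchmark mentioned above is small enough that the augmented Lagrangian machinery can be shown in full. The Python sketch below solves a one-degree-of-freedom frictionless contact problem with the standard multiplier update; the automatic penalty update and the friction treatment of ALM1/ALM2 are omitted, and all parameter values are made up. Note that the converged multiplier equals the contact force f - k*gap regardless of the penalty value rho, which is precisely the parameter insensitivity the ALM formulations are after.

        import numpy as np

        def alm_spring_wall(k=100.0, f=25.0, gap=0.1, rho=1000.0,
                            n_outer=50, tol=1e-10):
            # Augmented Lagrangian solution of a 1-DOF contact problem:
            # minimize 0.5*k*u^2 - f*u subject to u <= gap (rigid wall).
            lam = 0.0                    # contact force multiplier
            for _ in range(n_outer):
                # Inner minimization has a closed form for this scalar problem.
                u_free = f / k
                if lam + rho * (u_free - gap) > 0.0:        # constraint active
                    u = (f - lam + rho * gap) / (k + rho)
                else:                                        # no contact
                    u = u_free
                lam_new = max(0.0, lam + rho * (u - gap))    # multiplier update
                if abs(lam_new - lam) < tol:
                    break
                lam = lam_new
            return u, lam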

  13. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity has been seen recently in the research and development community, much of it directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
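
    At its simplest, the failure probability in question can be estimated by crude Monte Carlo sampling of a limit-state function; the advanced reliability methods surveyed here exist largely because this brute-force estimator becomes expensive for the small probabilities of interest. A toy Python sketch with made-up capacity and load distributions:

        import numpy as np

        def failure_probability(n=1_000_000, seed=0):
            # Limit state g = R - S: failure when load S exceeds capacity R.
            rng = np.random.default_rng(seed)
            R = rng.normal(5.0, 0.5, n)   # hypothetical capacity
            S = rng.normal(3.0, 0.8, n)   # hypothetical load
            return np.mean(R - S < 0.0)

        print(failure_probability())  # ~0.017 for these toy distributions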

  14. Established and emerging dose reduction methods in cardiac computed tomography.

    PubMed

    Small, Gary R; Kazmi, Mustapha; Dekemp, Robert A; Chow, Benjamin J W

    2011-08-01

    Cardiac computed tomography (CT) is a non-invasive modality that is commonly used as an alternative to invasive coronary angiography for the investigation of coronary artery disease. The enthusiasm for this technology has been tempered by a growing appreciation of the potential risks of malignancy associated with the use of ionising radiation. In the spirit of minimizing patient risk, the medical profession and industry have worked hard to develop methods and protocols that reduce patient radiation exposure while maintaining excellent diagnostic accuracy. A complete understanding of radiation reduction techniques will allow clinicians to reduce patient risk while providing an important diagnostic service. This review considers the established and emerging techniques that may be adopted to reduce patient absorbed doses from x-ray CT. By modifying (1) x-ray tube output, (2) imaging time (scan duration), (3) imaging distance (scan length) and (4) the appropriate use of shielding, clinicians will be able to adhere to the 'as low as reasonably achievable' (ALARA) principle. PMID:21630110

  15. Computational Methods for RNA Structure Validation and Improvement.

    PubMed

    Jain, Swati; Richardson, David C; Richardson, Jane S

    2015-01-01

    With increasing recognition of the roles RNA molecules and RNA/protein complexes play in an unexpected variety of biological processes, understanding of RNA structure-function relationships is of high current importance. To make clean biological interpretations from three-dimensional structures, it is imperative to have high-quality, accurate RNA crystal structures available, and the community has thoroughly embraced that goal. However, due to the many degrees of freedom inherent in RNA structure (especially for the backbone), it is a significant challenge to succeed in building accurate experimental models for RNA structures. This chapter describes the tools and techniques our research group and our collaborators have developed over the years to help RNA structural biologists both evaluate and achieve better accuracy. Expert analysis of large, high-resolution, quality-conscious RNA datasets provides the fundamental information that enables automated methods for robust and efficient error diagnosis in validating RNA structures at all resolutions. The even more crucial goal of correcting the diagnosed outliers has steadily developed toward highly effective, computationally based techniques. Automation enables solving complex issues in large RNA structures, but cannot circumvent the need for thoughtful examination of local details, and so we also provide some guidance for interpreting and acting on the results of current structure validation for RNA. PMID:26068742

  16. Validation of viscous and inviscid computational methods for turbomachinery components

    NASA Technical Reports Server (NTRS)

    Povinelli, L. A.

    1986-01-01

    An assessment of several three-dimensional computer codes used at the NASA Lewis Research Center is presented. Four flow situations are examined, for which both experimental data and computational results are available. The four flows form a basis for the evaluation of the computational procedures. It is concluded that transonic rotor flow at peak efficiency conditions may be calculated with a reasonable degree of accuracy, whereas off-design conditions are not accurately determined. Duct flows and turbine cascade flows may also be computed with reasonable accuracy, whereas radial inflow turbine flow remains a challenging problem.

  17. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem, 181.6 seconds on a CONVEX computer, was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was on the Cray Y-MP using an average of 3.63 processors.
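
    The three computational kernels identified above (factorization, equation solution, and matrix-vector products) are visible even in the most stripped-down Lanczos iteration. A minimal Python version without reorthogonalization is sketched below; production eigensolvers of the kind described add factorizations, shifts, and restart logic.

        import numpy as np
        from scipy.linalg import eigh_tridiagonal

        def lanczos(A, m, seed=0):
            # m steps of Lanczos on symmetric A; the Ritz values of the
            # resulting tridiagonal matrix approximate A's extremal eigenvalues.
            n = A.shape[0]
            rng = np.random.default_rng(seed)
            q = rng.standard_normal(n)
            q /= np.linalg.norm(q)
            q_prev = np.zeros(n)
            alpha, beta = np.zeros(m), np.zeros(m - 1)
            for j in range(m):
                w = A @ q               # matrix-vector product dominates the cost
                alpha[j] = q @ w
                w -= alpha[j] * q
                if j > 0:
                    w -= beta[j - 1] * q_prev
                if j < m - 1:
                    beta[j] = np.linalg.norm(w)
                    q_prev, q = q, w / beta[j]
            return eigh_tridiagonal(alpha, beta, eigvals_only=True)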

  18. Gyrokinetic theory and computational methods for electromagnetic perturbations in tokamaks

    NASA Astrophysics Data System (ADS)

    Qin, Hong

    A general gyrokinetic formalism and appropriate computational methods have been developed for electromagnetic perturbations in toroidal plasmas. This formalism and the associated numerical code represent the first self-consistent, comprehensive, fully kinetic model for treating both magnetohydrodynamic (MHD) instabilities and electromagnetic drift waves. The gyrokinetic system of equations is derived by phase-space Lagrangian Lie perturbation methods, which enable applications to modes with arbitrary wavelength. An important component missing from previous electromagnetic gyrokinetic theories, the gyrokinetic perpendicular dynamics, is identified and developed in the present analysis. This is accomplished by introducing a new ``distribution function'' and an associated governing gyrokinetic equation. Consequently, the compressional Alfvén waves and cyclotron waves can be systematically treated. The new insights into the gyrokinetic perpendicular dynamics uncovered here clarify the understanding of the gyrokinetic approach: the real spirit of the gyrokinetic reduction is to decouple the gyromotion from the guiding center orbital motion, instead of averaging it out. The gyrokinetic perpendicular dynamics is in fact essential to the recovery of the MHD model from a fully kinetic derivation. In particular, it serves to generalize, in the gyrokinetic framework, Spitzer's solution of the fluid/particle paradox to a broader regime of applicability. The gyrokinetic system is also shown to be reducible to a simpler form to deal with shear Alfvén waves. This consists of an appropriate form of the gyrokinetic equation governing the distribution function, the gyrokinetic Poisson equation, and a newly derived gyrokinetic moment equation. If all of the kinetic effects are neglected, the gyrokinetic moment equation is shown to recover the ideal MHD equation for shear Alfvén modes. In addition, a gyrokinetic Ohm's law, including both the perpendicular and the parallel components, is derived. The gyrokinetic equation is solved for the perturbed distribution function by integrating along the unperturbed orbits. Substituting this solution back into the gyrokinetic Poisson equation and the gyrokinetic moment equation yields the eigenmode equation. The eigenvalue problem is then solved by using a Fourier decomposition in the poloidal direction and a finite element method in the radial direction. Both analytic and numerical results from the gyrokinetic model were found to agree very well with the MHD results. Destabilization of the TAEs by energetic particles is known to be vitally important for ignition-class plasmas. For the test case with Maxwellian energetic hydrogen ions, comparisons have accordingly been made between the results from the present non-perturbative, fully kinetic calculation using the KIN-2DEM code and those from the perturbative hybrid calculation with the NOVA-K code. The agreement varies with the hot-particle thermal velocity. The discrepancy is mainly attributed to differences in the basic models.

  19. Hands-On versus Computer Simulation Methods in Chemistry.

    ERIC Educational Resources Information Center

    Bourque, Donald R.; Carlson, Gaylen R.

    1987-01-01

    Reports on a study that examined and compared the cognitive effectiveness of a traditional hands-on laboratory exercise with a computer-simulated program on the same topic. Sought to determine if coupling these formats would provide optimum student comprehension. Suggests hands-on exercises be followed by computer simulations as postlaboratory…

  20. Students' Attitudes towards Control Methods in Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    Hintze, Hanne; And Others

    1988-01-01

    Describes study designed to investigate dental students' attitudes toward computer-assisted teaching as applied in programs for oral radiology in Denmark. Programs using personal computers and slide projectors with varying degrees of learner and teacher control are described, and differences in attitudes between male and female students are…

  1. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  2. Library Orientation Methods, Mental Maps, and Public Services Planning.

    ERIC Educational Resources Information Center

    Ridgeway, Trish

    Two library orientation methods, a self-guided cassette walking tour and a slide-tape program, were administered to 202 freshman students to determine if moving through the library increased students' ability to develop a mental map of the library. An effort was made to ensure that the two orientation programs were equivalent. Results from the 148…

  3. Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture

    DOEpatents

    Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2011-10-11

    Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.

  4. Four-stage computational technology with adaptive numerical methods for computational aerodynamics

    NASA Astrophysics Data System (ADS)

    Shaydurov, V.; Liu, T.; Zheng, Z.

    2012-10-01

    Computational aerodynamics is a key technology in aircraft design; it runs ahead of physical experiment and complements it. All three components of computational modeling are being actively developed: mathematical models of real aerodynamic processes, numerical algorithms, and high-performance computing. The most impressive progress has been made in the field of computing, though with considerable complication of computer architecture. The development of numerical algorithms is more conservative: they are proposed and theoretically justified for simpler mathematical problems. Nevertheless, computational mathematics has now amassed a whole palette of numerical algorithms that can provide acceptable accuracy and an interface between modern mathematical models in aerodynamics and high-performance computers. A significant step in this direction was the European project ADIGMA, whose positive experience will be used in the international project TRISTAM for further movement in the field of computational technologies for aerodynamics. This paper gives a general overview of the objectives and approaches intended for use and a description of the recommended four-stage computer technology.

  5. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to whether an establishment qualifies for exemption under section...

  6. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to whether an establishment qualifies for exemption under section...

  7. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to whether an establishment qualifies for exemption under section...

  8. The Repeated Replacement Method: A Pure Lagrangian Meshfree Method for Computational Fluid Dynamics

    PubMed Central

    Walker, Wade A.

    2012-01-01

    In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling a compressible fluid's tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly chops out fluid from active areas and replaces it with new flattened fluid cells with the same mass, momentum, and energy. We call the new cells flattened because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175
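
    The conservation step at the heart of RRM, replacing a patch of fluid with one flattened cell carrying the same mass, momentum, and energy, is easy to state in code. Below is a one-dimensional, ideal-gas Python sketch; the gamma value and the 1D restriction are assumptions of this illustration, not of the paper.

        import numpy as np

        GAMMA = 1.4  # ideal-gas ratio of specific heats (assumed here)

        def flatten_cells(rho, u, p, vol):
            # Replace a group of 1D fluid cells with one "flattened" cell of the
            # same total mass, momentum, and energy.
            m = np.sum(rho * vol)                                    # total mass
            mom = np.sum(rho * u * vol)                              # momentum
            E = np.sum((p / (GAMMA - 1) + 0.5 * rho * u**2) * vol)   # energy
            V = np.sum(vol)
            rho_new = m / V
            u_new = mom / m
            e_int = E / V - 0.5 * rho_new * u_new**2   # internal energy density
            p_new = (GAMMA - 1) * e_int
            return rho_new, u_new, p_new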

  9. Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.

    2011-01-01

    This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…

  10. Astronomical refraction: Computational methods for all zenith angles

    NASA Technical Reports Server (NTRS)

    Auer, L. H.; Standish, E. M.

    2000-01-01

    It is shown that the problem of computing astronomical refraction for any value of the zenith angle may be reduced to a simple, nonsingular, numerical quadrature when the proper choice is made for the independent variable of integration.

  11. Illumination invariant method to detect and track left luggage in public areas

    NASA Astrophysics Data System (ADS)

    Hassan, Waqas; Mitra, Bhargav; Chatwin, Chris; Young, Rupert; Birch, Philip

    2010-04-01

    Surveillance and its security applications have recently become critical subjects, with various studies placing a high demand on robust computer vision solutions that can work effectively and efficiently in complex environments without human intervention. In this paper, an efficient illumination-invariant template generation and tracking method to identify and track abandoned objects (bags) in public areas is described. Intensity and chromaticity distortion parameters are initially used to generate a binary mask containing all the moving objects in the scene. The binary blobs in the mask are tracked, and those found static through the use of a 'centroid-range' method are segregated. A Laplacian of Gaussian (LoG) filter is then applied to the parts of the current frame and the average background frame encompassed by the static blobs, to pick up the high-frequency components. The total energy is calculated for both frames, current and background, over the region covered by the detected edge map, to ensure that illumination change has not resulted in false segmentation. Finally, the resultant edge map is registered and tracked through a correlation-based matching process. The algorithm has been successfully tested on the iLIDs dataset, and results are presented in this paper.
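
    The illumination check described above, comparing Laplacian-of-Gaussian edge energy between the current frame and the background over a static blob, can be sketched directly with SciPy; the sigma and the decision threshold are hypothetical.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def edge_energy(patch, sigma=2.0):
            # Total high-frequency energy: sum of squared LoG responses.
            return np.sum(gaussian_laplace(patch.astype(float), sigma) ** 2)

        def is_real_object(current_patch, background_patch, rel_tol=0.25):
            # A genuine abandoned object changes the edge energy over the blob;
            # a pure illumination change largely preserves it.
            e_cur = edge_energy(current_patch)
            e_bg = edge_energy(background_patch)
            return abs(e_cur - e_bg) / max(e_bg, 1e-9) > rel_tol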

  12. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  13. Small Scale Distance Education; "The Personal (Computer) Touch"; Tutorial Methods for TMA's Using a Computer.

    ERIC Educational Resources Information Center

    Fritsch, Helmut; And Others

    1989-01-01

    The authors present reports of current research on distance education at the FernUniversitat in West Germany. Fritsch discusses adapting distance education techniques for small classes. Kuffner describes procedures for providing feedback to students using personalized computer-generated letters. Klute discusses using a computer with tutorial…

  14. A fast computing method to distinguish the hyperbolic trajectory of an non-autonomous system

    NASA Astrophysics Data System (ADS)

    Jia, Meng; Fan, Yang-Yu; Tian, Wei-Jian

    2011-03-01

    In an attempt to find a fast method for computing the DHT (distinguished hyperbolic trajectory) of a non-autonomous system, this study first proves that errors in the stable DHT can be ignored in the normal direction as the trajectories are extended. This conclusion means that the perturbed stable flow will approach the real trajectory as it extends over time. Based on this theory, and combined with the improved DHT computing method, this paper reports a new fast method for computing the DHT, which increases the computing speed without decreasing accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 60872159).

  15. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    PubMed Central

    Gao, Xin

    2013-01-01

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR. PMID:23453016

  16. Progress Towards Computational Method for Circulation Control Airfoils

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rumsey, C. L.; Anders, S. G.

    2005-01-01

    The compressible Reynolds-averaged Navier-Stokes equations are solved for circulation control airfoil flows. Numerical solutions are computed with both structured and unstructured grid solvers. Several turbulence models are considered, including the Spalart-Allmaras model with and without curvature corrections, the shear stress transport model of Menter, and the k-enstrophy model. Circulation control flows with jet momentum coefficients of 0.03, 0.10, and 0.226 are considered. Comparisons are made between computed and experimental pressure distributions, velocity profiles, Reynolds stress profiles, and streamline patterns. Including curvature effects yields the closest agreement with the measured data.

  17. Method for simulating paint mixing on computer monitors

    NASA Astrophysics Data System (ADS)

    Carabott, Ferdinand; Lewis, Garth; Piehl, Simon

    2002-06-01

    Computer programs like Adobe Photoshop can generate a mixture of two 'computer' colors by using the Gradient control. However, the resulting colors diverge from the equivalent paint mixtures in both hue and value. This study examines why programs like Photoshop are unable to simulate paint or pigment mixtures, and offers a solution using Photoshop's existing tools. The article discusses how a library of colors simulating paint mixtures is created from 13 artists' colors. The mixtures can be imported into Photoshop as a color swatch palette of 1248 colors and as 78 continuous or stepped gradient files, all accessed in a new software package, Chromafile.
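
    The underlying issue is that Photoshop-style gradients interpolate linearly in RGB, whereas pigments mix subtractively. A standard first-order model of the latter is single-constant Kubelka-Munk theory: convert reflectance to K/S, mix K/S linearly, and convert back. The toy Python sketch below uses made-up per-channel reflectances in (0, 1]; it is an illustration of the physics, not the Chromafile method.

        import numpy as np

        def km_mix(R1, R2, w=0.5):
            # Single-constant Kubelka-Munk mixing of two reflectances.
            ks = lambda R: (1.0 - R) ** 2 / (2.0 * R)         # reflectance -> K/S
            inv = lambda q: 1.0 + q - np.sqrt(q * (q + 2.0))  # K/S -> reflectance
            return inv(w * ks(np.asarray(R1)) + (1.0 - w) * ks(np.asarray(R2)))

        blue, yellow = np.array([0.1, 0.2, 0.8]), np.array([0.8, 0.7, 0.1])
        print(km_mix(blue, yellow))  # greenish, unlike a straight RGB average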

  18. One-to-One Computing in Public Schools: Lessons from "Laptops for All" Programs

    ERIC Educational Resources Information Center

    Abell Foundation, 2008

    2008-01-01

    The basic tenet of one-to-one computing is that the student and teacher have Internet-connected, wireless computing devices in the classroom and optimally at home as well. Also known as "ubiquitous computing," this strategy assumes that every teacher and student has her own computing device and obviates the need for moving classes to computer…

  19. 76 FR 12397 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Bureau of the Public Debt (BPD...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-07

    ... in the Federal Register on January 11, 2006, at 71 FR 1795. The relevant BPD SORs are Treasury/ BPD... Application. These SORs were last published in the Federal Register on July 23, 2008, at 73 FR 42906. 2... ADMINISTRATION Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Bureau of the Public Debt...

  20. SOURCE WATER PROTECTION OF PUBLIC DRINKING WATER WELLS: COMPUTER MODELING OF ZONES CONTRIBUTING RECHARGE TO PUMPING WELLS

    EPA Science Inventory

    Computer technology to assist states, tribes, and clients in the design of wellhead and source water protection areas for public water supply wells is being developed through two distinct SubTasks: (Sub task 1) developing a web-based wellhead decision support system, WellHEDSS, t...

  1. New Methods of Mobile Computing: From Smartphones to Smart Education

    ERIC Educational Resources Information Center

    Sykes, Edward R.

    2014-01-01

    Every aspect of our daily lives has been touched by the ubiquitous nature of mobile devices. We have experienced an exponential growth of mobile computing--a trend that seems to have no limit. This paper provides a report on the findings of a recent offering of an iPhone Application Development course at Sheridan College, Ontario, Canada. It…

  2. All for One: Integrating Budgetary Methods by Computer.

    ERIC Educational Resources Information Center

    Herman, Jerry J.

    1994-01-01

    With the advent of high speed and sophisticated computer programs, all budgetary systems can be combined in one fiscal management information system. Defines and provides examples for the four budgeting systems: (1) function/object; (2) planning, programming, budgeting system; (3) zero-based budgeting; and (4) site-based budgeting. (MLF)

  3. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    ERIC Educational Resources Information Center

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  4. [Computation method for optimization of recipes for protein content].

    PubMed

    Kovalev, N I; Karzeva, N J; Fiterer, V O

    1987-01-01

    The authors propose a calculated protein utilization coefficient that accounts for the differing utilization rates of the proteins contained in the mixture and their amino-acid compositions. The proposed formula is suitable for calculation by computer. The data obtained show high correlations with the results of biological tests with Tetrahymena cultures.

  5. Computed radiography imaging plates and associated methods of manufacture

    SciTech Connect

    Henry, Nathaniel F.; Moses, Alex K.

    2015-08-18

    Computed radiography imaging plates incorporate an intensifying material that is coupled to or intermixed with the phosphor layer, allowing electrons and/or low-energy x-rays to impart their energy on the phosphor layer while decreasing internal scattering and increasing resolution. The radiation needed to perform radiography can also be reduced as a result.

  6. Simple computer method provides contours for radiological images

    NASA Technical Reports Server (NTRS)

    Newell, J. D.; Keller, R. A.; Baily, N. A.

    1975-01-01

    The computer is provided with information concerning boundaries in the total image. The gradient of each point in the digitized image is calculated with the aid of a threshold technique; a set of algorithms is then invoked to reduce the number of gradient elements and to retain only the major ones for definition of the contour.
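
    A minimal sketch of the pipeline described above, using only NumPy: compute per-pixel gradients, then keep the strongest gradient elements as the contour. This is not the original NASA program; the function name and the keep-the-top-fraction threshold rule are illustrative assumptions.

    ```python
    import numpy as np

    def contour_from_gradient(image, keep_fraction=0.05):
        """Return a boolean mask marking the strongest gradient pixels."""
        gy, gx = np.gradient(image.astype(float))   # finite-difference gradients
        mag = np.hypot(gx, gy)                      # gradient magnitude
        # Threshold rule (assumed): retain only the top `keep_fraction` of
        # gradient elements, mimicking the step that keeps "major" elements.
        cutoff = np.quantile(mag, 1.0 - keep_fraction)
        return mag >= cutoff

    # Example: a bright disk on a dark background yields a ring-shaped contour.
    y, x = np.mgrid[:128, :128]
    disk = ((x - 64)**2 + (y - 64)**2 < 40**2).astype(float)
    mask = contour_from_gradient(disk)
    print(mask.sum(), "contour pixels")
    ```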

  7. Optical Design Methods: Your Head As A Personal Computer

    NASA Astrophysics Data System (ADS)

    Shafer, David

    1985-07-01

    Several design approaches are described which feature the use of your head as a design tool. This involves thinking about the design task at hand, trying to break it into separate, easily understood subtasks, and approaching these in a creative and intelligent fashion, as only humans can do. You and your computer can become a very powerful team when this design philosophy is adopted.

  8. Computer-Graphics and the Literary Construct: A Learning Method.

    ERIC Educational Resources Information Center

    Henry, Avril

    2002-01-01

    Describes an undergraduate student module that was developed at the University of Exeter (United Kingdom) in which students made their own computer graphics to discover and to describe literary structures in texts of their choice. Discusses learning outcomes and refers to the Web site that shows students' course work. (Author/LRW)

  10. A method for computing the leading-edge suction in a higher-order panel method

    NASA Technical Reports Server (NTRS)

    Ehlers, F. E.; Manro, M. E.

    1984-01-01

    Experimental data show that the phenomenon of a separation-induced leading-edge vortex is influenced by the wing thickness and the shape of the leading edge. Both thickness and leading-edge shape (rounded rather than pointed) delay the formation of a vortex. Existing computer programs used to predict the effect of a leading-edge vortex do not include a procedure for determining whether or not a vortex actually exists. Studies under NASA Contract NAS1-15678 have shown that the vortex development can be predicted by using the relationship between the leading-edge suction coefficient and the parabolic nose drag. The linear theory FLEXSTAB was used to calculate the leading-edge suction coefficient. This report describes the development of a method for calculating leading-edge suction using the capabilities of higher-order panel methods (exact boundary conditions). For a two-dimensional case, numerical methods were developed using the doublet strength and downwash distribution along the chord. A Gaussian quadrature formula that directly incorporates the logarithmic singularity in the downwash distribution at all panel edges was found to be the best method.

  11. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for calculating chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
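
    A hedged sketch of the free-energy-minimization idea (not Heald's original program): find equilibrium composition by minimizing the total Gibbs energy subject to an element-balance constraint, here for the textbook reaction N2O4 <-> 2 NO2 at 298 K and 1 bar. The standard Gibbs energies are common tabulated values.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    R, T = 8.314, 298.15
    # Standard Gibbs energies of formation (J/mol): N2O4, NO2.
    g0 = np.array([97_900.0, 51_300.0])
    nN = np.array([2.0, 1.0])            # nitrogen atoms per species

    def gibbs(n):
        n = np.clip(n, 1e-12, None)      # keep logarithms finite
        return np.sum(n * (g0 + R * T * np.log(n / n.sum())))

    # Start from a mix holding 2 mol of N atoms; conserve them exactly.
    cons = {"type": "eq", "fun": lambda n: nN @ n - 2.0}
    res = minimize(gibbs, x0=[0.5, 1.0], bounds=[(1e-12, None)] * 2,
                   constraints=[cons], method="SLSQP")
    print("equilibrium moles (N2O4, NO2):", res.x)
    ```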

  12. 3D modeling method for computer animation based on a modified weak structured light method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, while precision is not as critical a factor. In this paper, a new, inexpensive 3D measurement system based on modified weak structured light is implemented, using only a video camera, a light source and a straight stick rotating on a fixed axis. An ordinary weak structured light configuration requires one or two reference planes, and the shadows on these planes must be tracked during scanning, which undermines the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of scannable objects is widened considerably. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints into a full description of the object, and after a series of operations a NURBS surface model is generated. A complex toy bear is used to verify the efficiency of the method; errors range from 0.7783 mm to 1.4326 mm compared with the ground-truth measurement.
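
    A compact sketch of the point-cloud merging step: a generic textbook ICP loop (nearest-neighbor correspondences plus an SVD-based rigid alignment), not the authors' two-stage implementation. The function name and demo data are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, dst, iters=30):
        """Rigidly align src (N,3) to dst (M,3); return the moved src."""
        src = src.copy()
        tree = cKDTree(dst)
        for _ in range(iters):
            _, idx = tree.query(src)          # closest-point correspondences
            d = dst[idx]
            mu_s, mu_d = src.mean(0), d.mean(0)
            H = (src - mu_s).T @ (d - mu_d)   # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            Rm = Vt.T @ U.T                   # Kabsch rotation
            if np.linalg.det(Rm) < 0:         # guard against reflections
                Vt[-1] *= -1
                Rm = Vt.T @ U.T
            src = (src - mu_s) @ Rm.T + mu_d  # apply the rigid transform
        return src

    # Tiny demo: recover a known rotation plus translation of a random cloud.
    rng = np.random.default_rng(0)
    dst = rng.normal(size=(200, 3))
    c, s = np.cos(0.3), np.sin(0.3)
    Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    src = dst @ Rz.T + 0.1
    print(np.abs(icp(src, dst) - dst).max())  # should be near zero
    ```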

  13. Multi-Level iterative methods in computational plasma physics

    SciTech Connect

    Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.

    1999-03-01

    Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically, the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods, and large time steps can challenge their robustness. To meet these challenges they are developing a hybrid approach where multigrid methods are used as preconditioners to Krylov subspace based iterative methods such as conjugate gradients or GMRES. For nonlinear problems they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for application of these multilevel iterative methods to the field solves in implicit moment method PIC, multidimensional nonlinear Fokker-Planck problems, and their initial efforts in particle MHD.
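
    A hedged sketch of the matrix-free Newton-GMRES idea: the Jacobian is never formed, and SciPy's newton_krylov approximates Jacobian-vector products by finite differences inside GMRES. The PDE here (a 1-D Bratu-type problem) is an illustrative stand-in for the plasma field solves, not the paper's application.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 200
    h = 1.0 / (n + 1)

    def residual(u):
        # F(u) = u'' + exp(u) (a 1-D Bratu problem), homogeneous Dirichlet BCs.
        upad = np.concatenate(([0.0], u, [0.0]))
        lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2
        return lap + np.exp(u)

    u = newton_krylov(residual, np.zeros(n), method="gmres")
    print("max |F(u)| =", np.abs(residual(u)).max())
    ```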

  14. A MEASURE-THEORETIC COMPUTATIONAL METHOD FOR INVERSE SENSITIVITY PROBLEMS I: METHOD AND ANALYSIS

    PubMed Central

    Breidt, J.; Butler, T.; Estep, D.

    2012-01-01

    We consider the inverse sensitivity analysis problem of quantifying the uncertainty of inputs to a deterministic map given specified uncertainty in a linear functional of the output of the map. This is a version of the model calibration or parameter estimation problem for a deterministic map. We assume that the uncertainty in the quantity of interest is represented by a random variable with a given distribution, and we use the law of total probability to express the inverse problem for the corresponding probability measure on the input space. Assuming that the map from the input space to the quantity of interest is smooth, we solve the generally ill-posed inverse problem by using the implicit function theorem to derive a method for approximating the set-valued inverse that provides an approximate quotient space representation of the input space. We then derive an efficient computational approach to compute a measure theoretic approximation of the probability measure on the input space imparted by the approximate set-valued inverse that solves the inverse problem. PMID:23637467

  15. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.

  16. On multigrid methods for the Navier-Stokes Computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Krist, S. E.; Zang, T. A.

    1988-01-01

    The overall architecture of the multipurpose parallel-processing Navier-Stokes Computer (NSC) being developed by Princeton and NASA Langley (Nosenchuck et al., 1986) is described and illustrated with extensive diagrams, and the NSC implementation of an elementary multigrid algorithm for simulating isotropic turbulence (based on solution of the incompressible time-dependent Navier-Stokes equations with constant viscosity) is characterized in detail. The present NSC design concept calls for 64 nodes, each with the performance of a class VI supercomputer, linked together by a fiber-optic hypercube network and joined to a front-end computer by a global bus. In this configuration, the NSC would have a storage capacity of over 32 Gword and a peak speed of over 40 Gflops. The multigrid Navier-Stokes code discussed would give sustained operation rates of about 25 Gflops.

  17. Computational Methods for the Analysis of Array Comparative Genomic Hybridization

    PubMed Central

    Chari, Raj; Lockwood, William W.; Lam, Wan L.

    2006-01-01

    Array comparative genomic hybridization (array CGH) is a technique for assaying the copy number status of cancer genomes. The widespread use of this technology has led to a rapid accumulation of high throughput data, which in turn has prompted the development of computational strategies for the analysis of array CGH data. Here we explain the principles behind array image processing, data visualization and genomic profile analysis, review currently available software packages, and raise considerations for future software development. PMID:17992253

  18. Analysis of multigrid methods on massively parallel computers: Architectural implications

    NASA Technical Reports Server (NTRS)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10(exp 6) and 10(exp 9), respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
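
    The paper's focus is the parallel communication cost of the V cycle; the serial, single-process sketch below shows only the underlying numerics, assuming a 1-D Poisson problem with weighted-Jacobi smoothing, injection restriction, and linear-interpolation prolongation.

    ```python
    import numpy as np

    def smooth(u, f, h, sweeps=3, w=2/3):
        for _ in range(sweeps):  # weighted Jacobi for -u'' = f
            u[1:-1] += w * 0.5 * (u[:-2] + u[2:] - 2*u[1:-1] + h*h*f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
        return r

    def v_cycle(u, f, h):
        n = len(u) - 1
        if n <= 2:                       # coarsest grid: solve directly
            u[1] = f[1] * h*h / 2
            return u
        u = smooth(u, f, h)              # pre-smoothing
        rc = residual(u, f, h)[::2].copy()   # restriction by injection
        ec = v_cycle(np.zeros_like(rc), rc, 2*h)
        e = np.zeros_like(u)             # prolongation by interpolation
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        u += e
        return smooth(u, f, h)           # post-smoothing

    n = 256
    h = 1.0 / n
    x = np.linspace(0, 1, n + 1)
    f = np.pi**2 * np.sin(np.pi * x)     # -u'' = f has solution sin(pi x)
    u = np.zeros(n + 1)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print("error:", np.abs(u - np.sin(np.pi * x)).max())
    ```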

  19. Computationally Efficient Method of Simulating Creation of Electropores

    NASA Astrophysics Data System (ADS)

    Neu, John; Krassowska, Wanda

    2006-03-01

    Electroporation, in which electric pulses create transient pores in the cell membrane, is an important technique for drug and DNA delivery. Electroporation kinetics is described by an advection-diffusion boundary value problem. This problem must be solved numerically with very small time and space steps, in order to resolve very fast processes occurring during pore creation. This study derives a reduced description of the pore creation transient. This description consists of a single integrodifferential equation for the transmembrane voltage V(t) and collateral formulas for computing the number of pores and the distribution of their radii from V(t). For pulse strengths corresponding to those used in drug and DNA delivery, relative differences in predictions of the reduced versus original problem are: voltage V(t), below 1%; number of pores, below 10%; pore radii, below 6%. Computational efficiency increases with the number of pores and thus with the pulse strength. For the strongest pulses, the run time of the reduced problem was below 1% of the original one. Such time savings can bridge the gap between problems that can be simulated on today's computers and problems that are of practical importance.

  20. Frequency response modeling and control of flexible structures: Computational methods

    NASA Technical Reports Server (NTRS)

    Bennett, William H.

    1989-01-01

    The dynamics of vibrations in flexible structures can be conveniently modeled in terms of frequency response models. For structural control, such models capture the distributed parameter dynamics of the elastic structural response as an irrational transfer function. For most flexible structures arising in aerospace applications, the irrational transfer functions which arise are of a special class of pseudo-meromorphic functions which have only a finite number of right half plane poles. Computational algorithms are demonstrated for design of multiloop control laws for such models based on optimal Wiener-Hopf control of the frequency responses. The algorithms employ a sampled-data representation of irrational transfer functions which is particularly attractive for numerical computation. One key algorithm for the solution of the optimal control problem is the spectral factorization of an irrational transfer function. The basis for the spectral factorization algorithm is highlighted together with associated computational issues arising in optimal regulator design. Options for implementation of wide band vibration control for flexible structures based on the sampled-data frequency response models are also highlighted. A simple flexible structure control example is considered to demonstrate the combined frequency response modeling and control algorithms.

  1. BindingDB in 2015: A public database for medicinal chemistry, computational chemistry and systems pharmacology

    PubMed Central

    Gilson, Michael K.; Liu, Tiqing; Baitaluk, Michael; Nicola, George; Hwang, Linda; Chong, Jenny

    2016-01-01

    BindingDB, www.bindingdb.org, is a publicly accessible database of experimental protein-small molecule interaction data. Its collection of over a million data entries derives primarily from scientific articles and, increasingly, US patents. BindingDB provides many ways to browse and search for data of interest, including an advanced search tool, which can cross searches of multiple query types, including text, chemical structure, protein sequence and numerical affinities. The PDB and PubMed provide links to data in BindingDB, and vice versa; and BindingDB provides links to pathway information, the ZINC catalog of available compounds, and other resources. The BindingDB website offers specialized tools that take advantage of its large data collection, including ones to generate hypotheses for the protein targets bound by a bioactive compound, and for the compounds bound by a new protein of known sequence; and virtual compound screening by maximal chemical similarity, binary kernel discrimination, and support vector machine methods. Specialized data sets are also available, such as binding data for hundreds of congeneric series of ligands, drawn from BindingDB and organized for use in validating drug design methods. BindingDB offers several forms of programmatic access, and comes with extensive background material and documentation. Here, we provide the first update of BindingDB since 2007, focusing on new and unique features and highlighting directions of importance to the field as a whole. PMID:26481362
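
    A hedged illustration of the "maximal chemical similarity" screening idea using RDKit as a local stand-in (BindingDB's own screening tools run server-side and are not reproduced here): rank library compounds by their best Tanimoto similarity to a query set. The SMILES strings and names are toy placeholders.

    ```python
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    def fingerprint(smiles):
        mol = Chem.MolFromSmiles(smiles)
        return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

    actives = [fingerprint(s) for s in ["CCO", "CCN"]]          # toy query set
    library = {"propanol": "CCCO", "benzene": "c1ccccc1"}

    # Maximal similarity: score each candidate by its best match to any query.
    scores = {
        name: max(DataStructs.TanimotoSimilarity(fingerprint(smi), a)
                  for a in actives)
        for name, smi in library.items()
    }
    for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {s:.2f}")
    ```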

  2. BindingDB in 2015: A public database for medicinal chemistry, computational chemistry and systems pharmacology.

    PubMed

    Gilson, Michael K; Liu, Tiqing; Baitaluk, Michael; Nicola, George; Hwang, Linda; Chong, Jenny

    2016-01-01

    BindingDB, www.bindingdb.org, is a publicly accessible database of experimental protein-small molecule interaction data. Its collection of over a million data entries derives primarily from scientific articles and, increasingly, US patents. BindingDB provides many ways to browse and search for data of interest, including an advanced search tool, which can cross searches of multiple query types, including text, chemical structure, protein sequence and numerical affinities. The PDB and PubMed provide links to data in BindingDB, and vice versa; and BindingDB provides links to pathway information, the ZINC catalog of available compounds, and other resources. The BindingDB website offers specialized tools that take advantage of its large data collection, including ones to generate hypotheses for the protein targets bound by a bioactive compound, and for the compounds bound by a new protein of known sequence; and virtual compound screening by maximal chemical similarity, binary kernel discrimination, and support vector machine methods. Specialized data sets are also available, such as binding data for hundreds of congeneric series of ligands, drawn from BindingDB and organized for use in validating drug design methods. BindingDB offers several forms of programmatic access, and comes with extensive background material and documentation. Here, we provide the first update of BindingDB since 2007, focusing on new and unique features and highlighting directions of importance to the field as a whole. PMID:26481362

  3. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  4. 76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ...NIST announces the Cloud Computing Forum & Workshop IV to be held on November 2, 3 and 4, 2011. This workshop will provide information on the U.S. Government (USG) Cloud Computing Technology Roadmap initiative. This workshop will also provide an updated status on NIST efforts to help develop open standards in interoperability, portability and security in cloud computing. This event is open to...

  5. Adaptive computational methods for SSME internal flow analysis

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1986-01-01

    Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods) in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.

  6. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    SciTech Connect

    Alex J. Dragt

    2012-08-31

    Since 1980, under grant DE-FG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  7. Introduction to Computational Methods for Stability and Control (COMSAC)

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Fremaux, C. Michael; Chambers, Joseph R.

    2004-01-01

    This Symposium is intended to bring together the often distinct cultures of the Stability and Control (S&C) community and the Computational Fluid Dynamics (CFD) community. The COMSAC program is itself a new effort by NASA Langley to accelerate the application of high-end CFD methodologies to the demanding job of predicting stability and control characteristics of aircraft. This talk is intended to motivate the need for a program like COMSAC, not to give details of the program itself. The topics include: 1) S&C challenges; 2) Aero prediction methodology; 3) CFD applications; 4) NASA COMSAC planning; 5) Objectives of the symposium; and 6) Closing remarks.

  8. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

    Computer-aided thermal analysis programs that predict whether orbiting equipment stationed in various attitudes with respect to the Sun and the Earth will remain within a predetermined acceptable temperature range are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and standard methods that form the basis for various digital computer methods, as well as various numerical methods, are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations and the time required for computations are evaluated. The situations where accuracy is important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.
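
    A small sketch contrasting the two kinds of approach the report weighs against each other: an exact closed-form view factor versus a Monte Carlo estimate, here for two coaxial parallel disks, a standard configuration with a known analytical result. Function names are illustrative.

    ```python
    import numpy as np

    def disks_analytical(r1, r2, h):
        # Closed-form view factor between coaxial parallel disks.
        R1, R2 = r1 / h, r2 / h
        S = 1 + (1 + R2**2) / R1**2
        return 0.5 * (S - np.sqrt(S**2 - 4 * (R2 / R1)**2))

    def disks_monte_carlo(r1, r2, h, n=200_000, seed=1):
        # Average the view-factor kernel cos(t1) cos(t2) / (pi s^2) over
        # random point pairs; for parallel disks cos(t) = h / s.
        rng = np.random.default_rng(seed)
        def sample(r):
            rad = r * np.sqrt(rng.random(n))     # uniform over disk area
            ang = 2 * np.pi * rng.random(n)
            return rad * np.cos(ang), rad * np.sin(ang)
        x1, y1 = sample(r1)
        x2, y2 = sample(r2)
        s2 = (x2 - x1)**2 + (y2 - y1)**2 + h**2  # squared separation
        kernel = h**2 / (np.pi * s2**2)
        return np.pi * r2**2 * kernel.mean()     # integrate over disk 2

    print(disks_analytical(1.0, 1.0, 1.0), disks_monte_carlo(1.0, 1.0, 1.0))
    ```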

  9. A Combined Method to Compute the Proximities of Asteroids

    NASA Astrophysics Data System (ADS)

    Šegan, S.; Milisavljević, S.; Marčeta, D.

    2011-09-01

    We describe a simple and efficient numerical-analytical method to find all of the proximities and critical points of the distance function in the case of two elliptical orbits with a common focus. Our method is based on the solutions of Simovljević's (1974) graphical method and on the transcendent equations developed by Lazović (1993). The method is tested on 2 997 576 pairs of asteroid orbits and compared with the algebraic and polynomial solutions of Gronchi (2005). The model with four proximities was obtained by Gronchi (2002) only by applying the method of random samples, i.e., after many simulations and trials with various values of elliptical elements. We found real pairs with four proximities.
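
    A hedged numerical cross-check of what a proximity computation does: sample the distance function between two coplanar confocal ellipses on a grid of eccentric anomalies, then polish the best candidate. This brute-force scheme is far slower than the paper's numerical-analytical method and finds only the global minimum; it merely illustrates the geometry. The orbital elements are arbitrary examples.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def position(a, e, omega, E):
        """2-D heliocentric position from eccentric anomaly E (focus at origin)."""
        x = a * (np.cos(E) - e)
        y = a * np.sqrt(1 - e**2) * np.sin(E)
        c, s = np.cos(omega), np.sin(omega)
        return np.array([c * x - s * y, s * x + c * y])

    def dist(EE, o1, o2):
        return np.linalg.norm(position(*o1, EE[0]) - position(*o2, EE[1]))

    orb1 = (1.0, 0.3, 0.0)          # (a, e, argument of perihelion)
    orb2 = (1.4, 0.5, 1.2)

    # Coarse grid to locate a candidate proximity, then local refinement.
    E = np.linspace(0, 2 * np.pi, 60)
    G1, G2 = np.meshgrid(E, E)
    d = np.array([[dist((u, v), orb1, orb2) for u, v in zip(r1, r2)]
                  for r1, r2 in zip(G1, G2)])
    i, j = np.unravel_index(d.argmin(), d.shape)
    best = minimize(dist, x0=[G1[i, j], G2[i, j]], args=(orb1, orb2))
    print("minimum distance:", best.fun)
    ```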

  10. A method of examining the structure and topological properties of public-transport networks

    NASA Astrophysics Data System (ADS)

    Dimitrov, Stavri Dimitri; Ceder, Avishai (Avi)

    2016-06-01

    This work presents a new method of examining the structure of public-transport networks (PTNs) and analyzes their topological properties through a combination of computer programming, statistical data and large-network analyses. In order to automate the extraction, processing and exporting of data, a software program was developed allowing the needed data to be extracted from General Transit Feed Specification feeds, thus overcoming difficulties in accessing and collecting data. The proposed method was applied to a real-life PTN in Auckland, New Zealand, with the purpose of examining whether it showed characteristics of scale-free networks and exhibited features of "small-world" networks. As a result, new regression equations were derived that analytically describe the observed strong non-linear relationship between a given number of routes and the probability that a randomly chosen stop in the PTN is serviced by that many routes. The established dependence is best fitted by an exponential rather than a power-law function, showing that the PTN examined is neither random nor scale-free, but a mixture of the two. This finding explains the presence of hubs that are not typical of exponential networks and simultaneously not highly connected to the other nodes as is the case with scale-free networks. On the other hand, the observed values of the topological properties of the network show that although it is highly clustered, owing to its representation as a directed graph, it differs slightly from "small-world" networks, which are characterized by strong clustering and a short average path length.
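
    A minimal sketch of the model comparison reported above: fit both an exponential and a power law to a routes-per-stop distribution and compare residuals. The data vector is synthetic; the paper's GTFS-derived counts are not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    k = np.arange(1, 16)                 # routes servicing a stop
    p = 0.6 * np.exp(-0.45 * k)          # synthetic "observed" fractions
    p *= 1 + 0.05 * np.random.default_rng(2).normal(size=k.size)

    expo = lambda k, a, b: a * np.exp(-b * k)
    plaw = lambda k, a, g: a * k**(-g)

    for name, f in [("exponential", expo), ("power law", plaw)]:
        coef, _ = curve_fit(f, k, p, p0=(1.0, 1.0), maxfev=10_000)
        sse = np.sum((p - f(k, *coef))**2)   # sum of squared residuals
        print(f"{name}: params={coef}, SSE={sse:.2e}")
    ```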

  11. Permeability computation on a REV with an immersed finite element method

    SciTech Connect

    Laure, P.; Puaux, G.; Silva, L.; Vincent, M.

    2011-05-04

    An efficient method to compute the permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale, and the flow motion is computed with a stabilized mixed finite element method. The Stokes equations are therefore solved on the whole domain (including the solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface, defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the literature.

  12. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    NASA Technical Reports Server (NTRS)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.
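
    A hedged sketch of the graph-isomorphism idea behind such indices, using the Weisfeiler-Lehman graph hash from NetworkX as a modern stand-in (the Ulam index's exact construction is not reproduced here): two molecular graphs that receive different hashes cannot be the same substance. The molecule encodings are toy examples with hydrogens omitted.

    ```python
    import networkx as nx

    def molecule_graph(bonds, elements):
        g = nx.Graph(bonds)
        nx.set_node_attributes(g, elements, "element")
        return g

    # Ethanol vs. dimethyl ether: same heavy atoms (C2O), different connectivity.
    ethanol = molecule_graph([(0, 1), (1, 2)], {0: "C", 1: "C", 2: "O"})
    ether = molecule_graph([(0, 2), (2, 1)], {0: "C", 1: "C", 2: "O"})

    h1 = nx.weisfeiler_lehman_graph_hash(ethanol, node_attr="element")
    h2 = nx.weisfeiler_lehman_graph_hash(ether, node_attr="element")
    print("distinguishable:", h1 != h2)
    ```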

  13. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
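
    A hedged sketch of the polynomial-chaos idea behind the SRSM: expand a model output in Hermite polynomials of a standard normal input, fit the coefficients by regression on a few model runs, then read output statistics off the surrogate. The model f is a toy stand-in for an environmental simulation; the EPA method itself involves more structure than shown here.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander

    f = lambda x: np.exp(0.3 * x)      # "expensive" model of a normal input
    rng = np.random.default_rng(3)

    xs = rng.normal(size=40)           # small sample of model runs
    V = hermevander(xs, deg=4)         # probabilists' Hermite basis He_0..He_4
    coef, *_ = np.linalg.lstsq(V, f(xs), rcond=None)

    # Under N(0,1), E[He_0] = 1 and E[He_k] = 0 for k > 0, so the surrogate's
    # mean is its leading coefficient; compare with brute-force Monte Carlo.
    print("PCE mean estimate:", coef[0])
    print("MC  mean estimate:", f(rng.normal(size=200_000)).mean())
    ```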

  14. Decluttering methods for high density computer-generated graphic displays

    NASA Technical Reports Server (NTRS)

    Schultz, E. E., Jr.; Nichols, D. A.; Curran, P. S.

    1985-01-01

    Several decluttering methods were compared with respect to the speed and accuracy of user performance which resulted. The presence of a map background was also manipulated. Partial removal of nonessential graphic features through symbol simplification was as effective a decluttering technique as was total removal of nonessential graphic features. The presence of a map background interacted with decluttering conditions when response time was the dependent measure. Results indicate that the effectiveness of decluttering methods depends upon the degree to which each method makes essential graphic information distinctive from nonessential information. Practical implications are discussed.

  15. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Method of computing annual volume of sales. 794.123 Section... STANDARDS ACT Exemption From Overtime Pay Requirements Under Section 7(b)(3) of the Act Annual Gross Volume of Sales § 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during...

  16. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Method of computing annual volume of sales. 794.123 Section... STANDARDS ACT Exemption From Overtime Pay Requirements Under Section 7(b)(3) of the Act Annual Gross Volume of Sales § 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during...

  17. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Method of computing annual volume of sales. 794.123 Section... STANDARDS ACT Exemption From Overtime Pay Requirements Under Section 7(b)(3) of the Act Annual Gross Volume of Sales § 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during...

  18. A finite element method for the computation of transonic flow past airfoils

    NASA Technical Reports Server (NTRS)

    Eberle, A.

    1980-01-01

    A finite element method for the computation of transonic flow with shocks past airfoils is presented, using the artificial viscosity concept for the local supersonic regime. Generally, the classic element types do not meet the accuracy requirements of advanced numerical aerodynamics, requiring special attention to the choice of an appropriate element. A series of computed pressure distributions demonstrates the usefulness of the method.

  19. 26 CFR 1.669(a)-3 - Tax computed by the exact throwback method.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 8 2011-04-01 2011-04-01 false Tax computed by the exact throwback method. 1... method. (a) Tax attributable to amounts treated as received in preceding taxable years. If a taxpayer elects to compute the tax, on amounts deemed distributed under section 666, by the exact throwback...

  20. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Method of computing annual volume of sales. 794.123 Section 794.123 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR... of Sales § 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during...

  1. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Method of computing annual volume of sales. 794.123 Section 794.123 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR... of Sales § 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during...

  2. Leading Computational Methods on Scalar and Vector HEC Platforms

    SciTech Connect

    Oliker, Leonid; Carter, Jonathan; Wehner, Michael; Canning, Andrew; Ethier, Stephane; Mirin, Arthur; Bala, Govindasamy; Parks, David; Worley, Patrick H; Kitawaki, Shigemune; Tsuda, Yoshinori

    2005-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems have the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: atmospheric modeling (CAM), magnetic fusion (GTC), plasma physics (LBMHD3D), and material science (PARATEC). We compare performance of the vector-based Cray X1, Earth Simulator (ES), and newly-released NEC SX-8 and Cray X1E, with performance of three leading commodity-based superscalar platforms utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our research team was the first international group to conduct a performance evaluation study at the Earth Simulator Center; remote ES access is not available. Our work builds on our previous efforts [16, 17] and makes several significant contributions: the first reported vector performance results for CAM simulations utilizing a finite-volume dynamical core on a high-resolution atmospheric grid; a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26 Tflop/s on 4800 ES processors; and the largest PARATEC cell size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.

  3. Methods of Conserving Heating Energy Utilized in Thirty-One Public School Systems.

    ERIC Educational Resources Information Center

    Davis, Kathy Eggers

    The Memphis City School System was notified by Memphis Light, Gas, and Water that it was necessary to reduce its consumption of natural gas during the winter of 1975-76. A survey was developed and sent to 44 large public school systems to determine which methods of heating energy conservation were used most frequently and which methods were most…

  4. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods

    PubMed Central

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community. PMID:27124610
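
    A small hedged illustration of clustering a citation network with a community detection method from NetworkX (Louvain modularity here; the map equation methods the paper favors live in the external infomap package). The toy edge list stands in for a real citation graph, and citation direction is ignored for clustering.

    ```python
    import networkx as nx

    edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"),   # one cluster
             ("p4", "p5"), ("p4", "p6"), ("p5", "p6"),   # another cluster
             ("p3", "p4")]                               # weak bridge
    g = nx.Graph(edges)

    communities = nx.community.louvain_communities(g, seed=42)
    for i, c in enumerate(communities):
        print(f"cluster {i}: {sorted(c)}")
    ```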

  6. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods.

    PubMed

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community. PMID:27124610

  7. Comparison of Two Numerical Methods for Computing Fractal Dimensions

    NASA Astrophysics Data System (ADS)

    Shiozawa, Yui; Miller, Bruce; Rouet, Jean-Louis

    2012-10-01

    From cosmology to economics, examples of fractals can be found virtually everywhere. However, since few fractals permit the analytical evaluation of generalized fractal dimensions or Rényi dimensions, the search for effective numerical methods is inevitable. In this project two promising numerical methods for obtaining generalized fractal dimensions, based on the distribution of distances within a set, are examined. They can be applied, in principle, to any set even if no closed-form expression is available. The biggest advantage of these methods is their ability to generate a spectrum of generalized dimensions almost simultaneously, a feature essential to the analysis of multifractals. As a test of their effectiveness, the methods were applied to the generalized Cantor set and the multiplicative binomial process. The generalized dimensions of both sets can be readily derived analytically, enabling the accuracy of the numerical methods to be verified. We present a comparison of the analytical results and the predictions of the methods, and show that, while the methods are effective, care must be taken in their interpretation.
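
    A hedged sketch of a distance-distribution dimension estimate in the same family as the methods examined (the classic Grassberger-Procaccia correlation dimension, not necessarily the authors' exact algorithms): count pairs closer than r and fit the slope of log C(r) versus log r. It is tested on the middle-thirds Cantor set, whose correlation dimension equals log 2 / log 3 (about 0.6309).

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist

    def cantor_points(level=12, n=4000, seed=4):
        # Random Cantor-set points via base-3 digits restricted to {0, 2}.
        rng = np.random.default_rng(seed)
        digits = 2 * rng.integers(0, 2, size=(n, level))
        return (digits / 3.0 ** np.arange(1, level + 1)).sum(axis=1)

    x = cantor_points()
    d = pdist(x[:, None])                       # all pairwise distances
    rs = np.logspace(-3, -0.5, 12)
    C = np.array([(d < r).mean() for r in rs])  # correlation sum C(r)
    slope = np.polyfit(np.log(rs), np.log(C), 1)[0]
    print("estimated correlation dimension:", slope)  # ~ log 2 / log 3
    ```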

  8. Structural and mechanistic investigations of photosystem II through computational methods.

    PubMed

    Ho, Felix M

    2012-01-01

    The advent of oxygenic photosynthesis through water oxidation by photosystem II (PSII) transformed the planet, ultimately allowing the evolution of aerobic respiration and an explosion of ecological diversity. The importance of this enzyme to life on Earth has ironically been paralleled by the elusiveness of a detailed understanding of its precise catalytic mechanism. Computational investigations have in recent years provided more and more insights into the structural and mechanistic details that underlie the workings of PSII. This review will present an overview of some of these studies, focusing on those that have aimed at elucidating the mechanism of water oxidation at the CaMn₄ cluster in PSII, and those exploring the features of the structure and dynamics of this enzyme that enable it to catalyse this energetically demanding reaction. This article is part of a Special Issue entitled: Photosystem II. PMID:21565158

  9. Pencil method in elastodynamics: application to ultrasonic field computation

    PubMed

    Gengembre; Lhemery

    2000-03-01

    The principles of pencil elastodynamics and, in more detail, some selected applications of pencil techniques to elastodynamics are described. It is shown how a systematic use of a matrix representation for the wave front curvature and for its transformations simplifies the handling of arbitrary pencils and, consequently, the field computations. Pencil matrix representations for the propagation into homogeneous solids made of isotropic or anisotropic media are derived. The use of matrix representations for pencil reflections on, or refractions through, arbitrarily curved interfaces, together with matrix representations for propagation into homogeneous media, allow us to derive an overall matrix formulation for elastodynamic propagation into complex heterogeneous structures. Combined with the classical Rayleigh integral to account for transducer diffraction effects, the proposed theory is applied to the prediction of ultrasonic fields radiated into complex structures by arbitrary transducers. Examples of interest for application to ultrasonic non-destructive testing are given. PMID:10829712

  10. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  11. Computer capillaroscopy as a new cardiological diagnostics method

    NASA Astrophysics Data System (ADS)

    Gurfinkel, Youri I.; Korol, Oleg A.; Kufal, George E.

    1998-04-01

    The blood flow in capillary vessels plays an important role in sustaining the vital activity of the human organism. The computerized capillaroscope is used for the investigation of nailfold (eponychium) capillary blood flow. An important advantage of the instrument is the possibility of performing non-invasive investigations, i.e., without damage to skin or vessels and causing no pain or unpleasant sensations. The high-grade equipment and software allow direct observation of capillary blood flow dynamics on a computer screen at 700-1300 times magnification. For the first time in clinical practice, it has become possible to precisely measure the speed of capillary blood flow, as well as the frequency of aggregate formation (blood particles clumped together into clots). In addition, provision is made for automatic measurement of capillary size and wall thickness and automatic recording of blood aggregate images for further visual study, documentation, and electronic database management.

  12. Computer-based methods for thermodynamic analysis of materials processing

    NASA Astrophysics Data System (ADS)

    Kaufman, L.

    1983-11-01

    The data base previously developed for multicomponent Sialon ceramic phase diagrams has been expanded to cover Ce2O3, BeO and Y2O3 additions. Isothermal sections in the MgO-Si3N4-SiO2, Y2O3-SiO2-Si3N4 and Ce2O3-SiO2-Si3N4 systems near 2000 K were computed and compared with limited experimental data. The trajectory of ordering temperatures for A2/B2 and B2/D03 reactions has been computed along the Fe3Si-Fe3Al composition path in the bcc phase of the Fe-Al-Si system and compared with experiment. The two-phase (fcc and bcc) fields for ordered phases in the iron-aluminum-nickel, iron-aluminum-manganese, and iron-nickel-manganese systems were computed between 700 C and 1200 C. Construction of a data base for fluoride systems containing ZrF4, which are employed to synthesize fluoride glasses, has been initiated and used to calculate the compositions of maximum liquid stability in the ZrF4-LaF3-BaF2 and ZrF4-BaF2-NaF systems, where glass formation has been observed. The calculations have been extended to consider the effects of AlF3 additions on the glass compositions, with good results. An analysis of the titanium-carbon-nitrogen system coupling thermochemical and phase diagram data was performed to calculate the ternary phase diagram and thermochemical properties over a range of temperatures.

  13. 77 FR 22326 - Privacy Act of 1974, as Amended by Public Law 100-503; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-13

    ... by Public Law 100-503, the Computer Matching and Privacy Protection Act of 1988, ACF is publishing a... 1974, as amended by Public Law 100-503, the Computer Matching and Privacy Protection Act of 1988, (5 U... 74 FR 29275, (June 19, 2009), last amended at 75 FR 22187, (April 27, 2010). VA's disclosure...

  14. Managing Public-Access Computers: A How-To-Do-It Manual for Librarians. How-To-Do-It Manuals for Librarians, Number 96.

    ERIC Educational Resources Information Center

    Barclay, Donald A.

    This book, while necessarily concerning itself with computer technology, approaches technology as a tool for providing public service and helps librarians and others effectively manage public-access computers. The book is organized to progress from more technological to more managerial topics. The first chapter--which answers the question, "What…

  15. Computational methods for constructing protein structure models from 3D electron microscopy maps

    PubMed Central

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-01-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3 Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. PMID:23796504

  16. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  17. Large-Scale Automated Analysis of News Media: A Novel Computational Method for Obesity Policy Research

    PubMed Central

    Hamad, Rita; Pomeranz, Jennifer L.; Siddiqi, Arjumand; Basu, Sanjay

    2015-01-01

    Objective Analyzing news media allows obesity policy researchers to understand popular conceptions about obesity, which is important for targeting health education and policies. A persistent dilemma is that investigators have to read and manually classify thousands of individual news articles to identify how obesity and obesity-related policy proposals may be described to the public in the media. We demonstrate a novel method called “automated content analysis” that permits researchers to train computers to “read” and classify massive volumes of documents. Methods We identified 14,302 newspaper articles that mentioned the word “obesity” during 2011–2012. We examined four states that vary in obesity prevalence and policy (Alabama, California, New Jersey, and North Carolina). We tested the reliability of an automated program to categorize the media’s “framing” of obesity as an individual-level problem (e.g., diet) and/or an environmental-level problem (e.g., obesogenic environment). Results The automated program performed similarly to human coders. The proportion of articles with individual-level framing (27.7–31.0%) was higher than the proportion with neutral (18.0–22.1%) or environmental-level framing (16.0–16.4%) across all states and over the entire study period (p<0.05). Conclusion We demonstrate a novel approach to the study of how obesity concepts are communicated and propagated in news media. PMID:25522013
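
    A minimal hedged sketch of automated content analysis as supervised text classification: train on a hand-coded subset, then classify the remaining articles by frame. The tiny corpus and labels are invented placeholders for the 14,302 hand-coded and machine-classified news articles; the authors' actual classifier is not reproduced here.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "obesity blamed on poor personal diet and lack of willpower",
        "individual choices and exercise habits drive weight gain",
        "fast food deserts and advertising create an obesogenic environment",
        "city planning and food policy shape obesity rates",
    ]
    train_labels = ["individual", "individual", "environmental", "environmental"]

    # Bag-of-words features plus a linear classifier: a common baseline for
    # document "framing" classification.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(train_texts, train_labels)

    new_articles = ["new zoning rules target fast food near schools"]
    print(clf.predict(new_articles))   # expected: ['environmental']
    ```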

  18. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated on a set of eighth-order random examples. A literature review of robust controller design methods follows, including a number of methods for reducing trajectory and performance-index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
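
    One classical form of Riccati iteration is Kleinman's Newton scheme, in which the algebraic Riccati equation is solved through a sequence of Lyapunov solves. The sketch below shows that textbook scheme for reference; it is a generic illustration under the assumption of a stabilizing initial gain, not the report's own code.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      def riccati_iteration(A, B, Q, R, K0, n_iter=20):
          """K0 must stabilize A - B @ K0; returns the stabilizing ARE solution P."""
          K = K0
          for _ in range(n_iter):
              Acl = A - B @ K
              # Solve Acl^T P + P Acl = -(Q + K^T R K) for the current gain K.
              P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
              K = np.linalg.solve(R, B.T @ P)  # Newton update of the feedback gain
          return P

      # Scalar check: xdot = x + u with Q = R = 1 has exact P = 1 + sqrt(2).
      A = np.array([[1.0]]); B = np.array([[1.0]])
      Q = np.eye(1); R = np.eye(1); K0 = np.array([[2.0]])  # stabilizing initial gain
      print(riccati_iteration(A, B, Q, R, K0))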

  19. An efficient method for computing unsteady transonic aerodynamics of swept wings with control surfaces

    NASA Technical Reports Server (NTRS)

    Liu, D. D.; Kao, Y. F.; Fung, K. Y.

    1989-01-01

    A transonic equivalent strip (TES) method was further developed for unsteady flow computations over arbitrary wing planforms. The TES method consists of two consecutive correction steps applied to a given nonlinear code such as LTRAN2: a chordwise mean-flow correction and a spanwise phase correction. The procedure requires direct pressure input from other computed or measured data; otherwise, it requires neither the airfoil shape nor grid generation for a given planform. To validate the computed results, four swept wings of various aspect ratios, including wings with control surfaces, were selected as computational examples. Overall trends in unsteady pressures are compared with those obtained by the XTRAN3S code, Isogai's full-potential code, and data measured by NLR and RAE. In comparison with these methods, TES achieves considerable savings in computer time with reasonable accuracy, which suggests immediate industrial applications.

  20. Computational methods for estimation of parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.; Murphy, K. A.

    1983-01-01

    Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second-order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined, and numerical findings for use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.
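
    The output-least-squares idea behind such schemes is to choose the parameter whose simulated solution best matches the observations. The toy sketch below estimates a single constant wave speed in a 1D wave equation, with finite differences standing in for the spline/tau state approximations; the grid, sensor location, and synthetic data are all illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def simulate(c, nx=101, nt=400, dx=0.01, dt=0.002):
          """Explicit FD solution of u_tt = c^2 u_xx, fixed ends; sensor trace at midpoint."""
          x = np.arange(nx) * dx
          u_prev = np.exp(-((x - 0.3) / 0.05) ** 2)  # initial pulse, zero initial velocity
          u = u_prev.copy()
          r2 = (c * dt / dx) ** 2                    # squared CFL number (stable for c <= 5)
          trace = []
          for _ in range(nt):
              u_next = np.zeros(nx)
              u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
              u_prev, u = u, u_next
              trace.append(u[nx // 2])
          return np.array(trace)

      observed = simulate(2.0)  # synthetic "seismic" data with true wave speed 2
      fit = minimize_scalar(lambda c: np.sum((simulate(c) - observed) ** 2),
                            bounds=(0.5, 4.0), method="bounded")
      print(fit.x)              # recovered wave speed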

  1. Combination of Thin Lenses--A Computer Oriented Method.

    ERIC Educational Resources Information Center

    Flerackers, E. L. M.; And Others

    1984-01-01

    Suggests a method for treating geometric optics that uses a microcomputer to do the calculations of image formation. The calculations are based on the connection between the composition of lenses and the mathematics of fractional linear equations. The logic of the analysis and an example problem are included. (JM)
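
    The connection can be made concrete: the thin-lens equation maps object distance to image distance by s_i = f·s_o/(s_o - f), a fractional linear (Moebius) map, so a train of lenses composes by 2x2 matrix multiplication. The sketch below assumes the real-is-positive sign convention and illustrative focal lengths.

      import numpy as np

      def lens(f):
          """Matrix of the Moebius map s_o -> f*s_o/(s_o - f) (thin-lens equation)."""
          return np.array([[f, 0.0], [1.0, -f]])

      def gap(d):
          """Image of one lens becomes the object of the next: s -> d - s."""
          return np.array([[-1.0, d], [0.0, 1.0]])

      def apply_moebius(M, x):
          a, b, c, d = M.ravel()
          return (a * x + b) / (c * x + d)

      # Two thin lenses (f = 10 and f = 20) separated by 5 units, object 30 in front.
      system = lens(20.0) @ gap(5.0) @ lens(10.0)  # rightmost factor acts first
      print(apply_moebius(system, 30.0))           # final image distance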

  2. The Voronoi Implicit Interface Method for computing multiphase physics

    PubMed Central

    Saye, Robert I.; Sethian, James A.

    2011-01-01

    We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces. PMID:22106269

  3. Method and Apparatus for Computed Imaging Backscatter Radiography

    NASA Technical Reports Server (NTRS)

    Shedlock, Daniel (Inventor); Meng, Christopher (Inventor); Sabri, Nissia (Inventor); Dugan, Edward T. (Inventor); Jacobs, Alan M. (Inventor)

    2013-01-01

    Systems and methods of x-ray backscatter radiography are provided. A single-sided, non-destructive imaging technique utilizing x-ray radiation to image subsurface features is disclosed, capable of scanning a region using a fan beam aperture and gathering data using rotational motion.

  4. Using Computers in Relation to Learning Climate in CLIL Method

    ERIC Educational Resources Information Center

    Binterová, Helena; Komínková, Olga

    2013-01-01

    The main purpose of the work is to present a successful implementation of the CLIL method in mathematics lessons in elementary schools. Nowadays, at all types of schools (elementary schools, high schools, and universities) all over the world, there is a growing tendency for school subjects to be taught in a foreign language. In 2003, a document called Action plan for…

  5. Limitations of the current methods used to compute meteors orbits

    NASA Astrophysics Data System (ADS)

    Egal, A.; Gural, P.; Vaubaillon, J.; Colas, F.; Thuillot, W.

    2015-10-01

    The Cameras for BEtter Resolution NETwork (CABERNET) project aims to provide the most accurate meteoroid orbits achievable from digital recordings of night-sky imagery. The level of performance obtained is governed by the technical attributes of the collection systems and by accurate, robust data processing. The technical challenges have been met by employing three cameras, each with a field of view of 40°x26° and a spatial (angular) resolution of 0.01°/pixel. The single-image snapshots of meteors achieve temporal discrimination along the track through an electronic shutter coupled to the cameras, operating at a sample rate between 100 Hz and 200 Hz. The numerical processing of meteor trajectories has already been explored by many authors, including the intersecting-planes method developed by Ceplecha (1987), the least-squares method of Borovicka (1990), and the multi-fit parameterization method published by Gural (2012). After a comparison of these three techniques, we chose to implement Gural's method, employing several non-linear minimization techniques and matching the model as closely as possible to the basic measured data, i.e., the meteor space-time positions in the sequence of images. This approach results in a more precise and reliable determination of both the meteor trajectory and its velocity through the atmosphere.
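
    For orientation, the intersecting-planes method reduces to elementary vector algebra: each station's sight lines to the meteor span a plane through the station, and the trajectory is the intersection line of two such planes. The sketch below assumes unit sight-line direction vectors in a common Cartesian frame and is only a schematic of Ceplecha's construction.

      import numpy as np

      def station_plane_normal(sight_dirs):
          """Least-squares normal of the plane spanned by the sight-line directions."""
          # The normal is the right singular vector with the smallest singular value.
          _, _, vt = np.linalg.svd(np.asarray(sight_dirs))
          return vt[-1]

      def trajectory_line(p1, n1, p2, n2):
          """Direction and one point of the intersection of planes (p1,n1), (p2,n2)."""
          d = np.cross(n1, n2)                  # trajectory direction vector
          A = np.vstack([n1, n2, d])
          b = np.array([n1 @ p1, n2 @ p2, 0.0])
          point = np.linalg.solve(A, b)         # a point lying on both planes
          return d / np.linalg.norm(d), point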

  6. Methods for design and evaluation of integrated hardware-software systems for concurrent computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) a sparse-matrix iterative solver in PISCES Fortran; (5) an image processing application of PISCES; and (6) a formal model of concurrent computation under development.

  7. Recursive method for computing matrix elements for two-body interactions

    NASA Astrophysics Data System (ADS)

    Hyvärinen, Juhani; Suhonen, Jouni

    2015-05-01

    A recursive method for the efficient computation of two-body matrix elements is presented. The method consists of a set of recursion relations for the computationally demanding radial integral and adds one more tool to the set of computational methods introduced by Horie and Sasaki [H. Horie and K. Sasaki, Prog. Theor. Phys. 25, 475 (1961), 10.1143/PTP.25.475]. The neutrinoless double-β decay will serve as the primary application and example, but the method is general and can be applied equally well to other kinds of nuclear structure calculations involving matrix elements of two-body interactions.

  8. Fast calculation method of computer-generated cylindrical hologram using wave-front recording surface.

    PubMed

    Zhao, Yu; Piao, Mei-lan; Li, Gang; Kim, Nam

    2015-07-01

    A fast calculation method for a computer-generated cylindrical hologram (CGCH) is proposed. The method consists of two steps: the first is the calculation of a virtual wave-front recording surface (WRS) located between the 3D object and the CGCH. In the second step, to obtain the CGCH, we execute a diffraction calculation based on the fast Fourier transform (FFT) from the WRS to the CGCH, which are in the same concentric arrangement. The computational complexity is dramatically reduced in comparison with the direct integration method. The simulation results confirm that the proposed method improves the computational speed of CGCH generation. PMID:26125356
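
    The gain over direct integration comes from replacing a point-by-point diffraction sum with an FFT-based propagation between surfaces. The sketch below shows the planar angular-spectrum version of such an FFT diffraction step for illustration only; the paper's propagation is between concentric cylindrical surfaces.

      import numpy as np

      def angular_spectrum_propagate(field, wavelength, pitch, z):
          """Propagate a sampled complex field a distance z between parallel planes."""
          ny, nx = field.shape
          fx = np.fft.fftfreq(nx, d=pitch)
          fy = np.fft.fftfreq(ny, d=pitch)
          FX, FY = np.meshgrid(fx, fy)
          arg = 1.0 / wavelength**2 - FX**2 - FY**2
          kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent waves dropped
          H = np.exp(1j * kz * z)                          # free-space transfer function
          return np.fft.ifft2(np.fft.fft2(field) * H)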

  9. Computational Biology Methods for Characterization of Pluripotent Cells.

    PubMed

    Araúzo-Bravo, Marcos J

    2016-01-01

    Pluripotent cells are a powerful tool for regenerative medicine and drug discovery. Several techniques have been developed to induce pluripotency or to extract pluripotent cells from different tissues and biological fluids. However, the characterization of pluripotency requires tedious, expensive, time-consuming, and not always reliable wet-lab experiments; thus, an easy, standard quality-control protocol for pluripotency assessment remains to be established. High-throughput techniques can help here; in particular, gene expression microarrays have become a complementary technique for cellular characterization. Research has shown that comparing transcriptomics data against an Embryonic Stem Cell (ESC) of reference is a good approach to assessing pluripotency. Under the premise that the best protocol is computer software source code, here I propose and explain, line by line, a software protocol coded in R/Bioconductor for pluripotency assessment based on the comparison of transcriptomics data of pluripotent cells with an ESC of reference. I provide advice on experimental design, warn about possible pitfalls, and offer guidance for interpreting results. PMID:26141313
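
    The chapter's protocol itself is written in R/Bioconductor; purely to illustrate the underlying comparison, the Python sketch below scores samples by correlating each transcriptome against an ESC reference profile. The file name and column label are hypothetical.

      import pandas as pd

      expr = pd.read_csv("expression_log2.csv", index_col=0)  # genes x samples (hypothetical file)
      reference = expr["ESC_reference"]                       # hypothetical reference column

      # Spearman correlation of every sample against the ESC reference profile.
      scores = expr.corrwith(reference, method="spearman").sort_values(ascending=False)
      print(scores)  # samples with scores near 1.0 are transcriptionally ESC-like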

  10. An improved computer vision method for white blood cells detection.

    PubMed

    Cuevas, Erik; Díaz, Margarita; Manzanares, Miguel; Zaldívar, Daniel; Pérez-Cisneros, Marco

    2013-01-01

    The automatic detection of white blood cells (WBCs) remains an unsolved issue in medical imaging. The analysis of WBC images has engaged researchers from the fields of medicine and computer vision alike. Since a WBC can be approximated by an ellipsoidal form, an ellipse detector algorithm may be successfully applied to recognize such elements. This paper presents an algorithm for the automatic detection of WBCs embedded in complicated and cluttered smear images that treats the complete process as a multi-ellipse detection problem. The approach, based on the differential evolution (DE) algorithm, transforms the detection task into an optimization problem whose individuals represent candidate ellipses. An objective function evaluates whether such candidate ellipses are actually present in the edge map of the smear image. Guided by the values of this function, the set of encoded candidate ellipses (individuals) is evolved using the DE algorithm so that they fit the WBCs enclosed within the edge map of the smear image. Experimental results from white blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique in terms of its accuracy and robustness. PMID:23762178
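
    A minimal version of this search can be written directly with an off-the-shelf differential evolution optimizer: each candidate is an ellipse (center, axes, angle), scored by how much of its perimeter lands on edge pixels. The sketch below uses SciPy's DE in place of the paper's own encoding of candidates from edge-point indices; bounds and sampling are illustrative.

      import numpy as np
      from scipy.optimize import differential_evolution

      def ellipse_miss(params, edge_map, n_samples=180):
          """Fraction of sampled perimeter points not on an edge pixel (to minimize)."""
          cx, cy, a, b, theta = params
          t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
          x = cx + a * np.cos(t) * np.cos(theta) - b * np.sin(t) * np.sin(theta)
          y = cy + a * np.cos(t) * np.sin(theta) + b * np.sin(t) * np.cos(theta)
          xi = np.clip(np.round(x).astype(int), 0, edge_map.shape[1] - 1)
          yi = np.clip(np.round(y).astype(int), 0, edge_map.shape[0] - 1)
          return 1.0 - edge_map[yi, xi].mean()

      def detect_ellipse(edge_map):
          h, w = edge_map.shape
          bounds = [(0, w), (0, h), (5, w / 2), (5, h / 2), (0, np.pi)]
          result = differential_evolution(ellipse_miss, bounds, args=(edge_map,))
          return result.x  # (cx, cy, a, b, theta) of the best-supported ellipse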

  11. Pragmatic approaches to using computational methods to predict xenobiotic metabolism.

    PubMed

    Piechota, Przemyslaw; Cronin, Mark T D; Hewitt, Mark; Madden, Judith C

    2013-06-24

    In this study, the performance of a selection of computational models for the prediction of metabolites and/or sites of metabolism was investigated. These included models incorporated in the MetaPrint2D-React, Meteor, and SMARTCyp software. The algorithms were assessed using two data sets: a homogeneous data set of 28 Non-Steroidal Anti-Inflammatory Drugs (NSAIDs) and paracetamol (DS1), and a diverse data set of 30 top-selling drugs (DS2). The prediction of metabolites for the diverse data set (DS2) was better than for the more homogeneous DS1 for each model, indicating that some areas of chemical space may be better represented than others in the data used to develop and train the models. The study also identified compounds for which none of the packages could predict metabolites, again indicating areas of chemical space where more information is needed. Pragmatic approaches to using metabolism prediction software are also proposed based on the results described here. These include using cutoff values instead of restrictive reasoning settings in Meteor, which reduces the output with little loss of sensitivity, and directing metabolite prediction by preselection based on likely sites of metabolism. PMID:23718189

  12. An Improved Computer Vision Method for White Blood Cells Detection

    PubMed Central

    Cuevas, Erik; Díaz, Margarita; Manzanares, Miguel; Zaldívar, Daniel; Pérez-Cisneros, Marco

    2013-01-01

    The automatic detection of white blood cells (WBCs) remains an unsolved issue in medical imaging. The analysis of WBC images has engaged researchers from the fields of medicine and computer vision alike. Since a WBC can be approximated by an ellipsoidal form, an ellipse detector algorithm may be successfully applied to recognize such elements. This paper presents an algorithm for the automatic detection of WBCs embedded in complicated and cluttered smear images that treats the complete process as a multi-ellipse detection problem. The approach, based on the differential evolution (DE) algorithm, transforms the detection task into an optimization problem whose individuals represent candidate ellipses. An objective function evaluates whether such candidate ellipses are actually present in the edge map of the smear image. Guided by the values of this function, the set of encoded candidate ellipses (individuals) is evolved using the DE algorithm so that they fit the WBCs enclosed within the edge map of the smear image. Experimental results from white blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique in terms of its accuracy and robustness. PMID:23762178

  13. Computational methods to compute wavefront error due to aero-optic effects

    NASA Astrophysics Data System (ADS)

    Genberg, Victor; Michels, Gregory; Doyle, Keith; Bury, Mark; Sebastian, Thomas

    2013-09-01

    Aero-optic effects can be deleterious to high-performance airborne optical sensors that must view through turbulent flow fields created by the aerodynamic effects of windows and domes. Evaluating aero-optic effects early in a program, during the design stages, allows mitigation strategies and optical system design trades to be performed to optimize system performance. This necessitates a computationally efficient means of evaluating the impact of aero-optic effects so that the resulting dynamic pointing errors and wavefront distortions due to the spatially and temporally varying flow field can be minimized or corrected. To this end, an aero-optic analysis capability was developed within the commercial software SigFit that couples CFD results with optical design tools. SigFit reads the CFD-generated density profile using the CGNS file format. OPD maps are then created by converting the three-dimensional density field into an index-of-refraction field and integrating along specified paths to compute OPD errors across the optical field. The OPD maps may be evaluated directly against system requirements or imported into commercial optical design software, including Zemax® and Code V®, for a more detailed assessment of the impact on optical performance from which design trades may be performed.
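
    The density-to-OPD conversion described here typically rests on the Gladstone-Dale relation n = 1 + K·ρ. The sketch below integrates the index perturbation along straight z-paths through a uniform CFD grid; the straight-ray assumption and array layout are illustrative simplifications, not SigFit's implementation.

      import numpy as np

      K_GD = 2.27e-4  # Gladstone-Dale constant for air at visible wavelengths, m^3/kg

      def opd_map(density, dz, rho_ref=0.0):
          """density: (nz, ny, nx) CFD field; returns OPD over the (ny, nx) aperture."""
          n_minus_1 = K_GD * (density - rho_ref)  # index-of-refraction perturbation
          return n_minus_1.sum(axis=0) * dz       # integrate along the z rays

      def wavefront_error(opd):
          """Wavefront error is usually quoted relative to the mean (piston removed)."""
          return opd - opd.mean()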

  14. Using an Interactive Computer System to Teach Descriptive Numerical Analysis and Geographic Research Methods to Undergraduate Students.

    ERIC Educational Resources Information Center

    Rivizzigno, Victoria L.

    A method is proposed for using computer systems to introduce students in college-level geography courses to quantitative methods. Two computer systems are discussed--Interactive Computer Systems (computer packages which enhance student learning by providing instantaneous feedback) and Computer Enhancement of Instruction, CEI (standard…

  15. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air up to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+, are included. The method involves algebraically combining seven nonlinear equilibrium equations and four linear elemental mass-balance and charge-neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than those of the often-used free-energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results is included.
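
    The structure of such systems is the general point: each reaction contributes a nonlinear equilibrium relation and each conservation law a linear equation. The toy sketch below solves only a single dissociation equilibrium, O2 <-> 2O, at fixed pressure; the equilibrium constant is a made-up illustrative value, not the paper's curve fits.

      import numpy as np
      from scipy.optimize import fsolve

      def residuals(x, Kp, p):
          p_O2, p_O = x
          return [p_O**2 - Kp * p_O2,  # equilibrium relation: Kp = p_O^2 / p_O2
                  p_O2 + p_O - p]      # linear balance: partial pressures sum to p

      Kp, p = 0.5, 1.0                 # illustrative equilibrium constant and pressure
      p_O2, p_O = fsolve(residuals, x0=[0.5, 0.5], args=(Kp, p))
      print(p_O2, p_O)                 # equilibrium partial pressures (both 0.5 here)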

  16. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer-assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most current CAL facilities are not friendly to visually impaired users. People with visual impairment also do not normally have access to…

  17. Creating a Public Domain Software Library To Increase Computer Access of Elementary Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    McInturff, Johanna R.

    Information is provided on a practicum that addressed the lack of access to computer-aided instruction by elementary level students with learning disabilities, due to lack of diverse software, limited funding, and insufficient teacher training. The strategies to improve the amount of access time included: increasing the number of computer programs…

  18. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley, and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from the linear discriminant analysis were used as input to a four-layer back-propagation network to classify the different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The experimental classification results show that the proposed method is able to classify the grain kernel varieties efficiently.
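
    The described pipeline maps naturally onto standard tools: project the 17 features with linear discriminant analysis, then classify with a small multilayer network. The sketch below is a generic stand-in, with hypothetical data files and scikit-learn's MLP in place of the paper's four-layer back-propagation network.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline

      X = np.load("kernel_features.npy")  # (2800, 17) color+shape features, hypothetical
      y = np.load("kernel_labels.npy")    # 7 grain classes, hypothetical

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)
      model = make_pipeline(LinearDiscriminantAnalysis(n_components=6),
                            MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=2000))
      model.fit(X_tr, y_tr)
      print(model.score(X_te, y_te))      # held-out classification accuracy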

  19. New Computational Methods for the Prediction and Analysis of Helicopter Noise

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper describes several new methods to predict and analyze rotorcraft noise. These methods are: 1) a combined computational fluid dynamics and Kirchhoff scheme for far-field noise predictions, 2) parallel computer implementation of the Kirchhoff integrations, 3) audio and visual rendering of the computed acoustic predictions over large far-field regions, and 4) acoustic tracebacks to the Kirchhoff surface to pinpoint the sources of the rotor noise. The paper describes each method and presents sample results for three test cases. The first case consists of in-plane high-speed impulsive noise and the other two cases show idealized parallel and oblique blade-vortex interactions. The computed results show good agreement with available experimental data but convey much more information about the far-field noise propagation. When taken together, these new analysis methods exploit the power of new computer technologies and offer the potential to significantly improve our prediction and understanding of rotorcraft noise.

  20. Development of supersonic computational aerodynamic program using panel method

    NASA Technical Reports Server (NTRS)

    Maruyama, Y.; Akishita, S.; Nakamura, A.

    1987-01-01

    An aerodynamic program for steady supersonic linearized potential flow using a higher-order panel method was developed. The boundary surface is divided into planar triangular panels, on each of which a linearly varying doublet and a constant or linearly varying source are distributed. The source and doublet distributions over the panel assembly are determined by their strengths at nodal points, which are placed at the vertices of the panels for linear distributions or on each panel for constant distributions.
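
    A linearly varying strength defined by nodal values is just barycentric interpolation over each triangle, as the sketch below illustrates in local panel-plane coordinates; the names are generic, and the actual influence-coefficient integrals of the higher-order panel method are not shown.

      import numpy as np

      def barycentric(p, v0, v1, v2):
          """Barycentric coordinates of point p in triangle (v0, v1, v2), 2D local frame."""
          T = np.column_stack([v1 - v0, v2 - v0])
          l1, l2 = np.linalg.solve(T, p - v0)
          return np.array([1.0 - l1 - l2, l1, l2])

      def doublet_strength(p, vertices, nodal_mu):
          """Linearly varying doublet strength at p from the three nodal strengths."""
          w = barycentric(p, *vertices)
          return w @ nodal_mu

      tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
      print(doublet_strength(np.array([0.25, 0.25]), tri, np.array([1.0, 2.0, 3.0])))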