Science.gov

Sample records for methods publications computer

  1. Teaching public health: an innovative method using computer-based project work.

    PubMed

    Bojan, F; Belicza, E; Horvath, F; McKee, M

    1995-01-01

    Restructuring of training in public health in the Hungarian medical schools is being undertaken in the context of a major European Union TEMPUS Joint European Project. Under the aegis of this project a common core curriculum of public health has been developed. As part of the implementation of the curriculum, new approaches to learning are being explored that should enable students to appreciate the nature and magnitude of the major challenges to public health in Hungary and promote the development of their analytic, interpretative and presentational skills. One of the approaches is based on the individual preparation of reports on important public health issues, making use of secondary data from electronic databases (WHO HFA/PC and OECD Health Data) and traditional printed sources (annuals). This method, called 'computer-based project work', was introduced in Debrecen in 1992-1993, with the secondary objective of developing basic computing skills. The initial experiences of introducing computer-based project work to the curriculum have been positive. This paper describes a practical example of the implementation of innovative approaches to teaching in a highly traditional setting in Central Europe, and one that provides ideas and encouragement to those facing similar problems in the countries of Central and Eastern Europe and the former Soviet Union. PMID:7623686

  2. A computational method for drug repositioning using publicly available gene expression data

    PubMed Central

    2015-01-01

    Motivation: The identification of new therapeutic uses of existing drugs, or drug repositioning, offers the possibility of faster drug development, reduced risk, lower cost and shorter paths to approval. The advent of high-throughput microarray technology has enabled comprehensive monitoring of the transcriptional response associated with various disease states and drug treatments. These data can be used to characterize disease and drug effects and thereby give a measure of the association between a given drug and a disease. Several computational methods have been proposed in the literature that make use of publicly available transcriptional data to reposition drugs against diseases. Method: In this work, we carry out a data mining process using publicly available gene expression data sets associated with a few diseases and drugs, to identify existing drugs that can be repositioned against lung cancer and breast cancer. Results: Three strong candidates for repurposing have been identified: Letrozole and GDC-0941 against lung cancer, and Ribavirin against breast cancer. Letrozole and GDC-0941 are drugs currently used in breast cancer treatment, and Ribavirin is used in the treatment of Hepatitis C. PMID:26679199
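
    The core of many of these repositioning methods is a signature-reversal score: a drug whose expression signature anti-correlates with a disease signature becomes a repurposing candidate. A minimal sketch of that scoring idea, assuming NumPy and SciPy; the gene set and log-fold-change values below are fabricated purely for illustration:

      import numpy as np
      from scipy.stats import spearmanr

      # Hypothetical log-fold-change signatures over the same ordered gene set.
      genes = ["EGFR", "MYC", "TP53", "CCND1", "BRCA1", "VEGFA"]
      disease_sig = np.array([2.1, 1.7, -0.9, 1.2, -1.5, 0.8])     # disease vs. normal
      drug_sigs = {
          "drug_A": np.array([-1.8, -1.2, 0.7, -0.9, 1.1, -0.5]),  # reverses the disease
          "drug_B": np.array([1.9, 1.5, -0.8, 1.0, -1.2, 0.6]),    # mimics the disease
      }

      # Rank-based anti-correlation: the more negative rho is, the more the drug
      # reverses the disease signature, so candidates are ranked ascending.
      for name in sorted(drug_sigs, key=lambda d: spearmanr(disease_sig, drug_sigs[d])[0]):
          rho, pval = spearmanr(disease_sig, drug_sigs[name])
          print(f"{name}: Spearman rho = {rho:+.2f} (p = {pval:.3f})")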

  3. Exploration of preterm birth rates using the public health exposome database and computational analysis methods.

    PubMed

    Kershenbaum, Anne D; Langston, Michael A; Levine, Robert S; Saxton, Arnold M; Oyana, Tonny J; Kilbourne, Barbara J; Rogers, Gary L; Gittner, Lisaann S; Baktash, Suzanne H; Matthews-Juarez, Patricia; Juarez, Paul D

    2014-12-01

    Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation of high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. The singleton early premature birth rate in counties with populations over 100,000 persons provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including the proportion of black residents, obesity and diabetes, sexually transmitted infection rates, mother's age, income, marriage rates, pollution, and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for ways to intervene and reduce health disparities for preterm births. PMID:25464130

  4. Exploration of Preterm Birth Rates Using the Public Health Exposome Database and Computational Analysis Methods

    PubMed Central

    Kershenbaum, Anne D.; Langston, Michael A.; Levine, Robert S.; Saxton, Arnold M.; Oyana, Tonny J.; Kilbourne, Barbara J.; Rogers, Gary L.; Gittner, Lisaann S.; Baktash, Suzanne H.; Matthews-Juarez, Patricia; Juarez, Paul D.

    2014-01-01

    Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation of high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. The singleton early premature birth rate in counties with populations over 100,000 persons provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including the proportion of black residents, obesity and diabetes, sexually transmitted infection rates, mother's age, income, marriage rates, pollution, and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for ways to intervene and reduce health disparities for preterm births. PMID:25464130
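
    A compressed sketch of the pipeline shape described in the two records above, assuming NumPy and NetworkX; random data stands in for the county-level variables, and the paraclique step is simplified here to maximal cliques in a thresholded correlation graph:

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 12))                    # 300 counties x 12 variables
      X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=300)    # force a correlated cluster
      X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=300)
      y = X[:, 0] + rng.normal(scale=0.5, size=300)     # stand-in prematurity rate

      # Link variables whose absolute pairwise correlation exceeds a threshold.
      R = np.corrcoef(X, rowvar=False)
      edges = [(i, j) for i in range(12) for j in range(i + 1, 12) if abs(R[i, j]) > 0.8]
      G = nx.Graph(edges)

      # Collapse each dense subgraph (here: a maximal clique) into one latent
      # factor by averaging its standardized member variables.
      Z = (X - X.mean(axis=0)) / X.std(axis=0)
      factors = [Z[:, sorted(c)].mean(axis=1) for c in nx.find_cliques(G)]
      F = np.column_stack(factors) if factors else Z

      # Ordinary least squares of the outcome on the latent factors.
      A = np.column_stack([np.ones(len(y)), F])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      resid = y - A @ coef
      r2 = 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
      print(f"R-squared = {r2:.3f}")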

  5. BPO crude oil analysis data base user's guide: Methods, publications, computer access correlations, uses, availability

    SciTech Connect

    Sellers, C.; Fox, B.; Paulz, J.

    1996-03-01

    The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

  6. Computer Science and Technology Publications. NBS Publications List 84.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.

    This bibliography lists publications of the Institute for Computer Sciences and Technology of the National Bureau of Standards. Publications are listed by subject in the areas of computer security, computer networking, and automation technology. Sections list publications of: (1) current Federal Information Processing Standards; (2) computer…

  7. Computers in Public Broadcasting: Who, What, Where.

    ERIC Educational Resources Information Center

    Yousuf, M. Osman

    This handbook offers guidance to public broadcasting managers on computer acquisition and development activities. Based on a 1981 survey of planned and current computer uses conducted by the Corporation for Public Broadcasting (CPB) Information Clearinghouse, computer systems in public radio and television broadcasting stations are listed by…

  8. Computational Methods for Crashworthiness

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Carden, Huey D. (Compiler)

    1993-01-01

    Presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Crashworthiness held at Langley Research Center on 2-3 Sep. 1992 are included. The presentations addressed activities in the area of impact dynamics. Workshop attendees represented NASA, the Army and Air Force, the Lawrence Livermore and Sandia National Laboratories, the aircraft and automotive industries, and academia. The workshop objectives were to assess the state of the technology in the numerical simulation of crashes and to provide guidelines for future research.

  9. Public Databases Supporting Computational Toxicology

    EPA Science Inventory

    A major goal of the emerging field of computational toxicology is the development of screening-level models that predict potential toxicity of chemicals from a combination of mechanistic in vitro assay data and chemical structure descriptors. In order to build these models, resea...

  10. Publication Bias in Methodological Computational Research

    PubMed Central

    Boulesteix, Anne-Laure; Stierle, Veronika; Hapfelmeier, Alexander

    2015-01-01

    The problem of publication bias has long been discussed in research fields such as medicine. There is a consensus that publication bias is a reality and that solutions should be found to reduce it. In methodological computational research, including cancer informatics, publication bias may also be at work. The publication of negative research findings is certainly also a relevant issue, but has attracted very little attention to date. The present paper aims at providing a new formal framework to describe the notion of publication bias in the context of methodological computational research, facilitate and stimulate discussions on this topic, and increase awareness in the scientific community. We report an exemplary pilot study that aims at gaining experiences with the collection and analysis of information on unpublished research efforts with respect to publication bias, and we outline the encountered problems. Based on these experiences, we try to formalize the notion of publication bias. PMID:26508827

  11. Computing in Public Administration: Practice and Education.

    ERIC Educational Resources Information Center

    Norris, Donald F.; Thompson, Lyke

    1988-01-01

    Presents a survey of common and leading-edge computer use practices followed by municipal government personnel and the directors of 12 masters degree programs in public administration. Concludes by suggesting directions for future developments both in public agencies and in the academy. (GEA)

  12. Satellite orbit computation methods

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Mathematical and algorithmic techniques for the solution of problems in satellite dynamics were developed, along with solutions to satellite orbit motion. Dynamical analyses of shuttle on-orbit operations were conducted. Computer software routines for use in shuttle mission planning were developed and analyzed, while mathematical models of atmospheric density were formulated.
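
    A minimal sketch of the kind of routine such work produces, assuming NumPy: fourth-order Runge-Kutta propagation of a two-body orbit. The constant is rounded, and a mission-planning code would add perturbations such as the atmospheric-density models mentioned above:

      import numpy as np

      MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

      def two_body(state):
          """Point-mass gravity; state = [x, y, z, vx, vy, vz] in km and km/s."""
          r, v = state[:3], state[3:]
          return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])

      def rk4_step(state, dt):
          """One classical fourth-order Runge-Kutta step of size dt seconds."""
          k1 = two_body(state)
          k2 = two_body(state + 0.5 * dt * k1)
          k3 = two_body(state + 0.5 * dt * k2)
          k4 = two_body(state + dt * k3)
          return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      # Circular low-Earth orbit at roughly 400 km altitude.
      r0 = 6778.0
      state = np.array([r0, 0.0, 0.0, 0.0, np.sqrt(MU / r0), 0.0])
      for _ in range(5550):                 # one ~92.5-minute period, 1 s steps
          state = rk4_step(state, 1.0)
      print(f"radius after one orbit: {np.linalg.norm(state[:3]):.1f} km")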

  13. Computational methods working group

    SciTech Connect

    Gabriel, T. A.

    1997-09-01

    During the Cold Moderator Workshop several working groups were established including one to discuss calculational methods. The charge for this working group was to identify problems in theory, data, program execution, etc., and to suggest solutions considering both deterministic and stochastic methods including acceleration procedures.

  14. Acquisition of Computing Literacy on Shared Public Computers: Children and the "Hole in the Wall"

    ERIC Educational Resources Information Center

    Mitra, Sugata; Dangwal, Ritu; Chatterjee, Shiffon; Jha, Swati; Bisht, Ravinder S.; Kapur, Preeti

    2005-01-01

    Earlier work, often referred to as the "hole in the wall" experiments, has shown that groups of children can learn to use public computers on their own. This paper presents the method and results of an experiment conducted to investigate whether such unsupervised group learning in shared public spaces is universal. The experiment was conducted…

  15. Computational Methods Development at Ames

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents for the above research and speculates on its future course.

  16. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from the literature. PMID:24381236
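
    Of the ligand-based tools surveyed above, fingerprint similarity is the easiest to show concretely. A dependency-free sketch using the Tanimoto coefficient on hypothetical bit-set fingerprints; real work would generate the bits with a cheminformatics toolkit:

      def tanimoto(fp_a: set, fp_b: set) -> float:
          """Tanimoto (Jaccard) similarity of two fingerprint bit sets."""
          if not fp_a and not fp_b:
              return 0.0
          return len(fp_a & fp_b) / len(fp_a | fp_b)

      # Hypothetical substructure-feature fingerprints (sets of bit indices).
      query = {3, 17, 42, 77, 102, 310}            # a known active ligand
      library = {
          "cand_1": {3, 17, 42, 77, 102, 311},     # close analogue
          "cand_2": {5, 17, 90, 150, 310},         # partial overlap
          "cand_3": {201, 202, 203},               # unrelated scaffold
      }

      # Rank candidates by similarity to the known active; a common screening
      # heuristic keeps everything above roughly 0.7.
      for name, fp in sorted(library.items(),
                             key=lambda kv: tanimoto(query, kv[1]), reverse=True):
          print(f"{name}: {tanimoto(query, fp):.2f}")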

  17. Methods for computing color anaglyphs

    NASA Astrophysics Data System (ADS)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
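
    A per-pixel sketch of that optimization, assuming NumPy and SciPy, with a deliberately crude filter model standing in for measured transmission functions (the left filter passes only the anaglyph's red channel, the right filter its green and blue channels):

      import numpy as np
      from scipy.optimize import least_squares

      def srgb_to_lab(rgb):
          """sRGB in [0, 1] -> CIE L*a*b* (D65 white), standard formulas."""
          c = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
          M = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
          xyz = (M @ c) / np.array([0.95047, 1.0, 1.08883])
          f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                       xyz / (3 * (6 / 29) ** 2) + 4 / 29)
          return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

      def residual(p, left_rgb, right_rgb):
          """Lab distance between what each eye sees and what it should see."""
          seen_left = np.array([p[0], 0.0, 0.0])      # red filter: R channel only
          seen_right = np.array([0.0, p[1], p[2]])    # cyan filter: G and B channels
          return np.concatenate([srgb_to_lab(seen_left) - srgb_to_lab(left_rgb),
                                 srgb_to_lab(seen_right) - srgb_to_lab(right_rgb)])

      left = np.array([0.8, 0.4, 0.3])     # this pixel in the left-eye image
      right = np.array([0.7, 0.5, 0.3])    # the same pixel in the right-eye image
      fit = least_squares(residual, x0=np.array([0.5, 0.5, 0.5]),
                          bounds=(0.0, 1.0), args=(left, right))
      print("anaglyph pixel RGB:", np.round(fit.x, 3))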

  18. Closing the "Digital Divide": Building a Public Computing Center

    ERIC Educational Resources Information Center

    Krebeck, Aaron

    2010-01-01

    The public computing center offers an economical and environmentally friendly model for providing additional public computer access when and where it is needed. Though not intended to be a replacement for a full-service branch, the public computing center does offer a budget-friendly option for quickly expanding high-demand services into the…

  19. Computational methods for stellarator configurations

    SciTech Connect

    Betancourt, O.

    1992-01-01

    This project had two main objectives. The first was to continue to develop computational methods for the study of three-dimensional magnetic confinement configurations. The second was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self-consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three-dimensional nonlinear effects in Tokamak configurations, these self-consistently computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, and D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.

  1. Computational methods for stellarator configurations

    NASA Astrophysics Data System (ADS)

    Betancourt, O.

    This project had two main objectives. The first was to continue to develop computational methods for the study of three-dimensional magnetic confinement configurations. The second was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self-consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three-dimensional nonlinear effects in Tokamak configurations, these self-consistently computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, and D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.

  2. Geometric methods in quantum computation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun

    Recent advances in the physical sciences and engineering have created great hopes for new computational paradigms and substrates. One such new approach is the quantum computer, which holds the promise of enhanced computational power. Analogous to the way a classical computer is built from electrical circuits containing wires and logic gates, a quantum computer is built from quantum circuits containing quantum wires and elementary quantum gates to transport and manipulate quantum information. Therefore, the design of quantum gates and quantum circuits is a prerequisite for any real application of quantum computation. In this dissertation we apply geometric control methods from differential geometry and Lie group representation theory to analyze the properties of quantum gates and to design optimal quantum circuits. Using the Cartan decomposition and the Weyl group, we show that the geometric structure of nonlocal two-qubit gates is a 3-torus. After further reducing the symmetry, the geometric representation of nonlocal gates is conveniently visualized as a tetrahedron. Each point in this tetrahedron, except on the base, corresponds to a different equivalence class of nonlocal gates. This geometric representation is one of the cornerstones for the discussion of quantum computation in this dissertation. We investigate the properties of those two-qubit operations that can generate maximal entanglement. It is an astonishing finding that if we randomly choose a two-qubit operation, the probability that we obtain a perfect entangler is exactly one half. We prove that, given a two-body interaction Hamiltonian, it is always possible to explicitly construct a quantum circuit for exact simulation of any arbitrary nonlocal two-qubit gate by turning on the two-body interaction at most three times, together with at most four local gates. We also provide an analytic approach to constructing a universal quantum circuit from any entangling gate supplemented with local gates. Closed-form solutions have been derived for each step in this explicit construction procedure. Moreover, the minimum upper bound is found for constructing a universal quantum circuit from any Controlled-Unitary gate, and a near-optimal explicit construction of universal quantum circuits from a given Controlled-Unitary is provided. For the Controlled-NOT and Double-CNOT gates, we then develop simple analytic ways to construct universal quantum circuits with exactly three applications, which is the least possible for these gates. We further discover a new quantum gate (named the B gate) that achieves the desired universality with a minimal number of gates. Optimal implementation of single-qubit quantum gates is also investigated. Finally, as a real physical application, a constructive way to implement any arbitrary two-qubit operation on a spin electronics system is discussed.

  3. Computational methods for stealth design

    SciTech Connect

    Cable, V. P.

    1992-08-01

    A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.

  4. Systems Science Methods in Public Health

    PubMed Central

    Luke, Douglas A.; Stamatakis, Katherine A.

    2012-01-01

    Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example in preparing for global pandemics. However, systems science study designs and analytic methods remain underutilized and are not widely featured in public health curricula or training. In this review we present an argument for the utility of systems science methods in public health, introduce three important systems science methods (system dynamics, network analysis, and agent-based modeling), and provide three case studies where these methods have been used to answer important public health science questions in the areas of infectious disease, tobacco control, and obesity. PMID:22224885
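
    As an illustration of the first of those three methods, a minimal system-dynamics sketch assuming SciPy: the classic SIR compartmental model of the kind used in pandemic preparedness, with illustrative parameter values:

      import numpy as np
      from scipy.integrate import solve_ivp

      def sir(t, y, beta, gamma):
          """Classic SIR compartmental flows: S -> I -> R."""
          s, i, r = y
          return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

      beta, gamma = 0.3, 0.1        # transmission and recovery rates (illustrative)
      y0 = [0.99, 0.01, 0.0]        # initial susceptible/infected/recovered fractions
      sol = solve_ivp(sir, (0, 160), y0, args=(beta, gamma), dense_output=True)

      for ti in np.linspace(0, 160, 5):
          s, i, r = sol.sol(ti)
          print(f"day {ti:5.1f}: S={s:.3f} I={i:.3f} R={r:.3f}")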

  5. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-01

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in the testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods. PMID:26549429

  6. Public participation: more than a method?

    PubMed Central

    Boaz, Annette; Chambers, Mary; Stuttaford, Maria

    2014-01-01

    While it is important to support the development of methods for public participation, we argue that this should not be at the expense of a broader consideration of the role of public participation. We suggest that a rights based approach provides a framework for developing more meaningful approaches that move beyond public participation as synonymous with consultation to value the contribution of lay knowledge to the governance of health systems and health research. PMID:25337604

  7. Computational methods for stellarator configurations

    SciTech Connect

    Betancourt, O.

    1989-01-01

    This project consists of two parallel objectives. On the one hand, computational techniques for three-dimensional magnetic confinement configurations were developed or refined; on the other hand, these new techniques were applied to the solution of practical fusion energy problems, or the techniques themselves were transferred to other fusion researchers for practical use in the field.

  8. The Computer Connection: Putting Computers to Work in the High School Publications Program.

    ERIC Educational Resources Information Center

    Benedict, Mary

    Designed specifically for high school publications advisers who are seeking advice on how to use typesetters, editing terminals, personal computers, and printers in their publications work, this guidebook shares the experiences and advice of other high school advisers who are now using computers for newspapers, yearbooks, and magazines. Each case…

  9. How You Can Protect Public Access Computers "and" Their Users

    ERIC Educational Resources Information Center

    Huang, Phil

    2007-01-01

    By providing the public with online computing facilities, librarians make available a world of information resources beyond their traditional print materials. Internet-connected computers in libraries greatly enhance the opportunity for patrons to enjoy the benefits of the digital age. Unfortunately, as hackers become more sophisticated and…

  10. Models for a National Public School Computing Network.

    ERIC Educational Resources Information Center

    Bull, Glen L.; And Others

    1993-01-01

    Discusses a possible national education computer network that would include elementary, secondary, and higher education institutions. Topics addressed include Internet; common standards; distributed computing; open access; equity concerns; and examples of two successful public school networks in Virginia and Texas that are linked through the…

  11. Computational methods for fluid flow

    NASA Astrophysics Data System (ADS)

    Peyret, R.; Taylor, T. D.

    Numerical approaches are discussed, taking into account general equations, finite-difference methods, integral and spectral methods, the relationship between numerical approaches, and specialized methods. A description of incompressible flows is provided, giving attention to finite-difference solutions of the Navier-Stokes equations, finite-element methods applied to incompressible flows, spectral method solutions for incompressible flows, and turbulent-flow models and calculations. In a discussion of compressible flows, inviscid compressible flows are considered along with viscous compressible flows. Attention is given to the potential flow solution technique, Green's functions and stream-function vorticity formulation, the discrete vortex method, the cloud-in-cell method, the method of characteristics, turbulence closure equations, a large-eddy simulation model, turbulent-flow calculations with a closure model, and direct simulations of turbulence.

  12. Wildlife software: procedures for publication of computer software

    USGS Publications Warehouse

    Samuel, M.D.

    1990-01-01

    Computers and computer software have become an integral part of the practice of wildlife science. Computers now play an important role in teaching, research, and management applications. Because of the specialized nature of wildlife problems, specific computer software is usually required to address a given problem (e.g., home range analysis). This type of software is not usually available from commercial vendors and therefore must be developed by those wildlife professionals with particular skill in computer programming. Current journal publication practices generally prevent a detailed description of computer software associated with new techniques. In addition, peer review of journal articles does not usually include a review of associated computer software. Thus, many wildlife professionals are usually unaware of computer software that would meet their needs or of major improvements in software they commonly use. Indeed most users of wildlife software learn of new programs or important changes only by word of mouth.

  13. Computational methods for unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Thomas, J. L.

    1987-01-01

    Computational methods for unsteady transonic flows are surveyed with emphasis on prediction. Computational difficulty is discussed with respect to the type of unsteady flow: attached, mixed (attached/separated), and separated. Significant early computations of shock motions, aileron buzz, and periodic oscillations are discussed. The maturation of computational methods toward the capability of treating complete vehicles with reasonable computational resources is noted, and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed, and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

  14. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer...

  15. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer...

  16. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer...

  17. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer...

  18. A Computer-Assisted Instruction in Teaching Abstract Statistics to Public Affairs Undergraduates

    ERIC Educational Resources Information Center

    Ozturk, Ali Osman

    2012-01-01

    This article attempts to demonstrate the applicability of a computer-assisted instruction supported with simulated data in teaching abstract statistical concepts to political science and public affairs students in an introductory research methods course. The software is called the Elaboration Model Computer Exercise (EMCE) in that it takes a great…

  1. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  2. Computational Methods in Nanostructure Design

    NASA Astrophysics Data System (ADS)

    Bellesia, Giovanni; Lampoudi, Sotiria; Shea, Joan-Emma

    Self-assembling peptides can serve as building blocks for novel biomaterials. Replica exchange molecular dynamics simulations are a powerful means to probe the conformational space of these peptides. We discuss the theoretical foundations of this enhanced sampling method and its use in biomolecular simulations. We then apply this method to determine the monomeric conformations of the Alzheimer amyloid-β(12-28) peptide that can serve as initiation sites for aggregation.
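
    A toy sketch of the exchange step at the heart of that sampling method, assuming NumPy: Metropolis moves within each replica on a one-dimensional double-well potential that stands in for the peptide energy landscape, plus the standard temperature-swap criterion (all parameters illustrative):

      import numpy as np

      rng = np.random.default_rng(5)

      def energy(x):
          """One-dimensional double well with minima at x = -1 and x = +1."""
          return (x * x - 1.0) ** 2

      betas = np.array([5.0, 2.5, 1.2, 0.6])    # inverse temperatures, cold to hot
      x = np.full(len(betas), -1.0)             # start every replica in one well
      step = 0.3

      for sweep in range(20000):
          # Metropolis move within each replica at its own temperature.
          for i, beta in enumerate(betas):
              trial = x[i] + rng.normal(scale=step)
              dE = energy(trial) - energy(x[i])
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  x[i] = trial
          # Attempt a swap between one random adjacent temperature pair.
          i = rng.integers(len(betas) - 1)
          delta = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
          if delta >= 0 or rng.random() < np.exp(delta):
              x[i], x[i + 1] = x[i + 1], x[i]

      print(f"cold replica ended at x = {x[0]:+.2f}")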

  3. Computational Methods for Biomolecular Electrostatics

    PubMed Central

    Dong, Feng; Olsen, Brett; Baker, Nathan A.

    2008-01-01

    An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951

  4. Computational Methods to Model Persistence.

    PubMed

    Vandervelde, Alexandra; Loris, Remy; Danckaert, Jan; Gelens, Lendert

    2016-01-01

    Bacterial persister cells are dormant cells, tolerant to multiple antibiotics, that are involved in several chronic infections. Toxin-antitoxin modules play a significant role in the generation of such persister cells. Toxin-antitoxin modules are small genetic elements, omnipresent in the genomes of bacteria, which code for an intracellular toxin and its neutralizing antitoxin. In the past decade, mathematical modeling has become an important tool to study the regulation of toxin-antitoxin modules and their relation to the emergence of persister cells. Here, we provide an overview of several numerical methods to simulate toxin-antitoxin modules. We cover both deterministic modeling using ordinary differential equations and stochastic modeling using stochastic differential equations and the Gillespie method. Several characteristics of toxin-antitoxin modules such as protein production and degradation, negative autoregulation through DNA binding, toxin-antitoxin complex formation and conditional cooperativity are gradually integrated in these models. Finally, by including growth rate modulation, we link toxin-antitoxin module expression to the generation of persister cells. PMID:26468111
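
    A minimal sketch of the last of those approaches, the Gillespie stochastic simulation algorithm, applied to a stripped-down toxin-antitoxin module, assuming NumPy; the reactions and rate constants are illustrative rather than taken from the chapter:

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.array([0, 0, 0])            # species counts: [toxin, antitoxin, complex]

      # Reactions: production of T and A, degradation of T and A (the antitoxin
      # is degraded faster, as is typical), and complex formation T + A -> TA.
      stoich = np.array([[ 1,  0, 0],    # -> T
                         [ 0,  1, 0],    # -> A
                         [-1,  0, 0],    # T ->
                         [ 0, -1, 0],    # A ->
                         [-1, -1, 1]])   # T + A -> TA

      def propensities(x):
          kT, kA, dT, dA, kon = 0.5, 1.0, 0.01, 0.1, 0.002   # illustrative rates
          return np.array([kT, kA, dT * x[0], dA * x[1], kon * x[0] * x[1]])

      t, t_end = 0.0, 500.0
      while t < t_end:
          a = propensities(x)
          a0 = a.sum()
          t += rng.exponential(1.0 / a0)        # waiting time to the next event
          j = rng.choice(len(a), p=a / a0)      # which reaction fires
          x = x + stoich[j]

      print(f"t = {t_end}: toxin={x[0]}, antitoxin={x[1]}, complex={x[2]}")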

  5. Computational Methods for Ideal Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Kercher, Andrew D.

    Numerical schemes for the ideal magnetohydrodynamics (MHD) equations are widely used for modeling space weather and astrophysical flows. They are designed to resolve the different waves that propagate through a magnetohydrodynamic fluid, namely, the fast, Alfven, slow, and entropy waves. Numerical schemes for ideal magnetohydrodynamics that are based on the standard finite volume (FV) discretization exhibit pseudo-convergence, in which non-regular waves no longer exist only after heavy grid refinement. A method is described for obtaining solutions for coplanar and near-coplanar cases that consist of only regular waves, independent of grid refinement. The method, referred to as Compound Wave Modification (CWM), involves removing the flux associated with non-regular structures and can be used for simulations in two and three dimensions because it does not require explicitly tracking an Alfven wave. For a near-coplanar case, and for grids with 2^13 points or fewer, we find root-mean-square errors (RMSEs) that are as much as 6 times smaller. For the coplanar case, in which non-regular structures will exist at all levels of grid refinement for standard FV schemes, the RMSE is as much as 25 times smaller. A multidimensional ideal MHD code has been implemented for simulations on graphics processing units (GPUs). Performance measurements were conducted for both the NVIDIA GeForce GTX Titan and the Intel Xeon E5645 processor. The GPU is shown to perform one to two orders of magnitude faster than the CPU when the latter uses a single core, and two to three times faster than the CPU run in parallel with OpenMP. Performance comparisons are made for two methods of storing data on the GPU. The first approach stores data as an Array of Structures (AoS); e.g., a point coordinate array of size 3 x n is iterated over. The second approach stores data as a Structure of Arrays (SoA); e.g., three separate arrays of size n are iterated over simultaneously. For an AoS, coalescing does not occur, reducing memory efficiency. All results are given for Cartesian grids, but the algorithms are implemented for a general geometry on unstructured grids.

  6. Method for tracking core-contributed publications.

    PubMed

    Loomis, Cynthia A; Curchoe, Carol Lynn

    2012-12-01

    Accurately tracking core-contributed publications is an important and often difficult task. Many core laboratories are supported by programmatic grants (such as Cancer Center Support Grant and Clinical Translational Science Awards) or generate data with instruments funded through S10, Major Research Instrumentation, or other granting mechanisms. Core laboratories provide their research communities with state-of-the-art instrumentation and expertise, elevating research. It is crucial to demonstrate the specific projects that have benefited from core services and expertise. We discuss here the method we developed for tracking core contributed publications. PMID:23204927

  7. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral methods in numerical calculations stems from the attractive approximation properties of orthogonal polynomial expansions.
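
    The transform methods credited above to Orszag and to Eliasen, Machenauer, and Rasmussen evaluate derivatives in spectral space but form nonlinear products pointwise in physical space. A minimal sketch for the term u*du/dx on a periodic grid, assuming NumPy (dealiasing omitted for brevity):

      import numpy as np

      n = 64
      x = 2 * np.pi * np.arange(n) / n      # periodic grid on [0, 2*pi)
      u = np.sin(x)                          # sample field

      # Spectral differentiation: multiply Fourier coefficients by i*k.
      k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers
      dudx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

      # Transform method: form the nonlinear product pointwise in physical
      # space rather than as a costly convolution of spectral coefficients.
      nonlinear = u * dudx

      # For u = sin(x), u*u' = 0.5*sin(2x); the error is at machine precision.
      print(np.max(np.abs(nonlinear - 0.5 * np.sin(2 * x))))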

  8. Computational Chemistry Using Modern Electronic Structure Methods

    ERIC Educational Resources Information Center

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now be performed easily, even for large molecules.

  9. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  10. Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

    ERIC Educational Resources Information Center

    Jayakar, Krishna; Park, Eun-A

    2012-01-01

    The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

  11. Fast multipole methods for scattering computations

    NASA Astrophysics Data System (ADS)

    Rokhlin, Vladimir; Coifman, Ronald R.; Wickerhauser, Victor

    1994-10-01

    The purpose of this phase of the project was to develop fast algorithms for computations of electromagnetic scattering (radar), and to assist in the implementation and development of fast engineering software using these algorithms by the team at Hughes Research Laboratories. Present methods for computing radar cross sections and other scattering cross sections are severely limited by prohibitive processing and memory requirements. New fundamental Fast Multipole Methods developed over the last few years by Rokhlin (for 2-D scattering) held the promise of breaking this computational bottleneck. The goal set out in this project was to extend the work to higher dimensions and to complete the computational infrastructure needed for converting these algorithms to engineering tools. The codes and algorithms obtained in this joint effort between HRL and FMAH have already changed the state of the art in this area of electromagnetics simulations and promise to revolutionize computational design technology. We have verified that these algorithms provide the expected improvements and scaling.

  12. How the role of computing is driving new genetics' public policy.

    PubMed

    Marturano, Antonio; Chadwick, Ruth

    2004-01-01

    In this paper we will examine some ethical aspects of the role that computers and computing increasingly play in the new genetics. Our claim is that there is no new genetics without computer science. Computer science is important for the new genetics on two levels: (1) from a theoretical perspective, and (2) from the point of view of geneticists' practice. With respect to (1), the new genetics is fully impregnated with concepts that are basic to computer science. Regarding (2), recent developments in the Human Genome Project (HGP) have shown that computers shape the practices of molecular genetics; an important example is the Shotgun Method's contribution to accelerating the mapping of the human genome. A new challenge to the HGP is provided by the Open Source philosophy (in computer science), which is another way computer technologies now influence the shaping of public policy debates involving genomics. PMID:16969960

  13. Updated Panel-Method Computer Program

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1995-01-01

    Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow; a time-stepping wake model for simulating either steady or unsteady motions; capability for Trefftz-plane computation of induced drag; capability for computation of off-body and on-body streamlines; and capability for computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.

  14. Computing discharge using the index velocity method

    USGS Publications Warehouse

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings: the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression techniques in which the mean cross-sectional velocity for the standard section is related to the measured index velocity. Most ratings are simple linear regressions, but more complex ratings may be necessary in some cases. Once the rating is established, validation measurements should be made periodically. Over time, validation measurements may provide additional definition to the rating or result in the creation of a new rating. The computation of discharge is the last step in the index velocity method, and in some ways it is the most straightforward step. This step differs little from the steps used to compute discharge records for stage-discharge gaging stations. The ratings are entered into database software used for records computation, and continuous records of discharge are computed.
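
    A minimal sketch of the two ratings and their product, assuming NumPy; the calibration values are fabricated for illustration, and both ratings are simple linear fits, which the report notes is the most common case:

      import numpy as np

      # Calibration measurements (illustrative values, not from a real station).
      index_v = np.array([0.20, 0.45, 0.80, 1.10, 1.40])   # ADVM index velocity, m/s
      mean_v = np.array([0.25, 0.52, 0.95, 1.30, 1.65])    # measured mean velocity, m/s
      stage = np.array([1.0, 1.5, 2.0, 2.5, 3.0])          # stage, m
      area = np.array([12.0, 18.5, 25.0, 31.5, 38.0])      # surveyed area, m^2

      # Index velocity rating: mean velocity as a linear function of index velocity.
      b1, b0 = np.polyfit(index_v, mean_v, 1)
      # Stage-area rating: area as a linear function of stage (a real rating
      # would follow the surveyed shape of the standard cross section).
      a1, a0 = np.polyfit(stage, area, 1)

      def discharge(index_velocity, stage_now):
          """Q = V * A, multiplying the outputs of the two ratings."""
          v = b1 * index_velocity + b0
          a = a1 * stage_now + a0
          return v * a

      print(f"Q = {discharge(0.9, 2.2):.1f} m^3/s")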

  15. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L. (Ames, IA)

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
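
    A toy sketch of that fixed-interval, scalable-workload idea; the task here, an ever-finer trapezoid-rule integral, merely stands in for the patent's scalable task set:

      import time

      def run_benchmark(interval_s=1.0):
          """Measure how far a machine progresses through an ever-finer workload
          in a fixed wall-clock interval; more subintervals = better rating."""
          deadline = time.perf_counter() + interval_s
          n, estimate = 1, 0.0
          while time.perf_counter() < deadline:
              # Scalable task: trapezoid-rule estimate of the integral of x^2
              # on [0, 1] with n subintervals; each pass doubles the resolution.
              h = 1.0 / n
              total = 0.5 * (0.0 + 1.0)
              for i in range(1, n):
                  total += (i * h) ** 2
              estimate = total * h
              n *= 2
          return n // 2, estimate    # finest resolution completed, and its result

      resolution, value = run_benchmark()
      print(f"rating: {resolution} subintervals reached (integral ~ {value:.6f})")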

  16. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... of public domain computer software. (a) General. This section prescribes the procedures for... software under section 805 of Public Law 101-650, 104 Stat. 5089 (1990). Documents recorded in...

  17. Semiempirical methods for computing turbulent flows

    NASA Technical Reports Server (NTRS)

    Belov, I. A.; Ginzburg, I. P.

    1986-01-01

    Two semiempirical theories that provide a basis for determining the turbulent friction and heat exchange near a wall are presented: (1) the Prandtl-Karman theory, and (2) a theory utilizing an equation for the energy of turbulent pulsations. A comparison is made between exact numerical methods and approximate integral methods for computing turbulent boundary layers in the presence of pressure gradients, blowing, or suction. Using the turbulent flow around a plate as an example, it is shown that, when computing turbulent flows with external turbulence, it is preferable to construct a turbulence model based on the equation for the energy of turbulent pulsations.

  18. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  19. Survey of Public IaaS Cloud Computing API

    NASA Astrophysics Data System (ADS)

    Yamato, Yoji; Moriya, Takaaki; Ogawa, Takeshi; Akahani, Junichi

    Recently, Cloud computing has spread rapidly and many Cloud providers have started their Cloud services. One of the problems with Cloud computing is Cloud provider lock-in for users. Cloud computing management APIs such as ordering or provisioning differ across Cloud providers, so users need to study and implement new APIs whenever they change Cloud providers. OGF and DMTF have started discussions on the standardization of Cloud computing APIs, but there is no standard yet. In this technical note, to clarify what APIs Cloud providers should provide, we study common APIs for Cloud computing. We survey and compare Cloud computing APIs such as Rackspace Cloud Server, Sun Cloud, GoGrid, ElasticHosts, Amazon EC2 and FlexiScale, which are currently provided in the market as public IaaS Cloud APIs. From the survey, we find that the common APIs should support a REST access style and provide account management, virtual server management, storage management, network management and resource usage management capabilities. We also show an example of OSS that provides these common APIs, compared with the OSS of normal hosting services.

  1. Computational Methods for Failure Analysis and Life Prediction

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Harris, Charles E. (Compiler); Housner, Jerrold M. (Compiler); Hopkins, Dale A. (Compiler)

    1993-01-01

    This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center on 14-15 Oct. 1992. The presentations focused on damage, failure, and life prediction of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

  2. Soft computing methods for geoidal height transformation

    NASA Astrophysics Data System (ADS)

    Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.

    2009-07-01

    Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.
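
    For reference, the conventional polynomial baseline the soft computing models are compared against can be as simple as a least-squares surface fit. The sketch below (synthetic data and hypothetical function names assumed) fits a second-degree polynomial N(x, y) to known geoid undulations and evaluates it at a new point.

        import numpy as np

        def poly_design(x, y):
            # Second-degree bivariate polynomial basis.
            return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

        def fit_geoid_poly(x, y, undulation):
            coeffs, *_ = np.linalg.lstsq(poly_design(x, y), undulation, rcond=None)
            return coeffs

        # Synthetic control points (local planar coordinates, metres).
        rng = np.random.default_rng(0)
        x, y = rng.uniform(0, 1e4, 50), rng.uniform(0, 1e4, 50)
        N = 38.0 + 1e-4 * x - 2e-4 * y + 1e-9 * x * y + 0.01 * rng.standard_normal(50)
        c = fit_geoid_poly(x, y, N)
        print(poly_design(np.array([5e3]), np.array([5e3])) @ c)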

  3. Efficient Methods to Compute Genomic Predictions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

  4. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  5. Shifted power method for computing tensor eigenvalues.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-07-01

    Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = lambda x subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
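
    The core SS-HOPM update is simple enough to sketch. Below is a minimal Python illustration for a symmetric order-3 tensor, assuming a fixed positive shift alpha; the paper derives how large the shift must be to guarantee monotone convergence, and the value used here is only an illustrative choice.

        import numpy as np

        def ss_hopm(A, alpha=2.0, tol=1e-10, max_iter=500, seed=0):
            # Shifted symmetric higher-order power method for a symmetric
            # order-3 tensor A (shape n x n x n): iterate
            #   x <- normalize(A x^2 + alpha x),  lambda = A x^3.
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(A.shape[0])
            x /= np.linalg.norm(x)
            lam = np.einsum('ijk,i,j,k->', A, x, x, x)
            for _ in range(max_iter):
                y = np.einsum('ijk,j,k->i', A, x, x) + alpha * x
                x = y / np.linalg.norm(y)
                lam_new = np.einsum('ijk,i,j,k->', A, x, x, x)
                if abs(lam_new - lam) < tol:
                    break
                lam = lam_new
            return lam_new, x

        # Symmetrize a random tensor to get a valid test input.
        T = np.random.default_rng(1).standard_normal((4, 4, 4))
        T = sum(T.transpose(p) for p in
                [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6.0
        print(ss_hopm(T))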

  6. Shifted power method for computing tensor eigenpairs.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

    Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = lambda x subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.

  7. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

  8. Experience of public procurement of Open Compute servers

    NASA Astrophysics Data System (ADS)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

    The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal of developing servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate whether the OCP market is sufficiently mature and broad to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  9. Wavelet Methods in Computational Fluid Dynamics *

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Vasilyev, Oleg V.

    2010-01-01

    This article reviews state-of-the-art adaptive, multiresolution wavelet methodologies for modeling and simulation of turbulent flows with various examples. Different numerical methods for solving the Navier-Stokes equations in adaptive wavelet bases are described. We summarize coherent vortex extraction methodologies, which utilize the efficient wavelet decomposition of turbulent flows into space-scale contributions, and present a hierarchy of wavelet-based turbulence models. Perspectives for modeling and computing industrially relevant flows are also given.

  10. Computational methods for ideal compressible flow

    NASA Technical Reports Server (NTRS)

    Vanleer, B.

    1983-01-01

    Conservative dissipative difference schemes for computing one-dimensional flow are introduced, and the recognition and representation of flow discontinuities are discussed. Multidimensional methods are outlined. Second-order finite volume schemes are introduced. Conversion of difference schemes for a single linear convection equation into schemes for the hyperbolic system of the nonlinear conservation laws of ideal compressible flow is explained. Approximate Riemann solvers are presented. Monotone initial-value interpolation, as well as limiters, switches, and artificial dissipation, are considered.

  11. A computational method for viscous incompressible flows

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Chang, J. L. C.

    1984-01-01

    An implicit, finite-difference procedure for numerically solving viscous incompressible flows is presented. The pressure-field solution is based on the pseudocompressibility method, in which a time-derivative pressure term is introduced into the mass-conservation equation to form a set of hyperbolic equations. The pressure-wave propagation and the spreading of the viscous effect are investigated using simple test problems. Computed results for external and internal flows are presented to verify the present method, which has proved to be very robust in simulating incompressible flows.
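
    In the standard pseudocompressibility (artificial compressibility) formulation, the divergence-free constraint is replaced by a pressure evolution equation in pseudo-time tau, with beta the pseudocompressibility parameter; this is the generic form of the method, not necessarily the exact variant implemented by the authors:

        \frac{\partial p}{\partial \tau} + \beta \, \frac{\partial u_j}{\partial x_j} = 0

    At pseudo-time convergence the first term vanishes and the incompressibility condition du_j/dx_j = 0 is recovered.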

  12. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  13. Computations of entropy bounds: Multidimensional geometric methods

    SciTech Connect

    Makaruk, H.E.

    1998-02-01

    The entropy bound for a constructive upper bound on the needed number-of-bits for solving a dichotomy is represented by the quotient of two multidimensional solid volumes. Minimization of this upper bound requires exact calculation of the volume of this quotient. Three methods for exact computation of the volume of a given nD solid are presented: (1) a general method for calculating any nD volume by slicing it into volumes of decreasing dimension; (2) a method applying an appropriate curvilinear coordinate system, for volumes bounded by symmetrical curvilinear hypersurfaces (spheres, cones, hyperboloids, ellipsoids, cylinders, etc.); and (3) an algorithm for dividing any nD complex into simplices and computing the volumes of the simplices, supplemented by a general formula for the volume of an nD simplex. These mathematical methods enable exact calculation of the volume of complicated multidimensional solids. The methods allow for the calculation of the minimal volume and lead to tighter bounds on the needed number-of-bits.
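
    The simplex route, item (3), rests on the classical determinant formula for the volume of an nD simplex; a minimal Python illustration:

        import numpy as np
        from math import factorial

        def simplex_volume(vertices):
            # Volume of an n-dimensional simplex with n+1 vertices (rows):
            #   V = |det(v_1 - v_0, ..., v_n - v_0)| / n!
            v = np.asarray(vertices, dtype=float)
            edges = v[1:] - v[0]
            return abs(np.linalg.det(edges)) / factorial(len(edges))

        # Unit tetrahedron: expected volume 1/6.
        print(simplex_volume([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))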

  14. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced, based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
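
    The state-space idea can be illustrated with the standard steady-state covariance calculation, where an output variance follows from a Lyapunov equation instead of a frequency-domain integral. The sketch below is a generic rms-response computation under that assumption, not Bayard's specific jitter expressions (which use the Sirlin, San Martin, and Lucke windowed definition):

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def rms_output(A, B, C, W):
            # For dx/dt = A x + B w with white noise w of intensity W, the
            # steady-state covariance P solves A P + P A^T + B W B^T = 0,
            # and the rms of y = C x is sqrt(C P C^T).
            P = solve_continuous_lyapunov(A, -(B @ W @ B.T))
            return float(np.sqrt(C @ P @ C.T))

        # Example: lightly damped pointing mode driven by white torque noise.
        w0, zeta = 2.0 * np.pi, 0.02
        A = np.array([[0.0, 1.0], [-w0**2, -2.0 * zeta * w0]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        print(rms_output(A, B, C, W=np.array([[1e-4]])))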

  15. Accelerated matrix element method with parallel computing

    NASA Astrophysics Data System (ADS)

    Schouten, D.; DeAbreu, A.; Stelzer, B.

    2015-07-01

    The matrix element method utilizes ab initio calculations of probability densities as powerful discriminants for processes of interest in experimental particle physics. The method has already been used successfully at previous and current collider experiments. However, the computational complexity of this method for final states with many particles and degrees of freedom sets it at a disadvantage compared to supervised classification methods such as decision trees, k nearest-neighbor, or neural networks. This note presents a concrete implementation of the matrix element technique using graphics processing units. Due to the intrinsic parallelizability of multidimensional integration, dramatic speedups can be readily achieved, which makes the matrix element technique viable for general usage at collider experiments.

  16. Probabilistic Computational Methods in Structural Failure Analysis

    NASA Astrophysics Data System (ADS)

    Krejsa, Martin; Kralik, Juraj

    2015-12-01

    Probabilistic methods are used in engineering where a computational model contains random variables. Each random variable in the probabilistic calculation carries uncertainty. Typical sources of uncertainty are the properties of the material, production and/or assembly inaccuracies in the geometry, and the environment in which the structure is located. The paper is focused on methods for the calculation of failure probabilities in structural failure and reliability analysis, with special attention to the newly developed probabilistic method Direct Optimized Probabilistic Calculation (DOProC), which is highly efficient in terms of calculation time and the accuracy of the solution. The novelty of the proposed method lies in an optimized numerical integration that does not require any simulation technique. The algorithm has been implemented in software applications and has been used several times in probabilistic tasks and probabilistic reliability assessments.

  17. Numerical methods for problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Mead, Jodi Lorraine

    1998-12-01

    A goal of computational aeroacoustics is the accurate calculation of noise from a jet in the far field. This work concerns the numerical aspects of accurately calculating acoustic waves over large distances and long times. More specifically, the stability, efficiency, accuracy, dispersion and dissipation of spatial discretizations, time stepping schemes, and absorbing boundaries for the direct solution of wave propagation problems are determined. Efficient finite difference methods developed by Tam and Webb, which minimize dispersion and dissipation, are commonly used for the spatial and temporal discretization. Alternatively, high order pseudospectral methods can be made more efficient by using the grid transformation introduced by Kosloff and Tal-Ezer. Work in this dissertation confirms that the grid transformation introduced by Kosloff and Tal-Ezer is not spectrally accurate because, in the limit, the grid transformation forces zero derivatives at the boundaries. If a small number of grid points are used, it is shown that approximations with the Chebyshev pseudospectral method with the Kosloff and Tal-Ezer grid transformation are as accurate as with the Chebyshev pseudospectral method. This result is based on the analysis of the phase and amplitude errors of these methods, and their use for the solution of a benchmark problem in computational aeroacoustics. For the grid-transformed Chebyshev method with a small number of grid points it is, however, more appropriate to compare its accuracy with that of high-order finite difference methods. This comparison, at an accuracy of 10^-3 for a benchmark problem in computational aeroacoustics, is performed for the grid-transformed Chebyshev method and the fourth-order finite difference method of Tam. Solutions with the finite difference method are as accurate as, and obtained more efficiently than, those with the Chebyshev pseudospectral method with the grid transformation. The efficiency of the Chebyshev pseudospectral method is further improved by developing Runge-Kutta methods for the temporal discretization which maximize imaginary stability intervals. Two new Runge-Kutta methods, which allow time steps almost twice as large as the maximal-order schemes while holding dissipation and dispersion fixed, are developed. In the process of studying dispersion and dissipation, it is determined that maximizing dispersion minimizes dissipation, and vice versa. In order to determine accurate and efficient absorbing boundary conditions, absorbing layers are studied and compared with one-way wave equations. The matched layer technique for the Maxwell equations is equivalent to the absorbing layer technique for the acoustic wave equation introduced by Kosloff and Kosloff. The numerical implementation of the perfectly matched layer for the acoustic wave equation with a large damping parameter results in only a small portion of the wave transmitting into the absorbing layer, and a large portion reflecting back into the domain. The perfectly matched layer is implemented on a single domain for the solution of the second-order wave equation, and when implemented in this manner shows no advantage over the matched layer. Solutions of the second-order wave equation, with the absorbing boundary condition imposed either by the matched layer or by the one-way wave equations, are compared. The comparison shows no advantage of the matched layer over the one-way wave equation for the absorbing boundary condition. Hence there is no benefit to be gained by using the matched layer, which necessarily increases the size of the computational domain.
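
    The imaginary-axis stability interval that those Runge-Kutta methods maximize is easy to check numerically for any explicit scheme from its stability polynomial; a small sketch, with classical RK4 as the test case:

        import numpy as np

        def imaginary_stability_limit(coeffs, ymax=10.0, n=100_000):
            # Largest y with |R(iy)| <= 1, where R(z) = sum_k c_k z^k is the
            # stability polynomial of an explicit Runge-Kutta scheme.
            y = np.linspace(0.0, ymax, n)
            R = sum(c * (1j * y) ** k for k, c in enumerate(coeffs))
            bad = np.nonzero(np.abs(R) > 1.0 + 1e-12)[0]
            return ymax if bad.size == 0 else y[bad[0]]

        # Classical RK4: R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24,
        # stable on the imaginary axis up to 2*sqrt(2), roughly 2.83.
        print(imaginary_stability_limit([1.0, 1.0, 1/2, 1/6, 1/24]))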

  18. Computational studies of sialyllactones: methods and uses.

    PubMed

    Parrill, A L; Mamuya, N; Dolata, D P; Gervay, J

    1997-06-01

    N-Acetylneuraminic acid (1) is a common sugar in many biological recognition processes. Neuraminidase enzymes recognize and cleave terminal sialic acids from cell surfaces. Viral entry into host cells requires neuraminidase activity; thus, inhibition of neuraminidase is a useful strategy for the development of drugs against viral infections. A recent crystal structure of influenza viral neuraminidase with sialic acid bound shows that the sialic acid is in a boat conformation [Prot Struct Funct Genet 14: 327 (1992)]. Our studies seek to determine whether structural pre-organization can be achieved through the use of sialyllactones. Determining whether sialyllactones are pre-organized in a binding conformation requires conformational analysis. Our inability to find a systematic study comparing the results obtained by various computational methods for carbohydrate modeling led us to compare two different conformational analysis techniques, four different force fields, and three different solvent models. The computational models were compared based on their ability to reproduce experimental coupling constants for sialic acid, sialyl-1,4-lactone, and sialyl-1,7-lactone derivatives. This study has shown that the MM3 force field, using the implicit solvent model for water implemented in Macromodel, best reproduces the experimental coupling constants. The low-energy conformations generated by this combination of computational methods are pre-organized toward conformations which fit well into the active site of neuraminidase. PMID:9249154

  19. Soft Computing Methods for Disulfide Connectivity Prediction

    PubMed Central

    Márquez-Chamorro, Alfonso E.; Aguilar-Ruiz, Jesús S.

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods. PMID:26523116

  20. Soft Computing Methods for Disulfide Connectivity Prediction.

    PubMed

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods. PMID:26523116

  1. Teaching Practical Public Health Evaluation Methods

    ERIC Educational Resources Information Center

    Davis, Mary V.

    2006-01-01

    Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

  2. Teaching Practical Public Health Evaluation Methods

    ERIC Educational Resources Information Center

    Davis, Mary V.

    2006-01-01

    Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

  3. Review of Computational Stirling Analysis Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

    Nuclear thermal-to-electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in current designs could be better understood. However, they are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of understanding of Stirling losses may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique, is presented in detail.

  4. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report describes the design of the architecture, and a performance study, of a parallel computing environment for Monte Carlo simulation for particle therapy planning, using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed a speed approximately 28 times that of a single-thread architecture, together with improved stability. A study of methods for optimizing the system operations also indicated lower cost. PMID:23877155

  5. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.

  6. A new spectral method to compute FCN

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Huang, C. L.

    2014-12-01

    Free core nutation (FCN) is a rotational mode of the earth with a fluid core. All traditional theoretical methods produce an FCN period near 460 days with PREM, while precise observations (VLBI + SG tides) indicate it should be near 430 days. In order to close this large gap, astronomers and geophysicists have offered various assumptions, e.g., increasing the core-mantle-boundary (CMB) flattening by about 5%, a strong coupling between nutation and the geomagnetic field near the CMB, viscous coupling, or topographic coupling. Do we really need these unproved assumptions, or does the problem lie with the traditional theoretical methods themselves? Earth models (e.g., PREM) provide accurate and robust profiles of physical parameters, such as density and the Lame parameters, but their radial derivatives, which are also used in all traditional methods to theoretically calculate normal modes (e.g., FCN), nutation, and tides of the non-rigid earth, are not as trustworthy as the parameters themselves. A new multiple-layer spectral method is proposed and applied to the computation of normal modes to avoid these problems. The new method can handle not only a first-order ellipsoid but also irregular asymmetric 3D earth models. Our preliminary result for the FCN period is 435 sidereal days.

  7. Monte Carlo methods on advanced computer architectures

    SciTech Connect

    Martin, W.R.

    1991-12-31

    Monte Carlo methods describe a wide class of computational methods that utilize random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or may be used to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo came about during the Manhattan Project to describe the new mathematical methods being developed, which had some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods, and will be the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods, and references to other applications of Monte Carlo can be found in this excellent book. For the remainder of this paper, the term Monte Carlo will be synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.
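
    The definite-integral use mentioned above is the simplest instance of the idea; a minimal sketch, with the standard 1/sqrt(n) statistical error estimate:

        import numpy as np

        def mc_integrate(f, a, b, n=100_000, seed=0):
            # Estimate the integral of f over [a, b] as (b - a) * mean(f(X)),
            # X uniform on [a, b]; the standard error shrinks like 1/sqrt(n).
            x = np.random.default_rng(seed).uniform(a, b, n)
            fx = f(x)
            return (b - a) * fx.mean(), (b - a) * fx.std(ddof=1) / np.sqrt(n)

        print(mc_integrate(np.sin, 0.0, np.pi))   # exact value: 2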

  8. Computational methods for optical molecular imaging

    PubMed Central

    Chen, Duan; Wei, Guo-Wei; Cong, Wen-Xiang; Wang, Ge

    2010-01-01

    A new computational technique, the matched interface and boundary (MIB) method, is presented to model photon propagation in biological tissue for optical molecular imaging. Optical properties differ significantly among the organs of small animals, resulting in discontinuous coefficients in the diffusion equation model. The complex organ shapes of small animals induce geometric singularities in the model as well. The MIB method is designed as a dimension-splitting approach that decomposes a multidimensional interface problem into one-dimensional ones. The methodology simplifies the topological relations near an interface and is able to handle discontinuous coefficients and complex interfaces with geometric singularities. In the present MIB method, both the interface jump condition and the photon flux jump conditions are rigorously enforced at the interface location by using only the lowest-order jump conditions. The solution near the interface is smoothly extended across the interface so that central finite difference schemes can be employed without loss of accuracy. A wide range of numerical experiments are carried out to validate the proposed MIB method. Second-order convergence is maintained in all benchmark problems. Fourth-order convergence is also demonstrated for some three-dimensional problems. The robustness of the proposed method with respect to the strength of the linear term of the diffusion equation is also examined. The performance of the present approach is compared with that of the standard finite element method. The numerical study indicates that the proposed method is a potentially efficient and robust approach for optical molecular imaging. PMID:20485461

  9. Computational electromagnetic methods for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear system of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel, high-permittivity and low-frequency stable, internally combined volume-surface IE method was developed. The method not only applies to the analysis of high-permittivity objects; it is also the first IE tool that is stable when analyzing highly-inhomogeneous negative permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3.0 times less volume than Figure-8 coils. Uncertainty quantification (UQ): The location, volume, and depth of the stimulated region during TMS are often strongly affected by variability in the position and orientation of TMS coils, as well as anatomical differences between patients. A surrogate model-assisted UQ framework was developed and used to statistically characterize TMS depression therapy. The framework identifies key parameters that strongly affect TMS fields, and partially explains variations in TMS treatment responses.

  10. Computational predictive methods for fracture and fatigue

    NASA Technical Reports Server (NTRS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-01-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion-fatigue problems.

  11. The Contingent Valuation Method in Public Libraries

    ERIC Educational Resources Information Center

    Chung, Hye-Kyung

    2008-01-01

    This study aims to present a new model for measuring the economic value of public libraries, combining the dissonance minimizing (DM) and information bias minimizing (IBM) formats in contingent valuation (CV) surveys. The possible biases tied to conventional CV surveys are reviewed. An empirical study is presented to compare the model…

  12. Computational simulation methods for composite fracture mechanics

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1988-01-01

    Structural integrity, durability, and damage tolerance of advanced composites are assessed by studying damage initiation at various scales (micro, macro, and global) and accumulation and growth leading to global failure, quantitatively and qualitatively. In addition, various fracture toughness parameters associated with a typical damage and its growth must be determined. Computational structural analysis codes to aid the composite design engineer in performing these tasks were developed. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites given a prescribed damage. The general purpose finite element code MSC/NASTRAN was used to simulate the interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.

  13. Method and system for cardiac computed tomography

    SciTech Connect

    Harell, G.S.; Morehouse, C.C.; Seppi, E.J.

    1980-01-08

    A system and method are set forth enabling reconstruction of images of desired "frozen action" cross-sections of the heart or of other bodily organs or similar objects undergoing cyclic displacements. Utilizing a computed tomography scanning apparatus, data are acquired during one or more full rotational cycles and suitably stored. The data corresponding to the various angular projections can then be correlated with the desired portion of the object's cyclical motion by means of a reference signal associated with the motion, such as one derived from an electrocardiogram, where the heart is the object of interest. Data taking can also be limited to only the times when the desired portion of the cyclical motion is occurring. A sequential presentation of a plurality of said frozen-action cross-sections provides a motion picture of the moving object.
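
    A minimal sketch of the correlation step (hypothetical variable names; an actual scanner works with raw projection data and interpolation): keep only the projections whose acquisition time falls within a chosen phase window between consecutive R-peaks of the ECG reference signal.

        import numpy as np

        def gate_projections(proj_times, r_peaks, phase_lo=0.70, phase_hi=0.80):
            # Return indices of projections acquired in the target fraction
            # of the cardiac cycle, e.g. 70-80% (late diastole).
            r = np.asarray(r_peaks)
            keep = []
            for i, t in enumerate(proj_times):
                k = np.searchsorted(r, t) - 1      # last R-peak before t
                if 0 <= k < len(r) - 1:
                    phase = (t - r[k]) / (r[k + 1] - r[k])
                    if phase_lo <= phase < phase_hi:
                        keep.append(i)
            return keep

        print(gate_projections(np.linspace(0.0, 3.0, 60), r_peaks=[0.0, 0.8, 1.6, 2.4]))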

  14. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  15. Modules and methods for all photonic computing

    DOEpatents

    Schultz, David R.; Ma, Chao Hung

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

  16. An Efficient Method for Computing Alignment Diagnoses

    NASA Astrophysics Data System (ADS)

    Meilicke, Christian; Stuckenschmidt, Heiner

    Formal, logic-based semantics have long been neglected in ontology matching. As a result, almost all matching systems produce incoherent alignments of ontologies. In this paper we propose a new method for repairing such incoherent alignments that extends previous work on this subject. We describe our approach within the theory of diagnosis and introduce the notion of a local optimal diagnosis. We argue that computing a local optimal diagnosis is a reasonable choice for resolving alignment incoherence and suggest an efficient algorithm. This algorithm partially exploits incomplete reasoning techniques to increase runtime performance. Nevertheless, the completeness and optimality of the solution is still preserved. Finally, we test our approach in an experimental study and discuss results with respect to runtime and diagnostic quality.

  17. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 2 2010-07-01 2010-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  18. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 2 2011-07-01 2011-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  19. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 2 2012-07-01 2012-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  20. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 2 2014-07-01 2014-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  1. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 2 2013-07-01 2013-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  2. Computational Evaluation of the Traceback Method

    ERIC Educational Resources Information Center

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  3. Computational Evaluation of the Traceback Method

    ERIC Educational Resources Information Center

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  4. Computational Methods for Electron-Atom Collisions

    NASA Astrophysics Data System (ADS)

    Bartschat, Klaus

    2011-10-01

    In recent years, much progress has been achieved in calculating reliable cross-section data for electron scattering from atoms and ions, in particular quasi-one and quasi-two electron systems such as H, He, the alkalis, and the alkaline-earth metals. Until recently, however, accurate calculations of electron collisions with more complex targets, such as the heavy noble gases Ne-Xe, have remained a significant challenge to theory. We will give an overview of the computational methods presently used for ab initio electron-atom collision calculations, with particular emphasis on their strengths and weaknesses, range of applicability, and expected accuracy. In particular, we will illustrate with a few examples how the B-spline R-matrix (BSR) method with non-orthogonal orbitals has been able to dramatically improve the quality of theoretical datasets for oscillator strengths and in particular for electron collisions with the heavy noble gases. This work was performed in collaboration with Oleg Zatsarinny. It is supported by the United States National Science Foundation under PHY-0757755 and PHY-0903818, and the TeraGrid allocation TG-PHY090031.

  5. User's guide to SAC, a computer program for computing discharge by slope-area method

    USGS Publications Warehouse

    Fulford, Janice M.

    1994-01-01

    This user's guide contains information on using the slope-area program, SAC. SAC can be used to compute peak flood discharges from measurements of high-water marks along a stream reach. The slope-area method used by the program is the U.S. Geological Survey (USGS) procedure presented in Techniques of Water-Resources Investigations of the U.S. Geological Survey, book 3, chapter A2, "Measurement of Peak Discharge by the Slope-Area Method." The program uses input files whose formats are compatible with those used by the water-surface profile program (WSPRO) described in Federal Highway Administration publication FHWA-IP-89-027. The guide briefly describes the slope-area method, documents the input requirements and the output produced, and demonstrates the use of SAC.
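
    The hydraulic building block of the slope-area method is cross-section conveyance from Manning's equation. The sketch below shows only that single-section piece (US customary units, hence the 1.486 factor; the numbers are illustrative); SAC's full procedure averages conveyances over several cross sections and applies the velocity-head and friction-slope corrections described in book 3, chapter A2.

        def manning_discharge(n, area_ft2, hydraulic_radius_ft, slope, k=1.486):
            # Conveyance K = (k / n) * A * R^(2/3); discharge Q = K * sqrt(S).
            conveyance = (k / n) * area_ft2 * hydraulic_radius_ft ** (2.0 / 3.0)
            return conveyance * slope ** 0.5

        # Illustrative values only: n = 0.035, A = 850 ft^2, R = 6.2 ft, S = 0.002.
        print(manning_discharge(0.035, 850.0, 6.2, 0.002))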

  6. Predicting the Number of Public Computer Terminals Needed for an On-Line Catalog: A Queuing Theory Approach.

    ERIC Educational Resources Information Center

    Knox, A. Whitney; Miller, Bruce A.

    1980-01-01

    Describes a method for estimating the number of cathode-ray-tube terminals needed for public use of an online library catalog. The authors claim the method could also be used to estimate the needed number of microform readers for a computer output microform (COM) catalog. Formulae are included. (Author/JD)
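
    The article's formulae are not reproduced in the abstract, but the flavor of such a queuing calculation can be sketched with the standard M/M/c (Erlang C) model: grow the number of terminals until the probability that a patron has to wait falls below a target.

        from math import factorial

        def erlang_c(c, a):
            # Probability an arrival must wait in an M/M/c queue with
            # offered load a = arrival_rate / service_rate (requires a < c).
            top = (a ** c / factorial(c)) * (c / (c - a))
            return top / (sum(a ** k / factorial(k) for k in range(c)) + top)

        def terminals_needed(arrivals_per_hour, mean_session_hours, max_wait_prob=0.10):
            a = arrivals_per_hour * mean_session_hours
            c = int(a) + 1                      # smallest stable pool size
            while erlang_c(c, a) > max_wait_prob:
                c += 1
            return c

        # Illustrative: 30 patrons/hour, 15-minute sessions, <10% must wait.
        print(terminals_needed(30, 0.25))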

  7. Computing Potential Assessment in Atlanta Public Schools Education. Report Number 2.

    ERIC Educational Resources Information Center

    Cobbs, Henry L., Jr.; Wilmoth, James Noel

    The Computing Potential in Atlanta Public School Education (CPAPSE) was developed to determine teacher attitudes about computing potential as an instructional tool and to compare current practice with potential computing applications to determine the degree to which computer resources are being used in grades 2, 3, and 4. During the last week of…

  8. The Computer Literacy Dilemma in the Public Schools.

    ERIC Educational Resources Information Center

    Pipho, Chris

    1985-01-01

    The author discusses the following issues: What does computer literacy mean? What is required for certification of teachers and for graduation from high school? What are the ethical, social, and economic issues relating to computer use? (CT)

  9. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

    EPA Science Inventory

    The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

  10. Computational methods in sequence and structure prediction

    NASA Astrophysics Data System (ADS)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our result shows there exists a putative cis-regulatory element "AC(C/G)TAC(C)" in the upstream regions of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes, and this element might also be the binding site for the MYB class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning for the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of the 876 Arabidopsis genes, and (b) exhaustively scanning for all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, the "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to its occurrence within the genome. (c) We will introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework. With this framework we have developed a software package which is capable of designing novel protein structures at atomic resolution. This software package allows us to perform protein structure design with a flexible backbone. The backbone flexibility includes loop region relaxation as well as a secondary structure collective mode relaxation scheme. (Abstract shortened by UMI.)
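
    A minimal sketch of the motif-occurrence scanning step used in parts (a) and (b) of the first study (the sequences are made up; a real analysis also scans the reverse complement and normalizes counts against genome-wide background frequencies):

        import re

        # The putative element "AC(C/G)TAC(C)" described above, as a pattern.
        MOTIF = re.compile(r"AC[CG]TACC?")

        def motif_counts(upstream_seqs):
            # Count non-overlapping motif occurrences per upstream region.
            return {name: len(MOTIF.findall(seq))
                    for name, seq in upstream_seqs.items()}

        print(motif_counts({"PAL": "TTACCTACGG", "CHS": "ACGTACACGTAC"}))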

  11. Radiological Protection in Cone Beam Computed Tomography (CBCT). ICRP Publication 129.

    PubMed

    Rehani, M M; Gupta, R; Bartling, S; Sharp, G C; Pauwels, R; Berris, T; Boone, J M

    2015-07-01

    The objective of this publication is to provide guidance on radiological protection in the new technology of cone beam computed tomography (CBCT). Publications 87 and 102 dealt with patient dose management in computed tomography (CT) and multi-detector CT. The new applications of CBCT and the associated radiological protection issues are substantially different from those of conventional CT. The perception that CBCT involves lower doses was only true in initial applications. CBCT is now used widely by specialists who have little or no training in radiological protection. This publication provides recommendations on radiation dose management directed at different stakeholders, and covers principles of radiological protection, training, and quality assurance aspects. Advice on appropriate use of CBCT needs to be made widely available. Advice on optimisation of protection when using CBCT equipment needs to be strengthened, particularly with respect to the use of newer features of the equipment. Manufacturers should standardise radiation dose displays on CBCT equipment to assist users in optimisation of protection and comparisons of performance. Additional challenges to radiological protection are introduced when CBCT-capable equipment is used for both fluoroscopy and tomography during the same procedure. Standardised methods need to be established for tracking and reporting of patient radiation doses from these procedures. The recommendations provided in this publication may evolve in the future as CBCT equipment and applications evolve. As with previous ICRP publications, the Commission hopes that imaging professionals, medical physicists, and manufacturers will use the guidelines and recommendations provided in this publication for implementation of the Commission's principle of optimisation of protection of patients and medical workers, with the objective of keeping exposures as low as reasonably achievable, taking into account economic and societal factors, and consistent with achieving the necessary medical outcomes. PMID:26116562

  12. [Method of computer processing of genealogic material].

    PubMed

    Lavrovskiĭ, V A; Revazov, A A

    1977-01-01

    The essential elements of the computer algorithms - construction of pedigree patterns and determination of inbreeding coefficients - are described. The main attention is given to processing the field material for the computer system. The overall program is composed of three subprograms: pedigree construction, estimation of F, and evaluation of the quantity of information concerning the ancestors. The principal algorithm of the pedigree subprogram is based on computing a parent's computer address from the proband's code. This step is repeated from generation to generation until the information on ancestors comes to zero. Common ancestors for Wright's inbreeding estimation are found by sorting and comparing the husband's and wife's ancestors. The inbreeding coefficient is formed as the kinship coefficient of the parents. PMID:892439
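
    The last point, that the inbreeding coefficient is obtained as the kinship coefficient of the parents, can be sketched with the standard recursive kinship computation. The four-individual pedigree below is hypothetical, and IDs are assumed to be assigned so that parents precede their offspring, mirroring the generation-by-generation address walk described above.

        # Pedigree: id -> (sire, dam); None marks an unknown parent.
        PED = {1: (None, None), 2: (None, None), 3: (1, 2), 4: (1, 2)}

        def kinship(x, y):
            # Recursive kinship coefficient; F(offspring) = kinship(sire, dam).
            if x is None or y is None:
                return 0.0
            if x == y:
                s, d = PED[x]
                return 0.5 * (1.0 + kinship(s, d))
            if x < y:                  # always expand the younger individual
                x, y = y, x
            s, d = PED[x]
            return 0.5 * (kinship(s, y) + kinship(d, y))

        # Full sibs 3 and 4: kinship 0.25, so their offspring would have F = 0.25.
        print(kinship(3, 4))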

  13. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  14. Computers, City and Suburb: A Study of New York City and Westchester County Public Schools.

    ERIC Educational Resources Information Center

    Picciano, Anthony

    1991-01-01

    Use of computers for instruction in public schools in New York City is compared with that in public schools in suburban Westchester County. Survey responses from 90 city and 46 suburban schools emphasize hardware availability, the nature of instructional software, and perceived problems/progress in integrating computers into the curriculum. (SLD)

  15. Saving lives: a computer simulation game for public education about emergencies

    SciTech Connect

    Morentz, J.W.

    1985-01-01

    One facet of the Information Revolution in which the nation finds itself involves the utilization of computers, video systems, and a variety of telecommunications capabilities by those who must cope with emergency situations. Such technologies possess a significant potential for performing emergency public education and transmitting key information that is essential for survival. An "Emergency Public Information Competitive Challenge Grant," under the aegis of the Federal Emergency Management Agency (FEMA), has sponsored an effort to use computer technology - both large, time-sharing systems and small personal computers - to develop computer games which will help teach techniques of emergency management to the public at large. 24 references.

  16. Strengthening Computer Technology Programs. Special Publication Series No. 49.

    ERIC Educational Resources Information Center

    McKinney, Floyd L., Comp.

    Three papers present examples of strategies used by developing institutions and historically black colleges to strengthen computer technology programs. "Promoting Industry Support in Developing a Computer Technology Program" (Albert D. Robinson) describes how the Washtenaw Community College (Ann Arbor, Michigan) Electrical/Electronics Department

  17. Computer Competencies for All Educators in North Carolina Public Schools.

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh.

    To assist school systems in establishing computer competencies for inservice teacher training and personnel hiring guidelines, the North Carolina State Board of Education in 1985 approved the recommendations of a state task force, and identified three levels of computer competencies for teachers (K-12), i.e., competencies needed by all educators,

  18. Universal Tailored Access: Automating Setup of Public and Classroom Computers.

    ERIC Educational Resources Information Center

    Whittaker, Stephen G.; Young, Ted; Toth-Cohen, Susan

    2002-01-01

    This article describes a setup smart access card that enables users with visual impairments to customize magnifiers and screen readers on computers by loading the floppy disk into the computer and finding and pressing two successive keys. A trial with four elderly users found instruction took about 15 minutes. (Contains 3 references.) (CR)

  19. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.
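
    A loose sketch of the general idea of replacing repeated nonlinear runs with a linear state-space surrogate fitted from one excitation record; the toy system, the white-noise input, and the least-squares fit below are illustrative assumptions, not the patented method itself:

        import numpy as np

        rng = np.random.default_rng(0)

        # toy "truth" system standing in for responses of the nonlinear model
        A_true = np.array([[0.9, 0.1], [-0.2, 0.8]])
        B_true = np.array([[0.0], [1.0]])

        # one excitation record (the patent uses a family of orthogonal
        # functions as inputs; white noise is a simple stand-in here)
        T = 200
        u = rng.standard_normal((T, 1))
        x = np.zeros((T + 1, 2))
        for k in range(T):
            x[k + 1] = A_true @ x[k] + B_true @ u[k]

        # least-squares fit of x_{k+1} = A x_k + B u_k from the single record
        Z = np.hstack([x[:-1], u])                 # regressors [x_k, u_k]
        theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
        A_fit, B_fit = theta[:2].T, theta[2:].T
        print(np.allclose(A_fit, A_true), np.allclose(B_fit, B_true))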

  20. Numerical computation of polynomial zeros by means of Aberth's method

    NASA Astrophysics Data System (ADS)

    Bini, Dario

    1996-02-01

    An algorithm for computing polynomial zeros, based on Aberth's method, is presented. The starting approximations are chosen by means of a suitable application of Rouché's theorem. More precisely, an integer q >= 1 and a set of annuli A_i, i = 1,...,q, in the complex plane are determined, together with the number k_i of zeros of the polynomial contained in each annulus A_i. As starting approximations we choose k_i complex numbers lying on a suitable circle contained in the annulus A_i, for i = 1,...,q. The computation of Newton's correction is performed in such a way that overflow situations are removed. A suitable stop condition, based on a rigorous backward rounding error analysis, guarantees that the computed approximations are the exact zeros of a "nearby" polynomial. This implies the backward stability of our algorithm. We provide a Fortran 77 implementation of the algorithm which is robust against overflow and allows us to deal with polynomials of any degree, not necessarily monic, whose zeros and coefficients are representable as floating point numbers. In all the tests performed with more than 1000 polynomials having degrees from 10 up to 25,600 and randomly generated coefficients, the Fortran 77 implementation of our algorithm computed approximations to all the zeros within the relative precision allowed by the classical conditioning theorems with 11.1 average iterations. In the worst case the number of iterations needed has been at most 17. Comparisons with available public domain software and with the algorithm PA16AD of Harwell are performed and show the effectiveness of our approach. A multiprecision implementation in MATHEMATICA is presented together with the results of the numerical tests performed.
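
    The core Aberth iteration is compact enough to sketch in Python/NumPy. The fragment below is a simplified illustration: it uses naive starting points on a single circle rather than the paper's Rouché-based annuli, and a plain tolerance test rather than its rigorous backward-error stop condition:

        import numpy as np

        def aberth(coeffs, tol=1e-12, max_iter=100):
            """Simultaneously approximate all zeros of a polynomial.

            coeffs: highest-degree-first coefficients, as for numpy.polyval.
            """
            p = np.asarray(coeffs, dtype=complex)
            dp = np.polyder(p)
            n = len(p) - 1
            # naive starting points on one circle (the paper instead places
            # them on circles inside annuli located via Rouche's theorem)
            z = 1.5 * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)
            for _ in range(max_iter):
                w = np.polyval(p, z) / np.polyval(dp, z)    # Newton corrections
                diff = z[:, None] - z[None, :]              # z_i - z_j
                np.fill_diagonal(diff, np.inf)              # exclude j == i
                s = (1.0 / diff).sum(axis=1)
                dz = w / (1.0 - w * s)                      # Aberth correction
                z -= dz
                if np.max(np.abs(dz)) < tol:
                    break
            return z

        # zeros of z^3 - 1: the three cube roots of unity
        print(np.sort_complex(aberth([1, 0, 0, -1])))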

  1. Accreditation: A Method for Evaluating Public Park and Recreation Systems.

    ERIC Educational Resources Information Center

    Twardzik, Louis F.

    1987-01-01

    This article considers the concept of accreditation as a proper method of evaluating public park and recreation systems. Arguments for accreditation are presented, and the system used to evaluate college park and recreation curricula and administration is described. (MT)

  2. A method of billing third generation computer users

    NASA Technical Reports Server (NTRS)

    Anderson, P. N.; Hyter, D. R.

    1973-01-01

    A method is presented for charging users for the processing of their applications on third-generation digital computer systems. For background purposes, problems and goals in billing on third-generation systems are discussed. Detailed formulas are derived based on expected utilization and computer component cost. These formulas are then applied to a specific computer system (UNIVAC 1108). The method, although possessing some weaknesses, is presented as a definite improvement over second-generation billing methods.

  3. 77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-18

    ... National Institute of Standards and Technology Notice of Public Meeting--Cloud Computing and Big Data Forum...) announces a Cloud Computing and Big Data Forum and Workshop to be held on Tuesday, January 15, Wednesday... workshop. The NIST Cloud Computing and Big Data Forum and Workshop will bring together leaders...

  4. 77 FR 26509 - Notice of Public Meeting-Cloud Computing Forum & Workshop V

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-04

    ... National Institute of Standards and Technology Notice of Public Meeting--Cloud Computing Forum & Workshop V... announces the Cloud Computing Forum & Workshop V to be held on Tuesday, Wednesday and Thursday, June 5, 6... provide information on the U.S. Government (USG) Cloud Computing Technology Roadmap initiative....

  5. Excellence in Computational Biology and Informatics — EDRN Public Portal

    Cancer.gov

    9th Early Detection Research Network (EDRN) Scientific Workshop. Excellence in Computational Biology and Informatics: Sponsored by the EDRN Data Sharing Subcommittee Moderator: Daniel Crichton, M.S., NASA Jet Propulsion Laboratory

  6. Awareness of Accessibility Barriers in Computer-Based Instructional Materials and Faculty Demographics at South Dakota Public Universities

    ERIC Educational Resources Information Center

    Olson, Christopher

    2013-01-01

    Advances in technology and course delivery methods have enabled persons with disabilities to enroll in higher education at an increasing rate. Federal regulations state persons with disabilities must be granted equal access to the information contained in computer-based instructional materials, but faculty at the six public universities in South

  8. Computational Methods for Analyzing Health News Coverage

    ERIC Educational Resources Information Center

    McFarlane, Delano J.

    2011-01-01

    Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently

  10. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.
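
    A minimal sketch of the surrogate-plus-genetic-algorithm loop described above, using synthetic data in place of the NASA oxidation measurements; the model sizes, GA settings, and toy objective are assumptions made here for illustration:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # synthetic stand-in for the oxidation data: x = (chemistry fractions,
        # temperature), y = attack parameter Ka (lower is better)
        X = rng.uniform(0.0, 1.0, size=(400, 3))
        y = (X[:, 0] - 0.3) ** 2 + 2.0 * (X[:, 1] - 0.7) ** 2 + 0.5 * X[:, 2]

        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                                 random_state=1).fit(X, y)

        # minimal genetic algorithm searching the surrogate's input space
        pop = rng.uniform(0.0, 1.0, size=(40, 3))
        for _ in range(60):
            fitness = surrogate.predict(pop)
            parents = pop[np.argsort(fitness)[:20]]            # truncation selection
            children = parents[rng.integers(0, 20, 40)].copy() # resample parents
            children += rng.normal(0.0, 0.05, children.shape)  # mutation
            pop = np.clip(children, 0.0, 1.0)

        best = pop[np.argmin(surrogate.predict(pop))]
        print("optimized composition/temperature:", best)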

  11. Computational Methods for Collisional Plasma Physics

    SciTech Connect

    Lasinski, B F; Larson, D J; Hewett, D W; Langdon, A B; Still, C H

    2004-02-18

    Modeling the high density, high temperature plasmas produced by intense laser or particle beams requires accurate simulation of a large range of plasma collisionality. Current simulation algorithms accurately and efficiently model collisionless and collision-dominated plasmas. The important parameter regime between these extremes, semi-collisional plasmas, has been inadequately addressed to date. LLNL efforts to understand and harness high energy-density physics phenomena for stockpile stewardship require accurate simulation of such plasmas. We have made significant progress towards our goal: building a new modeling capability to accurately simulate the full range of collisional plasma physics phenomena. Our project has developed a computer model using a two-pronged approach: a new adaptive-resolution, "smart" particle-in-cell algorithm, complex particle kinetics (CPK); and development of a robust 3D massively parallel plasma production code, Z3, with collisional extensions. Our new CPK algorithms expand the function of point particles in traditional plasma PIC models by including finite size and internal dynamics. This project has enhanced LLNL's competency in computational plasma physics and contributed to LLNL's expertise and forefront position in plasma modeling. The computational models developed will be applied to plasma problems of interest to LLNL's stockpile stewardship mission. Such problems include semi-collisional behavior in hohlraums, high-energy-density physics experiments, and the physics of high altitude nuclear explosions (HANE). Over the course of this LDRD project, the world's largest fully electromagnetic PIC calculation was run, enabled by the adaptation of Z3 to the Advanced Simulation and Computing (ASCI) White system. This milestone calculation simulated an entire laser illumination speckle, brought new realism to laser-plasma interaction simulations, and was directly applicable to laser target physics. For the first time, magnetic fields driven by Raman scatter have been observed. Also, Raman rescatter was observed in 2D. This code and its increased suite of dedicated diagnostics are now playing a key role in studies of short-pulse, high-intensity laser matter interactions. In addition, a momentum-conserving electron collision algorithm was incorporated into Z3. Finally, Z3's portability across diverse MPP platforms enabled it to serve the LLNL computing community as a tool for effectively utilizing new machines.

  12. Computational methods and opportunities for phosphorylation network medicine

    PubMed Central

    Chen, Yian Ann; Eschrich, Steven A.

    2014-01-01

    Protein phosphorylation, one of the most ubiquitous post-translational modifications (PTM) of proteins, is known to play an essential role in cell signaling and regulation. With the increasing understanding of the complexity and redundancy of cell signaling, there is a growing recognition that targeting the entire network or system could be a necessary and advantageous strategy for treating cancer. Protein kinases, the proteins that add a phosphate group to the substrate proteins during phosphorylation events, have become one of the largest groups of druggable targets in cancer therapeutics in recent years. Kinase inhibitors are being regularly used in clinics for cancer treatment. This therapeutic paradigm shift in cancer research is partly due to the generation and availability of high-dimensional proteomics data. Generation of this data, in turn, is enabled by increased use of mass-spectrometry (MS)-based or other high-throughput proteomics platforms as well as companion public databases and computational tools. This review briefly summarizes the current state and progress on phosphoproteomics identification, quantification, and platform related characteristics. We review existing database resources, computational tools, methods for phosphorylation network inference, and ultimately demonstrate the connection to therapeutics. Finally, many research opportunities exist for bioinformaticians or biostatisticians based on developments and limitations of the current and emerging technologies. PMID:25530950

  13. Atomistic Method Applied to Computational Modeling of Surface Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Abel, Phillip B.

    2000-01-01

    The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. The computational tool, which is based on the BFS (Bozzolo, Ferrante, and Smith) method for the calculation of the energetics, consists of a small number of simple PC-based computer codes that deal with the different aspects of surface alloy formation. Two analysis modes are available within this package. The first mode provides an atom-by-atom description of real and virtual stages during the process of surface alloying, based on the construction of catalogues of configurations where each configuration describes one possible atomic distribution. BFS analysis of this catalogue provides information on accessible states, possible ordering patterns, and details of island formation or film growth. More importantly, it provides insight into the evolution of the system. Software developed by the Computational Materials Group allows for the study of an arbitrary number of elements forming surface alloys, including an arbitrary number of surface atomic layers. The second mode involves large-scale temperature-dependent computer simulations that use the BFS method for the energetics and provide information on the dynamic processes during surface alloying. These simulations require the implementation of Monte-Carlo-based codes with high efficiency within current workstation environments. This methodology capitalizes on the advantages of the BFS method: there are no restrictions on the number or type of elements or on the type of crystallographic structure considered.
    This removes any restrictions in the definition of the configuration catalogues used in the analytical calculations, thus allowing for the study of arbitrary ordering patterns, ultimately leading to the actual surface alloy structure. Moreover, the Monte Carlo numerical technique used for the large-scale simulations allows for a detailed visualization of the simulated process, the main advantage of this type of analysis being the ability to understand the underlying features that drive these processes. Because of the simplicity of the BFS method for the energetics used in these calculations, a detailed atom-by-atom analysis can be performed at any

  14. Computational complexity for the two-point block method

    NASA Astrophysics Data System (ADS)

    See, Phang Pei; Majid, Zanariah Abdul

    2014-12-01

    In this paper, we discuss and compare the computational complexity of the two-point block method and a one-point method of Adams type. The computational complexity of both methods is determined from the number of arithmetic operations performed and expressed in O(n). The two methods are used to solve two-point second-order boundary value problems directly, implemented with a variable step size strategy adapted to the multiple shooting technique via a three-step iterative method. Two numerical examples are tested. The results show that the computational complexity of these methods is a reliable estimate of their cost in terms of execution time. We conclude that the two-point block method has better computational performance compared to the one-point method as the total number of steps grows.

  15. Computational Methods for Jet Noise Simulation

    NASA Technical Reports Server (NTRS)

    Goodrich, John W. (Technical Monitor); Hagstrom, Thomas

    2003-01-01

    The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.

  16. Non-numerical methods on parallel computers

    NASA Astrophysics Data System (ADS)

    Flanders, P. M.

    1982-06-01

    Analysis of computation on parallel computers reveals an interleaving of local (often numeric) processing and data re-organisation. The local processing is readily handled since it is contained within individual processors and easily expressed in terms of element-by-element operations on whole arrays. The intervening data re-organisation accounts for most of the complexity and interest in parallel processing; it is important that operations and techniques are developed for these non-numeric tasks which permit the natural and concise description of algorithms and are readily implemented on parallel hardware. Basic techniques for data organisation and movement are described and illustrated in some numeric and non-numeric problems. Various aspects of matching problems onto arrays of parallel hardware, such as the ICL DAP, are considered. An approach is outlined whereby more sophisticated solutions to certain problems, such as the fast Fourier transform and sorting, are obtained by working with a specification of the mapping of data onto the store rather than with the physical data organisation.

  17. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  18. The Battle to Secure Our Public Access Computers

    ERIC Educational Resources Information Center

    Sendze, Monique

    2006-01-01

    Securing public access workstations should be a significant part of any library's network and information-security strategy because of the sensitive information patrons enter on these workstations. As the IT manager for the Johnson County Library in Kansas City, Kan., this author is challenged to make sure that thousands of patrons get the access

  20. Funding Methods for Public Higher Education in the SREB States.

    ERIC Educational Resources Information Center

    Caruthers, J. Kent; Marks, Joseph L.

    This report provides background information for discussion of major higher education finance issues and options. Terminology for comparing funding methods for public higher education across states is introduced. An overview of the evolution of the objectives of funding methods over time is provided, and detailed profiles of the major

  1. Computer Technology Standards of Learning for Virginia's Public Schools

    ERIC Educational Resources Information Center

    Virginia Department of Education, 2005

    2005-01-01

    The Computer/Technology Standards of Learning identify and define the progressive development of essential knowledge and skills necessary for students to access, evaluate, use, and create information using technology. They provide a framework for technology literacy and demonstrate a progression from physical manipulation skills for the use of

  2. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  3. The ACLS Survey of Scholars: Views on Publications, Computers, Libraries.

    ERIC Educational Resources Information Center

    Morton, Herbert C.; Price, Anne Jamieson

    1986-01-01

    Reviews results of a survey by the American Council of Learned Societies (ACLS) of 3,835 scholars in the humanities and social sciences who are working both in colleges and universities and outside the academic community. Areas highlighted include professional reading, authorship patterns, computer use, and library use. (LRW)

  4. Classical versus Computer Algebra Methods in Elementary Geometry

    ERIC Educational Resources Information Center

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic

  5. Methods for teaching geometric modelling and computer graphics

    SciTech Connect

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers approaches to teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers, and users of CAD and computer-aided research systems. A bibliography that can be used to prepare lectures and practical classes is included. 37 refs., 1 tab.

  7. 12 CFR 227.25 - Unfair balance computation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    § 227.25 Unfair balance computation method. (a) General rule. Except as provided in... adjustments under 12 CFR 226.12 or 12 CFR 226.13; or (2) adjustments to finance charges as a result of the return of...

  8. Overview of computational structural methods for modern military aircraft

    NASA Technical Reports Server (NTRS)

    Kudva, J. N.

    1992-01-01

    Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

  9. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  10. Lattice gas methods for computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1995-01-01

    This paper presents the lattice gas solution to the category 1 problems of the ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics. The first and second problems were solved for Delta t = Delta x = 1, and additionally the second problem was solved for Delta t = 1/4 and Delta x = 1/2. The results are striking: even for these large time and space grids the lattice gas numerical solutions are almost indistinguishable from the analytical solutions. A simple bug in the Mathematica code was found in the solutions submitted for comparison, and the comparison plots shown at the end of this volume show the bug. An Appendix to the present paper shows an example lattice gas solution with and without the bug.

  11. Computational Methods for Modification of Metabolic Networks

    PubMed Central

    Tamura, Takeyuki; Lu, Wei; Akutsu, Tatsuya

    2015-01-01

    In metabolic engineering, modification of metabolic networks is an important biotechnology and a challenging computational task. In metabolic network modification, networks are modified by adding new enzymes and/or knocking out genes so as to maximize biomass production with minimal side-effects. In this mini-review, we briefly review constraint-based formalizations of the Minimum Reaction Cut (MRC) problem, where a minimum set of reactions is deleted so that the target compound becomes non-producible, from the viewpoints of flux balance analysis (FBA), elementary modes (EM), and Boolean models. The Minimum Reaction Insertion (MRI) problem, where a minimum set of reactions is added so that the target compound newly becomes producible, is also explained with a similar formalization approach. The relation between the accuracy of the models and the risk of overfitting is also discussed. PMID:26106462
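
    For the Boolean model mentioned above, a minimum reaction cut can be found by brute force on a toy network; the network, source set, and exhaustive search below are illustrative assumptions, not the formalizations reviewed in the paper:

        from itertools import combinations

        # toy Boolean metabolic network: reaction -> (substrates, products)
        REACTIONS = {
            "r1": ({"A"}, {"B"}),
            "r2": ({"A"}, {"C"}),
            "r3": ({"B", "C"}, {"T"}),
            "r4": ({"B"}, {"T"}),
        }
        SOURCES = {"A"}

        def producible(removed):
            """Fixed-point iteration: a reaction fires once all substrates exist."""
            have = set(SOURCES)
            changed = True
            while changed:
                changed = False
                for name, (subs, prods) in REACTIONS.items():
                    if name not in removed and subs <= have and not prods <= have:
                        have |= prods
                        changed = True
            return have

        def minimum_reaction_cut(target):
            names = sorted(REACTIONS)
            for k in range(1, len(names) + 1):      # smallest cuts first
                for cut in combinations(names, k):
                    if target not in producible(set(cut)):
                        return set(cut)
            return None

        print(minimum_reaction_cut("T"))   # {'r1'}: removing r1 starves B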

  12. A comparison of computational methods for identifying virulence factors.

    PubMed

    Zheng, Lu-Lu; Li, Yi-Xue; Ding, Juan; Guo, Xiao-Kui; Feng, Kai-Yan; Wang, Ya-Jun; Hu, Le-Le; Cai, Yu-Dong; Hao, Pei; Chou, Kuo-Chen

    2012-01-01

    Bacterial pathogens continue to threaten public health worldwide today. Identification of bacterial virulence factors can help to find novel drug/vaccine targets against pathogenicity. It can also help to reveal the mechanisms of the related diseases at the molecular level. With the explosive growth in protein sequences generated in the postgenomic age, it is highly desired to develop computational methods for rapidly and effectively identifying virulence factors according to their sequence information alone. In this study, based on the protein-protein interaction networks from the STRING database, a novel network-based method was proposed for identifying the virulence factors in the proteomes of UPEC 536, UPEC CFT073, P. aeruginosa PAO1, L. pneumophila Philadelphia 1, C. jejuni NCTC 11168 and M. tuberculosis H37Rv. Evaluated on the same benchmark datasets derived from the aforementioned species, the identification accuracies achieved by the network-based method were around 0.9, significantly higher than those by the sequence-based methods such as BLAST, feature selection and VirulentPred. Further analysis showed that the functional associations such as the gene neighborhood and co-occurrence were the primary associations between these virulence factors in the STRING database. The high success rates indicate that the network-based method is quite promising. The novel approach holds high potential for identifying virulence factors in many other various organisms as well because it can be easily extended to identify the virulence factors in many other bacterial species, as long as the relevant significant statistical data are available for them. PMID:22880014
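
    A toy guilt-by-association scorer conveys the flavor of network-based identification: candidates are ranked by the confidence-weighted fraction of their interaction partners that are known virulence factors. The edge list, weights, and scoring rule are illustrative assumptions, not the paper's STRING-based method:

        # toy protein interaction network: (protein, protein) -> confidence
        EDGES = {
            ("p1", "p2"): 0.9, ("p1", "p3"): 0.4,
            ("p2", "p4"): 0.8, ("p3", "p4"): 0.7, ("p4", "p5"): 0.6,
        }
        KNOWN_VIRULENCE = {"p2", "p3"}

        def neighbours(p):
            for (a, b), w in EDGES.items():
                if p == a:
                    yield b, w
                elif p == b:
                    yield a, w

        def score(p):
            """Weighted fraction of neighbours that are known virulence factors."""
            total = hits = 0.0
            for q, w in neighbours(p):
                total += w
                hits += w if q in KNOWN_VIRULENCE else 0.0
            return hits / total if total else 0.0

        for p in sorted({"p1", "p4", "p5"}, key=score, reverse=True):
            print(p, round(score(p), 3))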

  13. COMSAC: Computational Methods for Stability and Control. Part 1

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: a NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?

  14. Study of basic computer competence among public health nurses in Taiwan.

    PubMed

    Yang, Kuei-Feng; Yu, Shu; Lin, Ming-Sheng; Hsu, Chia-Ling

    2004-03-01

    Rapid advances in information technology and media have made distance learning on the Internet possible. This new model of learning allows greater efficiency and flexibility in knowledge acquisition. Since basic computer competence is a prerequisite for this new learning model, this study was conducted to examine the basic computer competence of public health nurses in Taiwan and explore factors influencing computer competence. A national cross-sectional randomized study was conducted with 329 public health nurses. A questionnaire was used to collect data and was delivered by mail. Results indicate that the basic computer competence of public health nurses in Taiwan still needs to be improved (mean = 57.57 ± 2.83, on a total score range of 26-130). Among the five most frequently used software programs, nurses were most knowledgeable about Word and least knowledgeable about PowerPoint. Stepwise multiple regression analysis revealed eight variables (weekly number of hours spent online at home, weekly amount of time spent online at work, weekly frequency of computer use at work, previous computer training, computer at workplace and Internet access, job position, education level, and age) that significantly influenced computer competence and together accounted for 39.0% of the variance. In conclusion, greater computer competence, broader educational programs regarding computer technology, and a greater emphasis on computers at work are necessary to increase the usefulness of distance learning via the Internet in Taiwan. Building a user-friendly environment is important in developing this new media model of learning for the future. PMID:15136958

  15. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, college information construction consists mainly of campus networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing, and grid computing in which data are stored in the cloud and software and services are placed in the cloud, built on top of various standards and protocols, and accessible through all kinds of equipment. This article introduces cloud computing and its functions, analyzes the existing problems of college network resource management, and applies cloud computing technology and methods to the construction of a college information-sharing platform.

  16. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (standard and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data the best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with both GA and ACO methods for AD microarray data. Conclusion Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by GA and ACO methods. PMID:23369541
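
    The three distance formulas compared in the study are easy to state in code; a minimal sketch with toy expression profiles (the data are assumptions made here for illustration):

        import numpy as np

        def pearson_distance(x, y):
            return 1.0 - np.corrcoef(x, y)[0, 1]

        def euclidean_distance(x, y):
            return np.linalg.norm(x - y)

        def squared_euclidean_distance(x, y):
            return np.sum((x - y) ** 2)

        # two toy gene expression profiles
        g1 = np.array([2.0, 4.0, 6.0, 8.0])
        g2 = np.array([1.5, 3.9, 6.2, 7.8])
        for d in (pearson_distance, euclidean_distance, squared_euclidean_distance):
            print(d.__name__, round(float(d(g1, g2)), 4))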

  17. Transonic Flow Computations Using Nonlinear Potential Methods

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.

  18. Original computer method for the experimental data processing in photoelasticity

    NASA Astrophysics Data System (ADS)

    Oanta, Emil M.; Panait, Cornel; Barhalescu, Mihaela; Sabau, Adrian; Dumitrache, Constantin; Dascalescu, Anca-Elena

    2015-02-01

    Optical methods in experimental mechanics are important because their results are accurate and they may be used both for full-field interpretation and for analysis of the rapid local variation of the stresses produced by stress concentrators. Researchers have conceived several graphical, analytical, and numerical methods for experimental data reduction. The paper presents an original computer method employed to compute the analytic functions of the isostatics, using the pattern of isoclinics of a photoelastic model or coating. The resulting software instrument may be included in hybrid models consisting of analytical, numerical, and experimental studies. The computer-based integration of the results of these studies offers a higher level of understanding of the phenomena. A thorough examination of the sources of inaccuracy of this computer-based numerical method was carried out, and the conclusions were tested using the original computer code which implements the algorithm.
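
    Isostatics are trajectories everywhere tangent to a principal stress direction, so given an isoclinic angle field theta(x, y), one family can be traced by integrating dy/dx = tan(theta). A minimal sketch, assuming an analytic isoclinic field in place of measured fringe data (the authors' actual algorithm and accuracy analysis are more elaborate):

        import numpy as np

        def isoclinic_angle(x, y):
            # assumed analytic isoclinic field (radians) standing in for the
            # pattern measured on a photoelastic model or coating
            return 0.3 * np.sin(x) + 0.2 * y

        def trace_isostatic(x0, y0, dx=0.01, steps=500):
            """Trace one isostatic by Euler integration of dy/dx = tan(theta).

            The orthogonal family follows from using theta + pi/2 instead.
            """
            xs, ys = [x0], [y0]
            for _ in range(steps):
                theta = isoclinic_angle(xs[-1], ys[-1])
                xs.append(xs[-1] + dx)
                ys.append(ys[-1] + dx * np.tan(theta))
            return np.array(xs), np.array(ys)

        xs, ys = trace_isostatic(0.0, 0.0)
        print(xs[-1], ys[-1])     # end point of the traced trajectory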

  19. Wavelet methods in computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Glowinski, R.; Periaux, J.; Ravachol, M.; Pan, T. W.; Wells, R. O.; Zhou, X.

    We discuss in this paper the numerical solution of boundary value problems for partial differential equations by methods relying on compactly supported wavelet approximations. After defining compactly supported wavelets and stating their main properties, we discuss their application to boundary value problems for partial differential equations, giving particular attention to the treatment of the boundary conditions. Finally, we discuss the application of wavelets to the solution of the Navier-Stokes equations for incompressible viscous fluids.
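
    As a small illustration of working with compactly supported wavelets, the sketch below expands a sampled 1-D field in Daubechies (db4) wavelets and reconstructs it using the PyWavelets package; this shows only the approximation machinery, not the paper's treatment of boundary conditions or its Navier-Stokes solver:

        import numpy as np
        import pywt  # PyWavelets

        # sample a smooth 1-D "flow" field with a fine-scale component
        x = np.linspace(0.0, 1.0, 256)
        u = np.sin(2 * np.pi * x) + 0.1 * np.sin(16 * np.pi * x)

        coeffs = pywt.wavedec(u, "db4", level=4)     # multilevel analysis
        u_rec = pywt.waverec(coeffs, "db4")          # synthesis

        print(np.max(np.abs(u - u_rec[:u.size])))    # ~ machine precision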

  20. Statistical and Computational Methods for Genetic Diseases: An Overview

    PubMed Central

    Di Taranto, Maria Donata

    2015-01-01

    The identification of causes of genetic diseases has been carried out by several approaches with increasing complexity. Innovation of genetic methodologies leads to the production of large amounts of data that needs the support of statistical and computational methods to be correctly processed. The aim of the paper is to provide an overview of statistical and computational methods paying attention to methods for the sequence analysis and complex diseases. PMID:26106440

  1. [Teaching quantitative methods in public health: the EHESP experience].

    PubMed

    Grimaud, Olivier; Astagneau, Pascal; Desvarieux, Moïse; Chambaud, Laurent

    2014-01-01

    Many scientific disciplines, including epidemiology and biostatistics, are used in the field of public health. These quantitative sciences are fundamental tools for the practice of future professionals. What, then, should be the minimum training in quantitative sciences common to all future public health professionals? By comparing the teaching models developed at Columbia University with those of the National School of Public Health in France, the authors recognize the need to adapt teaching to the specific competencies required for each profession. They insist that all public health professionals, whatever their future career, should be familiar with quantitative methods in order to ensure that decision-making is based on a reflective and critical use of quantitative analysis. PMID:25629671

  2. Analytical and numerical methods; advanced computer concepts

    SciTech Connect

    Lax, P D

    1991-03-01

    This past year, two projects have been completed and a new one is under way. First, in joint work with R. Kohn, we developed a numerical algorithm to study the blowup of solutions to equations with certain similarity transformations. In the second project, the adaptive mesh refinement code of Berger and Colella for shock hydrodynamic calculations has been parallelized, and numerical studies using two different shared-memory machines have been done. My current effort is directed towards the development of Cartesian mesh methods to solve PDEs with complicated geometries. Most of the coming year will be spent on this project, which is joint work with Prof. Randy Leveque at the University of Washington in Seattle.

  3. Method for transferring data from an unsecured computer to a secured computer

    DOEpatents

    Nilsen, Curt A. (Castro Valley, CA)

    1997-01-01

    A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
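
    A minimal sketch of the send-twice-and-compare idea, with digests standing in for the patent's error-detection details; channel_read is a hypothetical callable and the whole fragment is illustrative, not the patented protocol:

        import hashlib

        def received_ok(first: bytes, second: bytes) -> bool:
            """Compare a transmission with its retransmission via digests."""
            return hashlib.sha256(first).digest() == hashlib.sha256(second).digest()

        def transfer(channel_read):
            # channel_read() is a hypothetical callable performing one complete
            # receive of the data from the unsecured computer
            first = channel_read()
            second = channel_read()          # the retransmission
            if not received_ok(first, second):
                # stands in for the warning device coupled to the secured computer
                raise RuntimeError("warning: error introduced in transit")
            return first

        data = [b"payload", b"payload"]
        print(transfer(lambda: data.pop(0)))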

  4. Multiscale methods for computational RNA enzymology

    PubMed Central

    Panteva, Maria T.; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K.; Kuechler, Erich R.; Giambaşu, George M.; Lee, Tai-Sung; York, Darrin M.

    2016-01-01

    RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multi-dimensional “problem space” of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues and bond breaking/forming in the chemical steps of the reaction. The goal of this article is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics (MD) simulations, reference interaction site model (RISM) calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics (HREMD) and quantum mechanical/molecular mechanical (QM/MM) simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme (HDVr) and RNase A. PMID:25726472

  5. The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers

    ERIC Educational Resources Information Center

    DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert

    2013-01-01

    Public libraries play an important part in the development of a community. Today, they are seen as more than store houses of books; they are also responsible for the dissemination of online, and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for internet access. Using a…

  7. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
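
    The query-then-populate step can be imitated with a small pandas sketch: two levels of a dimension hierarchy are mapped to two components of the plot according to a specification. All column and component names here are illustrative assumptions:

        import pandas as pd

        # toy dataset: a two-level dimension hierarchy (region > city)
        # and one measure (sales)
        df = pd.DataFrame({
            "region": ["East", "East", "West", "West"],
            "city":   ["NYC", "Boston", "LA", "Seattle"],
            "sales":  [120, 80, 150, 90],
        })

        # "specification": first hierarchy level -> panel, second -> bar
        spec = {"panel": "region", "axis": "city", "measure": "sales"}

        # query the dataset in accordance with the specification
        plot_data = df.groupby([spec["panel"], spec["axis"]])[spec["measure"]].sum()
        for (panel, axis_val), value in plot_data.items():
            print(f"panel={panel:5s} bar={axis_val:8s} height={value}")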

  8. Computational methods to identify new antibacterial targets.

    PubMed

    McPhillie, Martin J; Cain, Ricky M; Narramore, Sarah; Fishwick, Colin W G; Simmons, Katie J

    2015-01-01

    The development of resistance to all current antibiotics in the clinic means there is an urgent unmet need for novel antibacterial agents with new modes of action. One of the best ways of finding these is to identify new essential bacterial enzymes to target. The advent of a number of in silico tools has aided classical methods of discovering new antibacterial targets, and these programs are the subject of this review. Many of these tools apply a cheminformatic approach, utilizing the structural information of either ligand or protein, chemogenomic databases, and docking algorithms to identify putative antibacterial targets. Considering the wealth of potential drug targets identified from genomic research, these approaches are perfectly placed to mine this rich resource and complement drug discovery programs. PMID:24974974

  9. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  10. Computational methods for internal flows with emphasis on turbomachinery

    NASA Technical Reports Server (NTRS)

    Mcnally, W. D.; Sockol, P. M.

    1981-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. Viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  11. Low-Rank Incremental Methods for Computing Dominant Singular Subspaces

    SciTech Connect

    Baker, Christopher G; Gallivan, Dr. Kyle A; Van Dooren, Dr. Paul

    2012-01-01

    Computing the singular values and vectors of a matrix is a crucial kernel in numerous scientific and industrial applications. As such, numerous methods have been proposed to handle this problem in a computationally efficient way. This paper considers a family of methods for incrementally computing the dominant SVD of a large matrix A. Specifically, we describe a unification of a number of previously disparate methods for approximating the dominant SVD via a single pass through A. We tie the behavior of these methods to that of a class of optimization-based iterative eigensolvers on A'*A. An iterative procedure is proposed which allows the computation of an accurate dominant SVD via multiple passes through A. We present an analysis of the convergence of this iteration, and provide empirical demonstration of the proposed method on both synthetic and benchmark data.
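
    A simplified single-pass incremental SVD in the spirit of the methods the paper unifies: each new block of columns updates a rank-limited factorization via a QR step and a small dense SVD. This sketch omits the paper's convergence analysis and multi-pass refinement:

        import numpy as np

        def incremental_svd(blocks, rank):
            """Single-pass approximation of the dominant SVD of A = [blocks...]."""
            U = s = None
            for B in blocks:
                if U is None:
                    U, s, _ = np.linalg.svd(B, full_matrices=False)
                else:
                    C = U.T @ B                      # component inside current span
                    R = B - U @ C                    # residual outside the span
                    Q, Rr = np.linalg.qr(R)
                    k = len(s)
                    K = np.block([[np.diag(s), C],
                                  [np.zeros((Rr.shape[0], k)), Rr]])
                    Uk, s, _ = np.linalg.svd(K, full_matrices=False)
                    U = np.hstack([U, Q]) @ Uk
                U, s = U[:, :rank], s[:rank]         # truncate to the target rank
            return U, s

        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 60))
        U, s = incremental_svd(np.split(A, 6, axis=1), rank=5)
        print(s)                                          # incremental estimate
        print(np.linalg.svd(A, compute_uv=False)[:5])     # exact, for comparison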

  12. Developing a multimodal biometric authentication system using soft computing methods.

    PubMed

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision. PMID:25502384
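
    A toy score-level fusion with fuzzy-style rules suggests how an FLE might combine two matcher outputs; the membership function, rule set, and thresholds are illustrative assumptions, not the chapter's embedded implementation:

        def fuzzy_fusion(voice_score, finger_score):
            """Toy fuzzy fusion of two matcher scores into an accept/reject decision."""
            # membership of a score in the "genuine" fuzzy set (ramp between lo, hi)
            def genuine(s, lo=0.3, hi=0.8):
                return min(1.0, max(0.0, (s - lo) / (hi - lo)))

            v, f = genuine(voice_score), genuine(finger_score)
            accept = min(v, f)                    # rule 1: both genuine (fuzzy AND)
            reject = max(1.0 - v, 1.0 - f)        # rule 2: either impostor (fuzzy OR)
            return "accept" if accept > reject else "reject"

        print(fuzzy_fusion(0.9, 0.75))   # accept
        print(fuzzy_fusion(0.9, 0.35))   # reject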

  13. Adding It Up: Is Computer Use Associated with Higher Achievement in Public Elementary Mathematics Classrooms?

    ERIC Educational Resources Information Center

    Kao, Linda Lee

    2009-01-01

    Despite support for technology in schools, there is little evidence indicating whether using computers in public elementary mathematics classrooms is associated with improved outcomes for students. This exploratory study examined data from the Early Childhood Longitudinal Study, investigating whether students' frequency of computer use was related

  14. Observations on the Use of Computer and Broadcast Television Technology in One Public Elementary School.

    ERIC Educational Resources Information Center

    Hoge, John Douglas

    This paper provides participant observations regarding the use of computer and broadcast television technology at a suburban public elementary school in Athens, Georgia during the 1995-1996 school year. The paper describes the hardware and software available in the school, and the use and misuse of computers and broadcast television in the

  15. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
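
    A minimal differential evolution loop (rand/1/bin) illustrates the class of ECMs described; the objective function, population size, and control parameters below are illustrative assumptions:

        import numpy as np

        def sphere(x):                       # stand-in objective to minimize
            return float(np.sum(x ** 2))

        def differential_evolution(f, dim=5, pop_size=30, F=0.8, CR=0.9,
                                   generations=200, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
            cost = np.array([f(x) for x in pop])
            for _ in range(generations):
                for i in range(pop_size):
                    a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                    mutant = a + F * (b - c)               # rand/1 mutation
                    cross = rng.random(dim) < CR           # binomial crossover
                    cross[rng.integers(dim)] = True        # keep >= 1 mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    if (tc := f(trial)) < cost[i]:         # greedy selection
                        pop[i], cost[i] = trial, tc
            return pop[np.argmin(cost)], cost.min()

        x_best, f_best = differential_evolution(sphere)
        print(f_best)    # should approach 0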

  16. Who's in the Queue? A Demographic Analysis of Public Access Computer Users and Uses in U.S. Public Libraries. Research Brief Number 4

    ERIC Educational Resources Information Center

    Manjarrez, Carlos A.; Schoembs, Kyle

    2011-01-01

    Over the past decade, policy discussions about public access computing in libraries have focused on the role that these institutions play in bridging the digital divide. In these discussions, public access computing services are generally targeted at individuals who either cannot afford a computer and Internet access, or have never received formal…

  17. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  18. Computer Simulation: A Method for Training Educational Diagnosticians.

    ERIC Educational Resources Information Center

    Lerner, Janet W.

    Northwestern University's Learning Disabilities Program conducted a study to explore and develop ways of applying computer technology to the fields of reading and learning disabilities and to train specialists who are familiar with computer methods. This study was designed to simulate the actual conditions of the Diagnostic Clinic at Northwestern…

  19. I LIKE Computers versus I LIKERT Computers: Rethinking Methods for Assessing the Gender Gap in Computing.

    ERIC Educational Resources Information Center

    Morse, Frances K.; Daiute, Colette

    There is a burgeoning body of research on gender differences in computing attitudes and behaviors. After a decade of experience, researchers from both inside and outside the field of educational computing research are raising methodological and conceptual issues which suggest that perhaps researchers have shortchanged girls and women in…

  20. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    ERIC Educational Resources Information Center

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  1. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  2. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  3. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  4. Platform-independent method for computer aided schematic drawings

    DOEpatents

    Vell, Jeffrey L. (Slingerlands, NY); Siganporia, Darius M. (Clifton Park, NY); Levy, Arthur J. (Fort Lauderdale, FL)

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  5. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    Iterative computer aided procedure was developed which provides for identification of boiler transfer functions using frequency response data. Method uses frequency response data to obtain satisfactory transfer function for both high and low vapor exit quality data.
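
    The report's iterative procedure is not detailed in the abstract. As a rough sketch of the underlying idea, the following fits a first-order transfer function K/(tau*s + 1) to synthetic frequency-response data by a grid search over the time constant, solving for the gain in closed form at each step; all data and parameters are hypothetical, not taken from the report.

      import numpy as np

      # Synthetic frequency-response data standing in for boiler measurements.
      w = np.logspace(-2, 1, 40)                   # frequencies, rad/s
      true_K, true_tau = 2.0, 3.0
      rng = np.random.default_rng(1)
      H = true_K / (1j * w * true_tau + 1)
      H += 0.01 * (rng.normal(size=w.size) + 1j * rng.normal(size=w.size))

      # Grid search over tau; the gain K follows by linear least squares.
      best = None
      for tau in np.linspace(0.1, 10.0, 1000):
          basis = 1.0 / (1j * w * tau + 1)
          K = np.real(np.vdot(basis, H) / np.vdot(basis, basis))
          err = np.linalg.norm(H - K * basis)
          if best is None or err < best[0]:
              best = (err, K, tau)

      print(f"fitted K ~ {best[1]:.3f}, tau ~ {best[2]:.3f} (true: 2.0, 3.0)")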

  6. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation.

    PubMed

    Hu, Hongzhi; Mao, Huajuan; Hu, Xiaohua; Hu, Feng; Sun, Xuemin; Jing, Zaiping; Duan, Yunsuo

    2015-01-01

    Owing to their extensive social influence, public health emergencies attract great attention in today's society. Booming social networks have become a main dissemination platform for information about such events and a focus of concern in emergency management, where a good prediction of information dissemination in social networks is necessary for estimating an event's social impact and forming a proper response strategy. However, information dissemination is strongly affected by complex interactive activities and group behaviors in social networks; existing methods and models struggle to achieve satisfactory predictions because of open, changeable social connections and uncertain information-processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. To obtain better predictions of information dissemination in social networks, this paper proposes an intelligent computation method under the TDF (Theory-Data-Feedback) framework based on an ACP simulation system, which was successfully applied to the analysis of the A (H1N1) flu emergency. PMID:26609303
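
    The ACP simulation system itself is not specified in the abstract. As a minimal sketch of the kind of agent-level cascade model an artificial society can build on, the following simulates independent-cascade spreading on a random contact graph; the graph size and probabilities are hypothetical.

      import random

      random.seed(42)

      # Build a random undirected contact graph (Erdos-Renyi style).
      N, EDGE_PROB = 500, 0.02
      neighbors = {i: set() for i in range(N)}
      for i in range(N):
          for j in range(i + 1, N):
              if random.random() < EDGE_PROB:
                  neighbors[i].add(j)
                  neighbors[j].add(i)

      # Independent-cascade spreading: each newly informed node gets one
      # chance to pass the information to each of its neighbors.
      SPREAD_PROB = 0.15
      informed = {0}                    # node 0 posts the information first
      frontier = [0]
      rounds = 0
      while frontier:
          next_frontier = []
          for node in frontier:
              for nb in neighbors[node]:
                  if nb not in informed and random.random() < SPREAD_PROB:
                      informed.add(nb)
                      next_frontier.append(nb)
          frontier = next_frontier
          rounds += 1

      print(f"informed {len(informed)} of {N} nodes in {rounds} rounds")

    Calibrating such spread probabilities against observed event data, and feeding the mismatch back into the model, is where a Theory-Data-Feedback loop of the kind the paper describes would enter.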

  7. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation

    PubMed Central

    Hu, Hongzhi; Mao, Huajuan; Hu, Xiaohua; Hu, Feng; Sun, Xuemin; Jing, Zaiping; Duan, Yunsuo

    2015-01-01

    Owing to their extensive social influence, public health emergencies attract great attention in today's society. Booming social networks have become a main dissemination platform for information about such events and a focus of concern in emergency management, where a good prediction of information dissemination in social networks is necessary for estimating an event's social impact and forming a proper response strategy. However, information dissemination is strongly affected by complex interactive activities and group behaviors in social networks; existing methods and models struggle to achieve satisfactory predictions because of open, changeable social connections and uncertain information-processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. To obtain better predictions of information dissemination in social networks, this paper proposes an intelligent computation method under the TDF (Theory-Data-Feedback) framework based on an ACP simulation system, which was successfully applied to the analysis of the A (H1N1) flu emergency. PMID:26609303

  8. Panel-Method Computer Code For Potential Flow

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steven K.

    1992-01-01

    Low-order panel method used to reduce computation time. Panel code PMARC (Panel Method Ames Research Center) numerically simulates flow field around or through complex three-dimensional bodies such as complete aircraft models or a wind tunnel. Based on potential-flow theory. Facilitates addition of new features to code and tailoring of code to specific problems and computer-hardware constraints. Written in standard FORTRAN 77.

  9. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.

  10. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E. (Albuquerque, NM); Gustafson, John L. (Albuquerque, NM); Montry, Gary R. (Albuquerque, NM)

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  11. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.

  12. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  13. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.
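
    The spatial composite-rigid-body formulation is too long to reproduce here, but the object it computes can be illustrated on a planar two-link arm, whose joint-space inertia matrix has a standard closed form; the link parameters below are hypothetical.

      import numpy as np

      # Inertia matrix M(q) of a planar 2R arm. Each entry is the torque at
      # one joint per unit acceleration of another; the composite-rigid-body
      # idea generalizes this by treating all outboard links as one body.
      m1, m2 = 3.0, 2.0      # link masses, kg
      l1 = 0.5               # length of link 1, m
      r1, r2 = 0.25, 0.20    # joint-to-mass-center distances, m
      I1, I2 = 0.06, 0.03    # link inertias about mass centers, kg m^2

      def inertia_matrix(q2):
          c2 = np.cos(q2)
          m11 = I1 + I2 + m1 * r1**2 + m2 * (l1**2 + r2**2 + 2 * l1 * r2 * c2)
          m12 = I2 + m2 * (r2**2 + l1 * r2 * c2)
          m22 = I2 + m2 * r2**2
          return np.array([[m11, m12], [m12, m22]])

      print(inertia_matrix(q2=0.0))      # arm extended: largest inertia
      print(inertia_matrix(q2=np.pi))    # arm folded back: smallest inertia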

  14. Key management of the double random-phase-encoding method using public-key encryption

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2010-03-01

    Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image has been encrypted by using the double random-phase-encoding method with the extended fractional Fourier transform. The key of the encryption process has been encoded by using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key has then been transmitted to the receiver side along with the encrypted image. In the decryption process, first the encoded key has been decrypted using the secret key, and then the encrypted image has been decrypted by using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the key has been eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
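
    A toy-scale sketch of the key-management idea, with textbook RSA primes far too small for real security and the DRPE key reduced to a single integer parameter for illustration:

      # Toy RSA key transport for an encryption key parameter. A real DRPE
      # key has several parameters (fractional orders, random-phase seeds),
      # each of which would be encoded the same way with a large modulus.

      def egcd(a, b):
          if b == 0:
              return a, 1, 0
          g, x, y = egcd(b, a % b)
          return g, y, x - (a // b) * y

      p, q = 61, 53                  # toy primes
      n, phi = p * q, (p - 1) * (q - 1)
      e = 17                         # public exponent, coprime with phi
      d = egcd(e, phi)[1] % phi      # private exponent: d*e = 1 (mod phi)

      key_param = 1234               # stand-in for one DRPE key parameter
      cipher = pow(key_param, e, n)  # sender encodes the key with (e, n)
      recovered = pow(cipher, d, n)  # receiver decodes with the secret d

      assert recovered == key_param
      print(n, cipher, recovered)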

  15. Computational methods with vortices - The 1988 Freeman Scholar Lecture

    NASA Astrophysics Data System (ADS)

    Sarpkaya, Turgut

    1989-03-01

    Computational methods based upon Helmholtz's concepts of vortex dynamics are reviewed which employ Lagrangian or mixed Lagrangian-Eulerian schemes, the Biot-Savart law, or vortex-in-cell methods. The theoretical basis of vortex methods is first considered, covering such topics as the evolution equations for a vortex sheet, real vortices and instabilities, smoothing techniques, and body representation. Applications of the method discussed include vortical flows in aerodynamics, separated flows about cylindrical bodies, and general three-dimensional flows.

  16. Method for computing the optimal signal distribution and channel capacity.

    PubMed

    Shapiro, E G; Shapiro, D A; Turitsyn, S K

    2015-06-15

    An iterative method for computing the channel capacity of both discrete and continuous input, continuous output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut-Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining advantages of both the Blahut-Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet. PMID:26193496
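
    The authors' new method is not given in the abstract, but the classical Blahut-Arimoto baseline it is compared against is compact enough to sketch; a binary symmetric channel serves as a hypothetical test case.

      import numpy as np

      def blahut_arimoto(P, tol=1e-9, max_iter=10000):
          """Capacity (nats) of a discrete memoryless channel with
          transition matrix P[x, y] = p(y|x)."""
          m = P.shape[0]
          r = np.full(m, 1.0 / m)                  # input distribution
          for _ in range(max_iter):
              q = r[:, None] * P                   # joint p(x, y)
              q /= q.sum(axis=0, keepdims=True)    # posterior p(x|y)
              # r(x) proportional to exp( sum_y p(y|x) log q(x|y) )
              log_r = np.sum(P * np.log(q + 1e-300), axis=1)
              r_new = np.exp(log_r - log_r.max())
              r_new /= r_new.sum()
              done = np.abs(r_new - r).max() < tol
              r = r_new
              if done:
                  break
          q = r[:, None] * P
          q /= q.sum(axis=0, keepdims=True)
          C = np.sum(r[:, None] * P * np.log((q + 1e-300) / r[:, None]))
          return C, r

      eps = 0.1                                    # crossover probability
      P = np.array([[1 - eps, eps], [eps, 1 - eps]])
      C, r = blahut_arimoto(P)
      print(C / np.log(2))                         # ~0.531 bits = 1 - H(0.1)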

  17. Public health surveillance: historical origins, methods and evaluation.

    PubMed Central

    Declich, S.; Carter, A. O.

    1994-01-01

    In the last three decades, disease surveillance has grown into a complete discipline, quite distinct from epidemiology. This expansion into a separate scientific area within public health has not been accompanied by parallel growth in the literature about its principles and methods. The development of the fundamental concepts of surveillance systems provides a basis on which to build a better understanding of the subject. In addition, the concepts have practical value as they can be used in designing new systems as well as understanding or evaluating currently operating systems. This article reviews the principles of surveillance, beginning with a historical survey of the roots and evolution of surveillance, and discusses the goals of public health surveillance. Methods for data collection, data analysis, interpretation, and dissemination are presented, together with proposed procedures for evaluating and improving a surveillance system. Finally, some points to be considered in establishing a new surveillance system are presented. PMID:8205649

  18. Comparison of methods for computing streamflow statistics for Pennsylvania streams

    USGS Publications Warehouse

    Ehlke, Marla H.; Reed, Lloyd A.

    1999-01-01

    Methods for computing streamflow statistics intended for use on ungaged locations on Pennsylvania streams are presented and compared to frequency distributions of gaged streamflow data. The streamflow statistics used in the comparisons include the 7-day 10-year low flow, 50-year flood flow, and the 100-year flood flow; additional statistics are presented. Streamflow statistics for gaged locations on streams in Pennsylvania were computed using three methods for the comparisons: 1) Log-Pearson type III frequency distribution (Log-Pearson) of continuous-record streamflow data, 2) regional regression equations developed by the U.S. Geological Survey in 1982 (WRI 82-21), and 3) regional regression equations developed by the Pennsylvania State University in 1981 (PSU-IV). Log-Pearson distribution was considered the reference method for evaluation of the regional regression equations. Low-flow statistics were computed using the Log-Pearson distribution and WRI 82-21, whereas flood-flow statistics were computed using all three methods. The urban adjustment for PSU-IV was modified from the recommended computation to exclude Philadelphia and the surrounding areas (region 1) from the adjustment. Adjustments for storage area for PSU-IV were also slightly modified. A comparison of the 7-day 10-year low flow computed from Log-Pearson distribution and WRI 82-21 showed that the methods produced significantly different values for about 7 percent of the state. The same methods produced 50-year and 100-year flood flows that were significantly different for about 24 percent of the state. Flood-flow statistics computed using Log-Pearson distribution and PSU-IV were not significantly different in any regions of the state. These findings are based on a statistical comparison using the t-test on signed ranks and graphical methods.
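
    For readers unfamiliar with the first method, a minimal sketch of a Log-Pearson type III quantile computation follows, using a hypothetical annual-peak series and the station skew directly (a formal analysis such as the one referenced would weight station and regional skew):

      import numpy as np
      from scipy import stats

      # Hypothetical annual peak flows (cubic feet per second).
      peaks = np.array([1200., 850., 2300., 640., 1780., 990., 3100.,
                        1450., 760., 2050., 1320., 880., 1650., 2700.])

      logs = np.log10(peaks)
      mean, std = logs.mean(), logs.std(ddof=1)
      skew = stats.skew(logs, bias=False)          # station skew only

      for T in (2, 10, 50, 100):
          # Frequency factor: standardized Pearson III quantile at 1 - 1/T.
          K = stats.pearson3.ppf(1 - 1.0 / T, skew)
          print(f"{T:>3}-yr flood: {10 ** (mean + K * std):,.0f} ft^3/s")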

  19. Method for implementation of recursive hierarchical segmentation on parallel computers

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.

  20. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  1. Artificial Intelligence Methods: Challenge in Computer Based Polymer Design

    NASA Astrophysics Data System (ADS)

    Rusu, Teodora; Pinteala, Mariana; Cartwright, Hugh

    2009-08-01

    This paper deals with the use of Artificial Intelligence (AI) methods in the design of new molecules possessing desired physical, chemical, and biological properties. This is an important and difficult problem in the chemical, material, and pharmaceutical industries. Traditional methods involve a laborious and expensive trial-and-error procedure, but computer-assisted approaches offer many advantages in the automation of molecular design.

  2. Calculating PI Using Historical Methods and Your Personal Computer.

    ERIC Educational Resources Information Center

    Mandell, Alan

    1989-01-01

    Provides a software program for determining PI to the 15th place after the decimal. Explores the history of determining the value of PI from Archimedes to present computer methods. Investigates Wallis's, Leibniz's, and Buffon's methods. Written for Tandy GW-BASIC (IBM compatible) with 384K. Suggestions for Apple II's are given. (MVL)
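
    The GW-BASIC listing itself is not included in the abstract; a present-day Python rendering of two of the historical approaches mentioned (Leibniz's series and Wallis's product) could look like this:

      import math

      # Leibniz: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...  (very slow convergence)
      n_terms = 1_000_000
      leibniz = 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

      # Wallis: pi/2 = (2*2)/(1*3) * (4*4)/(3*5) * (6*6)/(5*7) * ...
      wallis = 2.0
      for n in range(1, n_terms):
          wallis *= (2 * n) ** 2 / ((2 * n - 1) * (2 * n + 1))

      print(f"Leibniz: {leibniz:.10f}")
      print(f"Wallis:  {wallis:.10f}")
      print(f"math.pi: {math.pi:.10f}")

    A million terms of either series yields only about six correct digits, illustrating how slowly these historical formulas converge toward a 15-place value.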

  3. A Comparative Assessment of Computer Literacy of Private and Public Secondary School Students in Lagos State, Nigeria

    ERIC Educational Resources Information Center

    Osunwusi, Adeyinka Olumuyiwa; Abifarin, Michael Segun

    2013-01-01

    The aim of this study was to conduct a comparative assessment of computer literacy of private and public secondary school students. Although the definition of computer literacy varies widely, this study treated computer literacy in terms of access to, and use of, computers and the internet, basic knowledge and skills required to use computers and…

  4. A Comparative Assessment of Computer Literacy of Private and Public Secondary School Students in Lagos State, Nigeria

    ERIC Educational Resources Information Center

    Osunwusi, Adeyinka Olumuyiwa; Abifarin, Michael Segun

    2013-01-01

    The aim of this study was to conduct a comparative assessment of computer literacy of private and public secondary school students. Although the definition of computer literacy varies widely, this study treated computer literacy in terms of access to, and use of, computers and the internet, basic knowledge and skills required to use computers and…

  5. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    ERIC Educational Resources Information Center

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  6. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one more processors fails.

  7. Checklist and Pollard Walk butterfly survey methods on public lands

    USGS Publications Warehouse

    Royer, R.A.; Austin, J.E.; Newton, W.E.

    1998-01-01

    Checklist and Pollard Walk butterfly survey methods were contemporaneously applied to seven public sites in North Dakota during the summer of 1995. Results were compared for effect of method and site on total number of butterflies and total number of species detected per hour. Checklist searching produced significantly more butterfly detections per hour than Pollard Walks at all sites. Number of species detected per hour did not differ significantly either among sites or between methods. Many species were detected by only one method, and at most sites generalist and invader species were more likely to be observed during checklist searches than during Pollard Walks. Results indicate that checklist surveys are a more efficient means for initial determination of a species list for a site, whereas for long-term monitoring the Pollard Walk is more practical and statistically manageable. Pollard Walk transects are thus recommended once a prairie butterfly fauna has been defined for a site by checklist surveys.

  8. A Lanczos eigenvalue method on a parallel computer

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.; Fulton, Robert E.

    1987-01-01

    Eigenvalue analysis of complex structures is a computationally intensive task which can benefit significantly from new and impending parallel computers. This study reports on a parallel computer implementation of the Lanczos method for free vibration analysis. The approach used here subdivides the major Lanczos calculation tasks into subtasks and introduces parallelism down to the subtask levels such as matrix decomposition and forward/backward substitution. The method was implemented on a commercial parallel computer and results were obtained for a long flexible space structure. While parallel computing efficiency for the Lanczos method was good for a moderate number of processors for the test problem, the greatest reduction in time was realized for the decomposition of the stiffness matrix, a calculation which took 70 percent of the time in the sequential program and which took 25 percent of the time on eight processors. For a sample calculation of the twenty lowest frequencies of a 486-degree-of-freedom problem, the total sequential computing time was reduced by almost a factor of ten using 16 processors.

  9. Integrating Publicly Available Data to Generate Computationally Predicted Adverse Outcome Pathways for Fatty Liver.

    PubMed

    Bell, Shannon M; Angrish, Michelle M; Wood, Charles E; Edwards, Stephen W

    2016-04-01

    New in vitro testing strategies make it possible to design testing batteries for large numbers of environmental chemicals. Full utilization of the results requires knowledge of the underlying biological networks and the adverse outcome pathways (AOPs) that describe the route from early molecular perturbations to an adverse outcome. Curation of a formal AOP is a time-intensive process and a rate-limiting step to designing these test batteries. Here, we describe a method for integrating publicly available data in order to generate computationally predicted AOP (cpAOP) scaffolds, which can be leveraged by domain experts to shorten the time for formal AOP development. A network-based workflow was used to facilitate the integration of multiple data types to generate cpAOPs. Edges between graph entities were identified through direct experimental or literature information, or computationally inferred using frequent itemset mining. Data from the TG-GATEs and ToxCast programs were used to channel large-scale toxicogenomics information into a cpAOP network (cpAOPnet) of over 20,000 relationships describing connections between chemical treatments, phenotypes, and perturbed pathways as measured by differential gene expression and high-throughput screening targets. The resulting fatty liver cpAOPnet is available as a resource to the community. Subnetworks of cpAOPs for a reference chemical (carbon tetrachloride, CCl4) and outcome (fatty liver) were compared with published mechanistic descriptions. In both cases, the computational approaches approximated the manually curated AOPs. The cpAOPnet can be used for accelerating expert-curated AOP development and to identify pathway targets that lack genomic markers or high-throughput screening tests. It can also facilitate identification of key events for designing test batteries and for classification and grouping of chemicals for follow-up testing. PMID:26895641
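
    The full workflow depends on the TG-GATEs and ToxCast resources, but the frequent-itemset step can be shown in miniature: count how often pairs of observations co-occur across chemical treatments and keep pairs above a support threshold as candidate cpAOP edges. The data below are entirely hypothetical.

      from itertools import combinations

      # Each set lists observations recorded for one chemical treatment.
      treatments = [
          {"steatosis", "PPAR_down", "gene_A_up"},
          {"steatosis", "PPAR_down", "gene_B_up"},
          {"PPAR_down", "gene_A_up"},
          {"steatosis", "PPAR_down", "gene_A_up"},
      ]
      MIN_SUPPORT = 3

      pair_counts = {}
      for obs in treatments:
          for pair in combinations(sorted(obs), 2):
              pair_counts[pair] = pair_counts.get(pair, 0) + 1

      edges = [(a, b, c) for (a, b), c in pair_counts.items()
               if c >= MIN_SUPPORT]
      print(edges)   # e.g. PPAR_down and steatosis co-occur in 3 treatments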

  10. A stochastic method for computing hadronic matrix elements

    DOE PAGESBeta

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; Drach, Vincent; Jansen, Karl; Hadjiyiannakou, Kyriakos; Renner, Dru B.

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions that is an alternative to the typically used sequential method, offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume, and we find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.

  11. A fast sweeping method for computing geodesics on triangular manifolds.

    PubMed

    Xu, Song-Gang; Zhang, Yun-Xiang; Yong, Jun-Hai

    2010-02-01

    A wide range of applications in computer intelligence and computer graphics require computing geodesics accurately and efficiently. The fast marching method (FMM) is widely used to solve this problem, of which the complexity is O(N log N), where N is the total number of nodes on the manifold. A fast sweeping method (FSM) is proposed and applied on arbitrary triangular manifolds of which the complexity is reduced to O(N). By traversing the undigraph, four orderings are built to produce two groups of interfering waves, which cover all directions of characteristics. The correctness of this method is proved by analyzing the coverage of characteristics. The convergence and error estimation are also presented. PMID:20075455
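
    The triangular-manifold version and its four undigraph orderings are the paper's contribution; the regular-grid analogue below conveys the core idea, Gauss-Seidel sweeps in alternating orderings for the eikonal equation |grad u| = 1, in a form short enough to sketch.

      import numpy as np

      def fast_sweep_eikonal(seed, n=101, h=1.0, n_sweeps=4):
          """Distance from a seed node on a regular n-by-n grid via
          fast sweeping: Gauss-Seidel passes in four grid orderings."""
          INF = 1e12
          u = np.full((n, n), INF)
          u[seed] = 0.0
          fwd, bwd = range(n), range(n - 1, -1, -1)
          orderings = [(fwd, fwd), (bwd, fwd), (fwd, bwd), (bwd, bwd)]
          for _ in range(n_sweeps):
              for ii, jj in orderings:
                  for i in ii:
                      for j in jj:
                          a = min(u[i - 1, j] if i > 0 else INF,
                                  u[i + 1, j] if i < n - 1 else INF)
                          b = min(u[i, j - 1] if j > 0 else INF,
                                  u[i, j + 1] if j < n - 1 else INF)
                          if abs(a - b) >= h:      # one-sided update
                              cand = min(a, b) + h
                          else:                    # two-sided update
                              cand = 0.5 * (a + b
                                            + np.sqrt(2 * h * h - (a - b) ** 2))
                          u[i, j] = min(u[i, j], cand)
          return u

      u = fast_sweep_eikonal(seed=(50, 50))
      print(u[50, 90])   # ~40.0: the seed is 40 cells away along this row

    Each ordering propagates characteristics from one quadrant of directions, so a fixed, small number of sweeps suffices; the work is proportional to the number of nodes, giving the O(N) complexity.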

  12. Customizing computational methods for visual analytics with big data.

    PubMed

    Choo, Jaegul; Park, Haesun

    2013-01-01

    The volume of available data has been growing exponentially, increasing data problems' complexity and obscurity. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data. PMID:24808056

  13. Fully consistent CFD methods for incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kolmogorov, D. K.; Shen, W. Z.; Sørensen, N. N.; Sørensen, J. N.

    2014-06-01

    Nowadays, collocated-grid-based CFD methods are among the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods, special attention must be paid to the well-known problem of pressure-velocity coupling. To enforce pressure-velocity coupling on collocated grids, many commercial codes use the so-called momentum interpolation method of Rhie and Chow [1]. As is known, the method and some of its widely spread modifications result in solutions that depend on the time step at convergence. In this paper the magnitude of that dependence is shown to contribute about 0.5% to the total error in a typical turbulent flow computation. Nevertheless, if coarse grids are used, the standard interpolation methods exhibit much stronger inconsistent behavior. To overcome the problem, a recently developed interpolation method, which is independent of the time step, is used. It is shown that, in comparison to another time-step-independent method, the method may enhance the convergence rate of the SIMPLEC algorithm by up to 25%. The method is verified using turbulent flow computations around a NACA 64618 airfoil and the roll-up of a shear layer, which may appear in wind turbine wakes.

  14. The continuous slope-area method for computing event hydrographs

    USGS Publications Warehouse

    Smith, Christopher F.; Cordova, Jeffrey T.; Wiele, Stephen M.

    2010-01-01

    The continuous slope-area (CSA) method expands the slope-area method of computing peak discharge to a complete flow event. Continuously recording pressure transducers installed at three or more cross sections provide water-surface slopes and stage during an event that can be used with cross-section surveys and estimates of channel roughness to compute a continuous discharge hydrograph. The CSA method has been made feasible by the availability of low-cost recording pressure transducers that provide a continuous record of stage. The CSA method was implemented on the Babocomari River in Arizona in 2002 to monitor streamflow in the channel reach by installing eight pressure transducers in four cross sections within the reach. Continuous discharge hydrographs were constructed from five streamflow events during 2002-2006. Results from this study indicate that the CSA method can be used to obtain continuous hydrographs and rating curves can be generated from streamflow events.
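
    A minimal sketch of the per-time-step computation underlying the CSA idea, via Manning's equation in US customary units; the geometry, roughness, and stage values are hypothetical, and a real application uses several cross sections with energy-balance corrections.

      # One time step of a slope-area discharge estimate.
      MANNING_N = 0.035      # channel roughness estimate
      AREA = 142.0           # flow area from the surveyed section, ft^2
      WETTED_PERIM = 58.0    # wetted perimeter at this stage, ft

      # Water-surface slope from two pressure transducers 400 ft apart
      # (stages referenced to a common datum).
      stage_up, stage_down, reach_len = 12.41, 12.18, 400.0
      slope = (stage_up - stage_down) / reach_len

      R = AREA / WETTED_PERIM                 # hydraulic radius, ft
      Q = (1.486 / MANNING_N) * AREA * R ** (2 / 3) * slope ** 0.5
      print(f"discharge ~ {Q:,.0f} ft^3/s")

    Repeating this at each transducer reading converts the stage records into the continuous discharge hydrograph the method is named for.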

  15. New computational methods and algorithms for semiconductor science and nanotechnology

    NASA Astrophysics Data System (ADS)

    Gamoke, Benjamin C.

    The design and implementation of sophisticated computational methods and algorithms are critical to solving problems in nanotechnology and semiconductor science. Two key methods will be described to overcome challenges in contemporary surface science. The first method will focus on accurately cancelling interactions in a molecular system, such as modeling adsorbates on periodic surfaces at low coverages, a problem for which current methodologies are computationally inefficient. The second method pertains to the accurate calculation of core-ionization energies through X-ray photoelectron spectroscopy. The development can provide assignment of peaks in X-ray photoelectron spectra, which can determine the chemical composition and bonding environment of surface species. Finally, illustrative surface-adsorbate and gas-phase studies using the developed methods will also be featured.

  16. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. PMID:25698513

  17. Practical Use of Computationally Frugal Model Analysis Methods.

    PubMed

    Hill, Mary C; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2016-03-01

    Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics make it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring 1,000s, 10,000s, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and obtain greater scientific insight from ongoing and future modeling efforts. PMID:25810333

  18. A comparative study of computational methods in cosmic gas dynamics

    NASA Technical Reports Server (NTRS)

    Van Albada, G. D.; Van Leer, B.; Roberts, W. W., Jr.

    1982-01-01

    Many theoretical investigations of fluid flows in astrophysics require extensive numerical calculations. The selection of an appropriate computational method is, therefore, important for the astronomer who has to solve an astrophysical flow problem. The present investigation has the objective of providing an informational basis for such a selection by comparing a variety of numerical methods with the aid of a test problem. The test problem involves a simple, one-dimensional model of the gas flow in a spiral galaxy. The numerical methods considered include the beam scheme, Godunov's method (G), the second-order flux-splitting method (FS2), MacCormack's method, and the flux corrected transport methods of Boris and Book (1973). It is found that the best second-order method (FS2) outperforms the best first-order method (G) by a huge margin.
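
    As a flavor of the schemes compared: for linear advection, Godunov's method reduces to first-order upwind (donor-cell) differencing, which can be sketched in a few lines (grid and CFL settings hypothetical).

      import numpy as np

      # First-order upwind scheme for u_t + a u_x = 0 on a periodic domain.
      nx, a, cfl = 200, 1.0, 0.8
      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      dx = x[1] - x[0]
      dt = cfl * dx / a
      u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse

      t, t_end = 0.0, 0.5
      while t < t_end:
          u -= a * dt / dx * (u - np.roll(u, 1))      # upwind update (a > 0)
          t += dt

      print(u.max())   # < 1: the scheme has smeared the pulse

    The smearing visible in the output is the numerical diffusion that second-order schemes such as FS2 are designed to suppress, at the cost of added complexity near discontinuities.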

  19. Curriculum modules, software laboratories, and an inexpensive hardware platform for teaching computational methods to undergraduate computer science students

    NASA Astrophysics Data System (ADS)

    Peck, Charles Franklin

    Computational methods are increasingly important to 21st century research and education; bioinformatics and climate change are just two examples of this trend. In this context computer scientists play an important role, facilitating the development and use of the methods and tools used to support computationally-based approaches. The undergraduate curriculum in computer science is one place where computational tools and methods can be introduced to facilitate the development of appropriately prepared computer scientists. To facilitate the evolution of the pedagogy, this dissertation identifies, develops, and organizes curriculum materials, software laboratories, and the reference design for an inexpensive portable cluster computer, all of which are specifically designed to support the teaching of computational methods to undergraduate computer science students. Keywords: computational science, computational thinking, computer science, undergraduate curriculum.

  20. Selection and Integration of a Computer Simulation for Public Budgeting and Finance (PBS 116).

    ERIC Educational Resources Information Center

    Banas, Ed Jr.

    1998-01-01

    Describes the development of a course on public budgeting and finance, which integrated the use of SimCity Classic, a computer-simulation software, with traditional lecture, guest speakers, and collaborative-learning activities. Explains the rationale for the course design and discusses the results from the first semester of teaching the course.

  1. An Exploratory Study of Malaysian Publication Productivity in Computer Science and Information Technology.

    ERIC Educational Resources Information Center

    Gu, Yinian

    2002-01-01

    Explores the Malaysian computer science and information technology publication productivity as indicated by data collected from three Web-based databases. Relates possible reasons for the amount and pattern of contributions to the size of the researcher population, the availability of refereed scholarly journals, and the total expenditure allocated to…

  2. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2003-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  3. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  4. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Zbigniew (Shoreham, NY); Falkowski, Paul (Stony Brook, NY)

    1990-01-01

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.

  5. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Z.; Falkowski, P.

    1990-07-17

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.

  6. A computational method for automated characterization of genetic components.

    PubMed

    Yordanov, Boyan; Dalchau, Neil; Grant, Paul K; Pedersen, Michael; Emmott, Stephen; Haseloff, Jim; Phillips, Andrew

    2014-08-15

    The ability to design and construct synthetic biological systems with predictable behavior could enable significant advances in medical treatment, agricultural sustainability, and bioenergy production. However, to reach a stage where such systems can be reliably designed from biological components, integrated experimental and computational techniques that enable robust component characterization are needed. In this paper we present a computational method for the automated characterization of genetic components. Our method exploits a recently developed multichannel experimental protocol and integrates bacterial growth modeling, Bayesian parameter estimation, and model selection, together with data processing steps that are amenable to automation. We implement the method within the Genetic Engineering of Cells modeling and design environment, which enables both characterization and design to be integrated within a common software framework. To demonstrate the application of the method, we quantitatively characterize a synthetic receiver device that responds to the 3-oxohexanoyl-homoserine lactone signal, across a range of experimental conditions. PMID:24628037

  7. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false What rules apply to public access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION PUBLIC AVAILABILITY AND USE USING RECORDS AND DONATED HISTORICAL MATERIALS Research...

  8. pyro: Python-based tutorial for computational methods for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.

  9. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
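
    As a miniature of the multigrid concept itself (not of the Proteus implementation), a two-grid correction cycle for the 1D Poisson problem shows the characteristic rapid residual reduction:

      import numpy as np

      # Two-grid cycle for -u'' = f on (0, 1) with u(0) = u(1) = 0: smooth
      # on the fine grid, solve the residual equation on a coarser grid,
      # prolong the correction back, and smooth again.

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
          return r

      def smooth(u, f, h, iters, w=2 / 3):
          for _ in range(iters):                     # weighted Jacobi
              u[1:-1] += w * 0.5 * (u[:-2] + u[2:]
                                    + h**2 * f[1:-1] - 2 * u[1:-1])
          return u

      n = 129
      h = 1.0 / (n - 1)
      x = np.linspace(0.0, 1.0, n)
      f = np.pi**2 * np.sin(np.pi * x)               # exact solution sin(pi x)
      u = np.zeros(n)

      nc = (n + 1) // 2                              # coarse grid: 65 points
      hc = 2 * h
      Ac = (2 * np.eye(nc - 2) - np.eye(nc - 2, k=1)
            - np.eye(nc - 2, k=-1)) / hc**2          # coarse interior operator

      for cycle in range(8):
          u = smooth(u, f, h, iters=3)               # pre-smooth
          rc = residual(u, f, h)[::2]                # restrict by injection
          ec = np.zeros(nc)
          ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])   # exact coarse solve
          u += np.interp(x, x[::2], ec)              # prolong and correct
          u = smooth(u, f, h, iters=3)               # post-smooth
          print(cycle, np.abs(residual(u, f, h)).max())

    The smoother removes high-frequency error cheaply while the coarse grid removes the smooth remainder, which is the division of labor behind the savings the report measures.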

  10. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  11. Automatic detection of lung nodules in computed tomography images: training and validation of algorithms using public research databases

    NASA Astrophysics Data System (ADS)

    Camarlinghi, Niccolò

    2013-09-01

    Lung cancer is one of the main public health issues in developed countries. Lung cancer typically manifests itself as non-calcified pulmonary nodules that can be detected by reading lung Computed Tomography (CT) images. To assist radiologists in reading images, researchers began a decade ago to develop Computer Aided Detection (CAD) methods capable of detecting lung nodules. In this work, a CAD composed of two subprocedures is presented: one devoted to the identification of parenchymal nodules, and one devoted to the identification of nodules attached to the pleura surface. Both CADs are upgrades of two methods previously presented as the Voxel Based Neural Approach (VBNA) CAD. The novelty of this paper consists in the massive training using the public research Lung Image Database Consortium (LIDC) database and in the implementation of new features for classification with respect to the original VBNA method. Finally, the proposed CAD is blindly validated on the ANODE09 dataset. The result of the validation is a score of 0.393, which corresponds to the average sensitivity of the CAD computed at seven predefined false positive rates: 1/8, 1/4, 1/2, 1, 2, 4, and 8 FP/CT.

  12. A Parallel Iterative Method for Computing Molecular Absorption Spectra.

    PubMed

    Koval, Peter; Foerster, Dietrich; Coulaud, Olivier

    2010-09-14

    We describe a fast parallel iterative method for computing molecular absorption spectra within TDDFT linear response and using the LCAO method. We use a local basis of "dominant products" to parametrize the space of orbital products that occur in the LCAO approach. In this basis, the dynamic polarizability is computed iteratively within an appropriate Krylov subspace. The iterative procedure uses a matrix-free GMRES method to determine the (interacting) density response. The resulting code is about 1 order of magnitude faster than our previous full-matrix method. This acceleration makes the speed of our TDDFT code comparable with codes based on Casida's equation. The implementation of our method uses hybrid MPI and OpenMP parallelization in which load balancing and memory access are optimized. To validate our approach and to establish benchmarks, we compute spectra of large molecules on various types of parallel machines. The methods developed here are fairly general, and we believe they will find useful applications in molecular physics/chemistry, even for problems that are beyond TDDFT, such as organic semiconductors, particularly in photovoltaics. PMID:26616067
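
    TDDFT specifics aside, the matrix-free pattern described (a Krylov solver applied to an operator known only through its action on a vector) can be sketched with SciPy; the operator below is a hypothetical stand-in for the response kernel, not the authors' code.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      n = 2000
      d = 2.0 + np.arange(n) / n      # hypothetical diagonally dominant part

      def apply_kernel(v):
          # Stand-in for applying the linear-response operator to a vector;
          # only this action, never the full matrix, is ever formed.
          return d * v + 0.01 * np.roll(v, 1)

      A = LinearOperator((n, n), matvec=apply_kernel)
      b = np.random.default_rng(0).normal(size=n)
      x, info = gmres(A, b)           # info == 0 signals convergence
      print(info, np.linalg.norm(apply_kernel(x) - b))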

  13. Learning From Engineering and Computer Science About Communicating The Field To The Public

    NASA Astrophysics Data System (ADS)

    Moore, S. L.; Tucek, K.

    2014-12-01

    The engineering and computer science community has taken the lead in actively informing the public about its discipline, including its societal contributions and career opportunities. These efforts have been intensified with regard to informing underrepresented populations in STEM about engineering and computer science. Are there lessons to be learned by the geoscience community in communicating the societal impacts of and career opportunities in the geosciences, especially with regard to broadening participation and meeting the Next Generation Science Standards? An estimated 35 percent increase in the number of geoscientist jobs in the United States forecast for the period between 2008 and 2018, combined with majority populations becoming minority populations, makes it imperative that we improve how we increase the public's understanding of the geosciences and how we present our message to targeted populations. This talk will look at recommendations from the National Academy of Engineering's Changing the Conversation: Messages for Improving the Public Understanding of Engineering, and at communication strategies used by organizations such as Code.org, to highlight practices that the geoscience community can adopt to increase public awareness of the societal contributions of the geosciences, the career opportunities in the geosciences, and the importance of the geosciences in the Next Generation Science Standards. An effort to communicate the geosciences to the public, Earth is Calling, will be compared and contrasted with these efforts and used as an example of how geological societies and other organizations can engage the general public and targeted groups about the geosciences.

  14. Three-dimensional cardiac computational modelling: methods, features and applications.

    PubMed

    Lopez-Perez, Alejandro; Sebastian, Rafael; Ferrero, Jose M

    2015-01-01

    The combination of computational models and biophysical simulations can help to interpret an array of experimental data and contribute to the understanding, diagnosis and treatment of complex diseases such as cardiac arrhythmias. For this reason, three-dimensional (3D) cardiac computational modelling is currently a rising field of research. The advance of medical imaging technology over the last decades has allowed the evolution from generic to patient-specific 3D cardiac models that faithfully represent the anatomy and different cardiac features of a given living subject. Here we analyse sixty representative 3D cardiac computational models developed and published during the last fifty years, describing their information sources, features, development methods and online availability. This paper also reviews the necessary components to build a 3D computational model of the heart aimed at biophysical simulation, paying special attention to cardiac electrophysiology (EP), and the existing approaches to incorporate those components. We assess the challenges associated with the different steps of the building process, from the processing of raw clinical or biological data to the final application, including image segmentation, inclusion of substructures and meshing, among others. We briefly outline the personalisation approaches that are currently available in 3D cardiac computational modelling. Finally, we present examples of several specific applications, mainly related to cardiac EP simulation and model-based image analysis, showing the potential usefulness of 3D cardiac computational modelling in clinical environments as a tool to aid in the prevention, diagnosis and treatment of cardiac diseases. PMID:25928297

  15. Computational biology in the cloud: methods and new insights from computing at scale.

    PubMed

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, and experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and provide easy reproducibility by making the datasets and computational methods easily available. PMID:23424149

  16. Computational Methods for Structural Mechanics and Dynamics, part 1

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

  17. Digital data storage systems, computers, and data verification methods

    SciTech Connect

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
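
    The hash-comparison scheme this record describes is straightforward to sketch in code. The following is a minimal illustration, not the patented implementation: the `snapshot_hash` helper and the byte-record representation of a table are hypothetical, and SHA-256 is an assumed choice where the abstract leaves the hash function open.

```python
import hashlib

def snapshot_hash(records: list[bytes]) -> str:
    """Hash a portion of a dynamic database (here, a list of raw records)."""
    h = hashlib.sha256()
    for rec in records:
        h.update(rec)
    return h.hexdigest()

# Hypothetical usage: hash the same table portion at two moments in time.
table_t0 = [b"row1:alpha", b"row2:beta"]
first_hash = snapshot_hash(table_t0)

table_t1 = [b"row1:alpha", b"row2:beta-modified"]
second_hash = snapshot_hash(table_t1)

# Comparing the two hashes reveals whether the data changed in between.
print("data unchanged" if first_hash == second_hash else "data modified")
```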

  18. The ensemble switch method for computing interfacial tensions

    SciTech Connect

    Schmitz, Fabian; Virnau, Peter

    2015-04-14

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The cases of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.

  19. The ensemble switch method for computing interfacial tensions.

    PubMed

    Schmitz, Fabian; Virnau, Peter

    2015-04-14

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The cases of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension. PMID:25877563
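
    The two records above describe a thermodynamic integration approach. As a rough illustration of the generic idea, not the ensemble switch machinery itself, the sketch below integrates hypothetical ensemble averages of dH/dλ over a coupling parameter λ with the trapezoidal rule; in a real study each average would come from a separate Monte Carlo run, and the interfacial area is an invented number.

```python
import numpy as np

# Hypothetical ensemble averages <dH/dlambda> at the coupling values
# lambda that switch the system between the two ensembles; in practice
# each average comes from a separate Monte Carlo run.
lam = np.linspace(0.0, 1.0, 11)
dH_dlam = 3.0 * lam**2 + 0.4           # stand-in data, not simulation output

# Thermodynamic integration via the trapezoidal rule.
delta_F = np.sum(0.5 * (dH_dlam[1:] + dH_dlam[:-1]) * np.diff(lam))

area = 64.0                             # hypothetical interfacial area
gamma = delta_F / (2.0 * area)          # a periodic slab has two interfaces
print(f"interfacial tension estimate: {gamma:.4f} (energy per unit area)")
```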

  20. Computing the crystal growth rate by the interface pinning method.

    PubMed

    Pedersen, Ulf R; Hummel, Felix; Dellago, Christoph

    2015-01-28

    An essential parameter for crystal growth is the kinetic coefficient given by the proportionality between supercooling and average growth velocity. Here, we show that this coefficient can be computed in a single equilibrium simulation using the interface pinning method where two-phase configurations are stabilized by adding a spring-like bias field coupling to an order-parameter that discriminates between the two phases. Crystal growth is a Smoluchowski process and the crystal growth rate can, therefore, be computed from the terminal exponential relaxation of the order parameter. The approach is investigated in detail for the Lennard-Jones model. We find that the kinetic coefficient scales as the inverse square-root of temperature along the high temperature part of the melting line. The practical usability of the method is demonstrated by computing the kinetic coefficient of the elements Na and Si from first principles. A generalized version of the method may be used for computing the rates of crystal nucleation or other rare events. PMID:25637966

  1. Computing the crystal growth rate by the interface pinning method

    NASA Astrophysics Data System (ADS)

    Pedersen, Ulf R.; Hummel, Felix; Dellago, Christoph

    2015-01-01

    An essential parameter for crystal growth is the kinetic coefficient given by the proportionality between supercooling and average growth velocity. Here, we show that this coefficient can be computed in a single equilibrium simulation using the interface pinning method where two-phase configurations are stabilized by adding a spring-like bias field coupling to an order-parameter that discriminates between the two phases. Crystal growth is a Smoluchowski process and the crystal growth rate can, therefore, be computed from the terminal exponential relaxation of the order parameter. The approach is investigated in detail for the Lennard-Jones model. We find that the kinetic coefficient scales as the inverse square-root of temperature along the high temperature part of the melting line. The practical usability of the method is demonstrated by computing the kinetic coefficient of the elements Na and Si from first principles. A generalized version of the method may be used for computing the rates of crystal nucleation or other rare events.
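
    The rate-extraction step described in the two records above lends itself to a brief sketch. Assuming a hypothetical time series of the pinned order parameter, the terminal exponential relaxation can be fitted to recover the relaxation time from which the growth kinetics follow; the data here are synthetic stand-ins, not simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, q_inf, dq, tau):
    # Terminal exponential relaxation of the order parameter Q(t).
    return q_inf + dq * np.exp(-t / tau)

# Synthetic, hypothetical trajectory of the pinned order parameter.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 200)
q = relaxation(t, 10.0, 3.0, 8.0) + 0.05 * rng.normal(size=t.size)

(q_inf, dq, tau), _ = curve_fit(relaxation, t, q, p0=(9.0, 2.0, 5.0))
print(f"relaxation time tau = {tau:.2f} (growth kinetics follow from 1/tau)")
```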

  2. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate its high performance and computational efficiency.
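
    The first stage, peak-picking in a harmonically grouped pitch energy spectrum, can be illustrated with a toy calculation. The sketch below substitutes a plain FFT magnitude spectrum for the RTFI and sums energy at harmonic positions of each candidate pitch; such a toy exhibits exactly the octave ambiguities that the paper's second stage (spectral irregularity and harmonic-structure checks) is designed to remove.

```python
import numpy as np
from scipy.signal import find_peaks

def pitch_salience(signal, fs, f_min=55.0, f_max=880.0, n_harmonics=5):
    """Toy pitch-energy spectrum: sum FFT energy at harmonic positions.

    A plain-FFT stand-in for the RTFI-based spectrum in the paper.
    """
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    candidates = np.arange(f_min, f_max, 1.0)
    salience = np.zeros_like(candidates)
    for i, f0 in enumerate(candidates):
        for h in range(1, n_harmonics + 1):
            salience[i] += spec[np.argmin(np.abs(freqs - h * f0))] ** 2
    return candidates, salience

# Hypothetical two-note mixture (220 Hz and 330 Hz).
fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
x = np.sin(2 * np.pi * 220 * t) + 0.8 * np.sin(2 * np.pi * 330 * t)

cand, sal = pitch_salience(x, fs)
peaks, _ = find_peaks(sal, height=0.5 * sal.max())
# Octave errors (e.g. 110 Hz) survive this stage; the paper removes them
# using spectral irregularity and instrument harmonic structure.
print("candidate pitches (Hz):", cand[peaks])
```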

  3. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

    The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for solving such problems.

  4. Computer-aided methods of determining thyristor thermal transients

    SciTech Connect

    Lu, E.; Bronner, G.

    1988-08-01

    An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs.
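
    The convolution integral method mentioned in this record is easy to demonstrate. In the sketch below, the junction temperature rise is the convolution of a power-loss waveform with the time derivative of the transient thermal impedance; the two-branch Foster model and the loss waveform are hypothetical stand-ins for a thyristor's measured curves.

```python
import numpy as np

# Hypothetical two-branch Foster model of the transient thermal
# impedance Zth(t) = sum_i R_i * (1 - exp(-t / tau_i))  [K/W].
R = np.array([0.02, 0.05])        # thermal resistances, K/W
tau = np.array([0.1, 1.0])        # time constants, s

dt = 0.01
t = np.arange(0.0, 5.0, dt)
dZth = np.sum((R / tau)[:, None] * np.exp(-t / tau[:, None]), axis=0)

# Hypothetical power-loss waveform: 1 kW on/off at 50% duty cycle.
P = 1000.0 * (np.sin(2 * np.pi * t) > 0)

# Convolution integral: junction temperature rise above the case.
dT = np.convolve(P, dZth)[: t.size] * dt
print(f"peak junction temperature rise: {dT.max():.1f} K")
```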

  5. INTERVAL SAMPLING METHODS AND MEASUREMENT ERROR: A COMPUTER SIMULATION

    PubMed Central

    Wirth, Oliver; Slaven, James; Taylor, Matthew A.

    2015-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. PMID:24127380
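
    The simulation logic the record describes can be reproduced on a small scale. The sketch below generates a random behavior stream, scores it with momentary time sampling, partial-interval recording, and whole-interval recording, and compares the three estimates against the true proportion; all durations and rates are arbitrary choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a behavior stream on a 1-ms grid: one 600-s observation
# period with randomly placed events of ~5 s mean duration.
dt, period = 0.001, 600.0
n = int(period / dt)
stream = np.zeros(n, dtype=bool)
for start in rng.uniform(0, period, size=40):
    dur = rng.exponential(5.0)
    stream[int(start / dt): int(min(start + dur, period) / dt)] = True

true_prop = stream.mean()  # true proportion of time the event occurs

interval = 10.0            # 10-s observation intervals
k = int(interval / dt)
chunks = stream[: n - n % k].reshape(-1, k)

mts = chunks[:, -1].mean()        # momentary time sampling: last instant
pir = chunks.any(axis=1).mean()   # partial interval: any occurrence
wir = chunks.all(axis=1).mean()   # whole interval: continuous occurrence

# PIR typically overestimates and WIR underestimates the true proportion.
print(f"true {true_prop:.3f}  MTS {mts:.3f}  PIR {pir:.3f}  WIR {wir:.3f}")
```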

  6. Public library computer training for older adults to access high-quality Internet health information.

    PubMed

    Xie, Bo; Bugg, Julie M

    2009-09-01

    An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at a state university. One hundred and thirty-one older adults aged 54-89 participated in the study between September 2007 and July 2008. Key findings include: a) participants had overwhelmingly positive perceptions of the training program; b) after learning about two NIH websites (http://nihseniorhealth.gov and http://medlineplus.gov) from the training, many participants started using these online resources to find high quality health and medical information and, further, to guide their decision-making regarding a health- or medically-related matter; and c) computer anxiety significantly decreased (p < .001) while computer interest and efficacy significantly increased (p = .001 and p < .001, respectively) from pre- to post-training, suggesting statistically significant improvements in computer attitudes between pre- and post-training. The findings have implications for public libraries, LIS academic programs, and other organizations interested in providing similar programs in their communities. PMID:20161649

  7. Public library computer training for older adults to access high-quality Internet health information

    PubMed Central

    Xie, Bo; Bugg, Julie M.

    2010-01-01

    An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at a state university. One hundred and thirty-one older adults aged 54–89 participated in the study between September 2007 and July 2008. Key findings include: a) participants had overwhelmingly positive perceptions of the training program; b) after learning about two NIH websites (http://nihseniorhealth.gov and http://medlineplus.gov) from the training, many participants started using these online resources to find high quality health and medical information and, further, to guide their decision-making regarding a health- or medically-related matter; and c) computer anxiety significantly decreased (p < .001) while computer interest and efficacy significantly increased (p = .001 and p < .001, respectively) from pre- to post-training, suggesting statistically significant improvements in computer attitudes between pre- and post-training. The findings have implications for public libraries, LIS academic programs, and other organizations interested in providing similar programs in their communities. PMID:20161649

  8. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  9. Viscous flow computations with the method of lattice Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Yu, Dazhi; Mei, Renwei; Luo, Li-Shi; Shyy, Wei

    2003-07-01

    The method of the lattice Boltzmann equation (LBE) is a kinetic-based approach for fluid flow computations. This method has been successfully applied to multi-phase and multi-component flows. To extend the application of LBE to high Reynolds number incompressible flows, some critical issues need to be addressed, notably flexible spatial resolution, boundary treatments for curved solid walls, dispersion and modes of relaxation, and turbulence modeling. Recent developments in these aspects are highlighted in this paper. These efforts include the study of force evaluation methods; the development of multi-block methods, which provide a means to satisfy different resolution requirements in the near-wall region and the far field while reducing the memory requirement and computational time; the progress in constructing second-order boundary conditions for curved solid walls; and the analyses of the single-relaxation-time and multiple-relaxation-time models in LBE. These efforts have led to successful applications of the LBE method to the simulation of incompressible laminar flows and have demonstrated the potential of applying the LBE method to higher Reynolds number flows. The progress in developing thermal and compressible LBE models and the applications of the LBE method in multi-phase flows, multi-component flows, particulate suspensions, turbulent flows, and micro-flows are reviewed.

  10. Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS.

    PubMed

    Jalali, Arash; Olabode, Olusegun A; Bell, Christopher M

    2012-01-01

    As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities. PMID:23569644

  11. Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS

    PubMed Central

    Jalali, Arash; Olabode, Olusegun A.; Bell, Christopher M.

    2012-01-01

    As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities. PMID:23569644

  12. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. The subdomains into which the original domain of definition is divided are assigned to independent processors, at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
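
    The preferred serial solver identified in this record, GMRES with ILU preconditioning, maps directly onto standard sparse linear algebra tooling. The sketch below uses SciPy's pointwise ILU (the paper used a block-ILU variant) on a hypothetical Poisson-like matrix standing in for a Newton-step Jacobian.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical sparse system standing in for one Newton-step Jacobian
# of a reacting-flow problem (here: a 2D Poisson-like operator).
n = 50
A = sp.diags([-1, -1, 4, -1, -1], [-n, -1, 0, 1, n],
             shape=(n * n, n * n), format="csc")
b = np.ones(n * n)

# Incomplete LU factorization used as a preconditioner (SciPy's spilu
# is a pointwise ILU, not the block-ILU of the paper).
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"GMRES info={info}")
```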

  13. Computing homography with RANSAC algorithm: a novel method of registration

    NASA Astrophysics Data System (ADS)

    Li, Xiaowei; Liu, Yue; Wang, Yongtian; Yan, Dayuan

    2005-02-01

    An AR (Augmented Reality) system can integrate computer-generated objects with the image sequences of real world scenes in either an off-line or a real-time way. Registration, or camera pose estimation, is one of the key techniques that determine its performance. Registration methods can be classified as model-based and move-matching. The former approach can accomplish relatively accurate registration results, but it requires a precise model of the scene, which is hard to obtain. The latter approach carries out registration by computing the ego-motion of the camera. Because it does not require prior knowledge of the scene, its registration results sometimes turn out to be less accurate. When the model defined is as simple as a plane, a mixed method is introduced to take advantage of the virtues of the two methods mentioned above. Although unexpected objects often occlude this plane in an AR system, one can still try to detect corresponding points with a contract-expand method, although this will introduce erroneous correspondences. Computing the homography with the RANSAC algorithm is used to overcome such shortcomings. Using the robustly estimated homography resulting from RANSAC, the camera projective matrix can be recovered and registration is thus accomplished even when the markers are lost in the scene.
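
    The core robust-estimation step is a one-liner in modern tooling, assuming OpenCV is available. The sketch below builds hypothetical planar correspondences, corrupts a fraction of them to mimic the erroneous matches mentioned above, and lets OpenCV's RANSAC-based findHomography recover the homography while flagging inliers.

```python
import numpy as np
import cv2

# Hypothetical matched feature points between two views of the plane
# (in practice these come from a detector/matcher and include outliers).
rng = np.random.default_rng(2)
src = rng.uniform(0, 640, size=(60, 2)).astype(np.float32)
H_true = np.array([[1.0, 0.05, 10.0],
                   [-0.03, 1.0, 5.0],
                   [1e-4, 0.0, 1.0]])
pts = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H_true).reshape(-1, 2)
pts[:10] += rng.uniform(-80, 80, size=(10, 2))  # corrupt 10 correspondences

# RANSAC rejects the erroneous correspondences while estimating H.
H, mask = cv2.findHomography(src, pts, cv2.RANSAC, ransacReprojThreshold=3.0)
print("inliers:", int(mask.sum()), "of", len(src))
```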

  14. Practical methods to improve the development of computational software

    SciTech Connect

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-07-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  15. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on the cascaded iterative Fourier transform and a public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending each pixel into 2×2 pixels, and each element in M1 and M2' was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying a public-key encryption algorithm, the key distribution was facilitated; moreover, compared with the image hiding method based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
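
    The RSA step applied to the second mask can be illustrated at textbook scale. The sketch below is a deliberately insecure toy (tiny primes, no padding) showing how quantized mask values, here invented numbers, would map to the encrypted mask M2'.

```python
# Textbook RSA on a toy scale (insecure; real use needs large primes
# and padding). Illustrates encrypting quantized phase-mask values.
p, q = 61, 53                 # small demo primes
n, phi = p * q, (p - 1) * (q - 1)
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent (Python 3.8+)

mask_m2 = [12, 200, 77, 42]   # hypothetical quantized phase values (< n)
cipher = [pow(m, e, n) for m in mask_m2]      # encrypted mask M2'
recovered = [pow(c, d, n) for c in cipher]    # decryption with private key

assert recovered == mask_m2
print("M2' =", cipher)
```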

  16. Advanced Computational Aeroacoustics Methods for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane (Technical Monitor); Tam, Christopher

    2003-01-01

    Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are, invariably, used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative. Dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated, and it can be improved by adopting an upwinding strategy.

  17. Design and Analysis of Computational Methods for Structural Acoustics

    NASA Astrophysics Data System (ADS)

    Grosh, Karl

    The application of finite element methods to problems in structural acoustics (the vibration of an elastic structure coupled to an acoustic medium) is considered. New methods are developed which yield dramatic improvement in accuracy over the standard Galerkin finite element approach. The goal of the new methods is to decrease the computational burden required to achieve a desired accuracy level at a particular frequency thereby enabling larger scale, higher frequency computations for a given platform. A new class of finite element methods, Galerkin Generalized Least-Squares (GGLS) methods, are developed and applied to model the in vacuo and fluid-loaded vibration response of Reissner-Mindlin plates. Through judicious selection of the design parameters inherent to GGLS methods, this formulation provides a consistent framework for enhancing the accuracy of finite elements. An optimal GGLS method is designed such that the complex wave-number finite element dispersion relations are identical to the analytic relations. Complex wave-number dispersion analysis and numerical experiments demonstrate the dramatic superiority of the new optimal method over the standard finite element approach for coupled and uncoupled plate vibrations. The new method provides for a dramatic decrease in discretization requirements over previous methods. The canonical problem of a baffled, fluid-loaded, finite cylindrical shell is also studied. The finite element formulation for this problem is developed and the results are compared to an analytic solution based on an expansion of the displacement using in vacuo mode shapes. A novel high resolution parameter estimation technique, based on Prony's method, is used to obtain the complex wave-number dispersion relations for the finite structure. The finite element dispersion relations enable the analyst to pinpoint the source of errors and form discretization rules. The stationary phase approximation is used to obtain the dependence of the far field pressure on the surface displacement. This analysis allows for the study of the propagation of errors into the far field as well as the determination of important mechanisms of sound radiation.

  18. Novel Methods for Communicating Plasma Science to the General Public

    NASA Astrophysics Data System (ADS)

    Zwicker, Andrew; Merali, Aliya; Wissel, S. A.; Delooper, John

    2012-10-01

    The broader implications of Plasma Science remain an elusive topic that the general public rarely discusses, regardless of their relevance to energy, the environment, and technology. Recently, we have looked beyond print media for methods to reach large numbers of people in creative and informative ways. These have included video, art, images, and music. For example, our submission to the "What is a Flame?" contest was ranked in the top 15 out of 800 submissions. Images of plasmas have won 3 out of 5 of the Princeton University "Art of Science" competitions. We use a plasma speaker to teach students of all ages about sound generation and plasma physics. We report on the details of each of these efforts, as well as future videos and animations under development.

  19. SAR/QSAR methods in public health practice

    SciTech Connect

    Demchuk, Eugene; Ruiz, Patricia; Chou, Selene; Fowler, Bruce A.

    2011-07-15

    Methods of (Quantitative) Structure-Activity Relationship ((Q)SAR) modeling play an important and active role in ATSDR programs in support of the Agency mission to protect human populations from exposure to environmental contaminants. They are used for cross-chemical extrapolation to complement the traditional toxicological approach when chemical-specific information is unavailable. SAR and QSAR methods are used to investigate adverse health effects and exposure levels, bioavailability, and pharmacokinetic properties of hazardous chemical compounds. They are applied as a part of an integrated systematic approach in the development of Health Guidance Values (HGVs), such as ATSDR Minimal Risk Levels, which are used to protect populations exposed to toxic chemicals at hazardous waste sites. (Q)SAR analyses are incorporated into ATSDR documents (such as the toxicological profiles and chemical-specific health consultations) to support environmental health assessments, prioritization of environmental chemical hazards, and to improve study design, when filling the priority data needs (PDNs) as mandated by Congress, in instances when experimental information is insufficient. These cases are illustrated by several examples, which explain how ATSDR applies (Q)SAR methods in public health practice.

  20. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    SciTech Connect

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  1. Characterization of Meta-Materials Using Computational Electromagnetic Methods

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Shin, Joon

    2005-01-01

    An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, a Finite Element Methodology (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane wave incidence. For efficient computation of the reflection and transmission over a wide frequency band through a meta-material, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicolson-Ross method and Genetic Algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.

  2. Computer processing improves hydraulics optimization with new methods

    SciTech Connect

    Gavignet, A.A.; Wick, C.J.

    1987-12-01

    In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
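
    The polynomial-fitting step this record describes is a small exercise in standard numerics. The sketch below fits a hypothetical rheogram with a cubic and evaluates shear stress and apparent viscosity at an intermediate shear rate; deriving friction losses and settling velocities from the coefficients, as the paper does, would build on top of such a fit.

```python
import numpy as np

# Hypothetical rheogram: shear rate (1/s) vs shear stress (Pa) measured
# over a wide range of shear rates for a drilling fluid.
shear_rate = np.array([5.1, 10.2, 170.0, 340.0, 511.0, 1022.0])
shear_stress = np.array([3.8, 5.5, 22.0, 33.5, 43.0, 66.0])

# Fit the flow curve with a polynomial; the paper derives friction-loss
# and settling-velocity relations from such polynomial coefficients.
coeffs = np.polyfit(shear_rate, shear_stress, deg=3)
flow_curve = np.poly1d(coeffs)

gamma = 250.0  # shear rate of interest, 1/s
print(f"stress at {gamma}/s: {flow_curve(gamma):.1f} Pa, "
      f"apparent viscosity: {1000 * flow_curve(gamma) / gamma:.1f} mPa.s")
```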

  3. Computational methods. [Calculation of dynamic loading to offshore platforms

    SciTech Connect

    Maeda, H. (Inst. of Industrial Science)

    1993-02-01

    With regard to computational methods for hydrodynamic forces, the identification of marine hydrodynamics in offshore technology is discussed first. Then general computational methods, the state of the art, and uncertainties in flow problems in offshore technology are presented, in which developed, developing and undeveloped problems are categorized, and future work follows. Marine hydrodynamics consists of water surface and underwater fluid dynamics. Marine hydrodynamics covers not only hydrodynamics proper but also aerodynamics, such as wind loads or current-wave-wind interaction; hydrodynamics such as cavitation, underwater noise, and multi-phase flow, such as two-phase flow in pipes, air bubbles in water, or surface and internal waves; and magneto-hydrodynamics, such as propulsion due to superconductivity. Among these, two key concepts are focused on in the identification of marine hydrodynamics in offshore technology: free surface and vortex shedding.

  4. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
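
    A stripped-down, non-adaptive version of the importance sampling idea can be sketched briefly. Instead of the paper's adaptively grown sampling domain, the example below fixes the sampling density at a standard normal shifted toward the design point of an invented linear limit state, and re-weights the failure indicator accordingly.

```python
import numpy as np
from scipy import stats

# Limit state g(x) < 0 defines failure; standard-normal inputs.
def g(x):
    return 4.0 - x.sum(axis=1) / np.sqrt(x.shape[1])  # reliability index 4

rng = np.random.default_rng(3)
n, dim = 100_000, 2

# Importance density: standard normal shifted toward the design point
# (a fixed stand-in for the paper's adaptive sampling domain).
shift = np.full(dim, 4.0 / np.sqrt(dim))
x = rng.normal(loc=shift, size=(n, dim))

# Likelihood ratio between the true density and the sampling density.
w = np.prod(stats.norm.pdf(x) / stats.norm.pdf(x - shift), axis=1)
pf = np.mean((g(x) < 0) * w)
print(f"failure probability: {pf:.2e} (exact: {stats.norm.sf(4.0):.2e})")
```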

  5. A hierarchical method for molecular docking using cloud computing.

    PubMed

    Kang, Ling; Guo, Quan; Wang, Xicheng

    2012-11-01

    Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small molecular databases using cloud computing is proposed. This method is called the hierarchical method for molecular docking and can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on the different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs. PMID:23017886

  6. A fast vortex method for computing 2D viscous flow

    SciTech Connect

    Baden, S.B.; Puckett, E.G.

    1990-12-01

    The authors present a fast version of the random vortex method for computing incompressible, viscous flow at large Reynolds numbers. The basis of this method is Anderson's method of local corrections and similar ideas for handling the potential and boundary layer flows. The goal of these ideas is to reduce the cost involved in computing the velocity field at each time step from being quadratic to linear as a function of the number of vortex elements. They present the results of a numerical study of the flow in a closed box due to a vortex fixed at its center. The results demonstrate that the addition of the viscous portions of the random vortex method to the method of local corrections does not add appreciably to the cost. Furthermore, the cost of the resulting method is linear when O(10^4) vortex elements are used, in spite of the fact that the majority of these elements lie in a thin band adjacent to the boundary.

  7. Hindered settling computations using a parallel boundary element method

    SciTech Connect

    Ingber, M.S.; Womble, D.E.

    1994-07-01

    This paper presents a parallel implementation of the boundary element method (BEM) for multiple instruction multiple data (MIMD) computer architectures to determine the hindered settling function for a suspension of sedimenting rigid particles in Stokes flow. The hindered settling function is a measure of the average sedimentation velocity of the suspension. This function can be determined numerically by performing statistical analyses of several random realizations of a physical system characterized by a set of defining parameters. These defining parameters can include the volume fraction of the solid phase, shape factors, orientation characteristics, and others. The boundary element method is particularly well suited for studying such systems because of the simplification in the discretization associated with the method. However, as the number of solid particles to be modeled is increased so are the computational demands. Parallel computation offers the opportunity to model systems of greater complexity. We discuss a parallel boundary element formulation based on the torus-wrap mapping. In this approach, blocks of the coefficient matrix associated with the discretized boundary element equations are assigned to processors as opposed to more traditional parallel boundary element implementations where rows or columns are assigned to processors. The torus-wrap mapping can be shown to minimize the communication volume between processors during the LU factorization. Therefore, the present formulation scales well with increases in the number of processors.

  8. Secure encapsulation and publication of biological services in the cloud computing environment.

    PubMed

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906

  9. Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

    PubMed Central

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906

  10. On computer-intensive simulation and estimation methods for rare-event analysis in epidemic models.

    PubMed

    Clémençon, Stéphan; Cousien, Anthony; Felipe, Miraine Dávila; Tran, Viet Chi

    2015-12-10

    This article focuses, in the context of epidemic models, on rare events that may possibly correspond to crisis situations from the perspective of public health. In general, no closed analytic form for their occurrence probabilities is available, and crude Monte Carlo procedures fail. We show how recent intensive computer simulation techniques, such as interacting branching particle methods, can be used for estimation purposes, as well as for generating model paths that correspond to realizations of such events. Applications of these simulation-based methods to several epidemic models fitted from real datasets are also considered and discussed thoroughly. PMID:26242476

  11. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1971-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function to the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single tube forced flow boiler with inserts are given.
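
    The fitting loop this record describes, adjusting transfer-function parameters until the model locus resembles the measured locus, can be approximated with a modern nonlinear least-squares call. The sketch below fits a hypothetical first-order-plus-dead-time model to synthetic frequency response data; the boiler study's penalized performance measure is replaced by a plain residual.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, w):
    K, tau, T = params  # gain, time constant, dead time
    return K / (1 + 1j * w * tau) * np.exp(-1j * w * T)

# Hypothetical frequency-response measurements of the system.
w = np.logspace(-2, 1, 30)
rng = np.random.default_rng(4)
meas = model([2.0, 5.0, 1.0], w) * (1 + 0.02 * rng.normal(size=w.size))

def residual(params):
    # Stack real and imaginary parts into one real residual vector.
    err = model(params, w) - meas
    return np.concatenate([err.real, err.imag])

fit = least_squares(residual, x0=[1.0, 1.0, 0.5], bounds=(0, np.inf))
print("K, tau, T =", np.round(fit.x, 2))
```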

  12. A simple method for computing exact probabilities of mutation numbers.

    PubMed

    Uyenoyama, Marcy K; Takebayashi, Naoki

    2004-05-01

    We describe a method for the recursive computation of exact probability distributions for the number of neutral mutations segregating in samples of arbitrary size and configuration. Construction of the recursions requires only characterization of evolutionary changes as a Markov process and determination of one-step transition matrices. We address the pattern of nucleotide diversity at a neutral marker locus linked to a determinant of mating type. Under a reformulation of parameters, the method also applies directly to metapopulation models with island migration among demes. Characterization of complete probability distributions facilitates parameter estimation and hypothesis testing by likelihood- as well as moment-based approaches. PMID:15066423

  13. Informed public choices for low-carbon electricity portfolios using a computer decision tool.

    PubMed

    Mayer, Lauren A Fleishman; Bruine de Bruin, Wändi; Morgan, M Granger

    2014-04-01

    Reducing CO2 emissions from the electricity sector will likely require policies that encourage the widespread deployment of a diverse mix of low-carbon electricity generation technologies. Public discourse informs such policies. To make informed decisions and to productively engage in public discourse, citizens need to understand the trade-offs between electricity technologies proposed for widespread deployment. Building on previous paper-and-pencil studies, we developed a computer tool that aimed to help nonexperts make informed decisions about the challenges faced in achieving a low-carbon energy future. We report on an initial usability study of this interactive computer tool. After providing participants with comparative and balanced information about 10 electricity technologies, we asked them to design a low-carbon electricity portfolio. Participants used the interactive computer tool, which constrained portfolio designs to be realistic and yield low CO2 emissions. As they changed their portfolios, the tool updated information about projected CO2 emissions, electricity costs, and specific environmental impacts. As in the previous paper-and-pencil studies, most participants designed diverse portfolios that included energy efficiency, nuclear, coal with carbon capture and sequestration, natural gas, and wind. Our results suggest that participants understood the tool and used it consistently. The tool may be downloaded from http://cedmcenter.org/tools-for-cedm/informing-the-public-about-low-carbon-technologies/ . PMID:24564708

  14. On a method computing transient wave propagation in ionospheric regions

    NASA Technical Reports Server (NTRS)

    Gray, K. G.; Bowhill, S. A.

    1978-01-01

    A consequence of an exoatmospheric nuclear burst is an electromagnetic pulse (EMP) radiated from it. In a region far enough away from the burst, where nonlinear effects can be ignored, the EMP can be represented by a large-amplitude, narrow-time-width plane-wave pulse. If the ionosphere intervenes between the origin and destination of the EMP, frequency dispersion can cause significant changes in the original pulse upon reception. A method of computing these dispersive effects on transient wave propagation is summarized. The method described is different from the standard transform techniques and provides physical insight into the transient wave process. The method, although exact, can be used to approximate the early-time transient response of an ionospheric region by a simple integration, with only explicit knowledge of the electron density, electron collision frequency, and electron gyrofrequency required. As an illustration, the method is applied to a simple example and contrasted with the corresponding transform solution.

  15. Comparison of different methods for shielding design in computed tomography.

    PubMed

    Ciraj-Bjelac, O; Arandjic, D; Kosutic, D

    2011-09-01

    The purpose of this work is to compare different methods for shielding calculation in computed tomography (CT). The BIR-IPEM (British Institute of Radiology and Institute of Physics and Engineering in Medicine) and NCRP (National Council on Radiation Protection) methods were used for shielding thickness calculation. Scattered dose levels and calculated barrier thicknesses were also compared with those obtained by scatter dose measurements in the vicinity of a dedicated CT unit. The minimal requirement for protective barriers based on the BIR-IPEM method ranged between 1.1 and 1.4 mm of lead, demonstrating an underestimation of up to 20% and an overestimation of up to 30% when compared with thicknesses based on measured dose levels. For the NCRP method, calculated thicknesses were 33% higher (27-42%). BIR-IPEM-based results were comparable with values based on scattered dose measurements, while results obtained using the NCRP methodology demonstrated an overestimation of the minimally required barrier thickness. PMID:21743070
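
    Both calculation routes ultimately reduce to finding a barrier thickness whose broad-beam transmission meets a design dose limit. The sketch below shows that reduction with a single tenth-value-layer model; every number in it (dose limit, unshielded kerma, TVL) is an illustrative assumption, not a value taken from either methodology.

```python
import numpy as np

# Illustrative barrier sizing: find the lead thickness whose broad-beam
# transmission meets a design dose limit. All numbers are hypothetical.
P = 0.02      # design limit behind the barrier, mGy/week
K = 12.0      # unshielded weekly scattered air kerma at the point, mGy
B = P / K     # required transmission factor

TVL = 0.6     # assumed tenth-value layer of lead for CT scatter, mm
thickness = TVL * np.log10(1.0 / B)
print(f"required transmission {B:.1e} -> about {thickness:.1f} mm Pb")
```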

  16. Optimum threshold selection method of centroid computation for Gaussian spot

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2015-10-01

    Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Besides, two optimum threshold selection methods are introduced: TmCoG (using m% of the maximum intensity of the spot as the threshold) and TkCoG (using μn + kσn as the threshold, where μn and σn are the mean value and standard deviation of the background noise). First, their impact on the detection error under various SNR conditions is simulated to determine how to choose the value of k or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
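
    The TmCoG variant is simple to state in code. The sketch below thresholds at m% of the spot maximum, subtracts the threshold (one common convention; implementations differ on whether to subtract or merely mask), and computes the centre of gravity of a synthetic noisy Gaussian spot.

```python
import numpy as np

def centroid_tm_cog(img, m=30.0):
    """TmCoG: threshold at m% of the spot maximum, then centre of gravity."""
    t = (m / 100.0) * img.max()
    w = np.where(img > t, img - t, 0.0)  # subtract threshold, clip negatives
    ys, xs = np.indices(img.shape)
    return (w * xs).sum() / w.sum(), (w * ys).sum() / w.sum()

# Hypothetical noisy Gaussian spot centred at (12.3, 8.7).
rng = np.random.default_rng(5)
y, x = np.indices((24, 24))
img = np.exp(-((x - 12.3) ** 2 + (y - 8.7) ** 2) / (2 * 2.0 ** 2))
img += 0.02 * rng.normal(size=img.shape)

print("estimated centre (x, y):", np.round(centroid_tm_cog(img), 2))
```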

  17. Evolutionary computational methods to predict oral bioavailability QSPRs.

    PubMed

    Bains, William; Gilbert, Richard; Sviridenko, Lilya; Gascon, Jose-Miguel; Scoffin, Robert; Birchall, Kris; Harvey, Inman; Caldwell, John

    2002-01-01

    This review discusses evolutionary and adaptive methods for predicting oral bioavailability (OB) from chemical structure. Genetic Programming (GP), a specific form of evolutionary computing, is compared with some other advanced computational methods for OB prediction. The results show that classifying drugs into 'high' and 'low' OB classes on the basis of their structure alone is a solvable problem, and initial models are already producing output that would be useful for pharmaceutical research. The results also suggest that quantitative prediction of OB will be tractable. Critical aspects of the solution will involve the use of techniques that can: (i) handle problems with a very large number of variables (high dimensionality); (ii) cope with 'noisy' data; and (iii) implement binary choices to sub-classify molecules with behaviors that are qualitatively different. Detailed quantitative predictions will emerge from more refined models that are hybrids derived from mechanistic models of the biology of oral absorption and the power of advanced computing techniques to predict the behavior of the components of those models in silico. PMID:11865672

  18. Approximation method to compute domain related integrals in structural studies

    NASA Astrophysics Data System (ADS)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2015-11-01

    Various engineering calculations use integral calculus in their theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations, e.g. in strength of materials the bending moment may be computed at some discrete points using the graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the area of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of the work is to present our studies on the calculus of integrals over transverse section domains, computer-aided solutions and a generalizing method. The aim of our research is to create general computer-based methods to perform the calculations in structural studies. Thus, we define a Boolean algebra which operates with ‘simple’-shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every ‘simple’ shape (-1 for the shapes to be subtracted). By a ‘simple’ or ‘basic’ shape we mean either a shape for which there are direct calculus relations, or a domain whose frontier is approximated by known functions, with the corresponding calculus carried out using an algorithm. The ‘basic’ shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of ‘basic’ shapes included rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain triangulation methods suggested that another ‘basic’ shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the calculus of the integrals associated with transverse section problems. Thus we use a virtual rectangle framing the triangle, generating supplementary right-angled triangles. The sign of the rectangle and the signs of the supplementary triangles are conditioned by the sign of the initial triangle. In this way, a triangle in general position, for which we have direct calculus relations, may be used to generate the discretization of any domain for the integrals associated with transverse sections. A significant consequence of the paper is the opportunity to create modern computer-aided engineering applications for structural studies which use an intelligent applied-mathematics background, modern informatics technologies and advanced computing techniques such as parallelization of the calculations.
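
    The signed-shape algebra described above can be demonstrated on a polygonal cross-section. In the sketch below, each edge forms a triangle with the origin whose signed area adds or subtracts, in the spirit of the paper's signed 'basic' shapes; the T-profile outline is a hypothetical example.

```python
import numpy as np

def section_properties(poly):
    """Area and first moments of a polygon via signed triangles.

    Each edge (v_i, v_j) forms a triangle with the origin whose signed
    area adds or subtracts, mirroring the signed 'basic shape' algebra.
    """
    A = Sx = Sy = 0.0
    for (x0, y0), (x1, y1) in zip(poly, np.roll(poly, -1, axis=0)):
        a = 0.5 * (x0 * y1 - x1 * y0)   # signed triangle area
        A += a
        Sx += a * (y0 + y1) / 3.0       # first moment about the x-axis
        Sy += a * (x0 + x1) / 3.0       # first moment about the y-axis
    return A, Sx, Sy

# Hypothetical T-section outline, counter-clockwise vertices.
poly = np.array([(0, 0), (6, 0), (6, 1), (4, 1),
                 (4, 4), (2, 4), (2, 1), (0, 1)], dtype=float)
A, Sx, Sy = section_properties(poly)
print(f"area={A}, centroid=({Sy / A:.2f}, {Sx / A:.2f})")
```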

  19. New developments in the multiscale hybrid energy density computational method

    NASA Astrophysics Data System (ADS)

    Min, Sun; Shanying, Wang; Dianwu, Wang; Chongyu, Wang

    2016-01-01

    Further developments of the hybrid multiscale energy density method are proposed on the basis of our previous papers. The key points are as follows. (i) The theoretical method for determining the weight parameter in the energy coupling equation of the transition region in the multiscale model is given by constructing underdetermined equations. (ii) Applying the developed mathematical method, the weight parameters have been obtained and used to treat some problems in homogeneous charge-density systems, which are directly related to multiscale science. (iii) A theoretical algorithm has also been presented for treating non-homogeneous charge-density systems. The key to these computational methods is the decomposition of the electrostatic energy in the total energy of density functional theory for probing the spanning characteristic at the atomic scale, layer by layer, by which the choice of chemical elements and the defect complex effect can be understood more deeply. (iv) The numerical computational program and design have also been presented. Project supported by the National Basic Research Program of China (Grant No. 2011CB606402) and the National Natural Science Foundation of China (Grant No. 51071091).

  20. A numerical method to compute interior transmission eigenvalues

    NASA Astrophysics Data System (ADS)

    Kleefeld, Andreas

    2013-10-01

    In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view, existing methods are very expensive and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues lie close together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than that of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet; the proof of existence of such eigenvalues is still an open problem. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber-Krahn type inequalities for larger transmission eigenvalues that are not yet available.
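    The contour-integral idea can be sketched with Beyn-style moment matrices; the toy below is our illustration (not the paper's boundary-element code) and recovers the eigenvalues of a matrix-valued function T(z) inside a circular contour:

```python
import numpy as np

def contour_eigenvalues(T, center, radius, n_probe=4, n_quad=64, tol=1e-8):
    """Beyn-style contour method: n_probe must be >= the number of
    eigenvalues enclosed by the contour."""
    n = T(center).shape[0]
    V = np.random.default_rng(0).standard_normal((n, n_probe))
    A0 = np.zeros((n, n_probe), complex)       # zeroth moment of T(z)^-1 V
    A1 = np.zeros((n, n_probe), complex)       # first moment
    for k in range(n_quad):                    # trapezoid rule on the circle
        t = 2j * np.pi * k / n_quad
        z = center + radius * np.exp(t)
        X = np.linalg.solve(T(z), V)           # T(z)^{-1} V
        w = radius * np.exp(t) / n_quad        # dz / (2*pi*i) weight
        A0 += w * X
        A1 += w * z * X
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))            # numerical rank = # eigenvalues
    U, s, Wh = U[:, :r], s[:r], Wh[:r]
    B = U.conj().T @ A1 @ Wh.conj().T / s      # small linearized problem
    return np.linalg.eigvals(B)

# Check: T(z) = A - z*I recovers the ordinary eigenvalues inside the circle.
A = np.diag([1.0, 2.0, 5.0])
print(contour_eigenvalues(lambda z: A - z * np.eye(3), center=1.5, radius=1.0))
```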

  1. Public involvement in multi-objective water level regulation development projects-evaluating the applicability of public involvement methods

    SciTech Connect

    Vaentaenen, Ari . E-mail: armiva@utu.fi; Marttunen, Mika . E-mail: Mika.Marttunen@ymparisto.fi

    2005-04-15

    Public involvement is a process that engages the public in the decision making of an organization, for example a municipality or a corporation. It has developed into a widely accepted and recommended policy in environment-altering projects. The EU Water Framework Directive (WFD) came into force in 2000 and stresses the importance of public involvement in composing river basin management plans. Therefore, the need to develop public involvement methods for different situations and circumstances is evident. This paper describes how various public involvement methods have been applied in a development project involving the most heavily regulated lake in Finland. The objective of the project was to assess the positive and negative impacts of regulation and to find possibilities for alleviating the adverse impacts on recreational use and the aquatic ecosystem. An exceptional effort was made towards public involvement, which was closely connected to planning and decision making. The applied methods were (1) steering group work, (2) survey, (3) dialogue, (4) theme interviews, (5) public meeting and (6) workshops. The information gathered using these methods was utilized in different stages of the project, e.g., in identifying the regulation impacts, comparing alternatives and compiling the recommendations for regulation development. After describing our case and the results from the applied public involvement methods, we discuss our experiences and the feedback from the public. We also critically evaluate our own success in coping with public involvement challenges. In addition, we present general recommendations for dealing with these problematic issues based on our experiences, which provide new insights for applying various public involvement methods in multi-objective decision-making projects.

  2. Computation of multi-material interactions using point method

    SciTech Connect

    Zhang, Duan Z; Ma, Xia; Giguere, Paul T

    2009-01-01

    Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are applied to problems of solid deformation, state variables such as stress and damage need to be advected, causing significant numerical diffusion error. When Lagrangian methods are applied to problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method the Eulerian mesh stays fixed and Lagrangian particles move through it as the material deforms. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun. v. 87, pp. 236) provides a mathematical foundation for an improved version, the material point method (MPM). The unique advantages of MPM have led to many attempts to apply the method to problems involving the interaction of different materials, such as fluid-structure interactions. These are multiphase flow or multimaterial deformation problems, in which pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the scheme used in Eulerian methods for multiphase flows is applied unchanged. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to higher order of accuracy in the sense of weak solutions for the continuity equations. Numerical examples are given to demonstrate the new scheme.
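    For readers unfamiliar with MPM/PIC, the following 1-D Python sketch shows the basic particle-grid transfers the abstract refers to (an illustration only; it does not reproduce the paper's higher-order pressure scheme):

```python
import numpy as np

def mpm_step(xp, vp, mp, grid_x, dt, body_force=0.0):
    """One MPM-style step: scatter particle mass/momentum to a fixed grid,
    update grid velocities, gather back and move the particles."""
    dx = grid_x[1] - grid_x[0]
    mg = np.zeros_like(grid_x)                 # grid mass
    pg = np.zeros_like(grid_x)                 # grid momentum
    i = np.clip(((xp - grid_x[0]) // dx).astype(int), 0, len(grid_x) - 2)
    w = (xp - grid_x[i]) / dx                  # linear hat-function weights
    for a, s in ((i, 1.0 - w), (i + 1, w)):    # scatter: particles -> grid
        np.add.at(mg, a, s * mp)
        np.add.at(pg, a, s * mp * vp)
    vg = np.where(mg > 0, pg / np.maximum(mg, 1e-30), 0.0)
    vg = vg + dt * body_force                  # grid update (forces go here)
    vp_new = (1.0 - w) * vg[i] + w * vg[i + 1] # gather: grid -> particles
    xp_new = xp + dt * vp_new                  # particles move, mesh does not
    return xp_new, vp_new
```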

  3. An analytical method for computing atomic contact areas in biomolecules.

    PubMed

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc. PMID:22965816
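    The cap construction can be illustrated with the standard sphere-sphere intersection formula; this small Python sketch is ours, not part of BallContact (which further partitions overlapping caps by a spherical Laguerre diagram), and computes the raw cap area that atom j carves on atom i:

```python
import math

def cap_area_on_i(ri, rj, d):
    """Area of the spherical cap carved on sphere i (radius ri) by sphere j
    (radius rj) at center distance d, valid for |ri - rj| < d < ri + rj."""
    hi = ri - (d * d + ri * ri - rj * rj) / (2.0 * d)   # cap height on i
    return 2.0 * math.pi * ri * hi                      # A = 2*pi*r*h

# Illustrative van der Waals-sized spheres (angstroms); prints area in A^2.
print(cap_area_on_i(1.8, 1.5, 2.0))
```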

  4. A computational method for quantifying morphological variation in scleractinian corals

    NASA Astrophysics Data System (ADS)

    Kruszyński, K. J.; Kaandorp, J. A.; van Liere, R.

    2007-12-01

    Morphological variation in marine sessile organisms is frequently related to environmental factors. Quantifying such variation is relevant in a range of ecological studies. For example, analyzing the growth form of fossil organisms may indicate the state of the physical environment in which the organism lived. A quantitative morphological comparison is important in studies where marine sessile organisms are transplanted from one environment to another. This study presents a method for the quantitative analysis of three-dimensional (3D) images of scleractinian corals obtained with X-ray Computed Tomography scanning techniques. The advantage of Computed Tomography scanning is that a full 3D image of a complex branching object, including internal structures, can be obtained with a very high precision. There are several complications in the analysis of this data set. In the analysis of a complex branching object, landmark-based methods usually do not work and different approaches are required where various artifacts (for example cavities, holes in the skeleton, scanning artifacts, etc.) in the data set have to be removed before the analysis. A method is presented, which is based on the construction of a medial axis and a combination of image-processing techniques for the analysis of a 3D image of a complex branching object where the complications mentioned above can be overcome. The method is tested on a range of 3D images of samples of the branching scleractinian coral Madracis mirabilis collected at different depths. It is demonstrated that the morphological variation of these samples can be quantified, and that biologically relevant morphological characteristics, like branch-spacing and surface/volume ratios, can be computed.

  5. COMSAC: Computational Methods for Stability and Control. Part 2

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamics (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Applied in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance prediction and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is the prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  6. Computational methods of the Advanced Fluid Dynamics Model

    SciTech Connect

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.

  7. PREFACE: Theory, Modelling and Computational methods for Semiconductors

    NASA Astrophysics Data System (ADS)

    Migliorato, Max; Probert, Matt

    2010-04-01

    These conference proceedings contain the written papers of the contributions presented at the 2nd International Conference on: Theory, Modelling and Computational methods for Semiconductors. The conference was held at St William's College, York, UK on 13th-15th Jan 2010. The previous conference in this series took place in 2008 at the University of Manchester, UK. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in semiconductor science and technology, where there is a substantial potential for time saving in R&D. The development of high speed computer architectures is finally allowing the routine use of accurate methods for calculating the structural, thermodynamic, vibrational and electronic properties of semiconductors and their heterostructures. This workshop ran for three days, with the objective of bringing together UK and international leading experts in the field of theory of group IV, III-V and II-VI semiconductors together with postdocs and students in the early stages of their careers. The first day focused on providing an introduction and overview of this vast field, aimed particularly at students at this influential point in their careers. We would like to thank all participants for their contribution to the conference programme and these proceedings. We would also like to acknowledge the financial support from the Institute of Physics (Computational Physics group and Semiconductor Physics group), the UK Car-Parrinello Consortium, Accelrys (distributors of Materials Studio) and Quantumwise (distributors of Atomistix). The Editors Acknowledgements Conference Organising Committee: Dr Matt Probert (University of York) and Dr Max Migliorato (University of Manchester) Programme Committee: Dr Marco Califano (University of Leeds), Dr Jacob Gavartin (Accelrys Ltd, Cambridge), Dr Stanko Tomic (STFC Daresbury Laboratory), Dr Gabi Slavcheva (Imperial College London) Proceedings edited and compiled by Dr Max Migliorato and Dr Matt Probert

  8. A Computational Method for Identifying Yeast Cell Cycle Transcription Factors.

    PubMed

    Wu, Wei-Sheng

    2016-01-01

    The eukaryotic cell cycle is a complex process and is precisely regulated at many levels. Many genes specific to the cell cycle are regulated transcriptionally and are expressed just before they are needed. To understand the cell cycle process, it is important to identify the cell cycle transcription factors (TFs) that regulate the expression of cell cycle-regulated genes. Here, we describe a computational method to identify cell cycle TFs in yeast by integrating current ChIP-chip, mutant, transcription factor-binding site (TFBS), and cell cycle gene expression data. For each identified cell cycle TF, our method also assigned specific cell cycle phases in which the TF functions and identified the time lag for the TF to exert regulatory effects on its target genes. Moreover, our method can identify novel cell cycle-regulated genes as a by-product. PMID:26254926

  9. On implicit Runge-Kutta methods for parallel computations

    NASA Technical Reports Server (NTRS)

    Keeling, Stephen L.

    1987-01-01

    Implicit Runge-Kutta methods which are well suited for parallel computations are characterized. It is claimed that such methods are, first of all, those for which the associated rational approximation to the exponential has distinct poles; these are called multiply implicit (MIRK) methods. Also, because of the so-called order reduction phenomenon, there is reason to require that these poles be real. It is then proved that a necessary condition for a q-stage, real MIRK to be A₀-stable with maximal order q + 1 is that q = 1, 2, 3, or 5. Nevertheless, it is shown that for every positive integer q, there exists a q-stage, real MIRK which is I-stable with order q. Finally, some useful examples of algebraically stable MIRKs are given.
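    The parallelism argument can be made explicit (standard reasoning, stated here for orientation rather than quoted from the paper): if the stability function R(z) ≈ e^z has q distinct poles μ_1, …, μ_q, it admits a partial-fraction form, and the update for the linear test problem y' = Jy splits into q independent resolvent solves:

```latex
R(z) = c_0 + \sum_{i=1}^{q} \frac{c_i}{\mu_i - z},
\qquad
y_{n+1} = R(hJ)\,y_n = c_0\,y_n + \sum_{i=1}^{q} c_i\,(\mu_i I - hJ)^{-1} y_n ,
```

    so each solve with (μ_i I - hJ) can be assigned to its own processor, and real poles keep every solve in real arithmetic.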

  10. Comparison of different simulation methods for multiplane computer generated holograms

    NASA Astrophysics Data System (ADS)

    Kämpfe, Thomas; Hudelist, Florian; Waddie, Andrew J.; Taghizadeh, Mohammad R.; Kley, Ernst-Bernhard; Tünnermann, Andreas

    2008-04-01

    Computer generated holograms (CGHs) are used to transform an incoming light distribution into a desired output. Recently, multiplane CGHs have become of interest, since they allow the combination of well-known design methods for thin CGHs with the unique properties of thick holograms. Iterative methods like the iterative Fourier transform algorithm (IFTA) require an operator that transforms a required optical function into an actual physical structure (e.g. a height structure). Commonly the thin element approximation (TEA) is used for this purpose. Together with the angular spectrum of plane waves (ASPW) it has also been successfully used in the case of multiplane CGHs. Of course, due to the approximations inherent in TEA, it can only be applied above a certain feature size. In this contribution we give a first comparison of the TEA & ASPW approach with simulation results from the Fourier modal method (FMM) for the example of one-dimensional, pattern-generating, multiplane CGHs.

  11. A computationally light classification method for mobile wellness platforms.

    PubMed

    Könönen, Ville; Mäntyjärvi, Jani; Similä, Heidi; Pärkkä, Juha; Ermes, Miikka

    2008-01-01

    The core of activity recognition in mobile wellness devices is a classification engine which maps observations from sensors to estimated classes. The machine learning literature offers a vast number of classification algorithms for this purpose. Unfortunately, the computational and space requirements of these methods are often too high for current mobile devices. In this paper we study a simple linear classifier and find, automatically with the SFS and SFFS feature selection methods, a suitable set of features to be used with the classification method. The results show that the simple classifier performs comparably to the more complex nonlinear k-nearest-neighbor classifier. This shows great potential for implementing the classifier in small mobile wellness devices. PMID:19162872
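    The wrapper-selection idea is easy to reproduce; here is a hedged sketch using scikit-learn (which provides plain SFS; the floating SFFS variant used in the paper adds conditional backward steps not shown here), with synthetic stand-in features:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for sensor features (the paper uses real accelerometer
# and physiological signals).
X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           random_state=0)

lda = LinearDiscriminantAnalysis()            # a computationally light classifier
sfs = SequentialFeatureSelector(lda, n_features_to_select=4,
                                direction="forward")
X_sel = sfs.fit_transform(X, y)

print("chosen features:", sfs.get_support(indices=True))
print("CV accuracy:", cross_val_score(lda, X_sel, y, cv=5).mean())
```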

  12. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    SciTech Connect

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for diseases, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high-order numerical methods to simulate evanescent electromagnetic waves occurring in plasmonic solar cells and biological ion channels, where local field enhancement within random media in the former and long-range electrostatic interactions in the latter are the major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high-order numerical methods for solving Maxwell's equations, such as high-order finite element bases for discontinuous Galerkin methods, a well-conditioned Nédélec edge element method, divergence-free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmonic solar cells. On the other hand, to treat long-range electrostatic interactions in ion channels, we have developed an image-charge-based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up molecular dynamics simulations of transport in biological ion channels.

  13. Review methods for image segmentation from computed tomography images

    SciTech Connect

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-12-04

    Image segmentation is a challenging process in which accuracy, automation and robustness must be achieved, especially for medical images. Many segmentation methods can be applied to medical images, but not all are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of methods for segmenting Computed Tomography (CT) images. CT images have characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems they incur are defined and explained. Knowing the suitable segmentation method is necessary to obtain accurate segmentation. This paper can serve as a guide for researchers choosing a suitable method for segmenting images from CT scans.

  14. Review methods for image segmentation from computed tomography images

    NASA Astrophysics Data System (ADS)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-12-01

    Image segmentation is a challenging process in which accuracy, automation and robustness must be achieved, especially for medical images. Many segmentation methods can be applied to medical images, but not all are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of methods for segmenting Computed Tomography (CT) images. CT images have characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems they incur are defined and explained. Knowing the suitable segmentation method is necessary to obtain accurate segmentation. This paper can serve as a guide for researchers choosing a suitable method for segmenting images from CT scans.

  15. Consensus methods: review of original methods and their main alternatives used in public health

    PubMed Central

    Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid

    2008-01-01

    Summary Background Consensus-based studies are increasingly used as decision-making methods, for they have lower production cost than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles and methods of the four main methods, Delphi, nominal group, consensus development conference and RAND/UCLA, their use as it appears in peer-reviewed publications, and validation studies published in the healthcare literature. Methods A bibliographic search was performed in Pubmed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal and Francis. Keywords, headings and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri and used in the literature search. A search with the same terms and expressions was performed on the Internet using Google Scholar. Results All methods, precisely described in the literature, are based on common basic principles such as definition of the subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described. Few validation studies have been implemented. Failure to implement these basic principles and failure to describe the methods used to reach consensus were both frequent, and both contribute to raising suspicion regarding the validity of consensus methods. Conclusion When applied to a new domain with important consequences for decision making, a consensus method should first be validated. PMID:19013039

  16. A multigrid nonoscillatory method for computing high speed flows

    NASA Technical Reports Server (NTRS)

    Li, C. P.; Shieh, T. H.

    1993-01-01

    A multigrid method using different smoothers has been developed to solve the Euler equations discretized by a nonoscillatory scheme of up to fourth-order accuracy. The best smoothing property is provided by a five-stage Runge-Kutta technique with optimized coefficients, yet the most efficient smoother is a backward Euler technique in factored and diagonalized form. The single-grid solution for a hypersonic, viscous conic flow is in excellent agreement with the solution obtained by the third-order MUSCL and Roe's method. Mach 8 inviscid flow computations for a complete entry probe have shown that the accuracy is at least as good as that of the symmetric TVD scheme of Yee and Harten. The implicit multigrid method is four times more efficient than the explicit multigrid technique and 3.5 times faster than the single-grid implicit technique. For a Mach 8.7 inviscid flow over a blunt delta wing at 30 deg incidence, the CPU reduction factor from the three-level multigrid computation is 2.2 on a grid of 37 x 41 x 73 nodes.
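    For orientation, the smooth-restrict-correct-prolong pattern that any such multigrid solver follows can be sketched on a 1-D Poisson model problem (a deliberately simple stand-in; the paper's Runge-Kutta and backward Euler smoothers for the Euler equations are far richer):

```python
import numpy as np

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet ends."""
    for _ in range(sweeps):
        u[1:-1] += w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
    return u

def v_cycle(u, f, h):
    """One V-cycle; n = len(u) - 1 is assumed to be a power of two."""
    n = len(u) - 1
    u = smooth(u, f, h)
    if n <= 2:                                  # coarsest grid: smoothing only
        return u
    r = np.zeros_like(u)                        # residual r = f - A u
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    rc = np.zeros(n // 2 + 1)                   # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    e = np.interp(np.arange(n + 1), np.arange(0, n + 1, 2), ec)  # prolong
    return smooth(u + e, f, h)                  # post-smoothing

# Model problem: -u'' = pi^2 sin(pi x), exact solution sin(pi x).
n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / n)
print(np.abs(u - np.sin(np.pi * x)).max())      # ~discretization-level error
```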

  17. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive; interested readers may inquire with the presenting author. This software differentiates itself from other possible re-slicing solutions through its complete automation and advanced processing and analysis capabilities.
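    The unwrap-and-re-slice step can be sketched as a polar resampling; the Python below is our illustration (not the NASA code) for a volume whose cylinder axis lies along z:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_shell(vol, center_xy, radius, n_theta=720):
    """Sample vol (indexed x, y, z) on an (angle, height) grid at a fixed
    radius, turning a cylindrical shell into a flat 2-D sheet[theta, z];
    repeating over radii yields the stack of vertical sheets."""
    cx, cy = center_xy
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    z = np.arange(vol.shape[2], dtype=float)
    T, Z = np.meshgrid(theta, z, indexing="ij")
    coords = np.stack([cx + radius * np.cos(T),
                       cy + radius * np.sin(T), Z])
    return map_coordinates(vol, coords, order=1)   # trilinear resampling
```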

  18. Analysis of flavonoids: tandem mass spectrometry, computational methods, and NMR.

    PubMed

    March, Raymond; Brodbelt, Jennifer

    2008-12-01

    Due to the increasing understanding of the health benefits and chemopreventive properties of flavonoids, there continues to be significant effort dedicated to improved analytical methods for characterizing the structures of flavonoids and monitoring their levels in fruits and vegetables, as well as developing new approaches for mapping the interactions of flavonoids with biological molecules. Tandem mass spectrometry (MS/MS), particularly in conjunction with liquid chromatography (LC), is the dominant technique that has been pursued for elucidation of flavonoids. Metal complexation strategies have proven to be especially promising for enhancing the ionization of flavonoids and yielding key diagnostic product ions for differentiation of isomers. Of particular value is the addition of a chromophoric ligand to allow the application of infrared (IR) multiphoton dissociation as an alternative to collision-induced dissociation (CID) for the differentiation of isomers. CID, including energy-resolved methods, and nuclear magnetic resonance (NMR) have also been utilized widely for structural characterization of numerous classes of flavonoids and development of structure/activity relationships. The gas-phase ion chemistry of flavonoids is an active area of research, particularly when combined with accurate mass measurement for distinguishing between isobaric ions. Applications of a variety of ab initio and chemical computation methods to the study of flavonoids have been reported, and the results of computations of ion and molecular structures have been shown together with computations of atomic charges and ion fragmentation. Unambiguous ion structures are rarely obtained using MS alone; thus, it is necessary to combine MS with spectroscopic techniques such as ultraviolet (UV) and NMR to achieve this objective. The application of NMR data to the mass spectrometric examination of flavonoids is discussed. PMID:18855332

  19. Experiences with the Lanczos method on a parallel computer

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.; Fulton, Robert E.

    1987-01-01

    A parallel computer implementation of the Lanczos method for the free-vibration analysis of structures is considered, and results for two example problems show substantial time-reduction over the sequential solutions. The major Lanczos calculation tasks are subdivided into subtasks, and parallelism is introduced at the subtask level. A speedup of 7.8 on eight processors was obtained for the decomposition step of the problem involving a 60-m three-longeron space mast, and a speedup of 14.6 on 16 processors was obtained for the decomposition step of the problem involving a blade-stiffened graphite-epoxy panel.

  20. Method and apparatus for managing transactions with connected computers

    DOEpatents

    Goldsmith, Steven Y.; Phillips, Laurence R.; Spires, Shannon V.

    2003-01-01

    The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.

  1. Assessment of nonequilibrium radiation computation methods for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Sharma, Surendra

    1993-01-01

    The present understanding of shock-layer radiation in the low density regime, as appropriate to hypersonic vehicles, is surveyed. Based on the relative importance of electron excitation and radiation transport, the hypersonic flows are divided into three groups: weakly ionized, moderately ionized, and highly ionized flows. In the light of this division, the existing laboratory and flight data are scrutinized. Finally, an assessment of the nonequilibrium radiation computation methods for the three regimes in hypersonic flows is presented. The assessment is conducted by comparing experimental data against the values predicted by the physical model.

  2. Compressive sampling in computed tomography: Method and application

    NASA Astrophysics Data System (ADS)

    Hu, Zhanli; Liang, Dong; Xia, Dan; Zheng, Hairong

    2014-06-01

    Since Donoho and Candès et al. published their groundbreaking work on compressive sampling or compressive sensing (CS), CS theory has attracted a lot of attention and become a hot topic, especially in biomedical imaging. Specifically, some CS-based methods have been developed to enable accurate reconstruction from sparse data in computed tomography (CT) imaging. In this paper, we review the progress in CS-based CT in terms of the three fundamental requirements of CS: sparse representation, incoherent sampling and the reconstruction algorithm. In addition, some potential applications of compressive sampling in CT are introduced.
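    A toy example ties the three requirements together: a sparse signal, an incoherent random sampling matrix, and an iterative reconstruction (here ISTA for the l1-regularized least-squares problem; in real CT the matrix A is replaced by a physical projection operator):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                            # signal size, samples, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)        # incoherent sampling matrix
b = A @ x_true                                  # compressed measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of gradient
x = np.zeros(n)
for _ in range(500):                            # ISTA iterations
    g = x - (A.T @ (A @ x - b)) / L             # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```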

  3. Fan Flutter Computations Using the Harmonic Balance Method

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Thomas, Jeffrey P.; Reddy, T.S.R.

    2009-01-01

    An experimental forward-swept fan encountered flutter at part-speed conditions during wind tunnel testing. A new propulsion aeroelasticity code, based on a computational fluid dynamics (CFD) approach, was used to model the aeroelastic behavior of this fan. This three-dimensional code models the unsteady flowfield due to blade vibrations using a harmonic balance method to solve the Navier-Stokes equations. This paper describes the flutter calculations and compares the results to experimental measurements and previous results from a time-accurate propulsion aeroelasticity code.

  4. Computational Studies of Protein Aggregation: Methods and Applications

    NASA Astrophysics Data System (ADS)

    Morriss-Andrews, Alex; Shea, Joan-Emma

    2015-04-01

    Protein aggregation involves the self-assembly of normally soluble proteins into large supramolecular assemblies. The typical end product of aggregation is the amyloid fibril, an extended structure enriched in β-sheet content. The aggregation process has been linked to a number of diseases, most notably Alzheimer's disease, but fibril formation can also play a functional role in certain organisms. This review focuses on theoretical studies of the process of fibril formation, with an emphasis on the computational models and methods commonly used to tackle this problem.

  5. A new method to compute lunisolar perturbations in satellite motions

    NASA Technical Reports Server (NTRS)

    Kozai, Y.

    1973-01-01

    A new method to compute lunisolar perturbations in satellite motion is proposed. The disturbing function is expressed by the orbital elements of the satellite and the geocentric polar coordinates of the moon and the sun. The secular and long periodic perturbations are derived by numerical integrations, and the short periodic perturbations are derived analytically. The perturbations due to the tides can be included in the same way. In the Appendix, the motion of the orbital plane for a synchronous satellite is discussed; it is concluded that the inclination cannot stay below 7 deg.
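    For orientation, the classical expansion underlying such disturbing functions (standard celestial mechanics, stated here for reference rather than quoted from the paper) writes the third-body potential in terms of the satellite radius r, the disturbing body's geocentric distance r', and the angle S between the two directions:

```latex
\mathcal{R} = \frac{G m'}{r'} \sum_{n=2}^{\infty} \left(\frac{r}{r'}\right)^{n} P_n(\cos S),
```

    where the P_n are Legendre polynomials; the method described above evaluates the secular and long-period parts of such terms by numerical integration while handling the short-period parts analytically.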

  6. Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Reliable noise prediction capabilities are essential to enable novel fuel-efficient open rotor designs that can meet community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity at which they are frequently employed for specific real-world applications within NASA. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by highly complex geometries. This is a first-of-its-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the peculiarities of applying the immersed boundary method to this moving boundary problem, we provide a detailed aeroacoustic analysis of the noise generation mechanisms encountered in the open rotor flow. The simulation data are compared to available experimental data and other computational results employing more conventional CFD methods. The noise generation mechanisms are analyzed employing spectral analysis, proper orthogonal decomposition and the causality method.

  7. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions of some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  8. Computational methods for studying G protein-coupled receptors (GPCRs).

    PubMed

    Kaczor, Agnieszka A; Rutkowska, Ewelina; Bartuzi, Damian; Targowska-Duda, Katarzyna M; Matosiuk, Dariusz; Selent, Jana

    2016-01-01

    The functioning of GPCRs is classically described by the ternary complex model as the interplay of three basic components: a receptor, an agonist, and a G protein. According to this model, receptor activation results from an interaction with an agonist, which translates into the activation of a particular G protein in the intracellular compartment that, in turn, is able to initiate particular signaling cascades. Extensive studies on GPCRs have led to new findings which open unexplored and exciting possibilities for drug design and safer and more effective treatments with GPCR targeting drugs. These include discovery of novel signaling mechanisms such as ligand promiscuity resulting in multitarget ligands and signaling cross-talks, allosteric modulation, biased agonism, and formation of receptor homo- and heterodimers and oligomers which can be efficiently studied with computational methods. Computer-aided drug design techniques can reduce the cost of drug development by up to 50%. In particular structure- and ligand-based virtual screening techniques are a valuable tool for identifying new leads and have been shown to be especially efficient for GPCRs in comparison to water-soluble proteins. Modern computer-aided approaches can be helpful for the discovery of compounds with designed affinity profiles. Furthermore, homology modeling facilitated by a growing number of available templates as well as molecular docking supported by sophisticated techniques of molecular dynamics and quantitative structure-activity relationship models are an excellent source of information about drug-receptor interactions at the molecular level. PMID:26928552

  9. An experiment in hurricane track prediction using parallel computing methods

    NASA Technical Reports Server (NTRS)

    Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

    1994-01-01

    The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.
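    The core elliptic step, recovering the streamfunction from forecast vorticity, can be sketched compactly; the version below uses an FFT Poisson solve on a doubly periodic grid as a stand-in for the paper's accelerated block cyclic reduction:

```python
import numpy as np

def streamfunction(zeta, dx):
    """Solve lap(psi) = zeta on a doubly periodic grid via FFT."""
    ny, nx = zeta.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    lap = KX ** 2 + KY ** 2                    # Fourier symbol of the Laplacian
    lap[0, 0] = 1.0                            # avoid /0; mean of psi set to 0
    psi_hat = np.fft.fft2(zeta) / lap
    psi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(psi_hat))

# Quick check on [0, 2*pi)^2, where lap(sin x * cos y) = -2 sin x * cos y:
n = 64
dx = 2.0 * np.pi / n
xx, yy = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx)
psi = np.sin(xx) * np.cos(yy)
print(np.abs(streamfunction(-2.0 * psi, dx) - psi).max())   # ~1e-13
```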

  10. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable: if the gradients are treated as additional unknowns, the size of the matrix equation is greatly increased, and if numerical differentiation is used to approximate the gradients, numerical error is introduced into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation using sub-triangles, so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.

  11. A bibliography on finite element and related methods analysis in reactor physics computations (1971--1997)

    SciTech Connect

    Carpenter, D.C.

    1998-01-01

    This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, and theses/dissertations, and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name, with a subsort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, theses/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.

  12. Computational method for reducing variance with Affymetrix microarrays

    PubMed Central

    2002-01-01

    Background Affymetrix microarrays are used by many laboratories to generate gene expression profiles. Generally, only large differences (> 1.7-fold) between conditions have been reported. Computational methods to reduce inter-array variability might be of value when attempting to detect smaller differences. We examined whether inter-array variability could be reduced by using data based on the Affymetrix algorithm for pairwise comparisons between arrays (ratio method) rather than data based on the algorithm for analysis of individual arrays (signal method). Six HG-U95A arrays that probed mRNA from young (21–31 yr old) human muscle were compared with six arrays that probed mRNA from older (62–77 yr old) muscle. Results Differences in mean expression levels of young and old subjects were small, rarely > 1.5-fold. The mean within-group coefficient of variation for 4629 mRNAs expressed in muscle was 20% according to the ratio method and 25% according to the signal method. The ratio method yielded more differences according to t-tests (124 vs. 98 differences at P < 0.01), rank sum tests (107 vs. 85 differences at P < 0.01), and the Significance Analysis of Microarrays method (124 vs. 56 differences with false detection rate < 20%; 20 vs. 0 differences with false detection rate < 5%). The ratio method also improved consistency between results of the initial scan and results of the antibody-enhanced scan. Conclusion The ratio method reduces inter-array variance and thereby enhances statistical power. PMID:12204100

  13. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...) Compliance with 17 CFR part 49. Any registered swap data repository that accepts and publicly disseminates...-time public reporting. 43.3 Section 43.3 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting....

  14. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) Compliance with 17 CFR part 49. Any registered swap data repository that accepts and publicly disseminates...-time public reporting. 43.3 Section 43.3 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION (CONTINUED) REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting....

  15. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...) Compliance with 17 CFR part 49. Any registered swap data repository that accepts and publicly disseminates...-time public reporting. 43.3 Section 43.3 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting....

  16. An integrated-intensity method for emission spectrographic computer analysis

    USGS Publications Warehouse

    Thomas, Catharine P.

    1975-01-01

    An integrated-intensity method has been devised to improve the computer analysis of data by emission spectrography. The area of the intensity profile of a spectral line is approximated by a rectangle whose height is related to the intensity difference between the peak and background of the line and whose width is measured at a fixed transmittance below the apex of the line. The method is illustrated by the determination of strontium in the presence of greater than 10 percent calcium. The Sr 3380.711-Å line, which is unaffected by calcium and which has a linear analytical curve extending from 100 to 3,000 ppm, has been used to determine strontium in 18 standard reference rocks covering a wide range of geologic materials. Both the accuracy and the precision of the determinations were well within the accepted range for a semiquantitative procedure.
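    In code, the rectangle approximation reads roughly as follows (our paraphrase with illustrative names; the original works on photographic-plate transmittance rather than a generic intensity array):

```python
import numpy as np

def integrated_intensity(wavelength, intensity, background, offset=0.05):
    """Rectangle approximation of a spectral line's area: height is the
    peak-minus-background difference; width is measured at a fixed level
    `offset` below the apex (illustrative stand-in for the fixed
    transmittance of the original method)."""
    peak = intensity.max()
    height = peak - background
    level = peak - offset
    above = wavelength[intensity >= level]      # points above the cut level
    width = above.max() - above.min()           # line width at that level
    return height * width                       # rectangle area ~ line area
```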

  17. Computing thermal Wigner densities with the phase integration method

    SciTech Connect

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  18. Computational analysis of methods for reduction of induced drag

    NASA Technical Reports Server (NTRS)

    Janus, J. M.; Chatterjee, Animesh; Cave, Chris

    1993-01-01

    The purpose of this effort was to perform a computational flow analysis of a design concept centered around induced drag reduction and tip-vortex energy recovery. The flow model solves the unsteady three-dimensional Euler equations, discretized as a finite-volume method, utilizing a high-resolution approximate Riemann solver for cell interface flux definitions. The numerical scheme is an approximately factored block-LU implicit Newton iterative-refinement method. Multiblock domain decomposition is used to partition the field into an ordered arrangement of blocks. Three configurations are analyzed: a baseline fuselage-wing, a fuselage-wing-nacelle, and a fuselage-wing-nacelle-propfan. Aerodynamic force coefficients, propfan performance coefficients, and flowfield maps are used to qualitatively assess design efficacy. Where appropriate, comparisons are made with available experimental data.

  19. A computational design method for transonic turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Dulikravich, D. S.

    1982-01-01

    This paper describes a systematic computational procedure for finding the configuration changes necessary to render the flow past turbomachinery cascades, channels and nozzles shock-free at prescribed transonic operating conditions. The method is based on a finite-area transonic analysis technique and the fictitious gas approach. This design scheme has two major areas of application. First, it can be used for the design of supercritical cascades, with applications mainly in compressor blade design. Second, it provides subsonic inlet shapes, including sonic surfaces with suitable initial data, for the design of supersonic (accelerated) exits, such as nozzles and turbine cascade shapes. This fast, accurate and economical method, with a proven potential for application to three-dimensional flows, is illustrated by some design examples.

  20. Computational discovery of new structures using the minima hopping method

    NASA Astrophysics Data System (ADS)

    Goedecker, Stefan

    2014-03-01

    Theoretical structure prediction methods can find a huge number of possible low-energy structures of materials. I will present some basic principles for locating them efficiently and show how these principles are exploited in the minima hopping method. I will next survey some of our applications to various materials. I will present our studies of several hydrogen storage materials for which we found numerous hitherto unknown structures, and computer-generated silicon allotropes that are promising for photovoltaic applications, and I will summarize our search for stable fullerene-like structures beyond carbon. I will also address the question of whether theoretically found materials can be synthesized in practice, and single out features of the potential energy landscape that facilitate synthesis.
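    For orientation, the minima hopping feedback loop has the following schematic shape (after Goedecker's published algorithm; `md_escape`, `local_minimize` and `energy` are stand-in callables the user must supply):

```python
def minima_hopping(x, energy, md_escape, local_minimize,
                   e_diff=0.1, t_kin=300.0, n_steps=100,
                   alpha=1.05, beta=1.05):
    """Schematic minima hopping: escape the current basin by short MD at
    kinetic temperature t_kin, quench to a local minimum, then accept or
    reject; feedback adapts t_kin and the acceptance window e_diff."""
    e_cur = energy(x)
    history = set()                                # minima seen so far
    for _ in range(n_steps):
        x_try = local_minimize(md_escape(x, t_kin))   # escape + quench
        e_try = energy(x_try)
        key = round(e_try, 6)                      # crude revisit detector
        t_kin = t_kin * beta if key in history else t_kin / beta
        history.add(key)                           # revisit -> heat up
        if e_try - e_cur < e_diff:                 # accept threshold
            x, e_cur = x_try, e_try
            e_diff /= alpha                        # tighten on acceptance
        else:
            e_diff *= alpha                        # loosen on rejection
    return x, e_cur
```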

  1. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions 779.342 Methods of computing annual... gross receipts from all sales of the establishment during a 12-month period. The methods of computing...

  2. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions 779.342 Methods of computing annual... gross receipts from all sales of the establishment during a 12-month period. The methods of computing...

  3. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions 779.342 Methods of computing annual... gross receipts from all sales of the establishment during a 12-month period. The methods of computing...

  4. Radiation Transport Computation in Stochastic Media: Method and Application

    NASA Astrophysics Data System (ADS)

    Liang, Chao

    Stochastic media, characterized by a random distribution of inclusions in a background medium, are typical radiation transport media encountered in natural and engineering systems. In the radiation transport community, there is a standing demand for accurate and efficient methods that can account for this stochastic distribution. In this dissertation, we focus on methodology development for radiation transport computation applied to neutronic analyses of nuclear reactor designs characterized by a stochastic distribution of particle fuel. Reactor concepts employing a fuel design consisting of a random heterogeneous mixture of fissile material and non-fissile moderator are constantly proposed, and key physical quantities such as core criticality and power distribution, reactivity control design parameters, depletion and fuel burn-up need to be carefully evaluated. To meet these practical requirements, we first need accurate and fast computational methods that can effectively account for the stochastic nature of the double heterogeneity configuration. A Monte Carlo based method called Chord Length Sampling (CLS) is considered a promising method for analyzing such TRISO-type fueled reactors. Although the CLS method was proposed more than two decades ago and much research has been conducted to enhance its applicability, further efforts are still needed to address some key research gaps. (1) There is a general lack of thorough investigation of the factors that give rise to the inaccuracy of the CLS method reported by many researchers. The accuracy of the CLS method depends on the optical and geometric properties of the system, and considerable inaccuracies have been reported in some specific scenarios. However, no research has provided a clear interpretation of the reasons for the inaccuracy in the reported scenarios, and no correction methods have been proposed or developed to improve the accuracy of CLS across all applied scenarios. (2) The previous CLS method only samples fuel particles on the fly when analyzing TRISO-type fueled reactors. Within the fuel particle, which consists of a fuel kernel and a coating, conventional Monte Carlo simulation applies. This strategy may not achieve the highest computational efficiency, since extra simulation time is spent tracking neutrons in the coating region, which has a negligible neutronic effect on overall reactor core performance. This suggests a strategy to further increase computational efficiency by directly sampling fuel kernels on the fly in the CLS simulations. Testing this strategy requires developing and validating a new model of the chord length distribution function. (3) Previous evaluations and applications of the CLS method have been limited to single-type, single-size fuel particle systems, i.e. only one type of fuel particle with constant size is assumed in the fuel zone, which is the case for typical VHTR designs. In practice, however, two or more types of TRISO fuel particles may be loaded in the same fuel zone for different application purposes, e.g. fissile and fertile fuel particles used for transmutation in some reactors. Moreover, the fuel particle size may not be constant and can vary within a range. A typical design containing such fuel particles can be found in the FSV reactor. It is therefore desirable to develop a new computational model that treats multi-type, poly-sized particle systems in neutronic analysis. This requires extending the current CLS method to sample on the fly not only the location of a fuel particle, but also its type and size, so that it can be applied to a broad range of reactor designs in neutronic analyses. New sampling functions need to be developed for the extended on-the-fly sampling strategy. This Ph.D. dissertation addressed these
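
    As an illustrative sketch only (not the dissertation's implementation), the core CLS idea can be written in a few lines of Python: rather than resolving every particle location, the flight distance to the next fuel-particle surface is sampled on the fly from an assumed exponential chord-length distribution. The mean-chord relation below is the commonly quoted one for randomly dispersed spheres; the names and numbers are hypothetical.

        import math
        import random

        def matrix_mean_chord(radius, packing_fraction):
            """Mean chord length of the background matrix between spheres.

            Assumes the commonly used relation
            lambda_m = (4/3) * R * (1 - phi) / phi
            for randomly dispersed spheres of radius R at packing fraction phi.
            """
            return (4.0 / 3.0) * radius * (1.0 - packing_fraction) / packing_fraction

        def sample_distance_to_next_particle(radius, packing_fraction, rng=random):
            """One on-the-fly CLS step: sample the flight distance to the next
            fuel-particle surface from an exponential chord-length distribution."""
            lam = matrix_mean_chord(radius, packing_fraction)
            return -lam * math.log(rng.random())

        # Example: TRISO-like kernels, R = 0.025 cm at a 30% packing fraction.
        print([sample_distance_to_next_particle(0.025, 0.30) for _ in range(5)])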

  5. Dosimetry methods for multi-detector computed tomography.

    PubMed

    Gancheva, M; Dyakov, I; Vassileva, J; Avramova-Cholakova, S; Taseva, D

    2015-07-01

    The aim of this study is to compare four dosimetry methods for wide-beam multi-detector computed tomography (MDCT) in terms of the computed tomography dose index free in air (CTDI free-in-air) and CTDI measured in a phantom (CTDI phantom). The study was performed with an Aquilion One 320-detector row CT (Toshiba), an Ingenuity 64-detector row CT (Philips) and an Aquilion 64 64-detector row CT (Toshiba). In addition to the standard dosimetry, three other dosimetry methods were applied. The first method, suggested by the International Electrotechnical Commission (IEC) for MDCT, includes free-in-air measurements with a standard 100-mm CT pencil ion chamber, stepped through the X-ray beam along the z-axis at intervals equal to its sensitive length. Two cases were studied: with an integration length of 200 mm, and with a standard polymethyl methacrylate (PMMA) dosimetry phantom. The second approach comprised measurements with a twice-longer phantom and two 100-mm chambers positioned and fixed against each other, forming a detection length of 200 mm. As a third method, phantom measurements were performed to study the real dose profile along the z-axis using thermoluminescent detectors; a fabricated cylindrical PMMA tube with a total length of 300 mm containing LiF detectors was used. CTDI free-in-air measured with an integration length of 300 mm for the 160 mm wide beam was 194% higher than the same quantity measured using the standard method. For an integration length of 200 mm, the difference was 18% for the 40 mm wide beam and 14% for the 32 mm wide beam in comparison with the standard CTDI measurement. For phantom measurements, the IEC method resulted in differences of 41% for the 160 mm beam width, 19% for the 40 mm beam width and 18% for the 32 mm beam width compared with the CTDI vol method. CTDI values from direct measurement in the phantom central hole with two chambers differ by 20% from the values calculated by the IEC method. Dose profiles for beam widths of 40, 32 and 16 mm, along with analysis and conclusions, are presented. PMID:25889607
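
    For reference, the standard quantities against which the alternative methods are compared are defined as follows (the conventional formulation, not reproduced from this paper); here D(z) is the dose profile along the scanner axis and n x T is the nominal total collimation (beam width):

        \mathrm{CTDI}_{100} = \frac{1}{nT}\int_{-50\,\mathrm{mm}}^{+50\,\mathrm{mm}} D(z)\,\mathrm{d}z,
        \qquad
        \mathrm{CTDI}_{w} = \tfrac{1}{3}\,\mathrm{CTDI}_{100}^{\mathrm{centre}}
                          + \tfrac{2}{3}\,\mathrm{CTDI}_{100}^{\mathrm{periphery}},
        \qquad
        \mathrm{CTDI}_{\mathrm{vol}} = \frac{\mathrm{CTDI}_{w}}{\mathrm{pitch}}.

    The longer integration lengths studied here effectively widen the +/-50 mm integration window so that the tails of wide-beam dose profiles are no longer truncated.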

  6. Matrix element method for high performance computing platforms

    NASA Astrophysics Data System (ADS)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    Considerable effort has been devoted by the ATLAS and CMS teams to improving the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to face up to the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS dataset at a moderate cost. In this article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfying metric for the upcoming Run 2. Future work will consist of finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.
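
    A minimal sketch of the parallelization pattern described, assuming mpi4py (the event model and the toy weight integrand are hypothetical, not the authors' code): events are scattered across MPI ranks, each rank evaluates its MEM weights independently, and the results are gathered on the root rank.

        # Run with e.g.: mpiexec -n 4 python mem_sketch.py
        import numpy as np
        from mpi4py import MPI

        def mem_weight(event, n_samples=10000):
            """Hypothetical stand-in for a Matrix Element Method weight:
            a Monte Carlo integral over phase space for one event."""
            x = np.random.standard_normal((n_samples, event.size))
            return np.mean(np.exp(-0.5 * np.sum((x - event) ** 2, axis=1)))

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        chunks = None
        if rank == 0:
            events = np.random.standard_normal((1000, 4))  # toy "events"
            chunks = np.array_split(events, size)
        local = comm.scatter(chunks, root=0)               # distribute events
        local_weights = [mem_weight(e) for e in local]     # independent per-event work
        weights = comm.gather(local_weights, root=0)       # collect on root
        if rank == 0:
            print(sum(len(w) for w in weights), "event weights computed")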

  7. A fast phase space method for computing creeping rays

    SciTech Connect

    Motamed, Mohammad, E-mail: mohamad@nada.kth.se; Runborg, Olof, E-mail: olofr@nada.kth.se

    2006-11-20

    Creeping rays can give an important contribution to the solution of medium to high frequency scattering problems. They are generated at the shadow lines of the illuminated scatterer by grazing incident rays and propagate along geodesics on the scatterer surface, continuously shedding diffracted rays in their tangential direction. In this paper, we show how the ray propagation problem can be formulated as a partial differential equation (PDE) in a three-dimensional phase space. To solve the PDE we use a fast marching method. The PDE solution contains information about all possible creeping rays. This information includes the phase and amplitude of the field, which are extracted by a fast post-processing. Computationally, the cost of solving the PDE is less than tracing all rays individually by solving a system of ordinary differential equations. We consider an application to mono-static radar cross section problems where creeping rays from all illumination angles must be computed. The numerical results of the fast phase space method and a comparison with the results of ray tracing are presented.
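
    The fast marching solver referred to here can be illustrated on the simplest model problem, the eikonal equation |grad u| = 1/F on a 2D grid (a generic textbook sketch, not the authors' phase-space code): grid points are frozen in order of increasing arrival time, Dijkstra-style, using an upwind update.

        import heapq
        import numpy as np

        def fast_march(speed, seeds, h=1.0):
            """First-order fast marching for |grad u| = 1/speed on a 2D grid;
            `seeds` is a list of (i, j) points where u = 0."""
            ny, nx = speed.shape
            u = np.full((ny, nx), np.inf)
            done = np.zeros((ny, nx), dtype=bool)
            heap = [(0.0, i, j) for i, j in seeds]
            for _, i, j in heap:
                u[i, j] = 0.0
            heapq.heapify(heap)
            while heap:
                _, i, j = heapq.heappop(heap)
                if done[i, j]:
                    continue
                done[i, j] = True
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < ny and 0 <= nj < nx) or done[ni, nj]:
                        continue
                    a = min(u[ni - 1, nj] if ni > 0 else np.inf,
                            u[ni + 1, nj] if ni < ny - 1 else np.inf)
                    b = min(u[ni, nj - 1] if nj > 0 else np.inf,
                            u[ni, nj + 1] if nj < nx - 1 else np.inf)
                    f = h / speed[ni, nj]
                    if abs(a - b) < f:      # both neighbors usable: quadratic update
                        cand = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    else:                   # one-sided update
                        cand = min(a, b) + f
                    if cand < u[ni, nj]:
                        u[ni, nj] = cand
                        heapq.heappush(heap, (cand, ni, nj))
            return u

        arrival = fast_march(np.ones((64, 64)), seeds=[(32, 32)])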

  8. Parallel computation of multigroup reactivity coefficient using iterative method

    SciTech Connect

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-09

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of Fission Product Molybdenum (FPM) targets. FPM targets are stainless-steel tubes containing high-enriched uranium, and their irradiation is intended to produce fission products; the fission material is widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core, however, can disturb core performance, one such disturbance arising from changes in flux or reactivity. It is therefore necessary to develop a method for evaluating safety under the ongoing configuration changes that occur over the life of the reactor, and making the code faster is essential. Because the neutron safety margin of the research reactor can be reassessed without modification through reactivity calculations, a perturbation method offers an advantage here. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model involves complex computation, and several parallel algorithms with iterative methods have been developed for solving the resulting large sparse matrix systems. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficients. In this research, a code for reactivity calculation, one component of the safety analysis, was developed with parallel processing; the calculation can be done more quickly and efficiently by exploiting the parallelism of a multicore computer. The code was applied to the safety-limit calculation of irradiated FPM targets with increasing uranium content.
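
    As a generic illustration of the red-black Gauss-Seidel scheme mentioned above (a standard textbook kernel, not the paper's code): grid points are colored like a checkerboard so that all points of one color can be updated simultaneously, because each update reads only points of the other color.

        import numpy as np

        def red_black_gauss_seidel(b, h=1.0, sweeps=200):
            """Red-black Gauss-Seidel for the 2D Poisson problem -lap(u) = b
            with zero Dirichlet boundary values."""
            u = np.zeros_like(b)
            i, j = np.indices(b.shape)
            masks = []
            for color in (0, 1):
                m = (i + j) % 2 == color
                m[0, :] = m[-1, :] = m[:, 0] = m[:, -1] = False  # boundaries fixed
                masks.append(m)
            for _ in range(sweeps):
                for m in masks:  # each colored half-sweep is trivially parallel
                    u[m] = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                                   np.roll(u, 1, 1) + np.roll(u, -1, 1) +
                                   h * h * b)[m]
            return u

        u = red_black_gauss_seidel(np.ones((33, 33)))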

  10. [Computational method for prediction of protein functional sites using specificity determinants].

    PubMed

    Kalinina, O V; Rassel, R B; Rakhmaninova, A B; Gel'fand, M S

    2007-01-01

    The currently available data on protein sequences far exceed the experimental capabilities to annotate their function, so annotation in silico, i.e. using computational methods, becomes increasingly important. Such annotation is inevitably a prediction, but it can be an important starting point for further experimental studies. Here we present a method for the prediction of protein functional sites, SDPsite, based on the identification of protein specificity determinants. Taking as input a protein sequence alignment and a phylogenetic tree, the algorithm predicts conserved positions and specificity determinants, maps them onto the protein's 3D structure, and searches for clusters of the predicted positions. Comparison of the obtained predictions with experimental data, and with the performance of several other methods for the prediction of functional sites, reveals that SDPsite agrees well with experiment and outperforms most of the previously available methods. SDPsite is publicly available at http://bioinf.fbb.msu.ru/SDPsite. PMID:17380902

  11. Systems Engineering Methods for Enhancing the Value Stream in Public Health Preparedness: The Role of Markov Models, Simulation, and Optimization

    PubMed Central

    Yaylali, Emine; Taheri, Javad

    2014-01-01

    Objectives Large-scale incidents such as the 2009 H1N1 outbreak, the 2011 European Escherichia coli outbreak, and Hurricane Sandy demonstrate the need for continuous improvement in emergency preparation, alert, and response systems globally. As questions relating to emergency preparedness and response continue to rise to the forefront, the field of industrial and systems engineering (ISE) emerges, as it provides sophisticated techniques with the ability to model, simulate, and optimize complex systems, even under uncertainty. Methods We applied three ISE techniques (Markov modeling, operations research (OR) or optimization, and computer simulation) to public health emergency preparedness. Results We present three models developed through a four-year partnership with stakeholders from state and local public health for effectively, efficiently, and appropriately responding to potential public health threats: (1) an OR model for optimal alerting in response to a public health event, (2) simulation models developed to respond to communicable disease events from the perspective of public health, and (3) simulation models for implementing pandemic influenza vaccination clinics representative of clinics in operation for the 2009-2010 H1N1 vaccinations in North Carolina. Conclusions The methods employed by the ISE discipline offer powerful new insights to understand and improve public health emergency preparedness and response systems. The models can be used by public health practitioners not only to inform their planning decisions but also to provide a quantitative argument to support public health decision making and investment. PMID:25355986

  12. Establishing an international computer network for research and teaching in public health and epidemiology.

    PubMed

    Ostbye, T; Bojan, F; Rennert, G; Hurlen, P; Garner, B

    1991-01-01

    Most universities and major research institutions in North America, Western Europe and around the Pacific are connected via computer communication networks. The authors have used these networks' accessible, low cost, electronic mail system to develop a network of public health researchers and teachers. Current and potential uses of this network are discussed. These networks can not only facilitate international cooperation within public health; they also make it possible to conduct international collaborative research projects that would be too cumbersome and time consuming to initialize and conduct without this communication facility. One participant from Hungary has been able to participate in the network by using telefax. This has some drawbacks compared to electronic mail. In this era of rapid change in Eastern Europe, we urge that electronic communication be made freely available to colleagues in Eastern Europe. PMID:2026221

  13. Computational method for calligraphic style representation and classification

    NASA Astrophysics Data System (ADS)

    Zhang, Xiafen; Nagy, George

    2015-09-01

    A large collection of reproductions of calligraphy on paper was scanned into images to enable web access for both the academic community and the public. Calligraphic paper digitization technology is mature, but technology for segmentation, character coding, style classification, and identification of calligraphy is lacking. Therefore, computational tools for classification and quantification of calligraphic style are proposed and demonstrated on a statistically characterized corpus. A subset of 259 historical page images is segmented into 8719 individual character images. Calligraphic style is revealed and quantified by visual attributes (i.e., appearance features) of character images sampled from historical works. A style space is defined with the features of five main classical styles as basis vectors. Cross-validated error rates of 10% to 40% are reported on conventional and conservative sampling into training/test sets and on same-work voting with a range of voter participation. Beyond its immediate applicability to education and scholarship, this research lays the foundation for style-based calligraphic forgery detection and for discovery of latent calligraphic groups induced by mentor-student relationships.

  14. Matching wind turbine rotors and loads: computational methods for designers

    SciTech Connect

    Seale, J.B.

    1983-04-01

    This report provides a comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications. The user must supply: (1) turbine aerodynamic efficiency as a function of tip-speed ratio; (2) mechanical load torque as a function of rotation speed; (3) useful delivered power as a function of incoming mechanical power; (4) site average windspeed and, for maximum accuracy, distribution data. The description of the data includes governing limits consistent with the capacities of components. The report develops a step-by-step method for converting the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) a decision is made on how turbine power is to be governed (it may self-govern) to insure the safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics come into play to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach: computer-generated families of coefficient curves provide data for algebraic scaling formulas. The method leads not only to energy predictions, but also to insight into the processes being modeled. Direct use of a computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
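
    A hedged sketch of the matching calculation described above (generic physics with a made-up efficiency curve and load torque, not the report's coefficient curves): for each windspeed, find the rotation speed at which turbine torque balances load torque, then read off the delivered power.

        import numpy as np

        RHO, R = 1.225, 5.0                 # air density (kg/m^3), rotor radius (m)
        AREA = np.pi * R ** 2

        def cp(tsr):
            """Hypothetical aerodynamic efficiency vs tip-speed ratio."""
            return 0.4 * np.exp(-((tsr - 7.0) / 3.0) ** 2)

        def turbine_torque(omega, v):
            power = 0.5 * RHO * AREA * cp(omega * R / v) * v ** 3
            return power / max(omega, 1e-9)

        def load_torque(omega):
            """Hypothetical mechanical load with torque rising as speed squared."""
            return 40.0 * omega ** 2

        def matched_power(v, omegas=np.linspace(0.1, 20.0, 2000)):
            """Locate the torque balance and return the delivered power."""
            gap = [abs(turbine_torque(w, v) - load_torque(w)) for w in omegas]
            w_eq = omegas[int(np.argmin(gap))]
            return w_eq * load_torque(w_eq)

        for v in (4.0, 6.0, 8.0, 10.0, 12.0):
            print(v, "m/s ->", round(matched_power(v) / 1e3, 1), "kW")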

  15. Graphical Methods: A Review of Current Methods and Computer Hardware and Software. Technical Report No. 27.

    ERIC Educational Resources Information Center

    Bessey, Barbara L.; And Others

    Graphical methods for displaying data, as well as available computer software and hardware, are reviewed. The authors have emphasized the types of graphs which are most relevant to the needs of the National Center for Education Statistics (NCES) and its readers. The following types of graphs are described: tabulations, stem-and-leaf displays,…

  16. Computational modeling of multicellular constructs with the material point method.

    PubMed

    Guilkey, James E; Hoying, James B; Weiss, Jeffrey A

    2006-01-01

    Computational modeling of the mechanics of cells and multicellular constructs with standard numerical discretization techniques such as the finite element (FE) method is complicated by the complex geometry, material properties and boundary conditions that are associated with such systems. The objectives of this research were to apply the material point method (MPM), a meshless method, to the modeling of vascularized constructs by adapting the algorithm to accurately handle quasi-static, large deformation mechanics, and to apply the modified MPM algorithm to large-scale simulations using a discretization that was obtained directly from volumetric confocal image data. The standard implicit time integration algorithm for MPM was modified to allow the background computational grid to remain fixed with respect to the spatial distribution of material points during the analysis. This algorithm was used to simulate the 3D mechanics of a vascularized scaffold under tension, consisting of growing microvascular fragments embedded in a collagen gel, by discretizing the construct with over 13.6 million material points. Baseline 3D simulations demonstrated that the modified MPM algorithm was both more accurate and more robust than the standard MPM algorithm. Scaling studies demonstrated the ability of the parallel code to scale to 200 processors. Optimal discretization was established for the simulations of the mechanics of vascularized scaffolds by examining stress distributions and reaction forces. Sensitivity studies demonstrated that the reaction force during simulated extension was highly sensitive to the modulus of the microvessels, despite the fact that they comprised only 10.4% of the volume of the total sample. In contrast, the reaction force was relatively insensitive to the effective Poisson's ratio of the entire sample. These results suggest that the MPM simulations could form the basis for estimating the modulus of the embedded microvessels through a parameter estimation scheme. Because of the generality and robustness of the modified MPM algorithm, the relative ease of generating spatial discretizations from volumetric image data, and the ability of the parallel computational implementation to scale to large processor counts, it is anticipated that this modeling approach may be extended to many other applications, including the analysis of other multicellular constructs and investigations of cell mechanics. PMID:16095601

  17. Developing of the Computer Method for Annotation of Bacterial Genes

    PubMed Central

    Golyshev, Mikhail A.; Korotkov, Eugene V.

    2015-01-01

    Over recent years a great number of bacterial genomes have been sequenced, and one of the most important challenges of computational genomics is now the functional annotation of nucleic acid sequences. In this study we present a computational method and an annotation system for predicting biological functions using phylogenetic profiles. The phylogenetic profile of a gene was created by searching for similarities between the nucleotide sequence of the gene and 1204 reference genomes, with further estimation of the statistical significance of the found similarities. The profiles of genes with known functions were used for the prediction of possible functions and functional groups for new genes. We conducted the functional annotation of genes from 104 bacterial genomes and compared the functions predicted by our system with the already known functions. For genes that had already been annotated, the known function matched the predicted function 63% of the time, and 86% of the time the known function was found within the top five predicted functions. In addition, our system increased the share of annotated genes by 19%. The developed system may be used as an alternative or complementary system to current annotation systems. PMID:26770195
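
    To make the phylogenetic-profile idea concrete, a small hedged sketch (a generic nearest-profile scheme, not the authors' system): each gene's profile records the presence or absence of homologs across reference genomes, and candidate functions are ranked by profile similarity to annotated genes.

        import numpy as np

        def profile_similarity(p, q):
            """Jaccard similarity of two binary presence/absence profiles."""
            p, q = np.asarray(p, bool), np.asarray(q, bool)
            union = np.logical_or(p, q).sum()
            return np.logical_and(p, q).sum() / union if union else 0.0

        def predict_functions(query, annotated, top_n=5):
            """Rank known functions by profile similarity to the query gene."""
            scored = [(profile_similarity(query, prof), func)
                      for func, prof in annotated]
            return sorted(scored, reverse=True)[:top_n]

        # Toy profiles over 8 reference genomes (1 = homolog found).
        annotated = [("ABC transporter", [1, 1, 0, 1, 1, 0, 0, 1]),
                     ("DNA gyrase",      [1, 1, 1, 1, 1, 1, 1, 1]),
                     ("flagellar motor", [0, 1, 0, 1, 0, 0, 0, 1])]
        print(predict_functions([1, 1, 0, 1, 1, 0, 0, 0], annotated))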

  18. Novel computational methods to design protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Zhou, Alice Qinhua; O'Hern, Corey; Regan, Lynne

    2014-03-01

    Despite the abundance of structural data, we still cannot accurately predict the structural and energetic changes resulting from mutations at protein interfaces. The inadequacy of current computational approaches to the analysis and design of protein-protein interactions has hampered the development of novel therapeutic and diagnostic agents. In this work, we apply a simple physical model that includes only a minimal set of geometrical constraints, excluded volume, and attractive van der Waals interactions to 1) rank the binding affinity of mutants of tetratricopeptide repeat proteins with their cognate peptides, 2) rank the energetics of binding of small designed proteins to the hydrophobic stem region of the influenza hemagglutinin protein, and 3) predict the stability of T4 lysozyme and staphylococcal nuclease mutants. This work will not only lead to a fundamental understanding of protein-protein interactions, but also to the development of efficient computational methods to rationally design protein interfaces with tunable specificity and affinity, and numerous applications in biomedicine. Supported by NSF DMR-1006537 and PHY-1019147, the Raymond and Beverly Sackler Institute for Biological, Physical and Engineering Sciences, and the Howard Hughes Medical Institute.

  19. Computational methods for the verification of adaptive control systems

    NASA Astrophysics Data System (ADS)

    Prasanth, Ravi K.; Boskovic, Jovan; Mehra, Raman K.

    2004-08-01

    Intelligent and adaptive control systems will significantly challenge current verification and validation (V&V) processes, tools, and methods for flight certification. Although traditional certification practices have produced safe and reliable flight systems, they will not be cost effective for next-generation autonomous unmanned air vehicles (UAVs) due to inherent size and complexity increases from added functionality. Affordable V&V of intelligent control systems is by far the most important challenge in the development of UAVs faced by both the commercial and military aerospace industry in the United States. This paper presents a formal modeling framework for a class of adaptive control systems and an associated computational scheme. The class of systems considered includes neural network-based flight control systems and vehicle health management systems. This class of systems, and indeed all adaptive systems, are hybrid systems whose continuum dynamics are nonlinear. Our computational procedure is iterative, and each iteration has two sequential steps. The first step is to derive an approximating finite-state automaton whose behaviors contain the behaviors of the hybrid system. The second step is to check whether the language accepted by the approximating automaton is empty (emptiness checking). The iteration terminates if the accepted language is empty; otherwise, the approximation is refined and the iteration continues. This procedure will never produce an "error-free" certificate when the actual system contains errors, which is an important requirement in the V&V of safety-critical systems.
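
    The emptiness check at the core of the iteration is simple to state: the language accepted by a finite automaton is empty if and only if no accepting state is reachable from the initial state. A minimal sketch of that reachability test (a generic algorithm, not the paper's tool):

        from collections import deque

        def language_is_empty(initial, accepting, transitions):
            """Return True iff no accepting state is reachable from `initial`.
            `transitions` maps each state to an iterable of successor states."""
            seen, frontier = {initial}, deque([initial])
            while frontier:
                s = frontier.popleft()
                if s in accepting:
                    return False        # an accepted (error) behavior exists
                for t in transitions.get(s, ()):
                    if t not in seen:
                        seen.add(t)
                        frontier.append(t)
            return True                 # certificate: no error behavior reachable

        # Toy abstraction in which state "err" would model a property violation.
        trans = {"q0": ["q1"], "q1": ["q0", "q2"], "q2": []}
        print(language_is_empty("q0", {"err"}, trans))  # True -> certified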

  20. A comprehensive method for optical-emission computed tomography

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew; Bowsher, James; Roper, Justin; Oliver, Tim; Dewhirst, Mark; Oldham, Mark

    2010-07-01

    Optical-computed tomography (CT) and optical-emission computed tomography (ECT) are recent techniques with potential for high-resolution multi-faceted 3D imaging of the structure and function in unsectioned tissue samples up to 1-4 cc. Quantitative imaging of 3D fluorophore distribution (e.g. GFP) using optical-ECT is challenging due to attenuation present within the sample. Uncorrected reconstructed images appear hotter near the edges than at the center. A similar effect is seen in SPECT/PET imaging, although an important difference is attenuation occurs for both emission and excitation photons. This work presents a way to implement not only the emission attenuation correction utilized in SPECT, but also excitation attenuation correction and source strength modeling which are unique to optical-ECT. The performance of the correction methods was investigated by the use of a cylindrical gelatin phantom whose central region was filled with a known distribution of attenuation and fluorophores. Uncorrected and corrected reconstructions were compared to a sectioned slice of the phantom imaged using a fluorescent dissecting microscope. Significant attenuation artifacts were observed in uncorrected images and appeared up to 80% less intense in the central regions due to attenuation and an assumed uniform light source. The corrected reconstruction showed agreement throughout the verification image with only slight variations (~5%). Final experiments demonstrate the correction in tissue as applied to a tumor with constitutive RFP.

  1. Computational Simulation of Buoyancy-Driven Flows Using Vortex Methods.

    NASA Astrophysics Data System (ADS)

    Egan, Erik Witmer

    A new vortex method for simulating two-dimensional buoyancy-driven flows is presented. This Lagrangian method utilizes a discrete representation of the known density field along with the vorticity transport equation and Boussinesq approximation to yield the baroclinically-generated vorticity field, also in a discrete form. The corresponding velocity field is then computed using a vorticity-streamfunction scheme similar to the vortex-in-cell approach. Complete simulations for a variety of Rayleigh-Taylor stability problems are presented, as are preliminary results for Rayleigh-Bénard flows. The discrete vorticity field is made up of vertically oriented vortex dipole markers. The mutual interactions among these markers are determined by redistributing the dipolar marker vorticity onto a fixed array of true vortices. Standard vortex-in-cell techniques can then be used to generate marker velocities. The vorticity redistribution step is accomplished by matching the far-field velocity of a single dipole marker to that generated by the local grid vortices. The overall simulation method is termed the Dipole-in-Cell approach. Viscous and thermal diffusion effects (for Rayleigh-Bénard flows only) are described using a random walk scheme. Rayleigh-Taylor simulations for both single- and double-interface geometries show the expected linear and nonlinear flow development, including the recirculation associated with the Kelvin-Helmholtz interfacial instability. The double-interface results show the development of an "anti-spike" along the top interface, as seen in other studies. The simulations are also shown to be capable of following the impact of a mass of fluid on solid boundaries and pools of stagnant fluid. The Rayleigh-Bénard results demonstrate the validity of the random walk mechanism for simulating diffusion and the ability to generate rough representations of the classic Bénard convection cells. The accuracy of the Bénard cell results is limited by the long computation times required to reach steady state for small Rayleigh numbers. For the large Rayleigh number flows of greatest interest, no such problems will occur and the method should be well suited to simulating them. Suggestions are made for method improvements, including extensions to three-dimensional flow problems.

  2. Modern wing flutter analysis by computational fluid dynamics methods

    NASA Technical Reports Server (NTRS)

    Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.

    1988-01-01

    The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data, which is the first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.

  4. Methods and computer readable medium for improved radiotherapy dosimetry planning

    DOEpatents

    Wessol, Daniel E.; Frandsen, Michael W.; Wheeler, Floyd J.; Nigg, David W.

    2005-11-15

    Methods and computer readable media are disclosed for ultimately developing a dosimetry plan for a treatment volume irradiated during radiation therapy with a radiation source concentrated internally within a patient or incident from an external beam. The dosimetry plan is available in near "real-time" because of the novel geometric model construction of the treatment volume which in turn allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of alpha, beta or gamma emissions emanating from an internal radiation source during various radiotherapies, such as brachytherapy or targeted radionuclide therapy, or they are exemplary representations of high-energy photons, electrons, protons or other ionizing particles incident on the treatment volume from an external source. In a preferred embodiment, a medical image of a treatment volume irradiated during radiotherapy having a plurality of pixels of information is obtained.

  5. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    SciTech Connect

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-09-20

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

  6. Computational and experimental methods to decipher the epigenetic code

    PubMed Central

    de Pretis, Stefano; Pelizzola, Mattia

    2014-01-01

    A multi-layered set of epigenetic marks, including post-translational modifications of histones and methylation of DNA, is finely tuned to define the epigenetic state of chromatin in any given cell type under specific conditions. Recently, the knowledge about the combinations of epigenetic marks occurring in the genome of different cell types under various conditions is rapidly increasing. Computational methods were developed for the identification of these states, unraveling the combinatorial nature of epigenetic marks and their association to genomic functional elements and transcriptional states. Nevertheless, the precise rules defining the interplay between all these marks remain poorly characterized. In this perspective we review the current state of this research field, illustrating the power and the limitations of current approaches. Finally, we sketch future avenues of research illustrating how the adoption of specific experimental designs coupled with available experimental approaches could be critical for a significant progress in this area. PMID:25295054

  7. Search systems and computer-implemented search methods

    SciTech Connect

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  8. Density functional methods as computational tools in materials design

    NASA Astrophysics Data System (ADS)

    Li, Y. S.; van Daelen, M. A.; Wrinn, M.; King-Smith, D.; Newsam, J. M.; Delley, B.; Wimmer, E.; Klitsner, T.; Sears, M. P.; Carlson, G. A.; Nelson, J. S.; Allan, D. C.; Teter, M. P.

    1994-04-01

    This article gives a brief overview of density functional theory and discusses two specific implementations: a numerical localized-basis approach (DMol) and the pseudopotential plane-wave method. Characteristic examples include Cu clusters, CO and NO dissociation on copper surfaces, Li-, K-, and O-endohedral fullerenes, tris-quaternary ammonium cations as zeolite templates, and oxygen defects in bulk SiO2. The calculations reveal the energetically favorable structures (estimated to be within 0.02 of experiment), the energetics of geometric changes, and the electronic structures underlying the bonding mechanisms. A characteristic DMol calculation on a 128-node nCUBE 2 parallel computer shows a speedup of about 107 over a single processor. A plane-wave calculation on a unit cell with 64 silicon atoms using 1024 nCUBE 2 processors runs about five times faster than on a single-processor CRAY YMP.

  9. Matching wind turbine rotors and loads: Computational methods for designers

    NASA Astrophysics Data System (ADS)

    Seale, J. B.

    1983-04-01

    A comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications was reported. A method was developed to convert the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) it is decided how turbine power is to be governed to insure the safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics are used to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach. The method leads to energy predictions and to insight into the modeled processes. A computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.

  10. Computational method for transmission eigenvalues for a spherically stratified medium.

    PubMed

    Cheng, Xiaoliang; Yang, Jing

    2015-07-01

    We consider a computational method for the interior transmission eigenvalue problem that arises in acoustic and electromagnetic scattering. The transmission eigenvalues contain useful information about some physical properties, such as the index of refraction. Rather than the existence and estimation of the spectral properties of the transmission eigenvalues, we focus on their numerical calculation, especially for spherically stratified media in R^3. Due to the nonlinearity and the special structure of the interior transmission eigenvalue problem, few numerical methods are available to date. First, we reduce the problem to a second-order ordinary differential equation. Then, we apply the Hermite finite element to the weak formulation of the equation. With proper rewriting of the matrix-vector form, we change the original nonlinear eigenvalue problem into a quadratic eigenvalue problem, which can be written as a linear system and solved by the eigs function in MATLAB. This numerical method is fast, effective, and can calculate as many transmission eigenvalues as needed at a time. PMID:26367151
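
    The rewriting step mentioned, turning a quadratic eigenvalue problem into a linear one, follows a standard companion linearization; a small sketch using SciPy in place of MATLAB's eigs (generic numerics, not the authors' code): the QEP (lam^2 M + lam C + K) x = 0 becomes a generalized linear eigenproblem of twice the size.

        import numpy as np
        from scipy.linalg import eig

        def quadratic_eig(M, C, K):
            """Solve (lam^2 M + lam C + K) x = 0 via companion linearization:
            [[0, I], [-K, -C]] z = lam [[I, 0], [0, M]] z with z = [x; lam x]."""
            n = M.shape[0]
            I, Z = np.eye(n), np.zeros((n, n))
            A = np.block([[Z, I], [-K, -C]])
            B = np.block([[I, Z], [Z, M]])
            lam, z = eig(A, B)
            return lam, z[:n]   # eigenvalues and the x-part of the eigenvectors

        rng = np.random.default_rng(0)
        M, C, K = np.eye(3), rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
        lam, X = quadratic_eig(M, C, K)
        residual = min(np.linalg.norm((l ** 2 * M + l * C + K) @ X[:, i])
                       for i, l in enumerate(lam))
        print(residual)  # near machine precision for a well-conditioned pair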

  11. Helping Students Soar to Success on Computers: An Investigation of the Soar Study Method for Computer-Based Learning

    ERIC Educational Resources Information Center

    Jairam, Dharmananda; Kiewra, Kenneth A.

    2010-01-01

    This study used self-report and observation techniques to investigate how students study computer-based materials. In addition, it examined if a study method called SOAR can facilitate computer-based learning. SOAR is an acronym that stands for the method's 4 theoretically driven and empirically supported components: select (S), organize (O),…

  13. Publications

    Cancer.gov

    Information about NCI publications including PDQ cancer information for patients and health professionals, patient-education publications, fact sheets, dictionaries, NCI blogs and newsletters and major reports.

  14. 29 CFR 779.266 - Methods of computing annual volume of sales or business.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Methods of computing annual volume of sales or business... Apply; Enterprise Coverage Computing the Annual Volume 779.266 Methods of computing annual volume of... lieu of calendar quarters in computing the annual volume. Once either basis has been adopted it must...

  15. 29 CFR 779.266 - Methods of computing annual volume of sales or business.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Methods of computing annual volume of sales or business... Apply; Enterprise Coverage Computing the Annual Volume 779.266 Methods of computing annual volume of... lieu of calendar quarters in computing the annual volume. Once either basis has been adopted it must...

  16. 29 CFR 779.266 - Methods of computing annual volume of sales or business.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Methods of computing annual volume of sales or business... Apply; Enterprise Coverage Computing the Annual Volume 779.266 Methods of computing annual volume of... lieu of calendar quarters in computing the annual volume. Once either basis has been adopted it must...

  17. 76 FR 67418 - Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ...The National Institute of Standards and Technology (NIST) publishes this notice to seek public comments on the first draft of Special Publication 500-293, US Government Cloud Computing Technology Roadmap, Release 1.0 (Draft). This document is intended to be the mechanism to define and communicate interoperability, portability, and security requirement priorities that must be met in terms of...

  18. Special computer-aided computed tomography (CT) volume measurement and comparison method for pulmonary tuberculosis (TB)

    PubMed Central

    Liu, Jingming; Sun, Zhaogang; Xie, Ruming; Gao, Mengqiu; Li, Chuanyou

    2015-01-01

    The computed tomography (CT) manifestations in pulmonary tuberculosis (PTB) patients are complex and could not previously be quantitatively evaluated. We aimed to establish a new method to objectively measure the lung injury level in PTB by thoracic CT and to make quantitative comparisons. In this retrospective study, a total of 360 adults were selected and divided into four groups according to their CT manifestations and medical history: a Normal group, a PTB group, a PTB with diabetes mellitus (DM) group and a Death caused by PTB group. Five additional patients who had consecutive CT scans were chosen for a preliminary longitudinal analysis. We established a new computer-aided CT volume measurement and comparison method for PTB patients (CACTV-PTB) which measured lung volume (LV) and thoracic volume (TV). RLT was calculated as the ratio of LV to TV, and comparisons were performed among the different groups. Standardized RLT (SRLT) was used in the longitudinal analysis among different patients. In the Normal group, LV and TV were positively correlated in linear regression (Y = -0.5 + 0.46X, R^2 = 0.796, P < 0.01). RLT values differed significantly among the four groups (Normal: 0.40 ± 0.05, PTB: 0.37 ± 0.08, PTB+DM: 0.34 ± 0.06, Death: 0.23 ± 0.04). The curves of the SRLT value from different patients share the same starting point and can be compared directly. Utilizing the novel objective method CACTV-PTB makes it possible to compare severity and dynamic change among different PTB patients. Our early experience also suggests that lung injury is more severe in the PTB+DM group than in the PTB group. PMID:26628995

  19. Public Experiments and Their Analysis with the Replication Method

    ERIC Educational Resources Information Center

    Heering, Peter

    2007-01-01

    One of those who failed to establish himself as a natural philosopher in 18th-century Paris was the future revolutionary Jean Paul Marat. He not only published several monographs on heat, optics and electricity, in which he attempted to characterise his work as purely empirical, but also tried to establish himself as a public lecturer.

  20. Pedagogical Methods of Teaching "Women in Public Speaking."

    ERIC Educational Resources Information Center

    Pederson, Lucille M.

    A course on women in public speaking, developed at the University of Cincinnati, focuses on the rhetoric of selected women who have been involved in various movements and causes in the United States in the twentieth century. Women studied include educator Mary McLeod Bethune, Congresswoman Jeannette Rankin, suffragette Carrie Chapman Catt, Helen…

  2. Developing a personal computer-based data visualization system using public domain software

    NASA Astrophysics Data System (ADS)

    Chen, Philip C.

    1999-03-01

    The current research investigates the possibility of developing a computing-visualization system using a public domain software system built on a personal computer. The Visualization Toolkit (VTK) is available on UNIX and PC platforms. VTK uses C++ to build an executable and has abundant programming classes/objects contained in its system library. Users can also develop their own classes/objects in addition to those existing in the class library, and can develop applications in the C++, Tcl/Tk, or Java environments. The present research shows how a data visualization system can be developed with VTK running on a personal computer. Topics include: execution efficiency; visual object quality; availability of the user interface design; and the feasibility of a VTK-based World Wide Web data visualization system. The research features a case study showing how to use VTK to visualize meteorological data with techniques including iso-surfaces, volume rendering, vector display, and composite analysis. The study also shows how the VTK outline, axes, and two-dimensional annotation text and title enhance the data presentation. The research also demonstrates how VTK works in an internet environment by accessing an executable from a Java application program in a webpage.
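
    A minimal VTK pipeline in Python illustrates the kind of system described (the canonical source-mapper-actor-renderer pattern; the iso-surface and volume-rendering pipelines discussed here follow the same structure):

        import vtk

        # Source -> mapper -> actor -> renderer -> window: the basic VTK pipeline.
        source = vtk.vtkSphereSource()
        source.SetThetaResolution(32)
        source.SetPhiResolution(32)

        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(source.GetOutputPort())

        actor = vtk.vtkActor()
        actor.SetMapper(mapper)

        renderer = vtk.vtkRenderer()
        renderer.AddActor(actor)

        window = vtk.vtkRenderWindow()
        window.AddRenderer(renderer)

        interactor = vtk.vtkRenderWindowInteractor()
        interactor.SetRenderWindow(window)

        window.Render()
        interactor.Start()  # opens an interactive render window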

  3. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective antenna... each point of +17 dBu field strength for all radials and draw the contour by connecting the...

  4. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective antenna... each point of +17 dBu field strength for all radials and draw the contour by connecting the...

  5. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective antenna... each point of +17 dBu field strength for all radials and draw the contour by connecting the...

  6. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective antenna... each point of +17 dBu field strength for all radials and draw the contour by connecting the...

  7. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective antenna... each point of +17 dBu field strength for all radials and draw the contour by connecting the...

  8. Development of computational methods for heavy lift launch vehicles

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Ryan, James S.

    1993-01-01

    The research effort has been focused on the development of an advanced flow solver for complex viscous turbulent flows with shock waves. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. A new computer program named CENS3D has been developed for viscous turbulent flows with discontinuities. Details of the code are described in Appendix A and Appendix B. With the developments of the numerical algorithm and dissipation model, the simulation of three-dimensional viscous compressible flows has become more efficient and accurate. The results of the research are expected to yield a direct impact on the design process of future liquid fueled launch systems.

  9. Oxford textbook of public health. Volume 3. Investigative methods in public health

    SciTech Connect

    Holland, W.W.; Detels, R.; Knox, G.

    1985-01-01

    This book contains 31 chapters. Some of the chapter titles are: Cross-sectional studies; Viral diseases of public health importance; Arboviruses; The principles of an epidemic field investigation; Field investigations in air; Radiation; Iatrogenic hazards; and Field investigations of noise hazards.

  10. A stoichiometric calibration method for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

    2014-04-01

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic reconstruction algorithm (filtered back projection). With a more advanced method (sinogram affirmed iterative technique), the values become 1.0 mm, 0.5 mm and 0.4 mm for protons, helium and carbon ions, respectively. These results allow one to conclude that the present adaptation of the stoichiometric calibration yields a highly accurate method for characterizing tissue with DECT for ion beam therapy and potentially for photon beam therapy.
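
    For orientation, a commonly used power-law definition of the effective atomic number is shown below (the conventional form; the paper argues for a more rigorous definition, so this is background rather than the authors' formula), with w_i, Z_i and A_i the mass fraction, atomic number and atomic mass of element i, and lambda_i its electron fraction:

        Z_{\mathrm{eff}} = \Bigl(\sum_i \lambda_i \, Z_i^{\,m}\Bigr)^{1/m},
        \qquad
        \lambda_i = \frac{w_i Z_i / A_i}{\sum_j w_j Z_j / A_j},
        \qquad m \approx 3\text{--}3.5 .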

  12. Non-unitary probabilistic quantum computing circuit and method

    NASA Technical Reports Server (NTRS)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
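
    The repeat-until-success structure described above can be illustrated with a toy state-vector simulation: a non-unitary operator A is rescaled to a contraction, embedded in a unitary via the standard Halmos dilation, and applied repeatedly with an ancilla flag. This is a hedged sketch of the generic technique, not the patented circuit; the dilation choice and the rescaling are assumptions.

        import numpy as np

        def psd_sqrt(M):
            # matrix square root of a positive semidefinite matrix
            w, V = np.linalg.eigh(M)
            return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

        def dilate(A):
            # Halmos unitary dilation of a contraction A (spectral norm <= 1)
            I = np.eye(A.shape[0])
            return np.block([[A, psd_sqrt(I - A @ A.conj().T)],
                             [psd_sqrt(I - A.conj().T @ A), -A.conj().T]])

        def repeat_until_success(A, psi, rng, max_tries=100):
            U = dilate(A / np.linalg.norm(A, 2))   # rescale so A is a contraction
            n = len(psi)
            for _ in range(max_tries):
                out = U @ np.concatenate([psi, np.zeros(n, complex)])  # ancilla in |0>
                p_success = np.linalg.norm(out[:n]) ** 2
                if rng.random() < p_success:
                    return out[:n] / np.sqrt(p_success)      # success: A|psi>, normalized
                psi = out[n:] / np.linalg.norm(out[n:])      # failure branch: try again
            raise RuntimeError("no success within max_tries")

        rng = np.random.default_rng(0)
        A = np.array([[1.0, 1.0], [0.0, 1.0]])               # a non-unitary operator
        psi = np.array([1.0, 0.0], dtype=complex)
        print(repeat_until_success(A, psi, rng))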

  13. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

    SciTech Connect

    Luttman, A.

    2012-03-30

    The main (long-term) goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis of the observed spatiotemporal evolution combines an objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer of the resulting problem is computed. An application to oceanic flow based on sea surface temperature is presented.
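
    As a concrete instance of variational optimization of a vector field from image data, the classical Horn-Schunck scheme below minimizes a data term plus a smoothness prior. It is a stand-in sketch in the same spirit, not Luttman's specific objective or priors; the parameters are illustrative.

        import numpy as np
        from scipy.ndimage import convolve

        def horn_schunck(I1, I2, alpha=10.0, iters=200):
            # minimize sum of (Ix*u + Iy*v + It)^2 + alpha^2 * smoothness(u, v)
            Iy, Ix = np.gradient(I1.astype(float))        # gradients along rows, cols
            It = I2.astype(float) - I1.astype(float)      # temporal difference
            u = np.zeros_like(It)
            v = np.zeros_like(It)
            k = np.array([[0.0, 0.25, 0.0], [0.25, 0.0, 0.25], [0.0, 0.25, 0.0]])
            for _ in range(iters):
                u_bar = convolve(u, k, mode="nearest")    # neighborhood averages
                v_bar = convolve(v, k, mode="nearest")
                common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
                u = u_bar - Ix * common
                v = v_bar - Iy * common
            return u, v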

  14. Pesticides and public health: integrated methods of mosquito management.

    PubMed Central

    Rose, R. I.

    2001-01-01

    Pesticides have a role in public health as part of sustainable integrated mosquito management. Other components of such management include surveillance, source reduction or prevention, biological control, repellents, traps, and pesticide-resistance management. We assess the future use of mosquito control pesticides in view of niche markets, incentives for new product development, Environmental Protection Agency registration, the Food Quality Protection Act, and improved pest management strategies for mosquito control. PMID:11266290

  15. Computational Methods for Domain Partitioning of Protein Structures

    NASA Astrophysics Data System (ADS)

    Veretnik, Stella; Shindyalov, Ilya

    Analysis of protein structures typically begins with decomposition of structure into more basic units, called "structural domains". The underlying goal is to reduce a complex protein structure to a set of simpler yet structurally meaningful units, each of which can be analyzed independently. Structural semi-independence of domains is their hallmark: domains often have compact structure and can fold or function independently. Domains can undergo so-called "domain shuffling" when they reappear in different combinations in different proteins, thus implementing different biological functions (Doolittle, 1995). Proteins can then be conceived as being built of such basic blocks: some, especially small proteins, usually consist of just one domain, while other proteins possess a more complex architecture containing multiple domains. Therefore, the methods for partitioning a structure into domains are of critical importance: their outcome defines the set of basic units upon which structural classifications are built and evolutionary analysis is performed. This is especially true nowadays in the era of structural genomics. Today there are many methods that decompose the structure into domains: some of them are manual (i.e., based on human judgment), others are semiautomatic, and still others are completely automatic (based on algorithms implemented as software). Overall there is a high level of consistency and robustness in the process of partitioning a structure into domains (for 80% of proteins), at least for structures where domain location is obvious. The picture is less bright when we consider proteins with more complex architectures: neither human experts nor computational methods can reach consistent partitioning in many such cases. This is a rather accurate reflection of biological phenomena in general, since domains are formed by different mechanisms; hence it is nearly impossible to come up with a set of well-defined rules that captures all of the observed cases.

  16. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 4 2013-07-01 2013-07-01 false Methods for computing interest benefits and special... LOAN (FFEL) PROGRAM Federal Payments of Interest and Special Allowance 682.304 Methods for computing... shall use the average daily balance method to determine the balance on which the Secretary computes...
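
    The regulation names the average daily balance method; the arithmetic it refers to is a day-weighted mean of the balance over the period, illustrated below with hypothetical dates and balances.

        from datetime import date

        def average_daily_balance(periods):
            """periods: (start, end_exclusive, balance) tuples covering the interval."""
            total = days = 0
            for start, end, balance in periods:
                n = (end - start).days       # number of days at this balance
                total += n * balance
                days += n
            return total / days

        # hypothetical loan whose balance drops once during a quarter
        adb = average_daily_balance([
            (date(2013, 7, 1), date(2013, 8, 15), 5000.00),
            (date(2013, 8, 15), date(2013, 10, 1), 4600.00),
        ])
        print(round(adb, 2))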

  17. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 4 2011-07-01 2011-07-01 false Methods for computing interest benefits and special... LOAN (FFEL) PROGRAM Federal Payments of Interest and Special Allowance 682.304 Methods for computing... shall use the average daily balance method to determine the balance on which the Secretary computes...

  18. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 4 2012-07-01 2012-07-01 false Methods for computing interest benefits and special... LOAN (FFEL) PROGRAM Federal Payments of Interest and Special Allowance 682.304 Methods for computing... shall use the average daily balance method to determine the balance on which the Secretary computes...

  19. 26 CFR 1.669(a)-3 - Tax computed by the exact throwback method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Tax computed by the exact throwback method. 1... Taxable Years Beginning Before January 1, 1969 1.669(a)-3 Tax computed by the exact throwback method. (a... compute the tax, on amounts deemed distributed under section 666, by the exact throwback method...

  20. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 4 2014-07-01 2014-07-01 false Methods for computing interest benefits and special... LOAN (FFEL) PROGRAM Federal Payments of Interest and Special Allowance 682.304 Methods for computing... shall use the average daily balance method to determine the balance on which the Secretary computes...

  1. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Methods for computing interest benefits and special...) PROGRAM Federal Payments of Interest and Special Allowance 682.304 Methods for computing interest... shall use the average daily balance method to determine the balance on which the Secretary computes...

  2. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of...
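
    The regulation permits any reasonable, consistently applied method; two common choices, straight-line and declining-balance depreciation, are sketched below with hypothetical figures.

        def straight_line(cost, salvage, life_years):
            # equal deduction each year
            return [(cost - salvage) / life_years] * life_years

        def declining_balance(cost, salvage, life_years, factor=2.0):
            # accelerated method; book value is never depreciated below salvage
            schedule, book = [], cost
            for _ in range(life_years):
                d = max(min(book * factor / life_years, book - salvage), 0.0)
                schedule.append(d)
                book -= d
            return schedule

        print(straight_line(10_000, 1_000, 5))       # [1800.0, 1800.0, ...]
        print(declining_balance(10_000, 1_000, 5))   # [4000.0, 2400.0, 1440.0, ...]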

  3. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of...

  4. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 2 2012-04-01 2012-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of...

  5. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 2 2011-04-01 2011-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of...

  6. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 2 2013-04-01 2013-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of...

  7. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The need to study computer modeling follows from current trends toward strengthening the general-education and worldview functions of computer science, which call for additional research into the…

  8. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
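
    The failure probability such methods target can always be pinned down by brute-force Monte Carlo, which the advanced methods (e.g., FORM/SORM, importance sampling) aim to beat in cost. A toy load-versus-resistance limit state is shown below; the distributions are illustrative, not from the paper.

        import numpy as np

        rng = np.random.default_rng(42)

        # toy limit state: failure when load S exceeds resistance R, i.e. g = R - S < 0
        n = 1_000_000
        R = rng.normal(5.0, 0.5, n)     # resistance (hypothetical units)
        S = rng.normal(3.0, 0.8, n)     # load
        failures = (R - S) < 0.0
        pf = failures.mean()
        se = np.sqrt(pf * (1.0 - pf) / n)   # standard error of the estimate
        print(f"P(failure) ~ {pf:.2e} +/- {se:.1e}")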

  9. Adapting Methods of Evaluation to Publications Used in Admissions.

    ERIC Educational Resources Information Center

    Bradham, Jo Allen

    1980-01-01

    Suggests ways to adapt five methods of evaluation to recruitment literature. Methods discussed are: (1) evaluation as measurement; (2) evaluation as the assessment of congruence between objectives and achievement; (3) evaluation as professional judgement; (4) evaluation as decision-maker; and (5) evaluation as comprehensive or goal-free…

  10. Publicity.

    ERIC Educational Resources Information Center

    Chisholm, Joan

    Publicity for preschool cooperatives is described. Publicity helps produce financial support for preschool cooperatives. It may take the form of posters, brochures, newsletters, open house, newspaper coverage, and radio and television. Word of mouth and general good will in the community are the best avenues of publicity that a cooperative nursery…

  11. Publications.

    ERIC Educational Resources Information Center

    Aviation/Space, 1980

    1980-01-01

    Presents a variety of publications available from government and nongovernment sources. The government publications are from the Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) and are designed for educators, students, and the public. (Author/SA)

  12. Students' Attitudes towards Control Methods in Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    Hintze, Hanne; And Others

    1988-01-01

    Describes study designed to investigate dental students' attitudes toward computer-assisted teaching as applied in programs for oral radiology in Denmark. Programs using personal computers and slide projectors with varying degrees of learner and teacher control are described, and differences in attitudes between male and female students are…

  14. Gyrokinetic theory and computational methods for electromagnetic perturbations in tokamaks

    NASA Astrophysics Data System (ADS)

    Qin, Hong

    A general gyrokinetic formalism and appropriate computational methods have been developed for electromagnetic perturbations in toroidal plasmas. This formalism and associated numerical code represent the first self-consistent, comprehensive, fully kinetic model for treating both magnetohydrodynamic (MHD) instabilities and electromagnetic drift waves. The gyrokinetic system of equations is derived by phase-space Lagrangian Lie perturbation methods which enable applications to modes with arbitrary wavelength. An important component missing from previous electromagnetic gyrokinetic theories, the gyrokinetic perpendicular dynamics, is identified and developed in the present analysis. This is accomplished by introducing a new "distribution function" and an associated governing gyrokinetic equation. Consequently, the compressional Alfvén waves and cyclotron waves can be systematically treated. The new insights into the gyrokinetic perpendicular dynamics uncovered here clarify the understanding of the gyrokinetic approach: the real spirit of the gyrokinetic reduction is to decouple the gyromotion from the guiding center orbital motion, instead of averaging it out. The gyrokinetic perpendicular dynamics is in fact essential to the recovery of the MHD model from a fully kinetic derivation. In particular, it serves to generalize, in the gyrokinetic framework, Spitzer's solution of the fluid/particle paradox to a broader regime of applicability. The gyrokinetic system is also shown to be reducible to a simpler form to deal with shear Alfvén waves. This consists of an appropriate form of the gyrokinetic equation governing the distribution function, the gyrokinetic Poisson equation, and a newly derived gyrokinetic moment equation. If all of the kinetic effects are neglected, the gyrokinetic moment equation is shown to recover the ideal MHD equation for shear Alfvén modes. In addition, a gyrokinetic Ohm's law, including both the perpendicular and the parallel components, is derived. The gyrokinetic equation is solved for the perturbed distribution function by integrating along the unperturbed orbits. Substituting this solution back into the gyrokinetic Poisson equation and the gyrokinetic moment equation yields the eigenmode equation. The eigenvalue problem is then solved by using a Fourier decomposition in the poloidal direction and a finite element method in the radial direction. Both analytic and numerical results from the gyrokinetic model were found to agree very well with the MHD results. Destabilization of toroidal Alfvén eigenmodes (TAEs) by energetic particles is known to be vitally important for ignition-class plasmas. For the test case with Maxwellian energetic hydrogen ions, comparisons have accordingly been made between the results from the present non-perturbative, fully kinetic calculation using the KIN-2DEM code and those from the perturbative hybrid calculation with the NOVA-K code. The agreement varies with hot particle thermal velocity. The discrepancy is mainly attributed to the differences in the basic models.

  15. Conformational study of vasoactive intestinal peptide by computational methods.

    PubMed

    Filizola, M; Cartenì-Farina, M; Perez, J J

    1997-07-01

    The conformational profile of vasoactive intestinal peptide (VIP) was characterized using computational methods. The strategy devised included a close examination of the conformational profile of the first 11 residues fragment followed by a study that considered the compatibility of the different conformations found with a continuation of the polypeptide chain in an alpha-helical conformation. Accordingly, a detailed analysis of the conformational preferences of the N-terminal fragment of VIP(1-11) was carried out within the framework of the molecular mechanics approach, using simulated annealing in an iterative fashion as the sampling technique. In a second step, low-energy structures of the fragment were fused to the remainder of the VIP chain in the form of two noninteracting alpha-helices, according to a model of the structure of the peptide proposed from NMR studies. After investigation for compatibility of each of the low-energy structures of VIP(1-11) with the two helical regions by energy minimization, only 5 of 35 structures were discarded. Analysis of the structures characterized indicates that most of the conformations of VIP(1-11), including the global minimum, can be described as bent conformations. Conformations exhibiting alpha-turns and beta-turns, previously proposed by NMR studies, were also characterized. The conformational analysis also suggests that the common structural features found in VIP(1-11) should also be present in VIP. Finally, because of the sequence homology between VIP and Peptide T, and the fact that both are ligands of the CD4 receptor, both sets of low-energy conformations were compared for similarity. The relevance of these results as guidance for the design of new peptide analogs targeted to the CD4 receptor is also discussed. PMID:9273888

  16. Profiling Animal Toxicants by Automatically Mining Public Bioassay Data: A Big Data Approach for Computational Toxicology

    PubMed Central

    Zhang, Jun; Hsieh, Jui-Hua; Zhu, Hao

    2014-01-01

    In vitro bioassays have been developed and are currently being evaluated as potential alternatives to traditional animal toxicity models. Already, the progress of high throughput screening techniques has generated an enormous amount of publicly available bioassay data for a large collection of compounds. When a compound is tested using a collection of various bioassays, all the testing results can be considered as providing a unique bio-profile for this compound, which records the responses induced when the compound interacts with different cellular systems or biological targets. Profiling compounds of environmental or pharmaceutical interest using useful toxicity bioassay data is a promising method to study complex animal toxicity. In this study, we developed an automatic virtual profiling tool to evaluate potential animal toxicants. First, we automatically acquired all PubChem bioassay data for a set of 4,841 compounds with publicly available rat acute toxicity results. Next, we developed a scoring system to evaluate the relevance between these extracted bioassays and animal acute toxicity. Finally, the top ranked bioassays were selected to profile the compounds of interest. The resulting response profiles proved to be useful to prioritize untested compounds for their animal toxicity potentials and form a potential in vitro toxicity testing panel. The protocol developed in this study could be combined with structure-activity approaches and used to explore additional publicly available bioassay datasets for modeling a broader range of animal toxicities. PMID:24950175
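
    A schematic version of the assay-scoring step: rank bioassays by the strength of association between their activity calls and the animal toxicity label, skipping assays with too few tested compounds. This is a simplified stand-in for the paper's scoring system; the correlation measure and the sparsity threshold are assumptions.

        import numpy as np

        def rank_bioassays(activity, toxic, min_tested=30):
            """activity: (n_compounds, n_assays) array of 0/1 outcomes, NaN = untested;
            toxic: length n_compounds 0/1 vector. Higher |correlation| = more relevant."""
            scores = np.zeros(activity.shape[1])
            for j in range(activity.shape[1]):
                col = activity[:, j]
                tested = ~np.isnan(col)
                if tested.sum() < min_tested:
                    continue                        # too sparse to score reliably
                c = np.corrcoef(col[tested], toxic[tested])[0, 1]
                if np.isfinite(c):
                    scores[j] = abs(c)
            order = np.argsort(scores)[::-1]        # most relevant assays first
            return order, scores[order]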

  17. Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture

    DOEpatents

    Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2011-10-11

    Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.
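
    For contrast with the patented ontology-based approach, a classic gloss-overlap (Lesk-style) disambiguator is sketched below; the sense inventory is a hypothetical mini-ontology, and the method shown is the textbook baseline rather than the patent's event-class selection.

        def lesk_disambiguate(word, context, sense_glosses):
            """Choose the sense whose gloss shares the most words with the context."""
            context_words = set(context.lower().split()) - {word.lower()}
            best_sense, best_overlap = None, -1
            for sense, gloss in sense_glosses.items():
                overlap = len(context_words & set(gloss.lower().split()))
                if overlap > best_overlap:
                    best_sense, best_overlap = sense, overlap
            return best_sense

        senses = {                                   # hypothetical mini-ontology
            "bank.FINANCE": "an institution that accepts deposits and lends money",
            "bank.TERRAIN": "sloping land beside a body of water",
        }
        print(lesk_disambiguate("bank", "the river overflowed its bank onto the land", senses))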

  18. Four-stage computational technology with adaptive numerical methods for computational aerodynamics

    NASA Astrophysics Data System (ADS)

    Shaydurov, V.; Liu, T.; Zheng, Z.

    2012-10-01

    Computational aerodynamics is a key technology in aircraft design which is ahead of physical experiment and complements it. All three components of computational modeling are actively developed: mathematical models of real aerodynamic processes, numerical algorithms, and high-performance computing. The most impressive progress has been made in the field of computing, though at the cost of considerably more complicated computer architectures. Numerical algorithms develop more conservatively; more precisely, they are proposed and theoretically justified for simpler mathematical problems. Nevertheless, computational mathematics has by now amassed a whole palette of numerical algorithms that can provide acceptable accuracy and an interface between modern mathematical models in aerodynamics and high-performance computers. A significant step in this direction was the European Project ADIGMA, whose positive experience will be used in the International Project TRISTAM for further movement in the field of computational technologies for aerodynamics. This paper gives a general overview of the objectives and approaches intended for use and a description of the recommended four-stage computer technology.

  19. Recent developments in the Green's function method. [for aerodynamics computer program

    NASA Technical Reports Server (NTRS)

    Tseng, K.; Puglise, J. A.; Morino, L.

    1977-01-01

    A recent computational development of the Green's function method (the method used in the computer program SOUSSA: Steady, Oscillatory and Unsteady Subsonic and Supersonic Aerodynamics) is presented. A scheme consisting of combined numerical (Gaussian quadrature) and analytical procedures for the evaluation of the source and doublet integrals used in the program is presented. This combination results in an 80 to 90% reduction in computer time.

  20. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions 779.342 Methods of computing annual volume of sales. The tests as to whether an establishment qualifies for exemption under section...

  1. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions 779.342 Methods of computing annual volume of sales. The tests as to whether an establishment qualifies for exemption under section...

  2. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  3. Reforming the Social Studies Methods Course. SSEC Publication No. 155.

    ERIC Educational Resources Information Center

    Patrick, John J.

    Numerous criticisms of college social studies methods courses have generated various reform efforts. Three of these reforms are examined, including competency-based teacher education, the value analysis approach to teacher education, and the human relations approach to teacher education. Competency-based courses develop among future teachers…

  4. Library Orientation Methods, Mental Maps, and Public Services Planning.

    ERIC Educational Resources Information Center

    Ridgeway, Trish

    Two library orientation methods, a self-guided cassette walking tour and a slide-tape program, were administered to 202 freshmen students to determine if moving through the library increased students' ability to develop a mental map of the library. An effort was made to ensure that the two orientation programs were equivalent. Results from the 148…

  5. The Repeated Replacement Method: A Pure Lagrangian Meshfree Method for Computational Fluid Dynamics

    PubMed Central

    Walker, Wade A.

    2012-01-01

    In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling a compressible fluid's tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly chops out fluid from active areas and replaces it with new flattened fluid cells with the same mass, momentum, and energy. We call the new cells flattened because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175
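
    The conservation step at the heart of the replacement operation can be written down directly: a group of cells is replaced by one constant state carrying the same total mass, momentum, and energy. The sketch below does this for 1D ideal-gas cells; the adaptive choice of where and how much to chop is omitted, and gamma = 1.4 is an assumption.

        import numpy as np

        def flatten(rho, u, p, vol, gamma=1.4):
            """Replace a group of 1D ideal-gas cells by one constant-state cell
            that carries exactly the same total mass, momentum, and energy."""
            mass = np.sum(rho * vol)
            momentum = np.sum(rho * u * vol)
            energy = np.sum((p / (gamma - 1.0) + 0.5 * rho * u**2) * vol)
            V = np.sum(vol)
            rho_new = mass / V
            u_new = momentum / mass
            internal = energy / V - 0.5 * rho_new * u_new**2   # internal energy density
            p_new = (gamma - 1.0) * internal
            return rho_new, u_new, p_new

        # three uneven cells collapse to one flattened state
        print(flatten(np.array([1.0, 0.8, 1.2]),
                      np.array([0.1, 0.3, -0.2]),
                      np.array([1.0, 0.9, 1.1]),
                      np.array([0.5, 0.5, 0.5])))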

  6. Astronomical refraction: Computational methods for all zenith angles

    NASA Technical Reports Server (NTRS)

    Auer, L. H.; Standish, E. M.

    2000-01-01

    It is shown that the problem of computing astronomical refraction for any value of the zenith angle may be reduced to a simple, nonsingular, numerical quadrature when the proper choice is made for the independent variable of integration.

  7. Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.

    2011-01-01

    This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…

  9. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  10. One-to-One Computing in Public Schools: Lessons from "Laptops for All" Programs

    ERIC Educational Resources Information Center

    Abell Foundation, 2008

    2008-01-01

    The basic tenet of one-to-one computing is that the student and teacher have Internet-connected, wireless computing devices in the classroom and optimally at home as well. Also known as "ubiquitous computing," this strategy assumes that every teacher and student has her own computing device and obviates the need for moving classes to computer…

  11. Small Scale Distance Education; "The Personal (Computer) Touch"; Tutorial Methods for TMA's Using a Computer.

    ERIC Educational Resources Information Center

    Fritsch, Helmut; And Others

    1989-01-01

    The authors present reports of current research on distance education at the FernUniversitat in West Germany. Fritsch discusses adapting distance education techniques for small classes. Kuffner describes procedures for providing feedback to students using personalized computer-generated letters. Klute discusses using a computer with tutorial…

  12. Method for simulating paint mixing on computer monitors

    NASA Astrophysics Data System (ADS)

    Carabott, Ferdinand; Lewis, Garth; Piehl, Simon

    2002-06-01

    Computer programs like Adobe Photoshop can generate a mixture of two 'computer' colors by using the Gradient control. However, the resulting colors diverge from the equivalent paint mixtures in both hue and value. This study examines why programs like Photoshop are unable to simulate paint or pigment mixtures, and offers a solution using Photoshop's existing tools. The article discusses how a library of colors, simulating paint mixtures, is created from 13 artists' colors. The mixtures can be imported into Photoshop as a color swatch palette of 1248 colors and as 78 continuous or stepped gradient files, all accessed in a new software package, Chromafile.

  13. Progress Towards Computational Method for Circulation Control Airfoils

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rumsey, C. L.; Anders, S. G.

    2005-01-01

    The compressible Reynolds-averaged Navier-Stokes equations are solved for circulation control airfoil flows. Numerical solutions are computed with both structured and unstructured grid solvers. Several turbulence models are considered, including the Spalart-Allmaras model with and without curvature corrections, the shear stress transport model of Menter, and the k-enstrophy model. Circulation control flows with jet momentum coefficients of 0.03, 0.10, and 0.226 are considered. Comparisons are made between computed and experimental pressure distributions, velocity profiles, Reynolds stress profiles, and streamline patterns. Including curvature effects yields the closest agreement with the measured data.

  14. Do public reports of provider performance make their data and methods available and accessible?

    PubMed

    Damberg, Cheryl L; Hyman, David; France, Julie

    2014-10-01

    Public reports of provider performance are widespread and the methods used to generate the provider ratings differ across the sponsoring entities. We examined 115 hospital and 27 physician public reports to determine whether report sponsors made the methods used to score providers available and accessible. While nearly all websites made transparent some of the methods used to assess provider performance, we found substantial variation in the extent to which they fully adhered to recommended methods elements identified in the Consumer-Purchaser Disclosure Project's Patient Charter for performance reporting. Most public reports provided descriptions of the data sources, whether measures were endorsed, and the attribution approach. Least often made transparent were methods descriptions related to advanced provider review and reconsideration of results, reliability assessment, and case-mix adjustment. Future research should do more to identify the core elements that would lead consumer end users to have confidence in public reports. PMID:23838150

  15. Simple computer method provides contours for radiological images

    NASA Technical Reports Server (NTRS)

    Newell, J. D.; Keller, R. A.; Baily, N. A.

    1975-01-01

    The computer is provided with information concerning boundaries in the total image. The gradient of each point in the digitized image is calculated with the aid of a threshold technique; a set of algorithms is then invoked to reduce the number of gradient elements and to retain only the major ones for definition of the contour.
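
    A modern numpy re-creation of the idea (threshold the gradient magnitude, then retain only locally maximal elements); this illustrates the technique described, not the 1975 algorithm set, and the threshold quantile is an arbitrary choice.

        import numpy as np

        def contour_points(image, quantile=0.90):
            """Keep only pixels whose gradient magnitude is large and locally maximal."""
            img = image.astype(float)
            gy, gx = np.gradient(img)
            mag = np.hypot(gx, gy)
            keep = mag >= np.quantile(mag, quantile)      # threshold step
            # crude thinning: require a local maximum along the dominant gradient axis
            max_y = (mag >= np.roll(mag, 1, axis=0)) & (mag >= np.roll(mag, -1, axis=0))
            max_x = (mag >= np.roll(mag, 1, axis=1)) & (mag >= np.roll(mag, -1, axis=1))
            keep &= np.where(np.abs(gy) > np.abs(gx), max_y, max_x)
            return np.argwhere(keep)                       # (row, col) contour points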

  16. [Computation method for optimization of recipes for protein content].

    PubMed

    Kovalev, N I; Karzeva, N J; Fiterer, V O

    1987-01-01

    The authors propose a calculated protein utilization coefficient. This coefficient considers the difference between the utilization rates of the proteins contained in the mixture and their amino-acid composition. The proposed formula permits calculation by computer. The data obtained correlate highly with the results of biological tests with Tetrahymena cultures. PMID:3431579

  17. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    ERIC Educational Resources Information Center

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  18. The ijk forms of factorization methods. I - Vector computers

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1988-01-01

    This paper gives a detailed exposition of the 'ijk forms' of LU and Choleski factorization. Several aspects of these different organizations are discussed and their properties on vector computers are compared. Extensions of the ijk formalism to other algorithms is also given.
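
    Two of the forms Ortega compares, written as explicit loops: the kij (outer-product) form updates a trailing submatrix each step, while the jki (column-oriented) form touches one column at a time. Both compute the same packed LU factorization; pivoting is omitted here for clarity.

        import numpy as np

        def lu_kij(A):
            """kij form: rank-1 update of the trailing submatrix at each step,
            favoring long vector operations on the current column and row."""
            A = A.astype(float).copy()
            n = len(A)
            for k in range(n - 1):
                A[k+1:, k] /= A[k, k]                              # multipliers (L column)
                A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])  # rank-1 update
            return A                                               # L and U packed together

        def lu_jki(A):
            """jki form: column j is updated by all previous columns (SAXPY-style
            access) before its own multipliers are formed."""
            A = A.astype(float).copy()
            n = len(A)
            for j in range(n):
                for k in range(j):
                    A[k+1:, j] -= A[k, j] * A[k+1:, k]
                A[j+1:, j] /= A[j, j]                              # empty slice when j = n-1
            return A

        M = np.array([[4.0, 3.0], [6.0, 3.0]])
        print(np.allclose(lu_kij(M), lu_jki(M)))                   # True: same packed factors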

  19. Computed radiography imaging plates and associated methods of manufacture

    SciTech Connect

    Henry, Nathaniel F.; Moses, Alex K.

    2015-08-18

    Computed radiography imaging plates incorporating an intensifying material that is coupled to or intermixed with the phosphor layer, allowing electrons and/or low energy x-rays to impart their energy on the phosphor layer, while decreasing internal scattering and increasing resolution. The radiation needed to perform radiography can also be reduced as a result.

  20. New Methods of Mobile Computing: From Smartphones to Smart Education

    ERIC Educational Resources Information Center

    Sykes, Edward R.

    2014-01-01

    Every aspect of our daily lives has been touched by the ubiquitous nature of mobile devices. We have experienced an exponential growth of mobile computing--a trend that seems to have no limit. This paper provides a report on the findings of a recent offering of an iPhone Application Development course at Sheridan College, Ontario, Canada. It…

  2. Verifying a computational method for predicting extreme ground motion

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, B.T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  4. A method for computing the leading-edge suction in a higher-order panel method

    NASA Technical Reports Server (NTRS)

    Ehlers, F. E.; Manro, M. E.

    1984-01-01

    Experimental data show that the phenomenon of a separation induced leading edge vortex is influenced by the wing thickness and the shape of the leading edge. Both thickness and leading edge shape (rounded rather than pointed) delay the formation of a vortex. Existing computer programs used to predict the effect of a leading edge vortex do not include a procedure for determining whether or not a vortex actually exists. Studies under NASA Contract NAS1-15678 have shown that the vortex development can be predicted by using the relationship between the leading edge suction coefficient and the parabolic nose drag. The linear theory FLEXSTAB was used to calculate the leading edge suction coefficient. This report describes the development of a method for calculating leading edge suction using the capabilities of the higher order panel methods (exact boundary conditions). For a two-dimensional case, numerical methods were developed using the doublet strength and downwash distribution along the chord. A Gaussian quadrature formula that directly incorporates the logarithmic singularity in the downwash distribution, at all panel edges, was found to be the best method.
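
    The key numerical ingredient, quadrature in the presence of a logarithmic singularity, can be illustrated in a simpler setting: handle the singular part of an integral of f(x) ln(x) analytically and integrate the smooth remainder with Gauss-Legendre. The panel-method integrals embed the log weight in the rule itself; this subtract-the-singularity sketch is a stand-in, not the report's formula.

        import numpy as np
        from scipy.integrate import quad

        def gauss01(n=16):
            x, w = np.polynomial.legendre.leggauss(n)
            return 0.5 * (x + 1.0), 0.5 * w          # map [-1, 1] onto [0, 1]

        def log_singular(f, n=16):
            """Integral of f(x)*ln(x) on [0,1]: the singular part f(0)*ln(x) is
            integrated analytically (its integral is -f(0)); the remainder numerically."""
            x, w = gauss01(n)
            return -f(0.0) + np.sum(w * (f(x) - f(0.0)) * np.log(x))

        f = np.exp
        x, w = gauss01(16)
        naive = np.sum(w * f(x) * np.log(x))         # ignores the singularity
        split = log_singular(f)
        ref, _ = quad(lambda t: np.exp(t) * np.log(t), 0.0, 1.0)
        print(naive - ref, split - ref)              # subtraction is markedly more accurate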

  5. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
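
    A minimal sketch of the computation the article teaches: minimize the ideal-gas Gibbs free energy subject to element-balance constraints. The species set and the mu_i^0/RT values below are illustrative, not data from the article.

        import numpy as np
        from scipy.optimize import minimize

        # toy system: N2 + 3 H2 <-> 2 NH3 at fixed T, P (ideal gas)
        species = ["N2", "H2", "NH3"]
        mu0_RT = np.array([0.0, 0.0, -5.0])           # illustrative mu_i^0 / RT values
        A = np.array([[2, 0, 1],                       # N atoms per molecule
                      [0, 2, 3]])                      # H atoms per molecule
        b = A @ np.array([1.0, 3.0, 0.0])              # element totals: 1 N2 + 3 H2 feed

        def gibbs(n):
            # dimensionless Gibbs free energy of an ideal-gas mixture
            n = np.clip(n, 1e-12, None)
            return np.sum(n * (mu0_RT + np.log(n / n.sum())))

        res = minimize(gibbs, x0=np.array([0.5, 1.5, 1.0]),
                       constraints={"type": "eq", "fun": lambda n: A @ n - b},
                       bounds=[(1e-12, None)] * 3, method="SLSQP")
        print(dict(zip(species, res.x)))               # equilibrium mole numbers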

  6. 3D modeling method for computer animate based on modified weak structured light method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable to acquiring the 3D information. In the field of computer animation, such an optical measurement device is too expensive to be widely adopted, and on the other hand, the precision is not as critical a factor in that situation. In this paper, a new cheap 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. For an ordinary weak structured light configuration, one or two reference planes are required, and the shadows on these planes must be tracked in the scanning process, which destroys the convenience of this method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is expanded considerably. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints to get a full description of the object, and after a series of operations, a NURBS surface model is generated in the end. A complex toy bear is used to verify the efficiency of the method, and errors range from 0.7783 mm to 1.4326 mm compared with the ground-truth measurement.

  7. SOURCE WATER PROTECTION OF PUBLIC DRINKING WATER WELLS: COMPUTER MODELING OF ZONES CONTRIBUTING RECHARGE TO PUMPING WELLS

    EPA Science Inventory

    Computer technology to assist states, tribes, and clients in the design of wellhead and source water protection areas for public water supply wells is being developed through two distinct subtasks: (Subtask 1) developing a web-based wellhead decision support system, WellHEDSS, t...

  8. Computer-Assisted Instruction Network Systems. Preliminary Findings on First Year State-Wide Implementation in Hawaii Public Education.

    ERIC Educational Resources Information Center

    Hawaii State Dept. of Education, Honolulu.

    This is a report on the status of computer-assisted instruction (CAI) networked systems utilizing microcomputer technology at the end of the 1987-88 school year, the first year of CAI network implementation in the public schools of Hawaii. Data for the study were collected via: site visitations to 12 of the 19 networked CAI schools; interviews;

  9. The Computer Experience Microvan Program: A Cooperative Endeavor to Improve University-Public School Relations through Technology.

    ERIC Educational Resources Information Center

    Amodeo, Luiza B.; Martin, Jeanette

    To a large extent the Southwest can be described as a rural area. Under these circumstances, programs for public understanding of technology become, first of all, exercises in logistics. In 1982, New Mexico State University introduced a program to inform teachers about computer technology. This program takes microcomputers into rural classrooms…

  10. 78 FR 54453 - Notice of Public Meeting-Intersection of Cloud Computing and Mobility Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... National Institute of Standards and Technology Notice of Public Meeting--Intersection of Cloud Computing and Mobility Forum and Workshop AGENCY: National Institute of Standards and Technology, Department of... Technology (NIST) announces the Intersection of Cloud and Mobility Forum and Workshop to be held on...

  11. Multi-Level iterative methods in computational plasma physics

    SciTech Connect

    Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.

    1999-03-01

    Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods and large time steps can challenge the robustness of iterative methods. To meet these challenges they are developing a hybrid approach where multigrid methods are used as preconditioners to Krylov subspace based iterative methods such as conjugate gradients or GMRES. For nonlinear problems they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for application of these multilevel iterative methods to the field solves in implicit moment method PIC, multidimensional nonlinear Fokker-Planck problems, and their initial efforts in particle MHD.
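
    The matrix-free Newton-GMRES kernel referred to above can be sketched in a few lines with scipy: Jacobian-vector products are approximated by finite differences, and a multigrid preconditioner would be supplied to GMRES through its M argument (omitted here). The test problem and tolerances are illustrative.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk(F, x0, newton_tol=1e-8, max_newton=20, fd_eps=1e-7):
            """Jacobian-free Newton-Krylov: GMRES solves J(x) dx = -F(x) using
            finite-difference Jacobian-vector products instead of an explicit J."""
            x = np.asarray(x0, dtype=float).copy()
            for _ in range(max_newton):
                Fx = F(x)
                if np.linalg.norm(Fx) < newton_tol:
                    return x
                J = LinearOperator((x.size, x.size), dtype=float,
                                   matvec=lambda v: (F(x + fd_eps * v) - Fx) / fd_eps)
                dx, _ = gmres(J, -Fx)       # a preconditioner would go in here (M=...)
                x = x + dx
            return x

        # small nonlinear test: solve x + 0.1*x^3 = 1 componentwise
        print(jfnk(lambda x: x + 0.1 * x**3 - 1.0, np.zeros(4)))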

  12. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Techniques for selecting interpolation points are also discussed.

  13. On multigrid methods for the Navier-Stokes Computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Krist, S. E.; Zang, T. A.

    1988-01-01

    The overall architecture of the multipurpose parallel-processing Navier-Stokes Computer (NSC) being developed by Princeton and NASA Langley (Nosenchuck et al., 1986) is described and illustrated with extensive diagrams, and the NSC implementation of an elementary multigrid algorithm for simulating isotropic turbulence (based on solution of the incompressible time-dependent Navier-Stokes equations with constant viscosity) is characterized in detail. The present NSC design concept calls for 64 nodes, each with the performance of a class VI supercomputer, linked together by a fiber-optic hypercube network and joined to a front-end computer by a global bus. In this configuration, the NSC would have a storage capacity of over 32 Gword and a peak speed of over 40 Gflops. The multigrid Navier-Stokes code discussed would give sustained operation rates of about 25 Gflops.

  14. Computational Methods for the Analysis of Array Comparative Genomic Hybridization

    PubMed Central

    Chari, Raj; Lockwood, William W.; Lam, Wan L.

    2006-01-01

    Array comparative genomic hybridization (array CGH) is a technique for assaying the copy number status of cancer genomes. The widespread use of this technology has led to a rapid accumulation of high throughput data, which in turn has prompted the development of computational strategies for the analysis of array CGH data. Here we explain the principles behind array image processing, data visualization and genomic profile analysis, review currently available software packages, and raise considerations for future software development. PMID:17992253

  15. Description of a method to support public health information management: organizational network analysis

    PubMed Central

    Merrill, Jacqueline; Bakken, Suzanne; Rockoff, Maxine; Gebbie, Kristine; Carley, Kathleen

    2007-01-01

    In this case study we describe a method that has potential to provide systematic support for public health information management. Public health agencies depend on specialized information that travels throughout an organization via communication networks among employees. Interactions that occur within these networks are poorly understood and are generally unmanaged. We applied organizational network analysis, a method for studying communication networks, to assess the method's utility to support decision making for public health managers, and to determine what links existed between information use and agency processes. Data on communication links among a health department's staff were obtained via a survey with a 93% response rate, and analyzed using Organizational Risk Analyzer (ORA) software. The findings described the structure of information flow in the department's communication networks. The analysis succeeded in providing insights into organizational processes which informed public health managers' strategies to address problems and to take advantage of network strengths. PMID:17098480

  16. Analysis of multigrid methods on massively parallel computers: Architectural implications

    NASA Technical Reports Server (NTRS)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10(exp 6) and 10(exp 9), respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
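
    The kind of cost estimate used in such an analysis can be sketched with a latency-bandwidth (alpha-beta) communication model of one V cycle; all parameters below are illustrative placeholders, not the paper's machine data.

        def vcycle_time(n, procs, flop_rate, alpha, beta, flops_per_pt=30, dim=2):
            """Rough cost model of a multigrid V cycle: each level halves the grid
            in every dimension; each level exchanges halo messages with neighbors
            (alpha = per-message start-up cost, beta = per-word transfer cost)."""
            t = 0.0
            pts = float(n ** dim)
            while pts >= procs:                        # stop when grids run out of points
                local = pts / procs
                halo = local ** ((dim - 1) / dim)      # surface-to-volume message size
                t += flops_per_pt * local / flop_rate  # computation on this level
                t += 2 * dim * (alpha + beta * halo)   # neighbor exchanges
                pts /= 2 ** dim
            return t

        # e.g. a 10^6-point 2D grid on 4096 processors with a high message start-up cost
        print(vcycle_time(n=1000, procs=4096, flop_rate=1e7, alpha=1e-4, beta=1e-6))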

  17. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    SciTech Connect

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria; Zhou, Ning; Huang, Zhenyu

    2012-07-26

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. Having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers like CPLEX and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases better than, that of the widely used CPLEX algorithms. The proposed formulation and NDS based solver are also easily parallelizable, enabling further computational improvement.

  18. Frequency response modeling and control of flexible structures: Computational methods

    NASA Technical Reports Server (NTRS)

    Bennett, William H.

    1989-01-01

    The dynamics of vibrations in flexible structures can be conveniently modeled in terms of frequency response models. For structural control such models capture the distributed parameter dynamics of the elastic structural response as an irrational transfer function. For most flexible structures arising in aerospace applications the irrational transfer functions which arise are of a special class of pseudo-meromorphic functions which have only a finite number of right half-plane poles. Computational algorithms are demonstrated for design of multiloop control laws for such models based on optimal Wiener-Hopf control of the frequency responses. The algorithms employ a sampled-data representation of irrational transfer functions which is particularly attractive for numerical computation. One key algorithm for the solution of the optimal control problem is the spectral factorization of an irrational transfer function. The basis for the spectral factorization algorithm is highlighted together with associated computational issues arising in optimal regulator design. Options for implementation of wide band vibration control for flexible structures based on the sampled-data frequency response models are also highlighted. A simple flexible structure control example is considered to demonstrate the combined frequency response modeling and control algorithms.

  19. 26 CFR 1.669(a)-3 - Tax computed by the exact throwback method.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 8 2014-04-01 2014-04-01 false Tax computed by the exact throwback method. 1... Applicable to Taxable Years Beginning Before January 1, 1969 1.669(a)-3 Tax computed by the exact throwback... elects to compute the tax, on amounts deemed distributed under section 666, by the exact throwback...

  20. 26 CFR 1.669(a)-3 - Tax computed by the exact throwback method.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 8 2012-04-01 2012-04-01 false Tax computed by the exact throwback method. 1... Applicable to Taxable Years Beginning Before January 1, 1969 1.669(a)-3 Tax computed by the exact throwback... elects to compute the tax, on amounts deemed distributed under section 666, by the exact throwback...

  1. 26 CFR 1.669(a)-3 - Tax computed by the exact throwback method.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 8 2013-04-01 2013-04-01 false Tax computed by the exact throwback method. 1... Applicable to Taxable Years Beginning Before January 1, 1969 1.669(a)-3 Tax computed by the exact throwback... elects to compute the tax, on amounts deemed distributed under section 666, by the exact throwback...

  2. 26 CFR 1.669(a)-3 - Tax computed by the exact throwback method.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 8 2011-04-01 2011-04-01 false Tax computed by the exact throwback method. 1... Applicable to Taxable Years Beginning Before January 1, 1969 1.669(a)-3 Tax computed by the exact throwback... elects to compute the tax, on amounts deemed distributed under section 666, by the exact throwback...

  3. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms is a set of variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible-directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
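
    As a concrete instance of the function-space gradient methods the survey opens with, the sketch below applies steepest descent with an adjoint-computed gradient to a small discrete-time linear-quadratic problem. All problem data and the step size are invented for illustration.

      # Steepest-descent solution of a discrete-time LQ problem; the gradient
      # with respect to the control comes from a backward adjoint sweep.
      import numpy as np

      A = np.array([[1.0, 0.1], [0.0, 1.0]])
      B = np.array([[0.0], [0.1]])
      Q, R = np.eye(2), np.array([[0.1]])
      N, x0 = 20, np.array([1.0, 0.0])

      u = np.zeros((N, 1))
      for it in range(500):
          x = [x0]
          for k in range(N):                   # forward sweep: state trajectory
              x.append(A @ x[k] + B @ u[k])
          lam = Q @ x[N]                       # terminal adjoint condition
          grad = np.zeros_like(u)
          for k in reversed(range(N)):         # backward sweep: adjoint equation
              grad[k] = R @ u[k] + B.T @ lam
              lam = Q @ x[k] + A.T @ lam
          u -= 0.2 * grad                      # fixed step size (assumed stable)
      print("final state:", x[N])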

  4. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while supporting research interests such as simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein enables CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  5. ICRP Publication 116: the first ICRP/ICRU application of the male and female adult reference computational phantoms

    NASA Astrophysics Data System (ADS)

    Petoussi-Henss, Nina; Bolch, Wesley E.; Eckerman, Keith F.; Endo, Akira; Hertel, Nolan; Hunt, John; Menzel, Hans G.; Pelliccioni, Maurizio; Schlattl, Helmut; Zankl, Maria

    2014-09-01

    ICRP Publication 116, on conversion coefficients for radiological protection quantities for external radiation exposures, provides fluence-to-dose conversion coefficients for organ-absorbed doses and effective dose for various types of external exposures (ICRP 2010 ICRP Publication 116). The publication supersedes ICRP Publication 74 (ICRP 1996 ICRP Publication 74; ICRU 1998 ICRU Report 57), including new particle types and expanding the energy ranges considered. The coefficients were calculated using the ICRP/ICRU computational phantoms (ICRP 2009 ICRP Publication 110) representing the reference adult male and reference adult female (ICRP 2002 ICRP Publication 89), together with a variety of Monte Carlo codes simulating the radiation transport in the body. Idealized whole-body irradiation from unidirectional and rotational parallel beams as well as isotropic irradiation was considered for a large variety of incident radiations and energy ranges. Comparison of the effective doses with operational quantities revealed that the latter continue to provide a good approximation of effective dose for photons, neutrons and electrons over the conventional energy ranges considered previously (ICRP 1996, ICRU 1998), but not at the higher energies of ICRP Publication 116.

  6. BindingDB in 2015: A public database for medicinal chemistry, computational chemistry and systems pharmacology

    PubMed Central

    Gilson, Michael K.; Liu, Tiqing; Baitaluk, Michael; Nicola, George; Hwang, Linda; Chong, Jenny

    2016-01-01

    BindingDB, www.bindingdb.org, is a publicly accessible database of experimental protein-small molecule interaction data. Its collection of over a million data entries derives primarily from scientific articles and, increasingly, US patents. BindingDB provides many ways to browse and search for data of interest, including an advanced search tool, which can cross searches of multiple query types, including text, chemical structure, protein sequence and numerical affinities. The PDB and PubMed provide links to data in BindingDB, and vice versa; and BindingDB provides links to pathway information, the ZINC catalog of available compounds, and other resources. The BindingDB website offers specialized tools that take advantage of its large data collection, including ones to generate hypotheses for the protein targets bound by a bioactive compound, and for the compounds bound by a new protein of known sequence; and virtual compound screening by maximal chemical similarity, binary kernel discrimination, and support vector machine methods. Specialized data sets are also available, such as binding data for hundreds of congeneric series of ligands, drawn from BindingDB and organized for use in validating drug design methods. BindingDB offers several forms of programmatic access, and comes with extensive background material and documentation. Here, we provide the first update of BindingDB since 2007, focusing on new and unique features and highlighting directions of importance to the field as a whole. PMID:26481362

  8. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of: (1) deterministic structural analyses with fine (convergent) finite element meshes, (2) probabilistic structural analyses with coarse finite element meshes, (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes, and (4) a probabilistic mapping. The results show that the scatter of the probabilistic structural responses and structural reliability can be accurately predicted using a coarse finite element model with proper mapping methods. Therefore, large structures can be analyzed probabilistically using finite element methods.

  9. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of the following: (1) deterministic structural analyses with fine (convergent) finite element meshes; (2) probabilistic structural analyses with coarse finite element meshes; (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes; and (4) a probabilistic mapping. The results show that the scatter in the probabilistic structural responses and structural reliability can be efficiently predicted using a coarse finite element model and proper mapping methods with good accuracy. Therefore, large structures can be efficiently analyzed probabilistically using finite element methods.

  10. Adaptive computational methods for SSME internal flow analysis

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1986-01-01

    Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving-mesh methods), in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for the steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given, and some pertinent conclusions of the study are listed.
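
    A minimal illustration of the h-method idea mentioned above: refine cells wherever a local error indicator exceeds a threshold. The indicator, tolerance, and test function are invented, not taken from the SSME study.

      # 1D h-refinement sketch: split any cell whose midpoint interpolation
      # error indicator exceeds a tolerance (illustrative values throughout).
      import numpy as np

      def refine(nodes, u, tol=1e-3, max_passes=8):
          for _ in range(max_passes):
              mids = 0.5 * (nodes[:-1] + nodes[1:])
              interp = 0.5 * (u(nodes[:-1]) + u(nodes[1:]))  # linear interpolant
              flagged = np.abs(u(mids) - interp) > tol       # local error indicator
              if not flagged.any():
                  break
              nodes = np.sort(np.concatenate([nodes, mids[flagged]]))
          return nodes

      mesh = refine(np.linspace(0.0, 1.0, 5), lambda x: np.tanh(20 * (x - 0.5)))
      print(len(mesh), "nodes, smallest cell:", np.min(np.diff(mesh)))

    The refinement clusters nodes around the steep layer at x = 0.5, which is the qualitative behavior h-methods are designed to produce.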

  11. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    SciTech Connect

    Alex J. Dragt

    2012-08-31

    Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  12. Introduction to Computational Methods for Stability and Control (COMSAC)

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Fremaux, C. Michael; Chambers, Joseph R.

    2004-01-01

    This Symposium is intended to bring together the often distinct cultures of the Stability and Control (S&C) community and the Computational Fluid Dynamics (CFD) community. The COMSAC program is itself a new effort by NASA Langley to accelerate the application of high-end CFD methodologies to the demanding job of predicting stability and control characteristics of aircraft. This talk is intended to set the stage by showing why a program like COMSAC is needed; it is not intended to give details of the program itself. The topics include: 1) S&C challenges; 2) aero prediction methodology; 3) CFD applications; 4) NASA COMSAC planning; 5) objectives of the symposium; and 6) closing remarks.

  13. Shielding analysis methods available in the scale computational system

    SciTech Connect

    Parks, C.V.; Tang, J.S.; Hermann, O.W.; Bucholz, J.A.; Emmett, M.B.

    1986-01-01

    Computational tools have been included in the SCALE system to allow shielding analysis to be performed using both discrete-ordinates and Monte Carlo techniques. One-dimensional discrete ordinates analyses are performed with the XSDRNPM-S module, and point dose rates outside the shield are calculated with the XSDOSE module. Multidimensional analyses are performed with the MORSE-SGC/S Monte Carlo module. This paper will review the above modules and the four Shielding Analysis Sequences (SAS) developed for the SCALE system. 7 refs., 8 figs.

  14. 76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ...NIST announces the Cloud Computing Forum & Workshop IV to be held on November 2, 3 and 4, 2011. This workshop will provide information on the U.S. Government (USG) Cloud Computing Technology Roadmap initiative. This workshop will also provide an updated status on NIST efforts to help develop open standards in interoperability, portability and security in cloud computing. This event is open to...

  15. Computational Method for Electrical Potential and Other Field Problems

    ERIC Educational Resources Information Center

    Hastings, David A.

    1975-01-01

    Proposes the finite differences relaxation method as a teaching tool in secondary and university level courses discussing electrical potential, temperature distribution in a region, and similar problems. Outlines the theory and operating procedures of the method, and discusses examples of teaching applications, including possible laboratory…

  16. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
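
    A minimal sketch of the polynomial chaos idea behind such a method: fit a Hermite expansion of a model output in a standard normal input by least squares, then evaluate the cheap surrogate to estimate output statistics. The stand-in model and expansion order are invented; this is not the SRSM implementation itself.

      # Hermite (polynomial chaos) surrogate fitted by least squares.
      import numpy as np
      from numpy.polynomial.hermite_e import hermevander

      rng = np.random.default_rng(0)
      model = lambda x: np.exp(0.3 * x) + 0.1 * x**2   # stand-in for a costly model

      xi = rng.standard_normal(200)                    # training samples
      coef, *_ = np.linalg.lstsq(hermevander(xi, 4), model(xi), rcond=None)

      xi_big = rng.standard_normal(100_000)            # cheap surrogate evaluations
      y = hermevander(xi_big, 4) @ coef
      print("mean ~", y.mean(), "std ~", y.std())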

  17. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    NASA Technical Reports Server (NTRS)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer science problem, graph isomorphism, are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

  18. Computational Methods for Continuum Models of Platelet Aggregation

    NASA Astrophysics Data System (ADS)

    Wang, Nien-Tzu; Fogelson, Aaron L.

    1999-05-01

    Platelet aggregation plays an important role in blood clotting. Robust numerical methods for simulating the behavior of Fogelson's continuum models of platelet aggregation have been developed, which in particular involve a hybrid finite-difference and spectral method for the models' link evolution equation. This partial differential equation involves four spatial dimensions and time. The new methods are used to begin investigating the influence of chemically induced activation, link formation, and shear-induced link breaking in determining when aggregates develop sufficient strength to remain intact and when they are broken apart by fluid stresses.

  19. Leading Computational Methods on Scalar and Vector HEC Platforms

    SciTech Connect

    Oliker, Leonid; Carter, Jonathan; Wehner, Michael; Canning, Andrew; Ethier, Stephane; Mirin, Arthur; Bala, Govindasamy; Parks, David; Worley, Patrick H; Kitawaki, Shigemune; Tsuda, Yoshinori

    2005-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems has the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: atmospheric modeling (CAM), magnetic fusion (GTC), plasma physics (LBMHD3D), and material science (PARATEC). We compare the performance of the vector-based Cray X1, Earth Simulator (ES), and newly released NEC SX-8 and Cray X1E with that of three leading commodity-based superscalar platforms utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our research team was the first international group to conduct a performance evaluation study at the Earth Simulator Center; remote ES access is not available. Our work builds on our previous efforts [16, 17] and makes several significant contributions: the first reported vector performance results for CAM simulations utilizing a finite-volume dynamical core on a high-resolution atmospheric grid; a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26 Tflop/s on 4800 ES processors; and the largest PARATEC cell size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.

  20. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Method of computing annual volume of sales. 794.123 Section... STANDARDS ACT Exemption From Overtime Pay Requirements Under Section 7(b)(3) of the Act Annual Gross Volume of Sales 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during...

  1. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Method of computing annual volume of sales. 794.123 Section... STANDARDS ACT Exemption From Overtime Pay Requirements Under Section 7(b)(3) of the Act Annual Gross Volume of Sales 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during...

  2. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Method of computing annual volume of sales. 794.123 Section 794.123 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR... of Sales 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during...

  3. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Method of computing annual volume of sales. 794.123 Section 794.123 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR... of Sales 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during...

  4. 29 CFR 794.123 - Method of computing annual volume of sales.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Method of computing annual volume of sales. 794.123 Section 794.123 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR... of Sales 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during...

  5. A finite element method for the computation of transonic flow past airfoils

    NASA Technical Reports Server (NTRS)

    Eberle, A.

    1980-01-01

    A finite element method for the computation of transonic flow with shocks past airfoils is presented, using the artificial viscosity concept for the local supersonic regime. Generally, the classic element types do not meet the accuracy requirements of advanced numerical aerodynamics, so special attention must be paid to the choice of an appropriate element. A series of computed pressure distributions demonstrates the usefulness of the method.

  6. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  7. Computation of Spectroscopic Factors with the Coupled-Cluster Method

    SciTech Connect

    Jensen, O.; Hagen, Gaute; Papenbrock, T.; Dean, David Jarvis; Vaagen, J. S.

    2010-01-01

    We present a calculation of spectroscopic factors within coupled-cluster theory. Our derivation of algebraic equations for the one-body overlap functions is based on coupled-cluster equation-of-motion solutions for the ground and excited states of the doubly magic nucleus with mass number A and the odd-mass neighbor with mass A-1. As a proof-of-principle calculation, we consider ^{16}O and the odd neighbors ^{15}O and ^{15}N, and compute the spectroscopic factor for nucleon removal from ^{16}O. We employ a renormalized low-momentum interaction of the V_{low-k} type derived from a chiral interaction at next-to-next-to-next-to-leading order. We study the sensitivity of our results by varying the momentum cutoff, and then discuss the treatment of the center of mass.

  8. Computer capillaroscopy as a new cardiological diagnostics method

    NASA Astrophysics Data System (ADS)

    Gurfinkel, Youri I.; Korol, Oleg A.; Kufal, George E.

    1998-04-01

    The blood flow in capillary vessels plays an important role in sustaining the vital activity of the human organism. The computerized capillaroscope is used for investigations of nailfold (eponychium) capillary blood flow. An important advantage of the instrument is the possibility of performing non-invasive investigations, i.e., without damage to skin or vessels and causing no pain or unpleasant sensations. The high-grade equipment and software allow direct observation of capillary blood flow dynamics on a computer screen at 700-1300 times magnification. For the first time in clinical practice, it has become possible to precisely measure the speed of capillary blood flow, as well as the frequency of aggregate formation (blood particles glued together into clots). In addition, provision is made for automatic measurement of capillary size and wall thickness and automatic recording of blood aggregate images for further visual study, documentation, and electronic database management.

  9. Computer-assisted methods in chemical toxicity prediction.

    PubMed

    Mohan, C Gopi; Gandhi, Tamanna; Garg, Divita; Shinde, Ranajit

    2007-05-01

    In silico predictive ADME/Tox screening of compounds is one of the hottest areas in drug discovery. To provide predictions of compound drug-like characteristics early in modern drug-discovery decision making, computational technologies for rapid, high-throughput in silico ADMET analysis have been widely adopted. It is widely perceived that the early screening of chemical entities can significantly reduce the expensive costs associated with late-stage failures of drugs due to poor ADME/Tox properties. Drug toxic effects are broadly defined to include toxicity, mutagenicity, carcinogenicity, teratogenicity, neurotoxicity and immunotoxicity. Toxicity prediction techniques and structure-activity relationships rely on the accurate estimation and representation of physico-chemical and toxicological properties. This review highlights some of the freely and commercially available software for toxicity prediction. The information content can be utilized as a guide for scientists involved in the drug discovery arena. PMID:17504185

  10. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  11. Computer-stored faculty publication file using the MT/ST in a medium-sized medical center library.

    PubMed Central

    Lee, S; Gratz, P; White, J

    1976-01-01

    The Bowman Gray School of Medicine Library has implemented a computerized faculty publication file adapted from an existing system that utilized a Magnetic Tape/Selectric Typewriter for catalog card production and computer storage. The faculty publication file has provided printouts for the school's annual report and monthly faculty bulletins. After the data for all faculty bibliographies have been stored in the file, it will be possible to retrieve complete author and departmental listings. The file will be continuously updated by adding current citations and the bibliographies of new faculty members and by deleting data when faculty members leave the staff. PMID:1247706

  12. Vectorization on the star computer of several numerical methods for a fluid flow problem

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    Some numerical methods are reexamined in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady-state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes, and a comparison is made of the methods for serial computation.

  13. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated on a set of eighth-order random examples. A literature review of robust controller design methods follows, including a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
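
    One classical way to solve the Riccati equation iteratively, consistent with the theme above though not necessarily the report's exact scheme, is Kleinman's iteration, which reduces each step to a Lyapunov equation. The system matrices below are invented.

      # Kleinman's Riccati iteration: gains converge to the LQR solution.
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable, so K = 0 is admissible
      B = np.array([[0.0], [1.0]])
      Q, R = np.eye(2), np.array([[1.0]])

      K = np.zeros((1, 2))                       # initial stabilizing gain
      for _ in range(20):
          Ac = A - B @ K
          # solve Ac' P + P Ac = -(Q + K' R K) for P
          P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
          K = np.linalg.solve(R, B.T @ P)
      print("P =", P, "K =", K)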

  14. A locally refined rectangular grid finite element method - Application to computational fluid dynamics and computational physics

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1991-01-01

    The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.

  15. An efficient method for computing unsteady transonic aerodynamics of swept wings with control surfaces

    NASA Technical Reports Server (NTRS)

    Liu, D. D.; Kao, Y. F.; Fung, K. Y.

    1989-01-01

    A transonic equivalent strip (TES) method was further developed for unsteady flow computations over arbitrary wing planforms. The TES method consists of two consecutive correction steps to a given nonlinear code such as LTRAN2: a chordwise mean flow correction and a spanwise phase correction. The procedure requires direct pressure input from computed or measured data; otherwise, it does not require airfoil shape or grid generation for a given planform. To validate the computed results, four swept wings of various aspect ratios, including wings with control surfaces, are selected as computational examples. Overall trends in unsteady pressures are established by comparison with those obtained from the XTRAN3S codes, Isogai's full potential code, and data measured by NLR and RAE. In comparison with these methods, TES achieves considerable savings in computer time with reasonable accuracy, which suggests immediate industrial applications.

  16. Large-Scale Automated Analysis of News Media: A Novel Computational Method for Obesity Policy Research

    PubMed Central

    Hamad, Rita; Pomeranz, Jennifer L.; Siddiqi, Arjumand; Basu, Sanjay

    2015-01-01

    Objective: Analyzing news media allows obesity policy researchers to understand popular conceptions about obesity, which is important for targeting health education and policies. A persistent dilemma is that investigators have to read and manually classify thousands of individual news articles to identify how obesity and obesity-related policy proposals may be described to the public in the media. We demonstrate a novel method called “automated content analysis” that permits researchers to train computers to “read” and classify massive volumes of documents. Methods: We identified 14,302 newspaper articles that mentioned the word “obesity” during 2011–2012. We examined four states that vary in obesity prevalence and policy (Alabama, California, New Jersey, and North Carolina). We tested the reliability of an automated program to categorize the media’s “framing” of obesity as an individual-level problem (e.g., diet) and/or an environmental-level problem (e.g., obesogenic environment). Results: The automated program performed similarly to human coders. The proportion of articles with individual-level framing (27.7–31.0%) was higher than the proportion with neutral (18.0–22.1%) or environmental-level framing (16.0–16.4%) across all states and over the entire study period (p<0.05). Conclusion: We demonstrate a novel approach to the study of how obesity concepts are communicated and propagated in news media. PMID:25522013
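
    A minimal sketch of the supervised-classification machinery behind automated content analysis; the five toy documents and labels are invented, and the study's actual classifier and corpus are not reproduced.

      # Train a tiny framing classifier and apply it to an unseen sentence.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      docs = ["diet and exercise choices drive obesity",            # invented
              "personal willpower and eating habits",
              "fast food outlets cluster in poor neighborhoods",
              "city zoning shapes the obesogenic environment",
              "soda taxes target the food environment"]
      labels = ["individual", "individual",
                "environmental", "environmental", "environmental"]

      clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
      clf.fit(docs, labels)
      print(clf.predict(["neighborhood food environment and zoning policy"]))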

  17. Methods of Conserving Heating Energy Utilized in Thirty-One Public School Systems.

    ERIC Educational Resources Information Center

    Davis, Kathy Eggers

    The Memphis City School System was notified by Memphis Light, Gas, and Water that it was necessary to reduce its consumption of natural gas during the winter of 1975-76. A survey was developed and sent to 44 large public school systems to determine which methods of heating energy conservation were used most frequently and which methods were most…

  19. Using Computers in Relation to Learning Climate in CLIL Method

    ERIC Educational Resources Information Center

    Binterová, Helena; Komínková, Olga

    2013-01-01

    The main purpose of the work is to present a successful implementation of the CLIL method in mathematics lessons in elementary schools. Nowadays at all types of schools (elementary schools, high schools and universities) all over the world every school subject tends to be taught in a foreign language. In 2003, a document called Action plan for…

  20. Combination of Thin Lenses--A Computer Oriented Method.

    ERIC Educational Resources Information Center

    Flerackers, E. L. M.; And Others

    1984-01-01

    Suggests a method for treating geometric optics, using a microcomputer to do the calculations of image formation. Calculations are based on the connection between the composition of lenses and the mathematics of fractional linear equations. The logic of the analysis and an example problem are included. (JM)
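
    The lens-composition idea invites a short computation: thin lenses and gaps act as 2x2 ray-transfer matrices, and their products encode the fractional linear relations the abstract alludes to. The focal lengths and spacing below are arbitrary examples.

      # Compose two thin lenses via ray-transfer matrices and read off the
      # effective focal length; values are arbitrary illustrative choices.
      import numpy as np

      lens = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
      gap = lambda d: np.array([[1.0, d], [0.0, 1.0]])

      f1, f2, d = 50.0, 25.0, 10.0                 # millimetres (assumed)
      M = lens(f2) @ gap(d) @ lens(f1)

      print("f_eff =", -1.0 / M[1, 0])             # from the combined matrix
      print("formula =", 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2)))

    Both prints agree, confirming that the matrix composition reproduces the familiar two-lens formula.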

  1. Computational methods for estimation of parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.; Murphy, K. A.

    1983-01-01

    Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second-order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined, and numerical findings for the use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.

  2. Method and Apparatus for Computed Imaging Backscatter Radiography

    NASA Technical Reports Server (NTRS)

    Shedlock, Daniel (Inventor); Meng, Christopher (Inventor); Sabri, Nissia (Inventor); Dugan, Edward T. (Inventor); Jacobs, Alan M. (Inventor)

    2013-01-01

    Systems and methods of x-ray backscatter radiography are provided. A single-sided, non-destructive imaging technique utilizing x-ray radiation to image subsurface features is disclosed, capable of scanning a region using a fan beam aperture and gathering data using rotational motion.

  3. Limitations of the current methods used to compute meteors orbits

    NASA Astrophysics Data System (ADS)

    Egal, A.; Gural, P.; Vaubaillon, J.; Colas, F.; Thuillot, W.

    2015-10-01

    The Cameras for BEtter Resolution NETwork (CABERNET) project aims to provide the most accurate meteoroid orbits achievable working with digital recordings of night sky imagery. The level of performance obtained is governed by the technical attributes of the collection systems and by having both accurate and robust data processing. The technical challenges have been met by employing three cameras, each with a field of view of 40° x 26° and a spatial (angular) resolution of 0.01°/pixel. The single-image snapshots of meteors achieve temporal discrimination along the track through the use of an electronic shutter coupled to the cameras, operating at a sample rate between 100 Hz and 200 Hz. The numerical processing of meteor trajectories has already been explored by many authors, including the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovička (1990), and the multi-fit parameterization method published by Gural (2012). After a comparison of these three techniques, we chose to implement Gural's method, employing several non-linear minimization techniques and trying to match the modeling as closely as possible to the basic data measured, i.e. the meteor space-time positions in the sequence of images. This approach results in a more precise and reliable determination of both the meteor trajectory and its velocity through the atmosphere.

  4. Recursive method for computing matrix elements for two-body interactions

    NASA Astrophysics Data System (ADS)

    Hyvärinen, Juhani; Suhonen, Jouni

    2015-05-01

    A recursive method for the efficient computation of two-body matrix elements is presented. The method consists of a set of recursion relations for the computationally demanding radial integral and adds one more tool to the set of computational methods introduced by Horie and Sasaki [H. Horie and K. Sasaki, Prog. Theor. Phys. 25, 475 (1961), 10.1143/PTP.25.475]. The neutrinoless double-β decay will serve as the primary application and example, but the method is general and can be applied equally well to other kinds of nuclear structure calculations involving matrix elements of two-body interactions.

  5. Fast computation method for a Fresnel hologram using three-dimensional affine transformations in real space.

    PubMed

    Sakata, Hironobu; Sakamoto, Yuji

    2009-12-01

    Calculating computer-generated holograms takes a tremendous amount of computation time. We propose a fast method for calculating object lights for Fresnel holograms without the use of a Fourier transform. This method generates object lights of variously shaped patches from a basic object light for a fixed-shape patch by using three-dimensional affine transforms. It can thus calculate holograms that display complex objects including patches of various shapes. Computer simulations and optical experiments demonstrate the effectiveness of this method. The results show that it performs twice as fast as a method that uses a Fourier transform. PMID:19956293

  6. Fast calculation method of computer-generated cylindrical hologram using wave-front recording surface.

    PubMed

    Zhao, Yu; Piao, Mei-lan; Li, Gang; Kim, Nam

    2015-07-01

    A fast calculation method for a computer-generated cylindrical hologram (CGCH) is proposed. The method consists of two steps: the first is the calculation of a virtual wave-front recording surface (WRS) located between the 3D object and the CGCH. In the second step, to obtain the CGCH, we execute a diffraction calculation based on the fast Fourier transform (FFT) from the WRS to the CGCH, which are in the same concentric arrangement. The computational complexity is dramatically reduced in comparison with the direct integration method. The simulation results confirm that the proposed method improves the computational speed of CGCH generation. PMID:26125356
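
    A minimal flat-plane stand-in for the FFT-based diffraction step such methods rely on: propagate a field between parallel planes with the Fresnel transfer function. The parameters are invented, and the cylindrical geometry and WRS bookkeeping of the actual method are not reproduced.

      # Fresnel propagation between parallel planes via the transfer-function
      # (angular spectrum) method; all parameters are illustrative.
      import numpy as np

      wl, z, n, dx = 633e-9, 0.1, 512, 10e-6       # wavelength, distance, grid
      fx = np.fft.fftfreq(n, dx)
      FX, FY = np.meshgrid(fx, fx)
      H = np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2))  # Fresnel transfer fn

      u0 = np.zeros((n, n), dtype=complex)
      u0[n // 2, n // 2] = 1.0                     # point source in the input plane
      u1 = np.fft.ifft2(np.fft.fft2(u0) * H)       # field after propagation
      print("peak intensity:", np.abs(u1).max() ** 2)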

  7. Pragmatic approaches to using computational methods to predict xenobiotic metabolism.

    PubMed

    Piechota, Przemyslaw; Cronin, Mark T D; Hewitt, Mark; Madden, Judith C

    2013-06-24

    In this study the performance of a selection of computational models for the prediction of metabolites and/or sites of metabolism was investigated. These included models incorporated in the MetaPrint2D-React, Meteor, and SMARTCyp software. The algorithms were assessed using two data sets: one a homogeneous data set of 28 Non-Steroidal Anti-Inflammatory Drugs (NSAIDs) and paracetamol (DS1) and the second a diverse data set of 30 top-selling drugs (DS2). The prediction of metabolites for the diverse data set (DS2) was better than for the more homogeneous DS1 for each model, indicating that some areas of chemical space may be better represented than others in the data used to develop and train the models. The study also identified compounds for which none of the packages could predict metabolites, again indicating areas of chemical space where more information is needed. Pragmatic approaches to using metabolism prediction software have also been proposed based on the results described here. These approaches include using cutoff values instead of restrictive reasoning settings in Meteor to reduce the output with little loss of sensitivity and for directing metabolite prediction by preselection based on likely sites of metabolism. PMID:23718189

  8. An Improved Computer Vision Method for White Blood Cells Detection

    PubMed Central

    Cuevas, Erik; Daz, Margarita; Manzanares, Miguel; Zaldivar, Daniel; Perez-Cisneros, Marco

    2013-01-01

    The automatic detection of white blood cells (WBCs) still remains an unsolved issue in medical imaging. The analysis of WBC images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by an ellipsoidal form, an ellipse detector algorithm may be successfully applied to recognize such elements. This paper presents an algorithm for the automatic detection of WBCs embedded in complicated and cluttered smear images that considers the complete process as a multiellipse detection problem. The approach, which is based on the differential evolution (DE) algorithm, transforms the detection task into an optimization problem whose individuals represent candidate ellipses. An objective function evaluates whether such candidate ellipses are actually present in the edge map of the smear image. Guided by the values of this function, the set of encoded candidate ellipses (individuals) are evolved using the DE algorithm so that they fit the WBCs enclosed within the edge map of the smear image. Experimental results from white blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique in terms of its accuracy and robustness. PMID:23762178
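
    A toy analogue of the ellipse-fitting core of such a detector: differential evolution minimizing an algebraic-distance objective over synthetic edge points. The data are invented; the actual detector works on edge maps extracted from smear images.

      # Fit one ellipse to noisy synthetic edge points with differential evolution.
      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(1)
      t = rng.uniform(0.0, 2.0 * np.pi, 120)
      pts = np.c_[40 + 12 * np.cos(t), 55 + 8 * np.sin(t)]
      pts += rng.normal(0.0, 0.3, (120, 2))        # synthetic, slightly noisy edge

      def cost(p):                                 # algebraic distance objective
          cx, cy, a, b = p
          d = ((pts[:, 0] - cx) / a) ** 2 + ((pts[:, 1] - cy) / b) ** 2 - 1.0
          return np.mean(d ** 2)

      res = differential_evolution(cost, seed=1,
                                   bounds=[(0, 100), (0, 100), (1, 50), (1, 50)])
      print(res.x)                                 # ~ [40, 55, 12, 8]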

  10. NMR quantum computing: applying theoretical methods to designing enhanced systems.

    PubMed

    Mawhinney, Robert C; Schreckenbach, Georg

    2004-10-01

    Density functional theory results for chemical shifts and spin-spin coupling constants are presented for compounds currently used in NMR quantum computing experiments. Specific design criteria were examined and numerical guidelines were assessed. Using a field strength of 7.0 T, protons require a coupling constant of 4 Hz with a chemical shift separation of 0.3 ppm, whereas carbon needs a coupling constant of 25 Hz for a chemical shift difference of 10 ppm, based on the minimal coupling approximation. Using these guidelines, it was determined that 2,3-dibromothiophene is limited to only two qubits; the three qubit system bromotrifluoroethene could be expanded to five qubits and the three qubit system 2,3-dibromopropanoic acid could also be used as a six qubit system. An examination of substituent effects showed that judiciously choosing specific groups could increase the number of available qubits by removing rotational degeneracies in addition to introducing specific conformational preferences that could increase (or decrease) the magnitude of the couplings. The introduction of one site of unsaturation can lead to a marked improvement in spectroscopic properties, even increasing the number of active nuclei. PMID:15366045

  11. The Role of Analytic Methods in Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Posey, J. W.

    2003-01-01

    As air traffic grows, annoyance produced by aircraft noise will grow unless new aircraft produce no objectionable noise outside airport boundaries. Such ultra-quiet aircraft must be of revolutionary design, having unconventional planforms and most likely with propulsion systems highly integrated with the airframe. Sophisticated source and propagation modeling will be required to properly account for effects of the airframe on noise generation, reflection, scattering, and radiation. It is tempting to say that since all the effects are included in the Navier-Stokes equations, time-accurate CFD can provide all the answers. Unfortunately, the computational time required to solve a full aircraft noise problem will be prohibitive for many years to come. On the other hand, closed form solutions are not available for such complicated problems. Therefore, a hybrid approach is recommended in which analysis is taken as far as possible without omitting relevant physics or geometry. Three examples are given of recently reported work in broadband noise prediction, ducted fan noise propagation and radiation, and noise prediction for complex three-dimensional jets.

  12. Using an Interactive Computer System to Teach Descriptive Numerical Analysis and Geographic Research Methods to Undergraduate Students.

    ERIC Educational Resources Information Center

    Rivizzigno, Victoria L.

    A method is proposed for using computer systems to introduce students in college-level geography courses to quantitative methods. Two computer systems are discussed--Interactive Computer Systems (computer packages which enhance student learning by providing instantaneous feedback) and Computer Enhancement of Instruction, CEI, (standard…

  13. Methods for design and evaluation of integrated hardware-software systems for concurrent computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) sparse matrix iterative solver in PSICES Fortran; (5) image processing application of PISCES; and (6) a formal model of concurrent computation being developed.

  14. Estimating cost-effectiveness in public health: a summary of modelling and valuation methods.

    PubMed

    Marsh, Kevin; Phillips, Ceri J; Fordham, Richard; Bertranou, Evelina; Hale, Janine

    2012-01-01

    It is acknowledged that economic evaluation methods as they have been developed for Health Technology Assessment do not capture all the costs and benefits relevant to the assessment of public health interventions. This paper reviews methods that could be employed to measure and value the broader set of benefits generated by public health interventions. It is proposed that two key developments are required if this vision is to be achieved. First, there is a trend to modelling approaches that better capture the effects of public health interventions. This trend needs to continue, and economists need to consider a broader range of modelling techniques than are currently employed to assess public health interventions. The selection and implementation of alternative modelling techniques should be facilitated by the production of better data on the behavioural outcomes generated by public health interventions. Second, economists are currently exploring a number of valuation paradigms that hold the promise of more appropriate valuation of public health interventions outcomes. These include the capabilities approach and the subjective well-being approach, both of which offer the possibility of broader measures of value than the approaches currently employed by health economists. These developments should not, however, be made by economists alone. These questions, in particular what method should be used to value public health outcomes, require social value judgements that are beyond the capacity of economists. This choice will require consultation with policy makers, and perhaps even the general public. Such collaboration would have the benefit of ensuring that the methods developed are useful for decision makers. PMID:22943762

  15. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results are included.
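
    A single-reaction toy in the same spirit: for O2 <-> 2O, combining the equilibrium relation with the elemental oxygen balance gives one nonlinear equation for the degree of dissociation. The equilibrium-constant values below are arbitrary stand-ins, not fitted air data.

      # Degree of dissociation for O2 <-> 2O from Kp = 4 a^2 p / (1 - a^2),
      # which follows from mole fractions x_O2 = (1-a)/(1+a), x_O = 2a/(1+a).
      from scipy.optimize import brentq

      def alpha_dissociated(Kp, p=1.0):
          f = lambda a: 4.0 * a * a * p / (1.0 - a * a) - Kp
          return brentq(f, 1e-12, 1.0 - 1e-12)

      for Kp in (0.01, 1.0, 100.0):                # illustrative values only
          print(Kp, alpha_dissociated(Kp))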

  16. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
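
    A sketch of the reported pipeline shape (features, then linear discriminant analysis, then a small neural network) on synthetic stand-in data; the study's actual color and morphological features are not reproduced.

      # Synthetic 17-feature, 7-class stand-in for the grain kernel pipeline.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2800, 17))          # 7 color + 10 morphological features
      y = rng.integers(0, 7, 2800)             # 7 grain classes (synthetic labels)
      X += y[:, None] * 0.5                    # give the classes some separation

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = make_pipeline(LinearDiscriminantAnalysis(n_components=6),
                          MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000))
      clf.fit(Xtr, ytr)
      print("test accuracy:", clf.score(Xte, yte))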

  17. Computation of separated flow on a ramp using the space marching conservative supra-characteristics method

    NASA Technical Reports Server (NTRS)

    Stookesberry, D. C.; Tannehill, J. C.

    1986-01-01

    Steady, hypersonic viscous flows over compression corners with streamwise separation have been computed using the space-marching Conservative Supra-Characteristics Method (CSCM-S) of Lombard. The CSCM-S method permits stable space marching of the parabolized Navier-Stokes (PNS) equations through large separated flow regions. The present method has been used to compute surface pressure, heat transfer, and skin friction coefficients for two compression corner cases studied experimentally by Holden and Moselle. The computed results compare favorably with previous Navier-Stokes results and the experimental data. The present method has also been compared with the conventional Beam-Warming scheme for solving the PNS equations; comparisons are made for accuracy, computer time, and computer storage.

  18. New Computational Methods for the Prediction and Analysis of Helicopter Noise

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper describes several new methods to predict and analyze rotorcraft noise. These methods are: 1) a combined computational fluid dynamics and Kirchhoff scheme for far-field noise predictions, 2) parallel computer implementation of the Kirchhoff integrations, 3) audio and visual rendering of the computed acoustic predictions over large far-field regions, and 4) acoustic tracebacks to the Kirchhoff surface to pinpoint the sources of the rotor noise. The paper describes each method and presents sample results for three test cases. The first case consists of in-plane high-speed impulsive noise and the other two cases show idealized parallel and oblique blade-vortex interactions. The computed results show good agreement with available experimental data but convey much more information about the far-field noise propagation. When taken together, these new analysis methods exploit the power of new computer technologies and offer the potential to significantly improve our prediction and understanding of rotorcraft noise.
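
    The Kirchhoff step, in outline, evaluates a surface integral at retarded time: every patch of a surface enclosing the nonlinear near field contributes to the observer pressure with a delay r/c. The sketch below is a deliberately simplified monopole version of that retarded-time summation (the full Kirchhoff integral also carries normal-derivative and time-derivative terms), and the patch positions and source signal are invented for illustration. Because each observer point is evaluated independently, the computation parallelizes trivially over observers, which is the property the paper's parallel implementation exploits.

```python
import numpy as np

def farfield_pressure(obs, t, patches, areas, q, c=340.0):
    """Retarded-time summation over a discretised surface.

    Simplified monopole stand-in for a Kirchhoff integration: patch i,
    at distance r_i from the observer, contributes its source strength
    evaluated at the retarded time t - r_i/c, scaled by patch area and
    1/(4*pi*r) spherical spreading.
    """
    r = np.linalg.norm(patches - obs, axis=1)
    tau = t - r / c
    return np.sum(areas * q(tau) / (4.0 * np.pi * r))

# Invented example: 200 patches in a unit box radiating a 100 Hz tone.
rng = np.random.default_rng(1)
patches = rng.uniform(-1.0, 1.0, size=(200, 3))
areas = np.full(200, 4.0 / 200)
q = lambda tau: np.sin(2.0 * np.pi * 100.0 * tau)

print(farfield_pressure(np.array([50.0, 0.0, 0.0]), t=0.2,
                        patches=patches, areas=areas, q=q))
```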

  19. Development of supersonic computational aerodynamic program using panel method

    NASA Technical Reports Server (NTRS)

    Maruyama, Y.; Akishita, S.; Nakamura, A.

    1987-01-01

    An aerodynamic program for steady supersonic linearized potential flow using a higher-order panel method was developed. The boundary surface is divided into planar triangular panels, on each of which a linearly varying doublet and a constant or linearly varying source are distributed. The source and doublet distributions over the panel assembly are determined by their strengths at nodal points, which are placed at the panel vertices for linearly varying distributions or on each panel for constant distributions.
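
    Whatever the singularity type and order, a panel method ends in the same place: an influence-coefficient matrix relating unknown nodal strengths to the boundary conditions, solved as a linear system. The sketch below illustrates only that final step, in the simplest setting that remains well posed, namely incompressible flow past a circle with point sources placed just inside the surface; the paper's supersonic, linearly varying triangular-panel formulation differs in every detail except this overall structure.

```python
import numpy as np

N = 40
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
colloc = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # surface points
normals = colloc                                           # circle: n = x/|x|
sources = 0.8 * colloc                                     # sources just inside

# A[i, j]: normal velocity induced at collocation point i by unit-strength
# 2-D point source j, whose velocity field is (x - x_j) / (2*pi*|x - x_j|^2).
dx = colloc[:, None, :] - sources[None, :, :]
r2 = np.sum(dx**2, axis=2)
vel = dx / (2.0 * np.pi * r2[:, :, None])
A = np.sum(vel * normals[:, None, :], axis=2)

# Flow tangency: the induced normal velocity must cancel the freestream's
# normal component (unit freestream along +x).
b = -normals[:, 0]
strengths = np.linalg.solve(A, b)
print("first few source strengths:", strengths[:4])
```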

  20. Theoretical studies of potential energy surfaces and computational methods.

    SciTech Connect

    Shepard, R.

    2006-01-01

    This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces (PES) involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. Most of our work focuses on general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of accurately describing molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of molecular geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the different electron-spin components of the wave functions (e.g., closed-shell, high-spin open-shell, low-spin open-shell, radical, and diradical), is treated correctly at this level. Electron correlation effects are treated further using large-scale multireference CI wave functions, specifically including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.
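
    As a loose illustration of the multiconfiguration starting point (not the authors' own large-scale MCSCF/MRSDCI machinery), the snippet below runs a small CASSCF calculation on methylene, a textbook diradical for which a single-reference description is qualitatively inadequate, using the open-source PySCF package; the geometry and active-space choice are illustrative only. A production MRSDCI treatment would then add all single and double excitations out of the CASSCF reference space.

```python
# Illustrative only: the geometry is approximate and the (6e, 6o) active
# space is a generic choice, not taken from the paper.
from pyscf import gto, scf, mcscf

mol = gto.M(
    atom="""C 0.0000  0.0000 0.0000
            H 0.0000  0.9940 0.4220
            H 0.0000 -0.9940 0.4220""",
    basis="cc-pvdz",
    spin=2,          # two unpaired electrons: triplet ground state of CH2
)
mf = scf.ROHF(mol).run()

# CASSCF(6,6): 6 active electrons in 6 active orbitals, flexible enough
# to follow the wave function through strongly distorted geometries.
mc = mcscf.CASSCF(mf, 6, 6).run()
print("CASSCF total energy (hartree):", mc.e_tot)
```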