Science.gov

Sample records for methods publications computer

  1. A computational method for drug repositioning using publicly available gene expression data

    PubMed Central

    2015-01-01

    Motivation: The identification of new therapeutic uses of existing drugs, or drug repositioning, offers the possibility of faster drug development, reduced risk, lower cost, and shorter paths to approval. The advent of high-throughput microarray technology has enabled comprehensive monitoring of the transcriptional response associated with various disease states and drug treatments. These data can be used to characterize disease and drug effects and thereby give a measure of the association between a given drug and a disease. Several computational methods have been proposed in the literature that make use of publicly available transcriptional data to reposition drugs against diseases. Method: In this work, we carry out a data mining process using publicly available gene expression data sets associated with a few diseases and drugs, to identify existing drugs that can be repositioned to treat lung cancer and breast cancer. Results: Three strong candidates for repurposing have been identified: Letrozole and GDC-0941 against lung cancer, and Ribavirin against breast cancer. Letrozole and GDC-0941 are drugs currently used in breast cancer treatment, and Ribavirin is used in the treatment of Hepatitis C. PMID:26679199
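
    The record above does not reproduce the paper's actual pipeline, but the underlying signature-reversal idea is easy to illustrate. Below is a minimal sketch assuming hypothetical log-fold-change signatures and a made-up repositioning_score helper: a drug whose expression signature anti-correlates with a disease signature is a repositioning candidate.

```python
import numpy as np
from scipy.stats import spearmanr

def repositioning_score(disease_sig, drug_sig):
    """Negative Spearman correlation between two log-fold-change
    signatures over the same gene set; a high score means the drug
    tends to reverse the disease expression profile."""
    rho, _ = spearmanr(disease_sig, drug_sig)
    return -rho

# Hypothetical signatures over 5000 genes.
rng = np.random.default_rng(0)
disease = rng.normal(size=5000)
drug = -0.4 * disease + rng.normal(scale=0.9, size=5000)
print(f"reversal score: {repositioning_score(disease, drug):.3f}")
```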

  2. Exploration of Preterm Birth Rates Using the Public Health Exposome Database and Computational Analysis Methods

    PubMed Central

    Kershenbaum, Anne D.; Langston, Michael A.; Levine, Robert S.; Saxton, Arnold M.; Oyana, Tonny J.; Kilbourne, Barbara J.; Rogers, Gary L.; Gittner, Lisaann S.; Baktash, Suzanne H.; Matthews-Juarez, Patricia; Juarez, Paul D.

    2014-01-01

    Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation of high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. Singleton early premature birth rates in counties with populations over 100,000 persons provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including the proportion of Black residents, obesity and diabetes, sexually transmitted infection rates, mother’s age, income, marriage rates, pollution and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for ways to intervene and reduce health disparities for preterm births. PMID:25464130
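
    The paraclique-to-regression pipeline described above can be sketched in miniature. The following is a rough illustration, not the authors' code: synthetic county data, a crude correlation-threshold grouping standing in for paraclique extraction, first principal components as latent factors, and ordinary least squares for the final model.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))   # 200 counties x 30 predictors (synthetic)
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=200)

# 1. Group highly correlated predictors (a crude stand-in for paracliques).
R = np.corrcoef(X, rowvar=False)
groups, unassigned = [], set(range(X.shape[1]))
while unassigned:
    seed = unassigned.pop()
    grp = [seed] + [j for j in unassigned if abs(R[seed, j]) > 0.7]
    unassigned -= set(grp)
    groups.append(grp)

# 2. Resolve each group into one latent factor (first principal component).
factors = []
for grp in groups:
    sub = X[:, grp] - X[:, grp].mean(axis=0)
    _, _, vt = np.linalg.svd(sub, full_matrices=False)
    factors.append(sub @ vt[0])

# 3. Regress the outcome on the latent factors and report R^2.
A = np.column_stack([np.ones(len(y))] + factors)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1.0 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}")
```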

  3. Publication patterns in HEP computing

    NASA Astrophysics Data System (ADS)

    Pia, M. G.; Basaglia, T.; Bell, Z. W.; Dressendorfer, P. V.

    2012-12-01

    An overview is presented of the evolution of computing-oriented publications in high energy physics following the start of LHC operation. Quantitative analyses are illustrated, documenting the production of scholarly papers on computing-related topics by high energy physics experiments and core tools projects, and the citations they receive. Several scientometric indicators are analyzed to characterize the role of computing in high energy physics literature. Distinctive features of software-oriented and hardware-oriented scholarly publications are highlighted. Current patterns and trends are compared with the situation in previous generations of experiments.

  4. BPO crude oil analysis data base user's guide: Methods, publications, computer access, correlations, uses, availability

    SciTech Connect

    Sellers, C.; Fox, B.; Paulz, J.

    1996-03-01

    The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

  5. Computer Science and Technology Publications. NBS Publications List 84.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.

    This bibliography lists publications of the Institute for Computer Sciences and Technology of the National Bureau of Standards. Publications are listed by subject in the areas of computer security, computer networking, and automation technology. Sections list publications of: (1) current Federal Information Processing Standards; (2) computer…

  6. Computers in Public Broadcasting: Who, What, Where.

    ERIC Educational Resources Information Center

    Yousuf, M. Osman

    This handbook offers guidance to public broadcasting managers on computer acquisition and development activities. Based on a 1981 survey of planned and current computer uses conducted by the Corporation for Public Broadcasting (CPB) Information Clearinghouse, computer systems in public radio and television broadcasting stations are listed by…

  7. Assessing Tax Form Distribution Costs: A Proposed Method for Computing the Dollar Value of Tax Form Distribution in a Public Library.

    ERIC Educational Resources Information Center

    Casey, James B.

    1998-01-01

    Explains how a public library can compute the actual cost of distributing tax forms to the public by listing all direct and indirect costs and demonstrating the formulae and necessary computations. Supplies directions for calculating costs involved for all levels of staff as well as associated public relations efforts, space, and utility costs.…
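
    As a rough illustration of the kind of cost roll-up the article describes, here is a minimal sketch; all cost categories, rates, and figures are hypothetical, not taken from the article.

```python
# All categories, rates, and figures below are hypothetical, not the
# article's; the point is simply the direct + indirect cost roll-up.
def tax_form_cost(staff_hours, hourly_rates, supplies,
                  space_sqft, cost_per_sqft, utility_share):
    staff = sum(h * r for h, r in zip(staff_hours, hourly_rates))
    return staff + supplies + space_sqft * cost_per_sqft + utility_share

total = tax_form_cost(staff_hours=[40, 25],      # librarian, clerk
                      hourly_rates=[22.0, 14.5],
                      supplies=310.0,
                      space_sqft=120, cost_per_sqft=1.75,
                      utility_share=85.0)
print(f"estimated distribution cost: ${total:,.2f}")
```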

  8. Computer Availability, Computer Experience and Technophobia among Public School Teachers.

    ERIC Educational Resources Information Center

    Rosen, Larry D.; Weil, Michelle M.

    1995-01-01

    Describes a study that examined technophobia in elementary and secondary public school teachers as an explanation for low levels of computer utilization. Highlights include empirical studies of technophobia; technophobia interventions; demographic differences; computer availability and use; computer anxiety; computer attitudes; and predictive…

  9. Some Uses of Computers in Rhetoric and Public Address.

    ERIC Educational Resources Information Center

    Clevenger, Theodore, Jr.

    1969-01-01

    The author discusses the impact of the "computer revolution" on the field of rhetoric and public address in terms of the potential applications of computer methods to rhetorical problems. He first discusses the computer as a very fast calculator, giving the example of a study that probably would not have been undertaken if the calculations had had…

  10. Educational Computing in the Andover Public Schools.

    ERIC Educational Resources Information Center

    Mitsakos, Charles L.

    A rationale for computers in education in the Andover (Massachusetts) public schools, a curricular scope and sequence, a computer acquisitions plan, and a staff development summary are presented. The report is a result of an 18-month study of computers in education; pilot programs in the schools; and input from specialists in business, education,…

  11. Computational Methods for Crashworthiness

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Carden, Huey D. (Compiler)

    1993-01-01

    Presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Crashworthiness held at Langley Research Center on 2-3 Sep. 1992 are included. The presentations addressed activities in the area of impact dynamics. Workshop attendees represented NASA, the Army and Air Force, the Lawrence Livermore and Sandia National Laboratories, the aircraft and automotive industries, and academia. The workshop objectives were to assess the state of technology in the numerical simulation of crash and to provide guidelines for future research.

  12. Publication Bias in Methodological Computational Research

    PubMed Central

    Boulesteix, Anne-Laure; Stierle, Veronika; Hapfelmeier, Alexander

    2015-01-01

    The problem of publication bias has long been discussed in research fields such as medicine. There is a consensus that publication bias is a reality and that solutions should be found to reduce it. In methodological computational research, including cancer informatics, publication bias may also be at work. The publication of negative research findings is certainly also a relevant issue, but has attracted very little attention to date. The present paper aims at providing a new formal framework to describe the notion of publication bias in the context of methodological computational research, facilitate and stimulate discussions on this topic, and increase awareness in the scientific community. We report an exemplary pilot study that aims at gaining experiences with the collection and analysis of information on unpublished research efforts with respect to publication bias, and we outline the encountered problems. Based on these experiences, we try to formalize the notion of publication bias. PMID:26508827

  13. Public Databases Supporting Computational Toxicology

    EPA Science Inventory

    A major goal of the emerging field of computational toxicology is the development of screening-level models that predict potential toxicity of chemicals from a combination of mechanistic in vitro assay data and chemical structure descriptors. In order to build these models, resea...

  14. Public Relations, Computers, and Election Success.

    ERIC Educational Resources Information Center

    Banach, William J.; Westley, Lawrence

    This paper describes a successful financial election campaign that used a combination of computer technology and public relations techniques. Analysis, determination of needs, development of strategy, organization, finance, communication, and evaluation are given as the steps to be taken for a successful school financial campaign. The authors…

  15. Protecting Public-Access Computers in Libraries.

    ERIC Educational Resources Information Center

    King, Monica

    1999-01-01

    Describes one public library's development of a computer-security plan, along with helpful products used. Discussion includes Internet policy, physical protection of hardware, basic protection of the operating system and software on the network, browser dilemmas and maintenance, creating clear intuitive interface, and administering fair use and…

  16. On computational methods for crashworthiness

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.
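
    As an illustration of the explicit time integration at the heart of such crash codes, here is a minimal central-difference integrator for an undamped linear system; it is a generic textbook sketch, not code from the survey, and omits contact, damping, and the subcycling mentioned above.

```python
import numpy as np

def central_difference(M, K, u0, v0, dt, steps, force):
    """Explicit central-difference integration of M u'' + K u = f(t).

    The conditionally stable explicit scheme typical of crash codes;
    contact, damping, and subcycling are omitted to keep the sketch short.
    """
    Minv = np.linalg.inv(M)
    u = u0.copy()
    # Fictitious previous step built from the initial conditions.
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * (Minv @ (force(0.0) - K @ u0))
    for n in range(steps):
        a = Minv @ (force(n * dt) - K @ u)          # acceleration at t_n
        u_prev, u = u, 2 * u - u_prev + dt**2 * a   # advance to t_{n+1}
    return u

# Two-DOF example; stability requires dt < 2 / omega_max (~1.15 here).
M = np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
u_final = central_difference(M, K, np.array([1.0, 0.0]), np.zeros(2),
                             dt=0.05, steps=200, force=lambda t: np.zeros(2))
print(u_final)
```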

  17. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Title 47—Telecommunication, Vol. 5 (2011-10-01): Stations in the Maritime Services, Standards for Computing Public Coast Station VHF Coverage, § 80.771 Method of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective...

  18. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 47—Telecommunication, Vol. 5 (2010-10-01): Stations in the Maritime Services, Standards for Computing Public Coast Station VHF Coverage, § 80.771 Method of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective...

  19. Computational Methods Development at Ames

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. Historical precedents for the above research are given, and its future course is considered.

  20. Computational Methods For Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1988-01-01

    Selected methods of computation for simulation of mechanical behavior of fiber/matrix composite materials described in report. For each method, report describes significance of behavior to be simulated, procedure for simulation, and representative results. Following applications discussed: effects of progressive degradation of interply layers on responses of composite structures, dynamic responses of notched and unnotched specimens, interlaminar fracture toughness, progressive fracture, thermal distortions of sandwich composite structure, and metal-matrix composite structures for use at high temperatures. Methods demonstrate effectiveness of computational simulation as applied to complex composite structures in general and aerospace-propulsion structural components in particular.

  1. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity, based on similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from the literature. PMID:24381236
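
    As a small illustration of the ligand-based similarity idea reviewed above, here is a sketch of the Tanimoto coefficient on set-based fingerprints; the bit positions are hypothetical and no particular fingerprinting scheme is implied.

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto coefficient between two molecular fingerprints,
    represented here simply as sets of 'on' bit positions."""
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

# Hypothetical fingerprints (no particular hashing scheme implied).
query  = {3, 17, 42, 99, 256, 740}
active = {3, 17, 42, 101, 256, 512}
print(f"similarity to known active: {tanimoto(query, active):.2f}")
```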

  2. Computational Modeling Method for Superalloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John

    1997-01-01

    Computer modeling based on theoretical quantum techniques has been largely inefficient because of limitations of the methods or the computing resources such calculations require, perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys. Together with experimental results from previous and current research that validate its use for large-scale simulations, it provides the ideal grounds for developing a computationally economical and physically sound procedure for supplementing experimental work at great cost and time savings.

  3. Closing the "Digital Divide": Building a Public Computing Center

    ERIC Educational Resources Information Center

    Krebeck, Aaron

    2010-01-01

    The public computing center offers an economical and environmentally friendly model for providing additional public computer access when and where it is needed. Though not intended to be a replacement for a full-service branch, the public computing center does offer a budget-friendly option for quickly expanding high-demand services into the…

  4. Cepstral methods in computational vision

    NASA Astrophysics Data System (ADS)

    Bandari, Esfandiar; Little, James J.

    1993-05-01

    Many computational vision routines can be regarded as recognition and retrieval of echoes in space or time. Cepstral analysis is a powerful nonlinear adaptive signal processing methodology widely used in many areas such as echo retrieval and removal, speech processing and phoneme chunking, radar and sonar processing, seismology, medicine, image deblurring and restoration, and signal recovery. The aim of this paper is: (1) to provide a brief mathematical and historical review of cepstral techniques; (2) to introduce computational and performance improvements to power and differential cepstrum for use in detection of echoes, and to provide a comparison between these methods and the traditional cepstral techniques; (3) to apply cepstrum to visual tasks such as motion analysis and trinocular vision; and (4) to draw a brief comparison between cepstrum and other matching techniques. The computational and performance improvements introduced in this paper can be applied in other areas that frequently utilize cepstrum.
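
    The power cepstrum central to the paper is straightforward to demonstrate. Below is a minimal sketch, not the authors' improved variants: a synthetic signal with one echo, whose lag appears as a peak in the cepstrum (the inverse FFT of the log power spectrum).

```python
import numpy as np

rng = np.random.default_rng(2)
n, delay = 1000, 40                   # samples; echo lag of 40 samples
x = rng.normal(size=n)
signal = x.copy()
signal[delay:] += 0.6 * x[:-delay]    # add a scaled echo

# Power cepstrum: inverse FFT of the log power spectrum.
spectrum = np.fft.fft(signal)
cepstrum = np.fft.ifft(np.log(np.abs(spectrum) ** 2)).real

# The echo shows up as a peak at its lag (quefrency); skip the
# low-quefrency region that reflects the overall spectral envelope.
peak = int(np.argmax(cepstrum[10 : n // 2])) + 10
print(f"detected echo lag: {peak} samples")
```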

  5. Computational methods for stellarator configurations

    NASA Astrophysics Data System (ADS)

    Betancourt, O.

    This project had two main objectives. The first was to continue to develop computational methods for the study of three-dimensional magnetic confinement configurations. The second was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self-consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three-dimensional nonlinear effects in Tokamak configurations, these self-consistently computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge and D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.

  6. Computer Usage by Speech-Language Pathologists in Public Schools.

    ERIC Educational Resources Information Center

    Houle, Gail Ruppert

    1988-01-01

    Investigation of factors influencing public school speech-language pathologists' acceptance and/or resistance to computer technology found differences between frequent computer users and rare users which were attributed to differences in attitudes toward computers, available funding for computers, in-service training, and physical facilities.…

  7. Beyond Theory: Improving Public Relations Writing through Computer Technology.

    ERIC Educational Resources Information Center

    Neff, Bonita Dostal

    Computer technology (primarily word processing) enables the student of public relations writing to improve the writing process through increased flexibility in writing, enhanced creativity, increased support of management skills and team work. A new instructional model for computer use in public relations courses at Purdue University Calumet…

  8. SEMINAR ON COMPUTATIONAL LINGUISTICS. PUBLIC HEALTH SERVICE PUBLICATION NUMBER 1716.

    ERIC Educational Resources Information Center

    PRATT, ARNOLD W.; AND OTHERS, Eds.

    In October 1966 a seminar was held in Bethesda, Maryland on the use of computers in language research. The organizers of the conference, the Center for Applied Linguistics and the National Institutes of Health, attempted to bring together eminent representatives of the major schools of current linguistic research. The papers presented at the…

  9. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Title 47—Telecommunication, Vol. 5 (2012-10-01): Federal Communications Commission (continued), Safety and Special Radio Services, Stations in the Maritime Services, Standards for Computing Public Coast Station VHF Coverage, § 80.771...

  10. Systems Science Methods in Public Health

    PubMed Central

    Luke, Douglas A.; Stamatakis, Katherine A.

    2012-01-01

    Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example in preparing for global pandemics. However, systems science study designs and analytic methods remain underutilized and are not widely featured in public health curricula or training. In this review we present an argument for the utility of systems science methods in public health, introduce three important systems science methods (system dynamics, network analysis, and agent-based modeling), and provide three case studies where these methods have been used to answer important public health science questions in the areas of infectious disease, tobacco control, and obesity. PMID:22224885
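
    As a minimal taste of the system dynamics method named above, here is a classic SIR compartmental model integrated as ordinary differential equations; the parameters are illustrative, not drawn from the review's case studies.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """SIR compartmental model: the simplest system-dynamics view of an
    outbreak (fractions of susceptible, infectious, recovered)."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir, (0.0, 160.0), [0.999, 0.001, 0.0],
                args=(0.3, 0.1), dense_output=True)
t = np.linspace(0.0, 160.0, 5)
print(np.round(sol.sol(t)[1], 4))   # infectious fraction over time
```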

  11. Computational methods for stealth design

    SciTech Connect

    Cable, V. P.

    1992-08-01

    A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.

  12. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-01

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods. PMID:26549429
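
    A tiny sketch of the simulated-validation-data idea discussed above: when ground truth is simulated, competing methods can be scored with a common metric. The two "methods" and all data below are stand-ins, not tools from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
truth = rng.gamma(2.0, 50.0, size=1000)               # simulated ground truth
observed = truth + rng.normal(scale=15.0, size=1000)  # noisy measurements

def method_a(x):   # stand-in for one analysis pipeline
    return x

def method_b(x):   # stand-in for a variant with extra smoothing
    return np.convolve(x, np.ones(5) / 5.0, mode="same")

for name, m in [("A", method_a), ("B", method_b)]:
    rmse = np.sqrt(np.mean((m(observed) - truth) ** 2))
    print(f"method {name}: RMSE = {rmse:.2f}")
```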

  13. Optimization Methods for Computer Animation.

    ERIC Educational Resources Information Center

    Donkin, John Caldwell

    Emphasizing the importance of economy and efficiency in the production of computer animation, this master's thesis outlines methodologies that can be used to develop animated sequences with the highest quality images for the least expenditure. It is assumed that if computer animators are to be able to fully exploit the available resources, they…

  14. Public Relations Writing Methods by Objectives.

    ERIC Educational Resources Information Center

    Pearson, Ron

    1987-01-01

    Analyzes public relations writing as an activity which is uniquely related to organizational goals and objectives. Provides a basis for theory by linking rhetorical theory and communication management and describes a method for framing public relations writing objectives. Notes that objective statements are useful because, among other things, they…

  15. Computers and Public Policy. Proceedings of the Symposium Man and the Computer.

    ERIC Educational Resources Information Center

    Oden, Teresa, Ed.; Thompson, Christine, Ed.

    Experts from the fields of law, business, government, and research were invited to a symposium sponsored by Dartmouth College to examine public policies which are challenged by the advent of computer technology. Eleven papers were delivered addressing such critical social issues related to computing and public policies as the man-computer…

  16. Computer-Assisted Management of Instruction in Veterinary Public Health

    ERIC Educational Resources Information Center

    Holt, Elsbeth; And Others

    1975-01-01

    Reviews a course in Food Hygiene and Public Health at the University of Illinois College of Veterinary Medicine in which students are sequenced through a series of computer-based lessons or autotutorial slide-tape lessons, the computer also being used to route, test, and keep records. Since grades indicated mastery of the subject, the course will…

  17. How You Can Protect Public Access Computers "and" Their Users

    ERIC Educational Resources Information Center

    Huang, Phil

    2007-01-01

    By providing the public with online computing facilities, librarians make available a world of information resources beyond their traditional print materials. Internet-connected computers in libraries greatly enhance the opportunity for patrons to enjoy the benefits of the digital age. Unfortunately, as hackers become more sophisticated and…

  18. Computers, Schools and Families: A Radical Vision for Public Education.

    ERIC Educational Resources Information Center

    Debenham, Jerry; Smith, Gerald R.

    1994-01-01

    Discusses computer-aided instruction from home computers as an alternative or an adjunct to public school education. Topics addressed include the partnership between parents and schools; new roles for parents and schools; financial considerations; educational software; a program to help teachers develop their own software; and field testing in…

  19. Computational methods for probability of instability calculations

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Burnside, O. H.

    1990-01-01

    This paper summarizes the development of methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria, based upon the roots of the characteristic equation or Routh-Hurwitz test functions, are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.
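
    A minimal sketch of the probabilistic idea, though not the authors' importance-sampling program: for a second-order system m x'' + c x' + k x = 0, the Routh-Hurwitz condition reduces to all coefficients being positive, so plain Monte Carlo sampling of uncertain coefficients estimates the probability of instability. All distributions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

# Hypothetical uncertain coefficients of m x'' + c x' + k x = 0.
m = rng.normal(1.0, 0.05, N)    # mass (kept positive in practice)
c = rng.normal(0.02, 0.03, N)   # damping -- may go negative
k = rng.normal(4.0, 0.2, N)     # stiffness

# Routh-Hurwitz for the quadratic m s^2 + c s + k: with m > 0,
# the system is stable iff c > 0 and k > 0.
p = ((c <= 0) | (k <= 0)).mean()
se = np.sqrt(p * (1 - p) / N)
print(f"P(instability) ~ {p:.4f} +/- {1.96 * se:.4f}")
```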

  20. Wildlife software: procedures for publication of computer software

    USGS Publications Warehouse

    Samuel, M.D.

    1990-01-01

    Computers and computer software have become an integral part of the practice of wildlife science. Computers now play an important role in teaching, research, and management applications. Because of the specialized nature of wildlife problems, specific computer software is usually required to address a given problem (e.g., home range analysis). This type of software is not usually available from commercial vendors and therefore must be developed by those wildlife professionals with particular skill in computer programming. Current journal publication practices generally prevent a detailed description of computer software associated with new techniques. In addition, peer review of journal articles does not usually include a review of associated computer software. Thus, many wildlife professionals are usually unaware of computer software that would meet their needs or of major improvements in software they commonly use. Indeed most users of wildlife software learn of new programs or important changes only by word of mouth.

  1. A Computer-Assisted Instruction in Teaching Abstract Statistics to Public Affairs Undergraduates

    ERIC Educational Resources Information Center

    Ozturk, Ali Osman

    2012-01-01

    This article attempts to demonstrate the applicability of computer-assisted instruction, supported with simulated data, in teaching abstract statistical concepts to political science and public affairs students in an introductory research methods course. The software is called the Elaboration Model Computer Exercise (EMCE) in that it takes a great…

  2. Computational methods for unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Thomas, James L.

    1987-01-01

    Computational methods for unsteady transonic flows are surveyed with emphasis upon applications to aeroelastic analysis and flutter prediction. Computational difficulty is discussed with respect to type of unsteady flow; attached, mixed (attached/separated) and separated. Significant early computations of shock motions, aileron buzz and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

  3. Computational methods for unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Thomas, J. L.

    1987-01-01

    Computational methods for unsteady transonic flows are surveyed with emphasis on prediction. Computational difficulty is discussed with respect to type of unsteady flow; attached, mixed (attached/separated) and separated. Significant early computations of shock motions, aileron buzz and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed, and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

  4. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  5. Computational Methods in Nanostructure Design

    NASA Astrophysics Data System (ADS)

    Bellesia, Giovanni; Lampoudi, Sotiria; Shea, Joan-Emma

    Self-assembling peptides can serve as building blocks for novel biomaterials. Replica exchange molecular dynamics simulations are a powerful means to probe the conformational space of these peptides. We discuss the theoretical foundations of this enhanced sampling method and its use in biomolecular simulations. We then apply this method to determine the monomeric conformations of the Alzheimer amyloid-β(12-28) peptide that can serve as initiation sites for aggregation.

  6. Combinatorial protein design strategies using computational methods.

    PubMed

    Kono, Hidetoshi; Wang, Wei; Saven, Jeffery G

    2007-01-01

    Computational methods continue to facilitate efforts in protein design. Most of this work has focused on searching sequence space to identify one or a few sequences compatible with a given structure and functionality. Probabilistic computational methods provide information regarding the range of amino acid variability permitted by desired functional and structural constraints. Such methods may be used to guide the construction of both individual sequences and combinatorial libraries of proteins. PMID:17041256

  7. Computational Methods to Model Persistence.

    PubMed

    Vandervelde, Alexandra; Loris, Remy; Danckaert, Jan; Gelens, Lendert

    2016-01-01

    Bacterial persister cells are dormant cells, tolerant to multiple antibiotics, that are involved in several chronic infections. Toxin-antitoxin modules play a significant role in the generation of such persister cells. Toxin-antitoxin modules are small genetic elements, omnipresent in the genomes of bacteria, which code for an intracellular toxin and its neutralizing antitoxin. In the past decade, mathematical modeling has become an important tool to study the regulation of toxin-antitoxin modules and their relation to the emergence of persister cells. Here, we provide an overview of several numerical methods to simulate toxin-antitoxin modules. We cover both deterministic modeling using ordinary differential equations and stochastic modeling using stochastic differential equations and the Gillespie method. Several characteristics of toxin-antitoxin modules such as protein production and degradation, negative autoregulation through DNA binding, toxin-antitoxin complex formation and conditional cooperativity are gradually integrated in these models. Finally, by including growth rate modulation, we link toxin-antitoxin module expression to the generation of persister cells. PMID:26468111
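
    As an illustration of the Gillespie method mentioned above, here is a minimal stochastic simulation of a toxin-antitoxin pair; the reactions and rate constants are illustrative only and omit the autoregulation and conditional cooperativity covered in the chapter.

```python
import numpy as np

rng = np.random.default_rng(5)

def gillespie(t_end):
    """Minimal Gillespie simulation of a toxin-antitoxin pair.

    Reactions (rates are illustrative, not from the chapter):
      0 -> T (k1), 0 -> A (k2), T -> 0 (d1), A -> 0 (d2), T + A -> 0 (kb)
    """
    k1, k2, d1, d2, kb = 0.5, 1.0, 0.005, 0.1, 0.01
    t, T, A = 0.0, 0, 0
    while t < t_end:
        rates = np.array([k1, k2, d1 * T, d2 * A, kb * T * A])
        total = rates.sum()
        t += rng.exponential(1.0 / total)      # time to next reaction
        r = rng.choice(5, p=rates / total)     # which reaction fires
        if   r == 0: T += 1
        elif r == 1: A += 1
        elif r == 2: T -= 1
        elif r == 3: A -= 1
        else:        T -= 1; A -= 1            # complex formation
    return T, A

print(gillespie(500.0))   # (toxin, antitoxin) counts at t_end
```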

  8. 77 FR 4568 - Annual Computational Science Symposium; Public Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-30

    ... The Food and Drug Administration (FDA), in cosponsorship with the Pharmaceutical Users Software Exchange (PhUSE), is announcing a public conference entitled "The FDA/PhUSE Annual Computational Science Symposium." The purpose...-5300. Contact: Chris Decker, U.S. Regional Director, Pharmaceutical Users Software Exchange (PhUSE),...

  9. Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

    ERIC Educational Resources Information Center

    Jayakar, Krishna; Park, Eun-A

    2012-01-01

    The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

  10. Computer-Based Test Interpretation and the Public Interest.

    ERIC Educational Resources Information Center

    Mitchell, James V., Jr.

    Computer-based test interpretation (CBTI) is discussed in terms of its potential dangers to the public interest, problems with professional review of CBTI systems, and needed policies for these systems. Several problems with CBTI systems are outlined: (1) they may be nicely packaged, but it is difficult to establish their value; (2) they do not…

  11. Computational Methods for Ideal Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Kercher, Andrew D.

    Numerical schemes for ideal magnetohydrodynamics (MHD) are widely used for modeling space weather and astrophysical flows. They are designed to resolve the different waves that propagate through a magnetohydrodynamic fluid, namely, the fast, Alfven, slow, and entropy waves. Numerical schemes for ideal MHD that are based on the standard finite volume (FV) discretization exhibit pseudo-convergence, in which non-regular waves disappear only after heavy grid refinement. A method is described for obtaining solutions for coplanar and near-coplanar cases that consist of only regular waves, independent of grid refinement. The method, referred to as Compound Wave Modification (CWM), involves removing the flux associated with non-regular structures and can be used for simulations in two and three dimensions because it does not require explicitly tracking an Alfven wave. For a near-coplanar case, and for grids with 2^13 points or fewer, we find root-mean-square errors (RMSEs) that are as much as 6 times smaller. For the coplanar case, in which non-regular structures will exist at all levels of grid refinement for standard FV schemes, the RMSE is as much as 25 times smaller. A multidimensional ideal MHD code has been implemented for simulations on graphics processing units (GPUs). Performance measurements were conducted for both the NVIDIA GeForce GTX Titan and the Intel Xeon E5645 processor. The GPU is shown to perform one to two orders of magnitude faster than the CPU when using a single core, and two to three times faster than the CPU run in parallel with OpenMP. Performance comparisons are made for two methods of storing data on the GPU. The first approach stores data as an Array of Structures (AoS), e.g., a point coordinate array of size 3 x n is iterated over. The second approach stores data as a Structure of Arrays (SoA), e.g., three separate arrays of size n are iterated over simultaneously. For an AoS, coalescing does not occur, reducing memory efficiency.
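
    The AoS-versus-SoA distinction can be felt even on a CPU, where strided field access defeats contiguous, cache-friendly traversal much as uncoalesced access hurts a GPU. A small NumPy sketch (not the dissertation's CUDA code):

```python
import numpy as np
import timeit

n = 2_000_000
aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8")])  # AoS layout
x, y, z = (np.zeros(n) for _ in range(3))                          # SoA layout

# Each field of the structured array is a strided view; the separate
# arrays are contiguous, so the same arithmetic runs measurably faster.
t_aos = timeit.timeit(lambda: aos["x"] + aos["y"] + aos["z"], number=20)
t_soa = timeit.timeit(lambda: x + y + z, number=20)
print(f"AoS: {t_aos:.3f} s   SoA: {t_soa:.3f} s")
```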

  12. Distributed Data Mining using a Public Resource Computing Framework

    NASA Astrophysics Data System (ADS)

    Cesario, Eugenio; de Caria, Nicola; Mastroianni, Carlo; Talia, Domenico

    The public resource computing paradigm is often used as a successful and low-cost mechanism for the management of several classes of scientific and commercial applications that require the execution of a large number of independent tasks. Public computing frameworks, also known as “Desktop Grids”, exploit the computational power and storage facilities of private computers, or “workers”. Despite the inherently decentralized nature of the applications to which they are devoted, these systems often adopt a centralized mechanism for the assignment of jobs and distribution of input data, as is the case for BOINC, the most popular framework in this realm. We present a decentralized framework that aims at increasing the flexibility and robustness of public computing applications, thanks to two basic features: (i) the adoption of a P2P protocol for dynamically matching the job specifications with the worker characteristics, without relying on centralized resources; (ii) the use of distributed cache servers for an efficient dissemination and reutilization of data files. This framework is exploitable for a wide set of applications. In this work, we describe how a Java prototype of the framework was used to tackle the problem of mining frequent itemsets from a transactional dataset, and show some preliminary yet interesting performance results that prove the efficiency improvements that can derive from the presented architecture.

  13. Transonic wing analysis using advanced computational methods

    NASA Technical Reports Server (NTRS)

    Henne, P. A.; Hicks, R. M.

    1978-01-01

    This paper discusses the application of three-dimensional computational transonic flow methods to several different types of transport wing designs. The purpose of these applications is to evaluate the basic accuracy and limitations associated with such numerical methods. The use of such computational methods for practical engineering problems can only be justified after favorable evaluations are completed. The paper summarizes a study of both the small-disturbance and the full potential technique for computing three-dimensional transonic flows. Computed three-dimensional results are compared to both experimental measurements and theoretical results. Comparisons are made not only of pressure distributions but also of lift and drag forces. Transonic drag rise characteristics are compared. Three-dimensional pressure distributions and aerodynamic forces, computed from the full potential solution, compare reasonably well with experimental results for a wide range of configurations and flow conditions.

  14. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral
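
    The transform (pseudospectral) method credited above to Orszag and to Eliasen, Machenauer, and Rasmussen is easy to sketch: derivatives are taken in spectral space, products in physical space, with FFTs in between. A minimal periodic example, assuming NumPy's FFT conventions:

```python
import numpy as np

# Pseudospectral evaluation of the nonlinear term u * du/dx for a
# periodic u on [0, 2*pi): differentiate in spectral space, multiply
# in physical space -- O(n log n) instead of an O(n^2) convolution.
n = 64
x = 2 * np.pi * np.arange(n) / n
u = np.sin(x) + 0.5 * np.cos(3 * x)

ik = 1j * np.fft.fftfreq(n, d=1.0 / n)       # i * integer wavenumbers
dudx = np.fft.ifft(ik * np.fft.fft(u)).real
nonlinear = u * dudx

exact = u * (np.cos(x) - 1.5 * np.sin(3 * x))
print(f"max error: {np.max(np.abs(nonlinear - exact)):.2e}")  # ~1e-15
```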

  15. Computational Chemistry Using Modern Electronic Structure Methods

    ERIC Educational Resources Information Center

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now be easily applied even to large molecules.

  16. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  17. [Computer-assisted control of a public health inventory].

    PubMed

    Beckmann, W; Dörr, M; Höhmann, J

    1989-04-01

    To cope with its tasks more efficiently, the Public Health Office of the "Märkische Kreis" in 1985 installed an information system based on electronic data processing, the so-called "hygiene inventory". The introduction of this system into the local Public Health Office is described first. The structure and organization of the programme and its performance are then discussed, using the control of drinking water supply plants as an example. The disadvantages of computer use are by no means overlooked. These include the need to enter a considerable amount of data initially and to store new results continually, initial acceptance problems, and the limited autonomy of the system. The most important advantages of computer-aided processing are optimal evaluation possibilities, centralised scheduling, automated production of letters, efficient drafting of the annual health report, and the possibility of exchanging data media. PMID:2525684

  18. Computational Methods for Rough Classification and Discovery.

    ERIC Educational Resources Information Center

    Bell, D. A.; Guan, J. W.

    1998-01-01

    Rough set theory is a new mathematical tool to deal with vagueness and uncertainty. Computational methods are presented for using rough sets to identify classes in datasets, finding dependencies in relations, and discovering rules which are hidden in databases. The methods are illustrated with a running example from a database of car test results.…

  19. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Title 37 (2014-07-01): Procedures, General Provisions, § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures...

  20. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 37 (2010-07-01): General Provisions, § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  1. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 37 (2013-07-01): General Provisions, § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  2. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 37 (2012-07-01): General Provisions, § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  3. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 37 (2011-07-01): General Provisions, § 201.26 Recordation of documents pertaining to computer shareware and donation of public domain computer software. (a) General. This section prescribes the procedures for submission...

  4. Updated Panel-Method Computer Program

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1995-01-01

    Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow, a time-stepping wake model for simulating either steady or unsteady motions, Trefftz-plane computation of induced drag, computation of off-body and on-body streamlines, and computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.

  5. Computing discharge using the index velocity method

    USGS Publications Warehouse

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
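
    A minimal sketch of the core computation described above (the report's full rating procedure involves much more, and all calibration numbers below are hypothetical): fit the index velocity rating and the stage-area rating, then multiply their outputs to get discharge.

```python
import numpy as np

# Hypothetical calibration data (not from the report).
v_index = np.array([0.21, 0.45, 0.62, 0.88, 1.10])  # ADVM index velocity, m/s
v_mean  = np.array([0.18, 0.41, 0.57, 0.80, 1.02])  # measured mean velocity, m/s
stage   = np.array([1.2, 1.8, 2.1, 2.6, 3.0])       # stage, m
area    = np.array([14.5, 22.8, 27.1, 34.6, 40.9])  # surveyed area, m^2

b, a = np.polyfit(v_index, v_mean, 1)   # index velocity rating: V = a + b*vi
area_fit = np.polyfit(stage, area, 2)   # stage-area rating (quadratic here)

def discharge(vi, h):
    """Q = V * A from the two fitted ratings."""
    return (a + b * vi) * np.polyval(area_fit, h)

print(f"Q = {discharge(0.70, 2.3):.1f} m^3/s")
```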

  6. Computational stoning method for surface defect detection

    NASA Astrophysics Data System (ADS)

    Ma, Ninshu; Zhu, Xinhai

    2013-12-01

    Surface defects on outer panels of automotive bodies must be controlled in order to improve surface quality. The detection and quantitative evaluation of surface defects are quite difficult because the deflection of a surface defect is very small. One method used in factories to detect surface defects is the stoning method, in which a stone block is moved over the surface of a stamped panel. The authors developed a computational stoning method to detect surface-low defects, based on a geometric contact algorithm between a stone block and a stamped panel. If the surface is convex, the stone block always contacts the convex surface of the stamped panel and the contact gap between them is zero. If there is a surface low, the stone block does not contact the surface, and the contact gap can be computed with the contact algorithm. Convex surface defects can also be detected by applying the computational stoning method to the back surface of a stamped panel. By performing two-way stoning computations from both the normal surface and the back surface, not only the depth of surface-low defects but also the height of convex surface defects can be detected. Surface-low and convex defects can also be detected in multiple directions. Surface defects on the handle emboss of outer panels were accurately detected using the computational stoning method and compared with the real shape, with very good accuracy.
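
    A rough one-dimensional sketch of the idea: the authors' method uses a rigid-body contact algorithm, but a morphological closing with a stone-length window approximates the contact surface of a stone that stays horizontal, so the gap under it reveals a surface low. The profile below is hypothetical.

```python
import numpy as np
from scipy.ndimage import grey_closing

# Hypothetical stamped-panel profile: a gentle dome with a shallow
# ~20-micron surface low near x = 50 mm.
x = np.linspace(0.0, 100.0, 1001)                  # mm
z = 0.05 * np.sin(np.pi * x / 100.0)               # base shape, mm
z -= 0.020 * np.exp(-((x - 50.0) / 4.0) ** 2)      # surface low, mm

# A rigid stone cannot follow narrow valleys: closing the profile
# with a stone-length window approximates the contact surface.
stone_len_mm = 20.0
size = int(round(stone_len_mm / (x[1] - x[0])))
contact = grey_closing(z, size=size)

gap = contact - z                                  # contact gap, mm
i = int(gap.argmax())
print(f"max surface-low depth: {gap[i] * 1000:.1f} microns at x = {x[i]:.1f} mm")
```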

  7. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
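
    A minimal sketch of fixed-time benchmarking in the spirit of the patent (the actual claimed system is more elaborate): run an ever-finer scalable task until a fixed interval expires and report the level of resolution reached. The task here is an arbitrary stand-in.

```python
import time

def scalable_task(level):
    """One unit of work whose cost grows with the resolution level
    (an arbitrary stand-in for the patent's stored task set)."""
    s = 0.0
    for k in range(1, 10_000 * level):
        s += 1.0 / (k * k)
    return s

def benchmark(interval_s=1.0):
    """Run ever-finer tasks until the fixed interval expires; the
    level reached is the benchmark rating."""
    deadline = time.perf_counter() + interval_s
    level = 0
    while time.perf_counter() < deadline:
        level += 1
        scalable_task(level)
    return level

print(f"benchmark rating: level {benchmark()}")
```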

  8. Distributed sequence alignment applications for the public computing architecture.

    PubMed

    Pellicer, S; Chen, G; Chan, K C C; Pan, Y

    2008-03-01

    The public computer architecture shows promise as a platform for solving fundamental problems in bioinformatics such as global gene sequence alignment and data mining with tools such as the basic local alignment search tool (BLAST). Our implementation of these two problems on the Berkeley open infrastructure for network computing (BOINC) platform demonstrates a runtime reduction factor of 1.15 for sequence alignment and 16.76 for BLAST. While the runtime reduction factor of the global gene sequence alignment application is modest, this value is based on a theoretical sequential runtime extrapolated from the calculation of a smaller problem. Because this runtime is extrapolated from running the calculation in memory, the theoretical sequential runtime would require 37.3 GB of memory on a single system. With this in mind, the BOINC implementation not only offers the reduced runtime, but also the aggregation of the available memory of all participant nodes. If an actual sequential run of the problem were compared, a more drastic reduction in the runtime would be seen due to an additional secondary storage I/O overhead for a practical system. Despite the limitations of the public computer architecture, most notably in communication overhead, it represents a practical platform for grid- and cluster-scale bioinformatics computations today and shows great potential for future implementations. PMID:18334454

  9. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  10. Semiempirical methods for computing turbulent flows

    NASA Technical Reports Server (NTRS)

    Belov, I. A.; Ginzburg, I. P.

    1986-01-01

    Two semiempirical theories which provide a basis for determining turbulent friction and heat exchange near a wall are presented: (1) the Prandtl-Karman theory, and (2) the theory utilizing an equation for the energy of turbulent pulsations. A comparison is made between exact numerical methods and approximate integral methods for computing turbulent boundary layers in the presence of pressure gradients, blowing, or suction. Using the turbulent flow around a plate as an example, it is shown that, when computing turbulent flows with external turbulence, it is preferable to construct a turbulence model based on the equation for the energy of turbulent pulsations.

  11. Computational Methods for Failure Analysis and Life Prediction

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Harris, Charles E. (Compiler); Housner, Jerrold M. (Compiler); Hopkins, Dale A. (Compiler)

    1993-01-01

    This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

  12. An Efficient Method for Computing All Reducts

    NASA Astrophysics Data System (ADS)

    Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro

    In data mining of decision tables using rough set methodology, the main computational effort is associated with determining the reducts. Computing all reducts is a combinatorial NP-hard problem, so faster execution can only come from algorithms with better constant factors that solve the problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms for computing reducts in information systems. The proposed algorithms are based on properties of reducts and on the relation between reducts and the discernibility matrix. Experiments measuring execution time have been conducted on several real-world domains; the results show improved execution time compared with other methods. In real applications, the two proposed algorithms can be combined.
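
    The reduct/discernibility-matrix relation that the proposed algorithms exploit can be shown with a small brute-force sketch (reducts as minimal hitting sets of the matrix entries); this enumeration is exponential and purely illustrative, whereas the paper's algorithms aim at practical execution times.

        from itertools import combinations

        def discernibility(rows, decision):
            """Attribute sets distinguishing each pair of objects with different decisions."""
            entries = []
            for i in range(len(rows)):
                for j in range(i + 1, len(rows)):
                    if decision[i] != decision[j]:
                        diff = {a for a in range(len(rows[i]))
                                if rows[i][a] != rows[j][a]}
                        if diff:
                            entries.append(diff)
            return entries

        def all_reducts(rows, decision):
            """All minimal attribute subsets hitting every discernibility entry."""
            entries = discernibility(rows, decision)
            n_attr = len(rows[0])
            reducts = []
            for k in range(1, n_attr + 1):
                for subset in combinations(range(n_attr), k):
                    s = set(subset)
                    if any(set(r) <= s for r in reducts):
                        continue                  # a subset is already a reduct
                    if all(s & e for e in entries):
                        reducts.append(subset)
            return reducts

        rows = [(1, 0, 0), (1, 0, 1), (0, 1, 0), (1, 1, 1)]
        decision = [0, 1, 0, 1]
        print(all_reducts(rows, decision))        # [(2,)]: attribute 2 suffices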

  13. Soft computing methods for geoidal height transformation

    NASA Astrophysics Data System (ADS)

    Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.

    2009-07-01

    Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.

  14. Applying Human Computation Methods to Information Science

    ERIC Educational Resources Information Center

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  15. Efficient Methods to Compute Genomic Predictions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

  16. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  17. Computational methods for inlet airframe integration

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.

    1988-01-01

    Fundamental equations encountered in computational fluid dynamics (CFD), and analyses used for internal flow are introduced. Irrotational flow; Euler equations; boundary layers; parabolized Navier-Stokes equations; and time averaged Navier-Stokes equations are treated. Assumptions made and solution methods are outlined, with examples. The overall status of CFD in propulsion is indicated.

  18. Experience of public procurement of Open Compute servers

    NASA Astrophysics Data System (ADS)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

    The Open Compute Project, OCP (http://www.opencompute.org/), was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal of developing servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large-scale installation. One objective is to evaluate whether the OCP market is sufficiently mature and broad to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  19. Referees Often Miss Obvious Errors in Computer and Electronic Publications

    NASA Astrophysics Data System (ADS)

    de Gloucester, Paul Colin

    2013-05-01

    Misconduct is extensive and damaging. So-called science is prevalent. Articles resulting from so-called science are often cited in other publications, which can have damaging consequences for society and for science. The present work includes a scientometric study of 350 articles (published by the Association for Computing Machinery; Elsevier; The Institute of Electrical and Electronics Engineers, Inc.; John Wiley; Springer; Taylor & Francis; and World Scientific Publishing Co.). A lower bound of 85.4% of the articles are found to be incongruous. Authors cite inherently self-contradictory articles more than valid articles. Incorrect informational cascades ruin the literature's signal-to-noise ratio, even for uncomplicated cases.

  20. 47 CFR 61.14 - Method of filing publications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Method of filing publications. 61.14 Section 61...) TARIFFS Rules for Electronic Filing § 61.14 Method of filing publications. (a) Publications filed... date of a publication received by the Electronic Tariff Filing System will be determined by the...

  1. 47 CFR 61.14 - Method of filing publications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Method of filing publications. 61.14 Section 61...) TARIFFS Rules for Electronic Filing § 61.14 Method of filing publications. (a) Publications filed... date of a publication received by the Electronic Tariff Filing System will be determined by the...

  2. Shifted power method for computing tensor eigenvalues.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-07-01

    Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
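
    A minimal sketch of the shifted iteration for an order-3 symmetric tensor, assuming the update x <- normalize(A x x + alpha x) with a fixed positive shift; the paper's contribution includes the convergence proof and the fixed-point characterization, neither of which is reproduced here.

        import numpy as np

        def ss_hopm(A, alpha=2.0, iters=500, seed=0):
            """Shifted symmetric higher-order power method, order m = 3.

            Iterates x <- normalize(A x x + alpha * x); for a large enough
            positive shift the iterates converge to a tensor eigenpair
            (lambda, x) with A x x = lambda x and ||x|| = 1.
            """
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(A.shape[0])
            x /= np.linalg.norm(x)
            for _ in range(iters):
                y = np.einsum('ijk,j,k->i', A, x, x) + alpha * x
                x = y / np.linalg.norm(y)
            lam = np.einsum('ijk,i,j,k->', A, x, x, x)   # Rayleigh-like value
            return lam, x

        # Symmetrize a random order-3 tensor, then find an eigenpair.
        rng = np.random.default_rng(1)
        T = rng.standard_normal((4, 4, 4))
        A = sum(T.transpose(p) for p in
                [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6
        lam, x = ss_hopm(A)
        print(lam, np.linalg.norm(np.einsum('ijk,j,k->i', A, x, x) - lam * x))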

  3. Shifted power method for computing tensor eigenpairs.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

    Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.

  4. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

  5. Computational methods for industrial radiation measurement applications

    SciTech Connect

    Gardner, R.P.; Guo, P.; Ao, Q.

    1996-12-31

    Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments.

  6. Teacher Perspectives on the Current State of Computer Technology Integration into the Public School Classroom

    ERIC Educational Resources Information Center

    Zuniga, Ramiro

    2009-01-01

    Since the introduction of computers into the public school arena over forty years ago, educators have been convinced that the integration of computer technology into the public school classroom will transform education. Joining educators are state and federal governments. Public schools and others involved in the process of computer technology…

  7. Computational methods for ideal compressible flow

    NASA Technical Reports Server (NTRS)

    Vanleer, B.

    1983-01-01

    Conservative dissipative difference schemes for computing one-dimensional flow are introduced, and the recognition and representation of flow discontinuities are discussed. Multidimensional methods are outlined. Second-order finite volume schemes are introduced. Conversion of difference schemes for a single linear convection equation into schemes for the hyperbolic system of the nonlinear conservation laws of ideal compressible flow is explained. Approximate Riemann solvers are presented. Monotone initial-value interpolation, limiters, switches, and artificial dissipation are also considered.

  8. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  9. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced, based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
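
    The flavor of the state-space approach can be sketched with the standard steady-state covariance computation via a Lyapunov equation, which likewise avoids frequency-domain integration; note this is a generic illustration, not the specific jitter formulation of the method described above.

        import numpy as np
        from scipy.linalg import solve_lyapunov

        def rms_output(A, B, c, Q):
            """RMS of y = c x for a stable system x' = A x + B w.

            w is white noise with intensity Q; the steady-state covariance P
            solves the Lyapunov equation A P + P A^T + B Q B^T = 0, giving an
            exact analytic answer with no numerical frequency integration.
            """
            P = solve_lyapunov(A, -B @ Q @ B.T)
            return float(np.sqrt((c @ P @ c.T).item()))

        # Lightly damped pointing mode driven by white-noise disturbance.
        wn, zeta = 2.0 * np.pi, 0.02
        A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
        B = np.array([[0.0], [1.0]])
        c = np.array([[1.0, 0.0]])
        print(rms_output(A, B, c, Q=np.array([[1.0]])))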

  10. Teaching Practical Public Health Evaluation Methods

    ERIC Educational Resources Information Center

    Davis, Mary V.

    2006-01-01

    Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

  11. Probabilistic Computational Methods in Structural Failure Analysis

    NASA Astrophysics Data System (ADS)

    Krejsa, Martin; Kralik, Juraj

    2015-12-01

    Probabilistic methods are used in engineering where a computational model contains random variables. Each random variable in the probabilistic calculations carries uncertainty. Typical sources of uncertainty are the properties of the material and production or assembly inaccuracies in the geometry or in the environment where the structure is located. The paper focuses on methods for calculating failure probabilities in structural failure and reliability analysis, with special attention to a newly developed probabilistic method, Direct Optimized Probabilistic Calculation (DOProC), which is highly efficient in terms of calculation time and solution accuracy. The novelty of the proposed method lies in an optimized numerical integration that does not require any simulation technique. The algorithm has been implemented in software applications and has been used several times in probabilistic tasks and probabilistic reliability assessments.

  12. Numerical methods for problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Mead, Jodi Lorraine

    1998-12-01

    A goal of computational aeroacoustics is the accurate calculation of noise from a jet in the far field. This work concerns the numerical aspects of accurately calculating acoustic waves over large distances and long times. More specifically, the stability, efficiency, accuracy, dispersion, and dissipation in spatial discretizations, time stepping schemes, and absorbing boundaries for the direct solution of wave propagation problems are determined. Efficient finite difference methods developed by Tam and Webb, which minimize dispersion and dissipation, are commonly used for the spatial and temporal discretization. Alternatively, high-order pseudospectral methods can be made more efficient by using the grid transformation introduced by Kosloff and Tal-Ezer. Work in this dissertation confirms that the grid transformation introduced by Kosloff and Tal-Ezer is not spectrally accurate because, in the limit, the grid transformation forces zero derivatives at the boundaries. If a small number of grid points are used, it is shown that approximations with the Chebyshev pseudospectral method with the Kosloff and Tal-Ezer grid transformation are as accurate as with the Chebyshev pseudospectral method. This result is based on the analysis of the phase and amplitude errors of these methods, and their use for the solution of a benchmark problem in computational aeroacoustics. For the grid-transformed Chebyshev method with a small number of grid points it is, however, more appropriate to compare its accuracy with that of high-order finite difference methods. This comparison, for an accuracy of 10^-3 on a benchmark problem in computational aeroacoustics, is performed for the grid-transformed Chebyshev method and the fourth-order finite difference method of Tam. Solutions with the finite difference method are as accurate as, and the finite difference method is more efficient than, the Chebyshev pseudospectral method with the grid transformation. The efficiency of the Chebyshev
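
    The Kosloff and Tal-Ezer grid transformation discussed above is a one-line map of the Chebyshev points; the sketch below (the standard form of the transformation, with an illustrative stretching parameter) shows how it widens the near-boundary spacing.

        import numpy as np

        def kte_grid(n, alpha=0.99):
            """Chebyshev points under the Kosloff/Tal-Ezer map.

            x = arcsin(alpha * xi) / arcsin(alpha) stretches the boundary-
            clustered Chebyshev points xi = cos(pi k / n) toward uniform
            spacing, easing the time-step restriction; as alpha -> 1 the
            formal spectral accuracy is lost, as noted in the abstract.
            """
            xi = np.cos(np.pi * np.arange(n + 1) / n)
            return np.arcsin(alpha * xi) / np.arcsin(alpha)

        cheb = np.cos(np.pi * np.arange(17) / 16)
        print(abs(np.diff(cheb))[:2])          # tight spacing near the boundary
        print(abs(np.diff(kte_grid(16)))[:2])  # widened by the transformation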

  13. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report aimed to structure the design of the architecture and to study the performance of a parallel computing environment for Monte Carlo simulation in particle therapy planning, using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed a speed approximately 28 times faster than that of a single-thread architecture, combined with improved stability. A study of methods of optimizing the system operations also indicated lower cost. PMID:23877155

  14. Delamination detection using methods of computational intelligence

    NASA Astrophysics Data System (ADS)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable for preventing potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, so such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e., changes in natural frequencies) are used as inputs to identify the existence, location, and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying analysis tool turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to keep the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. Because building ANN surrogate models usually requires large training datasets, k-means clustering is employed to reduce the size of the datasets. The ANN is also used via inverse modeling to determine the position, size, and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and robustness of the approach.

  15. Review of Computational Stirling Analysis Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

    Nuclear thermal-to-electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in current designs could be better understood. However, these engines are difficult to instrument, so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of understanding of Stirling losses may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique, is presented in detail.

  16. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720

  17. 47 CFR 61.14 - Method of filing publications.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Method of filing publications. 61.14 Section 61.14 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) TARIFFS Rules for Electronic Filing § 61.14 Method of filing publications. (a) Publications filed electronically must be addressed to...

  18. 47 CFR 61.14 - Method of filing publications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Method of filing publications. 61.14 Section 61.14 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) TARIFFS Rules for Electronic Filing § 61.14 Method of filing publications. (a) Publications...

  19. 47 CFR 61.20 - Method of filing publications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Method of filing publications. 61.20 Section 61...) TARIFFS General Rules for Nondominant Carriers § 61.20 Method of filing publications. (a) All issuing carriers that file tariffs shall file all tariff publications and associated documents, such as...

  20. 47 CFR 61.20 - Method of filing publications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Method of filing publications. 61.20 Section 61...) TARIFFS General Rules for Nondominant Carriers § 61.20 Method of filing publications. (a) All issuing carriers that file tariffs shall file all tariff publications and associated documents, such as...

  1. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
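
    A minimal sketch of such a retrieval loop, using simulated annealing over model parameters and a least-squares fitness between observed and synthetic spectra; the Gaussian-absorption forward model and all parameter values below are hypothetical stand-ins for a real radiative-transfer model.

        import numpy as np

        def synthetic_spectrum(params, wl):
            """Hypothetical forward model: a Gaussian absorption line."""
            depth, center, width = params
            return 1.0 - depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

        def fitness(params, wl, observed):
            """Degree of dissimilarity between synthetic and observed spectra."""
            return np.sum((synthetic_spectrum(params, wl) - observed) ** 2)

        def anneal(observed, wl, x0, steps=5000, t0=1.0, seed=0):
            """Simulated annealing: accept uphill moves with prob. exp(-df/T)."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, float)
            f = fitness(x, wl, observed)
            for k in range(steps):
                T = t0 * (1.0 - k / steps) + 1e-9          # cooling schedule
                cand = x + rng.normal(scale=0.02, size=x.size)
                fc = fitness(cand, wl, observed)
                if fc < f or rng.random() < np.exp(-(fc - f) / T):
                    x, f = cand, fc
            return x, f

        wl = np.linspace(1.0, 2.0, 200)
        obs = synthetic_spectrum((0.3, 1.5, 0.05), wl) \
              + np.random.default_rng(1).normal(0.0, 0.005, wl.size)
        print(anneal(obs, wl, x0=(0.1, 1.4, 0.1)))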

  2. The Contingent Valuation Method in Public Libraries

    ERIC Educational Resources Information Center

    Chung, Hye-Kyung

    2008-01-01

    This study aims to present a new model measuring the economic value of public libraries, combining the dissonance minimizing (DM) and information bias minimizing (IBM) format in the contingent valuation (CV) surveys. The possible biases which are tied to the conventional CV surveys are reviewed. An empirical study is presented to compare the model…

  3. Computational methods for optical molecular imaging

    PubMed Central

    Chen, Duan; Wei, Guo-Wei; Cong, Wen-Xiang; Wang, Ge

    2010-01-01

    A new computational technique, the matched interface and boundary (MIB) method, is presented to model photon propagation in biological tissue for optical molecular imaging. Optical properties differ significantly between different organs of small animals, resulting in discontinuous coefficients in the diffusion equation model. The complex organ shapes of small animals induce singularities in the geometric model as well. The MIB method is designed as a dimension-splitting approach that decomposes a multidimensional interface problem into one-dimensional ones. The methodology simplifies the topological relations near an interface and is able to handle discontinuous coefficients and complex interfaces with geometric singularities. In the present MIB method, both the interface jump condition and the photon flux jump conditions are rigorously enforced at the interface location by using only the lowest-order jump conditions. The solution near the interface is smoothly extended across the interface so that central finite difference schemes can be employed without loss of accuracy. A wide range of numerical experiments are carried out to validate the proposed MIB method. Second-order convergence is maintained in all benchmark problems, and fourth-order convergence is also demonstrated for some three-dimensional problems. The robustness of the proposed method with respect to the strength of the linear term of the diffusion equation is also examined. The performance of the present approach is compared with that of the standard finite element method. The numerical study indicates that the proposed method is a potentially efficient and robust approach for optical molecular imaging. PMID:20485461

  4. Survey of the Computer Users of the Upper Arlington Public Library.

    ERIC Educational Resources Information Center

    Tsardoulias, L. Sevim

    The Computer Services Department of the Upper Arlington Public Library in Franklin County, Ohio, provides microcomputers for public use, including IBM compatible and Macintosh computers, a laser printer, and dot-matrix printers. Circulation statistics provide data regarding the frequency and amount of computer use, but these statistics indicate…

  5. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 2 2013-07-01 2013-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  6. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 2 2014-07-01 2014-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  7. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 2 2010-07-01 2010-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  8. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 2 2011-07-01 2011-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  9. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 2 2012-07-01 2012-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify...

  10. Computational electromagnetic methods for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was also developed. The human head is highly heterogeneous and characterized by high relative permittivities (on the order of 10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel high-permittivity and low-frequency stable internally combined volume-surface IE method was developed. The method not only applies to the analysis of high-permittivity objects; it is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3
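
    The "factorize once, then solve in seconds" idea behind the fast direct FD solver can be sketched with a generic sparse LU factorization whose triangular solves are reused as the right-hand side changes (e.g., per coil position); the 2-D Laplacian below is only a stand-in for the quasi-static FD system, not the thesis's solver.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import splu

        # Stand-in FD matrix: 2-D Laplacian on an n-by-n grid.
        n = 200
        I = sp.identity(n)
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()

        lu = splu(A)   # one-time factorization (the expensive offline step)

        # Online: each new coil position changes only the right-hand side,
        # so every field solve reduces to fast triangular substitutions.
        rng = np.random.default_rng(0)
        for _ in range(3):
            b = rng.standard_normal(n * n)
            x = lu.solve(b)
        print(x.shape)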

  11. Computational predictive methods for fracture and fatigue

    NASA Astrophysics Data System (ADS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-09-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion-fatigue problems.

  12. Computational predictive methods for fracture and fatigue

    NASA Technical Reports Server (NTRS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-01-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion-fatigue problems.

  13. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1992-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  14. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  15. Computational simulation methods for composite fracture mechanics

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1988-01-01

    Structural integrity, durability, and damage tolerance of advanced composites are assessed by studying damage initiation at various scales (micro, macro, and global) and accumulation and growth leading to global failure, quantitatively and qualitatively. In addition, various fracture toughness parameters associated with a typical damage and its growth must be determined. Computational structural analysis codes to aid the composite design engineer in performing these tasks were developed. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites given a prescribed damage. The general purpose finite element code MSC/NASTRAN was used to simulate the interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.

  16. Modules and methods for all photonic computing

    DOEpatents

    Schultz, David R.; Ma, Chao Hung

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating light from the second optical/electro-optical element back through the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.
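
    As a purely numerical analogue of the patent's recirculation step, the sketch below runs a fixed number of Jacobi relaxation sweeps for a Poisson problem; the optical elements' encoded function and characteristic response are replaced here by an array and a stencil update, so this is an illustration of the iterative idea, not the patented optical process.

        import numpy as np

        def jacobi_poisson(f, iters=500):
            """Fixed-count Jacobi sweeps for laplace(u) = f, u = 0 on the boundary.

            Each sweep plays the role of one pass of the signal through the
            second optical/electro-optical element, whose characteristic
            response implements the iterative PDE-solving algorithm.
            """
            u = np.zeros_like(f)
            h2 = (1.0 / (f.shape[0] - 1)) ** 2
            for _ in range(iters):              # predetermined iteration count
                u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                        + u[1:-1, :-2] + u[1:-1, 2:]
                                        - h2 * f[1:-1, 1:-1])
            return u

        f = np.ones((65, 65))                   # the encoded 2-D input function
        print(jacobi_poisson(f)[32, 32])        # center value of the solution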

  17. Computational Evaluation of the Traceback Method

    ERIC Educational Resources Information Center

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  18. Predicting the Number of Public Computer Terminals Needed for an On-Line Catalog: A Queuing Theory Approach.

    ERIC Educational Resources Information Center

    Knox, A. Whitney; Miller, Bruce A.

    1980-01-01

    Describes a method for estimating the number of cathode ray tube terminals needed for public use of an online library catalog. The authors claim the method could also be used to estimate the number of microform readers needed for a computer output microform (COM) catalog. Formulae are included. (Author/JD)
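
    The kind of queuing calculation the article describes can be sketched with the standard Erlang-C model (an assumption here; the article's own formulae are not reproduced): given the arrival rate, mean session length, and a tolerable probability of waiting, find the smallest terminal count.

        from math import factorial

        def erlang_c(c, a):
            """P(wait) in an M/M/c queue with offered load a = lambda/mu Erlangs."""
            if a >= c:
                return 1.0                      # unstable: everyone waits
            busy = a**c / (factorial(c) * (1.0 - a / c))
            p0_inv = sum(a**k / factorial(k) for k in range(c)) + busy
            return busy / p0_inv

        def terminals_needed(arrivals_per_hour, mean_session_min, max_wait_prob=0.10):
            """Smallest number of terminals keeping P(wait) below the target."""
            a = arrivals_per_hour * mean_session_min / 60.0
            c = max(1, int(a) + 1)
            while erlang_c(c, a) > max_wait_prob:
                c += 1
            return c

        # E.g. 30 catalog users per hour, 10-minute sessions, <=10% must wait.
        print(terminals_needed(30, 10))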

  19. 47 CFR 61.20 - Method of filing publications.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Method of filing publications. 61.20 Section 61.20 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) TARIFFS General Rules for Nondominant Carriers § 61.20 Method of filing publications. (a)...

  20. 47 CFR 61.20 - Method of filing publications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Method of filing publications. 61.20 Section 61.20 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) TARIFFS General Rules for Nondominant Carriers § 61.20 Method of filing publications. (a) All...

  1. Object Orientated Methods in Computational Fluid Dynamics.

    NASA Astrophysics Data System (ADS)

    Tabor, Gavin; Weller, Henry; Jasak, Hrvoje; Fureby, Christer

    1997-11-01

    We outline the aims of the FOAM code, a Finite Volume Computational Fluid Dynamics code written in C++, and discuss the use of Object Orientated Programming (OOP) methods to achieve these aims. The intention when writing this code was to make it as easy as possible to alter the modelling: this was achieved by making the top-level syntax of the code as close as possible to conventional mathematical notation for tensors and partial differential equations. Object orientation enables us to define classes for both types of objects, and the operator overloading possible in C++ allows normal symbols to be used for the basic operations. The introduction of features such as automatic dimension checking of equations helps to enforce correct coding of models. We also discuss the use of OOP techniques such as data encapsulation and code reuse. As examples of the flexibility of this approach, we discuss the implementation of turbulence modelling using RAS and LES. The code is used to simulate turbulent flow for a number of test cases, including fully developed channel flow and flow around obstacles. We also demonstrate the use of the code for structural calculations and magnetohydrodynamics.
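
    The automatic dimension checking described above hinges on operator overloading; a few lines of Python (rather than FOAM's C++, purely for brevity) sketch the idea of values tagged with dimension exponents whose operators enforce consistency.

        class Dimensioned:
            """A value tagged with (m, kg, s) exponents; '+' enforces consistency."""

            def __init__(self, value, dims):
                self.value, self.dims = value, tuple(dims)

            def __add__(self, other):
                if self.dims != other.dims:
                    raise ValueError(f"dimension mismatch: {self.dims} vs {other.dims}")
                return Dimensioned(self.value + other.value, self.dims)

            def __mul__(self, other):
                return Dimensioned(self.value * other.value,
                                   tuple(a + b for a, b in zip(self.dims, other.dims)))

        velocity = Dimensioned(2.0, (1, 0, -1))   # m s^-1
        dt = Dimensioned(0.1, (0, 0, 1))          # s
        print((velocity * dt).value)              # fine: the product is a length
        try:
            velocity + dt                         # dimensionally inconsistent
        except ValueError as err:
            print(err)                            # the model error is caught early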

  2. Radiological Protection in Cone Beam Computed Tomography (CBCT). ICRP Publication 129.

    PubMed

    Rehani, M M; Gupta, R; Bartling, S; Sharp, G C; Pauwels, R; Berris, T; Boone, J M

    2015-07-01

    The objective of this publication is to provide guidance on radiological protection in the new technology of cone beam computed tomography (CBCT). Publications 87 and 102 dealt with patient dose management in computed tomography (CT) and multi-detector CT. The new applications of CBCT and the associated radiological protection issues are substantially different from those of conventional CT. The perception that CBCT involves lower doses was only true in initial applications. CBCT is now used widely by specialists who have little or no training in radiological protection. This publication provides recommendations on radiation dose management directed at different stakeholders, and covers principles of radiological protection, training, and quality assurance aspects. Advice on appropriate use of CBCT needs to be made widely available. Advice on optimisation of protection when using CBCT equipment needs to be strengthened, particularly with respect to the use of newer features of the equipment. Manufacturers should standardise radiation dose displays on CBCT equipment to assist users in optimisation of protection and comparisons of performance. Additional challenges to radiological protection are introduced when CBCT-capable equipment is used for both fluoroscopy and tomography during the same procedure. Standardised methods need to be established for tracking and reporting of patient radiation doses from these procedures. The recommendations provided in this publication may evolve in the future as CBCT equipment and applications evolve. As with previous ICRP publications, the Commission hopes that imaging professionals, medical physicists, and manufacturers will use the guidelines and recommendations provided in this publication for implementation of the Commission's principle of optimisation of protection of patients and medical workers, with the objective of keeping exposures as low as reasonably achievable, taking into account economic and societal factors, and

  3. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

    EPA Science Inventory

    The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

  4. Methods for Improving the User-Computer Interface. Technical Report.

    ERIC Educational Resources Information Center

    McCann, Patrick H.

    This summary of methods for improving the user-computer interface is based on a review of the pertinent literature. Requirements of the personal computer user are identified and contrasted with computer designer perspectives towards the user. The user's psychological needs are described, so that the design of the user-computer interface may be…

  5. Saving lives: a computer simulation game for public education about emergencies

    SciTech Connect

    Morentz, J.W.

    1985-01-01

    One facet of the Information Revolution in which the nation finds itself involves the utilization of computers, video systems, and a variety of telecommunications capabilities by those who must cope with emergency situations. Such technologies possess a significant potential for performing emergency public education and transmitting key information that is essential for survival. An "Emergency Public Information Competitive Challenge Grant," under the aegis of the Federal Emergency Management Agency (FEMA), has sponsored an effort to use computer technology, both large time-sharing systems and small personal computers, to develop computer games which will help teach techniques of emergency management to the public at large. 24 references.

  6. Computational methods in sequence and structure prediction

    NASA Astrophysics Data System (ADS)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our result shows there exists a putative cis-regulatory element "AC(C/G)TAC(C)" in the upstream regions of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes; this element might also be the binding site for the MYB-class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning for the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of the 876 Arabidopsis genes, and (b) exhaustively scanning for all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, called "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to its occurrence within the genome. (c) We will introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework. With this framework we have developed

  7. Reliability of cephalometric analysis using manual and interactive computer methods.

    PubMed

    Davis, D N; Mackay, F

    1991-05-01

    This study compares the results of cephalometric analyses using manual and interactive computer graphics methods. Results are statistically in favour of the interactive computer system. This study provides a basis for ongoing research into alternative methods of cephalometric analyses, such as digitization and automatic landmark identification using sophisticated computer vision systems. PMID:1911687

  8. Computer Competencies for All Educators in North Carolina Public Schools.

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh.

    To assist school systems in establishing computer competencies for inservice teacher training and personnel hiring guidelines, the North Carolina State Board of Education in 1985 approved the recommendations of a state task force, and identified three levels of computer competencies for teachers (K-12), i.e., competencies needed by all educators,…

  9. Universal Tailored Access: Automating Setup of Public and Classroom Computers.

    ERIC Educational Resources Information Center

    Whittaker, Stephen G.; Young, Ted; Toth-Cohen, Susan

    2002-01-01

    This article describes a setup smart access card that enables users with visual impairments to customize magnifiers and screen readers on computers by loading the floppy disk into the computer and finding and pressing two successive keys. A trial with four elderly users found instruction took about 15 minutes. (Contains 3 references.) (CR)

  10. Strengthening Computer Technology Programs. Special Publication Series No. 49.

    ERIC Educational Resources Information Center

    McKinney, Floyd L., Comp.

    Three papers present examples of strategies used by developing institutions and historically black colleges to strengthen computer technology programs. "Promoting Industry Support in Developing a Computer Technology Program" (Albert D. Robinson) describes how the Washtenaw Community College (Ann Arbor, Michigan) Electrical/Electronics Department…

  11. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  12. EPA National Center for Computational Toxicology UPDATE (ICCVAM public forum)

    EPA Science Inventory

    A presentation to the ICCVAM Public Forum on several new and exciting activities at NCCT, including Chemical library update, Chemistry Dashboard, Retrofitting in vitro assays with metabolic competence and In vitro PK.

  13. 77 FR 26509 - Notice of Public Meeting-Cloud Computing Forum & Workshop V

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-04

    ... National Institute of Standards and Technology Notice of Public Meeting--Cloud Computing Forum & Workshop V... announces the Cloud Computing Forum & Workshop V to be held on Tuesday, Wednesday and Thursday, June 5, 6... provide information on the U.S. Government (USG) Cloud Computing Technology Roadmap initiative....

  14. 77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-18

    ... National Institute of Standards and Technology Notice of Public Meeting--Cloud Computing and Big Data Forum...) announces a Cloud Computing and Big Data Forum and Workshop to be held on Tuesday, January 15, Wednesday... workshop. The NIST Cloud Computing and Big Data Forum and Workshop will bring together leaders...

  15. Awareness of Accessibility Barriers in Computer-Based Instructional Materials and Faculty Demographics at South Dakota Public Universities

    ERIC Educational Resources Information Center

    Olson, Christopher

    2013-01-01

    Advances in technology and course delivery methods have enabled persons with disabilities to enroll in higher education at an increasing rate. Federal regulations state persons with disabilities must be granted equal access to the information contained in computer-based instructional materials, but faculty at the six public universities in South…

  16. Excellence in Computational Biology and Informatics — EDRN Public Portal

    Cancer.gov

    9th Early Detection Research Network (EDRN) Scientific Workshop. Excellence in Computational Biology and Informatics: Sponsored by the EDRN Data Sharing Subcommittee Moderator: Daniel Crichton, M.S., NASA Jet Propulsion Laboratory

  17. Proposed standards for peer-reviewed publication of computer code

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...

  18. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

  19. Wing analysis using a transonic potential flow computational method

    NASA Technical Reports Server (NTRS)

    Henne, P. A.; Hicks, R. M.

    1978-01-01

    The ability of the method to compute wing transonic performance was determined by comparing computed results with both experimental data and results computed by other theoretical procedures. Both pressure distributions and aerodynamic forces were evaluated. Comparisons indicated that the method is a significant improvement in transonic wing analysis capability. In particular, the computational method generally calculated the correct development of three-dimensional pressure distributions from subcritical to transonic conditions. Complicated, multiple shocked flows observed experimentally were reproduced computationally. The ability to identify the effects of design modifications was demonstrated both in terms of pressure distributions and shock drag characteristics.

  20. A method of billing third generation computer users

    NASA Technical Reports Server (NTRS)

    Anderson, P. N.; Hyter, D. R.

    1973-01-01

    A method is presented for charging users for the processing of their applications on third generation digital computer systems. For background purposes, problems and goals in billing on third generation systems are discussed. Detailed formulas are derived based on expected utilization and computer component cost. These formulas are then applied to a specific computer system (UNIVAC 1108). The method, although possessing some weaknesses, is presented as a definite improvement over use of second generation billing methods.
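    The report's formulas are not reproduced in this record, but the general shape of utilization-based billing is easy to sketch. The rate model below is an invented illustration, not the paper's actual derivation.

```python
# Toy utilization-based billing (assumed form, not the paper's formulas):
# each component's hourly rate is its amortized cost divided by expected
# busy hours, and a job is charged for its measured usage of each component.
def component_rate(monthly_cost, expected_busy_hours):
    return monthly_cost / expected_busy_hours      # dollars per busy hour

def job_charge(usage_hours, rates):
    return sum(usage_hours[c] * rates[c] for c in usage_hours)

rates = {"cpu": component_rate(40000, 400), "io": component_rate(8000, 250)}
print(job_charge({"cpu": 0.5, "io": 0.2}, rates))  # 0.5*100 + 0.2*32 = 56.4
```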

  1. Computational Methods to Predict Protein Interaction Partners

    NASA Astrophysics Data System (ADS)

    Valencia, Alfonso; Pazos, Florencio

    In the new paradigm for studying biological phenomena represented by Systems Biology, cellular components are not considered in isolation but as forming complex networks of relationships. Protein interaction networks are among the first objects studied from this new point of view. Deciphering the interactome (the whole network of interactions for a given proteome) has been shown to be a very complex task. Computational techniques for detecting protein interactions have become standard tools for dealing with this problem, helping and complementing their experimental counterparts. Most of these techniques use genomic or sequence features intuitively related to protein interactions and are based on "first principles" in the sense that they do not involve training with examples. There are also other computational techniques that use other sources of information (e.g. structural information or even experimental data) or are based on training with examples.
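    One classic "first principles" technique of the kind this record surveys is phylogenetic profiling: proteins whose presence/absence patterns across genomes track each other are candidate functional partners. A minimal sketch with invented profiles:

```python
# Phylogenetic profiling sketch; the presence/absence profiles are toy data.
import numpy as np

profiles = {                       # 1 = ortholog present in genome i
    "protA": np.array([1, 1, 0, 1, 0, 1, 1, 0]),
    "protB": np.array([1, 1, 0, 1, 0, 1, 0, 0]),
    "protC": np.array([0, 0, 1, 0, 1, 0, 0, 1]),
}

def profile_similarity(p, q):
    """Fraction of genomes where the presence/absence calls agree."""
    return float(np.mean(p == q))

for a, b in [("protA", "protB"), ("protA", "protC")]:
    print(a, b, profile_similarity(profiles[a], profiles[b]))
# protA-protB agree in 7/8 genomes (candidate link); protA-protC in 0/8.
```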

  2. The Battle to Secure Our Public Access Computers

    ERIC Educational Resources Information Center

    Sendze, Monique

    2006-01-01

    Securing public access workstations should be a significant part of any library's network and information-security strategy because of the sensitive information patrons enter on these workstations. As the IT manager for the Johnson County Library in Kansas City, Kan., this author is challenged to make sure that thousands of patrons get the access…

  3. The Computer as an Aid to Public Relations Writing.

    ERIC Educational Resources Information Center

    Rayfield, Robert E.

    Teachers of public relations and other communication areas, with endorsement from the Association for Education in Journalism and Mass Communication (AEJMC), should request the data processing industry to develop assisted instruction programs in journalistic writing. Such action would provide a clearly defined need for a significant market and…

  4. Communication and Computation Skills for Blind Students Attending Public Schools.

    ERIC Educational Resources Information Center

    Suffolk County Board of Cooperative Educational Services 3, Dix Hills, NY.

    Outlined are evaluative and instructional procedures used by itinerant teachers of blind children in public schools to teach readiness for braille reading and writing, as well as braille reading and writing, signature writing, and the Nemeth Code of braille mathematics and scientific notation. Readiness for braille reading and writing is…

  5. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K_a, generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K_a values.

  6. Computational methods for physical mapping of chromosomes

    SciTech Connect

    Torney, D.C.; Schenk, K.R. ); Whittaker, C.C. Los Alamos National Lab., NM ); White, S.W. )

    1990-01-01

    A standard technique for mapping a chromosome is to randomly select pieces, to use restriction enzymes to cut these pieces into fragments, and then to use the fragments for estimating the probability of overlap of these pieces. Typically, the order of the fragments within a piece is not determined, and the observed fragment data from each pair of pieces must be permuted N1 × N2 ways to evaluate the probability of overlap, N1 and N2 being the observed number of fragments in the two selected pieces. We will describe computational approaches used to substantially reduce the computational complexity of the calculation of overlap probability from fragment data. Presently, about 10⁻⁴ CPU seconds on one processor of an IBM 3090 is required for calculation of overlap probability from the fragment data of two randomly selected pieces, with an average of ten fragments per piece. A parallel version has been written using IBM clustered FORTRAN. Parallel measurements for 1, 6, and 12 processors will be presented. This approach has proven promising in the mapping of chromosome 16 at Los Alamos National Laboratory. We will also describe other computational challenges presented by physical mapping. 4 refs., 4 figs., 1 tab.
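    The overlap question can be made concrete with a toy score: count fragment lengths shared within a tolerance and ask how surprising that count is under a chance model. The greedy matching, tolerance, and binomial chance model below are assumptions for illustration; the authors' actual probability calculation is more involved.

```python
# Toy fingerprint-overlap score (illustrative assumptions, not the paper's
# exact calculation): greedy length matching plus a binomial chance model.
from math import comb

def matching_fragments(f1, f2, tol=0.02):
    used, hits = set(), 0
    for a in f1:                                   # match each fragment of
        for j, b in enumerate(f2):                 # piece 1 at most once
            if j not in used and abs(a - b) <= tol * max(a, b):
                used.add(j); hits += 1; break
    return hits

def chance_pvalue(k, n1, n2, p=0.05):
    """P(at least k of n1 fragments match by chance); each fragment of
    piece 1 chance-matches some fragment of piece 2 with prob 1-(1-p)^n2."""
    q = 1 - (1 - p) ** n2
    return sum(comb(n1, i) * q**i * (1 - q)**(n1 - i) for i in range(k, n1 + 1))

f1, f2 = [4.1, 7.9, 12.3, 2.2], [4.15, 12.1, 9.0, 2.21, 5.5]
k = matching_fragments(f1, f2)
print(k, chance_pvalue(k, len(f1), len(f2)))       # 3 matches, p ~ 0.04
```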

  7. Computational Methods for Analyzing Health News Coverage

    ERIC Educational Resources Information Center

    McFarlane, Delano J.

    2011-01-01

    Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…

  8. Atomistic Method Applied to Computational Modeling of Surface Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Abel, Phillip B.

    2000-01-01

    The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. The computational tool, which is based on

  9. Integral Deferred Correction methods for scientific computing

    NASA Astrophysics Data System (ADS)

    Morton, Maureen Marilla

    Since high order numerical methods frequently can attain accurate solutions more efficiently than low order methods, we develop and analyze new high order numerical integrators for the time discretization of ordinary and partial differential equations. Our novel methods address some of the issues surrounding high order numerical time integration, such as the difficulty of many popular methods' construction and handling the effects of disparate behaviors produced by different terms in the equations to be solved. We are motivated by the simplicity of how Deferred Correction (DC) methods achieve high order accuracy [72, 27]. DC methods are numerical time integrators that, rather than calculating tedious coefficients for order conditions, instead construct high order accurate solutions by iteratively improving a low order preliminary numerical solution. With each iteration, an error equation is solved, the error decreases, and the order of accuracy increases. Later, DC methods were adjusted to include an integral formulation of the residual, which stabilizes the method. These Spectral Deferred Correction (SDC) methods [25] motivated Integral Deferred Corrections (IDC) methods. Typically, SDC methods are limited to increasing the order of accuracy by one with each iteration due to smoothness properties imposed by the grid spacing. However, under mild assumptions, explicit IDC methods allow for any explicit rth order Runge-Kutta (RK) method to be used within each iteration, and then an order of accuracy increase of r is attained after each iteration [18]. We extend these results to the construction of implicit IDC methods that use implicit RK methods, and we prove analogous results for order of convergence. One means of solving equations with disparate parts is by semi-implicit integrators, handling a "fast" part implicitly and a "slow" part explicitly. We incorporate additive RK (ARK) integrators into the iterations of IDC methods in order to construct new arbitrary order
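    The core sweep is simple enough to sketch. The following minimal explicit deferred-correction step uses uniform substeps, a forward-Euler predictor, and a trapezoidal approximation of the residual integral; the cited schemes use spectral quadrature nodes and higher-order integrators within each sweep, so this is a simplification, not the thesis's method.

```python
# Minimal explicit deferred-correction step (simplified sketch: uniform
# nodes and trapezoidal quadrature are assumptions, not the cited schemes).
import numpy as np

def sdc_step(f, t0, y0, dt, M=4, sweeps=3):
    """One deferred-correction step for y' = f(t, y) on [t0, t0+dt]."""
    t = t0 + dt * np.arange(M + 1) / M
    h = dt / M
    y = np.empty(M + 1); y[0] = y0
    for m in range(M):                             # low-order predictor
        y[m + 1] = y[m] + h * f(t[m], y[m])
    for _ in range(sweeps):                        # correction sweeps
        F = np.array([f(tm, ym) for tm, ym in zip(t, y)])
        ynew = np.empty_like(y); ynew[0] = y0
        for m in range(M):
            integral = 0.5 * h * (F[m] + F[m + 1])   # quadrature of old F
            ynew[m + 1] = ynew[m] + h * (f(t[m], ynew[m]) - F[m]) + integral
        y = ynew
    return y[-1]

print(sdc_step(lambda t, y: -y, 0.0, 1.0, dt=1.0))   # approaches e**-1 ~ 0.368
```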

  10. The ACLS Survey of Scholars: Views on Publications, Computers, Libraries.

    ERIC Educational Resources Information Center

    Morton, Herbert C.; Price, Anne Jamieson

    1986-01-01

    Reviews results of a survey by the American Council of Learned Societies (ACLS) of 3,835 scholars in the humanities and social sciences who are working both in colleges and universities and outside the academic community. Areas highlighted include professional reading, authorship patterns, computer use, and library use. (LRW)

  11. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  12. Public Experiments and their Analysis with the Replication Method

    NASA Astrophysics Data System (ADS)

    Heering, Peter

    2007-06-01

    One of those who failed to establish himself as a natural philosopher in 18th century Paris was the future revolutionary Jean Paul Marat. He not only published several monographs on heat, optics and electricity, in which he attempted to characterise his work as purely empirical, but also tried to establish himself as a public lecturer. From the analysis of his experiments using the replication method it became obvious that the written descriptions omit several relevant aspects of the experiments. In my paper, I am going to discuss the experience gained in analysing these experiments and will suggest possible relations between these publications and the public demonstrations.

  13. A public data hub for benchmarking common brain-computer interface algorithms.

    PubMed

    Zander, Thorsten O; Ihme, Klas; Gärtner, Matti; Rötting, Matthias

    2011-04-01

    Methods of statistical machine learning have recently proven to be very useful in contemporary brain-computer interface (BCI) research based on the discrimination of electroencephalogram (EEG) patterns. Because of this, many research groups develop new algorithms for both feature extraction and classification. However, until now, no large-scale comparison of these algorithms has been accomplished due to the fact that little EEG data is publicly available. Therefore, we at Team PhyPA recorded 32-channel EEGs, electromyograms and electrooculograms of 36 participants during a simple finger movement task. The data are published on our website www.phypa.org and are freely available for downloading. We encourage BCI researchers to test their algorithms on these data and share their results. This work also presents exemplary benchmarking procedures of common feature extraction methods for slow cortical potentials and event-related desynchronization as well as for classification algorithms based on these features. PMID:21436533

  14. Computation of Transonic Flows Using Potential Methods

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1997-01-01

    The proposed paper will describe the state of the art associated with numerical solution of the full or exact velocity potential equation for solving transonic, external-aerodynamic flows. The presentation will begin with a review of the literature emphasizing research activities of the past decade. Next, the various forms of the full or exact velocity potential equation, the equation's corresponding mathematical characteristics, and the derivation assumptions will be presented and described in detail. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, will be presented and discussed relative to the more complete Euler or Navier-Stokes formulations. The technical presentation will continue with a description of recently developed full potential numerical approach characteristics. This description will include governing equation nondimensionalization, physical-to-computational-domain mapping procedures, a limited description of grid generation requirements, the spatial discretization scheme, numerical implementation of boundary conditions, and the iteration scheme. The next portion of the presentation will present and discuss numerical results for several two- and three-dimensional aerodynamic applications. Included in the results section will be a discussion and demonstration of a typical grid refinement analysis for determining spatial convergence of the numerical solution and level of solution accuracy. Computer timings for a variety of full potential applications will be compared and contrasted with similar results for the Euler equation formulation. Finally, the presentation will end with concluding remarks and recommendations for future work.

  15. Computational Methods for Jet Noise Simulation

    NASA Technical Reports Server (NTRS)

    Goodrich, John W. (Technical Monitor); Hagstrom, Thomas

    2003-01-01

    The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.

  16. Discontinuous Galerkin Methods: Theory, Computation and Applications

    SciTech Connect

    Cockburn, B.; Karniadakis, G. E.; Shu, C-W

    2000-12-31

    This volume contains a survey article for Discontinuous Galerkin Methods (DGM) by the editors as well as 16 papers by invited speakers and 32 papers by contributed speakers of the First International Symposium on Discontinuous Galerkin Methods. It covers theory, applications, and implementation aspects of DGM.

  17. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  18. Ecological validity and the study of publics: The case for organic public engagement methods.

    PubMed

    Gehrke, Pat J

    2014-01-01

    This essay argues for a method of public engagement grounded in the criteria of ecological validity. Motivated by what Hammersly called the responsibility that comes with intellectual authority: "to seek, as far as possible, to ensure the validity of their conclusions and to participate in rational debate about those conclusions" (1993: 29), organic public engagement follows the empirical turn in citizenship theory and in rhetorical studies of actually existing publics. Rather than shaping citizens into either the compliant subjects of the cynical view or the deliberatively disciplined subjects of the idealist view, organic public engagement instead takes Asen's advice that "we should ask: how do people enact citizenship?" (2004: 191). In short, organic engagement methods engage publics in the places where they already exist and through those discourses and social practices by which they enact their status as publics. Such engagements can generate practical middle-range theories that facilitate future actions and decisions that are attentive to the local ecologies of diverse publics. PMID:23887250

  19. Three parallel computation methods for structural vibration analysis

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf; Bostic, Susan; Patrick, Merrell; Mahajan, Umesh; Ma, Shing

    1988-01-01

    The Lanczos (1950), multisectioning, and subspace iteration sequential methods for vibration analysis, used here as the bases for three parallel algorithms, are found in three example problems to maintain reasonable accuracy in the computation of vibration frequencies. Significant computation time reductions are obtained as the number of processors increases. An analysis is made of the performance of each method, in order to characterize relative strengths and weaknesses as well as to identify those parameters that most strongly affect computation efficiency.

  20. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  1. 12 CFR 227.25 - Unfair balance computation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... under 12 CFR 226.12 or 12 CFR 226.13; or (2) Adjustments to finance charges as a result of the return of... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Unfair balance computation method. 227.25... Practices Rule § 227.25 Unfair balance computation method. (a) General rule. Except as provided in...

  2. Overview of computational structural methods for modern military aircraft

    NASA Technical Reports Server (NTRS)

    Kudva, J. N.

    1992-01-01

    Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

  3. Classical versus Computer Algebra Methods in Elementary Geometry

    ERIC Educational Resources Information Center

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non-elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  4. Method and device for computed tomography

    SciTech Connect

    Lux, P.W.; Op De Beek, J.C.A.; Van Leiden, H.F.

    1983-09-06

    A computed tomography device in which the detectors are asymmetrically arranged with respect to the connecting line between the X-ray source, the center of rotation of the source, and the detectors is disclosed. The detector device produces an incomplete profile of measuring values which are supplemented with "zeros" during processing in order to form a number of measuring values of a complete profile. In order to avoid artefacts which are produced by the acute transients between measuring values and "zeros", a number of measuring values adjoining the acute transients are projected around the center of rotation and multiplied by a factor so that from the zeros a smoothly increasing series of adapted measuring values is obtained.
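    A rough numpy illustration of the profile-completion idea (the array sizes, mirror fill, and cosine ramp are assumptions; the patent's exact projection-and-factor scheme is not reproduced):

```python
# Hedged sketch: supplement an asymmetric detector profile with zeros and
# bridge the sharp onset with mirrored, ramp-weighted values.
import numpy as np

def complete_profile(measured, full_len, taper=8):
    """measured covers indices [full_len - len(measured), full_len)."""
    start = full_len - len(measured)
    prof = np.zeros(full_len)
    prof[start:] = measured
    center = full_len // 2                  # index of the central ray
    for k in range(1, taper + 1):           # fill just before the onset
        i = start - k
        j = 2 * center - i                  # mirror index about the center
        if 0 <= j < full_len:
            w = 0.5 * (1 + np.cos(np.pi * k / taper))   # ramps 1 -> 0
            prof[i] = w * prof[j]
    return prof

full = complete_profile(np.ones(96), full_len=128)   # smooth, not abrupt, onset
```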

  5. The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers

    ERIC Educational Resources Information Center

    DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert

    2013-01-01

    Public libraries play an important part in the development of a community. Today, they are seen as more than storehouses of books; they are also responsible for the dissemination of online and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for internet access. Using a…

  6. COMSAC: Computational Methods for Stability and Control. Part 1

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: a NASA Langley Perspective; Emerging CFD Capabilities and Outlook A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?

  7. Computational method for analysis of polyethylene biodegradation

    NASA Astrophysics Data System (ADS)

    Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro

    2003-12-01

    In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into analysis of our model. We first establish its mathematical foundation in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem to determine the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of the polyethylene biodegradation posed in modeling are indeed appropriate.
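    A toy realization of such a linear system (class count and rates invented; the paper's model and fitted rates are not reproduced): weight in molecular-weight class i is lost to direct consumption and to β-oxidation, which shifts weight into the next-lower class.

```python
# Toy linear ODE system in the spirit of the model described above.
import numpy as np
from scipy.integrate import solve_ivp

n = 20
alpha = 0.05 * np.exp(-np.arange(n) / 5.0)   # consumption, fastest for small molecules
beta = 0.10 * np.ones(n)                     # beta-oxidation rate per class

def rhs(t, w):
    dw = -(alpha + beta) * w                 # losses from each class
    dw[:-1] += beta[1:] * w[1:]              # gain from the next-heavier class
    return dw

w0 = np.exp(-0.5 * ((np.arange(n) - 12) / 3.0) ** 2)   # GPC-like initial curve
sol = solve_ivp(rhs, (0, 35), w0, t_eval=[0, 35])      # ~5 weeks, in days
print(sol.y[:, -1])   # mass shifted toward lower molecular-weight classes
```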

  8. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, college information construction mainly involves the construction of campus networks and management information systems, and many problems arise during this informatization process. Cloud computing is a development of distributed processing, parallel processing and grid computing, in which data are stored in the cloud, software and services are placed in the cloud and built on top of various standards and protocols, and everything can be accessed through many kinds of equipment. This article introduces cloud computing and its functions, then analyzes the existing problems of college network resource management; cloud computing technology and methods are applied in the construction of a college information-sharing platform.

  9. Computer-Based National Information Systems. Technology and Public Policy Issues.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    A general introduction to computer based national information systems, and the context and basis for future studies are provided in this report. Chapter One, the introduction, summarizes computers and information systems and their relation to society, the structure of information policy issues, and public policy issues. Chapter Two describes the…

  10. Adding It Up: Is Computer Use Associated with Higher Achievement in Public Elementary Mathematics Classrooms?

    ERIC Educational Resources Information Center

    Kao, Linda Lee

    2009-01-01

    Despite support for technology in schools, there is little evidence indicating whether using computers in public elementary mathematics classrooms is associated with improved outcomes for students. This exploratory study examined data from the Early Childhood Longitudinal Study, investigating whether students' frequency of computer use was related…

  11. Small Towns and Small Computers: Can a Match Be Made? A Public Policy Seminar.

    ERIC Educational Resources Information Center

    National Association of Towns and Townships, Washington, DC.

    A public policy seminar discussed how to match small towns and small computers. James K. Coyne, Special Assistant to the President and Director of the White House Office of Private Sector Initiatives, offered opening remarks and described a database system developed by his office to link organizations and communities with small computers to…

  12. Who's in the Queue? A Demographic Analysis of Public Access Computer Users and Uses in U.S. Public Libraries. Research Brief Number 4

    ERIC Educational Resources Information Center

    Manjarrez, Carlos A.; Schoembs, Kyle

    2011-01-01

    Over the past decade, policy discussions about public access computing in libraries have focused on the role that these institutions play in bridging the digital divide. In these discussions, public access computing services are generally targeted at individuals who either cannot afford a computer and Internet access, or have never received formal…

  13. Transonic Flow Computations Using Nonlinear Potential Methods

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.

  14. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  15. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  16. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  17. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  18. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks, Forests, and Public... of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available... use personally owned diskettes on NARA personal computers. You may not load files or any type...

  19. Method for transferring data from an unsecured computer to a secured computer

    DOEpatents

    Nilsen, Curt A.

    1997-01-01

    A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
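    A minimal sketch of the double-transfer check (the patent does not specify the error-detection mechanism; the hashes and simulated transport below are assumptions):

```python
# Hedged sketch: verify two copies of the data; warn only if both transfers
# were corrupted, matching the (i)-and-(ii) condition quoted above.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def secured_receive(first, first_digest, second, second_digest):
    err1 = digest(first) != first_digest       # transmit/receive corrupted?
    err2 = digest(second) != second_digest     # retransmit/rereceive corrupted?
    if err1 and err2:
        print("WARNING: both transfers corrupted")   # the warning device
        return None
    return second if err1 else first           # keep an intact copy

payload = b"sensor log 42"
ok = secured_receive(payload, digest(payload), payload, digest(payload))
bad = secured_receive(b"sensor log 4X", digest(payload),
                      b"sens0r log 42", digest(payload))   # triggers warning
```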

  20. Statistical and Computational Methods for Genetic Diseases: An Overview

    PubMed Central

    Di Taranto, Maria Donata

    2015-01-01

    The identification of causes of genetic diseases has been carried out by several approaches with increasing complexity. Innovation of genetic methodologies leads to the production of large amounts of data that needs the support of statistical and computational methods to be correctly processed. The aim of the paper is to provide an overview of statistical and computational methods paying attention to methods for the sequence analysis and complex diseases. PMID:26106440

  1. Multiscale methods for computational RNA enzymology

    PubMed Central

    Panteva, Maria T.; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K.; Kuechler, Erich R.; Giambaşu, George M.; Lee, Tai-Sung; York, Darrin M.

    2016-01-01

    RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multi-dimensional “problem space” of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues and bond breaking/forming in the chemical steps of the reaction. The goal of this article is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics (MD) simulations, reference interaction site model (RISM) calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics (HREMD) and quantum mechanical/molecular mechanical (QM/MM) simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme (HDVr) and RNase A. PMID:25726472

  2. Coarse-graining methods for computational biology.

    PubMed

    Saunders, Marissa G; Voth, Gregory A

    2013-01-01

    Connecting the molecular world to biology requires understanding how molecular-scale dynamics propagate upward in scale to define the function of biological structures. To address this challenge, multiscale approaches, including coarse-graining methods, become necessary. We discuss here the theoretical underpinnings and history of coarse-graining and summarize the state of the field, organizing key methodologies based on an emerging paradigm for multiscale theory and modeling of biomolecular systems. This framework involves an integrated, iterative approach to couple information from different scales. The primary steps, which coincide with key areas of method development, include developing first-pass coarse-grained models guided by experimental results, performing numerous large-scale coarse-grained simulations, identifying important interactions that drive emergent behaviors, and finally reconnecting to the molecular scale by performing all-atom molecular dynamics simulations guided by the coarse-grained results. The coarse-grained modeling can then be extended and refined, with the entire loop repeated iteratively if necessary. PMID:23451897

  3. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  4. Naive vs. Sophisticated Methods of Forecasting Public Library Circulations.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1984-01-01

    Two sophisticated--autoregressive integrated moving average (ARIMA), straight-line regression--and two naive--simple average, monthly average--forecasting techniques were used to forecast monthly circulation totals of 34 public libraries. Comparisons of forecasts and actual totals revealed that ARIMA and monthly average methods had smallest mean…
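    The two naive baselines are easy to state precisely; the sketch below compares them with an ARIMA fit on an invented seasonal circulation series (statsmodels assumed available; the study's data and model orders are not reproduced).

```python
# Naive vs. ARIMA forecasts on invented monthly circulation data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
months = np.arange(72)
circ = 5000 + 800 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 150, 72)
train, test = circ[:60], circ[60:]

simple_avg = np.full(12, train.mean())             # one flat forecast
monthly_avg = train.reshape(5, 12).mean(axis=0)    # per-calendar-month mean
arima = ARIMA(train, order=(1, 0, 1),
              seasonal_order=(1, 0, 0, 12)).fit().forecast(12)

for name, pred in [("simple avg", simple_avg), ("monthly avg", monthly_avg),
                   ("ARIMA", np.asarray(arima))]:
    print(name, np.mean(np.abs(pred - test)))      # mean absolute error
```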

  5. A Method for Fast Computation of FTLE Fields

    NASA Astrophysics Data System (ADS)

    Brunton, Steven; Rowley, Clarence

    2008-11-01

    An efficient method for computing finite time Lyapunov exponent (FTLE) fields is investigated. FTLE fields, which measure the stretching between nearby particles, are important in determining transport mechanisms in unsteady flows. Ridges of the FTLE field are Lagrangian Coherent Structures (LCS) and provide an unsteady analogue of invariant manifolds from dynamical systems theory. FTLE field computations are expensive because of the large number of particle trajectories which must be integrated. However, when computing a time series of fields, it is possible to use the integrated trajectories at a previous time to compute an approximation of the integrated trajectories initialized at a later time, resulting in significant computational savings. This work provides analytic estimates for accumulated error and computation time as well as simulations comparing exact results with the approximate method for a number of interesting flows.
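    For reference, the baseline computation the method accelerates looks like this (a hedged sketch on a steady test field; the paper's trajectory-reuse approximation itself is not reproduced):

```python
# Baseline FTLE at a point: integrate nearby particles, form the flow-map
# Jacobian by central differences, take the largest singular value.
import numpy as np

def velocity(t, p):                        # steady double-gyre-like test field
    x, y = p
    return np.array([-np.pi * np.sin(np.pi * x) * np.cos(np.pi * y),
                      np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)])

def flow_map(p, t0, T, n=200):             # RK4 particle integration
    h, t = T / n, t0
    for _ in range(n):
        k1 = velocity(t, p)
        k2 = velocity(t + h/2, p + h/2 * k1)
        k3 = velocity(t + h/2, p + h/2 * k2)
        k4 = velocity(t + h, p + h * k3)
        p = p + h/6 * (k1 + 2*k2 + 2*k3 + k4); t += h
    return p

def ftle(x, y, T=1.0, d=1e-4):
    F = lambda q: flow_map(np.array(q), 0.0, T)
    J = np.column_stack([(F((x + d, y)) - F((x - d, y))) / (2 * d),
                         (F((x, y + d)) - F((x, y - d))) / (2 * d)])
    return np.log(np.linalg.svd(J, compute_uv=False)[0]) / abs(T)

print(ftle(0.3, 0.4))   # repeat over a grid of seeds to obtain the field
```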

  6. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  7. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
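    A loose pandas analogue of the idea (not the patented system): two levels of a dimension hierarchy become two visual components, with the data "queried" by a groupby specification. The dataset and the level-to-component mapping are invented.

```python
# Hedged analogue: region level -> subplot columns, city level -> bars.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.DataFrame({            # invented data; dimension = region > city
    "region": ["East", "East", "West", "West"],
    "city":   ["NYC", "Boston", "LA", "Seattle"],
    "sales":  [120, 90, 150, 60],                  # the measure
})

spec = data.groupby(["region", "city"])["sales"].sum()   # query per the spec
fig, axes = plt.subplots(1, spec.index.levels[0].size, sharey=True)
for ax, (region, rows) in zip(axes, spec.groupby(level="region")):
    rows.droplevel("region").plot.bar(ax=ax, title=region)
plt.show()
```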

  8. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation.

    PubMed

    Hu, Hongzhi; Mao, Huajuan; Hu, Xiaohua; Hu, Feng; Sun, Xuemin; Jing, Zaiping; Duan, Yunsuo

    2015-01-01

    Owing to their extensive social influence, public health emergencies attract great attention in today's society. Booming social networks are becoming a main information dissemination platform for those events and have raised high concern in emergency management, where a good prediction of information dissemination in social networks is necessary for estimating an event's social impacts and making a proper strategy. However, information dissemination is largely affected by complex interactive activities and group behaviors in social networks, and existing methods and models are too limited to achieve satisfactory predictions because of open, changeable social connections and uncertain information-processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. In order to obtain better information dissemination prediction in social networks, this paper proposes an intelligent computation method under the framework of TDF (Theory-Data-Feedback) based on an ACP simulation system, which was successfully applied to the analysis of the A (H1N1) flu emergency. PMID:26609303

  9. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation

    PubMed Central

    Hu, Hongzhi; Mao, Huajuan; Hu, Xiaohua; Hu, Feng; Sun, Xuemin; Jing, Zaiping; Duan, Yunsuo

    2015-01-01

    Owing to their extensive social influence, public health emergencies attract great attention in today's society. Booming social networks are becoming a main information dissemination platform for those events and have raised high concern in emergency management, where a good prediction of information dissemination in social networks is necessary for estimating an event's social impacts and making a proper strategy. However, information dissemination is largely affected by complex interactive activities and group behaviors in social networks, and existing methods and models are too limited to achieve satisfactory predictions because of open, changeable social connections and uncertain information-processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. In order to obtain better information dissemination prediction in social networks, this paper proposes an intelligent computation method under the framework of TDF (Theory-Data-Feedback) based on an ACP simulation system, which was successfully applied to the analysis of the A (H1N1) flu emergency. PMID:26609303

  10. Low-Rank Incremental Methods for Computing Dominant Singular Subspaces

    SciTech Connect

    Baker, Christopher G; Gallivan, Dr. Kyle A; Van Dooren, Dr. Paul

    2012-01-01

    Computing the singular values and vectors of a matrix is a crucial kernel in numerous scientific and industrial applications. As such, numerous methods have been proposed to handle this problem in a computationally efficient way. This paper considers a family of methods for incrementally computing the dominant SVD of a large matrix A. Specifically, we describe a unification of a number of previously disparate methods for approximating the dominant SVD via a single pass through A. We tie the behavior of these methods to that of a class of optimization-based iterative eigensolvers on A'*A. An iterative procedure is proposed which allows the computation of an accurate dominant SVD via multiple passes through A. We present an analysis of the convergence of this iteration, and provide empirical demonstration of the proposed method on both synthetic and benchmark data.
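    A generic single-pass update of the kind this family unifies can be sketched in a few lines of numpy (an illustrative member of the family, not the authors' specific algorithm):

```python
# Incremental dominant SVD: fold in column blocks, rotate, truncate to rank k.
import numpy as np

def incremental_svd(A, k, block=16):
    U, s, _ = np.linalg.svd(A[:, :block], full_matrices=False)
    U, s = U[:, :k], s[:k]
    for j in range(block, A.shape[1], block):
        C = A[:, j:j + block]
        proj = U.T @ C
        Q, R = np.linalg.qr(C - U @ proj)          # new directions
        K = np.block([[np.diag(s),                proj],
                      [np.zeros((R.shape[0], k)), R]])
        Uk, sk, _ = np.linalg.svd(K, full_matrices=False)
        U = np.hstack([U, Q]) @ Uk[:, :k]          # rotate and truncate
        s = sk[:k]
    return U, s

rng = np.random.default_rng(1)
A = (rng.normal(size=(200, 12)) * 10.0**-np.arange(12)) @ rng.normal(size=(12, 300))
U, s = incremental_svd(A, k=10)
exact = np.linalg.svd(A, compute_uv=False)[:10]
print(np.max(np.abs(s - exact) / exact))           # small truncation error
```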

  11. Developing a multimodal biometric authentication system using soft computing methods.

    PubMed

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision. PMID:25502384

  12. Review - Computational methods for internal flows with emphasis on turbomachinery

    NASA Technical Reports Server (NTRS)

    Mcnally, W. D.; Sockol, P. M.

    1985-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. Viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  13. Computational methods for internal flows with emphasis on turbomachinery

    NASA Technical Reports Server (NTRS)

    Mcnally, W. D.; Sockol, P. M.

    1981-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. Viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  14. The Importance of Computer Science for Public Health Training: An Opportunity and Call to Action.

    PubMed

    Kunkle, Sarah; Christie, Gillian; Yach, Derek; El-Sayed, Abdulrahman M

    2016-01-01

    A century ago, the Welch-Rose Report established a public health education system in the United States. Since then, the system has evolved to address emerging health needs and integrate new technologies. Today, personalized health technologies generate large amounts of data. Emerging computer science techniques, such as machine learning, present an opportunity to extract insights from these data that could help identify high-risk individuals and tailor health interventions and recommendations. As these technologies play a larger role in health promotion, collaboration between the public health and technology communities will become the norm. Offering public health trainees coursework in computer science alongside traditional public health disciplines will facilitate this evolution, improving public health's capacity to harness these technologies to improve population health. PMID:27227145

  15. The Importance of Computer Science for Public Health Training: An Opportunity and Call to Action

    PubMed Central

    Christie, Gillian; Yach, Derek; El-Sayed, Abdulrahman M

    2016-01-01

    A century ago, the Welch-Rose Report established a public health education system in the United States. Since then, the system has evolved to address emerging health needs and integrate new technologies. Today, personalized health technologies generate large amounts of data. Emerging computer science techniques, such as machine learning, present an opportunity to extract insights from these data that could help identify high-risk individuals and tailor health interventions and recommendations. As these technologies play a larger role in health promotion, collaboration between the public health and technology communities will become the norm. Offering public health trainees coursework in computer science alongside traditional public health disciplines will facilitate this evolution, improving public health’s capacity to harness these technologies to improve population health. PMID:27227145

  16. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
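    A minimal differential evolution loop, one of the ECM ingredients named above, is sketched below; the quadratic objective stands in for an expensive simulator.

```python
# Minimal DE/rand/1/bin differential evolution (illustrative sketch).
import numpy as np

def differential_evolution(obj, bounds, pop=30, gens=200, F=0.7, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, lo.size))
    fit = np.array([obj(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)    # mutation
            cross = rng.random(lo.size) < CR             # binomial crossover
            cross[rng.integers(lo.size)] = True          # force one gene
            trial = np.where(cross, mutant, X[i])
            f = obj(trial)
            if f < fit[i]:                               # greedy selection
                X[i], fit[i] = trial, f
    return X[fit.argmin()], fit.min()

best, fbest = differential_evolution(lambda x: float(np.sum(x**2)),
                                     bounds=[(-5, 5)] * 4)
print(best, fbest)   # converges near the origin
```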

  17. GAP Noise Computation By The CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2001-01-01

    A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

  18. A typology of health marketing research methods--combining public relations methods with organizational concern.

    PubMed

    Rotarius, Timothy; Wan, Thomas T H; Liberman, Aaron

    2007-01-01

    Research plays a critical role throughout virtually every conduit of the health services industry. The key terms of research, public relations, and organizational interests are discussed. Combining public relations as a strategic methodology with the organizational concern as a factor, a typology of four different research methods emerges. These four health marketing research methods are: investigative, strategic, informative, and verification. The implications of these distinct and contrasting research methods are examined. PMID:19042536

  19. Platform-independent method for computer aided schematic drawings

    DOEpatents

    Vell, Jeffrey L.; Siganporia, Darius M.; Levy, Arthur J.

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  20. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    Iterative computer aided procedure was developed which provides for identification of boiler transfer functions using frequency response data. Method uses frequency response data to obtain satisfactory transfer function for both high and low vapor exit quality data.
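
    The record does not give the identification algorithm, but a minimal sketch of fitting a transfer function to frequency response data can be built on the observation that a first-order model G(jw) = K / (1 + jw*tau) rearranges to G = K - tau*(jw*G), which is linear in K and tau. The model order and the synthetic data below are assumptions for illustration only.

      import numpy as np

      def fit_first_order(omega, G):
          """Least-squares fit of G(jw) ~ K / (1 + jw*tau) to measured data."""
          A = np.column_stack([np.ones_like(G), -1j * omega * G])
          # stack real and imaginary parts so the solve stays real-valued
          A_ri = np.vstack([A.real, A.imag])
          b_ri = np.concatenate([G.real, G.imag])
          (K, tau), *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
          return K, tau

      # Synthetic "measured" response for K=2.0, tau=5.0, plus a little noise
      omega = np.logspace(-2, 1, 50)
      G_true = 2.0 / (1 + 1j * omega * 5.0)
      rng = np.random.default_rng(1)
      G_meas = G_true + 0.01 * (rng.standard_normal(50) + 1j * rng.standard_normal(50))
      print(fit_first_order(omega, G_meas))   # ~ (2.0, 5.0)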

  1. Key management of the double random-phase-encoding method using public-key encryption

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2010-03-01

    Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image has been encrypted by using the double random-phase-encoding method using an extended fractional Fourier transform. The key of the encryption process has been encoded by using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key has then been transmitted to the receiver side along with the encrypted image. In the decryption process, first the encoded key has been decrypted using the secret key, and then the encrypted image has been decrypted by using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the key has been eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
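
    A minimal sketch of the key-management idea, public-key encryption of the encryption key parameters, is shown below with a deliberately toy RSA (tiny primes, no padding); the specific key parameters are hypothetical, and a real system would use a vetted cryptographic library.

      # Toy RSA with small primes; real systems need a vetted library and padding.
      p, q = 61, 53                      # secret primes
      n, phi = p * q, (p - 1) * (q - 1)  # n = 3233
      e = 17                             # public exponent, coprime with phi
      d = pow(e, -1, phi)                # private exponent (Python 3.8+ modular inverse)

      def rsa_encrypt(m, e, n):
          return pow(m, e, n)

      def rsa_decrypt(c, d, n):
          return pow(c, d, n)

      # Encode the (integer-valued) DRPE key parameters for transmission.
      key_params = [123, 456, 789]       # hypothetical key parameters, each < n
      cipher = [rsa_encrypt(k, e, n) for k in key_params]
      recovered = [rsa_decrypt(c, d, n) for c in cipher]
      assert recovered == key_params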

  2. Computer Simulation Methods for Defect Configurations and Nanoscale Structures

    SciTech Connect

    Gao, Fei

    2010-01-01

    This chapter will describe general computer simulation methods, including ab initio calculations, molecular dynamics, and the kinetic Monte Carlo method, and their applications to the calculation of defect configurations in various materials (metals, ceramics, and oxides) and the simulation of nanoscale structures due to ion-solid interactions. The multiscale theory, modeling, and simulation techniques (in both time and space scales) will be emphasized, and comparisons between computer simulation results and experimental observations will be made.
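
    As an illustration of one of the listed techniques, here is a minimal molecular dynamics sketch: velocity-Verlet integration of a small Lennard-Jones cluster. The potential parameters, time step, and initial geometry are illustrative assumptions, not values from the chapter.

      import numpy as np

      def lj_forces(pos, eps=1.0, sigma=1.0):
          """Pairwise Lennard-Jones forces for a small cluster (no cutoff, no PBC)."""
          n = len(pos)
          f = np.zeros_like(pos)
          for i in range(n):
              for j in range(i + 1, n):
                  r = pos[i] - pos[j]
                  r2 = np.dot(r, r)
                  sr6 = (sigma ** 2 / r2) ** 3
                  # F = 24*eps*(2*sr12 - sr6) / r2 * r_vec
                  fij = 24 * eps * (2 * sr6 ** 2 - sr6) / r2 * r
                  f[i] += fij
                  f[j] -= fij
          return f

      def velocity_verlet(pos, vel, dt=1e-3, steps=1000):
          f = lj_forces(pos)
          for _ in range(steps):
              vel += 0.5 * dt * f        # half-kick (unit mass)
              pos += dt * vel            # drift
              f = lj_forces(pos)
              vel += 0.5 * dt * f        # half-kick
          return pos, vel

      # Three atoms in a line, slightly compressed relative to the LJ minimum
      pos = np.array([[0.0, 0, 0], [1.1, 0, 0], [2.2, 0, 0]])
      vel = np.zeros_like(pos)
      pos, vel = velocity_verlet(pos, vel)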

  3. Panel-Method Computer Code For Potential Flow

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steven K.

    1992-01-01

    Low-order panel method used to reduce computation time. Panel code PMARC (Panel Method Ames Research Center) numerically simulates flow field around or through complex three-dimensional bodies such as complete aircraft models or wind tunnels. Based on potential-flow theory. Facilitates addition of new features to code and tailoring of code to specific problems and computer-hardware constraints. Written in standard FORTRAN 77.

  4. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
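
    Since the claimed relation is a plain sum of three time-period-specific terms, it transcribes directly to code; the figures below are made up for illustration.

      def future_facility_conditions(maintenance_cost: float,
                                     modernization_factor: float,
                                     backlog_factor: float) -> float:
          """Future facility conditions per the patent's stated relation."""
          return maintenance_cost + modernization_factor + backlog_factor

      print(future_facility_conditions(120_000.0, 35_000.0, 15_000.0))  # 170000.0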

  5. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  6. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  7. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
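
    For readers unfamiliar with the multigrid concept the report builds on, here is a minimal V-cycle sketch for the 1D Poisson problem with weighted-Jacobi smoothing; it illustrates the general idea only and is unrelated to the Proteus implementation.

      import numpy as np

      def smooth(u, f, h, n_iter=3, w=2/3):
          """Weighted-Jacobi sweeps for -u'' = f with u(0) = u(1) = 0."""
          for _ in range(n_iter):
              u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
          return u

      def v_cycle(u, f, h):
          u = smooth(u, f, h)                                # pre-smooth
          if len(u) > 5:
              r = np.zeros_like(u)                           # residual of -u'' = f
              r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
              r_c = r[::2].copy()                            # restrict (full weighting)
              r_c[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
              e_c = v_cycle(np.zeros_like(r_c), r_c, 2 * h)  # coarse-grid correction
              e = np.zeros_like(u)                           # prolong (linear interp.)
              e[::2] = e_c
              e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])
              u = u + e
          return smooth(u, f, h)                             # post-smooth

      n = 129
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]
      f = np.pi ** 2 * np.sin(np.pi * x)                     # exact solution: sin(pi*x)
      u = np.zeros(n)
      for cycle in range(10):
          u = v_cycle(u, f, h)
      print(np.abs(u - np.sin(np.pi * x)).max())             # ~ discretization error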

  8. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.

  9. Public health surveillance: historical origins, methods and evaluation.

    PubMed Central

    Declich, S.; Carter, A. O.

    1994-01-01

    In the last three decades, disease surveillance has grown into a complete discipline, quite distinct from epidemiology. This expansion into a separate scientific area within public health has not been accompanied by parallel growth in the literature about its principles and methods. The development of the fundamental concepts of surveillance systems provides a basis on which to build a better understanding of the subject. In addition, the concepts have practical value as they can be used in designing new systems as well as understanding or evaluating currently operating systems. This article reviews the principles of surveillance, beginning with a historical survey of the roots and evolution of surveillance, and discusses the goals of public health surveillance. Methods for data collection, data analysis, interpretation, and dissemination are presented, together with proposed procedures for evaluating and improving a surveillance system. Finally, some points to be considered in establishing a new surveillance system are presented. PMID:8205649

  10. A Comparative Assessment of Computer Literacy of Private and Public Secondary School Students in Lagos State, Nigeria

    ERIC Educational Resources Information Center

    Osunwusi, Adeyinka Olumuyiwa; Abifarin, Michael Segun

    2013-01-01

    The aim of this study was to conduct a comparative assessment of computer literacy of private and public secondary school students. Although the definition of computer literacy varies widely, this study treated computer literacy in terms of access to, and use of, computers and the internet, basic knowledge and skills required to use computers and…

  11. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    ERIC Educational Resources Information Center

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  12. Method for computing the optimal signal distribution and channel capacity.

    PubMed

    Shapiro, E G; Shapiro, D A; Turitsyn, S K

    2015-06-15

    An iterative method for computing the channel capacity of both discrete and continuous input, continuous output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut-Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining advantages of both the Blahut-Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet. PMID:26193496
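
    For reference, the classical Blahut-Arimoto algorithm that the paper benchmarks against can be sketched as follows for a discrete memoryless channel (assuming a strictly positive transition matrix to keep the logarithms simple); this is the textbook iteration, not the paper's new method.

      import numpy as np

      def blahut_arimoto(P, tol=1e-10, max_iter=10_000):
          """Capacity (in nats) of a DMC with transition matrix P[x, y] = p(y|x).

          Assumes P > 0 entrywise so the logarithms need no guarding."""
          r = np.full(P.shape[0], 1.0 / P.shape[0])   # input distribution, uniform start
          for _ in range(max_iter):
              q = r[:, None] * P
              q /= q.sum(axis=0, keepdims=True)       # posterior q(x|y)
              r_new = np.exp((P * np.log(q)).sum(axis=1))
              r_new /= r_new.sum()
              done = np.abs(r_new - r).max() < tol
              r = r_new
              if done:
                  break
          py = r @ P                                  # output distribution
          capacity = float(np.sum(r[:, None] * P * np.log(P / py)))
          return capacity, r

      # Binary symmetric channel, crossover 0.1: C = ln 2 - H(0.1) ~ 0.3681 nats
      P = np.array([[0.9, 0.1], [0.1, 0.9]])
      print(blahut_arimoto(P))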

  13. Comparison of methods for computing streamflow statistics for Pennsylvania streams

    USGS Publications Warehouse

    Ehlke, Marla H.; Reed, Lloyd A.

    1999-01-01

    Methods for computing streamflow statistics intended for use at ungaged locations on Pennsylvania streams are presented and compared to frequency distributions of gaged streamflow data. The streamflow statistics used in the comparisons include the 7-day 10-year low flow, 50-year flood flow, and the 100-year flood flow; additional statistics are presented. Streamflow statistics for gaged locations on streams in Pennsylvania were computed using three methods for the comparisons: 1) Log-Pearson type III frequency distribution (Log-Pearson) of continuous-record streamflow data, 2) regional regression equations developed by the U.S. Geological Survey in 1982 (WRI 82-21), and 3) regional regression equations developed by the Pennsylvania State University in 1981 (PSU-IV). The Log-Pearson distribution was considered the reference method for evaluation of the regional regression equations. Low-flow statistics were computed using the Log-Pearson distribution and WRI 82-21, whereas flood-flow statistics were computed using all three methods. The urban adjustment for PSU-IV was modified from the recommended computation to exclude Philadelphia and the surrounding areas (region 1) from the adjustment. Adjustments for storage area for PSU-IV were also slightly modified. A comparison of the 7-day 10-year low flow computed from the Log-Pearson distribution and WRI 82-21 showed that the methods produced significantly different values for about 7 percent of the state. The same methods produced 50-year and 100-year flood flows that were significantly different for about 24 percent of the state. Flood-flow statistics computed using the Log-Pearson distribution and PSU-IV were not significantly different in any region of the state. These findings are based on a statistical comparison using the t-test on signed ranks and graphical methods.

  14. Integrating Publicly Available Data to Generate Computationally Predicted Adverse Outcome Pathways for Fatty Liver.

    PubMed

    Bell, Shannon M; Angrish, Michelle M; Wood, Charles E; Edwards, Stephen W

    2016-04-01

    New in vitro testing strategies make it possible to design testing batteries for large numbers of environmental chemicals. Full utilization of the results requires knowledge of the underlying biological networks and the adverse outcome pathways (AOPs) that describe the route from early molecular perturbations to an adverse outcome. Curation of a formal AOP is a time-intensive process and a rate-limiting step in designing these test batteries. Here, we describe a method for integrating publicly available data in order to generate computationally predicted AOP (cpAOP) scaffolds, which can be leveraged by domain experts to shorten the time for formal AOP development. A network-based workflow was used to facilitate the integration of multiple data types to generate cpAOPs. Edges between graph entities were identified through direct experimental or literature information, or computationally inferred using frequent itemset mining. Data from the TG-GATEs and ToxCast programs were used to channel large-scale toxicogenomics information into a cpAOP network (cpAOPnet) of over 20,000 relationships describing connections between chemical treatments, phenotypes, and perturbed pathways as measured by differential gene expression and high-throughput screening targets. The resulting fatty liver cpAOPnet is available as a resource to the community. Subnetworks of cpAOPs for a reference chemical (carbon tetrachloride, CCl4) and outcome (fatty liver) were compared with published mechanistic descriptions. In both cases, the computational approaches approximated the manually curated AOPs. The cpAOPnet can be used for accelerating expert-curated AOP development and to identify pathway targets that lack genomic markers or high-throughput screening tests. It can also facilitate identification of key events for designing test batteries and for classification and grouping of chemicals for follow-up testing. PMID:26895641

  15. Computational Methods for Protein Identification from Mass Spectrometry Data

    PubMed Central

    McHugh, Leo; Arthur, Jonathan W

    2008-01-01

    Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification are also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology. PMID:18463710

  16. Method for implementation of recursive hierarchical segmentation on parallel computers

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates when performing a convergence check.
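
    A minimal sketch of the recursion-level scheme the patent describes, with a hypothetical per-tile kernel standing in for the actual segmentation step: levels shallower than an intermediate level fan quadrants out to worker processes, deeper levels recurse serially, and a bottom level stops the division. All level values and the kernel are illustrative assumptions.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      BOTTOM_LEVEL = 3        # recursion stops dividing here
      INTERMEDIATE_LEVEL = 1  # parallel above this level, serial below it

      def segment(tile):
          """Hypothetical per-section kernel; the real algorithm is far richer."""
          return [float(tile.mean())]

      def recurse(tile, level):
          if level >= BOTTOM_LEVEL or min(tile.shape) < 2:
              return segment(tile)
          h, w = tile.shape
          quads = [tile[:h // 2, :w // 2], tile[:h // 2, w // 2:],
                   tile[h // 2:, :w // 2], tile[h // 2:, w // 2:]]
          if level < INTERMEDIATE_LEVEL:
              # parallel phase: quadrants go to worker processes
              with ProcessPoolExecutor(max_workers=4) as pool:
                  parts = list(pool.map(recurse, quads, [level + 1] * 4))
          else:
              # serial phase: ordinary in-process recursion
              parts = [recurse(q, level + 1) for q in quads]
          return [v for part in parts for v in part]

      if __name__ == "__main__":
          image = np.arange(64 * 64, dtype=float).reshape(64, 64)
          print(len(recurse(image, 0)))   # 4**BOTTOM_LEVEL tiles -> 64 values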

  17. Artificial Intelligence Methods: Challenge in Computer Based Polymer Design

    NASA Astrophysics Data System (ADS)

    Rusu, Teodora; Pinteala, Mariana; Cartwright, Hugh

    2009-08-01

    This paper deals with the use of Artificial Intelligence (AI) methods in the design of new molecules possessing desired physical, chemical, and biological properties. This is an important and difficult problem in the chemical, material, and pharmaceutical industries. Traditional methods involve a laborious and expensive trial-and-error procedure, but computer-assisted approaches offer many advantages in the automation of molecular design.

  18. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  19. Calculating PI Using Historical Methods and Your Personal Computer.

    ERIC Educational Resources Information Center

    Mandell, Alan

    1989-01-01

    Provides a software program for determining PI to the 15th place after the decimal. Explores the history of determining the value of PI from Archimedes to present computer methods. Investigates Wallis's, Leibniz's, and Buffon's methods. Written for Tandy GW-BASIC (IBM compatible) with 384K. Suggestions for Apple II's are given. (MVL)
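
    Two of the historical methods the article mentions are easy to sketch in a modern language (Python here rather than the article's GW-BASIC):

      import math

      def wallis_pi(n_terms: int) -> float:
          """Wallis (1655): pi/2 = prod_{k>=1} 4k^2 / (4k^2 - 1)."""
          prod = 1.0
          for k in range(1, n_terms + 1):
              prod *= 4.0 * k * k / (4.0 * k * k - 1.0)
          return 2.0 * prod

      def leibniz_pi(n_terms: int) -> float:
          """Leibniz (1674): pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
          return 4.0 * math.fsum((-1.0) ** k / (2 * k + 1) for k in range(n_terms))

      for n in (10, 1000, 100_000):
          print(n, wallis_pi(n), leibniz_pi(n))

    Both series converge far too slowly to reach 15 decimal places in practice, which is part of what makes comparing historical methods instructive.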

  20. Checklist and Pollard Walk butterfly survey methods on public lands

    USGS Publications Warehouse

    Royer, R.A.; Austin, J.E.; Newton, W.E.

    1998-01-01

    Checklist and Pollard Walk butterfly survey methods were contemporaneously applied to seven public sites in North Dakota during the summer of 1995. Results were compared for effect of method and site on total number of butterflies and total number of species detected per hour. Checklist searching produced significantly more butterfly detections per hour than Pollard Walks at all sites. Number of species detected per hour did not differ significantly either among sites or between methods. Many species were detected by only one method, and at most sites generalist and invader species were more likely to be observed during checklist searches than during Pollard Walks. Results indicate that checklist surveys are a more efficient means for initial determination of a species list for a site, whereas for long-term monitoring the Pollard Walk is more practical and statistically manageable. Pollard Walk transects are thus recommended once a prairie butterfly fauna has been defined for a site by checklist surveys.

  1. Probability computations using the SIGMA-PI method on a personal computer

    SciTech Connect

    Haskin, F.E.; Lazo, M.S.; Heger, A.S.

    1990-09-30

    The SIGMA-PI (ΣΠ) method, as implemented in the SIGPI computer code, is designed to accurately and efficiently evaluate the probability of Boolean expressions in disjunctive normal form given the base event probabilities. The method is not limited to problems in which base event probabilities are small, nor to Boolean expressions that exclude the complements of base events, nor to problems in which base events are independent. The feasibility of implementing the ΣΠ method on a personal computer has been evaluated, and a version of the SIGPI code capable of quantifying simple Boolean expressions with independent base events on the personal computer has been developed. Tasks required for a fully functional personal computer version of SIGPI have been identified, together with enhancements that could be implemented to improve the utility and efficiency of the code.
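
    The quantity being evaluated can be illustrated with brute-force inclusion-exclusion over the DNF terms for the independent-events case; this shows the probability SIGPI computes, not the code's efficient algorithm (which this sketch is not).

      from itertools import combinations

      def dnf_probability(terms, p):
          """Exact P(T1 or T2 or ...) for a DNF over independent base events.

          `terms` is a list of sets of base-event names (each term is an AND);
          `p` maps name -> event probability. Inclusion-exclusion over terms:
          the AND of several terms is the product over the union of their events."""
          total = 0.0
          for k in range(1, len(terms) + 1):
              sign = (-1.0) ** (k + 1)
              for combo in combinations(terms, k):
                  events = set().union(*combo)
                  prob = 1.0
                  for e in events:
                      prob *= p[e]
                  total += sign * prob
          return total

      # (A and B) or (B and C) with P(A)=0.5, P(B)=0.4, P(C)=0.3
      terms = [{"A", "B"}, {"B", "C"}]
      p = {"A": 0.5, "B": 0.4, "C": 0.3}
      print(dnf_probability(terms, p))   # 0.2 + 0.12 - 0.06 = 0.26

    The number of inclusion-exclusion terms grows exponentially with the number of DNF terms, which is precisely why an efficient implementation such as SIGPI is needed.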

  2. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors need to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.

  3. Computational methods for structural load and resistance modeling

    NASA Technical Reports Server (NTRS)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given, as well as several illustrative examples verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results clearly show the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.

  4. Managing expectations when publishing tools and methods for computational proteomics.

    PubMed

    Martens, Lennart; Kohlbacher, Oliver; Weintraub, Susan T

    2015-05-01

    Computational tools are pivotal in proteomics because they are crucial for identification, quantification, and statistical assessment of data. The gateway to finding the best choice of a tool or approach for a particular problem is frequently journal articles, yet there is often an overwhelming variety of options that makes it hard to decide on the best solution. This is particularly difficult for nonexperts in bioinformatics. The maturity, reliability, and performance of tools can vary widely because publications may appear at different stages of development. A novel idea might merit early publication despite only offering proof-of-principle, while it may take years before a tool can be considered mature, and by that time it might be difficult for a new publication to be accepted because of a perceived lack of novelty. After discussions with members of the computational mass spectrometry community, we describe here proposed recommendations for organization of informatics manuscripts as a way to set the expectations of readers (and reviewers) through three different manuscript types that are based on existing journal designations. Brief Communications are short reports describing novel computational approaches where the implementation is not necessarily production-ready. Research Articles present both a novel idea and mature implementation that has been suitably benchmarked. Application Notes focus on a mature and tested tool or concept and need not be novel but should offer advancement from improved quality, ease of use, and/or implementation. Organizing computational proteomics contributions into these three manuscript types will facilitate the review process and will also enable readers to identify the maturity and applicability of the tool for their own workflows. PMID:25764342

  5. A Lanczos eigenvalue method on a parallel computer

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.; Fulton, Robert E.

    1987-01-01

    Eigenvalue analysis of complex structures is a computationally intensive task which can benefit significantly from new and impending parallel computers. This study reports on a parallel computer implementation of the Lanczos method for free vibration analysis. The approach used here subdivides the major Lanczos calculation tasks into subtasks and introduces parallelism down to the subtask level, such as matrix decomposition and forward/backward substitution. The method was implemented on a commercial parallel computer, and results were obtained for a long flexible space structure. While parallel computing efficiency for the Lanczos method was good for a moderate number of processors for the test problem, the greatest reduction in time was realized for the decomposition of the stiffness matrix, a calculation which took 70 percent of the time in the sequential program and 25 percent of the time on eight processors. For a sample calculation of the twenty lowest frequencies of a 486-degree-of-freedom problem, the total sequential computing time was reduced by almost a factor of ten using 16 processors.
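
    For readers unfamiliar with the method, a minimal serial Lanczos sketch follows; the parallel decomposition discussed in the study is not shown, and the test matrix is an arbitrary random symmetric matrix.

      import numpy as np

      def lanczos_ritz(A, k, seed=0):
          """k-step Lanczos with full reorthogonalization; returns Ritz values."""
          n = A.shape[0]
          rng = np.random.default_rng(seed)
          Q = np.zeros((n, k))
          alpha = np.zeros(k)
          beta = np.zeros(k - 1)
          q = rng.standard_normal(n)
          q /= np.linalg.norm(q)
          q_prev = np.zeros(n)
          b = 0.0
          for j in range(k):
              Q[:, j] = q
              w = A @ q - b * q_prev
              alpha[j] = q @ w
              w -= alpha[j] * q
              # full reorthogonalization keeps the basis numerically orthogonal
              w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
              if j < k - 1:
                  b = np.linalg.norm(w)
                  beta[j] = b
                  q_prev, q = q, w / b
          T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
          return np.linalg.eigvalsh(T)

      rng = np.random.default_rng(1)
      B = rng.standard_normal((200, 200))
      A = (B + B.T) / 2                    # symmetric test matrix
      ritz = lanczos_ritz(A, 30)
      exact = np.linalg.eigvalsh(A)
      print(ritz[0], exact[0])             # extreme eigenvalues converge first
      print(ritz[-1], exact[-1])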

  6. Lifelong Learning through Computer-Mediated Communication: Potential Roles for UK Public Libraries.

    ERIC Educational Resources Information Center

    Kendall, Margaret

    2000-01-01

    Computer-mediated communication (CMC) in public libraries can contribute to lifelong learning and social inclusion. Its use in libraries is hindered by concerns about privacy, limited resources, and beliefs that free Internet access is justifiable only for information, not communication. CMC in fiction/reader services, family history, community…

  7. An Exploratory Study of Malaysian Publication Productivity in Computer Science and Information Technology.

    ERIC Educational Resources Information Center

    Gu, Yinian

    2002-01-01

    Explores the Malaysian computer science and information technology publication productivity as indicated by data collected from three Web-based databases. Relates possible reasons for the amount and pattern of contributions to the size of researcher population, the availability of refereed scholarly journals, and the total expenditure allocated to…

  8. Advanced Telecommunications and Computer Technologies in Georgia Public Elementary School Library Media Centers.

    ERIC Educational Resources Information Center

    Rogers, Jackie L.

    The purpose of this study was to determine what recent progress had been made in Georgia public elementary school library media centers regarding access to advanced telecommunications and computer technologies as a result of special funding. A questionnaire addressed the following areas: automation and networking of the school library media center…

  9. Selection and Integration of a Computer Simulation for Public Budgeting and Finance (PBS 116).

    ERIC Educational Resources Information Center

    Banas, Ed Jr.

    1998-01-01

    Describes the development of a course on public budgeting and finance, which integrated the use of SimCity Classic, a computer-simulation software, with traditional lecture, guest speakers, and collaborative-learning activities. Explains the rationale for the course design and discusses the results from the first semester of teaching the course.…

  10. The Ever-Present Demand for Public Computing Resources. CDS Spotlight

    ERIC Educational Resources Information Center

    Pirani, Judith A.

    2014-01-01

    This Core Data Service (CDS) Spotlight focuses on public computing resources, including lab/cluster workstations in buildings, virtual lab/cluster workstations, kiosks, laptop and tablet checkout programs, and workstation access in unscheduled classrooms. The findings are derived from 758 CDS 2012 participating institutions. A dataset of 529…

  11. Trends in Access to Computing Technology and Its Use in Chicago Public Schools, 2001-2005

    ERIC Educational Resources Information Center

    Coca, Vanessa; Allensworth, Elaine M.

    2007-01-01

    Five years after Consortium on Chicago School Research (CCSR) research revealed a "digital divide" among Chicago Public Schools (CPS) and limited computer usage by staff and students, this new study shows that district schools have overcome many of these obstacles, particularly in terms of technology access and use among teachers and…

  12. Computational methods to obtain time optimal jet engine control

    NASA Technical Reports Server (NTRS)

    Basso, R. J.; Leake, R. J.

    1976-01-01

    Dynamic Programming and the Fletcher-Reeves Conjugate Gradient Method are two existing methods which can be applied to solve a general class of unconstrained fixed time, free right end optimal control problems. New techniques are developed to adapt these methods to solve a time optimal control problem with state variable and control constraints. Specifically, they are applied to compute a time optimal control for a jet engine control problem.

  13. A stochastic method for computing hadronic matrix elements

    DOE PAGESBeta

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; Drach, Vincent; Jansen, Karl; Hadjiyiannakou, Kyriakos; Renner, Dru B.

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method, offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.

  14. Fully consistent CFD methods for incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kolmogorov, D. K.; Shen, W. Z.; Sørensen, N. N.; Sørensen, J. N.

    2014-06-01

    Collocated-grid-based CFD methods are nowadays among the most efficient tools for computing the flows past wind turbines. To ensure the robustness of these methods, special attention must be paid to the well-known problem of pressure-velocity coupling. To enforce pressure-velocity coupling on collocated grids, many commercial codes use the so-called momentum interpolation method of Rhie and Chow [1]. As is known, this method and some of its widely used modifications produce solutions that depend on the time step at convergence. In this paper, the magnitude of this dependence is shown to contribute about 0.5% of the total error in a typical turbulent flow computation. If coarse grids are used, however, the standard interpolation methods exhibit much stronger inconsistency. To overcome the problem, a recently developed interpolation method that is independent of the time step is used. It is shown that, in comparison to another time-step-independent method, this method may enhance the convergence rate of the SIMPLEC algorithm by up to 25%. The method is verified using turbulent flow computations around a NACA 64618 airfoil and the roll-up of a shear layer, which may appear in wind turbine wakes.

  15. Practical Use of Computationally Frugal Model Analysis Methods

    DOE PAGESBeta

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.

  16. Practical Use of Computationally Frugal Model Analysis Methods

    SciTech Connect

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.

  17. Geometrical MTF computation method based on the irradiance model

    NASA Astrophysics Data System (ADS)

    Lin, P.-D.; Liu, C.-S.

    2011-01-01

    The Modulation Transfer Function (MTF) is a measure of an optical system's ability to transfer contrast from the specimen to the image plane at a specific resolution. It can be computed numerically by geometrical optics or measured experimentally by imaging a knife edge or a bar-target pattern of varying spatial frequency. Previously, MTF accuracy was generally affected by the size of the mesh on the image plane. This paper presents a new MTF computation method based on the irradiance model, without counting the number of rays hitting each grid cell. To verify the method, the MTF in the sagittal and meridional directions of an axis-symmetrical optical system is computed by both the ray-counting and the proposed methods. It is found that the grid size meshed on the image plane significantly affects the MTF given by the ray-counting method, sometimes producing significantly erroneous results. The proposed irradiance method is immune to issues of grid size. The CPU computation time for the two methods is approximately the same.

  18. Applications of computer-intensive statistical methods to environmental research.

    PubMed

    Pitt, D G; Kreutzweiser, D P

    1998-02-01

    Conventional statistical approaches rely heavily on the properties of the central limit theorem to bridge the gap between the characteristics of a sample and some theoretical sampling distribution. Problems associated with nonrandom sampling, unknown population distributions, heterogeneous variances, small sample sizes, and missing data jeopardize the assumptions of such approaches and cast doubt on conclusions. Conventional nonparametric alternatives offer freedom from distribution assumptions, but design limitations and loss of power can be serious drawbacks. With the data-processing capacity of today's computers, a new dimension of distribution-free statistical methods has evolved that addresses many of the limitations of conventional parametric and nonparametric methods. Computer-intensive statistical methods involve reshuffling, resampling, or simulating a data set thousands of times to empirically define a sampling distribution for a chosen test statistic. The only assumption necessary for valid results is the random assignment of experimental units to the test groups or treatments. Application to a real data set illustrates the advantages of these methods, including freedom from distribution assumptions without loss of power, complete choice over test statistics, easy adaptation to design complexities and missing data, and considerable intuitive appeal. The illustrations also reveal that computer-intensive methods can be more time consuming than conventional methods, and the amount of computer code required to orchestrate reshuffling, resampling, or simulation procedures can be appreciable. PMID:9515080
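
    A minimal example of the reshuffling idea: a two-sample permutation test of a difference in means, whose only assumption is the random assignment of units to groups. The data below are hypothetical.

      import numpy as np

      def permutation_test(x, y, n_perm=10_000, seed=0):
          """Two-sample permutation test on the difference in means."""
          rng = np.random.default_rng(seed)
          observed = x.mean() - y.mean()
          pooled = np.concatenate([x, y])
          count = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)           # reshuffle group labels
              diff = pooled[:len(x)].mean() - pooled[len(x):].mean()
              if abs(diff) >= abs(observed):
                  count += 1
          return observed, (count + 1) / (n_perm + 1)   # add-one for validity

      # Hypothetical control vs. treatment measurements
      x = np.array([5.1, 4.8, 6.0, 5.5, 5.9, 4.7])
      y = np.array([6.3, 6.8, 5.9, 7.1, 6.6, 7.0])
      obs, p = permutation_test(x, y)
      print(f"observed difference = {obs:.2f}, p = {p:.4f}")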

  19. Software for computing eigenvalue bounds for iterative subspace matrix methods

    NASA Astrophysics Data System (ADS)

    Shepard, Ron; Minkoff, Michael; Zhou, Yunkai

    2005-07-01

    This paper describes software for computing eigenvalue bounds for the standard and generalized Hermitian eigenvalue problem as described in [Y. Zhou, R. Shepard, M. Minkoff, Computing eigenvalue bounds for iterative subspace matrix methods, Comput. Phys. Comm. 167 (2005) 90-102]. The software discussed in this manuscript applies to any subspace method, including Lanczos, Davidson, SPAM, Generalized Davidson Inverse Iteration, Jacobi-Davidson, and the Generalized Jacobi-Davidson methods, and it is applicable to either outer or inner eigenvalues. This software can be applied during the subspace iterations in order to truncate the iterative process and to avoid unnecessary effort when converging specific eigenvalues to a required target accuracy, and it can be applied to the final set of Ritz values to assess the accuracy of the converged results. Program summary: Title of program: SUBROUTINE BOUNDS_OPT. Catalogue identifier: ADVE. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVE. Computers: any computer that supports a Fortran 90 compiler. Operating systems: any operating system that supports a Fortran 90 compiler. Programming language: Standard Fortran 90. High speed storage required: 5m+5 working-precision and 2m+7 integer words for m Ritz values. No. of bits in a word: the floating point working precision is parameterized with the symbolic constant WP. No. of lines in distributed program, including test data, etc.: 2452. No. of bytes in distributed program, including test data, etc.: 281 543. Distribution format: tar.gz. Nature of physical problem: The computational solution of eigenvalue problems using iterative subspace methods has widespread applications in the physical sciences and engineering as well as other areas of mathematical modeling (economics, social sciences, etc.). The accuracy of the solution of such problems and the utility of those errors is a fundamental problem that is of

  20. Integration of computational methods into automotive wind tunnel testing

    SciTech Connect

    Katz, J.

    1989-01-01

    This paper discusses the aerodynamics of a generic, enclosed-wheel racing-car shape (without wheels), investigated numerically and compared with one-quarter-scale wind-tunnel data. Because both methods lack perfection in simulating actual road conditions, a complementary application of the two methods was studied. The computations served to correct the high-blockage wind-tunnel results and provided detailed pressure data which improved the physical understanding of the flow field. The experimental data were used here mainly to provide information on the location of flow-separation lines and on the aerodynamic loads; these in turn were used to validate and calibrate the computations.

  1. Methods to enhance capacity of medical teachers for research publications.

    PubMed

    Asokan, Neelakandhan; Shaji, Kunnukattil S

    2016-01-01

    In 2009, the Medical Council of India (MCI) made a certain number of research publications mandatory for the promotion of medical teachers to higher posts. In response, a series of workshops on research and scientific writing was held for faculty members of a medical college. We decided to explore the opinions and perceptions of the participants on the need and relevance of such efforts, using qualitative methods such as focus-group discussions (FGDs) and semi-structured interviews. The main themes that emerged from the study were: a) presently, there are several hurdles to research and publication; b) recent attempts to upgrade skills in research methodology and scientific writing are encouraging, but need to be sustained; c) the traditional role of clinician-teacher is being replaced with that of clinician-teacher-researcher. Suggestions for the future included: a) combined workshops on research methodology and scientific writing skills, b) a continuous institutional support system for research and publication, and c) effective mentorship. PMID:27350712

  2. Learning From Engineering and Computer Science About Communicating The Field To The Public

    NASA Astrophysics Data System (ADS)

    Moore, S. L.; Tucek, K.

    2014-12-01

    The engineering and computer science community has taken the lead in actively informing the public about its discipline, including its societal contributions and career opportunities. These efforts have been intensified with regard to informing underrepresented populations in STEM about engineering and computer science. Are there lessons to be learned by the geoscience community in communicating the societal impacts and career opportunities in the geosciences, especially in regard to broadening participation and meeting the Next Generation Science Standards? An estimated 35 percent increase in the number of geoscientist jobs in the United States forecast for the period between 2008 and 2018, combined with majority populations becoming minority populations, makes it imperative that we improve how we increase the public's understanding of the geosciences and how we present our message to targeted populations. This talk will look at recommendations from the National Academy of Engineering's Changing the Conversation: Messages for Improving the Public Understanding of Engineering, and communication strategies by organizations such as Code.org, to highlight practices that the geoscience community can adopt to increase public awareness of the societal contributions of the geosciences, the career opportunities in the geosciences, and the importance of the geosciences in the Next Generation Science Standards. An effort to communicate geoscience to the public, Earth is Calling, will be compared and contrasted with these efforts, and used as an example of how geological societies and other organizations can engage the general public and targeted groups about the geosciences.

  3. Practical Use of Computationally Frugal Model Analysis Methods.

    PubMed

    Hill, Mary C; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2016-03-01

    Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics make it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring 1000s, 10,000s, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and obtain greater scientific insight from ongoing and future modeling efforts. PMID:25810333

  4. Automatic detection of lung nodules in computed tomography images: training and validation of algorithms using public research databases

    NASA Astrophysics Data System (ADS)

    Camarlinghi, Niccolò

    2013-09-01

    Lung cancer is one of the main public health issues in developed countries. Lung cancer typically manifests itself as non-calcified pulmonary nodules that can be detected by reading lung Computed Tomography (CT) images. To assist radiologists in reading images, researchers began developing, a decade ago, Computer Aided Detection (CAD) methods capable of detecting lung nodules. In this work, a CAD composed of two subprocedures is presented: one devoted to the identification of parenchymal nodules, and one devoted to the identification of nodules attached to the pleura surface. Both CADs are upgrades of two methods previously presented as the Voxel Based Neural Approach (VBNA) CAD. The novelty of this paper consists in the massive training using the public research Lung Image Database Consortium (LIDC) database and in the implementation of new features for classification with respect to the original VBNA method. Finally, the proposed CAD is blindly validated on the ANODE09 dataset. The result of the validation is a score of 0.393, which corresponds to the average sensitivity of the CAD computed at seven predefined false positive rates: 1/8, 1/4, 1/2, 1, 2, 4, and 8 FP/CT.

  5. A comparative study of computational methods in cosmic gas dynamics

    NASA Technical Reports Server (NTRS)

    Van Albada, G. D.; Van Leer, B.; Roberts, W. W., Jr.

    1982-01-01

    Many theoretical investigations of fluid flows in astrophysics require extensive numerical calculations. The selection of an appropriate computational method is, therefore, important for the astronomer who has to solve an astrophysical flow problem. The present investigation aims to provide an informational basis for such a selection by comparing a variety of numerical methods with the aid of a test problem. The test problem involves a simple, one-dimensional model of the gas flow in a spiral galaxy. The numerical methods considered include the beam scheme, Godunov's method (G), the second-order flux-splitting method (FS2), MacCormack's method, and the flux-corrected transport methods of Boris and Book (1973). It is found that the best second-order method (FS2) outperforms the best first-order method (G) by a huge margin.

  6. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Z.; Falkowski, P.

    1990-07-17

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.

  7. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Zbigniew; Falkowski, Paul

    1990-01-01

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.

  8. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2003-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  9. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  10. A computational method for automated characterization of genetic components.

    PubMed

    Yordanov, Boyan; Dalchau, Neil; Grant, Paul K; Pedersen, Michael; Emmott, Stephen; Haseloff, Jim; Phillips, Andrew

    2014-08-15

    The ability to design and construct synthetic biological systems with predictable behavior could enable significant advances in medical treatment, agricultural sustainability, and bioenergy production. However, to reach a stage where such systems can be reliably designed from biological components, integrated experimental and computational techniques that enable robust component characterization are needed. In this paper we present a computational method for the automated characterization of genetic components. Our method exploits a recently developed multichannel experimental protocol and integrates bacterial growth modeling, Bayesian parameter estimation, and model selection, together with data processing steps that are amenable to automation. We implement the method within the Genetic Engineering of Cells modeling and design environment, which enables both characterization and design to be integrated within a common software framework. To demonstrate the application of the method, we quantitatively characterize a synthetic receiver device that responds to the 3-oxohexanoyl-homoserine lactone signal, across a range of experimental conditions. PMID:24628037

  11. PSD computations using Welch's method. [Power Spectral Density (PSD)

    SciTech Connect

    Solomon, Jr, O M

    1991-12-01

    This report describes Welch's method for computing Power Spectral Densities (PSDs). We first describe the bandpass filter method, which uses filtering, squaring, and averaging operations to estimate a PSD. Second, we delineate the relationship of Welch's method to the bandpass filter method. Third, the frequency-domain signal-to-noise ratio for a sine wave in white noise is derived. This derivation includes the computation of the noise floor due to quantization noise. The signal-to-noise ratio and noise floor depend on the FFT length and window. Fourth, the variance of the Welch PSD is discussed via chi-square random variables and degrees of freedom. This report contains many examples, figures, and tables to illustrate the concepts. 26 refs.
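
    A compact sketch of Welch's method as described (modified periodograms of overlapping, windowed segments are averaged); the segment length, overlap fraction, and Hann window below are common choices, not values from the report.

      import numpy as np

      def welch_psd(x, fs, seg_len=256, overlap=0.5):
          """Welch PSD estimate: one-sided density in units of x**2 / Hz."""
          step = int(seg_len * (1 - overlap))
          window = np.hanning(seg_len)
          scale = fs * np.sum(window ** 2)          # density normalization
          starts = range(0, len(x) - seg_len + 1, step)
          psd = np.zeros(seg_len // 2 + 1)
          for s in starts:
              spec = np.fft.rfft(x[s:s + seg_len] * window)
              psd += np.abs(spec) ** 2              # modified periodogram
          psd *= 1.0 / (len(starts) * scale)        # average the segments
          psd[1:-1] *= 2                            # fold negative frequencies
          freqs = np.fft.rfftfreq(seg_len, d=1 / fs)
          return freqs, psd

      # 50 Hz sine in white noise, sampled at 1 kHz
      fs = 1000.0
      t = np.arange(8192) / fs
      x = np.sin(2 * np.pi * 50 * t) \
          + 0.5 * np.random.default_rng(0).standard_normal(t.size)
      f, P = welch_psd(x, fs)
      print(f[np.argmax(P)])                        # peak near 50 Hz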

  12. Decluttering Methods for Computer-Generated Graphic Displays

    NASA Technical Reports Server (NTRS)

    Schultz, E. Eugene, Jr.

    1986-01-01

    Symbol simplification and contrasting enhance a viewer's ability to detect a particular symbol. Report describes experiments designed to indicate how various decluttering methods affect viewers' abilities to distinguish essential from nonessential features on computer-generated graphic displays. Results indicate that partial removal of nonessential graphic features through symbol simplification is as effective in decluttering as total removal of nonessential graphic features.

  13. A Probabilistic Method for Computing Term-by-Term Relationships.

    ERIC Educational Resources Information Center

    Wong, S. K. M.; Yao, Y. Y.

    1993-01-01

    Suggests a probabilistic method to compute the term relationships from relevance information, which complements the studies on a nonprobabilistic technique called pseudo-classification. A quadratic ranking function is derived by incorporating the term-by-term relationships. Procedures for estimating the required parameters are provided by…

  14. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
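
    To make the multigrid idea concrete, here is a minimal two-grid cycle for the 1D Poisson problem -u'' = f with weighted-Jacobi smoothing and a coarse-grid correction. This is a generic sketch under assumed grid sizes and smoother settings, not the Proteus implementation; a real multigrid code would recurse where the coarse solve appears.

        import numpy as np

        def smooth(u, f, h, sweeps, w=2/3):
            # Weighted-Jacobi sweeps for the discrete -u'' = f (damps rough error modes).
            for _ in range(sweeps):
                u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def coarse_solve(f2, h2):
            # Direct solve on the coarse grid; full multigrid would recurse here.
            m = f2.size - 2
            A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1)) / h2**2
            e = np.zeros_like(f2)
            e[1:-1] = np.linalg.solve(A, f2[1:-1])
            return e

        def two_grid(u, f, h):
            u = smooth(u, f, h, sweeps=3)                    # pre-smoothing
            r = residual(u, f, h)
            r2 = np.zeros((r.size + 1) // 2)                 # full-weighting restriction
            r2[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
            e2 = coarse_solve(r2, 2 * h)                     # coarse-grid correction
            e = np.zeros_like(u)
            e[::2] = e2                                      # linear prolongation
            e[1::2] = 0.5 * (e2[:-1] + e2[1:])
            return smooth(u + e, f, h, sweeps=3)             # post-smoothing

        n = 129
        h = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        f = np.pi**2 * np.sin(np.pi * x)                     # exact solution: sin(pi x)
        u = np.zeros(n)
        for _ in range(10):
            u = two_grid(u, f, h)
        print(np.max(np.abs(u - np.sin(np.pi * x))))         # ~O(h^2) discretization error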

  15. Micro-computed tomography: an alternative method for shark ageing.

    PubMed

    Geraghty, P T; Jones, A S; Stewart, J; Macbeth, W G

    2012-04-01

    Micro-computed tomography (microCT) produced 3D reconstructions of shark Carcharhinus brevipinna vertebrae that could be virtually sectioned along any desired plane, and upon which growth bands were readily visible. When compared to manual sectioning, it proved to be a valid and repeatable means of ageing and offers several distinct advantages over other ageing methods. PMID:22497384

  16. pyro: Python-based tutorial for computational methods for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.

  17. EQUILIBRIUM AND NONEQUILIBRIUM FOUNDATIONS OF FREE ENERGY COMPUTATIONAL METHODS

    SciTech Connect

    C. JARZYNSKI

    2001-03-01

    Statistical mechanics provides a rigorous framework for the numerical estimation of free energy differences in complex systems such as biomolecules. This paper presents a brief review of the statistical mechanical identities underlying a number of techniques for computing free energy differences. Both equilibrium and nonequilibrium methods are covered.
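
    The two routes the review surveys can be written down compactly. The following are the standard textbook statements, in LaTeX notation, of thermodynamic integration (the equilibrium route) and the nonequilibrium work relation; they are not equations transcribed from the paper.

        % Equilibrium route: thermodynamic integration along a switching parameter
        \Delta F = \int_0^1 \left\langle \frac{\partial H_\lambda}{\partial \lambda} \right\rangle_{\lambda} \, \mathrm{d}\lambda

        % Nonequilibrium route: free energy from an ensemble of irreversible work values
        \mathrm{e}^{-\beta \Delta F} = \left\langle \mathrm{e}^{-\beta W} \right\rangle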

  18. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  19. A Higher Order Iterative Method for Computing the Drazin Inverse

    PubMed Central

    Soleymani, F.; Stanimirović, Predrag S.

    2013-01-01

    A method with high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method could be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, alongside its use in preconditioning linear systems of equations, will be presented to clarify the contribution of the paper. PMID:24222747
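
    For context, the simplest member of this family of iterative inverse-finders is the quadratically convergent Newton-Schulz scheme sketched below; the paper's method is of higher order and extends to the Drazin inverse, which this minimal sketch does not cover.

        import numpy as np

        def newton_schulz(A, tol=1e-12, max_iter=100):
            # Iterate X <- X(2I - AX); converges to inv(A) when ||I - A X0|| < 1.
            n = A.shape[0]
            X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # classical safe start
            I = np.eye(n)
            for _ in range(max_iter):
                R = I - A @ X
                if np.linalg.norm(R) < tol:
                    break
                X = X @ (I + R)          # algebraically the same as X(2I - AX)
            return X

        A = np.array([[4.0, 1.0], [2.0, 3.0]])
        print(np.allclose(newton_schulz(A), np.linalg.inv(A)))   # True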

  20. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Ramos, Jeremy (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  1. Stress intensity estimates by a computer assisted photoelastic method

    NASA Technical Reports Server (NTRS)

    Smith, C. W.

    1977-01-01

    Following an introductory history, the frozen stress photoelastic method is reviewed together with analytical and experimental aspects of cracks in photoelastic models. Analytical foundations are then presented upon which a computer assisted frozen stress photoelastic technique is based for extracting estimates of stress intensity factors from three-dimensional cracked body problems. The use of the method is demonstrated for two currently important three-dimensional crack problems.

  2. Public library computer training for older adults to access high-quality Internet health information

    PubMed Central

    Xie, Bo; Bugg, Julie M.

    2010-01-01

    An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at a state university. One hundred and thirty-one older adults aged 54–89 participated in the study between September 2007 and July 2008. Key findings include: a) participants had overwhelmingly positive perceptions of the training program; b) after learning about two NIH websites (http://nihseniorhealth.gov and http://medlineplus.gov) from the training, many participants started using these online resources to find high quality health and medical information and, further, to guide their decision-making regarding a health- or medically-related matter; and c) computer anxiety significantly decreased (p < .001) while computer interest and efficacy significantly increased (p = .001 and p < .001, respectively) from pre- to post-training, suggesting statistically significant improvements in computer attitudes between pre- and post-training. The findings have implications for public libraries, LIS academic programs, and other organizations interested in providing similar programs in their communities. PMID:20161649

  3. Public consultation. Up and ATAM (aims, timing, audience, method).

    PubMed

    Khan, U

    1998-04-30

    Although the NHS has some shining examples of public and user involvement, many still view it as an optional extra. Policy makers need to adopt a broader strategy for involving users, carers, staff and the wider public. Badly done public consultation will cause problems for policy makers, alienate participants and fuel public cynicism. PMID:10180417

  4. [Public health systems and methods of their financing].

    PubMed

    Kim, S V

    2001-01-01

    A correlation is disclosed between the type of state, as regards public consciousness (authoritarian, liberal, democratic), and the type of public health system. The type of public health system determines the ways it is financed (centralized management, tariff regulation, and free prices) and the forms of regulation of financial flows in public health. PMID:11593814

  5. A Parallel Iterative Method for Computing Molecular Absorption Spectra.

    PubMed

    Koval, Peter; Foerster, Dietrich; Coulaud, Olivier

    2010-09-14

    We describe a fast parallel iterative method for computing molecular absorption spectra within TDDFT linear response and using the LCAO method. We use a local basis of "dominant products" to parametrize the space of orbital products that occur in the LCAO approach. In this basis, the dynamic polarizability is computed iteratively within an appropriate Krylov subspace. The iterative procedure uses a matrix-free GMRES method to determine the (interacting) density response. The resulting code is about 1 order of magnitude faster than our previous full-matrix method. This acceleration makes the speed of our TDDFT code comparable with codes based on Casida's equation. The implementation of our method uses hybrid MPI and OpenMP parallelization in which load balancing and memory access are optimized. To validate our approach and to establish benchmarks, we compute spectra of large molecules on various types of parallel machines. The methods developed here are fairly general, and we believe they will find useful applications in molecular physics/chemistry, even for problems that are beyond TDDFT, such as organic semiconductors, particularly in photovoltaics. PMID:26616067
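
    The matrix-free aspect is easy to illustrate: a Krylov solver such as GMRES only ever needs the action of the operator on a vector. The sketch below uses SciPy with a stand-in tridiagonal operator; the actual TDDFT response kernel and the MPI/OpenMP parallelization are beyond this illustration.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        n = 200

        def apply_A(v):
            # Apply a tridiagonal stencil without ever assembling the matrix.
            out = 2.0 * v
            out[:-1] -= v[1:]
            out[1:] -= 0.95 * v[:-1]
            return out

        A = LinearOperator((n, n), matvec=apply_A)
        b = np.ones(n)
        x, info = gmres(A, b)
        print(info, np.linalg.norm(apply_A(x) - b))   # info == 0 signals convergence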

  6. Three-dimensional cardiac computational modelling: methods, features and applications.

    PubMed

    Lopez-Perez, Alejandro; Sebastian, Rafael; Ferrero, Jose M

    2015-01-01

    The combination of computational models and biophysical simulations can help to interpret an array of experimental data and contribute to the understanding, diagnosis and treatment of complex diseases such as cardiac arrhythmias. For this reason, three-dimensional (3D) cardiac computational modelling is currently a rising field of research. The advance of medical imaging technology over the last decades has allowed the evolution from generic to patient-specific 3D cardiac models that faithfully represent the anatomy and different cardiac features of a given living subject. Here we analyse sixty representative 3D cardiac computational models developed and published during the last fifty years, describing their information sources, features, development methods and online availability. This paper also reviews the necessary components to build a 3D computational model of the heart aimed at biophysical simulation, paying special attention to cardiac electrophysiology (EP), and the existing approaches to incorporate those components. We assess the challenges associated with the different steps of the building process, from the processing of raw clinical or biological data to the final application, including image segmentation, inclusion of substructures and meshing among others. We briefly outline the personalisation approaches that are currently available in 3D cardiac computational modelling. Finally, we present examples of several specific applications, mainly related to cardiac EP simulation and model-based image analysis, showing the potential usefulness of 3D cardiac computational modelling in clinical environments as a tool to aid in the prevention, diagnosis and treatment of cardiac diseases. PMID:25928297

  7. Analysis and optimization of cyclic methods in orbit computation

    NASA Technical Reports Server (NTRS)

    Pierce, S.

    1973-01-01

    The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.

  8. Computational Methods for Structural Mechanics and Dynamics, part 1

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

  9. A computationally efficient particle-simulation method suited to vector-computer architectures

    SciTech Connect

    McDonald, J.D.

    1990-01-01

    Recent interest in a National Aero-Space Plane (NASP) and various Aero-assisted Space Transfer Vehicles (ASTVs) presents the need for a greater understanding of high-speed rarefied flight conditions. Particle simulation techniques such as the Direct Simulation Monte Carlo (DSMC) method are well suited to such problems, but the high cost of computation limits the application of the methods to two-dimensional or very simple three-dimensional problems. This research re-examines the algorithmic structure of existing particle simulation methods and re-structures them to allow efficient implementation on vector-oriented supercomputers. A brief overview of the DSMC method and the Cray-2 vector computer architecture are provided, and the elements of the DSMC method that inhibit substantial vectorization are identified. One such element is the collision selection algorithm. A complete reformulation of underlying kinetic theory shows that this may be efficiently vectorized for general gas mixtures. The mechanics of collisions are vectorizable in the DSMC method, but several optimizations are suggested that greatly enhance performance. This thesis also proposes a new mechanism for the exchange of energy between vibration and other energy modes. The developed scheme makes use of quantized vibrational states and is used in place of the Borgnakke-Larsen model. Finally, a simplified representation of physical space and boundary conditions is utilized to further reduce the computational cost of the developed method. Comparisons to solutions obtained from the DSMC method for the relaxation of internal energy modes in a homogeneous gas, as well as single and multiple-species shock wave profiles, are presented. Additionally, a large scale simulation of the flow about the proposed Aeroassisted Flight Experiment (AFE) vehicle is included as an example of the new computational capability of the developed particle simulation method.

  10. Computing the Casimir energy using the point-matching method

    SciTech Connect

    Lombardo, F. C.; Mazzitelli, F. D.; Vazquez, M.; Villar, P. I.

    2009-09-15

    We use a point-matching approach to numerically compute the Casimir interaction energy for a waveguide of arbitrary section formed by two perfect conductors. We present the method and describe the procedure used to obtain the numerical results. First, our technique is tested on geometries with known solutions, such as concentric and eccentric cylinders. Then, we apply the point-matching technique to compute the Casimir interaction energy for new geometries such as concentric corrugated cylinders and cylinders inside conductors with focal lines.

  11. The ensemble switch method for computing interfacial tensions.

    PubMed

    Schmitz, Fabian; Virnau, Peter

    2015-04-14

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension. PMID:25877563

  12. Digital data storage systems, computers, and data verification methods

    DOEpatents

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
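
    A minimal sketch of the compare-two-hashes idea follows, with a bytes snapshot standing in for the portion of the dynamic database (an assumption for illustration; the patent covers the system around this step).

        import hashlib

        def digest(snapshot: bytes) -> str:
            # Hash the portion of the database as captured at one moment in time.
            return hashlib.sha256(snapshot).hexdigest()

        first = digest(b"row1|row2|row3")                  # initial moment
        # ... time passes; the dynamic database may have changed ...
        second = digest(b"row1|row2|row3-edited")          # subsequent moment

        print("unchanged" if first == second else "portion was modified")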

  13. The ensemble switch method for computing interfacial tensions

    SciTech Connect

    Schmitz, Fabian; Virnau, Peter

    2015-04-14

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.

  14. Methods for the computation of detailed geoids and their accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.; Rummel, R.

    1975-01-01

    Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.

  15. An effective method for computing the noise in biochemical networks

    NASA Astrophysics Data System (ADS)

    Zhang, Jiajun; Nie, Qing; He, Miao; Zhou, Tianshou

    2013-02-01

    We present a simple yet effective method, which is based on power series expansion, for computing exact binomial moments that can be in turn used to compute steady-state probability distributions as well as the noise in linear or nonlinear biochemical reaction networks. When the method is applied to representative reaction networks such as the ON-OFF models of gene expression, gene models of promoter progression, gene auto-regulatory models, and common signaling motifs, the exact formulae for computing the intensities of noise in the species of interest or steady-state distributions are analytically given. Interestingly, we find that positive (negative) feedback does not enlarge (reduce) noise as claimed in previous works but has a counter-intuitive effect and that the multi-OFF (or ON) mechanism always attenuates the noise in contrast to the common ON-OFF mechanism and can modulate the noise to the lowest level independently of the mRNA mean. Beyond its power in deriving analytical expressions for distributions and noise, our method is programmable and has apparent advantages in reducing computational cost.

  16. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
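
    The preliminary peak-picking stage can be sketched with a plain FFT spectrum in place of the RTFI energy spectrum (an assumption for illustration, as are the two test tones and the prominence threshold).

        import numpy as np
        from scipy.signal import find_peaks

        fs = 8000.0
        t = np.arange(0, 1, 1 / fs)
        x = np.sin(2 * np.pi * 220 * t) + 0.7 * np.sin(2 * np.pi * 330 * t)  # two "notes"

        spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
        freqs = np.fft.rfftfreq(x.size, 1 / fs)

        # Simple peak picking; the paper then prunes estimates using spectral
        # irregularity and known harmonic structures.
        peaks, _ = find_peaks(spectrum, prominence=0.1 * spectrum.max())
        print(freqs[peaks])        # candidates near 220 Hz and 330 Hz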

  17. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

    The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for solving such problems.

  18. Method of computer-aided measurement in a shooting range

    NASA Astrophysics Data System (ADS)

    Liu, Chanlao; Zhang, Yun; Xiong, Rensheng; Sun, Yishang

    2000-10-01

    Given how blindly photoelectric measurement schemes are often argued for, and the danger of live-shell measurements in a shooting range, this paper provides a computer-aided measurement method that guides the argument for a measurement scheme and the research and production of equipment, and that makes the measurement process visual and standardized. Computer-aided measurement in a shooting range can be divided into mathematical simulation of target motion, mathematical simulation of the measurement method, mathematical simulation of the photoelectric system, animated display of the measurement process, and so on. The live measurement environment and conditions were modeled by adding random jamming, Gaussian white noise, and similar disturbances. Time-series pictures were obtained by mathematical discretization. The animated display of the measurement process was built by controlling the timing and time synchronization of several pieces of equipment. The programming language was MATLAB. The method was validated by successfully simulating intersection measurement of an antiaircraft shell's trajectory.

  19. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on cascaded iterative Fourier transform and a public-key encryption algorithm is proposed. Firstly, the original secret image is encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA is adopted to encrypt M2 into M2'. Finally, a host image is enlarged by extending one pixel into 2×2 pixels, and each element in M1 and M2' is multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks are extracted from the stego-image without the original host image. By applying a public-key encryption algorithm, key distribution is facilitated; moreover, compared with image hiding methods based on optical interference, the proposed method may achieve higher robustness by exploiting the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
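
    The public-key step can be illustrated with a textbook RSA toy, encrypting a few stand-in mask values element-wise. The tiny primes are deliberately insecure and the values are invented; production code would use a vetted cryptography library with proper padding rather than this sketch.

        p, q = 61, 53                        # toy primes (insecure, illustration only)
        n, phi = p * q, (p - 1) * (q - 1)
        e = 17                               # public exponent, coprime with phi
        d = pow(e, -1, phi)                  # private exponent (Python 3.8+)

        mask_m2 = [12, 255, 7]               # stand-in for quantized phase-mask values
        cipher = [pow(m, e, n) for m in mask_m2]      # M2 -> M2' with the public key
        recovered = [pow(c, d, n) for c in cipher]    # only the private key reverses it

        print(recovered == mask_m2)          # True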

  20. Computer-aided methods of determining thyristor thermal transients

    SciTech Connect

    Lu, E.; Bronner, G.

    1988-08-01

    An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs.

  1. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  2. Computational methods in metabolic engineering for strain design.

    PubMed

    Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L

    2015-08-01

    Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native, heterologous, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve the runtime performances have also been developed, which allow for more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms. PMID:25576846

  3. Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

    PubMed Central

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication of bioinformatics software products based on web services are presented, and the basic functions of biological information processing are realized in the cloud computing environment. In the encapsulation phase, the workflow and function of the bioinformatics software are analyzed, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally, the basic prototype system of the biological cloud is achieved. PMID:24078906

  4. Novel Methods for Communicating Plasma Science to the General Public

    NASA Astrophysics Data System (ADS)

    Zwicker, Andrew; Merali, Aliya; Wissel, S. A.; Delooper, John

    2012-10-01

    The broader implications of plasma science remain an elusive topic that the general public rarely discusses, despite their relevance to energy, the environment, and technology. Recently, we have looked beyond print media for methods to reach large numbers of people in creative and informative ways. These have included video, art, images, and music. For example, our submission to the "What is a Flame?" contest was ranked in the top 15 out of 800 submissions. Images of plasmas have won 3 out of 5 of the Princeton University "Art of Science" competitions. We use a plasma speaker to teach students of all ages about sound generation and plasma physics. We report on the details of each of these and on future videos and animations under development.

  5. SAR/QSAR methods in public health practice

    SciTech Connect

    Demchuk, Eugene; Ruiz, Patricia; Chou, Selene; Fowler, Bruce A.

    2011-07-15

    Methods of (Quantitative) Structure-Activity Relationship ((Q)SAR) modeling play an important and active role in ATSDR programs in support of the Agency mission to protect human populations from exposure to environmental contaminants. They are used for cross-chemical extrapolation to complement the traditional toxicological approach when chemical-specific information is unavailable. SAR and QSAR methods are used to investigate adverse health effects and exposure levels, bioavailability, and pharmacokinetic properties of hazardous chemical compounds. They are applied as a part of an integrated systematic approach in the development of Health Guidance Values (HGVs), such as ATSDR Minimal Risk Levels, which are used to protect populations exposed to toxic chemicals at hazardous waste sites. (Q)SAR analyses are incorporated into ATSDR documents (such as the toxicological profiles and chemical-specific health consultations) to support environmental health assessments, prioritization of environmental chemical hazards, and to improve study design, when filling the priority data needs (PDNs) as mandated by Congress, in instances when experimental information is insufficient. These cases are illustrated by several examples, which explain how ATSDR applies (Q)SAR methods in public health practice.

  6. Practical methods to improve the development of computational software

    SciTech Connect

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-07-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960's, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)
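
    One such practice that transfers directly to research codes is keeping unit tests next to the code they exercise. A minimal sketch using the pytest convention follows; the decay function and tolerances are illustrative assumptions. Running pytest discovers and executes functions named test_*, so regressions surface as soon as they are introduced.

        import math

        def decay(n0: float, half_life: float, t: float) -> float:
            # Remaining quantity after time t, given an initial amount and a half-life.
            return n0 * 0.5 ** (t / half_life)

        def test_decay_halves_after_one_half_life():
            assert math.isclose(decay(100.0, 5.0, 5.0), 50.0)

        def test_decay_preserves_initial_amount_at_time_zero():
            assert decay(42.0, 5.0, 0.0) == 42.0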

  7. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
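
    The decomposition itself is easy to sketch in one dimension: two overlapping subdomains of -u'' = f are solved in alternation, exchanging interface values each sweep. The sizes and overlap below are illustrative assumptions, and each direct patch solve stands in for one processor's subdomain work; the paper's preconditioned Krylov machinery is not shown.

        import numpy as np

        n = 101
        h = 1.0 / (n - 1)
        u = np.zeros(n)
        f = np.ones(n)                               # -u'' = 1, with u(0) = u(1) = 0

        def solve_patch(u, lo, hi):
            # Direct solve on grid points lo..hi using current values at the ends.
            m = hi - lo - 1                          # interior unknowns of the patch
            A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1)) / h**2
            rhs = f[lo + 1:hi].copy()
            rhs[0] += u[lo] / h**2                   # interface data enter the right side
            rhs[-1] += u[hi] / h**2
            u[lo + 1:hi] = np.linalg.solve(A, rhs)

        for _ in range(30):                          # alternating Schwarz sweeps
            solve_patch(u, 0, 60)                    # subdomain 1: points 0..60
            solve_patch(u, 40, n - 1)                # subdomain 2: points 40..100

        x = np.linspace(0.0, 1.0, n)
        print(np.max(np.abs(u - 0.5 * x * (1 - x))))   # exact solution is x(1-x)/2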

  8. Informed public choices for low-carbon electricity portfolios using a computer decision tool.

    PubMed

    Mayer, Lauren A Fleishman; Bruine de Bruin, Wändi; Morgan, M Granger

    2014-04-01

    Reducing CO2 emissions from the electricity sector will likely require policies that encourage the widespread deployment of a diverse mix of low-carbon electricity generation technologies. Public discourse informs such policies. To make informed decisions and to productively engage in public discourse, citizens need to understand the trade-offs between electricity technologies proposed for widespread deployment. Building on previous paper-and-pencil studies, we developed a computer tool that aimed to help nonexperts make informed decisions about the challenges faced in achieving a low-carbon energy future. We report on an initial usability study of this interactive computer tool. After providing participants with comparative and balanced information about 10 electricity technologies, we asked them to design a low-carbon electricity portfolio. Participants used the interactive computer tool, which constrained portfolio designs to be realistic and yield low CO2 emissions. As they changed their portfolios, the tool updated information about projected CO2 emissions, electricity costs, and specific environmental impacts. As in the previous paper-and-pencil studies, most participants designed diverse portfolios that included energy efficiency, nuclear, coal with carbon capture and sequestration, natural gas, and wind. Our results suggest that participants understood the tool and used it consistently. The tool may be downloaded from http://cedmcenter.org/tools-for-cedm/informing-the-public-about-low-carbon-technologies/ . PMID:24564708

  9. Advanced Computational Aeroacoustics Methods for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane (Technical Monitor); Tam, Christopher

    2003-01-01

    Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are, invariably, used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative, and dispersive and dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed and constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated, and it can be improved by adopting an upwinding strategy.

  10. Demand forecasting using fuzzy neural computation, with special emphasis on weekend and public holiday forecasting

    SciTech Connect

    Srinivasan, D.; Chang, C.S.; Liew, A.C.

    1995-11-01

    This paper describes the implementation and forecasting results of a hybrid fuzzy neural technique, which combines neural network modeling and techniques from fuzzy logic and fuzzy set theory for electric load forecasting. The strengths of this powerful technique lie in its ability to forecast accurately on weekdays as well as on weekends, public holidays, and days before and after public holidays. Furthermore, the use of fuzzy logic effectively handles the load variations due to special events. The Fuzzy-Neural Network (FNN) has been extensively tested on actual data obtained from a power system for 24-hour ahead prediction based on forecast weather information. Very impressive results, with an average error of 0.62% on weekdays, 0.83% on Saturdays, and 1.17% on Sundays and public holidays, have been obtained. This approach avoids complex mathematical calculations and training on many years of data, and is simple to implement on a personal computer.

  11. A random walk method for computing genetic location scores.

    PubMed Central

    Lange, K; Sobel, E

    1991-01-01

    Calculation of location scores is one of the most computationally intensive tasks in modern genetics. Since these scores are crucial in placing disease loci on marker maps, there is ample incentive to pursue such calculations with large numbers of markers. However, in contrast to the simple, standardized pedigrees used in making marker maps, disease pedigrees are often graphically complex and sparsely phenotyped. These complications can present insuperable barriers to exact likelihood calculations with more than a few markers simultaneously. To overcome these barriers we introduce in the present paper a random walk method for computing approximate location scores with large numbers of biallelic markers. Sufficient mathematical theory is developed to explain the method. Feasibility is checked by small-scale simulations for two applications permitting exact calculation of location scores. PMID:1746559

  12. Computational methods. [Calculation of dynamic loading to offshore platforms

    SciTech Connect

    Maeda, H. (Inst. of Industrial Science)

    1993-02-01

    With regard to computational methods for hydrodynamic forces, the identification of marine hydrodynamics in offshore technology is discussed first. General computational methods, the state of the art, and the uncertainties of flow problems in offshore technology are then reviewed, with problems categorized as developed, developing, or undeveloped, followed by future work. Marine hydrodynamics consists of water-surface and underwater fluid dynamics. It covers not only hydrodynamics proper, such as cavitation, underwater noise, multi-phase flow (two-phase flow in pipes, air bubbles in water, and surface and internal waves), but also aerodynamics, such as wind load and current-wave-wind interaction, and magneto-hydrodynamics, such as propulsion based on superconductivity. Among these, two key words are singled out as identifying marine hydrodynamics in offshore technology: free surface and vortex shedding.

  13. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
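
    The core importance-sampling step, without the adaptive machinery, fits in a few lines: sample from a density shifted toward the failure region and reweight by the density ratio. The limit state and shift below are illustrative assumptions, not the paper's turbine-blade problem.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        N = 100_000
        beta = 4.0                                   # failure when x > beta (assumed)

        # Sample near the failure region instead of from the nominal N(0, 1).
        x = rng.normal(loc=beta, size=N)
        weights = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - beta)**2)  # f(x) / h(x)
        pf = np.mean((x > beta) * weights)

        print(pf, norm.sf(beta))                     # estimate vs exact tail probability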

  14. Characterization of Meta-Materials Using Computational Electromagnetic Methods

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Shin, Joon

    2005-01-01

    An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, a Finite Element Methodology (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane wave incidence. For efficient computation of the reflection and transmission through a meta-material over a wide frequency band, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicholson-Ross method and Genetic Algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.

  15. Computer processing improves hydraulics optimization with new methods

    SciTech Connect

    Gavignet, A.A.; Wick, C.J.

    1987-12-01

    In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
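
    The curve-fitting step can be sketched with NumPy: fit a polynomial to the rheogram in log-log coordinates and evaluate it wherever a friction-loss or settling-velocity correlation needs the stress. The shear rates and "measured" stresses below are invented for illustration.

        import numpy as np

        shear_rate = np.array([5.1, 10.2, 170.0, 340.0, 511.0, 1022.0])   # 1/s
        shear_stress = 4.0 + 0.03 * shear_rate**0.8                       # stand-in data, Pa

        # Fit the flow curve in log-log space; the coefficients then feed the
        # friction-loss and settling-velocity predictions.
        coeffs = np.polyfit(np.log(shear_rate), np.log(shear_stress), deg=2)
        stress_at = lambda g: np.exp(np.polyval(coeffs, np.log(g)))

        print(stress_at(250.0))        # interpolated stress at 250 1/s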

  16. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    SciTech Connect

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  17. A hierarchical method for molecular docking using cloud computing.

    PubMed

    Kang, Ling; Guo, Quan; Wang, Xicheng

    2012-11-01

    Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small molecular databases using cloud computing is proposed. This method is called the hierarchical method for molecular docking and can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on the different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs. PMID:23017886

  18. On computer-intensive simulation and estimation methods for rare-event analysis in epidemic models.

    PubMed

    Clémençon, Stéphan; Cousien, Anthony; Felipe, Miraine Dávila; Tran, Viet Chi

    2015-12-10

    This article focuses, in the context of epidemic models, on rare events that may possibly correspond to crisis situations from the perspective of public health. In general, no close analytic form for their occurrence probabilities is available, and crude Monte Carlo procedures fail. We show how recent intensive computer simulation techniques, such as interacting branching particle methods, can be used for estimation purposes, as well as for generating model paths that correspond to realizations of such events. Applications of these simulation-based methods to several epidemic models fitted from real datasets are also considered and discussed thoroughly. PMID:26242476

  19. A literature review of neck pain associated with computer use: public health implications

    PubMed Central

    Green, Bart N

    2008-01-01

    Prolonged use of computers during daily work activities and recreation is often cited as a cause of neck pain. This review of the literature identifies public health aspects of neck pain as associated with computer use. While some retrospective studies support the hypothesis that frequent computer operation is associated with neck pain, few prospective studies reveal causal relationships. Many risk factors are identified in the literature. Primary prevention strategies have largely been confined to addressing environmental exposure to ergonomic risk factors, since to date, no clear cause for this work-related neck pain has been acknowledged. Future research should include identifying causes of work related neck pain so that appropriate primary prevention strategies may be developed and to make policy recommendations pertaining to prevention. PMID:18769599

  20. A survey of synchronization methods for parallel computers

    SciTech Connect

    Dinning, A.

    1989-07-01

    This article examines how traditional synchronization methods influence the design of MIMD multiprocessors. This particular class of architectures is one in which high-level synchronization plays an important role. Although vector processors, dataflow machines, and single instruction, multiple-data (SIMD) computers are highly synchronized, their synchronization is generally an explicit part of the control flow and is executed as part of every instruction. In MIMD multiprocessors, synchronization must occur on demand, so more sophisticated schemes are needed.
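
    Two of the primitives such surveys cover, a barrier and a lock, can be demonstrated in a few lines of Python threading; the worker count and shared counter are illustrative, and real MIMD hardware schemes are of course far more elaborate.

        import threading

        n_workers = 4
        barrier = threading.Barrier(n_workers)       # all workers rendezvous here
        lock = threading.Lock()                      # mutual exclusion for shared state
        total = 0

        def worker(wid: int) -> None:
            global total
            local = wid * wid                        # independent "compute" phase
            barrier.wait()                           # synchronize on demand, MIMD-style
            with lock:
                total += local                       # protected shared update

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(total)                                 # 0 + 1 + 4 + 9 = 14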

  1. 78 FR 54453 - Notice of Public Meeting-Intersection of Cloud Computing and Mobility Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... National Institute of Standards and Technology Notice of Public Meeting--Intersection of Cloud Computing...-mobility.cfm . SUPPLEMENTARY INFORMATION: NIST hosted six prior Cloud Computing Forum & Workshop events in..., portability, and security, discuss the Federal Government's experience with cloud computing, report on...

  2. Investigation of the "Convince Me" Computer Environment as a Tool for Critical Argumentation about Public Policy Issues

    ERIC Educational Resources Information Center

    Adams, Stephen T.

    2003-01-01

    The "Convince Me" computer environment supports critical thinking by allowing users to create and evaluate computer-based representations of arguments. This study investigates theoretical and design considerations pertinent to using "Convince Me" as an educational tool to support reasoning about public policy issues. Among computer environments…

  3. Public involvement in multi-objective water level regulation development projects-evaluating the applicability of public involvement methods

    SciTech Connect

    Väntänen, Ari. E-mail: armiva@utu.fi; Marttunen, Mika. E-mail: Mika.Marttunen@ymparisto.fi

    2005-04-15

    Public involvement is a process that involves the public in the decision making of an organization, for example a municipality or a corporation. It has developed into a widely accepted and recommended policy in environment altering projects. The EU Water Framework Directive (WFD) came into force in 2000 and stresses the importance of public involvement in composing river basin management plans. Therefore, the need to develop public involvement methods for different situations and circumstances is evident. This paper describes how various public involvement methods have been applied in a development project involving the most heavily regulated lake in Finland. The objective of the project was to assess the positive and negative impacts of regulation and to find possibilities for alleviating the adverse impacts on recreational use and the aquatic ecosystem. An exceptional effort was made towards public involvement, which was closely connected to planning and decision making. The applied methods were (1) steering group work, (2) survey, (3) dialogue, (4) theme interviews, (5) public meeting and (6) workshops. The information gathered using these methods was utilized in different stages of the project, e.g., in identifying the regulation impacts, comparing alternatives and compiling the recommendations for regulation development. After describing our case and the results from the applied public involvement methods, we will discuss our experiences and the feedback from the public. We will also critically evaluate our own success in coping with public involvement challenges. In addition to that, we present general recommendations for dealing with these problematic issues based on our experiences, which provide new insights for applying various public involvement methods in multi-objective decision making projects.

  4. Computational Catalysis Using the Artificial Force Induced Reaction Method.

    PubMed

    Sameera, W M C; Maeda, Satoshi; Morokuma, Keiji

    2016-04-19

    The artificial force induced reaction (AFIR) method in the global reaction route mapping (GRRM) strategy is an automatic approach to explore all important reaction paths of complex reactions. Most traditional methods in computational catalysis require guess reaction paths. On the other hand, the AFIR approach locates local minima (LMs) and transition states (TSs) of reaction paths without a guess, and therefore finds unanticipated as well as anticipated reaction paths. The AFIR method has been applied to multicomponent organic reactions, such as the aldol reaction, Passerini reaction, Biginelli reaction, and phase-transfer catalysis. In the presence of several reactants, many equilibrium structures are possible, leading to a number of reaction pathways. The AFIR method in the GRRM strategy determines all of the important equilibrium structures and subsequent reaction paths systematically. As the AFIR search is fully automatic, exhaustive trial-and-error and guess-and-check processes by the user can be eliminated. At the same time, the AFIR search is systematic, and therefore a more accurate and comprehensive description of the reaction mechanism can be determined. The AFIR method has been used for the study of full catalytic cycles and reaction steps in transition metal catalysis, such as cobalt-catalyzed hydroformylation and iron-catalyzed carbon-carbon bond formation reactions in aqueous media. Some AFIR applications have targeted the selectivity-determining step of transition-metal-catalyzed asymmetric reactions, including stereoselective water-tolerant lanthanide Lewis acid-catalyzed Mukaiyama aldol reactions. In terms of establishing the selectivity of a reaction, systematic sampling of the transition states is critical. In this direction, AFIR is very useful for performing a systematic and automatic determination of TSs. In the presence of a comprehensive description of the transition states, the selectivity of the reaction can be calculated more accurately.

  5. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1971-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single tube forced flow boiler with inserts are given.
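
    The general approach can be sketched in a few lines, assuming synthetic frequency-response data and a hypothetical first-order-plus-dead-time model in place of the boiler transfer functions studied in the paper: a nonlinear least-squares solver adjusts the model parameters until its frequency-response locus matches the measured one.

    ```python
    # Hedged sketch of transfer-function identification from frequency-response
    # data (not the NASA code): fit model parameters by nonlinear least squares.
    import numpy as np
    from scipy.optimize import least_squares

    w = np.logspace(-1, 1, 40)                      # measurement frequencies, rad/s
    h_true = 2.0 * np.exp(-0.3j * w) / (1 + 1.5j * w)
    rng = np.random.default_rng(0)
    h_meas = h_true + 0.02 * (rng.standard_normal(w.size)
                              + 1j * rng.standard_normal(w.size))

    def residual(p):
        k, tau, lag = p                             # gain, time constant, dead time
        h_model = k * np.exp(-1j * lag * w) / (1 + 1j * tau * w)
        err = h_model - h_meas
        return np.concatenate([err.real, err.imag])  # real-valued residual vector

    fit = least_squares(residual, x0=[1.0, 1.0, 0.1])
    print("estimated (K, tau, L):", np.round(fit.x, 3))  # near (2, 1.5, 0.3)
    ```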

  6. Integration of viscous effects into inviscid computational methods

    NASA Technical Reports Server (NTRS)

    Katz, Joseph

    1990-01-01

    A variety of practical fluid dynamic problems related to the low-speed, high Reynolds number flow over aircraft and ground vehicles fall into a category where some simplified mathematical models become applicable. This provides fluid dynamicists with a more economical computational tool, compared to the alternative solution of the Navier-Stokes equations. The objective was to provide a brief survey of some of the viscous boundary layer solution methods and to propose a method for coupling between the inviscid outer flow and the viscous boundary layer solutions. Results of this survey and details of the viscous/inviscid flow coupling efforts are presented.

  7. Graphics processing unit acceleration of computational electromagnetic methods

    NASA Astrophysics Data System (ADS)

    Inman, Matthew

    The use of Graphics Processing Units (GPUs) for scientific applications has been evolving and expanding for the past decade. GPUs provide an alternative to the CPU for creating and executing the numerical codes that are often relied upon to perform simulations in computational electromagnetics. While originally designed purely to display graphics on the user's monitor, GPUs today are essentially powerful floating-point co-processors that can be programmed not only to render complex graphics, but also to perform the complex mathematical calculations often encountered in scientific computing. The GPUs currently being produced often contain hundreds of separate cores able to access large amounts of high-speed dedicated memory. By utilizing the power offered by such a specialized processor, it is possible to drastically speed up the calculations required in computational electromagnetics. This increase in speed allows GPU-based simulations to be used in situations where computational time has heretofore been a limiting factor, such as in educational courses. Teaching electromagnetics often relies on simple example problems because of the simulation times needed to analyze more complex ones. The use of GPU-based simulations will be shown to allow demonstrations of more advanced problems than previously possible by adapting the methods for use on the GPU. Modules will be developed for a wide variety of teaching situations, utilizing the speed of the GPU to demonstrate various techniques and ideas previously unrealizable.
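
    As a purely illustrative sketch of the kind of stencil computation that maps well onto GPUs, the following one-dimensional FDTD update runs unchanged on the CPU (NumPy) or the GPU (CuPy, which mirrors the NumPy API); it is a generic toy, not a specific CEM package.

    ```python
    # Illustrative only: a 1-D FDTD stencil, the kind of update GPUs excel at.
    import math

    try:
        import cupy as xp          # GPU arrays if CUDA is available
    except ImportError:
        import numpy as xp         # CPU fallback with the same API

    n, steps = 4096, 1000
    ez = xp.zeros(n)               # electric field on the grid
    hy = xp.zeros(n - 1)           # magnetic field on the staggered grid

    for t in range(steps):
        hy += 0.5 * (ez[1:] - ez[:-1])           # update H from the curl of E
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])     # update E from the curl of H
        ez[n // 2] += math.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source

    print("field energy proxy:", float((ez**2).sum()))
    ```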

  8. Automated uncertainty analysis methods in the FRAP computer codes. [PWR

    SciTech Connect

    Peck, S O

    1980-01-01

    A user oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts.
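
    A minimal sketch of the underlying concept follows, assuming a made-up algebraic surrogate in place of a fuel rod code (the FRAP machinery itself differs): sample the uncertain inputs, run the model, and summarize the resulting output distribution.

    ```python
    # Generic Monte Carlo uncertainty propagation with a hypothetical surrogate
    # model; the point is only the mapping from input to output uncertainty.
    import numpy as np

    rng = np.random.default_rng(1)

    def model(gap_conductance, power, conductivity):
        """Hypothetical surrogate for a peak fuel temperature calculation."""
        return 500.0 + power / gap_conductance + 0.04 * power**2 / conductivity

    n = 10_000
    out = model(
        rng.normal(1.0, 0.10, n),    # uncertain gap conductance
        rng.normal(300.0, 15.0, n),  # uncertain rod power
        rng.normal(3.0, 0.30, n),    # uncertain conductivity
    )
    print(f"output mean = {out.mean():.1f}, std = {out.std():.1f}")
    print("95% interval:", np.percentile(out, [2.5, 97.5]).round(1))
    ```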

  9. On a method computing transient wave propagation in ionospheric regions

    NASA Technical Reports Server (NTRS)

    Gray, K. G.; Bowhill, S. A.

    1978-01-01

    A consequence of an exoatmospheric nuclear burst is an electromagnetic pulse (EMP) radiated from it. In a region far enough away from the burst, where nonlinear effects can be ignored, the EMP can be represented by a large-amplitude narrow-time-width plane-wave pulse. If the ionosphere lies between the origin and destination of the EMP, frequency dispersion can cause significant changes in the original pulse upon reception. A method of computing these dispersive effects of transient wave propagation is summarized. The method described is different from the standard transform techniques and provides physical insight into the transient wave process. The method, although exact, can be used in approximating the early-time transient response of an ionospheric region by a simple integration, with only explicit knowledge of the electron density, electron collision frequency, and electron gyrofrequency required. As an illustration of the method, it is applied to a simple example and contrasted with the corresponding transform solution.

  10. Evolutionary computational methods to predict oral bioavailability QSPRs.

    PubMed

    Bains, William; Gilbert, Richard; Sviridenko, Lilya; Gascon, Jose-Miguel; Scoffin, Robert; Birchall, Kris; Harvey, Inman; Caldwell, John

    2002-01-01

    This review discusses evolutionary and adaptive methods for predicting oral bioavailability (OB) from chemical structure. Genetic Programming (GP), a specific form of evolutionary computing, is compared with some other advanced computational methods for OB prediction. The results show that classifying drugs into 'high' and 'low' OB classes on the basis of their structure alone is solvable, and initial models are already producing output that would be useful for pharmaceutical research. The results also suggest that quantitative prediction of OB will be tractable. Critical aspects of the solution will involve the use of techniques that can: (i) handle problems with a very large number of variables (high dimensionality); (ii) cope with 'noisy' data; and (iii) implement binary choices to sub-classify molecules whose behavior is qualitatively different. Detailed quantitative predictions will emerge from more refined models that are hybrids derived from mechanistic models of the biology of oral absorption and the power of advanced computing techniques to predict the behavior of the components of those models in silico. PMID:11865672

  11. ALFRED: A Practical Method for Alignment-Free Distance Computation.

    PubMed

    Thankachan, Sharma V; Chockalingam, Sriram P; Liu, Yongchao; Apostolico, Alberto; Aluru, Srinivas

    2016-06-01

    Alignment-free approaches are gaining persistent interest in many sequence analysis applications such as phylogenetic inference and metagenomic classification/clustering, especially for large-scale sequence datasets. Besides the widely used k-mer methods, the average common substring (ACS) approach has emerged as one of the well-known alignment-free approaches. Two recent works further generalize this ACS approach by allowing a bounded number k of mismatches in the common substrings, relying on approximation (linear time) and exact computation, respectively. Albeit having a good worst-case time complexity [Formula: see text], the exact approach is complex and unlikely to be efficient in practice. Herein, we present ALFRED, an alignment-free distance computation method, which solves the generalized common substring search problem via exact computation. Compared to the theoretical approach, our algorithm is easier to implement and more practical to use, while still providing highly competitive theoretical performance with an expected run-time of [Formula: see text]. By applying our program to phylogenetic inference as a case study, we find that it exactly reconstructs the topology of the reference phylogenetic tree for a set of 27 primate mitochondrial genomes, at reasonably acceptable speed. ALFRED is implemented in the C++ programming language and the source code is freely available online. PMID:27138275
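
    For intuition, here is a naive quadratic-time sketch of the k = 0 average common substring statistic that this line of work generalizes to k mismatches; real tools such as ALFRED rely on suffix-tree-style indexes for efficiency, and the test strings below are made up.

    ```python
    # Naive sketch of the average common substring (ACS) distance, k = 0 case.
    import math

    def acs(x, y):
        """Average, over positions i of x, of the longest prefix of x[i:] found in y."""
        total = 0
        for i in range(len(x)):
            length = 0
            while i + length < len(x) and x[i:i + length + 1] in y:
                length += 1
            total += length
        return total / len(x)

    def acs_distance(x, y):
        """Symmetrized ACS distance with the usual self-match correction."""
        dxy = math.log(len(y)) / acs(x, y) - 2.0 * math.log(len(x)) / (len(x) + 1)
        dyx = math.log(len(x)) / acs(y, x) - 2.0 * math.log(len(y)) / (len(y) + 1)
        return 0.5 * (dxy + dyx)

    a = "ACGTACGTGACGGTAC"
    b = "ACGTTCGTGACGATAC"
    c = "TTTTGGGGCCCCAAAA"
    print(acs_distance(a, b))   # related pair: smaller distance
    print(acs_distance(a, c))   # unrelated pair: larger distance
    ```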

  12. Optimum threshold selection method of centroid computation for Gaussian spot

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2015-10-01

    Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. In addition, two optimum threshold selection methods are introduced: TmCoG (using m% of the maximum intensity of the spot as the threshold), and TkCoG (using μn + κσn as the threshold, where μn and σn are the mean value and standard deviation of the background noise). First, their impact on the detection error under various SNR conditions is simulated to find how to decide the value of k or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
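
    A minimal NumPy sketch of thresholded center-of-gravity on a simulated noisy Gaussian spot follows; the m and kappa values used are illustrative, not the optima derived in the paper.

    ```python
    # Thresholded center-of-gravity (CoG) on a synthetic noisy Gaussian spot.
    import numpy as np

    rng = np.random.default_rng(2)
    yy, xx = np.mgrid[0:64, 0:64]
    spot = 100.0 * np.exp(-((xx - 30.7)**2 + (yy - 33.2)**2) / (2 * 3.0**2))
    img = spot + rng.normal(0.0, 2.0, spot.shape)      # additive background noise

    def cog(image, threshold):
        w = np.clip(image - threshold, 0.0, None)      # subtract, zero below threshold
        ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
        return (w * xs).sum() / w.sum(), (w * ys).sum() / w.sum()

    t_m = 0.10 * img.max()                             # TmCoG: m% of the peak
    noise = img[:8, :8]                                # corner patch = background
    t_k = noise.mean() + 3.0 * noise.std()             # TkCoG: mu_n + kappa*sigma_n
    print("TmCoG estimate:", cog(img, t_m))            # true center is (30.7, 33.2)
    print("TkCoG estimate:", cog(img, t_k))
    ```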

  13. Approximation method to compute domain related integrals in structural studies

    NASA Astrophysics Data System (ADS)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2015-11-01

    Various engineering calculi use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations, e.g., in strength of materials the bending moment may be computed in some discrete points using the graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the area of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of the work is to present our studies on the calculus of integrals over transverse section domains, computer-aided solutions and a generalizing method. The aim of our research is to create general computer-based methods to execute the calculi in structural studies. Thus, we define a Boolean algebra which operates with ‘simple’ shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every ‘simple’ shape (-1 for the shapes to be subtracted). By ‘simple’ or ‘basic’ shape we mean either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions, the corresponding calculus being carried out by an algorithm. The ‘basic’ shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of ‘basic’ shapes include rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain triangularization methods suggested that another ‘basic’ shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
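
    A toy version of the signed 'basic shape' idea, assuming only the elementary rectangle formulas: section properties of a composite cross-section are computed as signed sums over simple shapes, with sign -1 marking shapes to subtract (holes).

    ```python
    # Signed composition of 'basic' shapes for cross-section properties.
    from dataclasses import dataclass

    @dataclass
    class Rect:
        width: float
        height: float
        yc: float        # centroid height in section coordinates
        sign: int = 1    # +1 adds material, -1 subtracts a hole

    def section_properties(shapes):
        area = sum(s.sign * s.width * s.height for s in shapes)
        y_bar = sum(s.sign * s.width * s.height * s.yc for s in shapes) / area
        # Second moment about the composite centroid (parallel-axis theorem).
        inertia = sum(
            s.sign * (s.width * s.height**3 / 12.0
                      + s.width * s.height * (s.yc - y_bar)**2)
            for s in shapes
        )
        return area, y_bar, inertia

    # A 100 x 60 rectangle with a 40 x 20 rectangular hole shifted upward.
    print(section_properties([Rect(100.0, 60.0, yc=30.0),
                              Rect(40.0, 20.0, yc=40.0, sign=-1)]))
    ```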

  14. Tensor product decomposition methods for plasmas physics computations

    NASA Astrophysics Data System (ADS)

    Del-Castillo-Negrete, D.

    2012-03-01

    Tensor product decomposition (TPD) methods are a powerful linear algebra technique for the efficient representation of high dimensional data sets. In the simplest 2-dimensional case, TPD reduces to the singular value decomposition (SVD) of matrices. These methods, which are closely related to proper orthogonal decomposition techniques, have been extensively applied in signal and image processing, and to some fluid mechanics problems. However, their use in plasma physics computation is relatively new. Some recent applications include: data compression of 6-dimensional gyrokinetic plasma turbulence data sets [D. R. Hatch, D. del-Castillo-Negrete, and P. W. Terry, submitted to Journal Comp. Phys. (2011)], noise reduction in particle methods [R. Nguyen, D. del-Castillo-Negrete, K. Schneider, M. Farge, and G. Chen, Journal of Comp. Phys. 229, 2821-2839 (2010)], and multiscale analysis of plasma turbulence [S. Futatani, S. Benkadda, and D. del-Castillo-Negrete, Phys. of Plasmas 16, 042506 (2009)]. The goal of this presentation is to discuss a novel application of TPD methods to projective integration of particle-based collisional plasma transport computations.
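
    The two-dimensional special case is easy to demonstrate: a truncated SVD compresses a smooth synthetic "space-time" field (standing in for simulation output) to a few rank-1 terms.

    ```python
    # Truncated SVD as the 2-D case of tensor product decomposition.
    import numpy as np

    x = np.linspace(0.0, 1.0, 200)
    t = np.linspace(0.0, 1.0, 150)
    data = (np.outer(np.sin(2 * np.pi * t), np.cos(2 * np.pi * x))
            + 0.5 * np.outer(t**2, x * (1 - x))
            + 0.01 * np.random.default_rng(3).standard_normal((150, 200)))

    u, s, vt = np.linalg.svd(data, full_matrices=False)
    r = 4                                    # keep the r largest singular triplets
    approx = (u[:, :r] * s[:r]) @ vt[:r]
    rel_err = np.linalg.norm(data - approx) / np.linalg.norm(data)
    storage = r * (data.shape[0] + data.shape[1]) / data.size
    print(f"rank-{r} relative error: {rel_err:.2e}, storage fraction: {storage:.3f}")
    ```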

  15. New developments in the multiscale hybrid energy density computational method

    NASA Astrophysics Data System (ADS)

    Min, Sun; Shanying, Wang; Dianwu, Wang; Chongyu, Wang

    2016-01-01

    Further developments of the hybrid multiscale energy density method are proposed on the basis of our previous papers. The key points are as follows. (i) A theoretical method for determining the weight parameter in the energy coupling equation of the transition region in the multiscale model is given via constructing underdetermined equations. (ii) By applying the developed mathematical method, the weight parameters have been obtained and used to treat some problems in homogeneous charge density systems, which are directly related to multiscale science. (iii) A theoretical algorithm is also presented for treating systems with non-homogeneous charge density. The key to these theoretical computational methods is the decomposition of the electrostatic energy in the total energy of density functional theory, probing the spanning characteristic at the atomic scale, layer by layer, by which the choice of chemical elements and the defect complex effect can be understood deeply. (iv) The numerical computational program and design are also presented. Project supported by the National Basic Research Program of China (Grant No. 2011CB606402) and the National Natural Science Foundation of China (Grant No. 51071091).

  16. COMSAC: Computational Methods for Stability and Control. Part 2

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  17. An analytical method for computing atomic contact areas in biomolecules.

    PubMed

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. PMID:22965816

  18. Domain decomposition methods for the parallel computation of reacting flows

    NASA Astrophysics Data System (ADS)

    Keyes, David E.

    1989-05-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. In this procedure, the subdomains of which the original domain of definition is comprised are assigned to independent processors, at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, we make comparisons between relaxation-based linear solvers and preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors. The three special features of reacting flow models in relation to these linear systems are: the possibly large number of degrees of freedom per gridpoint; the dominance of dense intra-point source-term coupling over inter-point convective-diffusive coupling throughout significant portions of the flow field; and strong nonlinearities which restrict the time step to small values (independent of linear algebraic considerations) throughout significant portions of the iteration history. Though these features are exploited to advantage herein, many aspects of the paper are applicable to the modeling of general convective-diffusive systems.
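
    A serial sketch of the winning combination, GMRES with an incomplete-LU preconditioner, follows, using a generic sparse convection-diffusion matrix as a stand-in for the paper's reacting-flow Jacobians; the keyword names assume SciPy 1.12 or later.

    ```python
    # ILU-preconditioned GMRES on a sparse convection-diffusion test matrix.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 100                                    # the grid has n x n unknowns
    main = 4.0 * np.ones(n * n)
    west = -1.5 * np.ones(n * n - 1)           # upwinded convection in x
    east = -0.5 * np.ones(n * n - 1)
    cut = np.arange(1, n * n) % n == 0         # no coupling across row ends
    west[cut] = east[cut] = 0.0
    a = sp.diags([main, west, east, -np.ones(n * n - n), -np.ones(n * n - n)],
                 [0, -1, 1, -n, n], format="csc")
    b = np.ones(n * n)

    ilu = spla.spilu(a, drop_tol=1e-4, fill_factor=10)
    m = spla.LinearOperator(a.shape, ilu.solve)      # action of M^-1

    norms = []
    x, info = spla.gmres(a, b, M=m, rtol=1e-8,
                         callback=lambda rk: norms.append(rk),
                         callback_type="pr_norm")
    print("converged" if info == 0 else "failed", "after", len(norms), "iterations")
    ```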

  19. Computation of multi-material interactions using point method

    SciTech Connect

    Zhang, Duan Z; Ma, Xia; Giguere, Paul T

    2009-01-01

    Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are applied to problems of solid deformation, the state variables, such as stress and damage, need to be advected, causing significant numerical diffusion error. When Lagrangian methods are applied to problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error, and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method, Eulerian meshes stay fixed and the Lagrangian particles move through the Eulerian meshes during the material deformation. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun. v. 87, pp. 236) provides a mathematical foundation for an improved version, the material point method (MPM), of the PIC method. The unique advantages of the MPM method have led to many attempts to apply the method to problems involving interaction of different materials, such as fluid-structure interactions. These problems are multiphase flow or multimaterial deformation problems. In these problems pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the same scheme used in Eulerian methods for multiphase flows is used to calculate the pressure. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to higher order of accuracy in the sense of weak solutions for the continuity equations.

  20. PREFACE: Theory, Modelling and Computational methods for Semiconductors

    NASA Astrophysics Data System (ADS)

    Migliorato, Max; Probert, Matt

    2010-04-01

    These conference proceedings contain the written papers of the contributions presented at the 2nd International Conference on Theory, Modelling and Computational Methods for Semiconductors. The conference was held at St Williams College, York, UK on 13th-15th Jan 2010. The previous conference in this series took place in 2008 at the University of Manchester, UK. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in semiconductor science and technology, where there is substantial potential for time saving in R&D. The development of high speed computer architectures is finally allowing the routine use of accurate methods for calculating the structural, thermodynamic, vibrational and electronic properties of semiconductors and their heterostructures. This workshop ran for three days, with the objective of bringing together UK and international leading experts in the theory of group IV, III-V and II-VI semiconductors with postdocs and students in the early stages of their careers. The first day focused on providing an introduction and overview of this vast field, aimed particularly at students at this influential point in their careers. We would like to thank all participants for their contribution to the conference programme and these proceedings. We would also like to acknowledge the financial support from the Institute of Physics (Computational Physics group and Semiconductor Physics group), the UK Car-Parrinello Consortium, Accelrys (distributors of Materials Studio) and Quantumwise (distributors of Atomistix). The Editors Acknowledgements Conference Organising Committee: Dr Matt Probert (University of York) and Dr Max Migliorato (University of Manchester) Programme Committee: Dr Marco Califano (University of Leeds), Dr Jacob Gavartin (Accelrys Ltd, Cambridge), Dr Stanko Tomic (STFC Daresbury Laboratory), Dr Gabi Slavcheva (Imperial College London) Proceedings edited and compiled by Dr

  1. Student Publications Enhance Teaching: Experimental Psychology and Research Methods Courses.

    ERIC Educational Resources Information Center

    Ware, Mark E.; Davis, Stephen F.

    Recent years have witnessed an increased emphasis on the professional development of undergraduate psychology students. One major thrust of this professional development has been on research that results in a convention presentation or journal publication. Research leading to journal publication is becoming a requirement for admission to many…

  2. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Summary: Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm - probably the most efficient in terms of speed and success rate among gradient-based methods. The Annapolis River catchment was selected as the area of this study due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and presence of river ice - conditions which make flow forecasting more troublesome. The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural networks training turns out to be superior to other
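
    A conceptual sketch of the setup follows, assuming toy data and SciPy's stock differential evolution rather than the specific DE variants compared in the paper: the flattened network weights become the decision vector and the training error the objective.

    ```python
    # Training a tiny one-hidden-layer perceptron by differential evolution.
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(4)
    x = rng.uniform(-1.0, 1.0, (200, 3))            # three toy predictors
    y = np.sin(x[:, 0]) + 0.5 * x[:, 1] * x[:, 2]   # toy "runoff" target

    h = 5                                           # hidden units
    n_w = 3 * h + h + h + 1                         # sizes of W1, b1, w2, b2

    def mse(theta):
        w1 = theta[:3 * h].reshape(3, h)
        b1 = theta[3 * h:4 * h]
        w2 = theta[4 * h:5 * h]
        b2 = theta[5 * h]
        pred = np.tanh(x @ w1 + b1) @ w2 + b2       # forward pass
        return np.mean((pred - y) ** 2)

    result = differential_evolution(mse, bounds=[(-3.0, 3.0)] * n_w,
                                    maxiter=300, tol=1e-8, seed=4)
    print(f"training MSE after DE: {result.fun:.4f}")
    ```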

  3. Automated computational aberration correction method for broadband interferometric imaging techniques.

    PubMed

    Pande, Paritosh; Liu, Yuan-Zhi; South, Fredrick A; Boppart, Stephen A

    2016-07-15

    Numerical correction of optical aberrations provides an inexpensive and simpler alternative to the traditionally used hardware-based adaptive optics techniques. In this Letter, we present an automated computational aberration correction method for broadband interferometric imaging techniques. In the proposed method, the process of aberration correction is modeled as a filtering operation on the aberrant image using a phase filter in the Fourier domain. The phase filter is expressed as a linear combination of Zernike polynomials with unknown coefficients, which are estimated through an iterative optimization scheme based on maximizing an image sharpness metric. The method is validated on both simulated data and experimental data obtained from a tissue phantom, an ex vivo tissue sample, and an in vivo photoreceptor layer of the human retina. PMID:27420526
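
    A schematic version of this pipeline, assuming a single defocus-like phase term instead of a full Zernike expansion and synthetic point targets instead of interferometric data: the aberration is modeled as a Fourier-domain phase filter, and the correcting coefficient is recovered by maximizing an image-sharpness metric.

    ```python
    # Computational aberration correction by sharpness maximization (sketch).
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(5)
    img = np.zeros((128, 128))
    img[rng.integers(0, 128, 60), rng.integers(0, 128, 60)] = 1.0  # point targets

    fy = np.fft.fftfreq(128)[:, None]
    fx = np.fft.fftfreq(128)[None, :]
    rho2 = (fx**2 + fy**2) / np.max(fx**2 + fy**2)   # normalized pupil radius^2

    def apply_phase(field, c):
        """Multiply the spectrum by exp(i*c*rho^2), a defocus-like phase filter."""
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * c * rho2))

    aberrant = apply_phase(img, 60.0)                # the "unknown" aberration

    def neg_sharpness(c):
        intensity = np.abs(apply_phase(aberrant, -c)) ** 2
        return -np.sum(intensity**2)                 # concentrated energy = sharp

    best = minimize_scalar(neg_sharpness, bounds=(0.0, 120.0), method="bounded")
    print(f"recovered coefficient: {best.x:.1f} (true value 60.0)")
    ```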

  4. On implicit Runge-Kutta methods for parallel computations

    NASA Technical Reports Server (NTRS)

    Keeling, Stephen L.

    1987-01-01

    Implicit Runge-Kutta methods which are well-suited for parallel computations are characterized. It is claimed that such methods are, first of all, those for which the associated rational approximation to the exponential has distinct poles, and these are called multiply implicit (MIRK) methods. Also, because of the so-called order reduction phenomenon, there is reason to require that these poles be real. Then, it is proved that a necessary condition for a q-stage, real MIRK to be A0-stable with maximal order q + 1 is that q = 1, 2, 3, or 5. Nevertheless, it is shown that for every positive integer q, there exists a q-stage, real MIRK which is I-stable with order q. Finally, some useful examples of algebraically stable MIRKs are given.

  5. Consensus methods: review of original methods and their main alternatives used in public health

    PubMed Central

    Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid

    2008-01-01

    Background: Consensus-based studies are increasingly used as decision-making methods, for they have lower production cost than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles and methods of the four main methods (Delphi, nominal group, consensus development conference and RAND/UCLA), their use as it appears in peer-reviewed publications, and validation studies published in the healthcare literature. Methods: A bibliographic search was performed in Pubmed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal and Francis. Keywords, headings and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri and used in the literature search. A search with the same terms and expressions was performed on the Internet using Google Scholar. Results: All methods, precisely described in the literature, are based on common basic principles such as definition of the subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described. Few validation studies have been implemented. Failure to implement these basic principles and failure to describe the methods used to reach consensus were both frequent, and contribute to suspicion regarding the validity of consensus methods. Conclusion: When applied to a new domain with important consequences for decision making, a consensus method should first be validated. PMID:19013039

  6. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    NASA Astrophysics Data System (ADS)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion than with the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  7. Review methods for image segmentation from computed tomography images

    SciTech Connect

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-12-04

    Image segmentation is a challenging process in terms of achieving accuracy, automation and robustness, especially in medical images. Many segmentation methods can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems incurred in each method are defined and explained. It is necessary to know the suitable segmentation method in order to get accurate segmentation. This paper can serve as a guide for researchers in choosing the suitable segmentation method, especially for segmenting images from CT scans.
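
    As one representative method from such reviews, here is Otsu's global threshold in plain NumPy, applied to a synthetic two-class "slice"; real CT data would be Hounsfield-unit images loaded from DICOM.

    ```python
    # Otsu's threshold: maximize between-class variance over candidate splits.
    import numpy as np

    rng = np.random.default_rng(6)
    mask_true = np.zeros((96, 96), bool)
    mask_true[24:72, 24:72] = True                       # bright square "organ"
    slice_ = np.where(mask_true,
                      rng.normal(55.0, 8.0, (96, 96)),   # tissue intensities
                      rng.normal(20.0, 6.0, (96, 96)))   # background intensities

    def otsu_threshold(image, bins=256):
        hist, edges = np.histogram(image, bins=bins)
        p = hist / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                                # class-0 probability
        mu0 = np.cumsum(p * centers)                     # class-0 cumulative mean
        mu_t = mu0[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            var_between = (mu_t * w0 - mu0) ** 2 / (w0 * (1.0 - w0))
        return centers[np.nanargmax(var_between)]

    t = otsu_threshold(slice_)
    seg = slice_ > t
    print(f"threshold = {t:.1f}, pixel accuracy = {(seg == mask_true).mean():.3f}")
    ```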

  8. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    SciTech Connect

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is key in many scientific investigations and engineering designs, such as solar cell design, studying biological ion channels for diseases, and creating clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter are the major challenges for accurate and efficient numerical computations. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, divergence free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long range electrostatic interactions in ion channels, we have developed an image charge based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model speeds up the molecular dynamics simulation of transport in biological ion channels.

  9. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated and requires no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself from other possible re-slicing software solutions through its complete automation and advanced processing and analysis capabilities.

  10. A multigrid nonoscillatory method for computing high speed flows

    NASA Technical Reports Server (NTRS)

    Li, C. P.; Shieh, T. H.

    1993-01-01

    A multigrid method using different smoothers has been developed to solve the Euler equations discretized by a nonoscillatory scheme up to fourth order accuracy. The best smoothing property is provided by a five-stage Runge-Kutta technique with optimized coefficients, yet the most efficient smoother is a backward Euler technique in factored and diagonalized form. The single-grid solution for a hypersonic, viscous conic flow is in excellent agreement with the solution obtained by the third order MUSCL and Roe's method. Mach 8 inviscid flow computations for a complete entry probe have shown that the accuracy is at least as good as the symmetric TVD scheme of Yee and Harten. The implicit multigrid method is four times more efficient than the explicit multigrid technique and 3.5 times faster than the single-grid implicit technique. For a Mach 8.7 inviscid flow over a blunt delta wing at 30 deg incidence, the CPU reduction factor from the three-level multigrid computation is 2.2 on a grid of 37 x 41 x 73 nodes.
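
    The multigrid skeleton itself is compact; below is a bare-bones 1-D Poisson V-cycle with weighted-Jacobi smoothing, far simpler than the Runge-Kutta and implicit smoothers discussed above but structurally the same.

    ```python
    # Recursive V-cycle for -u'' = f on [0, 1] with homogeneous Dirichlet BCs.
    import numpy as np

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
        return r

    def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
        for _ in range(sweeps):                  # weighted Jacobi
            u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1] - 2 * u[1:-1])
        return u

    def v_cycle(u, f, h):
        u = smooth(u, f, h)
        if u.size <= 3:
            return u
        r = residual(u, f, h)[::2].copy()        # restrict residual by injection
        e = v_cycle(np.zeros_like(r), r, 2 * h)  # coarse-grid correction
        u[::2] += e                              # prolong: direct injection ...
        u[1:-1:2] += 0.5 * (e[:-1] + e[1:])      # ... plus linear interpolation
        return smooth(u, f, h)

    n = 257
    x = np.linspace(0.0, 1.0, n)
    f = np.pi**2 * np.sin(np.pi * x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, 1.0 / (n - 1))
    print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
    ```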

  11. An Overview of a Decade of Journal Publications about Culture and Human-Computer Interaction (HCI)

    NASA Astrophysics Data System (ADS)

    Clemmensen, Torkil; Roese, Kerstin

    In this paper, we analyze the concept of human-computer interaction in cultural and national contexts. Building and extending upon the framework for understanding research in usability and culture by Honold [3], we give an overview of publications in culture and HCI between 1998 and 2008, with a narrow focus on high-level journal publications only. The purpose is to review current practice in how cultural HCI issues are studied, and to analyse problems with the measures and interpretation of these studies. We find that Hofstede's cultural dimensions have been the dominant model of culture, that participants have been picked because they could speak English, and that most studies have been large scale quantitative studies. In order to balance this situation, we recommend that more researchers and practitioners do qualitative, empirical work.

  12. Combustor flow computations in general coordinates with a multigrid method

    NASA Astrophysics Data System (ADS)

    Shyy, Wei; Braaten, Mark E.

    The computational approach presented for single-phase combusting turbulent flowfields balances the requirements of complex physical and chemical flow interactions with those of resolving the three-dimensional geometrical constraints of the combustor contours, film cooling slots, and circular dilution holes. Attention is given to the three-dimensional grid-generation algorithm, the two-dimensional adaptive grid method applied to recirculating turbulent reacting flows, and theory/data assessments for three-dimensional combusting flows in an annular gas turbine combustor.

  13. Assessment of nonequilibrium radiation computation methods for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Sharma, Surendra

    1993-01-01

    The present understanding of shock-layer radiation in the low density regime, as appropriate to hypersonic vehicles, is surveyed. Based on the relative importance of electron excitation and radiation transport, the hypersonic flows are divided into three groups: weakly ionized, moderately ionized, and highly ionized flows. In the light of this division, the existing laboratory and flight data are scrutinized. Finally, an assessment of the nonequilibrium radiation computation methods for the three regimes in hypersonic flows is presented. The assessment is conducted by comparing experimental data against the values predicted by the physical model.

  14. Experiences with the Lanczos method on a parallel computer

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.; Fulton, Robert E.

    1987-01-01

    A parallel computer implementation of the Lanczos method for the free-vibration analysis of structures is considered, and results for two example problems show substantial time-reduction over the sequential solutions. The major Lanczos calculation tasks are subdivided into subtasks, and parallelism is introduced at the subtask level. A speedup of 7.8 on eight processors was obtained for the decomposition step of the problem involving a 60-m three-longeron space mast, and a speedup of 14.6 on 16 processors was obtained for the decomposition step of the problem involving a blade-stiffened graphite-epoxy panel.
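
    For reference, here is a compact dense-matrix Lanczos sketch without reorthogonalization: it builds a small tridiagonal matrix whose eigenvalues approximate the extreme eigenvalues sought in free-vibration analysis, and the matrix-vector product it repeats is exactly the step that parallel implementations distribute.

    ```python
    # Lanczos tridiagonalization of a symmetric matrix (no reorthogonalization).
    import numpy as np

    def lanczos(a, m, seed=0):
        n = a.shape[0]
        q = np.random.default_rng(seed).standard_normal(n)
        q /= np.linalg.norm(q)
        q_prev = np.zeros(n)
        alphas, betas, beta = [], [], 0.0
        for _ in range(m):
            w = a @ q - beta * q_prev      # the only matrix-vector product
            alpha = q @ w
            w -= alpha * q
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            betas.append(beta)
            q_prev, q = q, w / beta
        t = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
        return np.linalg.eigvalsh(t)       # Ritz values

    rng = np.random.default_rng(7)
    b = rng.standard_normal((300, 300))
    a = b + b.T                            # symmetric test matrix
    print("largest eigenvalue:", np.linalg.eigvalsh(a)[-1])
    print("Ritz estimate:     ", lanczos(a, 40)[-1])
    ```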

  15. Computational Studies of Protein Aggregation: Methods and Applications

    NASA Astrophysics Data System (ADS)

    Morriss-Andrews, Alex; Shea, Joan-Emma

    2015-04-01

    Protein aggregation involves the self-assembly of normally soluble proteins into large supramolecular assemblies. The typical end product of aggregation is the amyloid fibril, an extended structure enriched in β-sheet content. The aggregation process has been linked to a number of diseases, most notably Alzheimer's disease, but fibril formation can also play a functional role in certain organisms. This review focuses on theoretical studies of the process of fibril formation, with an emphasis on the computational models and methods commonly used to tackle this problem.

  16. Computational methods for improving thermal imaging for consumer devices

    NASA Astrophysics Data System (ADS)

    Lynch, Colm N.; Devaney, Nicholas; Drimbarean, Alexandru

    2015-05-01

    In consumer imaging, the spatial resolution of thermal microbolometer arrays is limited by the large physical size of the individual detector elements. This also limits the number of pixels per image. If thermal sensors are to find a place in consumer imaging, as the newly released FLIR One would suggest, this resolution issue must be addressed. Our work focuses on improving the output quality of low resolution thermal cameras through computational means. The method we propose utilises sub-pixel shifts and temporal variations in the scene, using information from thermal and visible channels. Results from simulations and lab experiments are presented.

  17. Fan Flutter Computations Using the Harmonic Balance Method

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Thomas, Jeffrey P.; Reddy, T.S.R.

    2009-01-01

    An experimental forward-swept fan encountered flutter at part-speed conditions during wind tunnel testing. A new propulsion aeroelasticity code, based on a computational fluid dynamics (CFD) approach, was used to model the aeroelastic behavior of this fan. This three-dimensional code models the unsteady flowfield due to blade vibrations using a harmonic balance method to solve the Navier-Stokes equations. This paper describes the flutter calculations and compares the results to experimental measurements and previous results from a time-accurate propulsion aeroelasticity code.
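
    The harmonic balance idea can be shown on a scalar stand-in for the flow equations, assuming a Duffing oscillator in place of Navier-Stokes: the periodic state is represented by samples over one period, time derivatives are evaluated spectrally, and the resulting algebraic system is solved directly without time marching.

    ```python
    # Harmonic balance for u'' + d u' + u + b u^3 = F cos(w t), solved as an
    # algebraic system in the sampled periodic state (no time marching).
    import numpy as np
    from scipy.optimize import fsolve

    d, b, F, w = 0.2, 1.0, 2.0, 1.2
    N = 33                                     # odd number of collocation points
    t = np.arange(N) * 2.0 * np.pi / (w * N)   # one forcing period
    k = np.fft.fftfreq(N, 1.0 / N)             # integer harmonic indices

    def residual(u):
        U = np.fft.fft(u)
        du = np.fft.ifft(1j * k * w * U).real          # spectral d/dt
        ddu = np.fft.ifft(-(k * w) ** 2 * U).real      # spectral d^2/dt^2
        return ddu + d * du + u + b * u**3 - F * np.cos(w * t)

    u = fsolve(residual, 0.1 * np.cos(w * t))          # small-amplitude guess
    print("periodic amplitude:", 0.5 * (u.max() - u.min()))
    ```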

  18. Method and apparatus for managing transactions with connected computers

    DOEpatents

    Goldsmith, Steven Y.; Phillips, Laurence R.; Spires, Shannon V.

    2003-01-01

    The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.

  19. A new method to compute lunisolar perturbations in satellite motions

    NASA Technical Reports Server (NTRS)

    Kozai, Y.

    1973-01-01

    A new method to compute lunisolar perturbations in satellite motion is proposed. The disturbing function is expressed by the orbital elements of the satellite and the geocentric polar coordinates of the moon and the sun. The secular and long periodic perturbations are derived by numerical integrations, and the short periodic perturbations are derived analytically. The perturbations due to the tides can be included in the same way. In the Appendix, the motion of the orbital plane for a synchronous satellite is discussed; it is concluded that the inclination cannot stay below 7 deg.

  20. Public open space, physical activity, urban design and public health: Concepts, methods and research agenda.

    PubMed

    Koohsari, Mohammad Javad; Mavoa, Suzanne; Villanueva, Karen; Sugiyama, Takemi; Badland, Hannah; Kaczynski, Andrew T; Owen, Neville; Giles-Corti, Billie

    2015-05-01

    Public open spaces such as parks and green spaces are key built environment elements within neighbourhoods for encouraging a variety of physical activity behaviours. Over the past decade, there has been a burgeoning number of active living research studies examining the influence of public open space on physical activity. However, the evidence shows mixed associations between different aspects of public open space (e.g., proximity, size, quality) and physical activity. These inconsistencies hinder the development of specific evidence-based guidelines for urban designers and policy-makers for (re)designing public open space to encourage physical activity. This paper aims to move this research agenda forward, by identifying key conceptual and methodological issues that may contribute to inconsistencies in research examining relations between public open space and physical activity. PMID:25779691

  1. Deterministic point inclusion methods for computational applications with complex geometry

    SciTech Connect

    Khamayseh, Ahmed; Kuprat, Andrew P.

    2008-11-21

    A fundamental problem in computation is finding practical and efficient algorithms for determining if a query point is contained within a model of a three-dimensional solid. The solid is modeled using a general boundary representation that can contain polygonal elements and/or parametric patches. We have developed two such algorithms: the first is based on a global closest feature query, and the second is based on a local intersection query. Both algorithms work for two- and three-dimensional objects. This paper presents both algorithms, as well as the spatial data structures and queries required for efficient implementation of the algorithms. Applications for these algorithms include computational geometry, mesh generation, particle simulation, multiphysics coupling, and computer graphics. These methods are deterministic in that they do not involve random perturbations of diagnostic rays cast from the query point in order to avoid ‘unclean’ or ‘singular’ intersections of the rays with the geometry. Avoiding the necessity of such random perturbations will become increasingly important as geometries become more convoluted and complex.
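
    A two-dimensional analogue conveys the "deterministic" point: the classic crossing test below uses a half-open edge rule so that rays through vertices are counted consistently, with no random ray perturbation; the paper's algorithms handle 3-D boundary representations, which this sketch does not.

    ```python
    # Deterministic 2-D point-in-polygon crossing test with a half-open rule.
    def point_in_polygon(px, py, poly):
        """poly: list of (x, y) vertices in order; returns True if (px, py) is inside."""
        inside = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            # Half-open rule: each edge owns exactly one endpoint in y, so a
            # horizontal ray through a vertex is counted once, never twice.
            if (y1 > py) != (y2 > py):
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    inside = not inside
        return inside

    square_with_notch = [(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]
    print(point_in_polygon(1.0, 1.0, square_with_notch))   # True
    print(point_in_polygon(2.0, 3.5, square_with_notch))   # False: in the notch
    ```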

  2. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel method computational fluid dynamic (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  3. Computational methods for studying G protein-coupled receptors (GPCRs).

    PubMed

    Kaczor, Agnieszka A; Rutkowska, Ewelina; Bartuzi, Damian; Targowska-Duda, Katarzyna M; Matosiuk, Dariusz; Selent, Jana

    2016-01-01

    The functioning of GPCRs is classically described by the ternary complex model as the interplay of three basic components: a receptor, an agonist, and a G protein. According to this model, receptor activation results from an interaction with an agonist, which translates into the activation of a particular G protein in the intracellular compartment that, in turn, is able to initiate particular signaling cascades. Extensive studies on GPCRs have led to new findings which open unexplored and exciting possibilities for drug design and safer and more effective treatments with GPCR targeting drugs. These include the discovery of novel signaling mechanisms such as ligand promiscuity resulting in multitarget ligands and signaling cross-talks, allosteric modulation, biased agonism, and formation of receptor homo- and heterodimers and oligomers, which can be efficiently studied with computational methods. Computer-aided drug design techniques can reduce the cost of drug development by up to 50%. In particular, structure- and ligand-based virtual screening techniques are a valuable tool for identifying new leads and have been shown to be especially efficient for GPCRs in comparison to water-soluble proteins. Modern computer-aided approaches can be helpful for the discovery of compounds with designed affinity profiles. Furthermore, homology modeling facilitated by a growing number of available templates, as well as molecular docking supported by sophisticated techniques of molecular dynamics and quantitative structure-activity relationship models, are an excellent source of information about drug-receptor interactions at the molecular level. PMID:26928552

  4. An experiment in hurricane track prediction using parallel computing methods

    NASA Technical Reports Server (NTRS)

    Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

    1994-01-01

    The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.

  5. A bibliography on finite element and related methods analysis in reactor physics computations (1971--1997)

    SciTech Connect

    Carpenter, D.C.

    1998-01-01

    This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, theses/dissertations and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name, with a subsort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, theses/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.

  6. Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Reliable noise prediction capabilities are essential to enable novel fuel-efficient open rotor designs that can meet community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity such that they are frequently employed for specific real-world applications within NASA. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by highly complex geometries. This is a first-of-its-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the peculiarities of applying the immersed boundary method to this moving boundary problem, we provide a detailed aeroacoustic analysis of the noise generation mechanisms encountered in the open rotor flow. The simulation data are compared to available experimental data and other computational results employing more conventional CFD methods. The noise generation mechanisms are analyzed employing spectral analysis, proper orthogonal decomposition and the causality method.
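
    Since the abstract names proper orthogonal decomposition among its analysis tools, the following minimal sketch shows generic snapshot POD via a thin SVD; it illustrates the technique only, not the authors' toolchain, and the random data stands in for flow-field snapshots.

        import numpy as np

        def pod_modes(snapshots, n_modes):
            # snapshots: (n_points, n_snapshots); POD acts on the fluctuations
            # about the mean field.
            fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
            U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
            energy = s ** 2 / np.sum(s ** 2)     # relative energy per mode
            coeffs = s[:, None] * Vt             # temporal coefficients
            return U[:, :n_modes], energy[:n_modes], coeffs[:n_modes]

        # Stand-in "snapshots": 1000 spatial points at 200 time instants.
        rng = np.random.default_rng(0)
        snapshots = rng.standard_normal((1000, 200))
        modes, energy, a = pod_modes(snapshots, n_modes=5)
        print("leading modal energies:", np.round(energy, 4))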

  8. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable whether the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or numerical differentiation is used to approximate them, introducing numerical error into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied. They are, respectively, the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which provide reference solutions for assessing the BEM results.
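
    The report's core numerical machinery, collocation of an integral equation into a matrix equation, can be illustrated on a much simpler one-dimensional analogue. The sketch below solves a Fredholm equation of the second kind by midpoint-rule collocation; the kernel and right-hand side are arbitrary test choices, not the convective-wave-equation kernels of the report.

        import numpy as np

        def solve_fredholm_collocation(kernel, f, n):
            # Solve u(x) - int_0^1 K(x, y) u(y) dy = f(x): approximate the
            # integral with the midpoint rule and enforce the equation at the
            # quadrature nodes, yielding a dense linear system.
            x = (np.arange(n) + 0.5) / n
            A = np.eye(n) - kernel(x[:, None], x[None, :]) / n
            return x, np.linalg.solve(A, f(x))

        # Separable test kernel K(x, y) = x*y with f = 1 has the exact
        # solution u(x) = 1 + 0.75 x.
        x, u = solve_fredholm_collocation(lambda x, y: x * y,
                                          lambda x: np.ones_like(x), n=200)
        print(np.max(np.abs(u - (1.0 + 0.75 * x))))  # small quadrature error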

  9. Immersed boundary conditions method for computational fluid dynamics problems

    NASA Astrophysics Data System (ADS)

    Husain, Syed Zahid

    This dissertation presents implicit spectrally-accurate algorithms based on the concept of immersed boundary conditions (IBC) for solving a range of computational fluid dynamics (CFD) problems where the physical domains involve boundary irregularities. Both fixed and moving irregularities are considered, with particular emphasis placed on two-dimensional moving boundary problems. The physical model problems considered comprise the Laplace operator, the biharmonic operator and the Navier-Stokes equations, and thus cover the most commonly encountered types of operators in CFD analyses. The IBC algorithm uses a fixed and regular computational domain with the flow domain immersed inside it. Boundary conditions along the edges of the time-dependent flow domain enter the algorithm in the form of internal constraints. Spectral spatial discretization for two-dimensional problems is based on Fourier expansions in the stream-wise direction and Chebyshev expansions in the normal-to-the-wall direction. Up to fourth-order implicit temporal discretization methods have been implemented. The IBC algorithm is shown to deliver the theoretically predicted accuracy in both time and space. Construction of the boundary constraints in the IBC algorithm provides degrees of freedom in excess of those required to formulate a closed system of algebraic equations. The 'classical IBC formulation' retains only as many boundary constraints as are needed to form a closed system of equations. The use of additional boundary constraints leads to the 'over-determined formulation' of the IBC algorithm. Over-determined systems are explored in order to improve the accuracy of the IBC method and to expand its applicability to more extreme geometries. Standard direct over-determined solvers based on evaluation of pseudo-inverses of the complete coefficient matrices have been tested on three model problems, namely, the Laplace equation, the biharmonic equation and the Navier-Stokes equations.
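
    A minimal sketch of the closed versus over-determined formulations in generic linear-algebra terms (the matrices below are random stand-ins, not the actual IBC operators): the closed system is solved directly, while the system with surplus constraints is solved in the least-squares (pseudo-inverse) sense.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 50
        x_true = rng.standard_normal(n)

        # Classical formulation: exactly enough constraints -> square solve.
        A_closed = rng.standard_normal((n, n))
        x_classical = np.linalg.solve(A_closed, A_closed @ x_true)

        # Over-determined formulation: surplus constraints appended and the
        # system solved in the least-squares (pseudo-inverse) sense.
        A_extra = rng.standard_normal((20, n))
        A_over = np.vstack([A_closed, A_extra])
        b_over = A_over @ x_true + 1e-8 * rng.standard_normal(n + 20)  # slightly inconsistent
        x_over, residual, rank, sv = np.linalg.lstsq(A_over, b_over, rcond=None)

        print(np.linalg.norm(x_classical - x_true), np.linalg.norm(x_over - x_true))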

  10. Computational analysis of methods for reduction of induced drag

    NASA Technical Reports Server (NTRS)

    Janus, J. M.; Chatterjee, Animesh; Cave, Chris

    1993-01-01

    The purpose of this effort was to perform a computational flow analysis of a design concept centered around induced drag reduction and tip-vortex energy recovery. The flow model solves the unsteady three-dimensional Euler equations, discretized as a finite-volume method, utilizing a high-resolution approximate Riemann solver for cell interface flux definitions. The numerical scheme is an approximately-factored block LU implicit Newton iterative-refinement method. Multiblock domain decomposition is used to partition the field into an ordered arrangement of blocks. Three configurations are analyzed: a baseline fuselage-wing, a fuselage-wing-nacelle, and a fuselage-wing-nacelle-propfan. Aerodynamic force coefficients, propfan performance coefficients, and flowfield maps are used to qualitatively assess design efficacy. Where appropriate, comparisons are made with available experimental data.

  11. A computational design method for transonic turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Dulikravich, D. S.

    1982-01-01

    This paper describes a systematic computational procedure for finding the configuration changes necessary to make the flow past turbomachinery cascades, channels and nozzles shock-free at prescribed transonic operating conditions. The method is based on a finite-area transonic analysis technique and the fictitious gas approach. This design scheme has two major areas of application. First, it can be used for the design of supercritical cascades, with applications mainly in compressor blade design. Second, it provides subsonic inlet shapes, including sonic surfaces with suitable initial data, for the design of supersonic (accelerated) exits, such as nozzles and turbine cascade shapes. This fast, accurate and economical method, with a proven potential for application to three-dimensional flows, is illustrated by some design examples.

  12. Radiation Transport Computation in Stochastic Media: Method and Application

    NASA Astrophysics Data System (ADS)

    Liang, Chao

    Stochastic media, characterized by the stochastic distribution of inclusions in a background medium, are typical radiation transport media encountered in natural or engineering systems. In the community of radiation transport computation, there is always a demand for accurate and efficient methods that can account for the nature of the stochastic distribution. In this dissertation, we focus on methodology development for radiation transport computation applied to neutronic analyses of nuclear reactor designs characterized by the stochastic distribution of particle fuel. Reactor concepts employing a fuel design consisting of a random heterogeneous mixture of fissile material and non-fissile moderator continue to be proposed. Key physical quantities such as core criticality and power distribution, reactivity control design parameters, depletion and fuel burn-up need to be carefully evaluated. In order to meet these practical requirements, we first need to develop accurate and fast computational methods that can effectively account for the stochastic nature of the double-heterogeneity configuration. A Monte Carlo-based method, Chord Length Sampling (CLS), is considered a promising approach for analyzing such TRISO-type fueled reactors. Although the CLS method was proposed more than two decades ago and much research has been conducted to enhance its applicability, further efforts are still needed to address some key research gaps. (1) There is a general lack of thorough investigation of the factors that give rise to the inaccuracy of the CLS method found by many researchers. The accuracy of the CLS method depends on the optical and geometric properties of the system. In some specific scenarios, considerable inaccuracies have been reported. However, no research has provided a clear interpretation of the reasons for the inaccuracy in the reported scenarios.
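
    The essence of Chord Length Sampling can be conveyed with a deliberately simplified one-dimensional sketch: rather than resolving each inclusion explicitly, the distance to the next material interface is drawn from an exponential chord-length distribution. The slab width, mean chord lengths and cross sections below are arbitrary illustrative numbers, and the physics is reduced to pure absorption.

        import numpy as np

        def cls_transmission(width, chord_matrix, chord_incl, sig_matrix, sig_incl,
                             n_histories, rng):
            # March each history across the slab, drawing the length of every
            # matrix/inclusion segment from an exponential chord-length
            # distribution instead of tracking explicit inclusion geometry.
            survived = 0
            for _ in range(n_histories):
                x, inside, tau = 0.0, False, 0.0
                while x < width:
                    seg = min(rng.exponential(chord_incl if inside else chord_matrix),
                              width - x)
                    tau += seg * (sig_incl if inside else sig_matrix)
                    x += seg
                    inside = not inside          # cross into the other material
                if rng.random() < np.exp(-tau):  # survives absorption along the track
                    survived += 1
            return survived / n_histories

        rng = np.random.default_rng(42)
        print(cls_transmission(width=10.0, chord_matrix=2.0, chord_incl=0.5,
                               sig_matrix=0.05, sig_incl=0.8,
                               n_histories=20000, rng=rng))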

  13. Implicit extrapolation methods for multilevel finite element computations

    SciTech Connect

    Jung, M.; Ruede, U.

    1994-12-31

    The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid τ-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that the algorithm differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore neither requires uniform meshes nor global regularity assumptions. In the paper the authors analyse the τ-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the τ-extrapolation results are compared to higher-order finite element solutions.

  14. Matrix element method for high performance computing platforms

    NASA Astrophysics Data System (ADS)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    Considerable effort has been devoted by the ATLAS and CMS teams to improve the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to face up to the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS dataset at a moderate cost. In this article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfactory metric for the upcoming Run 2. Future work will consist of finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.
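
    Because each event's MEM weight is an independent phase-space integral, the method parallelizes naturally over events. The sketch below shows that event-level pattern with Python's multiprocessing and a toy integrand; the weight function, event fields and sample counts are invented stand-ins, not the ATLAS/CMS codes (which, as described above, use MPI and OpenCL).

        import numpy as np
        from multiprocessing import Pool

        def mem_weight(event):
            # Stand-in per-event weight: a Monte Carlo phase-space integral of a
            # toy integrand. A real MEM weight integrates |M|^2 convolved with
            # parton densities and detector transfer functions.
            rng = np.random.default_rng(event["id"])
            z = rng.uniform(0.0, 1.0, size=(20000, 4))   # toy 4-dim phase space
            return float(np.exp(-np.sum((z - event["x"]) ** 2, axis=1)).mean())

        if __name__ == "__main__":
            rng = np.random.default_rng(7)
            events = [{"id": i, "x": rng.uniform(0.0, 1.0, 4)} for i in range(64)]
            with Pool() as pool:                         # one weight per event, in parallel
                weights = pool.map(mem_weight, events)
            print(len(weights), float(np.mean(weights)))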

  15. Recent advances in computer camera methods for machine vision

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1998-10-01

    During the past year, several new computer camera methods (hardware and software) have been developed which have applications in machine vision. These are described below, along with some test results. The improvements are generally in the direction of higher speed and greater parallelism. A PCI interface card has been designed which is adaptable to multiple CCD types, both color and monochrome. A newly designed A/D converter allows for a choice of 8 or 10-bit conversion resolution and a choice of two different analog inputs. Thus, by using four of these converters feeding the 32-bit PCI data bus, up to 8 camera heads can be used with a single PCI card, and four camera heads can be operated in parallel. The card has been designed so that any of 8 different CCD types can be used with it (6 monochrome and 2 color CCDs) ranging in resolution from 192 by 165 pixels up to 1134 by 972 pixels. In the area of software, a method has been developed to better utilize the decision-making capability of the computer along with the sub-array scan capabilities of many CCDs. Specifically, it is shown below how to achieve a dual scan mode camera system wherein one scan mode is a low density, high speed scan of a complete image area, and a higher density sub-array scan is used in those areas where changes have been observed. The name given to this technique is adaptive sub-array scanning.
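
    A software-only sketch of the dual scan mode described above, assuming a hypothetical read(y0, y1, x0, x1, step) readout interface standing in for a CCD sub-array scan: a sparse full-frame pass locates changed blocks, and only those blocks are re-read at full density.

        import numpy as np

        def adaptive_scan(read, prev_coarse, shape, step=4, block=32, thresh=10.0):
            # Pass 1: sparse full-frame scan (every `step`-th pixel).
            coarse = read(0, shape[0], 0, shape[1], step)
            diff = np.abs(coarse.astype(float) - prev_coarse)
            # Pass 2: dense sub-array reads of changed blocks only.
            cb = block // step                   # block size in coarse pixels
            patches = {}
            for by in range(0, diff.shape[0], cb):
                for bx in range(0, diff.shape[1], cb):
                    if diff[by:by + cb, bx:bx + cb].mean() > thresh:
                        y0, x0 = by * step, bx * step
                        patches[(y0, x0)] = read(y0, y0 + block, x0, x0 + block, 1)
            return coarse, patches

        def make_reader(frame):                  # stands in for the camera readout
            return lambda y0, y1, x0, x1, s: frame[y0:y1:s, x0:x1:s]

        rng = np.random.default_rng(3)
        frame1 = rng.integers(0, 20, (256, 256))
        frame2 = frame1.copy()
        frame2[100:140, 60:100] += 200           # something moved into the scene

        prev, _ = adaptive_scan(make_reader(frame1), np.zeros((64, 64)), frame1.shape)
        _, patches = adaptive_scan(make_reader(frame2), prev, frame2.shape)
        print("dense re-reads at:", sorted(patches))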

  16. Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Reliable noise prediction capabilities are essential to enable novel fuel-efficient open rotor designs that can meet community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity where more and more complex flow problems can be tackled with this approach. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by a contra-rotating open rotor. This is a first-of-its-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the methodology for applying the immersed boundary method to this moving boundary problem, we provide a detailed validation of the aeroacoustic analysis approach employing the Launch Ascent and Vehicle Aerodynamics (LAVA) solver. Two free-stream Mach numbers, M=0.2 and M=0.78, are considered in this analysis, corresponding to the nominal take-off and cruise flow conditions. The simulation data are compared to available experimental data and other computational results employing more conventional CFD methods. Spectral analysis is used to determine the dominant wave propagation pattern in the acoustic near-field.

  17. Parallel computation of multigroup reactivity coefficient using iterative method

    SciTech Connect

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-09

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets take the form of stainless-steel tubes in which layers of high-enriched uranium are superimposed; irradiating the tubes produces fission products, which are widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core, however, can disturb core performance; one such disturbance arises from changes in flux or reactivity. It is therefore necessary to study methods for calculating safety margins under the configuration changes that occur over the life of the reactor, and making the code faster becomes an absolute necessity. An advantage of the perturbation method is that the neutron safety margin of the research reactor can be reused in the reactivity calculation without modification. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions and uranium contents. This model is computationally demanding, and several parallel iterative algorithms have been developed for solving the resulting large sparse matrix systems. Red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation, one component of the safety analysis, was developed with parallel processing; the calculation can be performed more quickly and efficiently by exploiting the parallelism of a multicore computer. The code was applied to the calculation of safety limits for irradiated FPM targets with increasing uranium content.
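
    As a generic illustration of the red-black Gauss-Seidel iteration named above (applied here to a model Poisson problem, not to the multigroup diffusion code itself): splitting the grid into two interleaved colors lets every point of one color update simultaneously, which is what makes the sweep parallelizable.

        import numpy as np

        def red_black_gauss_seidel(f, h, n_sweeps):
            # Red-black Gauss-Seidel for -laplacian(u) = f on the unit square,
            # u = 0 on the boundary. All points of one color depend only on the
            # other color, so each half-sweep is fully parallel (vectorized here).
            u = np.zeros_like(f)
            iy, ix = np.indices(f.shape)
            interior = ((iy > 0) & (iy < f.shape[0] - 1) &
                        (ix > 0) & (ix < f.shape[1] - 1))
            colors = (interior & ((ix + iy) % 2 == 0),   # red
                      interior & ((ix + iy) % 2 == 1))   # black
            for _ in range(n_sweeps):
                for color in colors:
                    nbr = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                           np.roll(u, 1, 1) + np.roll(u, -1, 1))
                    u[color] = 0.25 * (nbr[color] + h * h * f[color])
            return u

        n = 33
        h = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        X, Y = np.meshgrid(x, x)
        u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
        f = 2.0 * np.pi ** 2 * u_exact           # manufactured right-hand side
        u = red_black_gauss_seidel(f, h, n_sweeps=2000)
        print(np.max(np.abs(u - u_exact)))       # discretization-level error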

  18. Parallel computation of multigroup reactivity coefficient using iterative method

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets take the form of stainless-steel tubes in which layers of high-enriched uranium are superimposed; irradiating the tubes produces fission products, which are widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core, however, can disturb core performance; one such disturbance arises from changes in flux or reactivity. It is therefore necessary to study methods for calculating safety margins under the configuration changes that occur over the life of the reactor, and making the code faster becomes an absolute necessity. An advantage of the perturbation method is that the neutron safety margin of the research reactor can be reused in the reactivity calculation without modification. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions and uranium contents. This model is computationally demanding, and several parallel iterative algorithms have been developed for solving the resulting large sparse matrix systems. Red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation, one component of the safety analysis, was developed with parallel processing; the calculation can be performed more quickly and efficiently by exploiting the parallelism of a multicore computer. The code was applied to the calculation of safety limits for irradiated FPM targets with increasing uranium content.

  19. Computational method for calligraphic style representation and classification

    NASA Astrophysics Data System (ADS)

    Zhang, Xiafen; Nagy, George

    2015-09-01

    A large collection of reproductions of calligraphy on paper was scanned into images to enable web access for both the academic community and the public. Calligraphic paper digitization technology is mature, but technology for segmentation, character coding, style classification, and identification of calligraphy is lacking. Therefore, computational tools for classification and quantification of calligraphic style are proposed and demonstrated on a statistically characterized corpus. A subset of 259 historical page images is segmented into 8719 individual character images. Calligraphic style is revealed and quantified by visual attributes (i.e., appearance features) of character images sampled from historical works. A style space is defined with the features of five main classical styles as basis vectors. Cross-validated error rates of 10% to 40% are reported on conventional and conservative sampling into training/test sets and on same-work voting with a range of voter participation. Beyond its immediate applicability to education and scholarship, this research lays the foundation for style-based calligraphic forgery detection and for discovery of latent calligraphic groups induced by mentor-student relationships.

  20. Publications

    Cancer.gov

    Information about NCI publications including PDQ cancer information for patients and health professionals, patient-education publications, fact sheets, dictionaries, NCI blogs and newsletters, and major reports.

  1. 76 FR 67418 - Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ...The National Institute of Standards and Technology (NIST) publishes this notice to seek public comments on the first draft of Special Publication 500-293, US Government Cloud Computing Technology Roadmap, Release 1.0 (Draft). This document is intended to be the mechanism to define and communicate interoperability, portability, and security requirement priorities that must be met in terms of...

  2. Computer Anxiety and Students' Preferred Feedback Methods in EFL Writing

    ERIC Educational Resources Information Center

    Matsumura, Shoichi; Hann, George

    2004-01-01

    Computer-mediated instruction plays a significant role in foreign language education. The incorporation of computer technology into the classroom has also been accompanied by an increasing number of students who experience anxiety when interacting with computers. This study examined the effects of computer anxiety on students' choice of feedback…

  3. A computational method to predict carbonylation sites in yeast proteins.

    PubMed

    Lv, H Q; Liu, J; Han, J Q; Zheng, J G; Liu, R L

    2016-01-01

    Several post-translational modifications (PTMs) have been discussed in the literature. Among the variety of oxidative stress-induced PTMs, protein carbonylation is considered a biomarker of oxidative stress. Only certain proteins can be carbonylated because only four amino acid residues, namely lysine (K), arginine (R), threonine (T) and proline (P), are susceptible to carbonylation. The yeast proteome is an excellent model to explore oxidative stress, especially protein carbonylation. Current experimental approaches to identifying carbonylation sites are expensive, time-consuming and limited in the number of proteins they can process. Furthermore, there is no bioinformatics method to predict carbonylation sites in yeast proteins. Therefore, we propose a computational method to predict yeast carbonylation sites. This method has total accuracies of 86.32, 85.89, 84.80, and 86.80% in predicting the carbonylation sites of K, R, T, and P, respectively. These results were confirmed by 10-fold cross-validation. The ability of different kinds of features to identify carbonylation sites was analyzed, and the position-specific composition of the modification site-flanking residues was discussed. Additionally, a software tool has been developed to help with the calculations in this method. Datasets and the software are available at https://sourceforge.net/projects/hqlstudio/files/CarSpred.Y/. PMID:27420944

  4. Matching wind turbine rotors and loads: computational methods for designers

    SciTech Connect

    Seale, J.B.

    1983-04-01

    This report provides a comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications. The user must supply: (1) turbine aerodynamic efficiency as a function of tip-speed ratio; (2) mechanical load torque as a function of rotation speed; (3) useful delivered power as a function of incoming mechanical power; (4) site average windspeed and, for maximum accuracy, distribution data. The description of the data includes governing limits consistent with the capacities of components. The report develops a step-by-step method for converting the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) a decision is made as to how turbine power is to be governed (it may self-govern) to ensure the safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics come into play to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach: computer-generated families of coefficient curves provide data for algebraic scaling formulas. The method leads not only to energy predictions, but also to insight into the processes being modeled. Direct use of a computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
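
    A minimal numeric sketch of the matching step, with an invented Gaussian Cp(lambda) curve and a pump-like load torque standing in for the user-supplied data (none of the numbers come from the report): at each windspeed, the equilibrium rotation speed is the root of turbine torque minus load torque, and delivered power follows.

        import numpy as np
        from scipy.optimize import brentq

        RHO, R = 1.225, 5.0                      # air density [kg/m^3], rotor radius [m]
        AREA = np.pi * R ** 2

        def cp(tsr):
            # Invented aerodynamic-efficiency curve peaking near tip-speed ratio 7.
            return 0.45 * np.exp(-0.5 * ((tsr - 7.0) / 2.5) ** 2)

        def turbine_torque(omega, v):
            power = 0.5 * RHO * AREA * cp(omega * R / v) * v ** 3
            return power / max(omega, 1e-9)

        def load_torque(omega):
            return 40.0 + 2.0 * omega ** 1.5     # invented pump-like load curve

        # Steps (1)-(3): at each windspeed find the rotation speed where turbine
        # torque balances load torque, then evaluate the power delivered there.
        for v in (4.0, 6.0, 8.0, 10.0):
            omega = brentq(lambda w: turbine_torque(w, v) - load_torque(w), 0.1, 60.0)
            p_kw = turbine_torque(omega, v) * omega / 1e3
            print(f"v = {v:4.1f} m/s   omega = {omega:5.2f} rad/s   P = {p_kw:6.2f} kW")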

  5. Ab initio methods for nuclear properties - a computational physics approach

    NASA Astrophysics Data System (ADS)

    Maris, Pieter

    2011-04-01

    A microscopic theory for the structure and reactions of light nuclei poses formidable challenges for high-performance computing. Several ab-initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab-initio no-core full configuration (NCFC) approach is based on basis space expansion methods and uses Slater determinants of single-nucleon basis functions to express the nuclear wave function. In this approach, the quantum many-particle problem becomes a large sparse matrix eigenvalue problem. The eigenvalues of this matrix give us the binding energies, and the corresponding eigenvectors the nuclear wave functions. These wave functions can be employed to evaluate experimental quantities. In order to reach numerical convergence for fundamental problems of interest, the matrix dimension often exceeds 1 billion, and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. I discuss different strategies for distributing and solving this large sparse matrix on current multicore computer architectures, including methods to deal with the memory bottleneck. Several of these strategies have been implemented in the code MFDn, which is a parallel Fortran code for nuclear structure calculations. I will show scaling behavior and compare the performance of the pure MPI version with the hybrid MPI/OpenMP code on Cray XT4 and XT5 platforms. For large core counts (typically 5,000 and above), the hybrid version is more efficient than pure MPI. With this code, we have been able to predict properties of the unstable nucleus 14F, which have since been confirmed by experiments. I will also give an overview of other recent results for nuclei in the A = 6 to 16 range with 2- and 3-body interactions. Supported in part by US DOE Grant DE-FC02-09ER41582.
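
    The computational core described here, extracting a few extreme eigenpairs of a huge sparse symmetric matrix, can be sketched at toy scale with SciPy's Lanczos-based eigsh; the random 20,000-dimensional matrix below merely stands in for a configuration-interaction Hamiltonian, and MFDn itself is a distributed Fortran code.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        # Random sparse symmetric stand-in for the configuration-interaction
        # Hamiltonian (real dimensions can exceed 1e9; this toy uses 2e4).
        n = 20000
        rng = np.random.default_rng(0)
        A = sp.random(n, n, density=1e-4, random_state=rng, format="csr")
        H = 0.5 * (A + A.T) - 5.0 * sp.identity(n, format="csr")

        # Lanczos returns the few lowest eigenpairs without forming a dense
        # matrix; eigenvalues play the role of binding energies and the
        # eigenvectors that of many-body wave functions.
        energies, states = eigsh(H, k=4, which="SA")
        print("lowest eigenvalues:", np.sort(energies))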

  6. Presenting an Environmental Analysis to the Public: An Interactive Computer Based Approach.

    NASA Astrophysics Data System (ADS)

    Stauffer, P.; Hopkins, J.; Birdsell, K.; Hollis, D.

    2001-12-01

    The Los Alamos National Laboratory (LANL) Environmental Restoration Project is currently involved in the clean-up of many legacy waste sites associated with work performed in the past at LANL. A growing part of the ER mission is to involve the public in the processes of monitoring, remediation, and stewardship. The challenge of presenting a complex environmental analysis to the public is addressed via an educational exercise that uses web-based applications to allow interactive learning from a home computer. The presentation begins with discussions of the site history, regulations, and basic facts about VOCs. Measured concentrations of vapor-phase VOCs are shown on figures which clearly relate the plume to features of concern such as the water table and nearby surface facilities. Nature and extent are demonstrated with an animation that visually shows the relationship of the vapor-phase VOC plume to the monitoring boreholes. Simulations of VOC vapor transport are described and compared to data. Conclusions based on the data and modeling complete the exercise. We hope to use this type of educational tool in the future to provide the public with the knowledge they need to become more proactive in the process of remediating legacy waste sites.

  7. A Critical Review of Computer-Assisted Learning in Public Health via the Internet, 1999-2008

    ERIC Educational Resources Information Center

    Corda, Kirsten W.; Polacek, Georgia N. L. J.

    2009-01-01

    Computers and the internet have been utilized as viable avenues for public health education delivery. Yet the effectiveness of these tools, e.g., in producing behavior change, has been limited. Previous reviews have focused on single health topics such as smoking cessation and weight loss. This review broadens the scope to consider computer-assisted…

  8. Computational modeling of multicellular constructs with the material point method.

    PubMed

    Guilkey, James E; Hoying, James B; Weiss, Jeffrey A

    2006-01-01

    Computational modeling of the mechanics of cells and multicellular constructs with standard numerical discretization techniques such as the finite element (FE) method is complicated by the complex geometry, material properties and boundary conditions that are associated with such systems. The objectives of this research were to apply the material point method (MPM), a meshless method, to the modeling of vascularized constructs by adapting the algorithm to accurately handle quasi-static, large deformation mechanics, and to apply the modified MPM algorithm to large-scale simulations using a discretization that was obtained directly from volumetric confocal image data. The standard implicit time integration algorithm for MPM was modified to allow the background computational grid to remain fixed with respect to the spatial distribution of material points during the analysis. This algorithm was used to simulate the 3D mechanics of a vascularized scaffold under tension, consisting of growing microvascular fragments embedded in a collagen gel, by discretizing the construct with over 13.6 million material points. Baseline 3D simulations demonstrated that the modified MPM algorithm was both more accurate and more robust than the standard MPM algorithm. Scaling studies demonstrated the ability of the parallel code to scale to 200 processors. Optimal discretization was established for the simulations of the mechanics of vascularized scaffolds by examining stress distributions and reaction forces. Sensitivity studies demonstrated that the reaction force during simulated extension was highly sensitive to the modulus of the microvessels, despite the fact that they comprised only 10.4% of the volume of the total sample. In contrast, the reaction force was relatively insensitive to the effective Poisson's ratio of the entire sample. These results suggest that the MPM simulations could form the basis for estimating the modulus of the embedded microvessels through a parameter estimation procedure.
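
    The defining ingredient of MPM, transferring particle data to a background grid through shape functions, can be shown in one dimension. The sketch below is a generic particle-to-grid transfer with linear hat functions; it is not the paper's implicit 3D algorithm, and all numbers are arbitrary.

        import numpy as np

        def particles_to_grid(xp, mp, vp, h, n_nodes):
            # Linear (hat) shape functions: each particle splits its mass and
            # momentum between its two nearest grid nodes.
            i = np.floor(xp / h).astype(int)
            w_right = xp / h - i
            w_left = 1.0 - w_right
            mass = np.zeros(n_nodes)
            momentum = np.zeros(n_nodes)
            np.add.at(mass, i, w_left * mp)
            np.add.at(mass, i + 1, w_right * mp)
            np.add.at(momentum, i, w_left * mp * vp)
            np.add.at(momentum, i + 1, w_right * mp * vp)
            v_grid = np.divide(momentum, mass,
                               out=np.zeros(n_nodes), where=mass > 0)
            return mass, v_grid

        # 1000 material points on an 11-node grid spanning [0, 1]: two blocks
        # moving toward each other.
        rng = np.random.default_rng(5)
        xp = rng.uniform(0.05, 0.95, 1000)
        mp = np.full(xp.size, 1e-3)
        vp = np.where(xp < 0.5, 1.0, -1.0)
        mass, v_grid = particles_to_grid(xp, mp, vp, h=0.1, n_nodes=11)
        print(np.round(v_grid, 3))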

  9. Graphical Methods: A Review of Current Methods and Computer Hardware and Software. Technical Report No. 27.

    ERIC Educational Resources Information Center

    Bessey, Barbara L.; And Others

    Graphical methods for displaying data, as well as available computer software and hardware, are reviewed. The authors have emphasized the types of graphs which are most relevant to the needs of the National Center for Education Statistics (NCES) and its readers. The following types of graphs are described: tabulations, stem-and-leaf displays,…

  10. Developing a personal computer-based data visualization system using public domain software

    NASA Astrophysics Data System (ADS)

    Chen, Philip C.

    1999-03-01

    The current research will investigate the possibility of developing a computing-visualization system using a public domain software system built on a personal computer. The Visualization Toolkit (VTK) is available on UNIX and PC platforms. VTK uses C++ to build an executable. It has abundant programming classes/objects contained in the system library. Users can also develop their own classes/objects in addition to those existing in the class library. Users can develop applications with any of the C++, Tcl/Tk, and Java environments. The present research will show how a data visualization system can be developed with VTK running on a personal computer. The topics will include: execution efficiency; visual object quality; availability of the user interface design; and exploring the feasibility of a VTK-based World Wide Web data visualization system. The present research will feature a case study showing how to use VTK to visualize meteorological data with techniques including iso-surfaces, volume rendering, vector display, and composite analysis. The study also shows how the VTK outline, axes, and two-dimensional annotation text and title enhance the data presentation. The present research will also demonstrate how VTK works in an internet environment while accessing an executable from a Java application in a web page.

  11. Novel computational methods to design protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Zhou, Alice Qinhua; O'Hern, Corey; Regan, Lynne

    2014-03-01

    Despite the abundance of structural data, we still cannot accurately predict the structural and energetic changes resulting from mutations at protein interfaces. The inadequacy of current computational approaches to the analysis and design of protein-protein interactions has hampered the development of novel therapeutic and diagnostic agents. In this work, we apply a simple physical model that includes only a minimal set of geometrical constraints, excluded volume, and attractive van der Waals interactions to 1) rank the binding affinity of mutants of tetratricopeptide repeat proteins with their cognate peptides, 2) rank the energetics of binding of small designed proteins to the hydrophobic stem region of the influenza hemagglutinin protein, and 3) predict the stability of T4 lysozyme and staphylococcal nuclease mutants. This work will not only lead to a fundamental understanding of protein-protein interactions, but also to the development of efficient computational methods to rationally design protein interfaces with tunable specificity and affinity, and numerous applications in biomedicine. NSF DMR-1006537, PHY-1019147, Raymond and Beverly Sackler Institute for Biological, Physical and Engineering Sciences, and Howard Hughes Medical Institute.

  12. Inter-Domain Redundancy Path Computation Methods Based on PCE

    NASA Astrophysics Data System (ADS)

    Hayashi, Rie; Oki, Eiji; Shiomoto, Kohei

    This paper evaluates three inter-domain redundancy path computation methods based on PCE (Path Computation Element). Some inter-domain paths carry traffic that must be assured of high-quality and high-reliability transfer, such as telephony over IP and premium virtual private networks (VPNs). It is therefore important to set up inter-domain redundancy paths, i.e., primary and secondary paths. The first scheme utilizes an existing protocol and the basic PCE implementation; it does not need any extension or modification. In the second scheme, PCEs make a virtual shortest path tree (VSPT) considering the candidates for primary paths that have corresponding secondary paths. The goal is to reduce blocking probability; corresponding secondary paths may be found more often after a primary path is decided, and no protocol extension is necessary. In the third scheme, PCEs make a VSPT considering all candidates for primary and secondary paths. Blocking probability is further decreased since all possible candidates are located, and the sum of primary and secondary path cost is reduced by choosing the pair with minimum cost among all path pairs. Numerical evaluations show that the second and third schemes offer only a few percent reduction in blocking probability and total path-pair cost, while the overheads they impose, protocol revision and increases in the amount of calculation and information to be exchanged, are large. This suggests that the first scheme, the most basic and simple one, is the best choice.
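
    A toy sketch of the underlying primary/secondary computation on a single graph (abstracting away all PCE protocol machinery), assuming the networkx library: the basic scheme picks the shortest primary first and then searches the residual graph for a link-disjoint secondary, which is exactly the greedy choice whose occasional failures motivate the VSPT-based schemes.

        import networkx as nx

        def redundant_path_pair(G, src, dst):
            # Choose the shortest primary first, then look for a secondary on
            # the residual graph with the primary's links removed. This greedy
            # order can fail even when a disjoint pair exists.
            primary = nx.shortest_path(G, src, dst, weight="weight")
            H = G.copy()
            H.remove_edges_from(zip(primary, primary[1:]))
            try:
                secondary = nx.shortest_path(H, src, dst, weight="weight")
            except nx.NetworkXNoPath:
                secondary = None
            return primary, secondary

        # Toy topology: two border links between a source and destination domain.
        G = nx.Graph()
        G.add_weighted_edges_from([("s", "a", 1), ("a", "b1", 1), ("a", "b2", 2),
                                   ("s", "b2", 4), ("b1", "t", 1), ("b2", "t", 1)])
        print(redundant_path_pair(G, "s", "t"))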

  13. Methods for increased computational efficiency of multibody simulations

    NASA Astrophysics Data System (ADS)

    Epple, Alexander

    This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion, resulting in significant savings in computational cost. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail: the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-α method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed, and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.

  14. Pedagogical Methods of Teaching "Women in Public Speaking."

    ERIC Educational Resources Information Center

    Pederson, Lucille M.

    A course on women in public speaking, developed at the University of Cincinnati, focuses on the rhetoric of selected women who have been involved in various movements and causes in the United States in the twentieth century. Women studied include educator Mary McLeod Bethune, Congresswoman Jeannette Rankin, suffragette Carrie Chapman Catt, Helen…

  15. "Equal Educational Opportunity": Alternative Financing Methods for Public Education.

    ERIC Educational Resources Information Center

    Akin, John S.

    This paper traces the evolution of state-local public education finance systems to the present; examines the prevalent foundation system of finance; discusses the "Serrano" decision and its implications for foundation systems; and, after an examination of three possible new approaches, recommends an education finance system. The first of the new…

  16. Methods of Reducing the Cost of Public Housing. Revised Edition.

    ERIC Educational Resources Information Center

    Callender, John H.; Aureli, Giles

    An in-depth study of public housing in New York focuses almost exclusively upon the cost analysis aspect of decision. The costs of various construction techniques, design arrangements, and materials have been collected and analyzed. The stated aim of the report is to reduce cost as much as possible, with user comfort being a secondary…

  17. Public Experiments and Their Analysis with the Replication Method

    ERIC Educational Resources Information Center

    Heering, Peter

    2007-01-01

    One of those who failed to establish himself as a natural philosopher in 18th-century Paris was the future revolutionary Jean Paul Marat. He not only published several monographs on heat, optics and electricity, in which he attempted to characterise his work as purely empirical, but also tried to establish himself as a public lecturer.…

  18. Methods and computer readable medium for improved radiotherapy dosimetry planning

    DOEpatents

    Wessol, Daniel E.; Frandsen, Michael W.; Wheeler, Floyd J.; Nigg, David W.

    2005-11-15

    Methods and computer readable media are disclosed for ultimately developing a dosimetry plan for a treatment volume irradiated during radiation therapy with a radiation source concentrated internally within a patient or incident from an external beam. The dosimetry plan is available in near "real-time" because of the novel geometric model construction of the treatment volume which in turn allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of alpha, beta or gamma emissions emanating from an internal radiation source during various radiotherapies, such as brachytherapy or targeted radionuclide therapy, or they are exemplary representations of high-energy photons, electrons, protons or other ionizing particles incident on the treatment volume from an external source. In a preferred embodiment, a medical image of a treatment volume irradiated during radiotherapy having a plurality of pixels of information is obtained.

  19. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets, and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  20. Modern wing flutter analysis by computational fluid dynamics methods

    NASA Technical Reports Server (NTRS)

    Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.

    1988-01-01

    The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data, which is a first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.

  1. Emerging Computational Methods for the Rational Discovery of Allosteric Drugs.

    PubMed

    Wagner, Jeffrey R; Lee, Christopher T; Durrant, Jacob D; Malmstrom, Robert D; Feher, Victoria A; Amaro, Rommie E

    2016-06-01

    Allosteric drug development holds promise for delivering medicines that are more selective and less toxic than those that target orthosteric sites. To date, the discovery of allosteric binding sites and lead compounds has been mostly serendipitous, achieved through high-throughput screening. Over the past decade, structural data has become more readily available for larger protein systems and more membrane protein classes (e.g., GPCRs and ion channels), which are common allosteric drug targets. In parallel, improved simulation methods now provide better atomistic understanding of the protein dynamics and cooperative motions that are critical to allosteric mechanisms. As a result of these advances, the field of predictive allosteric drug development is now on the cusp of a new era of rational structure-based computational methods. Here, we review algorithms that predict allosteric sites based on sequence data and molecular dynamics simulations, describe tools that assess the druggability of these pockets, and discuss how Markov state models and topology analyses provide insight into the relationship between protein dynamics and allosteric drug binding. In each section, we first provide an overview of the various method classes before describing relevant algorithms and software packages. PMID:27074285

  2. Emerging Computational Methods for the Rational Discovery of Allosteric Drugs

    PubMed Central

    2016-01-01

    Allosteric drug development holds promise for delivering medicines that are more selective and less toxic than those that target orthosteric sites. To date, the discovery of allosteric binding sites and lead compounds has been mostly serendipitous, achieved through high-throughput screening. Over the past decade, structural data has become more readily available for larger protein systems and more membrane protein classes (e.g., GPCRs and ion channels), which are common allosteric drug targets. In parallel, improved simulation methods now provide better atomistic understanding of the protein dynamics and cooperative motions that are critical to allosteric mechanisms. As a result of these advances, the field of predictive allosteric drug development is now on the cusp of a new era of rational structure-based computational methods. Here, we review algorithms that predict allosteric sites based on sequence data and molecular dynamics simulations, describe tools that assess the druggability of these pockets, and discuss how Markov state models and topology analyses provide insight into the relationship between protein dynamics and allosteric drug binding. In each section, we first provide an overview of the various method classes before describing relevant algorithms and software packages. PMID:27074285

  3. Computational method for transmission eigenvalues for a spherically stratified medium.

    PubMed

    Cheng, Xiaoliang; Yang, Jing

    2015-07-01

    We consider a computational method for the interior transmission eigenvalue problem that arises in acoustic and electromagnetic scattering. The transmission eigenvalues contain useful information about some physical properties, such as the index of refraction. Rather than the existence and estimation of spectral properties of the transmission eigenvalues, we focus on their numerical calculation, especially for spherically stratified media in R^3. Due to the nonlinearity and the special structure of the interior transmission eigenvalue problem, few numerical methods exist to date. First, we reduce the problem to a second-order ordinary differential equation. Then, we apply the Hermite finite element to the weak formulation of the equation. With proper rewriting of the matrix-vector form, we change the original nonlinear eigenvalue problem into a quadratic eigenvalue problem, which can be written as a linear system and solved by the eigs function in MATLAB. This numerical method is fast, effective, and can calculate as many transmission eigenvalues as needed at a time. PMID:26367151
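
    The reduction from a quadratic to a linear eigenvalue problem follows a standard companion linearization, sketched below in Python with SciPy's dense solver in place of MATLAB's eigs (the matrices here are trivial stand-ins, not the Hermite finite-element matrices of the paper).

        import numpy as np
        from scipy.linalg import eig

        def quadratic_eig(M, C, K):
            # Companion linearization of (lam^2 M + lam C + K) x = 0 into the
            # generalized linear problem A z = lam B z with z = [x; lam x].
            n = M.shape[0]
            Z, I = np.zeros((n, n)), np.eye(n)
            A = np.block([[Z, I], [-K, -C]])
            B = np.block([[I, Z], [Z, M]])
            lam, z = eig(A, B)
            return lam, z[:n, :]                 # keep the x-part of each vector

        # Check on a damped oscillator: m lam^2 + c lam + k = 0.
        m, c, k = 1.0, 0.2, 4.0
        lam, _ = quadratic_eig(np.array([[m]]), np.array([[c]]), np.array([[k]]))
        print(np.sort_complex(lam))
        print(np.sort_complex(np.roots([m, c, k])))  # same roots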

  4. Helping Students Soar to Success on Computers: An Investigation of the Soar Study Method for Computer-Based Learning

    ERIC Educational Resources Information Center

    Jairam, Dharmananda; Kiewra, Kenneth A.

    2010-01-01

    This study used self-report and observation techniques to investigate how students study computer-based materials. In addition, it examined if a study method called SOAR can facilitate computer-based learning. SOAR is an acronym that stands for the method's 4 theoretically driven and empirically supported components: select (S), organize (O),…

  5. 29 CFR 779.266 - Methods of computing annual volume of sales or business.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Methods of computing annual volume of sales or business... Apply; Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No computations of annual gross dollar volume are necessary to determine...

  6. 29 CFR 779.266 - Methods of computing annual volume of sales or business.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Methods of computing annual volume of sales or business... Apply; Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No computations of annual gross dollar volume are necessary to determine...

  7. 29 CFR 779.266 - Methods of computing annual volume of sales or business.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Methods of computing annual volume of sales or business... Apply; Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No computations of annual gross dollar volume are necessary to determine...

  8. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective antenna... each point of +17 dBu field strength for all radials and draw the contour by connecting the...

  9. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective antenna... each point of +17 dBu field strength for all radials and draw the contour by connecting the...

  10. GRACE: Public Health Recovery Methods following an Environmental Disaster

    PubMed Central

    Svendsen, ER; Whittle, N; Wright, L; McKeown, RE; Sprayberry, K; Heim, M; Caldwell, R; Gibson, JJ; Vena, J.

    2014-01-01

    Different approaches are necessary when Community Based Participatory Research (CBPR) of environmental illness is initiated after an environmental disaster within a community. Often such events are viewed as golden scientific opportunities for epidemiological studies. However, we believe that in such circumstances community engagement and empowerment need to be integrated into the public health service efforts in order for both the services and any science to be successful, with special care taken to address the immediate health needs of the community first, rather than the pressing need to answer important scientific questions. We demonstrate how we have simultaneously provided valuable public health service, embedded generalizable scientific knowledge, and built a successful foundation for supplemental CBPR through our ongoing recovery work after the chlorine gas disaster in Graniteville, South Carolina. PMID:20439226

  11. Pesticides and public health: integrated methods of mosquito management.

    PubMed Central

    Rose, R. I.

    2001-01-01

    Pesticides have a role in public health as part of sustainable integrated mosquito management. Other components of such management include surveillance, source reduction or prevention, biological control, repellents, traps, and pesticide-resistance management. We assess the future use of mosquito control pesticides in view of niche markets, incentives for new product development, Environmental Protection Agency registration, the Food Quality Protection Act, and improved pest management strategies for mosquito control. PMID:11266290

  12. A FAST NEW PUBLIC CODE FOR COMPUTING PHOTON ORBITS IN A KERR SPACETIME

    SciTech Connect

    Dexter, Jason; Agol, Eric

    2009-05-10

    Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. We present a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semianalytically for the first time. We discuss the method, its implementation in a freely available FORTRAN code, and its application to toy problems from the literature.
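
    For readers who want to experiment with Carlson's symmetric forms outside the authors' freely available FORTRAN code, SciPy (1.8 and later) exposes them directly; the snippet below checks the standard identity relating Carlson's R_F to the Legendre complete elliptic integral K.

        import numpy as np
        from scipy.special import ellipk, elliprf

        # Carlson's symmetric integral R_F underlies the semianalytic geodesic
        # coordinates; as a sanity check, the Legendre complete integral of the
        # first kind satisfies K(m) = R_F(0, 1 - m, 1).
        m = np.linspace(0.0, 0.9, 4)
        print(elliprf(0.0, 1.0 - m, 1.0))        # Carlson form
        print(ellipk(m))                         # Legendre form, identical values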

  13. A Fast New Public Code for Computing Photon Orbits in a Kerr Spacetime

    NASA Astrophysics Data System (ADS)

    Dexter, Jason; Agol, Eric

    2009-05-01

    Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. We present a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semianalytically for the first time. We discuss the method, its implementation in a freely available FORTRAN code, and its application to toy problems from the literature.

  14. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... COMMISSION (CONTINUED) REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a) Responsibilities of parties to a swap to report swap transaction and pricing data in real-time—(1) In general. A... repositories in providing the public dissemination of swap transaction and pricing data in...

  15. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a) Responsibilities of parties to a swap to report swap transaction and pricing data in real-time—(1) In general. A... repositories in providing the public dissemination of swap transaction and pricing data in...

  16. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a) Responsibilities of parties to a swap to report swap transaction and pricing data in real-time—(1) In general. A... repositories in providing the public dissemination of swap transaction and pricing data in...

  17. Theoretical and computational methods for three-body processes

    NASA Astrophysics Data System (ADS)

    Blandon Zapata, Juan David

    This thesis discusses the development and application of theoretical and computational methods to study three-body processes. The main focus is on the calculation of three-body resonances and bound states. This broadly includes the study of Efimov states and resonances, three-body shape resonances, three-body Feshbach resonances, three-body pre-dissociated states in systems with a conical intersection, and the calculation of three-body recombination rate coefficients. The method was applied to a number of systems. A chapter of the thesis is dedicated to the related study of deriving correlation diagrams for three-body states before and after a three-body collision. More specifically, the thesis discusses the calculation of the H+H+H three-body recombination rate coefficient using the developed method. Additionally, we discuss a conceptually simple and effective diabatization procedure for the calculation of pre-dissociated vibrational states for a system with a conical intersection. We apply the method to H3, where the quantum molecular dynamics are notoriously difficult and where non-adiabatic couplings are important, and a correct description of the geometric phase associated with the diabatic representation is crucial for an accurate representation of these couplings. With our approach, we were also able to calculate Efimov-type resonances. The calculations of bound states and resonances were performed by formulating the problem in hyperspherical coordinates, and obtaining three-body eigenstates and eigen-energies by applying the hyperspherical adiabatic separation and the slow variable discretization. We employed the complex absorbing potential to calculate resonance energies and lifetimes, and introduce a uniquely defined diabatization procedure to treat X3 molecules with a conical intersection. The proposed approach is general enough to be applied to problems in nuclear, atomic, molecular and astrophysics.

  18. Studying allosteric regulation in metal sensor proteins using computational methods.

    PubMed

    Chakravorty, Dhruva K; Merz, Kenneth M

    2014-01-01

    In this chapter, we describe advances made in understanding the mechanism of allosteric regulation of DNA operator binding in the ArsR/SmtB family of metal-sensing proteins using computational methods. The paradigm zinc-sensing transcriptional repressor, Staphylococcus aureus CzrA, represents an excellent model system to understand how metal sensor proteins maintain cellular metal homeostasis. Here, we discuss studies that helped to characterize a metal ion-mediated hydrogen-bonding pathway (HBP) that plays a dominant role in the allosteric mechanism of DNA operator binding in these proteins. The chapter discusses computational methods used to provide a molecular basis for the large conformational motions and allosteric coupling free energy (~6 kcal/mol) associated with Zn(II) binding in CzrA. We present an accurate and convenient means by which to include metal ions in the nuclear magnetic resonance (NMR) structure determination process using molecular dynamics (MD) constrained by NMR-derived data. The method provides a realistic and physically viable description of the metal-binding site(s) and has potentially broad applicability in the structure determination of metal ion-bound proteins, protein folding, and metal template protein-design studies. Finally, our simulations provide strong support for a proposed HBP that physically connects the metal-binding residue, His97, to the DNA-binding interface through the αR helix that is present only in the Zn(II)-bound state. We find the interprotomer hydrogen bond interaction to be significantly stronger (~8 kcal/mol) at functional allosteric metal-binding sites compared to the apo proteins. This interaction works to overcome the considerable disorder at these hydrogen-bonding sites in the apo protein and functions as a "switch" to lock in a weak DNA-binding conformation once metal is bound. This interaction is found to be considerably weaker in nonresponsive metal-binding sites. These findings suggest a conserved functional…

  19. A Computational Method for Materials Design of New Interfaces

    NASA Astrophysics Data System (ADS)

    Kaminski, Jakub; Ratsch, Christian; Weber, Justin; Haverty, Michael; Shankar, Sadasivan

    2015-03-01

    We propose a novel computational approach to explore the broad configurational space of possible interfaces formed from known crystal structures to find new heterostructure materials with potentially interesting properties. In a series of steps with increasing complexity and accuracy, the vast number of possible combinations is narrowed down to a limited set of the most promising and chemically compatible candidates. This systematic screening encompasses (i) establishing the geometrical compatibility along multiple crystallographic orientations of two materials, (ii) simple functions eliminating configurations with unfavorable interatomic steric conflicts, (iii) application of empirical and semi-empirical potentials estimating approximate energetics and structures, (iv) use of DFT based quantum-chemical methods to ascertain the final optimal geometry and stability of the interface in question. For efficient high-throughput screening we have developed a new method to calculate surface energies, which allows for fast and systematic treatment of materials terminated with non-polar surfaces. We show that our approach leads to a maximum error of around 3% relative to the exact reference. The representative results from our search protocol will be presented for selected materials including semiconductors and oxides.
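
    Step (i) of such a screening reduces, in its simplest one-dimensional form, to searching integer supercell multiples for low strain. The sketch below is a heavily simplified stand-in for the published protocol (which scans full two-dimensional cell vectors and orientations); the function name, tolerance, and example lattice constants are illustrative.

    ```python
    # Find integer multiples (m, n) whose supercells of two lattice constants match.
    def lattice_matches(a1: float, a2: float, n_max: int = 8, tol: float = 0.05):
        """Return (m, n, mismatch) with |m*a1 - n*a2| / (n*a2) <= tol."""
        out = []
        for m in range(1, n_max + 1):
            for n in range(1, n_max + 1):
                mismatch = abs(m * a1 - n * a2) / (n * a2)
                if mismatch <= tol:
                    out.append((m, n, mismatch))
        return sorted(out, key=lambda t: t[2])

    # Example: Si (5.431 A) against GaAs (5.653 A) conventional cells.
    for m, n, eps in lattice_matches(5.431, 5.653)[:3]:
        print(f"{m} x Si vs {n} x GaAs: {eps:.2%} strain")
    ```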

  20. Semi-coarsening multigrid methods for parallel computing

    SciTech Connect

    Jones, J.E.

    1996-12-31

    Standard multigrid methods are not well suited for problems with anisotropic coefficients which can occur, for example, on grids that are stretched to resolve a boundary layer. There are several different modifications of the standard multigrid algorithm that yield efficient methods for anisotropic problems. In the paper, we investigate the parallel performance of these multigrid algorithms. Multigrid algorithms which work well for anisotropic problems are based on line relaxation and/or semi-coarsening. In semi-coarsening multigrid algorithms a grid is coarsened in only one of the coordinate directions unlike standard or full-coarsening multigrid algorithms where a grid is coarsened in each of the coordinate directions. When both semi-coarsening and line relaxation are used, the resulting multigrid algorithm is robust and automatic in that it requires no knowledge of the nature of the anisotropy. This is the basic multigrid algorithm whose parallel performance we investigate in the paper. The algorithm is currently being implemented on an IBM SP2 and its performance is being analyzed. In addition to looking at the parallel performance of the basic semi-coarsening algorithm, we present algorithmic modifications with potentially better parallel efficiency. One modification reduces the amount of computational work done in relaxation at the expense of using multiple coarse grids. This modification is also being implemented with the aim of comparing its performance to that of the basic semi-coarsening algorithm.
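
    For readers unfamiliar with the distinction, the sketch below shows semi-coarsening as a grid-transfer operation: restriction is applied in one coordinate direction only, which is what makes the hierarchy robust to anisotropy in that direction. This is a generic illustration, not the paper's SP2 implementation.

    ```python
    # Semi-coarsening restriction: 1-2-1 full weighting in x only.
    import numpy as np

    def restrict_semi_x(fine: np.ndarray) -> np.ndarray:
        """Coarsen in x (axis 0) only; y resolution is preserved."""
        return (0.25 * fine[:-2:2, :]
                + 0.50 * fine[1:-1:2, :]
                + 0.25 * fine[2::2, :])

    u = np.random.rand(9, 9)           # fine grid
    print(restrict_semi_x(u).shape)    # -> (4, 9): coarsened in x, untouched in y
    ```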

  1. Development of computational methods for heavy lift launch vehicles

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Ryan, James S.

    1993-01-01

    The research effort has been focused on the development of an advanced flow solver for complex viscous turbulent flows with shock waves. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. A new computer program named CENS3D has been developed for viscous turbulent flows with discontinuities. Details of the code are described in Appendix A and Appendix B. With the developments of the numerical algorithm and dissipation model, the simulation of three-dimensional viscous compressible flows has become more efficient and accurate. The results of the research are expected to yield a direct impact on the design process of future liquid fueled launch systems.

  2. Non-unitary probabilistic quantum computing circuit and method

    NASA Technical Reports Server (NTRS)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
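
    The underlying linear-algebra trick admits a compact sketch: any contraction A (a non-unitary operator with norm at most one) has a unitary dilation acting on the system plus one ancilla qubit, and post-selecting the ancilla "success" outcome applies A probabilistically. The example below uses a Halmos-type dilation and is a generic illustration, not the patented circuit.

    ```python
    # Embed a non-unitary contraction A in a unitary on system + ancilla,
    # then post-select the ancilla to apply A with some success probability.
    import numpy as np
    from scipy.linalg import sqrtm

    A = np.array([[0.6, 0.2],
                  [0.1, 0.5]])                    # non-unitary, ||A|| <= 1

    I = np.eye(2)
    U = np.block([[A,                          sqrtm(I - A @ A.conj().T)],
                  [sqrtm(I - A.conj().T @ A), -A.conj().T]])
    assert np.allclose(U.conj().T @ U, np.eye(4))  # U is unitary

    psi = np.array([1.0, 0.0])
    state = U @ np.kron([1.0, 0.0], psi)           # ancilla prepared in |0>

    p_success = np.linalg.norm(state[:2]) ** 2     # ancilla measured as |0>
    out = state[:2] / np.sqrt(p_success)           # proportional to A|psi>
    print(p_success, out)
    ```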

  3. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

    SciTech Connect

    Luttman, A.

    2012-03-30

    The main goal (long term) of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed using an objective function built from expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from the observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer of this problem is computed. An application to oceanic flow based on sea surface temperature is presented.
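
    As a hedged illustration of this class of variational problems, the sketch below minimizes a classic brightness-constancy objective with a smoothness prior (a Horn-Schunck-style iteration). It is a generic stand-in, not the authors' formulation, and all parameter values are illustrative.

    ```python
    # Variational flow estimation: data term (brightness constancy) plus an
    # alpha^2-weighted smoothness prior, minimized by Jacobi-style iteration.
    import numpy as np
    from scipy.ndimage import convolve

    def flow(I0, I1, alpha=1.0, iters=200):
        Ix = np.gradient(I0, axis=1)
        Iy = np.gradient(I0, axis=0)
        It = I1 - I0
        u = np.zeros_like(I0)
        v = np.zeros_like(I0)
        k = np.array([[0.0, 0.25, 0.0], [0.25, 0.0, 0.25], [0.0, 0.25, 0.0]])
        for _ in range(iters):
            ub, vb = convolve(u, k), convolve(v, k)   # neighborhood averages
            num = Ix * ub + Iy * vb + It
            den = alpha**2 + Ix**2 + Iy**2
            u = ub - Ix * num / den
            v = vb - Iy * num / den
        return u, v

    # Demo: a Gaussian blob translated one pixel to the right between frames.
    y, x = np.mgrid[0:64, 0:64]
    I0 = np.exp(-((x - 30.0)**2 + (y - 32.0)**2) / 40.0)
    I1 = np.exp(-((x - 31.0)**2 + (y - 32.0)**2) / 40.0)
    u, v = flow(I0, I1)
    print(u.max(), np.abs(v).max())   # x-flow dominates; y-flow stays near zero
    ```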

  4. A stoichiometric calibration method for dual energy computed tomography.

    PubMed

    Bourque, Alexandra E; Carrier, Jean-François; Bouchard, Hugo

    2014-04-21

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a
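
    For orientation, the textbook relations underlying such calibrations can be written as follows (the paper's own rigorous EAN definition refines the classic exponent form and is not reproduced here):

    ```latex
    \mathrm{HU} = 1000\left(\frac{\mu}{\mu_w} - 1\right),
    \qquad
    u \equiv \frac{\mu}{\mu_w} = \frac{\mathrm{HU}}{1000} + 1,
    \qquad
    Z_\mathrm{eff} = \Bigl(\sum_i \lambda_i Z_i^{\,m}\Bigr)^{1/m},
    ```

    where \mu_w is the linear attenuation coefficient of water, \lambda_i the electron fraction of element i, and m an empirical exponent (values around 3-3.6 are commonly quoted).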

  5. A stoichiometric calibration method for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

    2014-04-01

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic

  6. Publications.

    ERIC Educational Resources Information Center

    Aviation/Space, 1980

    1980-01-01

    Presents a variety of publications available from government and nongovernment sources. The government publications are from the Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) and are designed for educators, students, and the public. (Author/SA)

  7. Computational Methods for Domain Partitioning of Protein Structures

    NASA Astrophysics Data System (ADS)

    Veretnik, Stella; Shindyalov, Ilya

    Analysis of protein structures typically begins with decomposition of structure into more basic units, called "structural domains". The underlying goal is to reduce a complex protein structure to a set of simpler yet structurally meaningful units, each of which can be analyzed independently. Structural semi-independence of domains is their hallmark: domains often have compact structure and can fold or function independently. Domains can undergo so-called "domain shuffling" when they reappear in different combinations in different proteins, thus implementing different biological functions (Doolittle, 1995). Proteins can then be conceived as being built of such basic blocks: some, especially small proteins, consist usually of just one domain, while other proteins possess a more complex architecture containing multiple domains. Therefore, the methods for partitioning a structure into domains are of critical importance: their outcome defines the set of basic units upon which structural classifications are built and evolutionary analysis is performed. This is especially true nowadays in the era of structural genomics. Today there are many methods that decompose the structure into domains: some of them are manual (i.e., based on human judgment), others are semiautomatic, and still others are completely automatic (based on algorithms implemented as software). Overall there is a high level of consistency and robustness in the process of partitioning a structure into domains (for ~80% of proteins); at least for structures where domain location is obvious. The picture is less bright when we consider proteins with more complex architectures—neither human experts nor computational methods can reach consistent partitioning in many such cases. This is a rather accurate reflection of biological phenomena in general since domains are formed by different mechanisms, hence it is nearly impossible to come up with a set of well-defined rules that captures all of the observed cases.

  8. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of...

  9. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 4 2011-07-01 2011-07-01 false Methods for computing interest benefits and special... LOAN (FFEL) PROGRAM Federal Payments of Interest and Special Allowance § 682.304 Methods for computing... shall use the average daily balance method to determine the balance on which the Secretary computes...

  10. 26 CFR 1.669(a)-3 - Tax computed by the exact throwback method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Tax computed by the exact throwback method. 1... Taxable Years Beginning Before January 1, 1969 § 1.669(a)-3 Tax computed by the exact throwback method. (a... compute the tax, on amounts deemed distributed under section 666, by the exact throwback method...

  11. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Methods for computing interest benefits and special...) PROGRAM Federal Payments of Interest and Special Allowance § 682.304 Methods for computing interest... shall use the average daily balance method to determine the balance on which the Secretary computes...

  12. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 2 2011-04-01 2011-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of...

  13. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 4 2013-07-01 2013-07-01 false Methods for computing interest benefits and special... LOAN (FFEL) PROGRAM Federal Payments of Interest and Special Allowance § 682.304 Methods for computing... shall use the average daily balance method to determine the balance on which the Secretary computes...

  14. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 4 2012-07-01 2012-07-01 false Methods for computing interest benefits and special... LOAN (FFEL) PROGRAM Federal Payments of Interest and Special Allowance § 682.304 Methods for computing... shall use the average daily balance method to determine the balance on which the Secretary computes...

  15. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 4 2014-07-01 2014-07-01 false Methods for computing interest benefits and special... LOAN (FFEL) PROGRAM Federal Payments of Interest and Special Allowance § 682.304 Methods for computing... shall use the average daily balance method to determine the balance on which the Secretary computes...

  16. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify the technique for forming representations of modeling methodology in computer science lessons. Studying computer modeling is necessary because current trends toward strengthening the general-education and worldview functions of computer science call for additional research of the…

  17. Validation of viscous and inviscid computational methods for turbomachinery components

    NASA Technical Reports Server (NTRS)

    Povinelli, L. A.

    1986-01-01

    An assessment of several three-dimensional computer codes used at the NASA Lewis Research Center is presented. Four flow situations are examined, for which both experimental data and computational results are available. The four flows form a basis for the evaluation of the computational procedures. It is concluded that transonic rotor flow at peak efficiency conditions may be calculated with a reasonable degree of accuracy, whereas off-design conditions are not accurately determined. Duct flows and turbine cascade flows may also be computed with reasonable accuracy, whereas radial inflow turbine flow remains a challenging problem.

  18. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
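
    A hedged sketch of the baseline these advanced methods improve upon: direct Monte Carlo estimation of a failure probability P[g(X) < 0] for a toy limit state. The distributions and limit state below are illustrative; the fast probability-integration methods surveyed above exist precisely because this brute-force estimate becomes expensive for small probabilities.

    ```python
    # Brute-force Monte Carlo reliability estimate for g = R - S (failure: g < 0).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    R = rng.normal(loc=10.0, scale=1.0, size=n)   # resistance (illustrative)
    S = rng.normal(loc=6.0, scale=1.5, size=n)    # load (illustrative)

    pf = np.mean(R - S < 0.0)
    se = np.sqrt(pf * (1.0 - pf) / n)             # standard error of the estimate
    print(f"P_f ~ {pf:.4e} +/- {se:.1e}")
    ```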

  19. A Computational Method for Materials Design of Interfaces

    NASA Astrophysics Data System (ADS)

    Kaminski, Jakub; Ratsch, Christian; Shankar, Sadasivan

    2014-03-01

    In the present work we propose a novel computational approach to explore the broad configurational space of possible interfaces formed from known crystal structures to find new heterostructure materials with potentially interesting properties. In a series of subsequent steps with increasing complexity and accuracy, the vast number of possible combinations is narrowed down to a limited set of the most promising and chemically compatible candidates. This systematic screening encompasses (i) establishing the geometrical compatibility along multiple crystallographic orientations of two (or more) materials, (ii) simple functions eliminating configurations with unfavorable interatomic steric conflicts, (iii) application of empirical and semi-empirical potentials estimating approximate energetics and structures, (iv) use of DFT based quantum-chemical methods to ascertain the final optimal geometry and stability of the interface in question. We also demonstrate the flexibility and efficiency of our approach depending on the size of the investigated structures and size of the search space. The representative results from our search protocol will be presented for selected materials including semiconductors, transition metal systems, and oxides.

  20. Established and emerging dose reduction methods in cardiac computed tomography.

    PubMed

    Small, Gary R; Kazmi, Mustapha; Dekemp, Robert A; Chow, Benjamin J W

    2011-08-01

    Cardiac computed tomography (CT) is a non-invasive modality that is commonly used as an alternative to invasive coronary angiography for the investigation of coronary artery disease. The enthusiasm for this technology has been tempered by a growing appreciation of the potential risks of malignancy associated with the use of ionising radiation. In the spirit of minimizing patient risk, the medical profession and industry have worked hard to develop methods and protocols to reduce patient radiation exposure while maintaining excellent diagnostic accuracy. A complete understanding of radiation reduction techniques will allow clinicians to reduce patient risk while providing an important diagnostic service. This review will consider the established and emerging techniques that may be adopted to reduce patient absorbed doses from x-ray CT. By modifying (1) x-ray tube output, (2) imaging time (scan duration), (3) imaging distance (scan length) and (4) the appropriate use of shielding, clinicians will be able to adhere to the 'as low as reasonably achievable (ALARA)' principle. PMID:21630110

  1. Computational Methods for RNA Structure Validation and Improvement.

    PubMed

    Jain, Swati; Richardson, David C; Richardson, Jane S

    2015-01-01

    With increasing recognition of the roles RNA molecules and RNA/protein complexes play in an unexpected variety of biological processes, understanding of RNA structure-function relationships is of high current importance. To make clean biological interpretations from three-dimensional structures, it is imperative to have high-quality, accurate RNA crystal structures available, and the community has thoroughly embraced that goal. However, due to the many degrees of freedom inherent in RNA structure (especially for the backbone), it is a significant challenge to succeed in building accurate experimental models for RNA structures. This chapter describes the tools and techniques our research group and our collaborators have developed over the years to help RNA structural biologists both evaluate and achieve better accuracy. Expert analysis of large, high-resolution, quality-conscious RNA datasets provides the fundamental information that enables automated methods for robust and efficient error diagnosis in validating RNA structures at all resolutions. The even more crucial goal of correcting the diagnosed outliers has steadily developed toward highly effective, computationally based techniques. Automation enables solving complex issues in large RNA structures, but cannot circumvent the need for thoughtful examination of local details, and so we also provide some guidance for interpreting and acting on the results of current structure validation for RNA. PMID:26068742

  2. Improved computational methods for simulating inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Fatenejad, Milad

    This dissertation describes the development of two multidimensional Lagrangian codes for simulating inertial confinement fusion (ICF) on structured meshes. The first is DRACO, a production code primarily developed by the Laboratory for Laser Energetics. Several significant new capabilities were implemented, including the ability to model radiative transfer using Implicit Monte Carlo [Fleck et al., JCP 8, 313 (1971)]. DRACO was also extended to operate in 3D Cartesian geometry on hexahedral meshes; originally the code was only used in 2D cylindrical geometry. This included implementing thermal conduction and a flux-limited multigroup diffusion model for radiative transfer. Diffusion equations are solved by extending the 2D Kershaw method [Kershaw, JCP 39, 375 (1981)] to three dimensions. The second radiation-hydrodynamics code developed as part of this thesis is Cooper, a new 3D code which operates on structured hexahedral meshes. Cooper supports the compatible hydrodynamics framework [Caramana et al., JCP 146, 227 (1998)] to obtain round-off-error levels of global energy conservation. This level of energy conservation is maintained even when two-temperature thermal conduction, ion/electron equilibration, and multigroup-diffusion-based radiative transfer are active. Cooper is parallelized using domain decomposition and photon energy group decomposition. The Mesh Oriented datABase (MOAB) computational library is used to exchange information between processes when domain decomposition is used. Cooper's performance is analyzed through direct comparisons with DRACO. Cooper also contains a method for preserving spherical symmetry during target implosions [Caramana et al., JCP 157, 89 (1999)]. Several deceleration-phase implosion simulations were used to compare instability growth using traditional hydrodynamics and compatible hydrodynamics with/without symmetry modification. These simulations demonstrate increased symmetry preservation errors when traditional hydrodynamics…

  3. Profiling Animal Toxicants by Automatically Mining Public Bioassay Data: A Big Data Approach for Computational Toxicology

    PubMed Central

    Zhang, Jun; Hsieh, Jui-Hua; Zhu, Hao

    2014-01-01

    In vitro bioassays have been developed and are currently being evaluated as potential alternatives to traditional animal toxicity models. Already, the progress of high throughput screening techniques has resulted in an enormous amount of publicly available bioassay data having been generated for a large collection of compounds. When a compound is tested using a collection of various bioassays, all the testing results can be considered as providing a unique bio-profile for this compound, which records the responses induced when the compound interacts with different cellular systems or biological targets. Profiling compounds of environmental or pharmaceutical interest using useful toxicity bioassay data is a promising method to study complex animal toxicity. In this study, we developed an automatic virtual profiling tool to evaluate potential animal toxicants. First, we automatically acquired all PubChem bioassay data for a set of 4,841 compounds with publicly available rat acute toxicity results. Next, we developed a scoring system to evaluate the relevance between these extracted bioassays and animal acute toxicity. Finally, the top ranked bioassays were selected to profile the compounds of interest. The resulting response profiles proved to be useful to prioritize untested compounds for their animal toxicity potentials and form a potential in vitro toxicity testing panel. The protocol developed in this study could be combined with structure-activity approaches and used to explore additional publicly available bioassay datasets for modeling a broader range of animal toxicities. PMID:24950175
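
    A hedged sketch of the profiling idea follows: rank bioassays by how strongly their actives enrich for toxic compounds, then keep the top-ranked assays as the profiling panel. The scoring function here is a simple illustration, not the scoring system developed in the study, and the data are randomly generated placeholders for real PubChem bioassay outcomes.

    ```python
    # Rank assays by the toxicity-rate difference between actives and inactives.
    import numpy as np

    def assay_score(active: np.ndarray, toxic: np.ndarray) -> float:
        act, inact = toxic[active == 1], toxic[active == 0]
        if len(act) == 0 or len(inact) == 0:
            return 0.0
        return float(act.mean() - inact.mean())

    rng = np.random.default_rng(1)
    toxic = rng.integers(0, 2, 500)               # 1 = acutely toxic (labels)
    assays = {f"AID_{i}": rng.integers(0, 2, 500) for i in range(20)}  # placeholders

    panel = sorted(assays, key=lambda a: assay_score(assays[a], toxic), reverse=True)[:5]
    print("top-ranked assays for the panel:", panel)
    ```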

  4. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort made to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performances of the PDSS running on a standalone PC and on the supercomputer are compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.

  5. Students' Attitudes towards Control Methods in Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    Hintze, Hanne; And Others

    1988-01-01

    Describes study designed to investigate dental students' attitudes toward computer-assisted teaching as applied in programs for oral radiology in Denmark. Programs using personal computers and slide projectors with varying degrees of learner and teacher control are described, and differences in attitudes between male and female students are…

  6. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
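
    One family of solutions the benchmark invites can be sketched very simply: learn a healthy-fleet baseline, then flag snapshots whose Mahalanobis distance from that baseline is improbably large. This is a generic illustration, not part of the TTCP benchmark distribution; the data and threshold are illustrative.

    ```python
    # Residual-based anomaly detector for "snap-shot" gas path measurements.
    import numpy as np

    rng = np.random.default_rng(2)
    healthy = rng.normal(size=(1000, 6))          # healthy-fleet snapshots (toy)
    mu = healthy.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

    def is_faulty(snapshot: np.ndarray, threshold: float = 22.46) -> bool:
        """threshold ~ chi-square 99.9th percentile for 6 degrees of freedom."""
        d = snapshot - mu
        return float(d @ cov_inv @ d) > threshold

    print(is_faulty(mu))          # nominal snapshot        -> False
    print(is_faulty(mu + 6.0))    # large bias, all sensors -> True
    ```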

  7. Library Orientation Methods, Mental Maps, and Public Services Planning.

    ERIC Educational Resources Information Center

    Ridgeway, Trish

    Two library orientation methods, a self-guided cassette walking tour and a slide-tape program, were administered to 202 freshmen students to determine if moving through the library increased students' ability to develop a mental map of the library. An effort was made to ensure that the two orientation programs were equivalent. Results from the 148…

  8. Digital stress-echocardiography using a public domain program for the Macintosh personal computer.

    PubMed

    Albiero, R; Variola, A; Dander, B; Buonanno, C

    1995-12-01

    Left ventricular wall motion abnormalities secondary to stress-induced myocardial ischemia can be detected with difficulty by mentally comparing echocardiographic images sequentially recorded on videotape. Digital stress-echocardiography, a combination of ultrasound imaging and digital archiving technologies, can at least partially overcome this problem: the technique is based on reviewing images at rest and after stress (exercise or pharmacological) side by side in dual- or quad-screen digital format, in a synchronized cine-loop, as if obtained simultaneously. This technique, however, is presently not widely used due to the high cost of most commercially available systems. We have developed a digital stress-echo system, which is easy to use and relatively inexpensive, running on a Macintosh II personal computer with 8-bit graphics. The 2-D echocardiographic images recorded on videotape are digitized offline using a video digitizing board. The image can be displayed and analyzed using the public domain NIH Image software developed by Wayne Rasband, without loss in image quality and resolution, particularly if using Super-VHS videotape. We have made a macro procedure for the montage in a quad-screen format of four digitally recorded echocardiographic cardiac cycles of six frames that takes only a little more time than commercially available systems. In conclusion, the use of a personal computer and low-cost software may help to make digital stress-echo techniques more widely feasible in the clinical setting and increase the diagnostic power of the ultrasound technique in the evaluation of patients with known or suspected coronary artery disease. PMID:8770533

  9. Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture

    DOEpatents

    Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2011-10-11

    Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.
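
    The word-sense-disambiguation step has a well-known off-the-shelf analog: the Lesk algorithm over WordNet, shown below via NLTK. The patent's method (selecting event classes from a lexical database ontology) is related in spirit but not reproduced here; `lexname()` is used only as a loose stand-in for a coarse semantic class.

    ```python
    # Classic Lesk disambiguation over WordNet (requires nltk.download("wordnet")).
    from nltk.wsd import lesk

    sentence = "I went to the bank to deposit my paycheck".split()
    sense = lesk(sentence, "bank")        # pick one sense of "bank" given the context

    print(sense)                 # the chosen WordNet Synset
    print(sense.definition())    # gloss of the selected sense
    print(sense.lexname())       # coarse class, loosely analogous to an event class
    ```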

  10. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to whether an establishment qualifies for exemption under section...

  11. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to whether an establishment qualifies for exemption under section...

  12. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to whether an establishment qualifies for exemption under section...

  13. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to whether an establishment qualifies for exemption under section...

  14. 29 CFR 779.342 - Methods of computing annual volume of sales.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Methods of computing annual volume of sales. 779.342... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to whether an establishment qualifies for exemption under section...

  15. Astronomical refraction: Computational methods for all zenith angles

    NASA Technical Reports Server (NTRS)

    Auer, L. H.; Standish, E. M.

    2000-01-01

    It is shown that the problem of computing astronomical refraction for any value of the zenith angle may be reduced to a simple, nonsingular, numerical quadrature when the proper choice is made for the independent variable of integration.
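
    In outline (a sketch of the idea, with sign conventions glossed over): the classical bending integral is singular at the horizon when radius is the integration variable, but taking the local zenith angle z as the independent variable yields a finite integrand,

    ```latex
    R \;=\; -\int \tan z \,\frac{\mathrm{d}n}{n}
      \;=\; \int \frac{r\,\mathrm{d}n/\mathrm{d}r}{\,n + r\,\mathrm{d}n/\mathrm{d}r\,}\,\mathrm{d}z ,
    ```

    evaluated between the apparent zenith angle at the observer and its value at the top of the atmosphere, with r(z) fixed along the ray by the Snell invariant n r sin z = const. Since n + r dn/dr > 0 in a physical atmosphere, the integrand remains finite even at z = 90°.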

  16. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  17. The repeated replacement method: a pure Lagrangian meshfree method for computational fluid dynamics.

    PubMed

    Walker, Wade A

    2012-01-01

    In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling compressible fluids' tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly "chops out" fluid from active areas and replaces it with new "flattened" fluid cells with the same mass, momentum, and energy. We call the new cells "flattened" because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175

  18. Small Scale Distance Education; "The Personal (Computer) Touch"; Tutorial Methods for TMA's Using a Computer.

    ERIC Educational Resources Information Center

    Fritsch, Helmut; And Others

    1989-01-01

    The authors present reports of current research on distance education at the FernUniversitat in West Germany. Fritsch discusses adapting distance education techniques for small classes. Kuffner describes procedures for providing feedback to students using personalized computer-generated letters. Klute discusses using a computer with tutorial…

  19. Computer Science Instruction in Elementary Grades, an Exploration of Computer-Based Learning Methods. Final Report.

    ERIC Educational Resources Information Center

    Starkweather, John A.

    During the exploratory phase of this two-year project, 234 instructional computer programs were written by 167 junior and senior high school students, instructed as individuals, in small groups, and in whole classes. Then a doctoral study investigated the effectiveness of computer-assisted instruction in the development of problem solving skills.…

  20. Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.

    2011-01-01

    This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…

  1. An alternative computational method for finding the minimum-premium insurance portfolio

    NASA Astrophysics Data System (ADS)

    Katsikis, Vasilios N.

    2016-06-01

    In this article, we design a computational method, which differs from the standard linear programming techniques, for computing the minimum-premium insurance portfolio. The corresponding algorithm as well as a Matlab implementation are provided.

  2. One-to-One Computing in Public Schools: Lessons from "Laptops for All" Programs

    ERIC Educational Resources Information Center

    Abell Foundation, 2008

    2008-01-01

    The basic tenet of one-to-one computing is that the student and teacher have Internet-connected, wireless computing devices in the classroom and optimally at home as well. Also known as "ubiquitous computing," this strategy assumes that every teacher and student has her own computing device and obviates the need for moving classes to computer…

  3. SOURCE WATER PROTECTION OF PUBLIC DRINKING WATER WELLS: COMPUTER MODELING OF ZONES CONTRIBUTING RECHARGE TO PUMPING WELLS

    EPA Science Inventory

    Computer technology to assist states, tribes, and clients in the design of wellhead and source water protection areas for public water supply wells is being developed through two distinct subtasks: (Subtask 1) developing a web-based wellhead decision support system, WellHEDSS, t...

  4. Progress Towards Computational Method for Circulation Control Airfoils

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rumsey, C. L.; Anders, S. G.

    2005-01-01

    The compressible Reynolds-averaged Navier-Stokes equations are solved for circulation control airfoil flows. Numerical solutions are computed with both structured and unstructured grid solvers. Several turbulence models are considered, including the Spalart-Allmaras model with and without curvature corrections, the shear stress transport model of Menter, and the k-enstrophy model. Circulation control flows with jet momentum coefficients of 0.03, 0.10, and 0.226 are considered. Comparisons are made between computed and experimental pressure distributions, velocity profiles, Reynolds stress profiles, and streamline patterns. Including curvature effects yields the closest agreement with the measured data.

  5. ICRP Publication 116--the first ICRP/ICRU application of the male and female adult reference computational phantoms.

    PubMed

    Petoussi-Henss, Nina; Bolch, Wesley E; Eckerman, Keith F; Endo, Akira; Hertel, Nolan; Hunt, John; Menzel, Hans G; Pelliccioni, Maurizio; Schlattl, Helmut; Zankl, Maria

    2014-09-21

    ICRP Publication 116 on 'Conversion coefficients for radiological protection quantities for external radiation exposures', provides fluence-to-dose conversion coefficients for organ-absorbed doses and effective dose for various types of external exposures (ICRP 2010 ICRP Publication 116). The publication supersedes the ICRP Publication 74 (ICRP 1996 ICRP Publication 74, ICRU 1998 ICRU Report 57), including new particle types and expanding the energy ranges considered. The coefficients were calculated using the ICRP/ICRU computational phantoms (ICRP 2009 ICRP Publication 110) representing the reference adult male and reference adult female (ICRP 2002 ICRP Publication 89), together with a variety of Monte Carlo codes simulating the radiation transport in the body. Idealized whole-body irradiation from unidirectional and rotational parallel beams as well as isotropic irradiation was considered for a large variety of incident radiations and energy ranges. Comparison of the effective doses with operational quantities revealed that the latter quantities continue to provide a good approximation of effective dose for photons, neutrons and electrons for the 'conventional' energy ranges considered previously (ICRP 1996, ICRU 1998), but not at the higher energies of ICRP Publication 116. PMID:25144220

  6. ICRP Publication 116—the first ICRP/ICRU application of the male and female adult reference computational phantoms

    NASA Astrophysics Data System (ADS)

    Petoussi-Henss, Nina; Bolch, Wesley E.; Eckerman, Keith F.; Endo, Akira; Hertel, Nolan; Hunt, John; Menzel, Hans G.; Pelliccioni, Maurizio; Schlattl, Helmut; Zankl, Maria

    2014-09-01

    ICRP Publication 116 on ‘Conversion coefficients for radiological protection quantities for external radiation exposures’, provides fluence-to-dose conversion coefficients for organ-absorbed doses and effective dose for various types of external exposures (ICRP 2010 ICRP Publication 116). The publication supersedes the ICRP Publication 74 (ICRP 1996 ICRP Publication 74, ICRU 1998 ICRU Report 57), including new particle types and expanding the energy ranges considered. The coefficients were calculated using the ICRP/ICRU computational phantoms (ICRP 2009 ICRP Publication 110) representing the reference adult male and reference adult female (ICRP 2002 ICRP Publication 89), together with a variety of Monte Carlo codes simulating the radiation transport in the body. Idealized whole-body irradiation from unidirectional and rotational parallel beams as well as isotropic irradiation was considered for a large variety of incident radiations and energy ranges. Comparison of the effective doses with operational quantities revealed that the latter quantities continue to provide a good approximation of effective dose for photons, neutrons and electrons for the ‘conventional’ energy ranges considered previously (ICRP 1996, ICRU 1998), but not at the higher energies of ICRP Publication 116.

  7. Computer-Graphics and the Literary Construct: A Learning Method.

    ERIC Educational Resources Information Center

    Henry, Avril

    2002-01-01

    Describes an undergraduate student module that was developed at the University of Exeter (United Kingdom) in which students made their own computer graphics to discover and to describe literary structures in texts of their choice. Discusses learning outcomes and refers to the Web site that shows students' course work. (Author/LRW)

  8. New Methods of Mobile Computing: From Smartphones to Smart Education

    ERIC Educational Resources Information Center

    Sykes, Edward R.

    2014-01-01

    Every aspect of our daily lives has been touched by the ubiquitous nature of mobile devices. We have experienced an exponential growth of mobile computing--a trend that seems to have no limit. This paper provides a report on the findings of a recent offering of an iPhone Application Development course at Sheridan College, Ontario, Canada. It…

  9. All for One: Integrating Budgetary Methods by Computer.

    ERIC Educational Resources Information Center

    Herman, Jerry J.

    1994-01-01

    With the advent of high speed and sophisticated computer programs, all budgetary systems can be combined in one fiscal management information system. Defines and provides examples for the four budgeting systems: (1) function/object; (2) planning, programming, budgeting system; (3) zero-based budgeting; and (4) site-based budgeting. (MLF)

  10. Simple computer method provides contours for radiological images

    NASA Technical Reports Server (NTRS)

    Newell, J. D.; Keller, R. A.; Baily, N. A.

    1975-01-01

    The computer is provided with information concerning boundaries in the total image. The gradient of each point in the digitized image is calculated with the aid of a threshold technique; then a set of algorithms is invoked, designed to reduce the number of gradient elements and to retain only the major ones for definition of the contour.
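
    In modern terms the idea is a gradient-magnitude threshold; a minimal NumPy sketch (not the original flight-era implementation) is:

    ```python
    # Keep only strong-gradient pixels as the contour of a digitized image.
    import numpy as np

    def contour_points(img: np.ndarray, thresh: float) -> np.ndarray:
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        return np.argwhere(mag > thresh)       # (row, col) of retained points

    # Demo: a bright disk on a dark background.
    y, x = np.mgrid[0:64, 0:64]
    img = ((x - 32) ** 2 + (y - 32) ** 2 < 15 ** 2).astype(float)
    pts = contour_points(img, 0.25)
    print(len(pts), "contour points, e.g.", pts[0])
    ```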

  11. Optical Design Methods: Your Head As A Personal Computer

    NASA Astrophysics Data System (ADS)

    Shafer, David

    1985-07-01

    Several design approaches are described which feature the use of your head as a design tool. This involves thinking about the design task at hand, trying to break it into separate, easily understood subtasks, and approaching these in a creative and intelligent fashion, as only humans can do. You and your computer can become a very powerful team when this design philosophy is adopted.

  12. [Computation method for optimization of recipes for protein content].

    PubMed

    Kovalev, N I; Karzeva, N J; Fiterer, V O

    1987-01-01

    The authors propose a calculated protein utilization coefficient. This coefficient considers the difference between the utilization rates of the proteins contained in the mixture and their amino-acid composition. The proposed formula allows calculations by computer. The data obtained show high correlations with the results obtained by biological tests with Tetrahymena cultures. PMID:3431579

  13. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    ERIC Educational Resources Information Center

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  14. Computed radiography imaging plates and associated methods of manufacture

    DOEpatents

    Henry, Nathaniel F.; Moses, Alex K.

    2015-08-18

    Computed radiography imaging plates incorporating an intensifying material that is coupled to or intermixed with the phosphor layer, allowing electrons and/or low energy x-rays to impart their energy on the phosphor layer, while decreasing internal scattering and increasing resolution. The radiation needed to perform radiography can also be reduced as a result.

  15. 29 CFR 548.500 - Methods of computation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... computing overtime compensation for a piece worker for hours of work in excess of 8 in each day is the employee's average hourly earnings for all work performed during that day. The employee is entitled to one-half the basic rate for each daily overtime hour in addition to the total piece work earnings...
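
    A worked example of the computation sketched in the excerpt, assuming the simple reading that the basic rate is the day's average hourly piecework earnings and each daily hour beyond 8 earns an extra half of that rate (consult the full regulation for the governing details):

    ```python
    # Daily pay for a piece worker under the "average hourly earnings" basic rate.
    def daily_pay(piecework_earnings: float, hours_worked: float) -> float:
        basic_rate = piecework_earnings / hours_worked   # average hourly earnings
        overtime_hours = max(0.0, hours_worked - 8.0)
        return piecework_earnings + 0.5 * basic_rate * overtime_hours

    # 10-hour day, $120 piecework: rate $12/h, 2 OT hours -> 120 + 12 = $132
    print(daily_pay(120.0, 10.0))
    ```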

  16. Verifying a computational method for predicting extreme ground motion

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, B.T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  17. A method for computing the leading-edge suction in a higher-order panel method

    NASA Technical Reports Server (NTRS)

    Ehlers, F. E.; Manro, M. E.

    1984-01-01

    Experimental data show that the phenomenon of a separation induced leading edge vortex is influenced by the wing thickness and the shape of the leading edge. Both thickness and leading edge shape (rounded rather than pointed) delay the formation of a vortex. Existing computer programs used to predict the effect of a leading edge vortex do not include a procedure for determining whether or not a vortex actually exists. Studies under NASA Contract NAS1-15678 have shown that the vortex development can be predicted by using the relationship between the leading edge suction coefficient and the parabolic nose drag. The linear theory FLEXSTAB was used to calculate the leading edge suction coefficient. This report describes the development of a method for calculating leading edge suction using the capabilities of the higher order panel methods (exact boundary conditions). For a two dimensional case, numerical methods were developed using the doublet strength and downwash distribution along the chord. A Gaussian quadrature formula that directly incorporates the logarithmic singularity in the downwash distribution, at all panel edges, was found to be the best method.
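
    The same idea, building the logarithmic factor into the quadrature weight instead of sampling the singular integrand directly, is available off the shelf in QUADPACK through SciPy. The snippet below is an analog of that step, not the authors' panel-code rule; with f(x) = 1 the exact value of the weighted integral is -1.

    ```python
    # Quadrature with a log(x - a) weight handled by the rule itself.
    from scipy.integrate import quad

    # Integrates f(x) * (x-0)^0 * (1-x)^0 * log(x-0) over [0, 1], with f = 1 here.
    val, err = quad(lambda x: 1.0, 0.0, 1.0, weight="alg-loga", wvar=(0.0, 0.0))
    print(val, err)   # val is approximately -1.0
    ```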

  18. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for calculating chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
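
    A minimal sketch of the general idea, minimizing the Gibbs free energy subject to a material balance constraint, for an ideal-gas isomerization using SciPy; the standard chemical potentials are assumed values, and this is not the program from the article:

      # Equilibrium by free-energy minimization: ideal-gas isomerization
      # A <=> B at fixed T, P. Dimensionless standard chemical potentials
      # mu_A = 0, mu_B = -1 are assumed values for illustration.
      import numpy as np
      from scipy.optimize import minimize

      mu0 = np.array([0.0, -1.0])   # standard chemical potentials / RT (assumed)

      def gibbs(n):
          n = np.clip(n, 1e-12, None)          # keep logs finite
          return float(np.sum(n * (mu0 + np.log(n / n.sum()))))

      # Material balance: one mole of skeleton in total, n_A + n_B = 1.
      cons = {"type": "eq", "fun": lambda n: n.sum() - 1.0}
      res = minimize(gibbs, x0=[0.5, 0.5], bounds=[(1e-12, 1.0)] * 2, constraints=cons)

      # Analytic check: n_B / n_A = exp(mu_A - mu_B) = e ~ 2.718
      print(res.x, res.x[1] / res.x[0])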

  19. 29 CFR 4011.9 - Method and date of issuance of notice; computation of time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Method and date of issuance of notice; computation of time... CORPORATION CERTAIN REPORTING AND DISCLOSURE REQUIREMENTS DISCLOSURE TO PARTICIPANTS § 4011.9 Method and date of issuance of notice; computation of time. (a) Method of issuance. The PBGC applies the rules...

  20. 3D modeling method for computer animate based on modified weak structured light method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted; on the other hand, precision is not as critical a factor in that setting. In this paper, a new, inexpensive 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. An ordinary weak-structured-light configuration requires one or two reference planes, and the shadows on these planes must be tracked during scanning, which undermines the convenience of the method. In the modified system, reference planes are unnecessary and the size range of objects that can be scanned is greatly expanded. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints into a full description of the object, and after a series of operations a NURBS surface model is generated. A complex toy bear is used to verify the efficiency of the method, with errors ranging from 0.7783 mm to 1.4326 mm compared with the ground-truth measurement.
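
    The two-stage ICP merge is not detailed in the abstract; the sketch below shows a generic single-stage point-to-point ICP iteration (nearest-neighbor matching plus Kabsch alignment), assuming roughly pre-aligned clouds:

      # Minimal point-to-point ICP sketch: nearest-neighbor matching
      # followed by an optimal rigid (Kabsch) alignment, iterated.
      import numpy as np
      from scipy.spatial import cKDTree

      def icp(src, dst, iters=50):
          """Rigidly align point cloud src (N,3) onto dst (M,3)."""
          R, t = np.eye(3), np.zeros(3)
          tree = cKDTree(dst)
          for _ in range(iters):
              moved = src @ R.T + t
              _, idx = tree.query(moved)            # nearest neighbors in dst
              corr = dst[idx]
              # Kabsch: optimal rotation between centered correspondences.
              mu_s, mu_d = moved.mean(0), corr.mean(0)
              H = (moved - mu_s).T @ (corr - mu_d)
              U, _, Vt = np.linalg.svd(H)
              D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
              R_step = Vt.T @ D @ U.T
              t_step = mu_d - R_step @ mu_s
              R, t = R_step @ R, R_step @ t + t_step  # accumulate transform
          return R, t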

  1. BindingDB in 2015: A public database for medicinal chemistry, computational chemistry and systems pharmacology

    PubMed Central

    Gilson, Michael K.; Liu, Tiqing; Baitaluk, Michael; Nicola, George; Hwang, Linda; Chong, Jenny

    2016-01-01

    BindingDB, www.bindingdb.org, is a publicly accessible database of experimental protein-small molecule interaction data. Its collection of over a million data entries derives primarily from scientific articles and, increasingly, US patents. BindingDB provides many ways to browse and search for data of interest, including an advanced search tool, which can cross searches of multiple query types, including text, chemical structure, protein sequence and numerical affinities. The PDB and PubMed provide links to data in BindingDB, and vice versa; and BindingDB provides links to pathway information, the ZINC catalog of available compounds, and other resources. The BindingDB website offers specialized tools that take advantage of its large data collection, including ones to generate hypotheses for the protein targets bound by a bioactive compound, and for the compounds bound by a new protein of known sequence; and virtual compound screening by maximal chemical similarity, binary kernel discrimination, and support vector machine methods. Specialized data sets are also available, such as binding data for hundreds of congeneric series of ligands, drawn from BindingDB and organized for use in validating drug design methods. BindingDB offers several forms of programmatic access, and comes with extensive background material and documentation. Here, we provide the first update of BindingDB since 2007, focusing on new and unique features and highlighting directions of importance to the field as a whole. PMID:26481362
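
    As an illustration of the similarity-based screening the abstract mentions (maximal chemical similarity), here is a minimal RDKit sketch with an invented query and library; it is not BindingDB's own implementation:

      # Rank a toy compound library by Tanimoto similarity to a query.
      # SMILES strings are invented stand-ins for illustration.
      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem

      query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")    # aspirin as a stand-in
      library = ["c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCO"]   # toy library

      qfp = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
      scores = []
      for smi in library:
          mol = Chem.MolFromSmiles(smi)
          fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
          scores.append((DataStructs.TanimotoSimilarity(qfp, fp), smi))

      # Highest maximal similarity first.
      for s, smi in sorted(scores, reverse=True):
          print(f"{s:.3f}  {smi}")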

  3. On multigrid methods for the Navier-Stokes Computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Krist, S. E.; Zang, T. A.

    1988-01-01

    The overall architecture of the multipurpose parallel-processing Navier-Stokes Computer (NSC) being developed by Princeton and NASA Langley (Nosenchuck et al., 1986) is described and illustrated with extensive diagrams, and the NSC implementation of an elementary multigrid algorithm for simulating isotropic turbulence (based on solution of the incompressible time-dependent Navier-Stokes equations with constant viscosity) is characterized in detail. The present NSC design concept calls for 64 nodes, each with the performance of a class VI supercomputer, linked together by a fiber-optic hypercube network and joined to a front-end computer by a global bus. In this configuration, the NSC would have a storage capacity of over 32 Gword and a peak speed of over 40 Gflops. The multigrid Navier-Stokes code discussed would give sustained operation rates of about 25 Gflops.
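
    As a generic illustration of the elementary multigrid algorithm mentioned above (unrelated to the NSC hardware itself), here is a minimal 1D V-cycle for the Poisson equation:

      # Elementary 1D multigrid V-cycle for -u'' = f with zero Dirichlet
      # boundaries: damped-Jacobi smoothing, injection restriction,
      # linear-interpolation prolongation. A generic sketch only.
      import numpy as np

      def relax(u, f, h, sweeps=3):
          for _ in range(sweeps):                    # damped Jacobi smoothing
              u[1:-1] += (2.0 / 3.0) * 0.5 * (u[2:] + u[:-2] + h * h * f[1:-1] - 2 * u[1:-1])
          return u

      def v_cycle(u, f, h):
          u = relax(u, f, h)
          if u.size <= 3:
              return u
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[2:] - u[:-2]) / (h * h)  # residual
          rc = r[::2].copy()                          # restrict (injection)
          ec = v_cycle(np.zeros_like(rc), rc, 2 * h)  # coarse-grid correction
          e = np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)  # prolong
          return relax(u + e, f, h)

      n = 129
      x = np.linspace(0.0, 1.0, n)
      f = np.pi**2 * np.sin(np.pi * x)               # exact solution sin(pi x)
      u = np.zeros(n)
      for _ in range(10):
          u = v_cycle(u, f, x[1] - x[0])
      print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error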

  4. Parallel computation with adaptive methods for elliptic and hyperbolic systems

    SciTech Connect

    Benantar, M.; Biswas, R.; Flaherty, J.E.; Shephard, M.S.

    1990-01-01

    We consider the solution of two dimensional vector systems of elliptic and hyperbolic partial differential equations on a shared memory parallel computer. For elliptic problems, the spatial domain is discretized using a finite quadtree mesh generation procedure and the differential system is discretized by a finite element-Galerkin technique with a piecewise linear polynomial basis. Resulting linear algebraic systems are solved using the conjugate gradient technique with element-by-element and symmetric successive over-relaxation preconditioners. Stiffness matrix assembly and linear system solutions are processed in parallel with computations scheduled on noncontiguous quadrants of the tree in order to minimize process synchronization. Determining noncontiguous regions by coloring the regular finite quadtree structure is far simpler than coloring elements of the unstructured mesh that the finite quadtree procedure generates. We describe linear-time complexity coloring procedures that use six and eight colors.
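
    A toy sketch of the scheduling idea: greedily color regions so that no two adjacent ones share a color, allowing same-colored regions to be assembled concurrently. The adjacency below is invented and this is not the quadtree code itself:

      # Greedy coloring over a toy region-adjacency map. Regions with
      # the same color have no shared neighbors, so their stiffness
      # contributions can be assembled in parallel without locking.
      def greedy_coloring(adj):
          color = {}
          for v in adj:                               # fixed order, linear in edges
              used = {color[u] for u in adj[v] if u in color}
              color[v] = next(c for c in range(len(adj)) if c not in used)
          return color

      quadrants = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # 2x2 block adjacency
      print(greedy_coloring(quadrants))               # {0: 0, 1: 1, 2: 1, 3: 0}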

  5. Application of traditional CFD methods to nonlinear computational aeroacoustics problems

    NASA Technical Reports Server (NTRS)

    Chyczewski, Thomas S.; Long, Lyle N.

    1995-01-01

    This paper describes an implementation of a high-order finite difference technique and its application to the category 2 problems of the ICASE/LaRC Workshop on Computational Aeroacoustics (CAA). Essentially, a popular Computational Fluid Dynamics (CFD) approach (central differencing, Runge-Kutta time integration and artificial dissipation) is modified to handle aeroacoustic problems. The changes include increasing the order of the spatial differencing to sixth order and modifying the artificial dissipation so that it does not significantly contaminate the wave solution. All of the results were obtained from the CM5 located at the Numerical Aerodynamic Simulation Laboratory. It was coded in CMFortran (very similar to HPF), using programming techniques developed for communication-intensive large stencils, and ran very efficiently.
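
    The standard sixth-order central first-derivative stencil reads f'_i = (-f_{i-3} + 9 f_{i-2} - 45 f_{i-1} + 45 f_{i+1} - 9 f_{i+2} + f_{i+3}) / (60 h); a minimal periodic-grid sketch of that stencil follows (generic, not the workshop code):

      # Sixth-order central-difference first derivative on a periodic
      # grid (interior stencil only; artificial dissipation omitted).
      import numpy as np

      def ddx6(f, h):
          """Sixth-order first derivative of periodic samples f, spacing h."""
          fm3, fm2, fm1 = np.roll(f, 3), np.roll(f, 2), np.roll(f, 1)
          fp1, fp2, fp3 = np.roll(f, -1), np.roll(f, -2), np.roll(f, -3)
          return (-fm3 + 9*fm2 - 45*fm1 + 45*fp1 - 9*fp2 + fp3) / (60.0 * h)

      x = np.linspace(0, 2*np.pi, 64, endpoint=False)
      err = np.max(np.abs(ddx6(np.sin(x), x[1] - x[0]) - np.cos(x)))
      print(f"max error: {err:.2e}")   # shrinks as h**6 under grid refinement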

  6. Multi-Level iterative methods in computational plasma physics

    SciTech Connect

    Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.

    1999-03-01

    Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically, the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods, and large time steps can challenge their robustness. To meet these challenges they are developing a hybrid approach where multigrid methods are used as preconditioners to Krylov subspace based iterative methods such as conjugate gradients or GMRES. For nonlinear problems they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for the application of these multilevel iterative methods to the field solves in implicit moment method PIC, multidimensional nonlinear Fokker-Planck problems, and their initial efforts in particle MHD.
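
    A minimal sketch of the matrix-free Newton-GMRES idea: the Jacobian is never assembled, and J*v is approximated by a finite difference of residuals. Multigrid preconditioning is omitted here, and the toy system is invented:

      # Jacobian-free Newton-GMRES: J*v ~ (F(u + eps*v) - F(u)) / eps.
      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def newton_gmres(F, u, tol=1e-8, max_newton=20):
          for _ in range(max_newton):
              r = F(u)
              if np.linalg.norm(r) < tol:
                  break
              eps = 1e-7 * (1.0 + np.linalg.norm(u))
              Jv = LinearOperator(
                  (u.size, u.size),
                  matvec=lambda v: (F(u + eps * v) - r) / eps)  # J*v approximation
              du, _ = gmres(Jv, -r)                # inner Krylov solve, no preconditioner
              u = u + du
          return u

      # Toy nonlinear system: u_i**3 + u_i - 1 = 0 componentwise.
      sol = newton_gmres(lambda u: u**3 + u - 1.0, np.zeros(4))
      print(sol)   # each entry approaches the real root ~0.6823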

  7. Two methods of computing molecular dipole and quadrupole derivatives

    NASA Astrophysics Data System (ADS)

    Lazzeretti, P.; Zanasi, R.; Fowler, P. W.

    1988-01-01

    Polarized basis sets are used to compute dipole and quadrupole derivatives of the hydrides LiH, CH4, NH3, H2O, and HF. Analytic calculation of derivatives is compared with calculation via the dipole and quadrupole electric shielding tensors. With these basis sets, violation of the Hellmann-Feynman theorem is only about 0.01 a.u. in dipole derivatives and 0.02 a.u. in quadrupole derivatives.

  8. Computational Methods for the Analysis of Array Comparative Genomic Hybridization

    PubMed Central

    Chari, Raj; Lockwood, William W.; Lam, Wan L.

    2006-01-01

    Array comparative genomic hybridization (array CGH) is a technique for assaying the copy number status of cancer genomes. The widespread use of this technology has led to a rapid accumulation of high throughput data, which in turn has prompted the development of computational strategies for the analysis of array CGH data. Here we explain the principles behind array image processing, data visualization and genomic profile analysis, review currently available software packages, and raise considerations for future software development. PMID:17992253

  9. Unconventional methods of imaging: computational microscopy and compact implementations

    NASA Astrophysics Data System (ADS)

    McLeod, Euan; Ozcan, Aydogan

    2016-07-01

    In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time to introduce new imaging modalities, and make existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.

  10. Computational Systems Biology in Cancer: Modeling Methods and Applications

    PubMed Central

    Materi, Wayne; Wishart, David S.

    2007-01-01

    In recent years it has become clear that carcinogenesis is a complex process, both at the molecular and cellular levels. Understanding the origins, growth and spread of cancer therefore requires an integrated or system-wide approach. Computational systems biology is an emerging sub-discipline in systems biology that utilizes the wealth of data from genomic, proteomic and metabolomic studies to build computer simulations of intra- and intercellular processes. Several useful descriptive and predictive models of the origin, growth and spread of cancers have been developed in an effort to better understand the disease and potential therapeutic approaches. In this review we describe and assess the practical and theoretical underpinnings of commonly used modeling approaches, including ordinary and partial differential equations, Petri nets, cellular automata, agent-based models and hybrid systems. A number of computer-based formalisms have been implemented to improve the accessibility of the various approaches to researchers whose primary interest lies outside of model development. We discuss several of these and describe how they have led to novel insights into tumor genesis, growth, apoptosis, vascularization and therapy. PMID:19936081

  11. Frequency response modeling and control of flexible structures: Computational methods

    NASA Technical Reports Server (NTRS)

    Bennett, William H.

    1989-01-01

    The dynamics of vibrations in flexible structures can be conveniently modeled in terms of frequency response models. For structural control, such models capture the distributed parameter dynamics of the elastic structural response as an irrational transfer function. For most flexible structures arising in aerospace applications, the irrational transfer functions which arise belong to a special class of pseudo-meromorphic functions which have only a finite number of right half-plane poles. Computational algorithms are demonstrated for the design of multiloop control laws for such models based on optimal Wiener-Hopf control of the frequency responses. The algorithms employ a sampled-data representation of irrational transfer functions which is particularly attractive for numerical computation. One key algorithm for the solution of the optimal control problem is the spectral factorization of an irrational transfer function. The basis for the spectral factorization algorithm is highlighted, together with associated computational issues arising in optimal regulator design. Options for the implementation of wide-band vibration control for flexible structures based on the sampled-data frequency response models are also highlighted. A simple flexible structure control example is considered to demonstrate the combined frequency response modeling and control algorithms.

  13. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    SciTech Connect

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria; Zhou, Ning; Huang, Zhenyu

    2012-07-26

    Financial Transmission Rights (FTRs) are financial insurance tools that help power market participants reduce the price risks associated with transmission congestion. FTRs are issued by solving a constrained optimization problem whose objective is to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, an innovative mathematical reformulation of the FTR problem is first presented which dramatically improves the computational efficiency of the optimization problem. After reformulating the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve it. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
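
    A toy LP with the structure described, maximizing bid-weighted awards subject to PTDF line-flow limits, using SciPy; all numbers are invented, and real FTR problems are vastly larger:

      # Toy FTR-auction LP: maximize bid-weighted awarded MW subject to
      # DC power-flow (PTDF) line limits. All data are invented.
      import numpy as np
      from scipy.optimize import linprog

      bids = np.array([30.0, 22.0, 15.0])        # $/MW bid for three FTR paths
      ptdf = np.array([[0.5, -0.2, 0.3],         # line-1 sensitivities per path
                       [0.1,  0.4, -0.3]])       # line-2 sensitivities per path
      limits = np.array([100.0, 80.0])           # MW flow limits per line

      # linprog minimizes, so negate the bids; enforce +/- limits per line.
      res = linprog(-bids,
                    A_ub=np.vstack([ptdf, -ptdf]),
                    b_ub=np.concatenate([limits, limits]),
                    bounds=[(0.0, 200.0)] * 3)
      print(res.x, -res.fun)                     # awarded MW per path, welfare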

  14. Computing and monitoring potential of public spaces by shading analysis using 3d lidar data and advanced image analysis

    NASA Astrophysics Data System (ADS)

    Zwolinski, A.; Jarzemski, M.

    2015-04-01

    The paper addresses the specific context of public spaces in the "shadow" of tall buildings located in European cities. The majority of tall buildings in European cities were built in the last 15 years. They appear mainly in city centres, directly adjacent to important public spaces that form a viable environment for inhabitants, with a variety of public functions (open spaces, green areas, recreation places, shops, services, etc.). All these amenities and services are directly affected by the extensive shading cast by tall buildings. The paper focuses on analysing and representing the impact of shading from tall buildings on various public spaces in cities using 3D city models. The computer environment of 3D city models in the CityGML standard uses 3D LiDAR data as one of the data types for defining 3D cities. The structure of CityGML supports analytic applications using existing computer tools, as well as the development of new techniques for estimating the extent of shading cast by high-rises, which affects life in public spaces. These measurable shading parameters at specific times are crucial for the proper functioning, viability and attractiveness of public spaces, and ultimately for the appropriate location of tall buildings at main public spaces in cities. The paper explores the impact of shading from tall buildings in different spatial contexts, drawing on CityGML models built from core LiDAR data to support controlled urban development in the sense of viable public spaces. The article was prepared within the research project 2TaLL: Application of 3D Virtual City Models in Urban Analyses of Tall Buildings, realized as a part of the Polish-Norway Grants.

  15. 76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ...NIST announces the Cloud Computing Forum & Workshop IV to be held on November 2, 3 and 4, 2011. This workshop will provide information on the U.S. Government (USG) Cloud Computing Technology Roadmap initiative. This workshop will also provide an updated status on NIST efforts to help develop open standards in interoperability, portability and security in cloud computing. This event is open to...

  16. Computer Literacy. Part II--A Teacher's Guide. A Staff Development Publication.

    ERIC Educational Resources Information Center

    Lloyd, Jo; And Others

    This teacher's guide, consisting of learning modules, lists of resources, and assessment recommendations, is designed as a tool for developing a computer literacy component of an existing prevocational course or for teaching a free-standing computer literacy course. A list of aims and objectives for a computer literacy course is provided first.…

  17. 26 CFR 1.669(a)-3 - Tax computed by the exact throwback method.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 8 2011-04-01 2011-04-01 false Tax computed by the exact throwback method. 1... Applicable to Taxable Years Beginning Before January 1, 1969 § 1.669(a)-3 Tax computed by the exact throwback... elects to compute the tax, on amounts deemed distributed under section 666, by the exact throwback...

  18. 26 CFR 1.669(a)-3 - Tax computed by the exact throwback method.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 8 2013-04-01 2013-04-01 false Tax computed by the exact throwback method. 1... Applicable to Taxable Years Beginning Before January 1, 1969 § 1.669(a)-3 Tax computed by the exact throwback... elects to compute the tax, on amounts deemed distributed under section 666, by the exact throwback...

  19. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  20. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  1. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    SciTech Connect

    Alex J. Dragt

    2012-08-31

    Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  2. A method of computational magnetohydrodynamics defining stable Scyllac equilibria

    PubMed Central

    Betancourt, Octavio; Garabedian, Paul

    1977-01-01

    A computer code has been developed for the numerical calculation of sharp boundary equilibria of a toroidal plasma with diffuse pressure profile. This generalizes earlier work that was done separately on the sharp boundary and diffuse models, and it allows for large amplitude distortions of the plasma in three-dimensional space. By running the code, equilibria that are stable to the so-called m = 1, k = 0 mode have been found for Scyllac, which is a high beta toroidal confinement device of very large aspect ratio. PMID:16592383

  3. Shielding analysis methods available in the scale computational system

    SciTech Connect

    Parks, C.V.; Tang, J.S.; Hermann, O.W.; Bucholz, J.A.; Emmett, M.B.

    1986-01-01

    Computational tools have been included in the SCALE system to allow shielding analysis to be performed using both discrete-ordinates and Monte Carlo techniques. One-dimensional discrete ordinates analyses are performed with the XSDRNPM-S module, and point dose rates outside the shield are calculated with the XSDOSE module. Multidimensional analyses are performed with the MORSE-SGC/S Monte Carlo module. This paper will review the above modules and the four Shielding Analysis Sequences (SAS) developed for the SCALE system. 7 refs., 8 figs.

  4. Computation of nonparametric convex hazard estimators via profile methods

    PubMed Central

    Jankowski, Hanna K.; Wellner, Jon A.

    2010-01-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females. PMID:20300560
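
    A generic sketch of maximizing a quasi-concave profile over the antimode by interval search; golden-section search stands in here for the paper's bisection, and a toy profile replaces the inner support-reduction step:

      # Golden-section search for the maximum of a quasi-concave
      # profile likelihood over a bracketing interval [lo, hi].
      import math

      def golden_max(profile, lo, hi, tol=1e-8):
          """Maximize a quasi-concave function on [lo, hi]."""
          invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi ~ 0.618
          a, b = lo, hi
          c, d = b - invphi * (b - a), a + invphi * (b - a)
          fc, fd = profile(c), profile(d)
          while b - a > tol:
              if fc >= fd:                                # maximum lies in [a, d]
                  b, d, fd = d, c, fc
                  c = b - invphi * (b - a)
                  fc = profile(c)
              else:                                       # maximum lies in [c, b]
                  a, c, fc = c, d, fd
                  d = a + invphi * (b - a)
                  fd = profile(d)
          return 0.5 * (a + b)

      # Toy profile with its maximum at antimode 2.0:
      print(golden_max(lambda m: -(m - 2.0) ** 2, 0.0, 5.0))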

  5. Predicted PAR1 inhibitors from multiple computational methods

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Liu, Jinfeng; Zhu, Tong; Zhang, Lujia; He, Xiao; Zhang, John Z. H.

    2016-08-01

    Multiple computational approaches are employed in order to find potentially strong binders of PAR1 from the two molecular databases: the Specs database containing more than 200,000 commercially available molecules and the traditional Chinese medicine (TCM) database. By combining the use of popular docking scoring functions together with detailed molecular dynamics simulation and protein-ligand free energy calculations, a total of fourteen molecules are found to be potentially strong binders of PAR1. The atomic details in protein-ligand interactions of these molecules with PAR1 are analyzed to help understand the binding mechanism which should be very useful in design of new drugs.

  6. Introduction to Computational Methods for Stability and Control (COMSAC)

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Fremaux, C. Michael; Chambers, Joseph R.

    2004-01-01

    This Symposium is intended to bring together the often distinct cultures of the Stability and Control (S&C) community and the Computational Fluid Dynamics (CFD) community. The COMSAC program is itself a new effort by NASA Langley to accelerate the application of high end CFD methodologies to the demanding job of predicting stability and control characteristics of aircraft. This talk is intended to set the stage for needing a program like COMSAC. It is not intended to give details of the program itself. The topics include: 1) S&C Challenges; 2) Aero prediction methodology; 3) CFD applications; 4) NASA COMSAC planning; 5) Objectives of symposium; and 6) Closing remarks.

  7. Lattice QCD computations: Recent progress with modern Krylov subspace methods

    SciTech Connect

    Frommer, A.

    1996-12-31

    Quantum chromodynamics (QCD) is the fundamental theory of the strong interaction of matter. In order to compare the theory with results from experimental physics, the theory has to be reformulated as a discrete problem of lattice gauge theory using stochastic simulations. The computational challenge consists in solving several hundreds of very large linear systems with several right-hand sides. A considerable part of the world's supercomputer time is spent in such QCD calculations. This paper presents results on solving systems for the Wilson fermions. Recent progress is reviewed on algorithms obtained in cooperation with partners from theoretical physics.

  8. Adaptive computational methods for SSME internal flow analysis

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1986-01-01

    Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods), in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for the steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.

  9. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

    Computer-aided thermal analysis programs that predict whether orbiting equipment will remain within a predetermined acceptable temperature range, in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for determining these view factors. Basic definitions and standard methods, which form the basis for various digital computer methods, and various numerical methods are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations and the time required for computations are evaluated. The situations where accuracy is important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.
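
    As a small example of the numerical schemes surveyed, a Monte Carlo estimate of the view factor between two directly opposed parallel unit squares; this is a textbook configuration, not one of the reviewed programs:

      # Monte Carlo view factor between two parallel, directly opposed
      # unit squares separated by hgap (value assumed for illustration).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200_000
      hgap = 1.0                                   # plate separation (assumed)

      # Sample independent point pairs on the two unit squares.
      p1 = rng.random((n, 2))                      # on plate 1 (z = 0)
      p2 = rng.random((n, 2))                      # on plate 2 (z = hgap)
      r2 = np.sum((p1 - p2) ** 2, axis=1) + hgap**2   # squared distance

      # Kernel cos(th1)*cos(th2) / (pi r^2) with cos(th) = hgap / r on
      # both plates, integrated over A2 (area 1), averaged over A1.
      F12 = np.mean(hgap**2 / (np.pi * r2**2))     # = hgap^2 / (pi r^4)
      print(f"F12 ~ {F12:.4f}")                    # analytic value ~0.1998 for hgap = 1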

  10. Computational method for thermoviscoelasticity with application to rock mechanics

    NASA Astrophysics Data System (ADS)

    Lee, S. C.

    1984-01-01

    Large-scale numerical computations associated with rock mechanics problems have required efficient and economical models for predicting temperature, stress, failure, and deformed structural configuration under various loading conditions. To meet this requirement, the complex dependence of the properties of geological materials on time and temperature is modified to yield a reduced time scale as a function of time and temperature under the thermorheologically simple material (TSM) postulate. The thermorheologically linear concept is adopted in the finite element formulation by uncoupling thermal and mechanical responses. The thermal responses, based on transient heat conduction or convective diffusion, are formulated by using the two-point recurrence scheme and the upwinding scheme, respectively. An incremental solution procedure with the implicit time stepping scheme is proposed for the solution of the thermoviscoelastic response. The proposed thermoviscoelastic solution algorithm is based on the uniaxial creep experimental data and the corresponding temperature shift functions, and is intended to minimize computational efforts by allowing large time step sizes with stable solutions. A thermoelastic fracture formulation is also presented by introducing the degenerate quadratic isoparametric singular element for the thermally induced line crack problems.
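
    A sketch of the reduced-time idea for a thermorheologically simple material, using a WLF-type shift factor; the constants and temperature history below are illustrative assumptions, not values from the paper:

      # Reduced time xi(t) = integral_0^t dt' / a_T(T(t')) for a TSM,
      # with a WLF-type shift factor (typical textbook constants).
      import numpy as np

      C1, C2, T_REF = 17.4, 51.6, 300.0          # WLF constants (assumed)

      def shift_factor(T):
          return 10.0 ** (-C1 * (T - T_REF) / (C2 + (T - T_REF)))

      def reduced_time(t, T_of_t):
          """Trapezoidal accumulation of dt / a_T along the history."""
          a = shift_factor(T_of_t(t))
          steps = np.diff(t) * 0.5 * (1.0 / a[1:] + 1.0 / a[:-1])
          return np.concatenate([[0.0], np.cumsum(steps)])

      t = np.linspace(0.0, 100.0, 201)
      xi = reduced_time(t, lambda tt: 300.0 + 0.2 * tt)   # slow linear heating
      print(xi[-1])    # reduced time accumulated over the temperature ramp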

  11. Efficient Computer Network Anomaly Detection by Changepoint Detection Methods

    NASA Astrophysics Data System (ADS)

    Tartakovsky, Alexander G.; Polunchenko, Aleksey S.; Sokolov, Grigory

    2013-02-01

    We consider the problem of efficient on-line anomaly detection in computer network traffic. The problem is approached statistically, as that of sequential (quickest) changepoint detection. A multi-cyclic setting of quickest change detection is a natural fit for this problem. We propose a novel score-based multi-cyclic detection algorithm. The algorithm is based on the so-called Shiryaev-Roberts procedure. This procedure is as easy to employ in practice and as computationally inexpensive as the popular Cumulative Sum chart and the Exponentially Weighted Moving Average scheme. The likelihood ratio based Shiryaev-Roberts procedure has appealing optimality properties, particularly it is exactly optimal in a multi-cyclic setting geared to detect a change occurring at a far time horizon. It is therefore expected that an intrusion detection algorithm based on the Shiryaev-Roberts procedure will perform better than other detection schemes. This is confirmed experimentally for real traces. We also discuss the possibility of complementing our anomaly detection algorithm with a spectral-signature intrusion detection system with false alarm filtering and true attack confirmation capability, so as to obtain a synergistic system.
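
    The Shiryaev-Roberts statistic obeys the recursion R_n = (1 + R_{n-1}) * L_n, where L_n is the likelihood ratio of the n-th observation, with an alarm when R_n crosses a threshold. A minimal sketch for a Gaussian mean shift, not the authors' network detector:

      # Shiryaev-Roberts changepoint detection for i.i.d. N(0,1) data
      # shifting to N(theta,1). Toy data, not network traffic.
      import numpy as np

      def shiryaev_roberts(x, theta, threshold):
          """Return the first index at which R_n >= threshold (or None)."""
          R = 0.0
          for n, xn in enumerate(x):
              lr = np.exp(theta * xn - 0.5 * theta**2)   # likelihood ratio of x_n
              R = (1.0 + R) * lr                          # SR recursion
              if R >= threshold:
                  return n
          return None

      rng = np.random.default_rng(1)
      pre = rng.normal(0.0, 1.0, 500)                     # in-control segment
      post = rng.normal(1.0, 1.0, 200)                    # change occurs at n = 500
      print(shiryaev_roberts(np.concatenate([pre, post]), theta=1.0, threshold=1e4))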

  12. A low computation cost method for seizure prediction.

    PubMed

    Zhang, Yanli; Zhou, Weidong; Yuan, Qi; Wu, Qi

    2014-10-01

    The dynamic changes of electroencephalograph (EEG) signals in the period prior to epileptic seizures play a major role in the seizure prediction. This paper proposes a low computation seizure prediction algorithm that combines a fractal dimension with a machine learning algorithm. The presented seizure prediction algorithm extracts the Higuchi fractal dimension (HFD) of EEG signals as features to classify the patient's preictal or interictal state with Bayesian linear discriminant analysis (BLDA) as a classifier. The outputs of BLDA are smoothed by a Kalman filter for reducing possible sporadic and isolated false alarms and then the final prediction results are produced using a thresholding procedure. The algorithm was evaluated on the intracranial EEG recordings of 21 patients in the Freiburg EEG database. For seizure occurrence period of 30 min and 50 min, our algorithm obtained an average sensitivity of 86.95% and 89.33%, an average false prediction rate of 0.20/h, and an average prediction time of 24.47 min and 39.39 min, respectively. The results confirm that the changes of HFD can serve as a precursor of ictal activities and be used for distinguishing between interictal and preictal epochs. Both HFD and BLDA classifier have a low computational complexity. All of these make the proposed algorithm suitable for real-time seizure prediction. PMID:25062892
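
    A sketch of the Higuchi fractal dimension feature used above; this is the standard algorithm, with kmax and the toy signal as assumptions:

      # Higuchi fractal dimension of a 1D signal.
      import numpy as np

      def higuchi_fd(x, kmax=8):
          x = np.asarray(x, dtype=float)
          N = x.size
          L = []
          for k in range(1, kmax + 1):
              lengths = []
              for m in range(k):
                  idx = np.arange(m, N, k)                 # subsampled curve
                  if idx.size < 2:
                      continue
                  dist = np.abs(np.diff(x[idx])).sum()
                  norm = (N - 1) / ((idx.size - 1) * k)    # length normalization
                  lengths.append(dist * norm / k)
              L.append(np.mean(lengths))
          # Slope of log L(k) against log(1/k) estimates the dimension.
          k = np.arange(1, kmax + 1)
          slope, _ = np.polyfit(np.log(1.0 / k), np.log(L), 1)
          return slope

      rng = np.random.default_rng(0)
      print(higuchi_fd(rng.normal(size=1000)))   # white noise: close to 2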

  13. Computational Method for Electrical Potential and Other Field Problems

    ERIC Educational Resources Information Center

    Hastings, David A.

    1975-01-01

    Proposes the finite differences relaxation method as a teaching tool in secondary and university level courses discussing electrical potential, temperature distribution in a region, and similar problems. Outlines the theory and operating procedures of the method, and discusses examples of teaching applications, including possible laboratory…
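
    A minimal sketch of the finite-difference relaxation method for the Laplace equation on a square plate, with arbitrary illustrative boundary values:

      # Jacobi relaxation for Laplace's equation: each interior point is
      # repeatedly replaced by the average of its four neighbors.
      import numpy as np

      n = 50
      V = np.zeros((n, n))
      V[0, :] = 100.0                     # top edge held at 100 V, others at 0

      for _ in range(5000):               # iterate until the update is tiny
          Vn = V.copy()
          Vn[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] +
                                   V[1:-1, :-2] + V[1:-1, 2:])
          if np.max(np.abs(Vn - V)) < 1e-4:
              break
          V = Vn

      print(V[n // 2, n // 2])            # potential at the center, ~25 V by symmetry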

  14. Yeast Ancestral Genome Reconstructions: The Possibilities of Computational Methods

    NASA Astrophysics Data System (ADS)

    Tannier, Eric

    In 2006, a debate arose over the ability of bioinformatics methods to reconstruct mammalian ancestral genomes. Three years later, Gordon et al. (PLoS Genetics, 5(5), 2009) chose not to use automatic methods to build up the genome of a 100-million-year-old Saccharomyces cerevisiae ancestor. Their manually constructed ancestor provides a reference genome for testing whether automatic methods are indeed unable to approach confident reconstructions. Adapting several methodological frameworks to the same yeast gene order data, I discuss the possibilities, differences and similarities of the available algorithms for ancestral genome reconstruction. The methods can be classified into two types: local and global. Studying the properties of both helps to clarify what we can expect from their usage. Both types propose contiguous ancestral regions that come very close (> 95% identity) to the manually predicted ancestral yeast chromosomes, with good coverage of the extant genomes.

  15. A Combined Method to Compute the Proximities of Asteroids

    NASA Astrophysics Data System (ADS)

    Šegan, S.; Milisavljević, S.; Marčeta, D.

    2011-09-01

    We describe a simple and efficient numerical-analytical method to find all of the proximities and critical points of the distance function in the case of two elliptical orbits with a common focus. Our method is based on the solutions of Simovljević's (1974) graphical method and on the transcendent equations developed by Lazović (1993). The method is tested on 2 997 576 pairs of asteroid orbits and compared with the algebraic and polynomial solutions of Gronchi (2005). The model with four proximities was obtained by Gronchi (2002) only by applying the method of random samples, i.e., after many simulations and trials with various values of elliptical elements. We found real pairs with four proximities.

  16. A method of examining the structure and topological properties of public-transport networks

    NASA Astrophysics Data System (ADS)

    Dimitrov, Stavri Dimitri; Ceder, Avishai (Avi)

    2016-06-01

    This work presents a new method of examining the structure of public-transport networks (PTNs) and analyzes their topological properties through a combination of computer programming, statistical data and large-network analyses. In order to automate the extraction, processing and exporting of data, a software program was developed to extract the needed data from the General Transit Feed Specification, thus overcoming difficulties in accessing and collecting data. The proposed method was applied to a real-life PTN in Auckland, New Zealand, with the purpose of examining whether it shows characteristics of scale-free networks and exhibits features of "small-world" networks. As a result, new regression equations were derived that analytically describe the observed, strong, non-linear relationships among the probabilities of randomly chosen stops in the PTN being serviced by a given number of routes. The established dependence is best fitted by an exponential rather than a power-law function, showing that the PTN examined is neither random nor scale-free, but a mixture of the two. This finding explains the presence of hubs that are not typical of exponential networks, yet are not as highly connected to the other nodes as is the case with scale-free networks. On the other hand, the observed values of the topological properties of the network show that, although it is highly clustered, owing to its representation as a directed graph it differs slightly from "small-world" networks, which are characterized by strong clustering and a short average path length.

  17. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    NASA Astrophysics Data System (ADS)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨u⟩/2 (⟨u⟩ is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨u⟩ can readily be computed through an MD simulation of the system composed of the solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of the solute with the corresponding coefficients determined by the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, it provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with a substantial reduction of the computational load.
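
    A sketch of the morphometric form of the water reorganization term, a linear combination of the four geometric measures of the solute, evaluated here for a sphere; the coefficients and the ensemble average below are placeholders, not values from the paper:

      # Morphometric reorganization term: c1*V + c2*A + c3*C + c4*X, with
      # V volume, A surface area, C integrated mean curvature, X
      # integrated Gaussian curvature, shown for a sphere of radius R.
      import numpy as np

      def morphometric_term(R, c=(1.0, 0.1, 0.01, 0.001)):   # placeholder coefficients
          V = 4.0 / 3.0 * np.pi * R**3       # excluded volume
          A = 4.0 * np.pi * R**2             # surface area
          C = 4.0 * np.pi * R                # integrated mean curvature
          X = 4.0 * np.pi                    # integrated Gaussian curvature
          return c[0] * V + c[1] * A + c[2] * C + c[3] * X

      # HFE = <u>/2 + reorganization term; <u> would come from an MD run.
      u_avg = -100.0                          # placeholder ensemble average
      print(0.5 * u_avg + morphometric_term(R=5.0))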

  18. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    NASA Technical Reports Server (NTRS)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

  19. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  20. Decluttering methods for high density computer-generated graphic displays

    NASA Technical Reports Server (NTRS)

    Schultz, E. E., Jr.; Nichols, D. A.; Curran, P. S.

    1985-01-01

    Several decluttering methods were compared with respect to the speed and accuracy of user performance which resulted. The presence of a map background was also manipulated. Partial removal of nonessential graphic features through symbol simplification was as effective a decluttering technique as was total removal of nonessential graphic features. The presence of a map background interacted with decluttering conditions when response time was the dependent measure. Results indicate that the effectiveness of decluttering methods depends upon the degree to which each method makes essential graphic information distinctive from nonessential information. Practical implications are discussed.