Computer methods in applied mechanics and engineering
Vu-Quoc, Loc
Computer methods in applied mechanics and engineering, ELSEVIER, Comput. Methods Appl. Mech. Engrg. ... complications in the computational formulation and computer implementation. We discuss in detail the Lagrangian formulation and the Eulerian-Lagrangian formulation from the computer implementation viewpoint
Kershenbaum, Anne D.; Langston, Michael A.; Levine, Robert S.; Saxton, Arnold M.; Oyana, Tonny J.; Kilbourne, Barbara J.; Rogers, Gary L.; Gittner, Lisaann S.; Baktash, Suzanne H.; Matthews-Juarez, Patricia; Juarez, Paul D.
2014-01-01
Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation of high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. The singleton early premature birth rate in counties with populations over 100,000 provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including black proportion, obesity and diabetes, sexually transmitted infection rates, mother’s age, income, marriage rates, pollution, and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for ways to intervene and reduce health disparities for preterm births. PMID:25464130
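The pipeline this abstract describes (correlate variables, extract dense subgraphs, collapse each into a latent factor, regress the outcome on the factor scores) can be illustrated with a small sketch. This is a minimal stand-in, not the authors' code: the data are synthetic, connected components of a thresholded correlation graph stand in for true paracliques, and factor scores are taken as means of z-scored member variables.

```python
import random
import statistics

random.seed(42)
n = 200

# Synthetic stand-ins for county-level variables: two latent factors, each
# observed through three noisy proxies (all names and numbers invented).
latent_a = [random.gauss(0, 1) for _ in range(n)]
latent_b = [random.gauss(0, 1) for _ in range(n)]

def noisy(base):
    return [v + random.gauss(0, 0.3) for v in base]

variables = [noisy(latent_a) for _ in range(3)] + [noisy(latent_b) for _ in range(3)]
outcome = [2 * a - 1.5 * b + random.gauss(0, 0.2) for a, b in zip(latent_a, latent_b)]

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Step 1: build a correlation graph, connecting variables whose |r| passes a
# threshold (connected components stand in for the paper's paracliques).
k = len(variables)
adj = {i: set() for i in range(k)}
for i in range(k):
    for j in range(i + 1, k):
        if abs(pearson(variables[i], variables[j])) >= 0.8:
            adj[i].add(j)
            adj[j].add(i)

def components(adj):
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

groups = components(adj)

# Step 2: resolve each group into one latent factor score per observation
# (here simply the mean of the z-scored member variables).
def zscore(xs):
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

factors = []
for group in groups:
    zs = [zscore(variables[i]) for i in group]
    factors.append([statistics.mean(col) for col in zip(*zs)])

# Step 3: ordinary least squares of the outcome on the factor scores,
# via the normal equations and Gaussian elimination.
def ols(features, y):
    rows = len(y)
    X = [[1.0] + [f[r] for f in features] for r in range(rows)]
    p = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(rows)) for j in range(p)]
         + [sum(X[r][i] * y[r] for r in range(rows))] for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, p):
            m = A[r][c] / A[c][c]
            A[r] = [a - m * b for a, b in zip(A[r], A[c])]
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (A[i][p] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return beta

beta = ols(factors, outcome)
pred = [beta[0] + sum(b * f[r] for b, f in zip(beta[1:], factors)) for r in range(n)]
ss_res = sum((y - yh) ** 2 for y, yh in zip(outcome, pred))
ss_tot = sum((y - statistics.mean(outcome)) ** 2 for y in outcome)
r_squared = 1 - ss_res / ss_tot
print("groups:", groups)
print(f"R-squared: {r_squared:.3f}")
```

With this synthetic construction the thresholded graph splits the six variables into two groups, and regressing on the two factor scores recovers most of the outcome variance, mirroring the shape (though not the substance) of the paper's R-squared result.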
Sellers, C.; Fox, B.; Paulz, J.
1996-03-01
The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.
Computer methods in applied mechanics and engineering
Yosibash, Zohar
Computer methods in applied mechanics and engineering, ELSEVIER, Comput. Methods Appl. Mech. Engrg. ... derivatives from finite element solutions. Barna A. Szabó, Zohar Yosibash, Center for Computational Mechanics, February 1995. Abstract: A superconvergent method for the computation of the derivatives of the solution
Publication-quality computer graphics
Slabbekorn, M.H.; Johnston, R.B. Jr.
1981-01-01
A user-friendly graphic software package is being used at Oak Ridge National Laboratory to produce publication-quality computer graphics. Close interaction between the graphic designer and the computer programmer has helped to create a highly flexible computer graphics system. The programmer-oriented environment of computer graphics has been modified to allow the graphic designer freedom to exercise his expertise with lines, form, typography, and color. The resultant product rivals or surpasses work previously done by hand. This presentation of computer-generated graphs, charts, diagrams, and line drawings clearly demonstrates the latitude and versatility of the software when directed by a graphic designer.
Computer Science and Technology Publications. NBS Publications List 84.
ERIC Educational Resources Information Center
National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.
This bibliography lists publications of the Institute for Computer Sciences and Technology of the National Bureau of Standards. Publications are listed by subject in the areas of computer security, computer networking, and automation technology. Sections list publications of: (1) current Federal Information Processing Standards; (2) computer…
Computer methods in applied mechanics and engineering
Tezduyar, Tayfun E.
Computer methods in applied mechanics and engineering, ELSEVIER, Comput. Methods Appl. Mech. Engrg. 155 (1998) 235-248. Enhanced-Discretization Interface-Capturing Technique (EDICT) for computation of unsteady flow problems with interfaces, such as two-fluid and free
Computer methods in applied mechanics and engineering
Li, Shaofan
Computer methods in applied mechanics and engineering, ELSEVIER, Comput. Methods Appl. ... to design new window functions so they can enhance the computational performance of the MLSRK algorithm. © 1996 Elsevier Science S.A. All rights reserved.
Special Publication 500-307 Cloud Computing
Special Publication 500-307, Cloud Computing Service Metrics Description. NIST Cloud Computing Reference Architecture and Taxonomy Working Group, NIST Cloud Computing Program, Information Technology Laboratory.
47 CFR 80.771 - Method of computing coverage.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 2014-10-01 false Method of computing coverage. 80.771 Section 80...THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method of computing coverage. Compute the +17 dBu...
47 CFR 80.771 - Method of computing coverage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 2013-10-01 false Method of computing coverage. 80.771 Section 80...THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method of computing coverage. Compute the +17 dBu...
47 CFR 80.771 - Method of computing coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 2011-10-01 false Method of computing coverage. 80.771 Section 80...THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method of computing coverage. Compute the +17 dBu...
47 CFR 80.771 - Method of computing coverage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 2012-10-01 false Method of computing coverage. 80.771 Section 80...THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method of computing coverage. Compute the +17 dBu...
Computers in Public Broadcasting: Who, What, Where.
ERIC Educational Resources Information Center
Yousuf, M. Osman
This handbook offers guidance to public broadcasting managers on computer acquisition and development activities. Based on a 1981 survey of planned and current computer uses conducted by the Corporation for Public Broadcasting (CPB) Information Clearinghouse, computer systems in public radio and television broadcasting stations are listed by…
List of Free Computer-Related Publications
NSDL National Science Digital Library
The List of Free Computer-Related Publications includes hardcopy magazines, newspapers, and journals related to computing which can be subscribed to free of charge. Each entry contains a brief overview of that publication, including its primary focus, typical content, publication frequency, subscription information, as well as an (admittedly) subjective overall rating. Note that some publications have qualifications you must meet in order for the subscription to be free.
Some Uses of Computers in Rhetoric and Public Address.
ERIC Educational Resources Information Center
Clevenger, Theodore, Jr.
1969-01-01
The author discusses the impact of the "computer revolution" on the field of rhetoric and public address in terms of the potential applications of computer methods to rhetorical problems. He first discusses the computer as a very fast calculator, giving the example of a study that probably would not have been undertaken if the calculations had had…
Computer Skills Integration in Public Relations Curricula.
ERIC Educational Resources Information Center
Curtin, Patricia A.; Witherspoon, Elizabeth M.
1999-01-01
Surveys (by e-mail) people in charge of public-relations sequences in U.S. colleges and universities regarding what computer skills they consider most useful for their students. Notes the roles of program size and breadth, and gender and age of educators. Investigates computer-skills prerequisites, useful computer skills, computer skills…
Computational Methods Minor, Department of Computer Science
Barr, Valerie
The specific requirements are: 1. an introductory course in computational methods (CSC-103); there are six introductory courses offered by the computer science department, each of which covers a common set ... 2. 2-3 intermediate-level, applications-oriented courses offered in the computer science department ... into their senior project.
Educational Computing in the Andover Public Schools.
ERIC Educational Resources Information Center
Mitsakos, Charles L.
A rationale for computers in education in the Andover (Massachusetts) public schools, a curricular scope and sequence, a computer acquisitions plan, and a staff development summary are presented. The report is a result of an 18-month study of computers in education; pilot programs in the schools; and input from specialists in business, education,…
Computational Methods for Crashworthiness
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (compiler); Carden, Huey D. (compiler)
1993-01-01
Presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Crashworthiness held at Langley Research Center on 2-3 Sep. 1992 are included. The presentations addressed activities in the area of impact dynamics. Workshop attendees represented NASA, the Army and Air Force, the Lawrence Livermore and Sandia National Laboratories, the aircraft and automotive industries, and academia. The workshop objectives were to assess the state-of-technology in the numerical simulation of crash and to provide guidelines for future research.
Acquisition of Computing Literacy on Shared Public Computers: Children and the "Hole in the Wall"
ERIC Educational Resources Information Center
Mitra, Sugata; Dangwal, Ritu; Chatterjee, Shiffon; Jha, Swati; Bisht, Ravinder S.; Kapur, Preeti
2005-01-01
Earlier work, often referred to as the "hole in the wall" experiments, has shown that groups of children can learn to use public computers on their own. This paper presents the method and results of an experiment conducted to investigate whether such unsupervised group learning in shared public spaces is universal. The experiment was conducted…
Public Relations, Computers, and Election Success.
ERIC Educational Resources Information Center
Banach, William J.; Westley, Lawrence
This paper describes a successful financial election campaign that used a combination of computer technology and public relations techniques. Analysis, determination of needs, development of strategy, organization, finance, communication, and evaluation are given as the steps to be taken for a successful school financial campaign. The authors…
Computer methods in applied mechanics and engineering
Guo, Benqi
Comput. Methods Appl. Mech. Engrg. 157 (1998) 425-440. Domain decomposition method for the h-p version finite element method. Shanghai 201800, PR China. Received 17 December 1997. Abstract: Domain decomposition methods are an effective alternative to classical direct solvers ...
Computational Methods in Uncertainty Quantification
Computational Methods in Uncertainty Quantification. Robert Scheichl. Computational Methods in UQ, HGS Course, June 2015. Lecture 4: Bayesian Inverse Problems. Conditioning on Data; Inverse Problems; Least Squares Minimisation and Regularisation; Bayes' Rule and Bayesian Interpretation.
Systems Science Methods in Public Health
Luke, Douglas A.; Stamatakis, Katherine A.
2012-01-01
Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example in preparing for global pandemics. However, systems science study designs and analytic methods remain underutilized and are not widely featured in public health curricula or training. In this review we present an argument for the utility of systems science methods in public health, introduce three important systems science methods (system dynamics, network analysis, and agent-based modeling), and provide three case studies where these methods have been used to answer important public health science questions in the areas of infectious disease, tobacco control, and obesity. PMID:22224885
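Of the three methods this review introduces, system dynamics is the easiest to sketch compactly. Below is a minimal, assumption-laden illustration in the spirit of the pandemic-preparedness example: a classic SIR compartmental model integrated with forward-Euler steps. All parameter values are invented for the example, not taken from the review.

```python
# A classic SIR (susceptible-infected-recovered) compartmental model,
# integrated with forward-Euler steps. Parameter values are illustrative.
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    """Return the (s, i, r) trajectory as fractions of the population."""
    s, i, r = s0, i0, 0.0
    trajectory = [(s, i, r)]
    for _ in range(int(days / dt)):
        ds = -beta * s * i             # new infections leave S
        di = beta * s * i - gamma * i  # infections enter I, recoveries leave
        dr = gamma * i                 # recoveries enter R
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        trajectory.append((s, i, r))
    return trajectory

traj = simulate_sir()
peak_infected = max(i for _, i, _ in traj)
final_s, final_i, final_r = traj[-1]
print(f"peak infected fraction: {peak_infected:.3f}")
print(f"final recovered fraction: {final_r:.3f}")
```

The other two methods named in the review would replace these aggregate compartments: network analysis with explicit contact structure, and agent-based modeling with individual agents and behavioral rules.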
Satellite orbit computation methods
NASA Technical Reports Server (NTRS)
1977-01-01
Mathematical and algorithmic techniques for the solution of problems in satellite dynamics were developed, along with solutions for satellite orbit motion. Dynamical analyses of shuttle on-orbit operations were conducted. Computer software routines for use in shuttle mission planning were developed and analyzed, and mathematical models of atmospheric density were formulated.
Method for computed tomography
Wagner, W.
1980-10-14
In transverse computed tomography apparatus, in which the positioning zone in which the patient can be positioned is larger than the scanning zone in which a body slice can be scanned, reconstruction errors are liable to occur. These errors are caused by incomplete irradiation of the body during examination. They become manifest not only as an incorrect image of the area not irradiated, but also have an adverse effect on the image of the other, completely irradiated areas. The invention enables reduction of these errors.
Closing the "Digital Divide": Building a Public Computing Center
ERIC Educational Resources Information Center
Krebeck, Aaron
2010-01-01
The public computing center offers an economical and environmentally friendly model for providing additional public computer access when and where it is needed. Though not intended to be a replacement for a full-service branch, the public computing center does offer a budget-friendly option for quickly expanding high-demand services into the…
Computational Methods in Drug Discovery
Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens
2014-01-01
Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from literature. PMID:24381236
Computational Methods Development at Ames
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Smith, Charles A. (Technical Monitor)
1998-01-01
This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents to the above research and speculates on its future course.
Publicly Auditable Secure Multi-Party Computation
International Association for Cryptologic Research (IACR)
Publicly Auditable Secure Multi-Party Computation. Carsten Baum, Ivan Damgård, and Claudio Orlandi. ... no protocol of the SPDZ type supports an audit phase. In this paper, we show how to achieve efficient MPC with a public audit. We formalize the concept of publicly auditable secure computation and provide an enhanced
Code of Federal Regulations, 2010 CFR
2010-07-01
...and donation of public domain computer software. 201.26 Section 201.26 Patents...and donation of public domain computer software. (a) General. This section...the deposit of public domain computer software under section 805 of Public...
Special Publication 800-92 Guide to Computer Security
Special Publication 800-92, Guide to Computer Security Log Management. Recommendations of the National Institute of Standards and Technology. Karen Kent, Murugiah Souppaya. NIST Special Publication 800-92, Computer Security.
Cryptography Challenges for Computational Privacy in Public Clouds
International Association for Cryptologic Research (IACR)
Cryptography Challenges for Computational Privacy in Public Clouds. Sashank Dara, Cisco Systems. ... security, but its readiness for this new generational shift of computing platform, i.e. Cloud Computing, ... into the underpinnings of Computational Privacy and lead to better solutions. I. INTRODUCTION. Cloud computing came out
Computational methods for stellarator configurations
NASA Astrophysics Data System (ADS)
Betancourt, O.
This project had two main objectives. The first one was to continue to develop computational methods for the study of three dimensional magnetic confinement configurations. The second one was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self-consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three dimensional nonlinear effects in Tokamak configurations, these self-consistently computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.
Epistemic divergence and the publicity of scientific methods
Gualtiero Piccinini
2003-01-01
Epistemic divergence occurs when different investigators give different answers to the same question using evidence-collecting methods that are not public. Without following the principle that scientific methods must be public, scientific communities risk epistemic divergence. I explicate the notion of public method and argue that, to avoid the risk of epistemic divergence, scientific communities should (and do) apply only methods
Computers and Public Policy. Proceedings of the Symposium Man and the Computer.
ERIC Educational Resources Information Center
Oden, Teresa, Ed.; Thompson, Christine, Ed.
Experts from the fields of law, business, government, and research were invited to a symposium sponsored by Dartmouth College to examine public policies which are challenged by the advent of computer technology. Eleven papers were delivered addressing such critical social issues related to computing and public policies as the man-computer…
Significance of Computational Intelligence Methods in Computer Networks
Reginald Lal; Andrew Chiou
2009-01-01
The increase in network traffic has led to the concept of congestion in computer networks. The problem of network congestion control remains a major issue in today's computer networks. Despite the various methods and algorithms that have been proposed, due to the dynamic nature of computer networks no universal control method has been widely accepted. This paper reviews various conventional
A Computer-Assisted Instruction in Teaching Abstract Statistics to Public Affairs Undergraduates
ERIC Educational Resources Information Center
Ozturk, Ali Osman
2012-01-01
This article attempts to demonstrate the applicability of a computer-assisted instruction supported with simulated data in teaching abstract statistical concepts to political science and public affairs students in an introductory research methods course. The software is called the Elaboration Model Computer Exercise (EMCE) in that it takes a great…
Acceptance of Computer Technology in the Corporate Public Affairs Function.
ERIC Educational Resources Information Center
Glenn, Martha Cole; And Others
A survey of 160 top "Fortune" 500 companies was conducted in 1979 to determine the extent to which computers were being used in public affairs/government relations research and analysis. The survey instrument was divided into six sections, containing a total of 26 closed-end questions. The six sections elicited information on (1) public affairs…
BOOK REVIEW Computational Photography: Methods and Applications.
Schettini, Raimondo
BOOK REVIEW. Computational Photography: Methods and Applications. By Rastislav Lukac. Boca Raton, FL: ... A definition of computational photography is given by Wikipedia, which is also used by the book's Editor to begin his editorial introduction: "Computational photography refers broadly to computational imaging techniques that enhance
Methods and applications in computational protein design
Biddle, Jason Charles
2010-01-01
In this thesis, we summarize our work on applications and methods for computational protein design. First, we apply computational protein design to address the problem of degradation in stored proteins. Specifically, we ...
How You Can Protect Public Access Computers "and" Their Users
ERIC Educational Resources Information Center
Huang, Phil
2007-01-01
By providing the public with online computing facilities, librarians make available a world of information resources beyond their traditional print materials. Internet-connected computers in libraries greatly enhance the opportunity for patrons to enjoy the benefits of the digital age. Unfortunately, as hackers become more sophisticated and…
Computational methods for stealth design
Cable, V.P.
1992-08-01
A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Recordation of documents pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... Recordation of documents pertaining to computer shareware and donation of public...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Recordation of documents pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... Recordation of documents pertaining to computer shareware and donation of public...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Recordation of documents pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... Recordation of documents pertaining to computer shareware and donation of public...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Recordation of documents pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... Recordation of documents pertaining to computer shareware and donation of public...
Optimization Methods for Computer Animation.
ERIC Educational Resources Information Center
Donkin, John Caldwell
Emphasizing the importance of economy and efficiency in the production of computer animation, this master's thesis outlines methodologies that can be used to develop animated sequences with the highest quality images for the least expenditure. It is assumed that if computer animators are to be able to fully exploit the available resources, they…
TIME-INTEGRATION METHODS IN COMPUTATIONAL AERODYNAMICS
Stanford University
[Slide fragments: "Food Chain" (Joukowsky, Prandtl, Schlichting, Theodorsen); engineers who design the product (Issigonis, Mini); people who decide what product to make (Henry Ford, Bill ...); "Aerodynamic Flow Computations" (airplane density and Cp contour plots).]
Single light scattering: computational methods
Victor G. Farafonov; Vladimir Il’in
Methods based on expansions of the electromagnetic fields in terms of certain wave functions form an important group. This group includes the so-called separation of variables method (SVM), the extended boundary condition method (EBCM), and the point matching method (PMM). The methods are characterized by relatively high accuracy and speed, but they can be efficiently applied only to scatterers of rather simplified shapes
An Analysis of Natural Computing Publication Venues
Fernandez, Thomas
-2011. A list of the Top 100 venues (http://scholar.google.com/citations?view_op=top_venues) ranks Nature (h5 ...), arXiv, and the Social Science Research Network (SSRN). In this paper we outline the impact of the different venues in the discipline of Natural Computing and compare their impact to areas of research
Methods Towards Invasive Human Brain Computer Interfaces
Methods Towards Invasive Human Brain Computer Interfaces. Thomas Navin Lal, Thilo Hinterberger ... there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has ... Birbaumer et al. [1, 9] developed a Brain Computer Interface (BCI), called the Thought Translation Device
Teaching Practical Public Health Evaluation Methods
ERIC Educational Resources Information Center
Davis, Mary V.
2006-01-01
Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…
Computational methods of neutron transport
E. E. Lewis; W. F. Miller
1984-01-01
This book presents a balanced overview of the major methods currently available for obtaining numerical solutions in neutron and gamma ray transport. It focuses on methods particularly suited to the complex problems encountered in the analysis of reactors, fusion devices, radiation shielding, and other nuclear systems. Derivations are given for each of the methods, showing how the transport equation is
Computational methods for stellarator configurations
Betancourt
1989-01-01
This project consists of two parallel objectives. On the one hand, computational techniques for three dimensional magnetic confinement configurations were developed or refined, and on the other hand, these new techniques were applied to the solution of practical fusion energy problems, or the techniques themselves were transferred to other fusion researchers for practical use in the field.
Angenent, Lars T.
Software Requests. Mann Library Public Access Computing. For requesting Mann Library public access computing software: Sara E. Wright, 254-6218, sew268@cornell.edu. ... to academic and research-related resources, by all patrons of the Library. Software for instruction
32 CFR 310.52 - Computer matching publication and review requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
...2014-07-01 2014-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...
32 CFR 310.52 - Computer matching publication and review requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
...2011-07-01 2011-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...
32 CFR 310.52 - Computer matching publication and review requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
...2010-07-01 2010-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...
32 CFR 310.52 - Computer matching publication and review requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
...2013-07-01 2013-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...
32 CFR 310.52 - Computer matching publication and review requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
...2012-07-01 2012-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...
Computational approaches to mine publicly available databases.
Voelker, Rodger B; Cresko, William A; Berglund, J Andrew
2014-01-01
Publicly available sequence annotation data is a vital resource for researchers. Many types of information are available, including structural annotations (i.e., the locations and identities of genomic features) and functional annotations (e.g., gene expression and protein interactions). Annotation data is especially useful for interrogating Next-Gen sequencing data (e.g., identifying genomic features that are associated with mapped reads). Additionally, the vast amount of data that is available offers researchers the opportunity to mine existing data sets and make new discoveries. The ability to efficiently obtain, manipulate, and interrogate this data is a valuable and empowering skill. In this chapter, we introduce several primary data repositories and describe the most commonly encountered file formats. In order to highlight some of the key concepts, operations, and utilities that are involved in working with annotation data we provide a fully worked example of using annotations to answer some basic questions about a particular ChIP-seq data set. PMID:24549675
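As a toy illustration of the kind of operation this chapter describes (associating mapped reads with annotated genomic features), here is a small sketch. All coordinates, feature names, and read positions are invented, and the features are assumed to be sorted and non-overlapping on a single chromosome; real annotation data would come from the repositories and file formats the chapter introduces.

```python
import bisect

# Hypothetical (start, end, name) annotation intervals on one chromosome,
# assumed sorted and non-overlapping.
features = [(100, 500, "geneA_promoter"),
            (800, 1200, "geneB_promoter"),
            (2000, 2400, "enhancer1")]
starts = [f[0] for f in features]

def feature_for(pos):
    """Return the name of the feature containing pos, or None."""
    idx = bisect.bisect_right(starts, pos) - 1  # rightmost feature starting at or before pos
    if idx >= 0:
        start, end, name = features[idx]
        if start <= pos < end:
            return name
    return None

# Hypothetical mapped-read positions; count how many fall in each feature.
reads = [120, 130, 450, 900, 950, 1000, 2100, 3000]
counts = {}
for pos in reads:
    name = feature_for(pos)
    if name:
        counts[name] = counts.get(name, 0) + 1

print(counts)  # reads per annotated feature
```

The binary search keeps each lookup logarithmic in the number of features; the same idea underlies the interval-indexing utilities commonly used on real annotation files.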
A Method of Evaluation for Metropolitan Public Library Systems.
ERIC Educational Resources Information Center
Campbell, H. C.
1980-01-01
Discusses a method of evaluating public library systems development in metropolitan areas with regard to four aspects: financial systems, interlibrary cooperation, new uses and services, and technical developments. The method was first proposed by the Urban Library Study Project of the Toronto Public Libraries. (Author/FM)
Computational Methods in Nanostructure Design
NASA Astrophysics Data System (ADS)
Bellesia, Giovanni; Lampoudi, Sotiria; Shea, Joan-Emma
Self-assembling peptides can serve as building blocks for novel biomaterials. Replica exchange molecular dynamics simulations are a powerful means to probe the conformational space of these peptides. We discuss the theoretical foundations of this enhanced sampling method and its use in biomolecular simulations. We then apply this method to determine the monomeric conformations of the Alzheimer amyloid-β(12-28) peptide that can serve as initiation sites for aggregation.
Computational and theoretical methods for protein folding.
Compiani, Mario; Capriotti, Emidio
2013-12-01
A computational approach is essential whenever the complexity of the process under study is such that direct theoretical or experimental approaches are not viable. This is the case for protein folding, for which a significant amount of data are being collected. This paper reports on the essential role of in silico methods and the unprecedented interplay of computational and theoretical approaches, which is a defining point of the interdisciplinary investigations of the protein folding process. Besides giving an overview of the available computational methods and tools, we argue that computation plays not merely an ancillary role but has a more constructive function in that computational work may precede theory and experiments. More precisely, computation can provide the primary conceptual clues to inspire subsequent theoretical and experimental work even in a case where no preexisting evidence or theoretical frameworks are available. This is cogently manifested in the application of machine learning methods to come to grips with the folding dynamics. These close relationships suggested complementing the review of computational methods within the appropriate theoretical context to provide a self-contained outlook of the basic concepts that have converged into a unified description of folding and have grown in a synergic relationship with their computational counterpart. Finally, the advantages and limitations of current computational methodologies are discussed to show how the smart analysis of large amounts of data and the development of more effective algorithms can improve our understanding of protein folding. PMID:24187909
Mittal, Rajat
(Accepted for publication in Medical & Biological Engineering & Computing.) A Coupled Flow-Acoustic pipe flow which seemed to contradict the postulate of Bruns. Fredberg [12] derived a theoretical model
Spectral Methods for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Zang, T. A.; Streett, C. L.; Hussaini, M. Y.
1994-01-01
As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably.
The motivation for the use of spectral methods in numerical calculations stems from the attractive approximation properties of orthogonal polynomial expansions.
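The transform method credited above to Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) can be sketched in a few lines: instead of convolving Fourier coefficients, evaluate a nonlinear term by transforming to physical space, multiplying pointwise, and transforming back. A minimal NumPy sketch (the function name and test problem are illustrative, not taken from the sources above):

```python
import numpy as np

def nonlinear_term(u_hat):
    """Evaluate the nonlinear term u * du/dx pseudospectrally.

    u_hat: Fourier coefficients of a periodic function u on [0, 2*pi).
    Transform to physical space, multiply pointwise, transform back.
    """
    n = u_hat.size
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers
    u = np.fft.ifft(u_hat)                # to physical space
    du = np.fft.ifft(1j * k * u_hat)      # spectral derivative
    return np.fft.fft(u * du)             # back to spectral space

# Example: u(x) = sin(x), so u*u' = sin(x)cos(x) = 0.5*sin(2x)
n = 32
x = 2 * np.pi * np.arange(n) / n
u_hat = np.fft.fft(np.sin(x))
w_hat = nonlinear_term(u_hat)
w = np.fft.ifft(w_hat).real
print(np.allclose(w, 0.5 * np.sin(2 * x)))   # True
```

The pointwise product costs O(n log n) via the FFT, versus O(n^2) for a direct convolution of coefficients, which is exactly why the transform method revived spectral computation.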
Computer-Based Test Interpretation and the Public Interest.
ERIC Educational Resources Information Center
Mitchell, James V., Jr.
Computer-based test interpretation (CBTI) is discussed in terms of its potential dangers to the public interest, problems with professional review of CBTI systems, and needed policies for these systems. Several problems with CBTI systems are outlined: (1) they may be nicely packaged, but it is difficult to establish their value; (2) they do not…
Computer mediated communication and publication productivity among faculty
Joel Cohen
1996-01-01
Investigates whether faculty who use computer mediated communication (CMC) achieve greater scholarly productivity as measured by publications and a higher incidence in the following prestige factors: receipt of awards; service on a regional or national committee of a professional organization; service on an editorial board of a refereed journal; service as a principal investigator on an externally funded project; or
Computer Mediated Communication and Publication Productivity among Faculty.
ERIC Educational Resources Information Center
Cohen, Joel
1996-01-01
Reports the results of a study that investigated whether faculty who use computer mediated communication (CMC) achieve greater scholarly productivity as measured by publications and numerous other prestige factors. Findings that indicate positive results from CMC are described, implications for faculty and academic libraries are discussed, and…
A Bibliography of Selected Rand Publications; Computing Technology.
ERIC Educational Resources Information Center
Rand Corp., Santa Monica, CA.
The bibliography contains 308 abstracts of unclassified Rand studies dealing with various aspects of computing technology. The studies selected have all been issued during the period January 1963 through August 1971. The intention is to revise the bibliography at periodic intervals to incorporate new publications. Both subject and author indexes…
Nqthm-1992 GENERAL PUBLIC SOFTWARE LICENSE Computational Logic, Inc.
Boyer, Robert Stephen
SOFTWARE IS LICENSED FREE OF CHARGE, WE PROVIDE ABSOLUTELY NO WARRANTY. THE SOFTWARE IS PROVIDED "AS IS". Nqthm-1992 GENERAL PUBLIC SOFTWARE LICENSE, Computational Logic, Inc., 1717 West Sixth, Suite 290, Austin, Texas 78703-4776. Please read this license carefully before using the Nqthm-1992 Software.
Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand
ERIC Educational Resources Information Center
Jayakar, Krishna; Park, Eun-A
2012-01-01
The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…
Computational Methods for Failure Analysis and Life Prediction
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (compiler); Harris, Charles E. (compiler); Housner, Jerrold M. (compiler); Hopkins, Dale A. (compiler)
1993-01-01
This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.
Numerical Methods for Computing Casimir Interactions
Steven G. Johnson
We review several different approaches for computing Casimir forces and related fluctuation-induced interactions between bodies of arbitrary shapes and materials. The relationships between this problem and well known computational techniques from classical electromagnetism are emphasized. We also review the basic principles of standard computational methods, categorizing them according to three criteria—choice of problem, basis, and solution technique—that can be used
Computer methods in electric network analysis
Saver, P.; Hajj, I.; Pai, M.; Trick, T.
1983-06-01
The computational algorithms utilized in power system analysis have more than just a minor overlap with those used in electronic circuit computer aided design. This paper describes the computer methods that are common to both areas and highlights the differences in application through brief examples. Recognizing this commonality has stimulated the exchange of useful techniques in both areas and has the potential of fostering new approaches to electric network analysis through the interchange of ideas.
Computational methods for biomolecular electrostatics.
Dong, Feng; Olsen, Brett; Baker, Nathan A
2008-01-01
An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate, and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion, and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems, with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951
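The chapter surveys dedicated tools (Poisson-Boltzmann solvers and the like); as a much simpler stand-in, the bare pairwise Coulomb sum below illustrates the long-range interaction being modeled. The reduced units and function name are illustrative assumptions, not the chapter's tools:

```python
import numpy as np

def coulomb_energy(coords, charges, eps_r=1.0):
    """Total pairwise Coulomb energy in reduced units:
    E = sum_{i<j} q_i * q_j / (eps_r * r_ij).

    coords: (N, 3) array of positions; charges: (N,) array."""
    coords = np.asarray(coords, dtype=float)
    q = np.asarray(charges, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    i, j = np.triu_indices(len(q), k=1)   # unique pairs i < j
    return np.sum(q[i] * q[j] / (eps_r * r[i, j]))

# Two opposite unit charges 2 units apart: E = -1/2
print(coulomb_energy([[0, 0, 0], [2, 0, 0]], [1.0, -1.0]))  # -0.5
```

The slow 1/r decay visible in the formula is the reason electrostatics, unlike covalent or van der Waals terms, cannot be truncated at short range in biomolecular models.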
The Contingent Valuation Method in Public Libraries
ERIC Educational Resources Information Center
Chung, Hye-Kyung
2008-01-01
This study aims to present a new model measuring the economic value of public libraries, combining the dissonance minimizing (DM) and information bias minimizing (IBM) format in the contingent valuation (CV) surveys. The possible biases which are tied to the conventional CV surveys are reviewed. An empirical study is presented to compare the model…
Graphical method for analyzing digital computer efficiency
NASA Technical Reports Server (NTRS)
Chan, S. P.; Munoz, R. M.
1971-01-01
Analysis method utilizes graph-theoretic approach for evaluating computation cost and makes logical distinction between linear graph of a computation and linear graph of a program. It applies equally well to other processes which depend on quantitative edge nomenclature and precedence relationships between edges.
Computational Methods for High-Dimensional Rotations
Buja, Andreas
Computational Methods for High-Dimensional Rotations in Data Visualization. Andreas Buja, Dianne, The Wharton School, University of Pennsylvania, 471 Huntsman Hall, Philadelphia, PA 19104-6302; http://www-stat.wharton.upenn.edu/~buja
Computational Chemistry Using Modern Electronic Structure Methods
ERIC Educational Resources Information Center
Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert
2007-01-01
Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now easily be applied even to large molecules.
Computational Anatomy -Methods and Mathematical Challenges
Díaz, Lorenzo J.
Computational Anatomy - Methods and Mathematical Challenges. Martins Bruveris, EPFL, August 12, 2012. [déformables pour la reconnaissance de formes et l'anatomie numérique, PhD thesis, 2007]
Numerical methods for computing Casimir interactions
Steven G. Johnson
2010-10-01
We review several different approaches for computing Casimir forces and related fluctuation-induced interactions between bodies of arbitrary shapes and materials. The relationships between this problem and well known computational techniques from classical electromagnetism are emphasized. We also review the basic principles of standard computational methods, categorizing them according to three criteria---choice of problem, basis, and solution technique---that can be used to classify proposals for the Casimir problem as well. In this way, mature classical methods can be exploited to model Casimir physics, with a few important modifications.
Computational Methods for Ideal Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Kercher, Andrew D.
Numerical schemes for ideal magnetohydrodynamics (MHD) are widely used for modeling space weather and astrophysical flows. They are designed to resolve the different waves that propagate through a magnetohydrodynamic fluid, namely, the fast, Alfven, slow, and entropy waves. Numerical schemes for ideal magnetohydrodynamics that are based on the standard finite volume (FV) discretization exhibit pseudo-convergence, in which non-regular waves vanish only after heavy grid refinement. A method is described for obtaining solutions for coplanar and near-coplanar cases that consist of only regular waves, independent of grid refinement. The method, referred to as Compound Wave Modification (CWM), involves removing the flux associated with non-regular structures and can be used for simulations in two and three dimensions because it does not require explicitly tracking an Alfven wave. For a near-coplanar case, and for grids with 2^13 points or fewer, we find root-mean-square errors (RMSEs) that are as much as 6 times smaller. For the coplanar case, in which non-regular structures will exist at all levels of grid refinement for standard FV schemes, the RMSE is as much as 25 times smaller. A multidimensional ideal MHD code has been implemented for simulations on graphics processing units (GPUs). Performance measurements were conducted for both the NVIDIA GeForce GTX Titan and the Intel Xeon E5645 processor. The GPU is shown to perform one to two orders of magnitude faster than the CPU when using a single core, and two to three times faster than the CPU when run in parallel with OpenMP. Performance comparisons are made for two methods of storing data on the GPU. The first approach stores data as an Array of Structures (AoS), e.g., a point coordinate array of size 3 x n is iterated over. The second approach stores data as a Structure of Arrays (SoA), e.g., three separate arrays of size n are iterated over simultaneously. For an AoS, coalescing does not occur, reducing memory efficiency.
All results are given for Cartesian grids, but the algorithms are implemented for general geometry on unstructured grids.
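The AoS-versus-SoA distinction in the abstract can be made concrete even outside CUDA. The NumPy sketch below (illustrative, not the thesis code) shows how the two layouts place the same coordinates in memory; the stride of the x field is what determines whether consecutive GPU threads could coalesce their loads:

```python
import numpy as np

n = 8

# Array of Structures (AoS): one record per point, fields interleaved
aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8")])
aos["x"] = np.arange(n)

# Structure of Arrays (SoA): one contiguous array per field
soa = {"x": np.arange(n, dtype="f8"),
       "y": np.zeros(n),
       "z": np.zeros(n)}

# In the AoS layout the x values are strided (24-byte stride here, since
# each record holds x, y, z), so a kernel touching only x drags the y and
# z bytes through memory too. In the SoA layout the x values are
# contiguous (8-byte stride), which is what allows coalesced access.
print(aos["x"].strides)  # (24,)
print(soa["x"].strides)  # (8,)
```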
Computational Methods for Rough Classification and Discovery.
ERIC Educational Resources Information Center
Bell, D. A.; Guan, J. W.
1998-01-01
Rough set theory is a new mathematical tool to deal with vagueness and uncertainty. Computational methods are presented for using rough sets to identify classes in datasets, finding dependencies in relations, and discovering rules which are hidden in databases. The methods are illustrated with a running example from a database of car test results.…
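The lower and upper approximations at the heart of rough set theory are easy to state in code: given the partition induced by indiscernible attribute values, a class is certainly in the target set if its block lies wholly inside it, and possibly in it if the block merely overlaps. A minimal sketch, with a made-up partition standing in for the car-test example:

```python
def approximations(equiv_classes, target):
    """Rough-set lower and upper approximations of a target set.

    equiv_classes: partition of the universe induced by the
    indiscernibility relation (objects with identical attribute values).
    """
    target = set(target)
    lower, upper = set(), set()
    for cls in equiv_classes:
        cls = set(cls)
        if cls <= target:
            lower |= cls   # wholly inside: certainly in the target
        if cls & target:
            upper |= cls   # overlaps: possibly in the target
    return lower, upper

# Hypothetical cars grouped by identical test attributes; target = "safe"
classes = [{1, 2}, {3}, {4, 5}]
safe = {1, 2, 4}
lo, up = approximations(classes, safe)
print(lo)  # {1, 2}
print(up)  # {1, 2, 4, 5}
```

The boundary region `up - lo` (here `{4, 5}`) is exactly where the attributes are too coarse to decide membership, which is the vagueness rough sets are designed to expose.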
Eulerian semiclassical computational methods in quantum dynamics
Maryland at College Park, University of
Simple and efficient in high dimension; particles (rays) may diverge, causing loss of accuracy and requiring remeshing (increasing computational cost; cost can be reduced via moment closure or the level set method). A ray tracing result: rays, interfaces between different materials, etc. Modern theory (KAM theory) and numerical methods (symplectic
Computing discharge using the index velocity method
Levesque, Victor A.; Oberg, Kevin A.
2012-01-01
Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. 
Index ratings are developed by means of regression techniques in which the mean cross-sectional velocity for the standard section is related to the measured index velocity. Most ratings are simple-linear regressions, but more complex ratings may be necessary in some cases. Once the rating is established, validation measurements should be made periodically. Over time, validation measurements may provide additional definition to the rating or result in the creation of a new rating. The computation of discharge is the last step in the index velocity method, and in some ways it is the most straightforward step. This step differs little from the steps used to compute discharge records for stage-discharge gaging stations. The ratings are entered into database software used for records computation, and continuous records of discharge are computed.
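The two-rating structure described above can be sketched directly: fit a simple-linear index rating by regression, pair it with a stage-area rating, and multiply the two outputs to get discharge. All numbers and the rectangular-channel rating below are invented for illustration, not USGS data:

```python
import numpy as np

# Calibration measurements at a hypothetical gaging station:
# ADVM index velocity vs. mean channel velocity from discharge
# measurements (values are illustrative only).
v_index = np.array([0.20, 0.45, 0.70, 1.00, 1.30])   # m/s
v_mean  = np.array([0.18, 0.42, 0.66, 0.95, 1.24])   # m/s

# Simple-linear index rating: V = a * v_index + b
a, b = np.polyfit(v_index, v_mean, 1)

def stage_area(stage):
    """Hypothetical stage-area rating: rectangular section 20 m wide,
    so area = width * depth."""
    return 20.0 * stage

def discharge(v_index_obs, stage_obs):
    """Q = V * A: index-velocity rating times stage-area rating."""
    return (a * v_index_obs + b) * stage_area(stage_obs)

print(discharge(0.80, 1.5))   # discharge in m^3/s
```

Validation measurements would be plotted against this rating over time; a persistent offset is the signal to shift to a new rating.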
Updated Panel-Method Computer Program
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1995-01-01
Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow, a time-stepping wake model for simulating either steady or unsteady motions, capability for Trefftz-plane computation of induced drag, and capability for computation of off-body and on-body streamlines and of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.
PH 7019 Public Health Research Methods (CRN: 87556)
Frantz, Kyle J.
PH 7019 Public Health Research Methods (CRN: 87556). Shanta R. Dube, PhD, MPH, Epidemiology. & Van Ryzin, G. G. (2011). Research Methods in Practice: Strategies for Description and Causation. A general introduction to research methods, emphasizing systematic approaches to collection and analysis
Method and system for benchmarking computers
Gustafson, John L. (Ames, IA)
1993-09-14
A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
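The patented scheme (a fixed benchmarking interval, a scalable task set, and a rating based on degree of progress) can be mimicked in a few lines. The pi-series task below is an illustrative stand-in, not the system's actual task store:

```python
import time

def scalable_task(n):
    """A task whose answer improves in resolution with n:
    estimate pi by the Leibniz series truncated at n terms."""
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n))

def benchmark(interval=0.2):
    """Run ever-larger instances of the task for a fixed wall-clock
    interval; the rating is the degree of progress (largest n done)."""
    deadline = time.perf_counter() + interval
    n, rating = 1, 0
    while time.perf_counter() < deadline:
        scalable_task(n)
        rating = n
        n *= 2   # ever-increasing degrees of resolution
    return rating

print(benchmark())   # higher ratings on faster machines
```

Because the task scales indefinitely, the rating never saturates on fast hardware, which is the point of the patent's design over fixed-size benchmarks.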
Meshless methods for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Katz, Aaron Jon
While the generation of meshes has always posed challenges for computational scientists, the problem has become more acute in recent years. Increased computational power has enabled scientists to tackle problems of increasing size and complexity. While algorithms have seen great advances, mesh generation has lagged behind, creating a computational bottleneck. For industry and government looking to impact current and future products with simulation technology, mesh generation imposes great challenges. Many generation procedures often lack automation, requiring many man-hours, which are becoming far more expensive than computer hardware. More automated methods are less reliable for complex geometry with sharp corners, concavity, or otherwise complex features. Most mesh generation methods to date require a great deal of user expertise to obtain accurate simulation results. Since the application of computational methods to real world problems appears to be paced by mesh generation, alleviating this bottleneck potentially impacts an enormous field of problems. Meshless methods applied to computational fluid dynamics is a relatively new area of research designed to help alleviate the burden of mesh generation. Despite their recent inception, there exists no shortage of formulations and algorithms for meshless schemes in the literature. A brief survey of the field reveals varied approaches arising from diverse mathematical backgrounds applied to a wide variety of applications. All meshless schemes attempt to bypass the use of a conventional mesh entirely or in part by discretizing governing partial differential equations on scattered clouds of points. A goal of the present thesis is to develop a meshless scheme for computational fluid dynamics and evaluate its performance compared with conventional methods. 
The meshless schemes developed in this work compare favorably with conventional finite volume methods in terms of accuracy and efficiency for the Euler and Navier-Stokes equations. The success of these schemes may be largely attributed to their sound mathematical foundation based on a local extremum diminishing property, which has been generalized to handle local clouds of points instead of mesh-based topologies. In addition, powerful algorithms are developed to accelerate convergence for meshless schemes, which also apply to mesh-based schemes in a mesh-transparent manner. The convergence acceleration technique, termed "multicloud," produces schemes with convergence rates rivaling structured multigrid. However, the advantage of multicloud is that it makes no assumptions regarding mesh topology or discretization used on the finest level. Thus, multicloud is extremely general and widely applicable. Finally, a unique application of meshless methods is demonstrated for overset grids in which a meshless method is used to seamlessly connect different types of grids. It is shown that meshless methods provide significant advantages over conventional interpolation procedures for overset grids. This application serves to highlight the practical utility of meshless schemes for computational fluid dynamics.
Semiempirical methods for computing turbulent flows
NASA Technical Reports Server (NTRS)
Belov, I. A.; Ginzburg, I. P.
1986-01-01
Two semiempirical theories which provide a basis for determining the turbulent friction and heat exchange near a wall are presented: (1) the Prandtl-Karman theory, and (2) the theory utilizing an equation for the energy of turbulent pulsations. A comparison is made between exact numerical methods and approximate integral methods for computing the turbulent boundary layers in the presence of pressure, blowing, or suction gradients. Using the turbulent flow around a plate as an example, it is shown that, when computing turbulent flows with external turbulence, it is preferable to construct a turbulence model based on the equation for energy of turbulent pulsations.
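One concrete piece of the Prandtl-Karman theory mentioned above is the logarithmic law of the wall for the mean turbulent velocity near a surface. A minimal sketch using standard textbook constants (not values taken from this paper):

```python
import math

KAPPA = 0.41   # von Karman constant
B = 5.0        # log-law intercept for a smooth wall

def u_plus(y_plus):
    """Mean velocity in wall units from the log law
    u+ = (1/kappa) * ln(y+) + B, valid roughly for y+ > 30
    (the log layer)."""
    return math.log(y_plus) / KAPPA + B

for yp in (30, 100, 1000):
    print(yp, u_plus(yp))
```

Semiempirical is the right word: the logarithmic form follows from Prandtl's mixing-length argument, but kappa and B are fitted to experiment.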
Survey of Public IaaS Cloud Computing API
NASA Astrophysics Data System (ADS)
Yamato, Yoji; Moriya, Takaaki; Ogawa, Takeshi; Akahani, Junichi
Recently, cloud computing has spread rapidly, and many Cloud providers have started their Cloud services. One of the problems of Cloud computing is "Cloud provider lock-in" for users. Cloud computing management APIs such as ordering or provisioning differ for each Cloud provider, so users need to study and implement new APIs when they change Cloud providers. OGF and DMTF have started discussions on standardization of Cloud computing APIs, but there is no standard yet. In this technical note, to clarify what APIs Cloud providers should provide, we study common APIs for Cloud computing. We survey and compare Cloud computing APIs such as Rackspace Cloud Server, Sun Cloud, GoGrid, ElasticHosts, Amazon EC2 and FlexiScale, which are currently provided as public IaaS Cloud APIs in the market. From the survey, the common APIs should support a REST access style and provide account management, virtual server management, storage management, network management and resource usage management capabilities. We also show an example of OSS to provide these common APIs compared to normal hosting services OSS.
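The "common APIs" conclusion suggests a provider-neutral interface layer covering the capability areas the survey identifies. The sketch below is a hypothetical facade with invented method names, not any vendor's real API; a REST adapter per provider would implement the same interface:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Provider-neutral facade over the capability areas the survey
    identifies: server, storage, network, and usage management.
    Method names are illustrative, not any provider's actual API."""

    @abstractmethod
    def create_server(self, image: str, size: str) -> str: ...

    @abstractmethod
    def list_servers(self) -> list: ...

    @abstractmethod
    def usage(self) -> dict: ...

class FakeProvider(CloudProvider):
    """In-memory stand-in showing where a REST adapter would slot in."""
    def __init__(self):
        self._servers = {}

    def create_server(self, image, size):
        sid = f"srv-{len(self._servers) + 1}"
        self._servers[sid] = {"image": image, "size": size}
        return sid

    def list_servers(self):
        return list(self._servers)

    def usage(self):
        return {"servers": len(self._servers)}

p = FakeProvider()
sid = p.create_server("ubuntu-20.04", "small")
print(sid, p.usage())   # srv-1 {'servers': 1}
```

Swapping providers then means swapping the adapter class, which is exactly the lock-in remedy the note argues for.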
Computational Methods for Structural Mechanics and Dynamics
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (editor); Housner, Jerrold M. (editor); Tanner, John A. (editor); Hayduk, Robert J. (editor)
1989-01-01
Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.
Chapter 10: Computational Methods for Promoter Recognition
Chapter 10: Computational Methods for Promoter Recognition. Michael Q. Zhang, Associate Professor. 10.1 Introduction. In this chapter, we shall describe the problem of promoter recognition. We begin element) and BRE (TFIIB recognition element). Not every element occurs in a core promoter, people have
Yokohama, Noriya
2013-07-01
This report was aimed at structuring the design of architectures and studying performance measurement of a parallel computing environment using a Monte Carlo simulation for particle therapy using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed an approximately 28 times faster speed than seen with single-thread architecture, combined with improved stability. A study of methods of optimizing the system operations also indicated lower cost. PMID:23877155
Shifted power method for computing tensor eigenpairs.
Mayo, Jackson R.; Kolda, Tamara Gibson
2010-10-01
Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ‖x‖ = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
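The iteration itself is short: repeatedly apply x ← normalize(Ax^(m-1) + αx), where the shift α, chosen large enough, makes convergence monotone. A sketch for order m = 3 on a random symmetric test tensor; the shift value and tolerances below are assumptions for illustration, not parameters from the paper:

```python
import numpy as np
from itertools import permutations

def ss_hopm(A, alpha=5.0, tol=1e-12, max_iter=5000, seed=0):
    """Shifted symmetric higher-order power method (after Kolda & Mayo)
    for a symmetric order-3 tensor A: seek (lam, x) with
    A x^2 = lam * x and ||x|| = 1."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        Ax2 = np.einsum("ijk,j,k->i", A, x, x)   # A x^{m-1} for m = 3
        x_new = Ax2 + alpha * x                   # shifted step
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = x @ np.einsum("ijk,j,k->i", A, x, x)
    return lam, x

# Build a random symmetric order-3 tensor by averaging permutations
n = 4
G = np.random.default_rng(1).standard_normal((n, n, n))
A = sum(np.transpose(G, p) for p in permutations(range(3))) / 6.0

lam, x = ss_hopm(A)
residual = np.linalg.norm(np.einsum("ijk,j,k->i", A, x, x) - lam * x)
print(residual < 1e-6)   # True: (lam, x) satisfies A x^2 = lam x
```

At a fixed point, Ax^2 + αx is parallel to x, so Ax^2 = λx exactly; the shift trades convergence speed for the monotonicity guarantee.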
29 CFR 548.500 - Methods of computation.
Code of Federal Regulations, 2012 CFR
2012-07-01
...AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Computation...Methods of computation. The methods of computing overtime pay on the basic rates for...employees are the same as the methods of computing overtime pay at the regular rate....
77 FR 26509 - Notice of Public Meeting-Cloud Computing Forum & Workshop V
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-04
...Notice of Public Meeting--Cloud Computing Forum & Workshop V AGENCY: National...SUMMARY: NIST announces the Cloud Computing Forum & Workshop V to be held...the U.S. Government (USG) Cloud Computing Technology Roadmap...
77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-18
...Notice of Public Meeting--Cloud Computing and Big Data Forum and Workshop...Technology (NIST) announces a Cloud Computing and Big Data Forum and Workshop...hands-on workshop. The NIST Cloud Computing and Big Data Forum and...
76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-07
...Notice of Public Meeting--Cloud Computing Forum & Workshop IV AGENCY...SUMMARY: NIST announces the Cloud Computing Forum & Workshop IV to be held...the U.S. Government (USG) Cloud Computing Technology Roadmap...
Referees Often Miss Obvious Errors in Computer and Electronic Publications
NASA Astrophysics Data System (ADS)
de Gloucester, Paul Colin
2013-05-01
Misconduct is extensive and damaging. So-called science is prevalent. Articles resulting from so-called science are often cited in other publications. This can have damaging consequences for society and for science. The present work includes a scientometric study of 350 articles (published by the Association for Computing Machinery; Elsevier; The Institute of Electrical and Electronics Engineers, Inc.; John Wiley; Springer; Taylor & Francis; and World Scientific Publishing Co.). A lower bound of 85.4% of the articles are found to be incongruous. Authors cite inherently self-contradictory articles more than valid articles. Incorrect informational cascades ruin the literature's signal-to-noise ratio even for uncomplicated cases.
ERIC Educational Resources Information Center
Knox, A. Whitney; Miller, Bruce A.
1980-01-01
Describes a method for estimating the number of cathode ray tube terminals needed for public use of an online library catalog. Authors claim method could also be used to estimate needed numbers of microform readers for a computer output microform (COM) catalog. Formulae are included. (Author/JD)
A method to compute periodic sums
NASA Astrophysics Data System (ADS)
Gumerov, Nail A.; Duraiswami, Ramani
2014-09-01
In a number of problems in computational physics, a finite sum of kernel functions centered at N particle locations located in a box in three dimensions must be extended by imposing periodic boundary conditions on box boundaries. Even though the finite sum can be efficiently computed via fast summation algorithms, such as the fast multipole method (FMM), the periodized extension is usually treated via a different algorithm, Ewald summation, accelerated via the fast Fourier transform (FFT). A different approach to compute this periodized sum just using a blackbox finite fast summation algorithm is presented in this paper. The method splits the periodized sum into two parts. The first, comprising the contribution of all points outside a large sphere enclosing the box, and some of its neighbors, is approximated inside the box by a collection of kernel functions (“sources”) placed on the surface of the sphere or using an expansion in terms of spectrally convergent local basis functions. The second part, comprising the part inside the sphere, and including the box and its immediate neighborhood, is treated via available summation algorithms. The coefficients of the sources are determined by least squares collocation of the periodicity condition of the total potential, imposed on a circumspherical surface for the box. While the method is presented in general, details are worked out for the case of evaluating electrostatic potentials and forces. Results show that when used with the FMM, the periodized sum can be computed to any specified accuracy, at an additional cost of the order of the free-space FMM. Several technical details and efficient algorithms for auxiliary computations are provided, as are numerical comparisons.
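The brute-force version of this periodized sum, which the Ewald- and FMM-based schemes accelerate, can be written directly by replicating the source box over a finite shell of image cells. A minimal Python sketch, assuming a 1/r kernel and a unit cubic box (illustrative choices, not the paper's code):

```python
import numpy as np

def periodized_potential(targets, sources, charges, box, images=2):
    """Brute-force periodized 1/r sum: add the contribution of every
    source replicated over (2*images+1)**3 image boxes."""
    shifts = np.arange(-images, images + 1) * box
    phi = np.zeros(len(targets))
    for sx in shifts:
        for sy in shifts:
            for sz in shifts:
                shifted = sources + np.array([sx, sy, sz])
                for i, t in enumerate(targets):
                    r = np.linalg.norm(shifted - t, axis=1)
                    keep = r > 1e-12                # skip self-interaction
                    phi[i] += np.sum(charges[keep] / r[keep])
    return phi

# A +1/-1 charge pair placed symmetrically about the box center: the
# potential at the center cancels exactly, images included.
src = np.array([[0.25, 0.5, 0.5], [0.75, 0.5, 0.5]])
q = np.array([1.0, -1.0])
phi_center = periodized_potential(np.array([[0.5, 0.5, 0.5]]), src, q, box=1.0)
print(phi_center)
```

The cost of this reference implementation grows with the cube of the image count, which is exactly why the paper replaces the far images with equivalent sources fitted on an enclosing sphere.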
Computational methods for industrial radiation measurement applications
Gardner, R.P.; Guo, P.; Ao, Q.
1996-12-31
Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments.
Department of Computer Science Series of Publications A
Roos, Teemu
ISBN 978-952-10-3988-1 (paperback), ISBN 978-952-10-3989-8 (PDF). Computing Reviews (1998) Classification: G.3, H.1.1, I.2.6, I.2.7. Abstract: In this Thesis, we develop theory and methods
Computational Thermochemistry and Benchmarking of Reliable Methods
Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.
2006-06-20
During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.
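The extrapolation to the complete basis set limit mentioned above is often done with a two-point inverse-cubic formula in the basis-set cardinal number. A minimal sketch, where the functional form and the input energies are illustrative assumptions rather than numbers from this project:

```python
def cbs_extrapolate(e_n, e_m, n, m):
    """Two-point extrapolation assuming E(x) = E_cbs + A / x**3,
    with cardinal numbers n < m (e.g., 3 for triple-, 4 for quadruple-zeta)."""
    a = (e_n - e_m) / (1.0 / n**3 - 1.0 / m**3)
    return e_n - a / n**3

# Hypothetical triple- and quadruple-zeta total energies (hartree).
e_tz, e_qz = -76.332, -76.360
e_cbs = cbs_extrapolate(e_tz, e_qz, 3, 4)
print(round(e_cbs, 4))  # -76.3804
```

The extrapolated value lies below both finite-basis energies, as expected when the basis-set error decays monotonically.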
Computational methods of electron/photon transport
Mack, J.M.
1983-01-01
A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated.
Parallel computer methods for eigenvalue extraction
NASA Technical Reports Server (NTRS)
Akl, Fred
1988-01-01
A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is used in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. The advantages of this new algorithm in parallel computer architecture are discussed.
Analytic Method for Computing Instrument Pointing Jitter
NASA Technical Reports Server (NTRS)
Bayard, David
2003-01-01
A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor- output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
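The state-space idea can be illustrated with a standard related computation: for a linear system driven by white noise, the steady-state output variance follows from an algebraic Lyapunov equation rather than a frequency-domain integral. The second-order plant below is an illustrative stand-in, not the jitter model of the article:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Second-order plant dx = A x dt + B dW (unit-intensity white noise).
omega, zeta = 2.0, 0.5
A = np.array([[0.0, 1.0],
              [-omega**2, -2.0 * zeta * omega]])
B = np.array([[0.0], [1.0]])

# Steady-state covariance P solves A P + P A^T + B B^T = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)

# rms of the position state; the analytic value is sqrt(1 / (4 zeta omega^3)).
rms = np.sqrt(P[0, 0])
print(rms)  # 0.25
```

The exact analytic answer drops out of one matrix solve, with no numerical quadrature of a weighted spectral density.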
Laurini, Robert
Groupware for Urban Planning and Computer-Based Public Participation. Pr. Robert Laurini. Outline: I - What is Groupware? II - Is Groupware Useful for Urban Planning? III - Public Participation. IV - Conclusions.
Rotarius, Timothy; Wan, Thomas T H; Liberman, Aaron
2007-01-01
Research plays a critical role throughout virtually every conduit of the health services industry. The key terms of research, public relations, and organizational interests are discussed. Combining public relations as a strategic methodology with the organizational concern as a factor, a typology of four different research methods emerges. These four health marketing research methods are: investigative, strategic, informative, and verification. The implications of these distinct and contrasting research methods are examined. PMID:19042536
Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)
2013-01-01
Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.
17 CFR 43.3 - Method and timing for real-time public reporting.
Code of Federal Regulations, 2014 CFR
2014-04-01
...2014-04-01 false Method and timing for real-time public reporting. 43.3 Section 43...TRADING COMMISSION (CONTINUED) REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a)...
Selected Publications 1. A. Cherkaev. Variational methods for structural
Cherkaev, Andrej
Selected Publications: 1. A. Cherkaev, Variational Methods for Structural Optimization, Springer. Since ancient times this technique has proved effective, and in engineering landmarks the problem is addressed by standard means. The corresponding problems and techniques are discussed in my book.
The Diffusion of Evaluation Methods among Public Relations Practitioners.
ERIC Educational Resources Information Center
Dozier, David M.
A study explored the relationships between public relations practitioners' organizational roles and the type of evaluation methods they used on the job. Based on factor analysis of role data obtained from an earlier study, four organizational roles were defined and ranked: communication manager, media relations specialist, communication liaison,…
Naive vs. Sophisticated Methods of Forecasting Public Library Circulations.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1984-01-01
Two sophisticated--autoregressive integrated moving average (ARIMA), straight-line regression--and two naive--simple average, monthly average--forecasting techniques were used to forecast monthly circulation totals of 34 public libraries. Comparisons of forecasts and actual totals revealed that ARIMA and monthly average methods had smallest mean…
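The two naive techniques can be sketched in a few lines: the simple average forecasts every future month with the overall mean of past circulation, while the monthly average uses the mean of the same calendar month. The synthetic three-year history below is illustrative, not the study's data:

```python
# Three years of monthly circulation totals (synthetic, illustrative).
history = {month: [1000 + 100 * (month % 3) + year * 10 for year in range(3)]
           for month in range(12)}

# "Simple average": forecast any month with the overall mean.
overall_mean = sum(v for vals in history.values() for v in vals) / 36.0

# "Monthly average": forecast a month with the mean of that calendar month.
def monthly_average(month):
    vals = history[month]
    return sum(vals) / len(vals)

print(round(overall_mean), round(monthly_average(0)))  # 1110 1010
```

When circulation is strongly seasonal, as in this synthetic series, the monthly average tracks the seasonal level that the overall mean smears out, consistent with the study's finding that monthly averages forecast well.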
IMAGING AND COMPUTATIONAL METHODS FOR EXPLORING SUB-CELLULAR ANATOMY
Keyser, John
IMAGING AND COMPUTATIONAL METHODS FOR EXPLORING SUB-CELLULAR ANATOMY. A Dissertation by DAVID. 2009. Major Subject: Computer Science.
Key management of the double random-phase-encoding method using public-key encryption
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2010-03-01
Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image is encrypted using the double random-phase-encoding method with the extended fractional Fourier transform. The key of the encryption process is encoded using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key is then transmitted to the receiver along with the encrypted image. In the decryption process, the encoded key is first decrypted using the secret key, and the encrypted image is then decrypted using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the key is eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
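The key-wrapping step can be illustrated with textbook RSA on a small integer-encoded key parameter. The primes and the key value below are toy demonstration choices, far too small to be secure, and are not taken from the article:

```python
# Textbook RSA with toy parameters (insecure, for illustration only).
p, q = 61, 53
n = p * q                      # modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

key_param = 42                 # stand-in for an integer-encoded key parameter
cipher = pow(key_param, e, n)  # sender encrypts the key with the public key
recovered = pow(cipher, d, n)  # receiver decrypts it with the private key
print(recovered == key_param)  # True
```

Only the wrapped key travels with the encrypted image; the private exponent never leaves the receiver, which is the property the article exploits to avoid a separate secure key channel.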
Survey of the Computer Users of the Upper Arlington Public Library.
ERIC Educational Resources Information Center
Tsardoulias, L. Sevim
The Computer Services Department of the Upper Arlington Public Library in Franklin County, Ohio, provides microcomputers for public use, including IBM compatible and Macintosh computers, a laser printer, and dot-matrix printers. Circulation statistics provide data regarding the frequency and amount of computer use, but these statistics indicate…
Implicit methods for computing chemically reacting flow
NASA Technical Reports Server (NTRS)
Li, C. P.
1986-01-01
The backward Euler scheme was used to solve a large system of inviscid flow and chemical rate equations in three spatial coordinates. The flow equations were integrated simultaneously in time by a conventional ADI factorization technique, then the species equations were solved by either simultaneous or successive techniques. The methods were evaluated in their efficiency and robustness for a hypersonic flow problem involving an aerobrake configuration. It was found that both implicit methods can effectively reduce the stiffness associated with the chemical production term and that the successive solution for the species was as stable as the simultaneous solution. The latter method is more economical because the computation time varies linearly with the number of species.
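The backward Euler scheme named above can be sketched on a scalar stiff test equation; the update y_{k+1} = y_k + h f(y_{k+1}) is solved at each step with Newton's method. The test problem and step size are illustrative, not the paper's aerobrake case:

```python
def backward_euler(f, dfdy, y0, t0, t1, steps):
    """Scalar implicit Euler: solve y_new = y_old + h*f(y_new) by Newton."""
    h = (t1 - t0) / steps
    y = y0
    for _ in range(steps):
        z = y
        for _ in range(50):                    # Newton iteration
            g = z - y - h * f(z)
            if abs(g) < 1e-13:
                break
            z -= g / (1.0 - h * dfdy(z))
        y = z
    return y

# Stiff linear test problem y' = -50 y, y(0) = 1; explicit Euler with the
# same step h = 0.05 would oscillate and diverge (|1 + h*lam| = 1.5 > 1).
lam = -50.0
y_end = backward_euler(lambda y: lam * y, lambda y: lam, 1.0, 0.0, 1.0, 20)
print(y_end)
```

The implicit update damps the fast mode unconditionally, which is the stiffness-reduction property the abstract attributes to both implicit treatments of the chemical production term.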
Review of Computational Stirling Analysis Methods
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.
2004-01-01
Nuclear thermal to electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in its current designs could be better understood. However, they are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of Stirling loss understanding may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra Hi-Fi technique, is presented in detail.
Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...
Evolutionary Computing Methods for Spectral Retrieval
NASA Technical Reports Server (NTRS)
Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna
2009-01-01
A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
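A minimal simulated-annealing loop of the kind the methodology embeds can be sketched in a few lines; the one-dimensional quadratic "fitness" below is an illustrative stand-in for the spectral-mismatch function, not the actual retrieval code:

```python
import math, random

def anneal(fitness, x0, t0=1.0, cooling=0.995, steps=2000, seed=1):
    """Minimal simulated annealing on a 1-D parameter."""
    rng = random.Random(seed)
    x, fx, t = x0, fitness(x0), t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)         # random perturbation
        fc = fitness(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        t *= cooling                           # geometric cooling schedule
    return x, fx

x_best, f_best = anneal(lambda x: (x - 3.0) ** 2, x0=-5.0)
print(x_best)
```

In the retrieval setting, x would be the vector of model parameters and the fitness the dissimilarity between observed and synthetic spectra; the occasional acceptance of worse moves is what lets the search escape local minima.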
A new spectral method to compute FCN
NASA Astrophysics Data System (ADS)
Zhang, M.; Huang, C. L.
2014-12-01
Free core nutation (FCN) is a rotational mode of the earth with a fluid core. All traditional theoretical methods produce an FCN period near 460 days with PREM, while precise observations (VLBI + SG tides) indicate it should be near 430 days. To close this large gap, astronomers and geophysicists have introduced various assumptions, e.g., increasing the core-mantle-boundary (CMB) flattening by about 5%, a strong coupling between nutation and the geomagnetic field near the CMB, viscous coupling, or topographic coupling. Do we really need these unproved assumptions, or does the problem lie with the traditional theoretical methods themselves? Earth models (e.g., PREM) provide accurate and robust profiles of physical parameters, such as density and the Lame parameters, but their radial derivatives, which all traditional methods also use to calculate normal modes (e.g., FCN), nutation, and tides of the non-rigid earth, are not as trustworthy as the parameters themselves. A new multiple-layer spectral method is proposed and applied to the computation of normal modes to avoid these problems. The new method can handle not only a first-order ellipsoid but also irregular, asymmetric 3D earth models. Our preliminary result for the FCN period is 435 sidereal days.
Monte Carlo methods on advanced computer architectures
Martin, W.R. [Univ. of Michigan, Ann Arbor, MI (United States)
1991-12-31
Monte Carlo methods describe a wide class of computational methods that utilize random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or may be used to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo came about during the Manhattan Project to describe the new mathematical methods being developed which had some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods, and will be the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods, and references to other applications of Monte Carlo can be found in this excellent book. For the remainder of this paper, the term Monte Carlo will be synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.
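The definite-integral example mentioned above is the simplest Monte Carlo use case: average the integrand over uniform random samples and scale by the interval length. A minimal sketch:

```python
import random

def mc_integral(f, a, b, n, seed=0):
    """Monte Carlo estimate of the integral of f over [a, b]
    via uniform sampling."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Integral of x^2 over [0, 1] is 1/3; the estimator converges like O(1/sqrt(n)).
est = mc_integral(lambda x: x * x, 0.0, 1.0, 200_000)
print(est)
```

Particle transport Monte Carlo generalizes the same idea: each random sample is a particle history, and tallies over histories estimate the transport quantities of interest.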
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-01
...Publication 500-293, US Government Cloud Computing Technology Roadmap, Release...Publication 500-293, US Government Cloud Computing Technology Roadmap, Release...accelerate their adoption of cloud computing. The roadmap has been...
Algebraic and Logical Methods in Quantum Computation
Neil J. Ross
2015-10-08
This thesis contains contributions to the theory of quantum computation. We first define a new method to efficiently approximate special unitary operators. Specifically, given a special unitary U and a precision ε > 0, we show how to efficiently find a sequence of Clifford+V or Clifford+T operators whose product approximates U up to ε in the operator norm. In the general case, the length of the approximating sequence is asymptotically optimal. If the unitary to approximate is diagonal then our method is optimal: it yields the shortest sequence approximating U up to ε. Next, we introduce a mathematical formalization of a fragment of the Quipper quantum programming language. We define a typed lambda calculus called Proto-Quipper which formalizes a restricted but expressive fragment of Quipper. The type system of Proto-Quipper is based on intuitionistic linear logic and prohibits the duplication of quantum data, in accordance with the no-cloning property of quantum computation. We prove that Proto-Quipper is type-safe in the sense that it enjoys the subject reduction and progress properties.
An efficient method to compute singularity induced bifurcations of
Kwatny, Harry G.
We present an efficient method to compute singular points and singularity-induced bifurcations of differential-algebraic equations (DAEs). The algebraic part of the DAEs brings singularity issues into the dynamic stability assessment of power systems.
Three fast computational approximation methods in hypersonic aerothermodynamics
Riabov, Vladimir V.
Three fast computational approximation methods in hypersonic aerothermodynamics (V.V. Riabov, Rivier) are analyzed to study nonequilibrium hypersonic viscous flows near blunt bodies. Keywords: nonequilibrium hypersonic flows.
Public health surveillance: historical origins, methods and evaluation.
Declich, S.; Carter, A. O.
1994-01-01
In the last three decades, disease surveillance has grown into a complete discipline, quite distinct from epidemiology. This expansion into a separate scientific area within public health has not been accompanied by parallel growth in the literature about its principles and methods. The development of the fundamental concepts of surveillance systems provides a basis on which to build a better understanding of the subject. In addition, the concepts have practical value as they can be used in designing new systems as well as understanding or evaluating currently operating systems. This article reviews the principles of surveillance, beginning with a historical survey of the roots and evolution of surveillance, and discusses the goals of public health surveillance. Methods for data collection, data analysis, interpretation, and dissemination are presented, together with proposed procedures for evaluating and improving a surveillance system. Finally, some points to be considered in establishing a new surveillance system are presented. PMID:8205649
Computational predictive methods for fracture and fatigue
NASA Astrophysics Data System (ADS)
Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.
1994-09-01
The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures insure that damages developed during service remain below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulk heads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.
Computational predictive methods for fracture and fatigue
NASA Technical Reports Server (NTRS)
Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.
1994-01-01
The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures insure that damages developed during service remain below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulk heads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.
Computational electromagnetic methods for transcranial magnetic stimulation
NASA Astrophysics Data System (ADS)
Gomez, Luis J.
Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel high-permittivity and low-frequency stable internally combined volume-surface IE method was developed. The method not only applies to the analysis of high-permittivity objects, but is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed.
Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3.0 times less volume than Figure-8 coils. Uncertainty quantification (UQ): The location/volume/depth of the stimulated region during TMS is often strongly affected by variability in the position and orientation of TMS coils, as well as anatomical differences between patients. A surrogate model-assisted UQ framework was developed and used to statistically characterize TMS depression therapy. The framework identifies key parameters that strongly affect TMS fields, and partially explains variations in TMS treatment responses.
ERIC Educational Resources Information Center
Olson, Christopher
2013-01-01
Advances in technology and course delivery methods have enabled persons with disabilities to enroll in higher education at an increasing rate. Federal regulations state persons with disabilities must be granted equal access to the information contained in computer-based instructional materials, but faculty at the six public universities in South…
Computational methods applied to wind tunnel optimization
NASA Astrophysics Data System (ADS)
Lindsay, David
This report describes computational methods developed for optimizing the nozzle of a three-dimensional subsonic wind tunnel. This requires determination of a shape that delivers flow to the test section, typically with a speed increase of 7 or more and a velocity uniformity of .25% or better, in a compact length without introducing boundary layer separation. The need for high precision, smooth solutions, and three-dimensional modeling required the development of special computational techniques. These include: (1) alternative formulations to Neumann and Dirichlet boundary conditions, to deal with overspecified, ill-posed, or cyclic problems, and to reduce the discrepancy between numerical solutions and boundary conditions; (2) modification of the Finite Element Method to obtain solutions with numerically exact conservation properties; (3) a Matlab implementation of general degree Finite Element solvers for various element designs in two and three dimensions, exploiting vector indexing to obtain optimal efficiency; (4) derivation of optimal quadrature formulas for integration over simplexes in two and three dimensions, and development of a program for semi-automated generation of formulas for any degree and dimension; (5) a modification of a two-dimensional boundary layer formulation to provide accurate flow conservation in three dimensions, and modification of the algorithm to improve stability; (6) development of multi-dimensional spline functions to achieve smoother solutions in three dimensions by post-processing, new three-dimensional elements for C1 basis functions, and a program to assist in the design of elements with higher continuity; and (7) a development of ellipsoidal harmonics and Lame's equation, with generalization to any dimension and a demonstration that Cartesian, cylindrical, spherical, spheroidal, and sphero-conical harmonics are all limiting cases. 
The report includes a description of the Finite Difference, Finite Volume, and domain remapping methods, coordinate transformation theorems and techniques including the Method of Jacobians, and a derivation of the fluid flow fundamentals required for the model. It applies the methods to study the effect of cross-section and fillet variation, and to obtain a sample design of a high-uniformity nozzle.
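Item (4) above concerns quadrature formulas over simplexes. As a minimal illustration, here is the classical degree-2 edge-midpoint rule on the reference triangle (a well-known textbook rule, not one of the report's derived formulas):

```python
# Hedged sketch: the classical degree-2 quadrature rule on a reference
# triangle (edge-midpoint rule), of the kind derived for simplexes;
# not the report's own optimal formulas.

def quad_triangle_deg2(f):
    """Integrate f(x, y) over the reference triangle (0,0),(1,0),(0,1).

    Exact for polynomials of total degree <= 2: equal weights at the
    three edge midpoints, scaled by the triangle area (1/2).
    """
    midpoints = [(0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]
    area = 0.5
    return area * sum(f(x, y) for x, y in midpoints) / 3.0

# Example: the integral of x**2 over the reference triangle is exactly 1/12.
print(quad_triangle_deg2(lambda x, y: x * x))  # ~0.0833 (exact value 1/12)
```

Higher-degree and three-dimensional (tetrahedral) rules follow the same pattern: fixed nodes and weights chosen so that the rule is exact up to a target polynomial degree.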
Checklist and Pollard Walk butterfly survey methods on public lands
Royer, R.A.; Austin, J.E.; Newton, W.E.
1998-01-01
Checklist and Pollard Walk butterfly survey methods were contemporaneously applied to seven public sites in North Dakota during the summer of 1995. Results were compared for effect of method and site on total number of butterflies and total number of species detected per hour. Checklist searching produced significantly more butterfly detections per hour than Pollard Walks at all sites. Number of species detected per hour did not differ significantly either among sites or between methods. Many species were detected by only one method, and at most sites generalist and invader species were more likely to be observed during checklist searches than during Pollard Walks. Results indicate that checklist surveys are a more efficient means for initial determination of a species list for a site, whereas for long-term monitoring the Pollard Walk is more practical and statistically manageable. Pollard Walk transects are thus recommended once a prairie butterfly fauna has been defined for a site by checklist surveys.
Modules and methods for all photonic computing
Schultz, David R. (Knoxville, TN); Ma, Chao Hung (Oak Ridge, TN)
2001-01-01
A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.
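A rough numerical analog of the recirculation step, with ordinary software standing in for the optical elements: the "second element" is modeled here as one Jacobi relaxation sweep for Laplace's equation, applied a predetermined number of iterations to a 2-D encoded input. Everything in this sketch is an illustrative stand-in, not the patented optical hardware:

```python
# Hedged software analog of the patent's iterative recirculation:
# each pass applies one Jacobi sweep (an iterative algorithm for a
# PDE) to the 2-D input; recirculation = repeating the pass a fixed
# number of times.
import numpy as np

def recirculate(u, n_iter):
    """Apply n_iter Jacobi sweeps to the interior points of grid u."""
    u = u.copy()
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

# Example "input data": boundary held at 1 on one edge, 0 elsewhere.
grid = np.zeros((16, 16))
grid[0, :] = 1.0
out = recirculate(grid, 200)  # collected "output data" after N iterations
```

After enough iterations the grid approaches the solution of the boundary value problem, which is the role the recirculated optical signal plays in the claimed method.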
Judging the Impact of Conference and Journal Publications in High Performance Computing
Zhou, Yuanyuan
Conference publication is a more intimate way of conveying one's ideas than journal publishing, and is seen to be more effective. For experimentalists, conference publication is preferred to journal publication, and the premier conferences…
Judging the Impact of Conference and Journal Publications in Computer Architecture
Zhou, Yuanyuan
Conference publication is a more intimate way of conveying one's ideas than journal publishing, and is seen to be more effective. For experimentalists, conference publication is preferred to journal publication, and the premier conferences…
Introduction to the Theory of Computation Public Key Cryptography and RSA
Gallier, Jean
CIS511, Introduction to the Theory of Computation: Public Key Cryptography and RSA. Jean Gallier, April 30, 2010. Chapter 1, Public Key Cryptography; The RSA System. 1.1 The RSA System: Ever since … a message to another party, J, say Julia. …
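As a concrete companion to the chapter's topic, here is a toy RSA round-trip with tiny textbook primes (p = 61, q = 53 are illustrative choices, not taken from the notes; real RSA requires large random primes and padding). Requires Python 3.8+ for the modular-inverse form of `pow`:

```python
# Toy RSA walkthrough with tiny primes; illustrative only.
p, q = 61, 53
n = p * q                    # modulus: 3233
phi = (p - 1) * (q - 1)      # Euler totient: 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi)

m = 65                       # message, 0 <= m < n
c = pow(m, e, n)             # encrypt: c = m^e mod n
assert pow(c, d, n) == m     # decrypt: m = c^d mod n recovers the message
```

The public key is (n, e) and the private key is d; security rests on the difficulty of factoring n, as the chapter develops.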
A nonlinear substructuring method for concurrent processing computers
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Bergan, P.
1986-01-01
This paper proposes a method, based on substructuring, to solve nonlinear structural analysis problems on multiprocessor computers. Background information is given on the use of substructuring in large-scale finite element programs and computational time distributions for the major components for an example nonlinear finite element analysis are discussed. Implementation of the substructuring method on a typical multiprocessor computer is described and estimates are made of expected reductions in computation times based on nonlinear substructuring results obtained on a single processor computer.
Toronto, University of
Computer Methods in Biomechanics and Biomedical Engineering, Vol. 00, No. 00, May 2014, 1-15 (research article; March 21, 2014).
Public participation GIS: a method for identifying ecosystems services
Brown, Greg; Montag, Jessica; Lyon, Katie
2012-01-01
This study evaluated the use of an Internet-based public participation geographic information system (PPGIS) to identify ecosystem services in Grand County, Colorado. Specific research objectives were to examine the distribution of ecosystem services, identify the characteristics of participants in the study, explore potential relationships between ecosystem services and land use and land cover (LULC) classifications, and assess the methodological strengths and weaknesses of the PPGIS approach for identifying ecosystem services. Key findings include: (1) Cultural ecosystem service opportunities were easiest to identify, while supporting and regulatory services were most challenging, (2) participants were highly educated, knowledgeable about nature and science, and had a strong connection to the outdoors, (3) some LULC classifications were logically and spatially associated with ecosystem services, and (4) despite limitations, the PPGIS method demonstrates potential for identifying ecosystem services to augment expert judgment and to inform public or environmental policy decisions regarding land use trade-offs.
Saving lives: a computer simulation game for public education about emergencies
Morentz, J.W.
1985-01-01
One facet of the Information Revolution in which the nation finds itself involves the utilization of computers, video systems, and a variety of telecommunications capabilities by those who must cope with emergency situations. Such technologies possess a significant potential for performing emergency public education and transmitting key information that is essential for survival. An ''Emergency Public Information Competitive Challenge Grant,'' under the aegis of the Federal Emergency Management Agency (FEMA), has sponsored an effort to use computer technology - both large, time-sharing systems and small personal computers - to develop computer games which will help teach techniques of emergency management to the public at large. 24 references.
A Divide and Conquer Method to Compute Binomial Ideals
Mehta, Shashank K
Deepanjan Kesh and Shashank K. Mehta. In this paper, we give a divide-and-conquer strategy to compute binomial ideals. … work to compute binomial ideals spends a significant amount of time computing Gröbner bases, and that Gr…
Scientific Methods in Computer Science Gordana Dodig-Crnkovic
Cunningham, Conrad
Gordana Dodig-Crnkovic, Department of Computer Science. This paper analyzes scientific aspects of Computer Science. First it defines science and scientific method in general…
Computational Evaluation of the Traceback Method
ERIC Educational Resources Information Center
Kol, Sheli; Nir, Bracha; Wintner, Shuly
2014-01-01
Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…
Computational structural mechanics methods research using an evolving framework
NASA Technical Reports Server (NTRS)
Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.
1990-01-01
Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.
ERIC Educational Resources Information Center
Bengston, David N.; Fan, David P.
1999-01-01
Presents an innovative methodology for evaluating strategic goals in a public agency. The method involves computer content analysis of online news media text to evaluate expressed attitudes. Provides a way to assess the views of a wide range of stakeholders quickly and efficiently. (SLD)
Diploma Thesis in Computer Science Overview of Authentication Methods
Borchert, Bernd
Diploma Thesis in Computer Science: Overview of Authentication Methods with an Unsecure Client Computer.
Alternative methods for computing sound radiation from vibrating surfaces
NASA Technical Reports Server (NTRS)
Bernhard, R. J.; Gardner, B. K.; Smith, D. C.
1987-01-01
The merits of various numerical and experimental methods for computing sound fields radiated from vibrating structures are examined. The finite difference method, the finite element method, the direct boundary element method, the indirect boundary element method, near-field acoustic holography, two-microphone methods, and spatial transformation of sound fields are considered. The proper utilization of the methods is discussed.
Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing
International Association for Cryptologic Research (IACR)
Qian Wang et al. Cloud Computing has been envisioned as the next-generation architecture … the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third…
Atomistic Method Applied to Computational Modeling of Surface Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo H.; Abel, Phillip B.
2000-01-01
The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces, J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally.
The computational tool, which is based on the BFS (Bozzolo, Ferrante, and Smith) method for the calculation of the energetics, consists of a small number of simple PC-based computer codes that deal with the different aspects of surface alloy formation. Two analysis modes are available within this package. The first mode provides an atom-by-atom description of real and virtual stages during the process of surface alloying, based on the construction of catalogues of configurations where each configuration describes one possible atomic distribution. BFS analysis of this catalogue provides information on accessible states, possible ordering patterns, and details of island formation or film growth. More importantly, it provides insight into the evolution of the system. Software developed by the Computational Materials Group allows for the study of an arbitrary number of elements forming surface alloys, including an arbitrary number of surface atomic layers. The second mode involves large-scale temperature-dependent computer simulations that use the BFS method for the energetics and provide information on the dynamic processes during surface alloying. These simulations require the implementation of Monte-Carlo-based codes with high efficiency within current workstation environments. This methodology capitalizes on the advantages of the BFS method: there are no restrictions on the number or type of elements or on the type of crystallographic structure considered. This removes any restrictions in the definition of the configuration catalogues used in the analytical calculations, thus allowing for the study of arbitrary ordering patterns, ultimately leading to the actual surface alloy structure. Moreover, the Monte Carlo numerical technique used for the large-scale simulations allows for a detailed visualization of the simulated process, the main advantage of this type of analysis being the ability to understand the underlying features that drive these processes.
Because of the simplicity of the BFS method for the energetics used in these calculations, a detailed atom-by-atom analysis can be performed at any…
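The second, simulation-based mode can be sketched as a generic Metropolis Monte Carlo loop over atom-swap moves on a small two-component surface lattice. The toy nearest-neighbor pair energy below is a hypothetical stand-in for the BFS energetics, chosen only to make the sketch self-contained:

```python
# Hedged sketch of a temperature-dependent Monte Carlo simulation:
# Metropolis acceptance over atom-swap moves. The pair energy is a toy
# model (unlike neighbors favorable), NOT the BFS method.
import math
import random

def pair_energy(a, b):
    # toy interaction: unlike neighbors are favorable (ordering tendency)
    return -1.0 if a != b else 0.0

def total_energy(lat):
    n = len(lat)
    E = 0.0
    for i in range(n):
        for j in range(n):
            E += pair_energy(lat[i][j], lat[i][(j + 1) % n])  # right neighbor
            E += pair_energy(lat[i][j], lat[(i + 1) % n][j])  # down neighbor
    return E

def metropolis(lat, kT, steps, rng):
    n = len(lat)
    for _ in range(steps):
        i1, j1 = rng.randrange(n), rng.randrange(n)
        i2, j2 = rng.randrange(n), rng.randrange(n)
        E0 = total_energy(lat)
        lat[i1][j1], lat[i2][j2] = lat[i2][j2], lat[i1][j1]   # trial swap
        dE = total_energy(lat) - E0
        if dE > 0 and rng.random() >= math.exp(-dE / kT):
            lat[i1][j1], lat[i2][j2] = lat[i2][j2], lat[i1][j1]  # reject
    return lat

rng = random.Random(0)
n = 8
lat = [[rng.choice("AB") for _ in range(n)] for _ in range(n)]
lat = metropolis(lat, kT=0.3, steps=2000, rng=rng)
```

At low temperature the swaps drive the lattice toward an ordered arrangement; with a realistic energy model (as in the BFS codes) the same loop reveals the dynamic ordering processes the abstract describes.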
ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING
The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...
Improved Computational Methods for Ray Tracing
Hank Weghorst; Gary Hooper; Donald P. Greenberg
1984-01-01
This paper describes algorithmic procedures that have been implemented to reduce the computational expense of producing ray-traced images. The selection of bounding volumes is examined to reduce the computational cost of the ray-intersection test. The use of object coherence, which relies on a hierarchical description of the environment, is then presented. Finally, since the building of the ray-intersection trees…
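The bounding-volume idea can be sketched with the standard slab test against an axis-aligned box: if a ray misses the box, the renderer can skip the costlier exact ray-object intersection. This is a generic formulation, not necessarily the volumes or hierarchy used in the paper:

```python
# Hedged sketch of a bounding-volume pre-test: the slab method for
# ray vs. axis-aligned bounding box (AABB) intersection.
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab method: True if the ray (t >= 0) enters the box."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:        # ray parallel to and outside the slab
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:              # slab intervals do not overlap
            return False
    return True

print(ray_hits_aabb((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # True
print(ray_hits_aabb((0, 5, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # False
```

Organizing such boxes into a hierarchy (object coherence) lets one test prune whole subtrees of objects at once.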
Formal Aspects of Computing Applicable Formal Methods
Calder, Muffy
Modelling IEEE 802.11 CSMA/CA RTS/CTS with stochastic bigraphs with sharing. Muffy Calder and Michele Sevegnani, School of Computing Science, University of Glasgow. DOI: 10.1007/s00165-012-0270-3.
Computational methods in sequence and structure prediction
NASA Astrophysics Data System (ADS)
Lang, Caiyi
This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our result shows there exists a putative cis-regulatory element "AC(C/G)TAC(C)" in the upstream regions of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes, and this element might also be the binding site for the MYB class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1 deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning the occurrence of several groups of known abscisic acid (ABA)-related cis-regulatory elements in the upstream regions of 876 Arabidopsis genes; and (b) exhaustive scanning of all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, called "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to its occurrence within the genome. (c) We will introduce a new general purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework.
With this framework we have developed a software package which is capable of designing novel protein structures at the atomic resolution. This software package allows us to perform protein structure design with a flexible backbone. The backbone flexibility includes loop region relaxation as well as a secondary structure collective mode relaxation scheme. (Abstract shortened by UMI.)
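The motif-scanning step in (b) can be sketched as overlapping regex counting against a background set. The sequences below are invented for illustration, and the motif is the abstract's AC(C/G)TAC element without its optional trailing C:

```python
# Hedged sketch of motif-occurrence scanning: count overlapping matches
# of a candidate cis-element in "upstream" sequences and compare to a
# background set. All sequences here are made up for illustration.
import re

def count_motif(seqs, pattern):
    """Total (overlapping) matches of a regex motif across sequences."""
    rx = re.compile("(?=(" + pattern + "))")  # lookahead allows overlaps
    return sum(len(rx.findall(s)) for s in seqs)

# "AC(C/G)TAC" from the abstract, written as a regex character class:
motif = "AC[CG]TAC"
coexpressed = ["TTACCTACGG", "ACGTACACGTAC", "GGGGACCTAC"]
background  = ["TTTTTTTTTT", "ACGTGGGGGG", "CCCCCCCCCC"]

enrichment = count_motif(coexpressed, motif) / max(1, count_motif(background, motif))
```

In the dissertation's actual analysis the background is genome-wide upstream sequence and the enrichment is assessed statistically, but the core operation is this kind of exhaustive occurrence count.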
A Classification of Recent Australasian Computing Education Publications
ERIC Educational Resources Information Center
Computer Science Education, 2007
2007-01-01
A new classification system for computing education papers is presented and applied to every computing education paper published between January 2004 and January 2007 at the two premier computing education conferences in Australia and New Zealand. We find that while simple reports outnumber other types of paper, a healthy proportion of papers…
Väntänen, Ari, E-mail: armiva@utu.fi; Marttunen, Mika, E-mail: Mika.Marttunen@ymparisto.fi
2005-04-15
Public involvement is a process that involves the public in the decision making of an organization, for example a municipality or a corporation. It has developed into a widely accepted and recommended policy in environment altering projects. The EU Water Framework Directive (WFD) took force in 2000 and stresses the importance of public involvement in composing river basin management plans. Therefore, the need to develop public involvement methods for different situations and circumstances is evident. This paper describes how various public involvement methods have been applied in a development project involving the most heavily regulated lake in Finland. The objective of the project was to assess the positive and negative impacts of regulation and to find possibilities for alleviating the adverse impacts on recreational use and the aquatic ecosystem. An exceptional effort was made towards public involvement, which was closely connected to planning and decision making. The applied methods were (1) steering group work, (2) survey, (3) dialogue, (4) theme interviews, (5) public meeting and (6) workshops. The information gathered using these methods was utilized in different stages of the project, e.g., in identifying the regulation impacts, comparing alternatives and compiling the recommendations for regulation development. After describing our case and the results from the applied public involvement methods, we will discuss our experiences and the feedback from the public. We will also critically evaluate our own success in coping with public involvement challenges. In addition to that, we present general recommendations for dealing with these problematic issues based on our experiences, which provide new insights for applying various public involvement methods in multi-objective decision making projects.
Chalmers Publication Library Evaluation of Link Adaptation Methods in Multi-User OFDM Systems with
This document has been downloaded from Chalmers Publication Library (CPL), which offers the possibility of retrieving research publications produced at Chalmers…
Computing Observables
Heermann, Dieter W.
Computer simulation methods are by now an established tool in many branches of science. The motivations for computer simulations of physical systems are manifold. … With a computer simulation we have the ability to study systems not yet tractable with analytical…
Novel Methods for Communicating Plasma Science to the General Public
NASA Astrophysics Data System (ADS)
Zwicker, Andrew; Merali, Aliya; Wissel, S. A.; Delooper, John
2012-10-01
The broader implications of Plasma Science remain an elusive topic that the general public rarely discusses, regardless of their relevance to energy, the environment, and technology. Recently, we have looked beyond print media for methods to reach large numbers of people in creative and informative ways. These have included video, art, images, and music. For example, our submission to the ``What is a Flame?'' contest was ranked in the top 15 out of 800 submissions. Images of plasmas have won 3 out of 5 of the Princeton University ``Art of Science'' competitions. We use a plasma speaker to teach students of all ages about sound generation and plasma physics. We report on the details of each of these and on future videos and animations under development.
Natural Computing Methods in Bioinformatics: A Survey
Masulli, Francesco
… optimization techniques, such as Evolutionary Computation based on simulation of biological evolution [9,10], Swarm Intelligence based on simulation of the social behavior of animals [11], and Immunocomputing inspired by …; other biological information like hydrophobicity, evolutionary information, and solvent accessibility…
SAR/QSAR methods in public health practice
Demchuk, Eugene, E-mail: edemchuk@cdc.gov; Ruiz, Patricia; Chou, Selene; Fowler, Bruce A.
2011-07-15
Methods of (Quantitative) Structure-Activity Relationship ((Q)SAR) modeling play an important and active role in ATSDR programs in support of the Agency mission to protect human populations from exposure to environmental contaminants. They are used for cross-chemical extrapolation to complement the traditional toxicological approach when chemical-specific information is unavailable. SAR and QSAR methods are used to investigate adverse health effects and exposure levels, bioavailability, and pharmacokinetic properties of hazardous chemical compounds. They are applied as a part of an integrated systematic approach in the development of Health Guidance Values (HGVs), such as ATSDR Minimal Risk Levels, which are used to protect populations exposed to toxic chemicals at hazardous waste sites. (Q)SAR analyses are incorporated into ATSDR documents (such as the toxicological profiles and chemical-specific health consultations) to support environmental health assessments, prioritization of environmental chemical hazards, and to improve study design, when filling the priority data needs (PDNs) as mandated by Congress, in instances when experimental information is insufficient. These cases are illustrated by several examples, which explain how ATSDR applies (Q)SAR methods in public health practice.
Computer-Aided Dispatch System as a Decision Making Tool in Public and Private Sectors
Lee, I-Jen
We describe in detail seven distinct areas in both public and private sectors in which a real-time computer-aided dispatch system is applicable to the allocation of scarce resources. Characteristics of a real-time ...
Reliability-Driven Reputation Based Scheduling for Public-Resource Computing Using GA. Xiaofeng Wang.
Melbourne, University of
… environments, providing reliable scheduling based on resource reliability evaluation is becoming increasingly important. Most existing reputation models used for reliability evaluation ignore the time influence…
DEVELOPING METHODS FOR COMPUTER PROGRAMMING BY MUSICAL PERFORMANCE AND COMPOSITION
Miranda, Eduardo Reck
Alexis Kirke. … successful work in sonifying computer program code to help debugging. This paper investigates the reverse process, allowing music to be used to write computer programs. Such an approach would be less language…
Revised CPA method to compute Lyapunov functions for nonlinear systems
Hafstein, Sigurður Freyr
Peter A. Giesl and Sigurdur F. Hafstein; School of Science and Engineering, Reykjavik University, Menntavegur 1, IS-101 Reykjavik, Iceland. Abstract: The CPA … in computing a CPA Lyapunov function for such a system. The size of the domain of the computed CPA Lyapunov…
Computational Anatomy, Object Matching, and the Level Set Method
Ferguson, Thomas S.
Wei-Hsun Liao and Luminita Vese. … matching in computational anatomy. We present a new framework for warping pairs of overlapping and non… … and the infinite dimensional group actions is discussed. 1. Introduction: Computational anatomy [1, 2] is an emerging…
Fiscal federalism and local public finance: A computable general equilibrium (CGE) framework
Thomas Nechyba
1996-01-01
This paper attempts to make an argument for the feasibility and usefulness of a computable general equilibrium approach to studying fiscal federalism and local public finance. It begins by presenting a general model of fiscal federalism that has at its base a local public goods model with (1) multiple types of mobile agents who are endowed with preferences, private good…
The Development of an Online Course to Teach Public Administrators Computer Utilization
Janet Gubbins; Melanie Clay; Jerry Perkins
Although there is a growing requirement that public administrators have technology skills, within the Master of Public Administration programs at most universities there are few accommodations for technology training that are both field-specific and meet the demands of non-traditional graduate students. Oftentimes the computer courses that are offered are designed to address the needs of students pursuing careers…
The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers
ERIC Educational Resources Information Center
DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert
2013-01-01
Public libraries play an important part in the development of a community. Today, they are seen as more than store houses of books; they are also responsible for the dissemination of online, and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for internet access. Using a…
Computational methods for aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Peeters, M. F.
1983-01-01
Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.
17 CFR 43.3 - Method and timing for real-time public reporting.
Code of Federal Regulations, 2012 CFR
2012-04-01
17 CFR § 43.3 (2012): Method and timing for real-time public reporting. Commodity Futures Trading Commission, Real-Time Public Reporting. (a) …
17 CFR 43.3 - Method and timing for real-time public reporting.
Code of Federal Regulations, 2013 CFR
2013-04-01
17 CFR § 43.3 (2013): Method and timing for real-time public reporting. Commodity Futures Trading Commission, Real-Time Public Reporting. (a) …
12 CFR 227.25 - Unfair balance computation method.
Code of Federal Regulations, 2010 CFR
2010-01-01
12 CFR § 227.25 (2010): Unfair balance computation method. Credit Card Account Practices Rule. (a) General rule: … a bank must not impose finance charges on balances on a consumer credit card account…
Domain identification in impedance computed tomography by spline collocation method
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1990-01-01
A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
Advanced Computing Initiative To Study Methods of Improving Fusion
July 10, 2014. The initiative builds on US leadership in fusion simulations, in partnership with DOE Advanced Scientific Computing. Science motivation: simulations to understand how these methods scale and to quantitatively predict performance improvements…
A Numerical Method for Computing an SVD-like Decomposition
Xu, Hongguo
2005-09-05
We present a numerical method for computing the SVD-like decomposition B = QDS^(-1), where Q is orthogonal, S is symplectic, and D is a permuted diagonal matrix. The method can be applied directly to compute the canonical form of the Hamiltonian…
Classical versus Computer Algebra Methods in Elementary Geometry
ERIC Educational Resources Information Center
Pech, Pavel
2005-01-01
Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non-elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…
Computing π(x): An Analytic Method. J. C. Lagarias
Bernstein, Daniel
Computations were carried out by Mapes [14] and Bohman [2], who computed several values up to and including … . The Meissel-Lehmer method requires O(x^(2/3+ε)) time and O(x^(1/3+ε)) space to compute π(x); see [10]. This paper describes a method for computing π(x), based on entirely different ideas, which requires O(x^(3/5+ε)) time and O(x^ε) space for any ε > 0.
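For comparison with these asymptotics, a plain sieve of Eratosthenes computes π(x) in roughly O(x log log x) time and O(x) space, which is far worse asymptotically than either Meissel-Lehmer or an analytic method, but enough to check small values (a baseline sketch, not the paper's algorithm):

```python
# Baseline sketch: sieve of Eratosthenes for pi(x), the prime-counting
# function; not the analytic method of the paper.
def prime_pi(x):
    if x < 2:
        return 0
    sieve = bytearray([1]) * (x + 1)       # sieve[k] == 1 means "k is prime" so far
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))  # cross out multiples
    return sum(sieve)

print(prime_pi(100))   # 25
print(prime_pi(1000))  # 168
```

The specialized methods exist precisely because this O(x) memory footprint and near-linear time become prohibitive long before the values of x the paper targets.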
Computational Methods for Analyzing Health News Coverage
ERIC Educational Resources Information Center
McFarlane, Delano J.
2011-01-01
Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…
Method of performing computational aeroelastic analyses
NASA Technical Reports Server (NTRS)
Silva, Walter A. (Inventor)
2011-01-01
Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.
Integral Deferred Correction methods for scientific computing
NASA Astrophysics Data System (ADS)
Morton, Maureen Marilla
Since high order numerical methods frequently can attain accurate solutions more efficiently than low order methods, we develop and analyze new high order numerical integrators for the time discretization of ordinary and partial differential equations. Our novel methods address some of the issues surrounding high order numerical time integration, such as the difficulty of constructing many popular methods and the handling of the disparate behaviors produced by different terms in the equations to be solved. We are motivated by the simplicity of how Deferred Correction (DC) methods achieve high order accuracy [72, 27]. DC methods are numerical time integrators that, rather than calculating tedious coefficients for order conditions, construct high order accurate solutions by iteratively improving a low order preliminary numerical solution. With each iteration, an error equation is solved, the error decreases, and the order of accuracy increases. Later, DC methods were adjusted to include an integral formulation of the residual, which stabilizes the method. These Spectral Deferred Correction (SDC) methods [25] motivated Integral Deferred Correction (IDC) methods. Typically, SDC methods are limited to increasing the order of accuracy by one with each iteration due to smoothness properties imposed by the grid spacing. However, under mild assumptions, explicit IDC methods allow for any explicit rth order Runge-Kutta (RK) method to be used within each iteration, and then an order of accuracy increase of r is attained after each iteration [18]. We extend these results to the construction of implicit IDC methods that use implicit RK methods, and we prove analogous results for order of convergence. One means of solving equations with disparate parts is by semi-implicit integrators, handling a "fast" part implicitly and a "slow" part explicitly.
We incorporate additive RK (ARK) integrators into the iterations of IDC methods in order to construct new arbitrary order semi-implicit methods, which we denote IDC-ARK methods. Under mild assumptions, we rigorously establish the order of accuracy, finding that using any rth order ARK method within each iteration gives an order of accuracy increase of r after each iteration [15]. We apply IDC-ARK methods to several numerical examples and present preliminary results for adaptive timestepping with IDC-ARK methods. Another means of solving equations with disparate parts is by operator splitting methods. We construct high order splitting methods by employing low order splitting methods within each IDC iteration. We analyze the efficiency of our split IDC methods as compared to high order split methods in [77] and also note that our construction is less tedious. Conservation of mass is proved for split IDC methods with semi-Lagrangian WENO reconstruction applied to the Vlasov-Poisson system. We include numerical results for the application of split IDC methods to constant advection, rotating, and classic plasma physics problems. This is a preliminary, yet significant, step in the development of simple, high order numerical integrators that are designed for solving differential equations that display disparate behaviors. Our results could extend naturally to an asymptotic preserving setting or to other operator splittings.
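The core idea, iteratively improving a low-order provisional solution until a quadrature-limited accuracy is reached, can be illustrated with a toy Picard-style iteration on the integral form of y' = y. This is an illustration of the correction principle only, not the IDC-ARK or split IDC constructions; the grid size and sweep count are illustrative choices.

```python
import numpy as np

# Solve y' = y, y(0) = 1 on [0, 1] via iteration on the integral form
# y(t) = 1 + \int_0^t y(s) ds, with trapezoidal quadrature on a grid.
# Each sweep improves the previous iterate, mimicking how deferred
# correction refines a low-order provisional solution; the converged
# answer is limited only by the O(h^2) quadrature, not the crude guess.
n = 200
t = np.linspace(0.0, 1.0, n + 1)
h = t[1] - t[0]

y = np.ones(n + 1)              # crude initial guess: y ≡ y(0)
for sweep in range(25):         # each sweep tightens the approximation
    increments = 0.5 * h * (y[1:] + y[:-1])          # trapezoid panels
    y = 1.0 + np.concatenate(([0.0], np.cumsum(increments)))

err = abs(y[-1] - np.e)
print(f"error at t=1: {err:.2e}")   # limited by the trapezoid rule, ~O(h^2)
```

With the crude constant guess the first sweep alone is only first-order accurate; repeated sweeps push the iterate to the fixed point of the quadrature, which here matches the implicit trapezoidal rule.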
Strengthening Computer Technology Programs. Special Publication Series No. 49.
ERIC Educational Resources Information Center
McKinney, Floyd L., Comp.
Three papers present examples of strategies used by developing institutions and historically black colleges to strengthen computer technology programs. "Promoting Industry Support in Developing a Computer Technology Program" (Albert D. Robinson) describes how the Washtenaw Community College (Ann Arbor, Michigan) Electrical/Electronics Department…
Universal Tailored Access: Automating Setup of Public and Classroom Computers.
ERIC Educational Resources Information Center
Whittaker, Stephen G.; Young, Ted; Toth-Cohen, Susan
2002-01-01
This article describes a setup smart access card that enables users with visual impairments to customize magnifiers and screen readers on computers by loading the floppy disk into the computer and finding and pressing two successive keys. A trial with four elderly users found instruction took about 15 minutes. (Contains 3 references.) (CR)
Consensus methods: review of original methods and their main alternatives used in public health
Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid
2008-01-01
Summary Background Consensus-based studies are increasingly used as decision-making methods, for they have lower production cost than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles of the four main methods (Delphi, nominal group, consensus development conference and RAND/UCLA), their use as it appears in peer-reviewed publications, and validation studies published in the healthcare literature. Methods A bibliographic search was performed in Pubmed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal and Francis. Keywords, headings and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri, and used in the literature search. A search with the same terms and expressions was performed on the Internet using Google Scholar. Results All methods, precisely described in the literature, are based on common basic principles such as definition of subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described. Few validation studies have been implemented. Failing to implement these basic principles and failing to describe the methods used to reach the consensus both frequently raised suspicion regarding the validity of consensus methods. Conclusion When it is applied to a new domain with important consequences in terms of decision making, a consensus method should first be validated. PMID:19013039
Code of Federal Regulations, 2014 CFR
2014-10-01
...interconnecting private and public systems of communications. 90.483 Section...Transmitter Control Interconnected Systems § 90.483 Permissible methods...interconnecting private and public systems of communications....
Code of Federal Regulations, 2011 CFR
2011-10-01
...interconnecting private and public systems of communications. 90.483 Section...Transmitter Control Interconnected Systems § 90.483 Permissible methods...interconnecting private and public systems of communications....
Code of Federal Regulations, 2010 CFR
2010-10-01
...interconnecting private and public systems of communications. 90.483 Section...Transmitter Control Interconnected Systems § 90.483 Permissible methods...interconnecting private and public systems of communications....
Code of Federal Regulations, 2012 CFR
2012-10-01
...interconnecting private and public systems of communications. 90.483 Section...Transmitter Control Interconnected Systems § 90.483 Permissible methods...interconnecting private and public systems of communications....
Code of Federal Regulations, 2013 CFR
2013-10-01
...interconnecting private and public systems of communications. 90.483 Section...Transmitter Control Interconnected Systems § 90.483 Permissible methods...interconnecting private and public systems of communications....
Parallel computation with the spectral element method
Ma, Hong
1995-12-01
Spectral element models for the shallow water equations and the Navier-Stokes equations have been successfully implemented on a data parallel supercomputer, the Connection Machine model CM-5. The nonstaggered grid formulations for both models are described, and they are shown to be especially efficient in a data-parallel computing environment.
NONPARAMETRIC ROBUST METHODS FOR COMPUTER VISION
Comaniciu, D.
Written under the direction of Professor Peter Meer and approved. New Brunswick, New Jersey, January. Dissertation Director: Professor Peter Meer. Low-level computer vision tasks are misleadingly … distance-based approaches.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-04
...Public Meeting--Intersection of Cloud Computing and Mobility Forum and Workshop...INFORMATION: NIST hosted six prior Cloud Computing Forum & Workshop events in May...Government's experience with cloud computing, report on the status of...
36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?
Code of Federal Regulations, 2014 CFR
2014-07-01
...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...
36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?
Code of Federal Regulations, 2012 CFR
2012-07-01
...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...
36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?
Code of Federal Regulations, 2013 CFR
2013-07-01
...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...
Computational methods for internal flows with emphasis on turbomachinery
NASA Technical Reports Server (NTRS)
Mcnally, W. D.; Sockol, P. M.
1981-01-01
Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. Viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.
COMSAC: Computational Methods for Stability and Control. Part 1
NASA Technical Reports Server (NTRS)
Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)
2004-01-01
Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: A NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is It Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?
Assessment of gene order computing methods for Alzheimer's disease
2013-01-01
Background Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and the squared Euclidean distance) and other conditions, gene orders were calculated by ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data the best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with both GA and ACO methods for AD microarray data. Conclusion Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by GA and ACO methods. PMID:23369541
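The three distance formulas compared in the study are simple to compute; a minimal sketch with made-up expression vectors (the values are illustrative, not from the AD microarray data):

```python
import numpy as np

# Two hypothetical gene-expression profiles (illustrative values only).
g1 = np.array([1.0, 2.0, 3.0, 4.0])
g2 = np.array([1.5, 1.0, 3.5, 5.0])

# Pearson distance: 1 minus the correlation coefficient, so perfectly
# correlated profiles are at distance 0 and anticorrelated ones at 2.
pearson_dist = 1.0 - np.corrcoef(g1, g2)[0, 1]

# Euclidean distance and its square (the formula the study found best).
euclid_dist = np.linalg.norm(g1 - g2)
sq_euclid_dist = euclid_dist ** 2

print(f"Pearson distance:           {pearson_dist:.4f}")
print(f"Euclidean distance:         {euclid_dist:.4f}")
print(f"squared Euclidean distance: {sq_euclid_dist:.4f}")
```

Note the practical difference: Pearson distance compares the shape of two profiles regardless of scale, while the (squared) Euclidean distance is sensitive to absolute expression levels.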
Under consideration for publication in Formal Aspects of Computing Beyond Contracts for Concurrency
Ostroff, Jonathan S.
Eiffel can be extended to the concurrent case. However, some safety and liveness properties depend upon the Hoare rules, applied where applicable to reduce the number of steps in a computation. Keywords: SCOOP (Simple
Under consideration for publication in Formal Aspects of Computing The RISC ProofNavigator
Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Linz, Austria. http://www.risc.uni-linz.ac.at Abstract: This paper gives an overview of the RISC ProofNavigator, an interactive proving assistant
Small Towns and Small Computers: Can a Match Be Made? A Public Policy Seminar.
ERIC Educational Resources Information Center
National Association of Towns and Townships, Washington, DC.
A public policy seminar discussed how to match small towns and small computers. James K. Coyne, Special Assistant to the President and Director of the White House Office of Private Sector Initiatives, offered opening remarks and described a database system developed by his office to link organizations and communities with small computers to…
The Battle to Secure Our Public Access Computers
ERIC Educational Resources Information Center
Sendze, Monique
2006-01-01
Securing public access workstations should be a significant part of any library's network and information-security strategy because of the sensitive information patrons enter on these workstations. As the IT manager for the Johnson County Library in Kansas City, Kan., this author is challenged to make sure that thousands of patrons get the access…
The Computer as an Aid to Public Relations Writing.
ERIC Educational Resources Information Center
Rayfield, Robert E.
Teachers of public relations and other communication areas, with endorsement from the Association for Education in Journalism and Mass Communication (AEJMC), should request that the data processing industry develop assisted instruction programs in journalistic writing. Such action would provide a clearly defined need for a significant market and…
Li, Shaofan
function in the Sobolev norms. As a meshless method, the convergence rate is measured by a new control in developing meshless approximations for Galerkin procedures to solve partial differential equations. Several [8] and Liu [9]. It seems to us that the moving least-squares interpolant based meshless method has
Statistical and Computational Methods for Genetic Diseases: An Overview
Di Taranto, Maria Donata
2015-01-01
The identification of causes of genetic diseases has been carried out by several approaches with increasing complexity. Innovation of genetic methodologies leads to the production of large amounts of data that needs the support of statistical and computational methods to be correctly processed. The aim of the paper is to provide an overview of statistical and computational methods paying attention to methods for the sequence analysis and complex diseases. PMID:26106440
Computational methods for physical mapping of chromosomes
Torney, D.C.; Schenk, K.R. (Los Alamos National Lab., NM (USA)); Whittaker, C.C. (International Business Machines Corp., Albuquerque, NM (USA) Los Alamos National Lab., NM (USA)); White, S.W. (International Business Machines Corp., Kingston, NY (USA))
1990-01-01
A standard technique for mapping a chromosome is to randomly select pieces, to use restriction enzymes to cut these pieces into fragments, and then to use the fragments for estimating the probability of overlap of these pieces. Typically, the order of the fragments within a piece is not determined, and the observed fragment data from each pair of pieces must be permuted N1 × N2 ways to evaluate the probability of overlap, N1 and N2 being the observed number of fragments in the two selected pieces. We will describe computational approaches used to substantially reduce the computational complexity of the calculation of overlap probability from fragment data. Presently, about 10^-4 CPU seconds on one processor of an IBM 3090 is required for calculation of overlap probability from the fragment data of two randomly selected pieces, with an average of ten fragments per piece. A parallel version has been written using IBM clustered FORTRAN. Parallel measurements for 1, 6, and 12 processors will be presented. This approach has proven promising in the mapping of chromosome 16 at Los Alamos National Laboratory. We will also describe other computational challenges presented by physical mapping. 4 refs., 4 figs., 1 tab.
Original computer method for the experimental data processing in photoelasticity
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Panait, Cornel; Barhalescu, Mihaela; Sabau, Adrian; Dumitrache, Constantin; Dascalescu, Anca-Elena
2015-02-01
Optical methods in experimental mechanics are important because their results are accurate and they may be used for both full field interpretation and analysis of the local rapid variation of the stresses produced by the stress concentrators. Researchers conceived several graphical, analytical and numerical methods for the experimental data reduction. The paper presents an original computer method employed to compute the analytic functions of the isostatics, using the pattern of isoclinics of a photoelastic model or coating. The resulting software instrument may be included in hybrid models consisting of analytical, numerical and experimental studies. The computer-based integration of the results of these studies offers a higher level of understanding of the phenomena. A thorough examination of the sources of inaccuracy of this computer based numerical method was done and the conclusions were tested using the original computer code which implements the algorithm.
A Novel College Network Resource Management Method using Cloud Computing
NASA Astrophysics Data System (ADS)
Lin, Chen
At present, information construction in colleges mainly involves the construction of campus networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing, and grid computing, in which data are stored in the cloud and software and services are placed in the cloud, built on top of various standards and protocols, and accessible through all kinds of equipment. This article introduces cloud computing and its functions, then analyzes the existing problems of college network resource management; cloud computing technology and methods are then applied in the construction of a college information sharing platform.
Phase Field Method: Spinodal Decomposition Computer Laboratory
NSDL National Science Digital Library
García, R. Edwin
2008-08-25
In this lab, spinodal decomposition is numerically implemented in FiPy. A simple example python script (spinodal.py) summarizes the concepts. This lab is intended to complement the "Phase Field Method: An Introduction" lecture
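For readers without FiPy at hand, the Cahn-Hilliard dynamics that drive the lab's spinodal decomposition can be sketched with plain NumPy finite differences. This is a stand-in illustration, not the lab's spinodal.py script; the grid size, timestep, and gradient-energy coefficient below are illustrative choices.

```python
import numpy as np

def laplacian(f):
    """Periodic 5-point Laplacian on a unit-spaced grid."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

# Cahn-Hilliard dynamics: dc/dt = lap(c^3 - c - gamma * lap(c)).
# Starting from small random fluctuations about c = 0, unstable
# intermediate-wavelength modes grow and the field coarsens into
# domains: spinodal decomposition.
rng = np.random.default_rng(1)
c = 0.02 * rng.standard_normal((64, 64))
c0_mean = c.mean()                  # total composition is conserved
gamma, dt = 1.0, 0.01               # illustrative parameters

for step in range(3000):
    mu = c ** 3 - c - gamma * laplacian(c)   # chemical potential
    c += dt * laplacian(mu)                  # explicit Euler update

print(f"mean-composition drift: {abs(c.mean() - c0_mean):.1e}")
print(f"fluctuation amplitude:  {c.std():.3f}")
```

Because the update is the Laplacian of a chemical potential, the mean composition is conserved to rounding error while the fluctuation amplitude grows, which is the signature of the conserved (Cahn-Hilliard) dynamics the lab explores.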
Least-squares methods for computational electromagnetics
Kolev, Tzanio Valentinov
2004-11-15
Figure captions include: typical geometry of the domain Ω; face bubble functions (an element of BF_h in 2D and the bubbles for each face of a tetrahedron in 3D). Theoretically, (1.1) should be solved on all of R³. However, one usually computes in a sufficiently large domain, which is assumed to be surrounded by a perfect conductor. The boundary conditions in this case are: E × n = 0, B · n = 0 on ∂Ω.
Advanced Monte Carlo Methods: Computing Greeks
Giles, Mike
Given the Brownian path W(t), an Euler approximation with timestep h is S_{n+1} = S_n + a(S_n) h + b(S_n) Z_n √h, where Z_n is a N(0, 1) random variable. One uses the same random numbers for the "bumped" path simulations to minimise the variance. In terms of the discrete states S_n, we have the M-dimensional integral V = E[f(S)] = ∫ f(S) p(S) dS, where dS ≡ dS_1 dS_2 dS_3 · · ·
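As a concrete instance of computing a Greek by simulation, here is a pathwise delta estimator for a European call under geometric Brownian motion, simulated exactly rather than by the Euler scheme above. All parameter values are illustrative, and the closed-form Black-Scholes delta serves as the check.

```python
import numpy as np
from math import erf, log, sqrt

# Illustrative parameters (not from the lecture notes).
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

rng = np.random.default_rng(42)
Z = rng.standard_normal(200_000)

# Exact GBM terminal values: S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z).
ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)

# Pathwise delta: differentiate the discounted payoff along each path,
# d/dS0 e^{-rT} (S_T - K)^+ = e^{-rT} 1{S_T > K} S_T / S0, then average.
delta_mc = np.exp(-r * T) * np.mean((ST > K) * ST / S0)

# Closed-form Black-Scholes delta N(d1) for comparison.
d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
delta_bs = 0.5 * (1.0 + erf(d1 / sqrt(2.0)))

print(f"pathwise MC delta:   {delta_mc:.4f}")
print(f"Black-Scholes delta: {delta_bs:.4f}")
```

The pathwise estimator needs no "bumped" second simulation at all; the bump-and-revalue approach with common random numbers, as in the notes, is the alternative when the payoff is not differentiable along paths.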
Developing a multimodal biometric authentication system using soft computing methods.
Malcangi, Mario
2015-01-01
Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision. PMID:25502384
Computational Methods for Jet Noise Simulation
NASA Technical Reports Server (NTRS)
Goodrich, John W. (Technical Monitor); Hagstrom, Thomas
2003-01-01
The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.
Determinant Computation on the GPU using the Condensation Method
Moreno Maza, Marc
Haque, Sardar Anisul; Moreno Maza, Marc
Abstract: We report on a GPU implementation of the condensation method designed by Abdelmalek Salem. Our results suggest that a GPU implementation of the condensation method has a large potential
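Classical condensation (often attributed to Chio) shrinks an n×n determinant to an (n-1)×(n-1) one per step using 2×2 minors, which is the regular, data-parallel structure that maps well onto a GPU. Below is a plain CPU reconstruction of that classical scheme in Python, not the authors' GPU code or Salem's exact variant.

```python
import numpy as np

def det_condensation(A):
    """Determinant by Chio-style condensation: each step replaces the
    matrix with its 2x2 minors against the first row and column, and
    det(A_old) = det(A_new) / pivot^(n-2).  Row swaps (sign tracked)
    guard against a zero pivot."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    sign, denom = 1.0, 1.0
    while n > 1:
        p = int(np.argmax(np.abs(A[:, 0])))
        if A[p, 0] == 0.0:
            return 0.0              # entire first column zero => singular
        if p != 0:
            A[[0, p]] = A[[p, 0]]   # bring a nonzero pivot to the top
            sign = -sign
        pivot = A[0, 0]
        # All 2x2 minors | a11 a1j ; ai1 aij | computed at once; on a GPU
        # each entry of this (n-1)x(n-1) block is an independent thread.
        A = pivot * A[1:, 1:] - np.outer(A[1:, 0], A[0, 1:])
        denom *= pivot ** (n - 2)
        n -= 1
    return sign * A[0, 0] / denom

print(det_condensation([[2.0, 1.0], [5.0, 3.0]]))  # → 1.0
```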
Modified multirevolution integration methods for satellite orbit computation
O. F. Graf; D. G. Bettis
1975-01-01
Multirevolution methods allow for the computation of satellite orbits in steps spanning many revolutions. The methods previously discussed in the literature are based on polynomial approximations, and as a result they will integrate exactly (excluding round-off errors) polynomial functions of a discrete independent variable. Modified methods are derived that will integrate exactly products of linear and periodic functions. Numerical examples
A numerical method for computing minimal surfaces in arbitrary dimension
Frey, Pascal
Cecil, Thomas (ICES). A level set method is chosen because of its flexibility when handling topological changes, especially in higher dimensions. Reference: .-T. Cheng, The level set method applied to geometrically based motion, materials science, and image
Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems
NASA Technical Reports Server (NTRS)
Terrile, Richard J.; Guillaume, Alexandre
2011-01-01
A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
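A minimal differential-evolution loop illustrates the population-based, genetic-operator iteration the abstract describes, here minimizing a simple sphere function rather than competing engineering simulators; all hyperparameters are illustrative choices.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.5, CR=0.9,
                           generations=400, seed=0):
    """DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomially cross over with the parent, and keep the trial
    vector only if it improves on the parent (greedy selection)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(
                [j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True     # at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

def sphere(x):
    return float(np.sum(x ** 2))    # toy objective, optimum 0 at the origin

x_best, f_best = differential_evolution(sphere, [(-5.0, 5.0)] * 5)
print(f"best value found: {f_best:.2e}")
```

In the setting the abstract describes, `f` would be an expensive engineering simulator and the inner loop would be farmed out across a parallel cluster, but the genetic operators are exactly these.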
Computational Methods for Modification of Metabolic Networks
Tamura, Takeyuki; Lu, Wei; Akutsu, Tatsuya
2015-01-01
In metabolic engineering, modification of metabolic networks is an important biotechnology and a challenging computational task. In metabolic network modification, we modify metabolic networks by newly adding enzymes and/or knocking out genes to maximize the biomass production with minimum side-effects. In this mini-review, we briefly review constraint-based formalizations for the Minimum Reaction Cut (MRC) problem, where the minimum set of reactions is deleted so that the target compound becomes non-producible, from the viewpoints of flux balance analysis (FBA), elementary modes (EM), and Boolean models. The Minimum Reaction Insertion (MRI) problem, where the minimum set of reactions is added so that the target compound newly becomes producible, is also explained with a similar formalization approach. The relation between the accuracy of the models and the risk of overfitting is also discussed. PMID:26106462
Computer Methods in Applied Mechanics and Engineering
Helsinki, University of
to the actual 3D behaviour. How do various variants of the classical lowest-order shell models for a linearly elastic, isotropic material compare? We apply both numerical methods (FEM) and scale resolution and layers. With the thickness of the shell considered variable and the problem
Revocation in Publicly Verifiable Outsourced Computation James Alderman
International Association for Cryptologic Research (IACR)
devices. There is also a trend towards cloud computing and enormous volumes of data ("big data"). Alderman, James; Cid, Carlos; Crampton, Jason (Information Security Group, Royal Holloway, University of London, Egham). We define a number of new security models and present a construction of such a scheme built upon Key-Policy Attribute-Based Encryption.
Computing Technology; A Bibliography of Selected Rand Publications;
ERIC Educational Resources Information Center
Rand Corp., Santa Monica, CA.
Abstracts of over 300 unclassified Rand Corporation studies dealing with various aspects of computing technology are presented in this bibliography. The studies selected were all issued during the period January 1963 through December 1971. A subject index which includes a brief annotation and code number for each entry and an author index are…
The ACLS Survey of Scholars: Views on Publications, Computers, Libraries.
ERIC Educational Resources Information Center
Morton, Herbert C.; Price, Anne Jamieson
1986-01-01
Reviews results of a survey by the American Council of Learned Societies (ACLS) of 3,835 scholars in the humanities and social sciences who are working both in colleges and universities and outside the academic community. Areas highlighted include professional reading, authorship patterns, computer use, and library use. (LRW)
Computers in Public Schools: Changing the Image with Image Processing.
ERIC Educational Resources Information Center
Raphael, Jacqueline; Greenberg, Richard
1995-01-01
The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…
Computer Technology Standards of Learning for Virginia's Public Schools
ERIC Educational Resources Information Center
Virginia Department of Education, 2005
2005-01-01
The Computer/Technology Standards of Learning identify and define the progressive development of essential knowledge and skills necessary for students to access, evaluate, use, and create information using technology. They provide a framework for technology literacy and demonstrate a progression from physical manipulation skills for the use of…
NIST Special Publication 250-59 NIST Computer Time Services
Contents include: Technical Description of Internet Time Service; Technical Description of Hardware (Computer Systems); Description of Server Software (Operating System; Time Server Software: standard daemons, clock); registered IP addresses; secure shell connections; Alarm System; Attacks and Responses.
Equations of motion methods for computing electron affinities and
Simons, Jack
CHAPTER 17: Equations of motion methods for computing electron affinities and ionization potentials. Salt Lake City, UT 84112, USA. Abstract: The ab initio calculation of molecular electron affinities (EA) is considered; the electron affinity of a molecule can be computed by (approximately) solving the Schrödinger equation
Python for Education: Computational Methods for Nonlinear Systems
Christopher R. Myers; James P. Sethna
2007-04-24
We describe a novel, interdisciplinary, computational methods course that uses Python and associated numerical and visualization libraries to enable students to implement simulations for a number of different course modules. Problems in complex networks, biomechanics, pattern formation, and gene regulation are highlighted to illustrate the breadth and flexibility of Python-powered computational environments.
Method for computing coupled-channels Gamow-state energies
He, G.; Fink, P.; Landau, R.H. (Physics Department, Oregon State University, Corvallis, Oregon 97331 (US))
1989-09-01
The bound states and resonances of a two-particle system occur at the complex energies for which the system's {ital T} matrix has poles. Presented is a more efficient method of computing these energies for symmetric potential interactions.
2.093 Computer Methods in Dynamics, Fall 2002
Bathe, Klaus-Jürgen
Formulation of finite element methods for analysis of dynamic problems in solids, structures, fluid mechanics, and heat transfer. Computer calculation of matrices and numerical solution of equilibrium equations by direct ...
36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 1254.32 What rules apply to public access use of the Internet on NARA-supplied computers? (a) Computers (workstations) are available for Internet use in all NARA research…
36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?
Code of Federal Regulations, 2011 CFR
2011-07-01
§ 1254.32 What rules apply to public access use of the Internet on NARA-supplied computers? (a) Computers (workstations) are available for Internet use in all NARA research…
Transonic Flow Computations Using Nonlinear Potential Methods
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
2000-01-01
This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.
Computer Simulation Methods for Defect Configurations and Nanoscale Structures
Gao, Fei
2010-01-01
This chapter will describe general computer simulation methods, including ab initio calculations, molecular dynamics and the kinetic Monte-Carlo method, and their applications to the calculations of defect configurations in various materials (metals, ceramics and oxides) and the simulations of nanoscale structures due to ion-solid interactions. The multiscale theory, modeling, and simulation techniques (both time scale and space scale) will be emphasized, and comparisons between computer simulation results and experimental observations will be made.
ERIC Educational Resources Information Center
Osunwusi, Adeyinka Olumuyiwa; Abifarin, Michael Segun
2013-01-01
The aim of this study was to conduct a comparative assessment of computer literacy of private and public secondary school students. Although the definition of computer literacy varies widely, this study treated computer literacy in terms of access to, and use of, computers and the internet, basic knowledge and skills required to use computers and…
Koohsari, Mohammad Javad; Mavoa, Suzanne; Villanueva, Karen; Sugiyama, Takemi; Badland, Hannah; Kaczynski, Andrew T; Owen, Neville; Giles-Corti, Billie
2015-05-01
Public open spaces such as parks and green spaces are key built environment elements within neighbourhoods for encouraging a variety of physical activity behaviours. Over the past decade, there has been a burgeoning number of active living research studies examining the influence of public open space on physical activity. However, the evidence shows mixed associations between different aspects of public open space (e.g., proximity, size, quality) and physical activity. These inconsistencies hinder the development of specific evidence-based guidelines for urban designers and policy-makers for (re)designing public open space to encourage physical activity. This paper aims to move this research agenda forward, by identifying key conceptual and methodological issues that may contribute to inconsistencies in research examining relations between public open space and physical activity. PMID:25779691
Lecture Notes in Computer Science 4800 Commenced Publication in 1973
Dershowitz, Nachum
Front-matter excerpt: CR Subject Classification D.2.4, D.2-3, I.2.2. LNCS Sublibrary SL 1: Theoretical Computer Science and General Issues. © Springer-Verlag Berlin Heidelberg 2008. Printed in India on acid-free paper. SPIN: 12227471. Dedicated to Boris (Boaz…
47 CFR 61.20 - Method of filing publications.
Code of Federal Regulations, 2011 CFR
2011-10-01
...publications and associated documents, such as transmittal letters, requests for special permission, and supporting information...chapter, issuing carriers must submit the original of the cover letter (without attachments), FCC Form 159, and the...
Computational Methods in Quantum Field Theory
Kurt Langfeld
2007-11-19
After a brief introduction to the statistical description of data, these lecture notes focus on quantum field theories as they emerge from lattice models in the critical limit. For the simulation of these lattice models, Markov chain Monte-Carlo methods are widely used. We discuss the heat bath and, more modern, cluster algorithms. The Ising model is used as a concrete illustration of important concepts such as correspondence between a theory of branes and quantum field theory or the duality map between strong and weak couplings. The notes then discuss the inclusion of gauge symmetries in lattice models and, in particular, the continuum limit in which quantum Yang-Mills theories arise.
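The heat-bath update mentioned in the notes can be sketched in a few lines. The following is a minimal illustration for the 2D Ising model only, not the lecture notes' lattice-field-theory code; lattice size, seed, sweep count, and inverse temperature are arbitrary choices for the sketch.

```python
import math
import random

def heat_bath_sweep(spins, size, beta, rng):
    """One heat-bath sweep over a 2D Ising lattice with periodic boundaries.

    Each spin is redrawn from its conditional Boltzmann distribution given
    its four neighbours, independent of its current value.
    """
    for i in range(size):
        for j in range(size):
            field = (spins[(i + 1) % size][j] + spins[(i - 1) % size][j]
                     + spins[i][(j + 1) % size] + spins[i][(j - 1) % size])
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
            spins[i][j] = 1 if rng.random() < p_up else -1

# Usage: well below the critical temperature (beta_c ~ 0.44) an ordered
# start remains strongly magnetized.
rng = random.Random(0)
size = 8
spins = [[1] * size for _ in range(size)]
for _ in range(50):
    heat_bath_sweep(spins, size, beta=1.0, rng=rng)
magnetization = abs(sum(sum(row) for row in spins)) / size ** 2
```

Cluster algorithms improve on this local update near criticality, where single-spin sweeps decorrelate slowly.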
Implicit methods for computing chemically reacting flow
NASA Astrophysics Data System (ADS)
Li, C. P.
Modeling the inviscid air flow and its constituents over a hypersonically flying body requires a large system of Euler and chemical rate equations in three spatial coordinates. In most cases, the simplest approach to solve for the variables would be based on explicit integration of the governing equations. But the standard techniques are not suitable for this purpose because the integration step size must be inordinately small in order to maintain numerical stability. The difficulty is due to the stiff character of the difference equations, as there exists a large spectrum of spatial and temporal scales in the approximation of physical phenomena by numerical methods. For instance, in the calculation of gradients caused by shock and by cooled wall on a coarse grid, unchecked numerical errors eventually will lead to violent instability, and in calculations of species near chemical equilibrium, a small error in one species will give rise to a large error in the source term for other species. Despite the different nature of the stiffness in a complex system of equations, the most effective approach is believed to be implicit integration. The step increment is no longer dictated by the stability criteria for explicit methods, but instead is dictated by the degree of linearization introduced to the governing equations and by the order of desired accuracy. The linearization is enacted by means of Jacobian matrices, resulting from the differentiation of the flux as well as the rate production terms with respect to dependent variables. The backward Euler scheme is then applied to discretize the partial differential equations and to convert them into a system of linear difference equations in vector form. As this particular approach has the A-stable property, it is the one recommended by Lomax and Bailey(1) for one-dimensional nonequilibrium flow studies. 
However, in the practice of solving flow problems in multidimensions, it was not clear then how to deal with the mammoth size of the sparse block matrix equations. The implementation of an implicit method in the solution procedure could be as prohibitively expensive as a modified Runge-Kutta method.(2)
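The backward Euler linearization described above can be illustrated on a scalar stiff equation. This is a sketch of the general scheme, not the flow solver itself: the Newton iteration plays the role of the Jacobian-based linearization, and the test equation below is an assumed stand-in for the Euler and chemical rate equations.

```python
import math

def backward_euler(f, dfdy, y0, t0, t1, steps):
    """Integrate y' = f(t, y) with the implicit backward Euler scheme.

    Each step solves y_new = y_old + h*f(t_new, y_new) by Newton
    iteration; dfdy supplies the Jacobian df/dy used to linearize,
    the scalar analogue of the block-matrix linearization above.
    """
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        t_new = t + h
        y_new = y  # initial Newton guess: previous value
        for _ in range(50):
            residual = y_new - y - h * f(t_new, y_new)
            slope = 1.0 - h * dfdy(t_new, y_new)
            delta = residual / slope
            y_new -= delta
            if abs(delta) < 1e-12:
                break
        t, y = t_new, y_new
    return y

# Stiff test problem: y' = -1000*(y - cos(t)); the solution relaxes
# rapidly onto cos(t). Explicit Euler would need h < 0.002 here for
# stability, while backward Euler (A-stable) tolerates any step size.
y_end = backward_euler(lambda t, y: -1000.0 * (y - math.cos(t)),
                       lambda t, y: -1000.0,
                       y0=0.0, t0=0.0, t1=1.0, steps=100)
```

The step size h = 0.01 used here is five times larger than the explicit stability limit, yet the result still tracks cos(t) closely.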
Analytical and numerical methods; advanced computer concepts
Lax, P D
1991-03-01
This past year, two projects have been completed and a new one is under way. First, in joint work with R. Kohn, we developed a numerical algorithm to study the blowup of solutions to equations with certain similarity transformations. In the second project, the adaptive mesh refinement code of Berger and Colella for shock hydrodynamic calculations has been parallelized, and numerical studies using two different shared-memory machines have been done. My current effort is directed toward the development of Cartesian mesh methods to solve PDEs with complicated geometries. Most of the coming year will be spent on this project, which is joint work with Prof. Randy LeVeque at the University of Washington in Seattle.
Discrete linear canonical transform computation by adaptive method.
Zhang, Feng; Tao, Ran; Wang, Yue
2013-07-29
The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process. PMID:23938684
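The LCT-specific derivation is in the paper; the least-mean-square adaptation it builds on can be sketched as plain FIR system identification. All signals, tap counts, and step sizes below are illustrative assumptions, not values from the paper.

```python
import random

def lms_identify(x, desired, taps, mu):
    """Identify an FIR system with the least-mean-square (LMS) algorithm.

    x: input samples; desired: observed output; taps: filter length;
    mu: adaptation step size. Returns the adapted coefficients,
    ordered newest tap first.
    """
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]  # x[n], x[n-1], ...
        y = sum(wi * xi for wi, xi in zip(w, window))
        err = desired[n] - y
        for k in range(taps):
            w[k] += 2.0 * mu * err * window[k]  # gradient-descent update
    return w

# Usage: recover a known 3-tap filter from noiseless input/output data.
rng = random.Random(1)
h_true = [0.5, -0.3, 0.2]
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
desired = [sum(h_true[k] * x[n - k] for k in range(3) if n - k >= 0)
           for n in range(len(x))]
w = lms_identify(x, desired, taps=3, mu=0.05)
```

The inner update is embarrassingly parallel across taps, which is the property the paper exploits for VLSI implementation.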
Armstrong, L.D.; Rymer, G.; Perkins, S.
1994-12-31
This paper addresses a process facilitation technique using computer hardware and software that assists its users in group decision-making, consensus building, surveying and polling, and strategic planning. The process and equipment has been successfully used by the Department of Energy and Martin Marietta Energy Systems, Inc., Environmental Restoration and Waste Management Community Relations program. The technology is used to solicit and encourage qualitative and documented public feedback in government mandated or sponsored public meetings in Oak Ridge, Tennessee.
Using boundary methods to compute the Casimir energy
F. C. Lombardo; F. D. Mazzitelli; P. I. Villar
2010-03-10
We discuss new approaches to compute numerically the Casimir interaction energy for waveguides of arbitrary section, based on the boundary methods traditionally used to compute eigenvalues of the 2D Helmholtz equation. These methods are combined with Cauchy's theorem in order to perform the sum over modes. As an illustration, we describe a point-matching technique to compute the vacuum energy for waveguides containing media with different permittivities. We present explicit numerical evaluations for perfect conducting surfaces in the case of concentric corrugated cylinders and a circular cylinder inside an elliptic one.
Methods for operating parallel computing systems employing sequenced communications
Benner, Robert E. (Albuquerque, NM); Gustafson, John L. (Albuquerque, NM); Montry, Gary R. (Albuquerque, NM)
1999-01-01
A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.
Methods for operating parallel computing systems employing sequenced communications
Benner, R.E.; Gustafson, J.L.; Montry, G.R.
1999-08-10
A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
COMPUTATIONAL INTELLIGENCE GROUP
Cardeñosa, Jesús
Research topics: classification by regression; heuristic optimization (differential evolution, evolutionary strategies, local search methods, genetic algorithms, estimation of distribution algorithms, particle swarm and gravitational search algorithms); neuroinformatics (analysis of fMRI or MEG data in order to predict behavioral…)
Computer systems and methods for visualizing data
Stolte, Chris; Hanrahan, Patrick
2010-07-13
A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
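The level-by-level aggregation implied by the dimension hierarchy can be sketched as a roll-up over an ordered list of level fields. The field names and data below are hypothetical; the patent's specification language and plot construction are not modeled.

```python
from collections import defaultdict

def rollup(rows, levels, measure):
    """Aggregate a measure at each depth of a dimension hierarchy.

    rows: list of dicts; levels: ordered level field names (coarsest
    first); measure: numeric field to sum. Returns {depth: {key: total}},
    one aggregate per hierarchy level, loosely mirroring the plot
    components that represent successive levels.
    """
    out = {}
    for depth in range(1, len(levels) + 1):
        agg = defaultdict(float)
        for row in rows:
            key = tuple(row[level] for level in levels[:depth])
            agg[key] += row[measure]
        out[depth] = dict(agg)
    return out

# Usage with a hypothetical year -> quarter hierarchy.
rows = [
    {"year": 2019, "quarter": "Q1", "sales": 10.0},
    {"year": 2019, "quarter": "Q2", "sales": 5.0},
    {"year": 2020, "quarter": "Q1", "sales": 7.0},
]
totals = rollup(rows, ["year", "quarter"], "sales")
```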
Computational Simulations and the Scientific Method
NASA Technical Reports Server (NTRS)
Kleb, Bil; Wood, Bill
2005-01-01
As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
A computing method for spatial accessibility based on grid partition
NASA Astrophysics Data System (ADS)
Ma, Linbing; Zhang, Xinchang
2007-06-01
An accessibility computing method based on grid partition is put forward in this paper. Two important factors affecting traffic, road-network density and the relative spatial resistance of different land uses, are integrated into the traffic cost computed for each grid cell. An A* algorithm is introduced to search for the optimum traffic cost along a path of grid cells; a detailed search process and the definition of the heuristic evaluation function are described in the paper. The method can therefore be implemented simply, and its source data are easy to obtain. Moreover, by changing the heuristic search information, more reasonable results can be computed. To validate the research, a software package was developed in C# under the ArcEngine 9 environment. Applying the method, a case study on the accessibility of business districts in Guangzhou city was carried out.
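A minimal version of the A*-over-grid-costs search the abstract describes, with an assumed 4-neighbour move set and a Manhattan-distance heuristic scaled by the cheapest cell cost so it stays admissible; the paper's road-density and land-use cost model is replaced by a hand-written cost grid.

```python
import heapq

def a_star(cost, start, goal):
    """A* search over a grid of per-cell traversal costs (4-neighbour moves).

    The heuristic is Manhattan distance times the cheapest cell cost,
    which never overestimates the remaining cost, so the result is
    optimal. Returns the optimum accumulated cost from start to goal
    (cost of each entered cell; the start cell is free), or None if
    the goal is unreachable.
    """
    rows, cols = len(cost), len(cost[0])
    cheapest = min(min(row) for row in cost)

    def h(cell):
        return cheapest * (abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))

    best = {start: 0}
    frontier = [(h(start), 0, start)]
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        if g > best.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# Hypothetical 3x3 cost grid: cheap cells (1) form an S-shaped corridor.
grid = [[1, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]
optimum = a_star(grid, (0, 0), (2, 2))
```

Changing the heuristic (e.g. scaling it down toward zero) trades search effort for robustness, which is the tuning knob the abstract alludes to.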
Review of parallel computing methods and tools for FPGA technology
NASA Astrophysics Data System (ADS)
Cieszewski, Radosław; Linczuk, Maciej; Pozniak, Krzysztof; Romaniuk, Ryszard
2013-10-01
Parallel computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated using parallel computing techniques. Specialized parallel computer architectures are used for accelerating specific tasks. High-Energy Physics experiments' measuring systems often use FPGAs for fine-grained computation. An FPGA combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This paper presents existing methods and tools for fine-grained computation implemented in FPGAs using behavioral description and high-level programming languages.
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
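The claimed relation is a plain sum of the three time-period-specific terms, which a one-line function makes explicit (the argument values below are made up):

```python
def future_facility_condition(maintenance_cost, modernization_factor,
                              backlog_factor):
    """Future facility conditions for one time period, per the embodiment
    described above: the time-period-specific maintenance cost plus the
    modernization factor plus the backlog factor."""
    return maintenance_cost + modernization_factor + backlog_factor

# Hypothetical inputs for a single time period.
condition = future_facility_condition(100.0, 20.0, 5.0)
```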
Computational methods to study kinetics of DNA replication
Bechhoefer, John
Scott Cheng-Hsin Yang, Michel G… In this chapter, we describe methods that describe the state of DNA while undergoing replication in S phase, accounting for the finite amount of DNA in a chromosome. Key words: DNA replication, replication fork velocity, origin…
Calculating PI Using Historical Methods and Your Personal Computer.
ERIC Educational Resources Information Center
Mandell, Alan
1989-01-01
Provides a software program for determining PI to the 15th place after the decimal. Explores the history of determining the value of PI from Archimedes to present computer methods. Investigates Wallis's, Leibniz's, and Buffon's methods. Written for Tandy GW-BASIC (IBM compatible) with 384K. Suggestions for Apple II's are given. (MVL)
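Two of the historical methods the article names, Leibniz's series and Wallis's product, are easy to reproduce in a modern language (the original program was GW-BASIC; this Python sketch is an assumption, not the article's listing):

```python
import math

def leibniz_pi(terms):
    """Leibniz's series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    total = 0.0
    for k in range(terms):
        total += (-1.0) ** k / (2 * k + 1)
    return 4.0 * total

def wallis_pi(terms):
    """Wallis's product: pi/2 = (2/1)(2/3)(4/3)(4/5)(6/5)(6/7)..."""
    product = 1.0
    for k in range(1, terms + 1):
        product *= (2.0 * k) / (2 * k - 1) * (2.0 * k) / (2 * k + 1)
    return 2.0 * product

# Both converge slowly, which is part of the historical lesson:
# 100,000 terms still give only a few correct decimal places.
approx_l = leibniz_pi(100_000)
approx_w = wallis_pi(100_000)
```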
Precise computations of chemotactic collapse using moving mesh methods
Carretero, Ricardo
C.J. Budd, R. Carretero… We analyse a dynamic (scale-invariant) remeshing method which performs spatial mesh movement; the meshes constructed are ideally suited to a large number of problems in mathematical biology for which collapse phenomena are expected.
Precise computations of chemotactic collapse using moving mesh methods
Scheichl, Robert
C.J. Budd, R. Carretero… We analyse remeshing methods which perform spatial mesh movement based upon equidistribution; using a suitably chosen… these methods are suited to a large number of problems in mathematical biology for which collapse phenomena are expected. Key words…
Computer Subroutines for Analytic Rotation by Two Gradient Methods.
ERIC Educational Resources Information Center
van Thillo, Marielle
Two computer subroutine packages for the analytic rotation of a factor matrix, A(p x m), are described. The first program uses the Fletcher (1970) gradient method, and the second uses the Polak-Ribiere (Polak, 1971) gradient method. The calculations in both programs involve the optimization of a function of free parameters. The result is a…
Comparison of methods for computing streamflow statistics for Pennsylvania streams
Ehlke, Marla H.; Reed, Lloyd A.
1999-01-01
Methods for computing streamflow statistics intended for use on ungaged locations on Pennsylvania streams are presented and compared to frequency distributions of gaged streamflow data. The streamflow statistics used in the comparisons include the 7-day 10-year low flow, 50-year flood flow, and the 100-year flood flow; additional statistics are presented. Streamflow statistics for gaged locations on streams in Pennsylvania were computed using three methods for the comparisons: 1) Log-Pearson type III frequency distribution (Log-Pearson) of continuous-record streamflow data, 2) regional regression equations developed by the U.S. Geological Survey in 1982 (WRI 82-21), and 3) regional regression equations developed by the Pennsylvania State University in 1981 (PSU-IV). Log-Pearson distribution was considered the reference method for evaluation of the regional regression equations. Low-flow statistics were computed using the Log-Pearson distribution and WRI 82-21, whereas flood-flow statistics were computed using all three methods. The urban adjustment for PSU-IV was modified from the recommended computation to exclude Philadelphia and the surrounding areas (region 1) from the adjustment. Adjustments for storage area for PSU-IV were also slightly modified. A comparison of the 7-day 10-year low flow computed from Log-Pearson distribution and WRI 82-21 showed that the methods produced significantly different values for about 7 percent of the state. The same methods produced 50-year and 100-year flood flows that were significantly different for about 24 percent of the state. Flood-flow statistics computed using Log-Pearson distribution and PSU-IV were not significantly different in any regions of the state. These findings are based on a statistical comparison using the t-test on signed ranks and graphical methods.
Method for implementation of recursive hierarchical segmentation on parallel computers
NASA Technical Reports Server (NTRS)
Tilton, James C. (Inventor)
2005-01-01
A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.
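The two tuning levels described, a bottom level where division stops and an intermediate level where parallel execution gives way to serial, can be sketched as a control skeleton. The actual segmentation work, convergence checks, and parallel platform are omitted, and halving a 1D region is an assumed stand-in for the image division.

```python
def recursive_segment(region, level, bottom_level, switch_level, schedule):
    """Control skeleton of a recursive division with two tuning levels.

    Divisions above `switch_level` are marked for parallel execution,
    deeper ones for serial execution; recursion stops at `bottom_level`.
    A real implementation would run the segmentation algorithm on each
    region and perform the convergence checks the patent describes.
    """
    mode = "parallel" if level < switch_level else "serial"
    schedule.append((region, level, mode))
    if level == bottom_level:
        return
    lo, hi = region
    mid = (lo + hi) // 2
    recursive_segment((lo, mid), level + 1, bottom_level, switch_level, schedule)
    recursive_segment((mid, hi), level + 1, bottom_level, switch_level, schedule)

# Usage: divide a 16-pixel-wide strip down to level 3, switching from
# parallel to serial dispatch at level 2.
schedule = []
recursive_segment((0, 16), 0, 3, 2, schedule)
```

Raising the switch level pushes more of the tree into the parallel phase; lowering it reduces inter-node communication, which is the trade-off the two settings control.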
Managing expectations when publishing tools and methods for computational proteomics.
Martens, Lennart; Kohlbacher, Oliver; Weintraub, Susan T
2015-05-01
Computational tools are pivotal in proteomics because they are crucial for identification, quantification, and statistical assessment of data. The gateway to finding the best choice of a tool or approach for a particular problem is frequently journal articles, yet there is often an overwhelming variety of options that makes it hard to decide on the best solution. This is particularly difficult for nonexperts in bioinformatics. The maturity, reliability, and performance of tools can vary widely because publications may appear at different stages of development. A novel idea might merit early publication despite only offering proof-of-principle, while it may take years before a tool can be considered mature, and by that time it might be difficult for a new publication to be accepted because of a perceived lack of novelty. After discussions with members of the computational mass spectrometry community, we describe here proposed recommendations for organization of informatics manuscripts as a way to set the expectations of readers (and reviewers) through three different manuscript types that are based on existing journal designations. Brief Communications are short reports describing novel computational approaches where the implementation is not necessarily production-ready. Research Articles present both a novel idea and mature implementation that has been suitably benchmarked. Application Notes focus on a mature and tested tool or concept and need not be novel but should offer advancement from improved quality, ease of use, and/or implementation. Organizing computational proteomics contributions into these three manuscript types will facilitate the review process and will also enable readers to identify the maturity and applicability of the tool for their own workflows. PMID:25764342
Computational Methods for Protein Identification from Mass Spectrometry Data
McHugh, Leo; Arthur, Jonathan W
2008-01-01
Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification are also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology. PMID:18463710
PBL and Computer Programming — The Seven Steps Method with Adaptations
NASA Astrophysics Data System (ADS)
Nuutila, Esko; Törmä, Seppo; Malmi, Lauri
2005-06-01
Problem-Based Learning (PBL) method emphasizes students' own activity in learning about problems, setting up their own learning goals and actively searching for and analyzing information. In this paper, we describe and discuss our experiences on applying PBL, especially the seven steps method widely used in medical faculties, in an introductory computer programming course. We explain how the method is implemented, give examples and identify different kinds of PBL cases, and describe how the method is supplemented by other learning methods in our course. According to our experience, the PBL method increases the commitment of the students which results in a significantly lower drop-out rate: the average is 17% versus 45% in our traditional programming courses. In addition to computer programming, students also learn generic skills related to group work, collaborative design work, independent studying, and externalization of their knowledge.
42 CFR 447.205 - Public notice of changes in Statewide methods and standards for setting payment rates.
Code of Federal Regulations, 2011 CFR
2011-10-01
Excerpt: PAYMENTS FOR SERVICES, Payment Methods: General Provisions, § 447.205 Public notice of changes in Statewide methods and standards for setting payment…
42 CFR 447.205 - Public notice of changes in Statewide methods and standards for setting payment rates.
Code of Federal Regulations, 2010 CFR
2010-10-01
Excerpt: PAYMENTS FOR SERVICES, Payment Methods: General Provisions, § 447.205 Public notice of changes in Statewide methods and standards for setting payment…
A stochastic method for computing hadronic matrix elements
Drach, Vincent; Jansen, Karl; Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; Hadjiyiannakou, Kyriakos; Renner, Dru B.
2014-01-22
We present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method, offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.
Public Experiments and Their Analysis with the Replication Method
ERIC Educational Resources Information Center
Heering, Peter
2007-01-01
One of those who failed to establish himself as a natural philosopher in 18th-century Paris was the future revolutionary Jean Paul Marat. He not only published several monographs on heat, optics, and electricity, in which he attempted to characterise his work as purely empirical, but also tried to establish himself as a public lecturer.…
"Equal Educational Opportunity": Alternative Financing Methods for Public Education.
ERIC Educational Resources Information Center
Akin, John S.
This paper traces the evolution of state-local public education finance systems to the present; examines the prevalent foundation system of finance; discusses the "Serrano" decision and its implications for foundation systems; and, after an examination of three possible new approaches, recommends an education finance system. The first of the new…
Pedagogical Methods of Teaching "Women in Public Speaking."
ERIC Educational Resources Information Center
Pederson, Lucille M.
A course on women in public speaking, developed at the University of Cincinnati, focuses on the rhetoric of selected women who have been involved in various movements and causes in the United States in the twentieth century. Women studied include educator Mary McLeod Bethune, Congresswoman Jeannette Rankin, suffragette Carrie Chapman Catt, Helen…
Methods and systems for providing reconfigurable and recoverable computing resources
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors need to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different from the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.
Proposed congestion control method for cloud computing environments
Kuribayashi, Shin-ichi
2012-01-01
As cloud computing services rapidly expand their customer base, it has become important to share cloud resources so as to provide them economically. In cloud computing services, multiple types of resources, such as processing ability, bandwidth, and storage, need to be allocated simultaneously. If there is a surge of requests, competition arises between these requests for the use of cloud resources, disrupting the service; it is therefore necessary to consider measures to avoid or relieve congestion in cloud computing environments. This paper proposes a new congestion control method for cloud computing environments that reduces the size of the required resource for the congested resource type, instead of restricting all service requests as in existing networks. Next, this paper proposes user service specifications for the proposed congestion control method and clarifies the algorithm for deciding the optimal size of the required resource to be reduced, based on the load offered to the system. I...
A comparative study of computational methods in cosmic gas dynamics
NASA Technical Reports Server (NTRS)
Van Albada, G. D.; Van Leer, B.; Roberts, W. W., Jr.
1982-01-01
Many theoretical investigations of fluid flows in astrophysics require extensive numerical calculations. The selection of an appropriate computational method is therefore important for the astronomer who has to solve an astrophysical flow problem. The present investigation aims to provide an informational basis for such a selection by comparing a variety of numerical methods with the aid of a test problem. The test problem involves a simple, one-dimensional model of the gas flow in a spiral galaxy. The numerical methods considered include the beam scheme, Godunov's method (G), the second-order flux-splitting method (FS2), MacCormack's method, and the flux-corrected transport methods of Boris and Book (1973). It is found that the best second-order method (FS2) outperforms the best first-order method (G) by a huge margin.
Adler, Joan
IACMM (Israel Association for Computational Methods in Mechanics), 29th Israel Symposium, Technion - Israel Institute of Technology, Faculty of Aerospace Engineering and Faculty of Mechanical Engineering.
VerSum: Verifiable Computations over Large Public Logs Jelle van den Hooff
VERSUM clients run computations over large public logs, such as blockchains or a Certificate Transparency log, and ensure that the output is correct by comparing results computed from publicly available logs whose validity is guaranteed. The logs are large (e.g., the Bitcoin blockchain), so running computations over them requires substantial resources.
Trends in Access to Computing Technology and Its Use in Chicago Public Schools, 2001-2005
ERIC Educational Resources Information Center
Coca, Vanessa; Allensworth, Elaine M.
2007-01-01
Five years after Consortium on Chicago School Research (CCSR) research revealed a "digital divide" among Chicago Public Schools (CPS) and limited computer usage by staff and students, this new study shows that district schools have overcome many of these obstacles, particularly in terms of technology access and use among teachers and…
Lüttgen, Gerald
Under consideration for publication in Formal Aspects of Computing: Verifying Compiled File System Code. File system failures may render all application-level programs unsafe and give way to serious security problems. The work is aimed at verifying properties within the Linux VFS implementation, and equally at assessing the feasibility of our SOCA technique.
International Association for Cryptologic Research (IACR)
Non-repudiation. Introduction: For electronic commercial applications, evidence of possession of documents is especially important for signer authenticity and data integrity assurance. However, it is necessary to keep commercial documents … Each signer chooses a secret key xi in Zq* and computes his public key yi = g^xi mod p. He publishes yi.
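The key-generation step in the fragment above can be sketched as follows. The tiny group parameters are illustrative only (real schemes use primes hundreds of digits long, with q dividing p-1):

```python
import random

# Discrete-log key setup as described in the fragment: each signer picks a
# secret x in Zq* and publishes y = g^x mod p.
# Illustrative toy parameters: p = 23, q = 11 (q divides p-1 = 22), and
# g = 4 generates the order-q subgroup of Zp*.
p, q, g = 23, 11, 4

x = random.randrange(1, q)   # secret key x in Zq*
y = pow(g, x, p)             # public key y = g^x mod p, published by the signer
```

Because g has order q, any public key y produced this way satisfies y^q ≡ 1 (mod p), which a verifier can use as a sanity check on published keys.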
Worrell, James
Under consideration for publication in Formal Aspects of Computing: Three Tokens in Herman's Algorithm. University of Leicester, UK. Abstract: Herman's algorithm, introduced by Herman [8], is a synchronous randomized protocol for achieving self-stabilisation, by which a ring of processes connected unidirectionally…
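The protocol the fragment describes can be sketched in an abstracted token model (not Herman's original bit-array formulation): each round, every token independently stays put or moves to the clockwise neighbour with probability 1/2, and tokens that collide on the same process annihilate in pairs, so an odd token count stays odd until a single token survives. A minimal simulation, with illustrative names:

```python
import random

def herman_round(tokens, rng):
    """One synchronous round: each token stays or moves clockwise with
    probability 1/2; tokens landing on the same process cancel in pairs."""
    n = len(tokens)
    counts = [0] * n
    for i in range(n):
        if tokens[i]:
            j = i if rng.random() < 0.5 else (i + 1) % n
            counts[j] += 1
    return [c % 2 == 1 for c in counts]   # pairwise annihilation

def stabilise(tokens, rng):
    """Run rounds until a single token remains; return the round count."""
    rounds = 0
    while sum(tokens) > 1:
        tokens = herman_round(tokens, rng)
        rounds += 1
    return rounds

rng = random.Random(1)
rounds = stabilise([True] * 7, rng)   # ring of 7 processes, 7 tokens
```

Since annihilation removes tokens two at a time, the parity of the token count is invariant; starting from an odd number, the ring almost surely reaches the single-token (stable) configuration, in expected time quadratic in the ring size.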
Learning From Engineering and Computer Science About Communicating The Field To The Public
NASA Astrophysics Data System (ADS)
Moore, S. L.; Tucek, K.
2014-12-01
The engineering and computer science community has taken the lead in actively informing the public about their discipline, including its societal contributions and career opportunities. These efforts have intensified with regard to informing underrepresented populations in STEM about engineering and computer science. Are there lessons to be learned by the geoscience community in communicating the societal impacts and career opportunities in the geosciences, especially with regard to broadening participation and meeting the Next Generation Science Standards? An estimated 35 percent increase in the number of geoscientist jobs in the United States forecast for the period between 2008 and 2018, combined with majority populations becoming minority populations, makes it imperative that we improve how we increase the public's understanding of the geosciences and how we present our message to targeted populations. This talk will look at recommendations from the National Academy of Engineering's Changing the Conversation: Messages for Improving the Public Understanding of Engineering, and communication strategies by organizations such as Code.org, to highlight practices that the geoscience community can adopt to increase public awareness of the societal contributions of the geosciences, the career opportunities in the geosciences, and the importance of the geosciences in the Next Generation Science Standards. An effort to communicate geoscience to the public, Earth is Calling, will be compared and contrasted with these efforts and used as an example of how geological societies and other organizations can engage the general public and targeted groups about the geosciences.
Reconnection methods for an arbitrary polyhedral computational grid
Rasskazova, V.V.; Sofronov, I.D.; Shaporenko, A.N. [Russian Federal Nuclear Center (Russian Federation); Burton, D.E.; Miller, D.S. [Lawrence Livermore National Lab., CA (United States)
1996-08-01
The paper suggests a method for local reconstruction of a 3D irregular computational grid and the algorithm of its program implementation. Two basic grid reconstruction operations are used: pasting two cells that share a common face, and cutting a cell into two by a given plane. The paper presents and analyzes criteria for choosing one operation or the other. A program for local reconstruction of a 3D irregular grid is used to conduct two test computations, and the computed results are given.
Customizing computational methods for visual analytics with big data.
Choo, Jaegul; Park, Haesun
2013-01-01
The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data. PMID:24808056
New computational methods and algorithms for semiconductor science and nanotechnology
NASA Astrophysics Data System (ADS)
Gamoke, Benjamin C.
The design and implementation of sophisticated computational methods and algorithms are critical to solving problems in nanotechnology and semiconductor science. Two key methods will be described to overcome challenges in contemporary surface science. The first method will focus on accurately cancelling interactions in a molecular system, such as modeling adsorbates on periodic surfaces at low coverages, a problem for which current methodologies are computationally inefficient. The second method pertains to the accurate calculation of core-ionization energies through X-ray photoelectron spectroscopy. This development enables the assignment of peaks in X-ray photoelectron spectra, which can determine the chemical composition and bonding environment of surface species. Finally, illustrative surface-adsorbate and gas-phase studies using the developed methods will also be featured.
The continuous slope-area method for computing event hydrographs
Smith, Christopher F.; Cordova, Jeffrey T.; Wiele, Stephen M.
2010-01-01
The continuous slope-area (CSA) method expands the slope-area method of computing peak discharge to a complete flow event. Continuously recording pressure transducers installed at three or more cross sections provide water-surface slopes and stage during an event that can be used with cross-section surveys and estimates of channel roughness to compute a continuous discharge hydrograph. The CSA method has been made feasible by the availability of low-cost recording pressure transducers that provide a continuous record of stage. The CSA method was implemented on the Babocomari River in Arizona in 2002 to monitor streamflow in the channel reach by installing eight pressure transducers in four cross sections within the reach. Continuous discharge hydrographs were constructed from five streamflow events during 2002-2006. Results from this study indicate that the CSA method can be used to obtain continuous hydrographs and rating curves can be generated from streamflow events.
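As a rough illustration of the hydraulics underlying the slope-area method (not the authors' implementation), a discharge estimate at each time step combines surveyed cross-section geometry, the transducer-derived water-surface slope, and an estimated channel roughness via Manning's equation. The numbers below are hypothetical:

```python
import math

def manning_discharge(area, wetted_perimeter, slope, n):
    """Discharge from Manning's equation (SI units):
    Q = (1/n) * A * R^(2/3) * S^(1/2), with hydraulic radius R = A / P."""
    R = area / wetted_perimeter
    return (1.0 / n) * area * R ** (2.0 / 3.0) * math.sqrt(slope)

# Hypothetical cross section: 12 m^2 flow area, 10 m wetted perimeter,
# water-surface slope 0.002 from paired pressure transducers, n = 0.035.
q = manning_discharge(12.0, 10.0, 0.002, 0.035)   # discharge in m^3/s
```

Repeating this computation at each recorded stage, with area and wetted perimeter looked up from the cross-section survey as functions of stage, yields a continuous discharge hydrograph of the kind the CSA method produces.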
ERIC Educational Resources Information Center
Oldehoeft, Arthur E.
The design and development of a program of computer-assisted instruction (CAI) which assists the student in learning elementary algorithms of an undergraduate numerical methods course is presented, along with special programing features such as partial precision arithmetic, computer-generated problems, and approximate matching of mathematical…
Santiago G Moreno; Alex J Sutton; Erick H Turner; Keith R Abrams; Nicola J Cooper; Tom M Palmer; A E Ades
2009-01-01
Objective: To assess the performance of novel contour-enhanced funnel plots and a regression-based adjustment method to detect and adjust for publication biases. Design: Secondary analysis of a published systematic literature review. Data sources: Placebo-controlled trials of antidepressants previously submitted to the US Food and Drug Administration (FDA) and matching journal publications. Methods: Publication biases were identified using novel contour
GRACE: Public Health Recovery Methods following an Environmental Disaster
Svendsen, ER; Whittle, N; Wright, L; McKeown, RE; Sprayberry, K; Heim, M; Caldwell, R; Gibson, JJ; Vena, J.
2014-01-01
Different approaches are necessary when Community-Based Participatory Research (CBPR) of environmental illness is initiated after an environmental disaster within a community. Often such events are viewed as golden scientific opportunities to do epidemiological studies. However, we believe that in such circumstances, community engagement and empowerment need to be integrated into the public health service efforts in order for both those efforts and any science to be successful, with special care taken to address the immediate health needs of the community first, rather than the pressing need to answer important scientific questions. We demonstrate how we have simultaneously provided valuable public health service, embedded generalizable scientific knowledge, and built a successful foundation for supplemental CBPR through our on-going recovery work after the chlorine gas disaster in Graniteville, South Carolina. PMID:20439226
Integration of computational methods into automotive wind tunnel testing
Katz, J.
1989-01-01
This paper discusses the aerodynamics of a generic, enclosed-wheel racing-car shape, without wheels, investigated numerically and compared with one-quarter-scale wind-tunnel data. Because both methods lack perfection in simulating actual road conditions, a complementary application of the two was studied. The computations served to correct the high-blockage wind-tunnel results and provided detailed pressure data that improved the physical understanding of the flow field. The experimental data were used here mainly to provide information on the location of flow-separation lines and on the aerodynamic loads; these in turn were used to validate and calibrate the computations.
ERIC Educational Resources Information Center
Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna
2013-01-01
Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…
Software for computing eigenvalue bounds for iterative subspace matrix methods
NASA Astrophysics Data System (ADS)
Shepard, Ron; Minkoff, Michael; Zhou, Yunkai
2005-07-01
This paper describes software for computing eigenvalue bounds for the standard and generalized hermitian eigenvalue problem as described in [Y. Zhou, R. Shepard, M. Minkoff, Computing eigenvalue bounds for iterative subspace matrix methods, Comput. Phys. Comm. 167 (2005) 90-102]. The software applies to any subspace method, including Lanczos, Davidson, SPAM, Generalized Davidson Inverse Iteration, Jacobi-Davidson, and the Generalized Jacobi-Davidson methods, and it is applicable to either outer or inner eigenvalues. It can be applied during the subspace iterations to truncate the iterative process and avoid unnecessary effort when converging specific eigenvalues to a required target accuracy, and it can be applied to the final set of Ritz values to assess the accuracy of the converged results.
Program summary
Title of program: SUBROUTINE BOUNDS_OPT
Catalogue identifier: ADVE
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVE
Computers: any computer that supports a Fortran 90 compiler
Operating systems: any operating system that supports a Fortran 90 compiler
Programming language: Standard Fortran 90
High-speed storage required: 5m+5 working-precision and 2m+7 integer words for m Ritz values
No. of bits in a word: the floating-point working precision is parameterized with the symbolic constant WP
No. of lines in distributed program, including test data, etc.: 2452
No. of bytes in distributed program, including test data, etc.: 281 543
Distribution format: tar.gz
Nature of physical problem: The computational solution of eigenvalue problems using iterative subspace methods has widespread applications in the physical sciences and engineering, as well as other areas of mathematical modeling (economics, social sciences, etc.). Quantifying the accuracy of such solutions is fundamental to providing the modeler with information about the reliability of the computational results; applications include using these bounds to terminate the iterative procedure at specified accuracy limits.
Method of solution: The Ritz values and their residual norms are computed and used as input to the procedure. While knowledge of the exact eigenvalues is not required, it is required that the Ritz values be isolated from the exact eigenvalues outside of the Ritz spectrum and that no eigenvalues be skipped within the Ritz spectrum. Using a multipass refinement approach, upper and lower bounds are computed for each Ritz value.
Typical running time: While typical applications deal with m<20, for m=100000 the running time is 0.12 s on an Apple PowerBook.
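The paper's multipass refinement bounds are not reproduced here, but the classical residual bound they sharpen can be: for a Hermitian matrix A, every Ritz value theta with unit-norm Ritz vector y lies within ||A y - theta y|| of some exact eigenvalue. A NumPy sketch (the function name and random test matrix are illustrative, not taken from BOUNDS_OPT):

```python
import numpy as np

def ritz_bounds(A, V):
    """Rayleigh-Ritz on the subspace spanned by the orthonormal columns
    of V; returns each Ritz value with its residual-norm error bound."""
    H = V.conj().T @ A @ V                # projected matrix
    thetas, S = np.linalg.eigh(H)         # Ritz values and subspace eigvecs
    Y = V @ S                             # Ritz vectors in the full space
    R = A @ Y - Y * thetas                # residual columns A y - theta y
    return thetas, np.linalg.norm(R, axis=0)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)); A = (A + A.T) / 2      # Hermitian test matrix
V, _ = np.linalg.qr(rng.standard_normal((50, 8)))         # random 8-dim subspace
thetas, bounds = ritz_bounds(A, V)
exact = np.linalg.eigvalsh(A)
# every Ritz value is within its bound of some exact eigenvalue
assert all(np.min(np.abs(exact - t)) <= b + 1e-10 for t, b in zip(thetas, bounds))
```

As in the paper, only Ritz values and residual norms are needed; no exact eigenvalues enter the bound computation itself.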
Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS.
Jalali, Arash; Olabode, Olusegun A; Bell, Christopher M
2012-01-01
As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities. PMID:23569644
Probability computations using the SIGMA-PI method on a personal computer
Haskin, F.E.; Lazo, M.S.; Heger, A.S.
1990-09-30
The SIGMA-PI ({Sigma}{Pi}) method, as implemented in the SIGPI computer code, is designed to accurately and efficiently evaluate the probability of Boolean expressions in disjunctive normal form, given the base event probabilities. The method is not limited to problems in which base event probabilities are small, nor to Boolean expressions that exclude the complements of base events, nor to problems in which base events are independent. The feasibility of implementing the {Sigma}{Pi} method on a personal computer has been evaluated, and a version of the SIGPI code capable of quantifying simple Boolean expressions with independent base events on the personal computer has been developed. Tasks required for a fully functional personal computer version of SIGPI have been identified, together with enhancements that could be implemented to improve the utility and efficiency of the code.
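For the restricted case the personal-computer version handles (independent base events, no complements), the core computation can be sketched by exact inclusion-exclusion over the AND-terms of the DNF expression. This is an illustrative sketch of the underlying probability calculation, not the SIGPI algorithm itself:

```python
from itertools import combinations

def dnf_probability(terms, p):
    """Exact probability that a DNF formula (an OR of AND-terms over
    independent base events) is true. `terms` is a list of sets of event
    indices; `p` maps each event index to its probability."""
    total = 0.0
    for k in range(1, len(terms) + 1):
        for combo in combinations(terms, k):
            events = set().union(*combo)       # conjunction of k terms
            prob = 1.0
            for i in events:
                prob *= p[i]                    # independence of base events
            total += (-1) ** (k + 1) * prob     # inclusion-exclusion sign
    return total

# (A and B) or (B and C), with P(A) = P(B) = P(C) = 0.5:
# P = 0.25 + 0.25 - 0.125 = 0.375
prob = dnf_probability([{0, 1}, {1, 2}], {0: 0.5, 1: 0.5, 2: 0.5})
```

Note this brute-force form enumerates all 2^T - 1 term subsets; the value of the {Sigma}{Pi} method lies in organizing such evaluations efficiently.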
Computational methods for fracture mechanics and probabilistic fatigue
NASA Astrophysics Data System (ADS)
Harkness, Harrington Hunter
Techniques to determine fatigue reliability are presented and demonstrated. The techniques combine: (1) the Paris relation to model the growth of fatigue cracks; (2) models of nondestructive evaluation techniques; (3) computational fracture mechanics to treat complex-shaped components; and (4) probabilistic and reliability methods to account for uncertainties in parameters which influence fatigue life. Emphasis is placed on methods for computational fracture mechanics, reliability analysis, and their combination. Fundamental aspects of computational fracture mechanics are reviewed, including near-crack discretization and post-processing to determine stress intensity factors. New finite elements for use at a crack tip in 2D and a crack front in 3D are introduced for linear elastic analyses. Results obtained with these elements show good agreement with solutions available in handbooks. Coupling the 3D crack elements with boundary elements to determine stress intensity factors for surface-breaking cracks in complex components is demonstrated. The first-order reliability method (FORM) is shown to be accurate in a fatigue setting based on comparisons to results obtained by Monte Carlo simulations (MCS). The main advantage of FORM is that relatively few simulations of fatigue growth from initiation to failure are necessary. For complex components, FORM may require only minutes, whereas MCS is typically infeasible for determining failure probabilities below 10-4. Techniques for overcoming several potential limitations of FORM analyses are addressed, including treatment of multiple inspections, intra-specimen random variations, and multiple failure modes. In the approach that is taken, the computational fracture mechanics and probabilistic aspects are effectively decoupled to improve efficiency.
This is achieved by developing stress intensity factor parameterizations for the expected crack shapes and sizes based on results obtained with computational fracture mechanics methods. The computer programs which implement the probabilistic methods then call on these parameterizations during the fatigue simulations.
A computational method for automated characterization of genetic components.
Yordanov, Boyan; Dalchau, Neil; Grant, Paul K; Pedersen, Michael; Emmott, Stephen; Haseloff, Jim; Phillips, Andrew
2014-08-15
The ability to design and construct synthetic biological systems with predictable behavior could enable significant advances in medical treatment, agricultural sustainability, and bioenergy production. However, to reach a stage where such systems can be reliably designed from biological components, integrated experimental and computational techniques that enable robust component characterization are needed. In this paper we present a computational method for the automated characterization of genetic components. Our method exploits a recently developed multichannel experimental protocol and integrates bacterial growth modeling, Bayesian parameter estimation, and model selection, together with data processing steps that are amenable to automation. We implement the method within the Genetic Engineering of Cells modeling and design environment, which enables both characterization and design to be integrated within a common software framework. To demonstrate the application of the method, we quantitatively characterize a synthetic receiver device that responds to the 3-oxohexanoyl-homoserine lactone signal, across a range of experimental conditions. PMID:24628037
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (editor); Housner, Jerrold M. (editor); Tanner, John A. (editor); Hayduk, Robert J. (editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
Method and system for environmentally adaptive fault tolerant computing
NASA Technical Reports Server (NTRS)
Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)
2010-01-01
A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. The sensitivity of an on-board processing system to the measured environmental condition is then assessed. It is determined whether to reconfigure the fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
Analysis and optimization of cyclic methods in orbit computation
NASA Technical Reports Server (NTRS)
Pierce, S.
1973-01-01
The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.
A METHOD FOR OBTAINING DIGITAL SIGNATURES AND PUBLIC-KEY CRYPTOSYSTEMS
R. L. Rivest; A. Shamir; L. M. Adleman
1977-01-01
Abstract An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: 1. Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can
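The key property described above, that publishing the encryption key (e, n) does not reveal the decryption key, can be seen in a toy sketch of the scheme. The values below are standard textbook toy parameters, far too small for real use:

```python
def make_keys(p, q, e):
    """Build an RSA key pair from primes p, q and public exponent e."""
    n = p * q
    phi = (p - 1) * (q - 1)        # Euler's totient of n
    d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)
    return (e, n), (d, n)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)            # c = m^e mod n, using only public values

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)            # m = c^d mod n, requires the secret d

pub, priv = make_keys(61, 53, 17)  # toy primes; n = 3233
c = encrypt(65, pub)
assert decrypt(c, priv) == 65      # round trip recovers the message
```

Recovering d from (e, n) requires factoring n into p and q, which is what makes publishing the encryption key safe when the primes are large.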
A faster method of computation of lattice quark number susceptibilities
Gavai, R V
2011-01-01
We compute the quark number susceptibilities in two-flavor QCD for staggered fermions by adding the chemical potential as a Lagrange multiplier for the point-split number density term. Since fewer quark propagators are required at any order, this method leads to faster computations. We propose a subtraction procedure to remove the inherent undesired lattice terms and check that it works well by comparing our results with existing ones where the elimination of these terms is analytically guaranteed. We also show that the ratios of susceptibilities are robust, opening a door to better estimates of the location of the QCD critical point through the computation of the tenth- and twelfth-order baryon number susceptibilities without significant additional computational overload.
Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.
2004-01-01
Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.
2003-01-01
Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
Computational Methods for CLIP-seq Data Processing.
Reyes-Herrera, Paula H; Ficarra, Elisa
2014-01-01
RNA-binding proteins (RBPs) are at the core of post-transcriptional regulation and thus of gene expression control at the RNA level. One of the principal challenges in the field of gene expression regulation is to understand RBPs mechanism of action. As a result of recent evolution of experimental techniques, it is now possible to obtain the RNA regions recognized by RBPs on a transcriptome-wide scale. In fact, CLIP-seq protocols use the joint action of CLIP, crosslinking immunoprecipitation, and high-throughput sequencing to recover the transcriptome-wide set of interaction regions for a particular protein. Nevertheless, computational methods are necessary to process CLIP-seq experimental data and are a key to advancement in the understanding of gene regulatory mechanisms. Considering the importance of computational methods in this area, we present a review of the current status of computational approaches used and proposed for CLIP-seq data. PMID:25336930
A Spectral Time-Domain Method for Computational Electrodynamics
NASA Astrophysics Data System (ADS)
Lambers, James V.
2009-09-01
We present a new approach to the numerical solution of Maxwell's equations in the case of spatially-varying electric permittivity and/or magnetic permeability, based on Krylov subspace spectral (KSS) methods. KSS methods for scalar equations compute each Fourier coefficient of the solution using techniques developed by Gene Golub and Gérard Meurant for approximating elements of functions of matrices by Gaussian quadrature in the spectral, rather than physical, domain. We show how they can be generalized to coupled systems of equations, such as Maxwell's equations, by choosing appropriate basis functions that, while induced by this coupling, still allow efficient and robust computation of the Fourier coefficients of each spatial component of the electric and magnetic fields. We also discuss the implementation of appropriate boundary conditions for simulation on infinite computational domains, and how discontinuous coefficients can be handled.
Computer controlled fluorometer device and method of operating same
Kolber, Zbigniew (Shoreham, NY); Falkowski, Paul (Stony Brook, NY)
1990-01-01
A computer controlled fluorometer device and method of operating same. The device includes a pump flash source, a probe flash source, and one or more sample chambers, in combination with a light condenser lens system and associated filters, reflectors, and collimators, as well as signal conditioning and monitoring means, a programmable computer, and a software programmable source of background irradiance. Operated according to the method of the invention, the device rapidly, efficiently, and accurately measures photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources, under control of the computer.
ERIC Educational Resources Information Center
Morse, Frances K.; Daiute, Colette
There is a burgeoning body of research on gender differences in computing attitudes and behaviors. After a decade of experience, researchers from both inside and outside the field of educational computing research are raising methodological and conceptual issues which suggest that perhaps researchers have shortchanged girls and women in…
Computational Methods for Learning Population History from Large Scale Genetic Variation Datasets
Keywords: Population History, Markov chain Monte Carlo, Coalescent Theory, Genome Wide Association Study. It is also only by studying the diversity in humans and different species that we can understand what makes…
Problem Set 3 ISE 407 Computational Methods in Optimization
Ralphs, Ted
…as in Matlab. (a) Generalize the above. (b) Write the recursion for the running time of your method, solve it, and show that the running time… (c) Do some computational experiments comparing the empirical running time for multiplying random matrices against the running time of the naive algorithm, which requires O(n^3) operations.
European Society of Computational Methods in Sciences and Engineering (ESCMSE)
Koch, Othmar
European Society of Computational Methods in Sciences and Engineering (ESCMSE). Journal of Numerical Analysis, Industrial and Applied Mathematics (JNAIAM), vol. 4, no. 1-2, 2009, pp. 129-149, ISSN 1790… A problem arising in the theory of shallow membrane caps [35] is associated with such problems. Even in ecology…
Computational methods for calculating geometric parameters of tectonic plates
Antonio Schettino
1999-01-01
Present day and ancient plate tectonic configurations can be modelled in terms of non-overlapping polygonal regions, separated by plate boundaries, on the unit sphere. The computational methods described in this article allow an evaluation of the area and the inertial tensor components of a polygonal region on the unit sphere, as well as an estimation of the associated errors. These…
International Conference on Computational Methods Marine Engineering MARINE 2005
Löhner, Rainald
…of the interface between air and water is known a priori; on the contrary, it often involves unsteady fragmentation… Flows with floating structures, green water on deck, and sloshing (e.g. in LNG tankers) are but a few examples… interface-tracking and interface-capturing methods. The former computes the liquid flow only, using a numerical grid that adapts…
Computational Methods for Atmospheric Science, ATS607 Colorado State University
Department of Atmospheric Science, Spring 2014. Wednesdays and Fridays @ 2:15-3:30. Room: ENGR Research Center (ERC)… You may base your project on your research, another atmospheric science topic that you are interested in, or you may choose from some…
Advanced Computer Methods for Grounding Analysis Ignasi Colominas1
Colominas, Ignasi
…are not exposed to dangerous electrical shocks and to guarantee the integrity of equipment and the continuity of… Grounding grids of large electrical substations in practical cases present some difficulties, mainly due to… Ignasi Colominas, José París, Xes…
To appear in Computer Methods in Applied Mechanics and Engineering
Qian, Xiaoping
Isogeometric shape optimization: from the optimization of rectangular-like NURBS patches to the optimization of topologically complex geometries. We present an isogeometric shape optimization approach that is applicable to topologically complex geometries.
Foundational Methods in Computer Science 2012 Dalhousie University, Halifax, Canada
Selinger, Peter
Foundational Methods in Computer Science 2012, Dalhousie University, Halifax, Canada, June… Coffee and Breakfast. 9:00-9:45: Ernie Manes (Massachusetts): More work for Robin: Universal… Coffee and Breakfast. 9:00-9:45: Robin Cockett (Calgary): Can you differentiate a polynomial? (part 1)…
Decluttering Methods for Computer-Generated Graphic Displays
NASA Technical Reports Server (NTRS)
Schultz, E. Eugene, Jr.
1986-01-01
Symbol simplification and contrasting enhance the viewer's ability to detect a particular symbol. The report describes experiments designed to indicate how various decluttering methods affect viewers' abilities to distinguish essential from nonessential features on computer-generated graphic displays. Results indicate that partial removal of nonessential graphic features through symbol simplification is as effective in decluttering as total removal of nonessential graphic features.
COMPUTATION OF AIRCRAFT FLOW FIELDS BY A MULTIGRID EULER METHOD
Jameson, Antony
…is of paramount importance to the airplane designer. Accuracy and speed are the qualities of a computer code which matter in airplane design. In this paper a new multigrid finite volume method for the solution of the Euler equations is described, and computed transonic flow fields about fighter-type aircraft are presented. The compressible Euler…
pyro: Python-based tutorial for computational methods for hydrodynamics
NASA Astrophysics Data System (ADS)
Zingale, Michael
2015-07-01
pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.
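As a concrete illustration of the kind of method such a tutorial covers, here is a minimal 1-d first-order upwind advection solver; it is a generic textbook sketch, not code from pyro itself (pyro provides full 2-d solvers).

```python
import numpy as np

# First-order upwind advection of u_t + a*u_x = 0 on a periodic 1-d grid,
# the simplest relative of the finite-volume solvers a tutorial like pyro
# covers. This sketch is illustrative, not taken from pyro's code.
def advect(u, a, dx, dt, steps):
    c = a * dt / dx                          # CFL number, assumed 0 < c <= 1
    for _ in range(steps):
        u = u - c * (u - np.roll(u, 1))      # upwind difference for a > 0
    return u

nx = 100
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # top-hat initial profile

dx = 1.0 / nx
dt = dx                        # with a = 1 this gives CFL = 1: exact shift
u = advect(u0, 1.0, dx, dt, nx)   # one full trip around the periodic domain
assert np.allclose(u, u0)         # the profile returns to where it started
```

With a CFL number below one the same scheme remains stable but diffuses the profile, which is exactly the behavior such tutorials use to motivate higher-order methods.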
Computational Methods for Atmospheric Science, ATS607 Colorado State University
Collett Jr., Jeffrey L.
Department of Atmospheric Science, Spring 2015. Wednesdays and Fridays @ 11:00-12:15. Room: ENGR Research Center (ERC)… The project is fairly open ended. You may base your project on your research, another atmospheric science topic…
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
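To illustrate the multigrid concept behind this acceleration, here is a minimal two-grid correction cycle for a 1-d Poisson model problem; it is a textbook sketch of the idea, not the scheme implemented in the Proteus code.

```python
import numpy as np

# Two-grid cycle for -u'' = f on [0, 1] with zero boundary values:
# pre-smooth, restrict the residual, solve the coarse problem exactly,
# prolong the correction, post-smooth. A textbook illustration only.

def apply_A(u, h):
    """Matrix-free action of the 1-d discrete Laplacian -u''."""
    Au = np.zeros_like(u)
    Au[1:-1] = (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return Au

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted Jacobi smoother for the same operator."""
    for _ in range(sweeps):
        u[1:-1] += w * (0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1]) - u[1:-1])
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                        # pre-smooth
    r = f - apply_A(u, h)                                # fine-grid residual
    rc = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])  # full weighting
    H = 2.0 * h
    nc = rc.size                                         # coarse interior size
    Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / H**2
    ec = np.linalg.solve(Ac, rc)                         # exact coarse solve
    e = np.zeros_like(u)                                 # prolongation:
    e[2:-1:2] = ec                                       #   inject coarse pts
    e[1::2] = 0.5 * (e[0:-1:2] + e[2::2])                #   interpolate between
    return jacobi(u + e, f, h, sweeps=3)                 # correct, post-smooth

n = 65                                     # 2^6 + 1 nodes with boundaries
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)           # exact solution is sin(pi x)
u = np.zeros(n)

r0 = np.linalg.norm(f - apply_A(u, h))
for _ in range(5):
    u = two_grid(u, f, h)
assert np.linalg.norm(f - apply_A(u, h)) < 1e-2 * r0   # fast residual decay
```

A few cycles reduce the residual by orders of magnitude, whereas the smoother alone would stall on smooth error components; that contrast is the source of the iteration savings the report measures.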
Xie, Bo; Bugg, Julie M
2009-09-01
An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at a state university. One hundred and thirty-one older adults aged 54-89 participated in the study between September 2007 and July 2008. Key findings include: a) participants had overwhelmingly positive perceptions of the training program; b) after learning about two NIH websites (http://nihseniorhealth.gov and http://medlineplus.gov) from the training, many participants started using these online resources to find high quality health and medical information and, further, to guide their decision-making regarding a health- or medically-related matter; and c) computer anxiety significantly decreased (p < .001) while computer interest and efficacy significantly increased (p = .001 and p < .001, respectively) from pre- to post-training, suggesting statistically significant improvements in computer attitudes between pre- and post-training. The findings have implications for public libraries, LIS academic programs, and other organizations interested in providing similar programs in their communities. PMID:20161649
Merrill, Jacqueline; Bakken, Suzanne; Rockoff, Maxine; Gebbie, Kristine; Carley, Kathleen
2007-01-01
In this case study we describe a method that has potential to provide systematic support for public health information management. Public health agencies depend on specialized information that travels throughout an organization via communication networks among employees. Interactions that occur within these networks are poorly understood and are generally unmanaged. We applied organizational network analysis, a method for studying communication networks, to assess the method’s utility to support decision making for public health managers, and to determine what links existed between information use and agency processes. Data on communication links among a health department’s staff was obtained via survey with a 93% response rate, and analyzed using Organizational Risk Analyzer (ORA) software. The findings described the structure of information flow in the department’s communication networks. The analysis succeeded in providing insights into organizational processes which informed public health managers’ strategies to address problems and to take advantage of network strengths. PMID:17098480
Computer support to Navy Public Works Departments for their utilities function
Fowler, B.
1980-12-01
This thesis explores the requirements for Automated Data Processing (ADP) support to Navy Public Works Departments in their role as Utilities managers for Navy and Marine Corps Shore Stations. Utilities function tasks which can benefit from ADP support are described. Results of a survey questionnaire sent to all sizable Public Works Departments are analyzed, and existing ADP support and additional support requirements for the Public Works Department utilities function are profiled. Alternative sources for Public Works Department utilities ADP support are reviewed in light of the survey results. These alternatives are: Base Engineering Support, Technical (BEST) Program software development for use by large computer installations for Public Works Department utilities support; BEST Program acquisition of minicomputer hardware and development of software support; Navy Regional Data ADP Center (NARDAC) batch-processed and timeshare support; Shipboard Non-tactical ADP Program (SNAP) support; and commercial timeshare service support. Recommendations are made for target Public Works Department criteria, utilities function support system ADP requirements, and further study of ADP support sources.
NASA Astrophysics Data System (ADS)
Piotrowski, Adam P.; Napiorkowski, Jaros?aw J.
2011-09-01
Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm, probably the most efficient in terms of speed and success rate among gradient-based methods. The Annapolis River catchment was selected as the area of this study due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and presence of river ice, conditions which make flow forecasting more troublesome.
The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural networks training turns out to be superior to other Evolutionary Computation-based algorithms. The Levenberg-Marquardt optimization must be considered as the most efficient one due to its speed. Its drawback due to possible sticking in poor local optimum can be overcome by applying a multi-start approach.
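As an illustration of the first of these methods, here is a minimal sketch of classic DE/rand/1/bin Differential Evolution, shown minimizing a simple sphere function as a stand-in for a network-training loss; the control parameters F and CR, the population size, the bounds, and the test function are illustrative choices, not those used in the study.

```python
import random

# Classic DE/rand/1/bin differential evolution (the "first version of DE"
# above). All parameter values here are illustrative.
def sphere(x):
    return sum(v * v for v in x)

def de(f, dim, pop_size=20, F=0.8, CR=0.9, iters=300, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct individuals other than the target i.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Mutation: v = a + F * (b - c)
            v = [a[k] + F * (b[k] - c[k]) for k in range(dim)]
            # Binomial crossover, guaranteeing one component from v.
            jrand = rng.randrange(dim)
            trial = [v[k] if (k == jrand or rng.random() < CR) else pop[i][k]
                     for k in range(dim)]
            # Greedy selection: keep the better of target and trial.
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

best = de(sphere, dim=5)
assert sphere(best) < 1e-2    # the population collapses onto the minimum at 0
```

Training a neural network this way simply means making f the forecasting error as a function of the flattened weight vector; the variants compared in the study differ mainly in how the mutation donors and the parameters F and CR are chosen.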
Variational-moment method for computing magnetohydrodynamic equilibria
Lao, L.L.
1983-08-01
A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method has since been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.
The spectral-element method, Beowulf computing, and global seismology.
Komatitsch, Dimitri; Ritsema, Jeroen; Tromp, Jeroen
2002-11-29
The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables us to simulate broadband seismograms without intrinsic restrictions on the level of heterogeneity or the frequency content. PMID:12459579
Computation of Pressurized Gas Bearings Using CE/SE Method
NASA Technical Reports Server (NTRS)
Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.
2003-01-01
The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for such problems.
Three-dimensional cardiac computational modelling: methods, features and applications.
Lopez-Perez, Alejandro; Sebastian, Rafael; Ferrero, Jose M
2015-01-01
The combination of computational models and biophysical simulations can help to interpret an array of experimental data and contribute to the understanding, diagnosis and treatment of complex diseases such as cardiac arrhythmias. For this reason, three-dimensional (3D) cardiac computational modelling is currently a rising field of research. The advance of medical imaging technology over the last decades has allowed the evolution from generic to patient-specific 3D cardiac models that faithfully represent the anatomy and different cardiac features of a given living subject. Here we analyse sixty representative 3D cardiac computational models developed and published during the last fifty years, describing their information sources, features, development methods and online availability. This paper also reviews the necessary components to build a 3D computational model of the heart aimed at biophysical simulation, paying special attention to cardiac electrophysiology (EP), and the existing approaches to incorporate those components. We assess the challenges associated with the different steps of the building process, from the processing of raw clinical or biological data to the final application, including image segmentation, inclusion of substructures and meshing, among others. We briefly outline the personalisation approaches that are currently available in 3D cardiac computational modelling. Finally, we present examples of several specific applications, mainly related to cardiac EP simulation and model-based image analysis, showing the potential usefulness of 3D cardiac computational modelling in clinical environments as a tool to aid in the prevention, diagnosis and treatment of cardiac diseases. PMID:25928297
Digital data storage systems, computers, and data verification methods
Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.
2005-12-27
Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
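The compare-two-hashes idea can be sketched in a few lines; SHA-256 and the "name:value" record format below are illustrative stand-ins, not details from the patent.

```python
import hashlib

# Hash a canonical serialization of a portion of a dynamic database at two
# moments in time, then compare the digests to detect modification.
def snapshot_hash(records):
    h = hashlib.sha256()
    for rec in sorted(records):              # canonical record order
        h.update(rec.encode("utf-8"))
        h.update(b"\x00")                    # unambiguous record separator
    return h.hexdigest()

first = snapshot_hash(["alice:100", "bob:250"])     # initial moment in time

# Unchanged data yields an identical hash, regardless of storage order...
assert snapshot_hash(["bob:250", "alice:100"]) == first

# ...while any modification of the portion is detected by the comparison.
assert snapshot_hash(["alice:999", "bob:250"]) != first
```

The canonical ordering and explicit record separator matter: without them, two different databases could serialize to the same byte stream and hash identically.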
Computing the Casimir energy using the point-matching method
Lombardo, F. C.; Mazzitelli, F. D. [Departamento de Fisica Juan Jose Giambiagi, FCEyN University of Buenos Aires, Facultad de Ciencias Exactas y Naturales, Ciudad Universitaria, Pabellon I, 1428 Buenos Aires (Argentina); Vazquez, M. [Computer Applications on Science and Engineering Department, Barcelona Supercomputing Center (BSC), 29, Jordi Girona 08034 Barcelona (Spain); Villar, P. I. [Departamento de Fisica Juan Jose Giambiagi, FCEyN University of Buenos Aires, Facultad de Ciencias Exactas y Naturales, Ciudad Universitaria, Pabellon I, 1428 Buenos Aires (Argentina); Computer Applications on Science and Engineering Department, Barcelona Supercomputing Center (BSC), 29, Jordi Girona 08034 Barcelona (Spain)
2009-09-15
We use a point-matching approach to numerically compute the Casimir interaction energy for a waveguide formed by two perfect conductors of arbitrary section. We present the method and describe the procedure used to obtain the numerical results. First, our technique is tested on geometries with known solutions, such as concentric and eccentric cylinders. Then, we apply the point-matching technique to compute the Casimir interaction energy for new geometries such as concentric corrugated cylinders and cylinders inside conductors with focal lines.
Application of finite element method to hypersonic nozzle flow computations
NASA Astrophysics Data System (ADS)
Koschel, W.; Rick, W.; Bikker, S.
1992-02-01
An explicit Taylor-Galerkin Finite Element Method (FEM) algorithm, used for the solution of Euler/Navier-Stokes equations, is applied for the computation of steady-state frozen equilibrium flow in single expansion ramp nozzles (SERN) and in plug nozzles for hypersonic propulsion systems. External flow conditions are taken into account. For the determination of nozzle performance a detailed 2D/3D-flow analysis in regions with complex geometries was performed using unstructured computational grids with adaptive mesh refinement. Some results for the investigated nozzle configurations at different flight conditions are presented and discussed. Additionally, thrust vectoring by modification of the lower nozzle flap shape was studied.
A new method for computing Moore-Penrose inverse matrices
NASA Astrophysics Data System (ADS)
Toutounian, F.; Ataei, A.
2009-06-01
The Moore-Penrose inverse of an arbitrary matrix (including singular and rectangular) has many applications in statistics, prediction theory, control system analysis, curve fitting and numerical analysis. In this paper, an algorithm based on the conjugate Gram-Schmidt process and the Moore-Penrose inverse of partitioned matrices is proposed for computing the pseudoinverse of an m×n real matrix A with m>=n and rank r<=n. Numerical experiments show that the resulting pseudoinverse matrix is reasonably accurate and its computation time is significantly less than that of pseudoinverses obtained by the other methods for large sparse matrices.
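A quick way to experiment with pseudoinverses is NumPy's SVD-based numpy.linalg.pinv; this is not the conjugate Gram-Schmidt algorithm proposed in the paper, only a reference computation that checks the properties any pseudoinverse must satisfy.

```python
import numpy as np

# Reference pseudoinverse via NumPy's SVD-based numpy.linalg.pinv,
# used here to verify the defining Penrose conditions numerically.
rng = np.random.default_rng(0)
m, n = 6, 4
A = rng.standard_normal((m, n))        # m >= n; full column rank here

A_pinv = np.linalg.pinv(A)

# The four Penrose conditions characterize the pseudoinverse uniquely:
assert np.allclose(A @ A_pinv @ A, A)
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)
assert np.allclose((A @ A_pinv).T, A @ A_pinv)
assert np.allclose((A_pinv @ A).T, A_pinv @ A)

# With full column rank, the pseudoinverse equals (A^T A)^{-1} A^T:
assert np.allclose(A_pinv, np.linalg.inv(A.T @ A) @ A.T)
```

For the large sparse matrices the paper targets, forming A^T A explicitly would be wasteful, which is what motivates iterative schemes like the one proposed.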
The class polynomial HK of a quadratic imaginary field: a p-adic method for computing HK
Belding, Juliana
A p-adic algorithm to compute the canonical lift for p inert in K. Algorithms to compute HK(X). Computing the Hilbert…
29 CFR 779.266 - Methods of computing annual volume of sales or business.
Code of Federal Regulations, 2014 CFR
2014-07-01
…Methods of computing annual volume of sales or business. 779.266 Section…Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No...
29 CFR 779.266 - Methods of computing annual volume of sales or business.
Code of Federal Regulations, 2012 CFR
2012-07-01
…Methods of computing annual volume of sales or business. 779.266 Section…Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No...
29 CFR 779.342 - Methods of computing annual volume of sales.
Code of Federal Regulations, 2014 CFR
2014-07-01
…Methods of computing annual volume of sales. 779.342 Section 779…Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...
29 CFR 779.342 - Methods of computing annual volume of sales.
Code of Federal Regulations, 2013 CFR
2013-07-01
…Methods of computing annual volume of sales. 779.342 Section 779…Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...
29 CFR 779.342 - Methods of computing annual volume of sales.
Code of Federal Regulations, 2011 CFR
2011-07-01
…Methods of computing annual volume of sales. 779.342 Section 779…Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...
29 CFR 779.342 - Methods of computing annual volume of sales.
Code of Federal Regulations, 2012 CFR
2012-07-01
…Methods of computing annual volume of sales. 779.342 Section 779…Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...
29 CFR 779.266 - Methods of computing annual volume of sales or business.
Code of Federal Regulations, 2013 CFR
2013-07-01
…Methods of computing annual volume of sales or business. 779.266 Section…Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No...
29 CFR 779.266 - Methods of computing annual volume of sales or business.
Code of Federal Regulations, 2011 CFR
2011-07-01
…Methods of computing annual volume of sales or business. 779.266 Section…Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No...
38 CFR 3.25 - Parent's dependency and indemnity compensation (DIC)-Method of payment computation.
Code of Federal Regulations, 2011 CFR
2011-07-01
...Parent's dependency and indemnity compensation (DIC)-Method of payment computation. 3...Parent's dependency and indemnity compensation (DIC)—Method of payment computation. Monthly payments of parents' DIC shall be computed in accordance with...
38 CFR 3.25 - Parent's dependency and indemnity compensation (DIC)-Method of payment computation.
Code of Federal Regulations, 2014 CFR
2014-07-01
...Parent's dependency and indemnity compensation (DIC)-Method of payment computation. 3...Parent's dependency and indemnity compensation (DIC)—Method of payment computation. Monthly payments of parents' DIC shall be computed in accordance with...
38 CFR 3.25 - Parent's dependency and indemnity compensation (DIC)-Method of payment computation.
Code of Federal Regulations, 2012 CFR
2012-07-01
...Parent's dependency and indemnity compensation (DIC)-Method of payment computation. 3...Parent's dependency and indemnity compensation (DIC)—Method of payment computation. Monthly payments of parents' DIC shall be computed in accordance with...
38 CFR 3.25 - Parent's dependency and indemnity compensation (DIC)-Method of payment computation.
Code of Federal Regulations, 2013 CFR
2013-07-01
...Parent's dependency and indemnity compensation (DIC)-Method of payment computation. 3...Parent's dependency and indemnity compensation (DIC)—Method of payment computation. Monthly payments of parents' DIC shall be computed in accordance with...
29 CFR 779.342 - Methods of computing annual volume of sales.
Code of Federal Regulations, 2010 CFR
2010-07-01
…Methods of computing annual volume of sales. 779.342 Section 779…Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...
Calculational Methods
Holzwarth, Natalie
The computational methods used in this work were the same as those used… electrolytes for use in rechargeable batteries and other applications. The films have compositions close to that of crystalline Li3PO4 and ionic conductivities of 10^-6 S/cm. In previous work [2], we investigated detailed…
Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment
Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon
2013-01-01
Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906
Srinivasan, D.; Chang, C.S.; Liew, A.C.
1995-11-01
This paper describes the implementation and forecasting results of a hybrid fuzzy neural technique, which combines neural network modeling with techniques from fuzzy logic and fuzzy set theory for electric load forecasting. The strengths of this powerful technique lie in its ability to forecast accurately on weekdays as well as on weekends, public holidays, and days before and after public holidays. Furthermore, the use of fuzzy logic effectively handles load variations due to special events. The Fuzzy-Neural Network (FNN) has been extensively tested on actual data obtained from a power system for 24-hour-ahead prediction based on forecast weather information. Very impressive results, with an average error of 0.62% on weekdays, 0.83% on Saturdays, and 1.17% on Sundays and public holidays, have been obtained. This approach avoids complex mathematical calculations and training on many years of data, and is simple to implement on a personal computer.
Bayesian statistical methods in public health and medicine.
Etzioni, R D; Kadane, J B
1995-01-01
This article reviews the Bayesian statistical approach to the design and analysis of research studies in the health sciences. The central idea of the Bayesian method is the use of study data to update the state of knowledge about a quantity of interest. In study design, the Bayesian approach explicitly incorporates expressions for the loss resulting from an incorrect decision at the end of the study. The Bayesian method also provides a flexible framework for the monitoring of sequential clinical trials. We present several examples of Bayesian methods in practice including a study of disease progression in AIDS, a comparison of two therapies in a clinical trial, and a case-control study investigating the link between dietary factors and breast cancer. PMID:7639872
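The central idea described above, using study data to update a state of knowledge, is easiest to see in a conjugate model. The following sketch is illustrative only; the Beta prior and the trial counts are invented for the example, not taken from the studies reviewed:

```python
def beta_posterior(prior_a, prior_b, successes, failures):
    """Conjugate Beta-Binomial update: a Beta(a, b) prior combined with
    binomial data yields a Beta(a + successes, b + failures) posterior."""
    return prior_a + successes, prior_b + failures

def beta_mean(a, b):
    return a / (a + b)

# Prior belief: response rate near 30%, worth about 10 observations: Beta(3, 7).
a0, b0 = 3, 7
# Hypothetical trial data: 18 responders out of 40 patients.
a1, b1 = beta_posterior(a0, b0, 18, 22)
posterior_mean = beta_mean(a1, b1)  # lies between the prior mean 0.3 and the data rate 0.45
```

The posterior mean here is (3 + 18) / (3 + 7 + 40) = 0.42, showing how the data pull the estimate away from the prior in proportion to the sample size.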
Experiences using DAKOTA stochastic expansion methods in computational simulations.
Templeton, Jeremy Alan; Ruthruff, Joseph R.
2012-01-01
Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.
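Stochastic expansion methods of the kind DAKOTA implements represent a response as a series in orthogonal polynomials of the random inputs. As a minimal, hedged illustration (a one-dimensional polynomial chaos projection with a Gauss-Hermite rule, not DAKOTA's actual implementation):

```python
import math

def he(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the three-term recurrence."""
    if k == 0:
        return 1.0
    h_prev, h = 1.0, x
    for n in range(1, k):
        h_prev, h = h, x * h - n * h_prev
    return h

# 3-point Gauss-Hermite rule for the standard normal weight (exact to degree 5).
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def pce_coeffs(f, order):
    """Spectral projection: c_k = E[f(X) He_k(X)] / k! for X ~ N(0, 1)."""
    coeffs = []
    for k in range(order + 1):
        moment = sum(w * f(x) * he(k, x) for w, x in zip(weights, nodes))
        coeffs.append(moment / math.factorial(k))
    return coeffs

# f(x) = x^2 has the exact expansion He_0(x) + He_2(x), i.e. coefficients [1, 0, 1].
coeffs = pce_coeffs(lambda x: x * x, 2)
```

Once the coefficients are known, response statistics follow analytically (the mean is c_0, the variance is the weighted sum of the squared higher coefficients), which is what makes such expansions attractive for UQ.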
Benchmarking Gas Path Diagnostic Methods: A Public Approach
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene
2008-01-01
Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
Domain decomposition methods for the parallel computation of reacting flows
NASA Technical Reports Server (NTRS)
Keyes, David E.
1988-01-01
Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
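The subdomain coordination described above can be sketched in one dimension. The following is a toy alternating Schwarz iteration for -u'' = 1 on (0,1) with two overlapping subdomains, each solved directly; it is purely illustrative and unrelated to the paper's Encore Multimax codes:

```python
def thomas(a, b, c, d):
    """Direct tridiagonal solve (sub-, main, super-diagonal, right-hand side)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def schwarz_solve(n=19, sweeps=100):
    """Alternating Schwarz for -u'' = 1 on (0,1), u(0) = u(1) = 0, discretized
    with second differences; two overlapping subdomains are solved in turn,
    each taking Dirichlet data from the current global iterate."""
    h = 1.0 / (n + 1)
    u = [0.0] * n                         # interior unknowns, u_i ~ u((i+1)h)
    subdomains = [(0, 11), (8, 18)]       # overlapping index ranges, inclusive
    for _ in range(sweeps):
        for lo, hi in subdomains:
            m = hi - lo + 1
            a, b, c = [-1.0] * m, [2.0] * m, [-1.0] * m
            d = [h * h] * m               # f = 1, scaled by h^2
            d[0] += u[lo - 1] if lo > 0 else 0.0       # interface data from
            d[-1] += u[hi + 1] if hi < n - 1 else 0.0  # the global iterate
            u[lo:hi + 1] = thomas(a, b, c, d)
    return u, h

u, h = schwarz_solve()  # converges to u(x) = x(1 - x)/2 at the nodes
```

The two subdomain solves are the independently parallelizable work; the interface exchange at `d[0]`/`d[-1]` is the "periodic coordination between processors" the abstract refers to.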
Computational methods to determine the structure of hydrogen storage materials
NASA Astrophysics Data System (ADS)
Mueller, Tim
2009-03-01
To understand the mechanisms and thermodynamics of material-based hydrogen storage, it is important to know the structure of the material and the positions of the hydrogen atoms within the material. Because hydrogen can be difficult to resolve experimentally, computational research has proven to be a valuable tool to address these problems. We discuss different computational methods for identifying the structure of hydrogen storage materials and the positions of hydrogen atoms, and we illustrate the methods with specific examples. Through the use of ab-initio molecular dynamics, we identify molecular hydrogen binding sites in the metal-organic framework commonly known as MOF-5 [1]. We present a method to identify the positions of atomic hydrogen in imide structures using a novel type of effective Hamiltonian. We apply this new method to lithium imide (Li2NH), a potentially important hydrogen storage material, and demonstrate that it predicts a new ground state structure [2]. We also present the results of a recent computational study of the room-temperature structure of lithium imide in which we suggest a new structure that reconciles the differences between previous experimental and theoretical studies. [1] T. Mueller and G. Ceder, Journal of Physical Chemistry B 109, 17974 (2005). [2] T. Mueller and G. Ceder, Physical Review B 74 (2006).
ERIC Educational Resources Information Center
Adams, Stephen T.
2003-01-01
The "Convince Me" computer environment supports critical thinking by allowing users to create and evaluate computer-based representations of arguments. This study investigates theoretical and design considerations pertinent to using "Convince Me" as an educational tool to support reasoning about public policy issues. Among computer environments…
Informed public choices for low-carbon electricity portfolios using a computer decision tool.
Mayer, Lauren A Fleishman; Bruine de Bruin, Wändi; Morgan, M Granger
2014-04-01
Reducing CO2 emissions from the electricity sector will likely require policies that encourage the widespread deployment of a diverse mix of low-carbon electricity generation technologies. Public discourse informs such policies. To make informed decisions and to productively engage in public discourse, citizens need to understand the trade-offs between electricity technologies proposed for widespread deployment. Building on previous paper-and-pencil studies, we developed a computer tool that aimed to help nonexperts make informed decisions about the challenges faced in achieving a low-carbon energy future. We report on an initial usability study of this interactive computer tool. After providing participants with comparative and balanced information about 10 electricity technologies, we asked them to design a low-carbon electricity portfolio. Participants used the interactive computer tool, which constrained portfolio designs to be realistic and yield low CO2 emissions. As they changed their portfolios, the tool updated information about projected CO2 emissions, electricity costs, and specific environmental impacts. As in the previous paper-and-pencil studies, most participants designed diverse portfolios that included energy efficiency, nuclear, coal with carbon capture and sequestration, natural gas, and wind. Our results suggest that participants understood the tool and used it consistently. The tool may be downloaded from http://cedmcenter.org/tools-for-cedm/informing-the-public-about-low-carbon-technologies/ . PMID:24564708
Balázs, Bánhelyi
Research Support in Hungary: machine scheduling, LED public lighting, microsimulation; 2011 Industrial Innovation Problems
Reducing Total Power Consumption Method in Cloud Computing Environments
Kuribayashi, Shin-ichi
2012-01-01
The widespread use of cloud computing services is expected to rapidly increase the power consumed by ICT equipment in cloud computing environments. This paper first identifies the need for collaboration among servers, the communication network, and the power network in order to reduce the total power consumption of the ICT equipment in cloud computing environments. Five fundamental policies for the collaboration are proposed and the algorithm to realize each collaboration policy is outlined. Next, this paper proposes possible signaling sequences for exchanging information on power consumption between the network and servers, in order to realize the proposed collaboration policies. Then, in order to reduce the power consumption of the network, this paper proposes a simple method of estimating the power consumed by all network devices and assigning it to individual users.
Improved diffraction computation with a hybrid C-RCWA-method
NASA Astrophysics Data System (ADS)
Bischoff, Joerg
2009-03-01
The Rigorous Coupled Wave Approach (RCWA) is acknowledged as a well established diffraction simulation method in electro-magnetic computing. Its two most essential applications in the semiconductor industry are in optical scatterometry and optical lithography simulation. In scatterometry, it is the standard technique to simulate spectra or diffraction responses for gratings to be characterized. In optical lithography simulation, it is an effective alternative to supplement or even to replace the FDTD for the calculation of light diffraction from thick masks as well as from wafer topographies. Unfortunately, the RCWA shows some serious disadvantages particularly for the modelling of grating profiles with shallow slopes and multilayer stacks with many layers such as extreme UV masks with large number of quarter wave layers. Here, the slicing may become a nightmare and also the computation costs may increase dramatically. Moreover, the accuracy is suffering due to the inadequate staircase approximation of the slicing in conjunction with the boundary conditions in TM polarization. On the other hand, the Chandezon Method (C-Method) solves all these problems in a very elegant way, however, it fails for binary patterns or gratings with very steep profiles where the RCWA works excellent. Therefore, we suggest a combination of both methods as plug-ins in the same scattering matrix coupling frame. The improved performance and the advantages of this hybrid C-RCWA-Method over the individual methods is shown with some relevant examples.
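The "scattering matrix coupling frame" that lets two such methods be combined rests on the Redheffer star product, which composes the scattering matrices of adjacent slabs while resumming the multiple reflections between them. A minimal scalar (single-mode) sketch, not the full multimode RCWA machinery:

```python
def star(A, B):
    """Redheffer star product of two scalar (single-mode) 2-port scattering
    matrices, each given as (s11, s12, s21, s22), with s12/s21 transmissions.
    The 1/(1 - a22*b11) factor resums the multiple reflections in the gap."""
    a11, a12, a21, a22 = A
    b11, b12, b21, b22 = B
    den = 1.0 - a22 * b11
    return (a11 + a12 * b11 * a21 / den,
            a12 * b12 / den,
            b21 * a21 / den,
            b22 + b21 * a22 * b12 / den)

identity = (0j, 1 + 0j, 1 + 0j, 0j)               # perfectly transparent interface
layer = (0.2 + 0.1j, 0.9 + 0j, 0.9 + 0j, -0.1j)   # made-up slab S-matrix
combined = star(identity, layer)                  # equals `layer` itself
```

Because the product is associative, slabs computed by different engines (here, C-method slices and RCWA slices) can be chained in any grouping, which is exactly what makes the hybrid coupling possible.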
A hierarchical method for molecular docking using cloud computing.
Kang, Ling; Guo, Quan; Wang, Xicheng
2012-11-01
Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small molecular databases using cloud computing is proposed. This method is called the hierarchical method for molecular docking and can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on the different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs. PMID:23017886
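The paper's adaptive genetic algorithm is not spelled out in the abstract; as a generic illustration of the optimization style involved, here is a toy real-coded GA minimizing a smooth stand-in objective (all parameters and the objective are invented for the example):

```python
import random

def genetic_minimize(f, bounds, pop_size=40, generations=80, seed=1):
    """Toy real-coded GA: elitism, tournament selection of size 3, blend
    crossover, and occasional Gaussian mutation of a single gene."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(generations):
        new = [best[:]]                               # keep the elite
        while len(new) < pop_size:
            p1 = min(rng.sample(pop, 3), key=f)       # tournament winners
            p2 = min(rng.sample(pop, 3), key=f)
            child = [a + rng.random() * (b - a) for a, b in zip(p1, p2)]
            if rng.random() < 0.3:                    # mutate one gene
                i = rng.randrange(len(bounds))
                child[i] += rng.gauss(0.0, 0.1)
            child = [min(max(v, lo), hi) for v, (lo, hi) in zip(child, bounds)]
            new.append(child)
        pop = new
        best = min(pop, key=f)
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)   # smooth stand-in for a docking score
xbest, fbest = genetic_minimize(sphere, [(-5.0, 5.0)] * 3)
```

In a docking setting the chromosome would encode ligand pose and torsions and `f` would be an interaction-energy score; the hierarchical idea in the paper splits that score into subproblems optimized separately.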
Advanced Computational Aeroacoustics Methods for Fan Noise Prediction
NASA Technical Reports Server (NTRS)
Envia, Edmane (Technical Monitor); Tam, Christopher
2003-01-01
Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body-fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are invariably used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative, and dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated, and it can be improved by adopting an upwinding strategy.
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
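The idea of concentrating samples near the failure domain can be sketched with a fixed (non-adaptive) importance sampling density; the shifted-normal proposal below is a deliberate simplification of the paper's adaptive scheme:

```python
import math
import random

def failure_prob_is(beta=3.0, n=20000, seed=7):
    """Importance-sampling estimate of P(Z > beta) for Z ~ N(0, 1), drawing
    from a proposal N(beta, 1) centered on the failure boundary. The
    likelihood ratio phi(x)/phi(x - beta) simplifies to exp(-beta*x + beta^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(beta, 1.0)                    # sample near the boundary
        if x > beta:                                # failure indicator
            total += math.exp(-beta * x + 0.5 * beta * beta)
    return total / n

p_fail = failure_prob_is()
exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))       # 1 - Phi(3), about 1.35e-3
```

Crude Monte Carlo would see a failure only about once per 740 samples here; shifting the sampling density makes roughly half the samples informative, which is the variance reduction the AIS method then refines adaptively.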
Implementation of an ADI method on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1987-01-01
The implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the FLEX/32 and CRAY/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
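The cyclic elimination used on the MPP replaces the inherently sequential recurrences of Gaussian elimination with levels of mutually independent eliminations. A serial Python sketch of odd-even cyclic reduction (the SIMD parallelism lies in the independence of each level's loop iterations):

```python
def cyclic_reduction(a, b, c, d):
    """Tridiagonal solve by odd-even cyclic reduction for size n = 2^k - 1.
    Convention: a[0] = 0 and c[-1] = 0 (no coefficients outside the system)."""
    n = len(d)
    if n == 1:
        return [d[0] / b[0]]
    na, nb = [0.0] * (n // 2), [0.0] * (n // 2)
    nc, nd = [0.0] * (n // 2), [0.0] * (n // 2)
    for j in range(n // 2):              # fold even rows into the odd rows;
        i = 2 * j + 1                    # these eliminations are independent
        al = a[i] / b[i - 1]
        ga = c[i] / b[i + 1]
        na[j] = -al * a[i - 1]
        nb[j] = b[i] - al * c[i - 1] - ga * a[i + 1]
        nc[j] = -ga * c[i + 1]
        nd[j] = d[i] - al * d[i - 1] - ga * d[i + 1]
    half = cyclic_reduction(na, nb, nc, nd)
    x = [0.0] * n
    for j in range(n // 2):
        x[2 * j + 1] = half[j]
    for j in range(0, n, 2):             # back-substitute the even rows
        left = x[j - 1] if j > 0 else 0.0
        right = x[j + 1] if j < n - 1 else 0.0
        x[j] = (d[j] - a[j] * left - c[j] * right) / b[j]
    return x

# Demo: the 1D Poisson system -u'' = 1 on (0,1) with 7 interior points.
h = 1.0 / 8.0
x = cyclic_reduction([0.0] + [-1.0] * 6, [2.0] * 7,
                     [-1.0] * 6 + [0.0], [h * h] * 7)
```

Each reduction level halves the system at the cost of extra arithmetic, which is why cyclic elimination pays off on massively parallel bit-serial hardware but not on the serial or coarse-grained machines, where the Thomas-style elimination is cheaper.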
Practical methods to improve the development of computational software
Osborne, A. G.; Harding, D. W.; Deinert, M. R.
2013-07-01
The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)
Computation of multi-material interactions using point method
Zhang, Duan Z; Ma, Xia; Giguere, Paul T
2009-01-01
Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are applied to problems of solid deformation, the state variables, such as stress and damage, need to be advected, causing significant numerical diffusion error. When Lagrangian methods are applied to problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error, and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method, the Eulerian meshes stay fixed and the Lagrangian particles move through the Eulerian meshes during the material deformation. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun. v. 87, pp. 236) provides a mathematical foundation for an improved version, the material point method (MPM), of the PIC method. The unique advantages of the MPM have led to many attempts to apply the method to problems involving the interaction of different materials, such as fluid-structure interactions. These problems are multiphase flow or multimaterial deformation problems. In these problems, pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the same scheme used in Eulerian methods for multiphase flows is used to calculate the pressure. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to a higher order of accuracy in the sense of weak solutions for the continuity equations.
Numerical examples are given to demonstrate the new scheme.
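The particle-to-grid transfer at the heart of PIC/MPM schemes can be sketched in one dimension with linear shape functions (a generic textbook form, not the higher-order scheme the paper introduces):

```python
def particles_to_grid(xp, mp, vp, h, n_nodes):
    """Scatter particle mass and momentum to grid nodes using linear (tent)
    shape functions: the first step of a PIC/MPM cycle. Assumes every
    particle lies strictly inside the grid, 0 <= x < (n_nodes - 1) * h."""
    mass = [0.0] * n_nodes
    momentum = [0.0] * n_nodes
    for x, m, v in zip(xp, mp, vp):
        i = int(x / h)               # left node of the cell containing x
        w = x / h - i                # fractional position within the cell
        mass[i] += (1.0 - w) * m
        mass[i + 1] += w * m
        momentum[i] += (1.0 - w) * m * v
        momentum[i + 1] += w * m * v
    # grid velocities; mass and momentum are conserved exactly by the transfer
    vel = [p / m if m > 0.0 else 0.0 for p, m in zip(momentum, mass)]
    return mass, momentum, vel

mass, momentum, vel = particles_to_grid(
    [0.25, 0.6, 1.3], [1.0, 2.0, 1.0], [1.0, -1.0, 2.0], h=0.5, n_nodes=4)
```

The grid then advances the momentum equation on fixed nodes and the updated velocities are interpolated back to the particles, which is how the method keeps Lagrangian state variables while avoiding mesh entanglement.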
Statistical methods for dealing with publication bias in meta-analysis.
Jin, Zhi-Chao; Zhou, Xiao-Hua; He, Jia
2015-01-30
Publication bias is an inevitable problem in the systematic review and meta-analysis. It is also one of the main threats to the validity of meta-analysis. Although several statistical methods have been developed to detect and adjust for the publication bias since the beginning of 1980s, some of them are not well known and are not being used properly in both the statistical and clinical literature. In this paper, we provided a critical and extensive discussion on the methods for dealing with publication bias, including statistical principles, implementation, and software, as well as the advantages and limitations of these methods. We illustrated a practical application of these methods in a meta-analysis of continuous support for women during childbirth. PMID:25363575
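Among the detection methods the paper surveys, regression-based funnel-plot asymmetry tests are the easiest to sketch. Below is a minimal Egger-style regression using standard least-squares formulas; the toy data are synthetic:

```python
def egger_test(effects, ses):
    """Egger-style funnel-plot asymmetry check: regress the standardized
    effect (effect / SE) on precision (1 / SE) by ordinary least squares.
    A clearly nonzero intercept suggests small-study effects / publication bias."""
    z = [e / s for e, s in zip(effects, ses)]
    prec = [1.0 / s for s in ses]
    n = len(z)
    mx, my = sum(prec) / n, sum(z) / n
    sxx = sum((x - mx) ** 2 for x in prec)
    sxy = sum((x - mx) * (y - my) for x, y in zip(prec, z))
    slope = sxy / sxx                 # estimate of the pooled effect
    intercept = my - slope * mx       # the bias indicator
    return intercept, slope

# A perfectly symmetric toy meta-analysis: every study observes effect 0.5,
# so the intercept is zero and the slope recovers the effect.
intercept, slope = egger_test([0.5] * 5, [0.1, 0.2, 0.3, 0.4, 0.5])
```

In practice the intercept would be tested against its standard error; this sketch omits that inference step and the adjustment methods (e.g. trim-and-fill) the paper also discusses.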
International Association for Cryptologic Research (IACR)
Nonlinear Piece In Hand Perturbation Vector Method for Enhancing Security of Multivariate Public Key Cryptosystems
University 11327 Kasuga, Bunkyo-ku, Tokyo, 1128551 Japan
Abstract. The security of MPKCs is becoming one of the main themes of this area. The piece in hand (PH, for short) matrix method is a general…
Inverse Problem Methods as a Public Health Tool in Pneumococcal Vaccination
Sutton, Karyn L.
…these methods to the study of pneumococcal vaccination strategies as a relevant example which poses many… vaccine policies through the estimation of parameters if vaccine history is recorded along with infection…
Methods of Conserving Heating Energy Utilized in Thirty-One Public School Systems.
ERIC Educational Resources Information Center
Davis, Kathy Eggers
The Memphis City School System was notified by Memphis Light, Gas, and Water that it was necessary to reduce its consumption of natural gas during the winter of 1975-76. A survey was developed and sent to 44 large public school systems to determine which methods of heating energy conservation were used most frequently and which methods were most…
Computational anatomical methods as applied to ageing and dementia.
Thompson, P M; Apostolova, L G
2007-12-01
The cellular hallmarks of Alzheimer's disease (AD) accumulate in the living brain up to 30 years before the characteristic symptoms of dementia can be identified. Brain changes in AD are difficult to distinguish from those in normal ageing, and this has led to the development of powerful computational methods to extract statistical information on the brain changes that are characteristic of AD, mild cognitive impairment (MCI) and different dementia subtypes. Time-lapse maps can be built to show how the disease spreads in the brain, and where treatment affects the disease trajectory. Here, we review three computational approaches to map brain deficits in AD: cortical thickness maps, tensor-based morphometry and hippocampal/ventricular surface modelling. Anatomical structures, modelled as three-dimensional geometrical surfaces, are mathematically combined across subjects for group or interval comparisons. Mathematical concepts from computational surface modelling, fluid mechanics and multivariate statistics are exploited to distinguish disease from normal variations in brain structure. These methods yield insight into the dynamics of AD and MCI, showing where brain changes correlate with cognitive or behavioural changes such as language dysfunction or apathy. We describe cortical and hippocampal changes that distinguish dementia subtypes (such as Lewy-body dementia, HIV-associated dementia and AD), and we describe brain changes that predict recovery or decline in those at risk. Finally, we indicate which computational methods are powerful enough to track dementia in clinical trials, on the basis of their efficiency and sensitivity to early change, and the detail in the measures they provide. PMID:18445748
Estimating cost-effectiveness in public health: a summary of modelling and valuation methods
2012-01-01
It is acknowledged that economic evaluation methods as they have been developed for Health Technology Assessment do not capture all the costs and benefits relevant to the assessment of public health interventions. This paper reviews methods that could be employed to measure and value the broader set of benefits generated by public health interventions. It is proposed that two key developments are required if this vision is to be achieved. First, there is a trend to modelling approaches that better capture the effects of public health interventions. This trend needs to continue, and economists need to consider a broader range of modelling techniques than are currently employed to assess public health interventions. The selection and implementation of alternative modelling techniques should be facilitated by the production of better data on the behavioural outcomes generated by public health interventions. Second, economists are currently exploring a number of valuation paradigms that hold the promise of more appropriate valuation of public health interventions outcomes. These include the capabilities approach and the subjective well-being approach, both of which offer the possibility of broader measures of value than the approaches currently employed by health economists. These developments should not, however, be made by economists alone. These questions, in particular what method should be used to value public health outcomes, require social value judgements that are beyond the capacity of economists. This choice will require consultation with policy makers, and perhaps even the general public. Such collaboration would have the benefit of ensuring that the methods developed are useful for decision makers. PMID:22943762
A numerical method to compute interior transmission eigenvalues
NASA Astrophysics Data System (ADS)
Kleefeld, Andreas
2013-10-01
In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view, existing methods are very expensive and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated close together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than that of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber-Krahn type inequalities for larger transmission eigenvalues that are not yet available.
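The complex-valued contour integral idea can be illustrated on a scalar analytic function: moments of f'/f around a closed contour count the enclosed zeros and recover their locations. A toy sketch only; the paper's operator-valued method is far more involved:

```python
import cmath

def zero_moments(f, df, center, radius, n=200):
    """Trapezoidal approximation of the contour moments
    m_k = (1 / (2*pi*i)) * integral of z^k f'(z)/f(z) dz,  k = 0, 1,
    over a circle. m0 counts the enclosed zeros; for a single enclosed
    zero, m1 is its location."""
    m0 = m1 = 0.0 + 0.0j
    for j in range(n):
        t = 2.0 * cmath.pi * j / n
        z = center + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2.0 * cmath.pi / n)
        g = df(z) / f(z)
        m0 += g * dz
        m1 += z * g * dz
    return m0 / (2j * cmath.pi), m1 / (2j * cmath.pi)

# One zero of z^2 - 2 (namely sqrt(2)) lies inside the circle |z - 1.5| = 0.5.
count, location = zero_moments(lambda z: z * z - 2.0, lambda z: 2.0 * z,
                               center=1.5, radius=0.5)
```

Because the integrand is periodic and analytic on the contour, the trapezoidal rule converges exponentially, which is part of why contour-based eigenvalue solvers can be both accurate and cheap.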
Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems
Cai, Wei
2014-05-15
Understanding electromagnetic phenomena is the key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels in disease, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, divergence-free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long range electrostatic interactions in ion channels, we have developed an image-charge-based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up molecular dynamics simulations of transport in biological ion channels.
Publicity and public relations
NASA Technical Reports Server (NTRS)
Fosha, Charles E.
1990-01-01
This paper addresses approaches to using publicity and public relations to meet the goals of the NASA Space Grant College. Methods universities and colleges can use to publicize space activities are presented.
Graphics processing unit acceleration of computational electromagnetic methods
NASA Astrophysics Data System (ADS)
Inman, Matthew
The use of Graphical Processing Units (GPUs) for scientific applications has been evolving and expanding for over a decade. GPUs provide an alternative to the CPU in the creation and execution of the numerical codes that are often relied upon to perform simulations in computational electromagnetics. While originally designed purely to display graphics on the user's monitor, GPUs today are essentially powerful floating point co-processors that can be programmed not only to render complex graphics, but also to perform the complex mathematical calculations often encountered in scientific computing. GPUs currently being produced often contain hundreds of separate cores able to access large amounts of high-speed dedicated memory. By utilizing the power offered by such a specialized processor, it is possible to drastically speed up the calculations required in computational electromagnetics. This increase in speed allows GPU-based simulations to be used in a variety of situations where computational time has heretofore been a limiting factor, such as in educational courses. Teaching electromagnetics often relies upon simple example problems because of the simulation times needed to analyze more complex ones. The use of GPU-based simulations will be shown to allow demonstration of more advanced problems than previously possible by adapting the methods for use on the GPU. Modules will be developed for a wide variety of teaching situations, utilizing the speed of the GPU to demonstrate techniques and ideas previously unrealizable.
An analytical method for computing atomic contact areas in biomolecules.
Mach, Paul; Koehl, Patrice
2013-01-15
We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc. PMID:22965816
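The spherical-cap bookkeeping in the final step can be illustrated for a single pair of overlapping spheres, where the cap area follows from Archimedes' theorem; this omits the Laguerre Voronoi partitioning used when several neighbors overlap the same atom:

```python
import math

def cap_contact_area(r1, r2, d):
    """Area of the spherical cap that an overlapping sphere 2 (radius r2, at
    center distance d) cuts on the surface of sphere 1 (radius r1). Assumes
    partial overlap (|r1 - r2| < d < r1 + r2); full containment not handled."""
    if d >= r1 + r2:
        return 0.0                                  # spheres do not touch
    x = (d * d + r1 * r1 - r2 * r2) / (2.0 * d)     # plane of intersection circle
    h = r1 - x                                      # height of the cap on sphere 1
    return 2.0 * math.pi * r1 * h                   # Archimedes: A = 2*pi*r*h

area = cap_contact_area(1.0, 1.0, 1.0)  # two unit spheres at distance 1 -> pi
```

The analytical character of this formula (no numerical surface sampling and no explicit solvent) is what makes the alpha-shape approach fast and robust at scale.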
Computational Methods for Nucleosynthesis and Nuclear Energy Generation
W. R. Hix; F. -K. Thielemann
1999-06-29
This review concentrates on the two principal methods used to evolve nuclear abundances within astrophysical simulations, evolution via rate equations and via equilibria. Because in general the rate equations in nucleosynthetic applications form an extraordinarily stiff system, implicit methods have proven mandatory, leading to the need to solve moderately sized matrix equations. Efforts to improve the performance of such rate equation methods are focused on efficient solution of these matrix equations, by making best use of the sparseness of these matrices. Recent work to produce hybrid schemes which use local equilibria to reduce the computational cost of the rate equations is also discussed. Such schemes offer significant improvements in the speed of reaction networks and are accurate under circumstances where calculations with complete equilibrium fail.
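The implicit update at the heart of such rate-equation methods can be sketched in a few lines. The following is an illustrative toy, a linear three-species decay chain rather than a real reaction network; production codes are nonlinear, much larger, and exploit the sparseness of the Jacobian, but the structure of the implicit step is the same.

```python
import numpy as np

# Toy linear decay chain Y1 -> Y2 -> Y3 with widely separated rates,
# i.e. a stiff system. A dense 3x3 solve suffices for this sketch;
# real networks use sparse linear algebra here.
lam1, lam2 = 1.0e6, 1.0

A = np.array([[-lam1,  0.0,  0.0],
              [ lam1, -lam2, 0.0],
              [ 0.0,   lam2, 0.0]])

def backward_euler(Y, h, nsteps):
    """Implicit update: solve (I - h A) Y_{n+1} = Y_n at each step."""
    M = np.eye(3) - h * A
    for _ in range(nsteps):
        Y = np.linalg.solve(M, Y)
    return Y

Y0 = np.array([1.0, 0.0, 0.0])
# A step size far beyond the explicit stability limit remains stable here.
Y = backward_euler(Y0, h=0.1, nsteps=100)
print(Y, Y.sum())  # abundances stay non-negative; the total is conserved
```

Because the columns of the rate matrix sum to zero, the implicit step conserves the total abundance exactly, which is one reason implicit schemes are attractive for these systems.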
A Computational Method for Identifying Yeast Cell Cycle Transcription Factors.
Wu, Wei-Sheng
2016-01-01
The eukaryotic cell cycle is a complex process and is precisely regulated at many levels. Many genes specific to the cell cycle are regulated transcriptionally and are expressed just before they are needed. To understand the cell cycle process, it is important to identify the cell cycle transcription factors (TFs) that regulate the expression of cell cycle-regulated genes. Here, we describe a computational method to identify cell cycle TFs in yeast by integrating current ChIP-chip, mutant, transcription factor-binding site (TFBS), and cell cycle gene expression data. For each identified cell cycle TF, our method also assigned specific cell cycle phases in which the TF functions and identified the time lag for the TF to exert regulatory effects on its target genes. Moreover, our method can identify novel cell cycle-regulated genes as a by-product. PMID:26254926
On implicit Runge-Kutta methods for parallel computations
NASA Technical Reports Server (NTRS)
Keeling, Stephen L.
1987-01-01
Implicit Runge-Kutta methods which are well-suited for parallel computations are characterized. It is claimed that such methods are, first of all, those for which the associated rational approximation to the exponential has distinct poles, and these are called multiply implicit (MIRK) methods. Also, because of the so-called order reduction phenomenon, there is reason to require that these poles be real. Then, it is proved that a necessary condition for a q-stage, real MIRK to be A0-stable with maximal order q + 1 is that q = 1, 2, 3, or 5. Nevertheless, it is shown that for every positive integer q, there exists a q-stage, real MIRK which is I-stable with order q. Finally, some useful examples of algebraically stable MIRKs are given.
COMSAC: Computational Methods for Stability and Control. Part 2
NASA Technical Reports Server (NTRS)
Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)
2004-01-01
The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.
ERIC Educational Resources Information Center
Bessey, Barbara L.; And Others
Graphical methods for displaying data, as well as available computer software and hardware, are reviewed. The authors have emphasized the types of graphs which are most relevant to the needs of the National Center for Education Statistics (NCES) and its readers. The following types of graphs are described: tabulations, stem-and-leaf displays,…
An Overview of a Decade of Journal Publications about Culture and Human-Computer Interaction (HCI)
NASA Astrophysics Data System (ADS)
Clemmensen, Torkil; Roese, Kerstin
In this paper, we analyze the concept of human-computer interaction in cultural and national contexts. Building on and extending the framework for understanding research in usability and culture by Honold [3], we give an overview of publications on culture and HCI between 1998 and 2008, with a narrow focus on high-level journal publications only. The purpose is to review current practice in how cultural HCI issues are studied, and to analyze problems with the measures and interpretation of these studies. We find that Hofstede's cultural dimensions have been the dominant model of culture, that participants have often been selected because they could speak English, and that most studies have been large-scale quantitative studies. In order to balance this situation, we recommend that more researchers and practitioners conduct qualitative, empirical work.
PREFACE: Theory, Modelling and Computational methods for Semiconductors
NASA Astrophysics Data System (ADS)
Migliorato, Max; Probert, Matt
2010-04-01
These conference proceedings contain the written papers of the contributions presented at the 2nd International Conference on: Theory, Modelling and Computational methods for Semiconductors. The conference was held at the St Williams College, York, UK on 13th-15th Jan 2010. The previous conference in this series took place in 2008 at the University of Manchester, UK. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in Semiconductor science and technology, where there is a substantial potential for time saving in R&D. The development of high speed computer architectures is finally allowing the routine use of accurate methods for calculating the structural, thermodynamic, vibrational and electronic properties of semiconductors and their heterostructures. This workshop ran for three days, with the objective of bringing together UK and international leading experts in the field of theory of group IV, III-V and II-VI semiconductors together with postdocs and students in the early stages of their careers. The first day focused on providing an introduction and overview of this vast field, aimed particularly at students at this influential point in their careers. We would like to thank all participants for their contribution to the conference programme and these proceedings. We would also like to acknowledge the financial support from the Institute of Physics (Computational Physics group and Semiconductor Physics group), the UK Car-Parrinello Consortium, Accelrys (distributors of Materials Studio) and Quantumwise (distributors of Atomistix). 
The Editors Acknowledgements Conference Organising Committee: Dr Matt Probert (University of York) and Dr Max Migliorato (University of Manchester) Programme Committee: Dr Marco Califano (University of Leeds), Dr Jacob Gavartin (Accelrys Ltd, Cambridge), Dr Stanko Tomic (STFC Daresbury Laboratory), Dr Gabi Slavcheva (Imperial College London) Proceedings edited and compiled by Dr Max Migliorato and Dr Matt Probert
A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data
NASA Technical Reports Server (NTRS)
Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.
2011-01-01
A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself from other possible re-slicing software solutions through its complete automation and its advanced processing and analysis capabilities.
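The unwrap-and-re-slice idea can be sketched directly. The code below is a minimal illustration, not the NASA software: it samples a synthetic CT volume on a cylinder of fixed radius, turning one cylindrical surface into a flat 2-D sheet.

```python
import numpy as np

def unwrap_cylinder(vol, cx, cy, radius, ntheta=360):
    """Sample a CT volume (z, y, x) on a cylinder of the given radius about
    the axis (cx, cy), producing a 2-D 'unwrapped' sheet with theta along one
    axis and z along the other. Nearest-neighbor sampling keeps the sketch
    dependency-free; a real tool would interpolate."""
    theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
    xs = np.rint(cx + radius * np.cos(theta)).astype(int)
    ys = np.rint(cy + radius * np.sin(theta)).astype(int)
    return vol[:, ys, xs]               # shape (nz, ntheta)

# Synthetic test object: a hollow cylindrical shell of radius 20 voxels.
nz, ny, nx = 8, 64, 64
_, yy, xx = np.indices((nz, ny, nx))
r = np.hypot(xx - 32, yy - 32)
vol = ((r > 19.0) & (r < 21.0)).astype(float)

sheet = unwrap_cylinder(vol, cx=32, cy=32, radius=20)
print(sheet.shape)   # one flat sheet per z-slice of the shell
```

Re-slicing at a range of radii would turn the whole annular wall into a stack of such sheets, which is the view the abstract describes.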
A multigrid nonoscillatory method for computing high speed flows
NASA Technical Reports Server (NTRS)
Li, C. P.; Shieh, T. H.
1993-01-01
A multigrid method using different smoothers has been developed to solve the Euler equations discretized by a nonoscillatory scheme up to fourth order accuracy. The best smoothing property is provided by a five-stage Runge-Kutta technique with optimized coefficients, yet the most efficient smoother is a backward Euler technique in factored and diagonalized form. The single-grid solution for a hypersonic, viscous conic flow is in excellent agreement with the solution obtained by the third order MUSCL and Roe's method. Mach 8 inviscid flow computations for a complete entry probe have shown that the accuracy is at least as good as the symmetric TVD scheme of Yee and Harten. The implicit multigrid method is four times more efficient than the explicit multigrid technique and 3.5 times faster than the single-grid implicit technique. For a Mach 8.7 inviscid flow over a blunt delta wing at 30 deg incidence, the CPU reduction factor from the three-level multigrid computation is 2.2 on a grid of 37 x 41 x 73 nodes.
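The smoother-plus-coarse-grid-correction structure underlying such multigrid methods can be illustrated on a model problem. The sketch below uses a weighted Jacobi smoother on the 1-D Poisson equation, not the Runge-Kutta or implicit smoothers of the paper, but the two-grid cycle has the same shape.

```python
import numpy as np

def jacobi_smooth(u, f, h, nsweeps, omega=2.0 / 3.0):
    # Weighted Jacobi sweeps for -u'' = f on a uniform 1-D grid (Dirichlet BCs).
    for _ in range(nsweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    """One cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi_smooth(u, f, h, 3)
    rc = residual(u, f, h)[::2].copy()       # restrict residual by injection
    n_c = rc.size
    # Solve the coarse error equation exactly (small tridiagonal system).
    Ac = (np.diag(2 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
          - np.diag(np.ones(n_c - 3), -1)) / (2 * h) ** 2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
    return jacobi_smooth(u + e, f, h, 3)

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)           # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)   # reduced to the discretization-error level after a few cycles
```

The same skeleton generalizes to three grid levels and to the Euler equations; only the smoother and the transfer operators change.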
Computational Solid Mechanics using a Vertex-based Finite Volume Method
Taylor, Gary
Computational Solid Mechanics using a Vertex-based Finite Volume Method. G. A. Taylor, C. Bailey … using finite volume (FV) methods for computational solid mechanics (CSM). These methods are proving … As a contemporary, the FV method has similarly established itself within the field of computational fluid dynamics (CFD)
NSDL National Science Digital Library
2008-02-01
This is an article from The Physiologist. "If you have published an article in one of the APS research journals in the second half of 2007, you may have noticed that the time it took from acceptance of your manuscript to final publication was much shorter than in the past. That is because the Publications Department decreased that time from an average of four months to two and a half months."
Computation of Sound Propagation by Boundary Element Method
NASA Technical Reports Server (NTRS)
Guo, Yueping
2005-01-01
This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied. They are, respectively, the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows.
The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which are used to compare the BEM solutions. The comparisons show very good agreements and validate the accuracy of the BEM approach implemented here.
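The collocation step, which turns an integral equation into a matrix equation, can be illustrated on a generic 1-D Fredholm equation of the second kind. This is not the report's convective-wave formulation; it is a minimal sketch of the same discretization idea with a kernel chosen so the exact solution is known.

```python
import numpy as np

# Collocation for a 1-D Fredholm equation of the second kind:
#   u(x) - \int_0^1 K(x,y) u(y) dy = f(x)
# With K(x,y) = x*y and f(x) = (2/3)*x the exact solution is u(x) = x.
n = 50
x = (np.arange(n) + 0.5) / n        # midpoint collocation nodes
w = 1.0 / n                          # midpoint quadrature weight
K = np.outer(x, x)                   # kernel evaluated at (x_i, y_j)
f = (2.0 / 3.0) * x

# Collocation turns the integral equation into the matrix equation
#   (I - w K) u = f,
# the direct analogue of the BEM surface discretization.
u = np.linalg.solve(np.eye(n) - w * K, f)
print(np.max(np.abs(u - x)))         # close to zero
```

In the BEM itself the nodes live on body surfaces and the kernel is the convected Green's function, but the resulting linear system has exactly this structure.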
Computational Studies of Protein Aggregation: Methods and Applications
NASA Astrophysics Data System (ADS)
Morriss-Andrews, Alex; Shea, Joan-Emma
2015-04-01
Protein aggregation involves the self-assembly of normally soluble proteins into large supramolecular assemblies. The typical end product of aggregation is the amyloid fibril, an extended structure enriched in β-sheet content. The aggregation process has been linked to a number of diseases, most notably Alzheimer's disease, but fibril formation can also play a functional role in certain organisms. This review focuses on theoretical studies of the process of fibril formation, with an emphasis on the computational models and methods commonly used to tackle this problem.
Assessment of nonequilibrium radiation computation methods for hypersonic flows
NASA Technical Reports Server (NTRS)
Sharma, Surendra
1993-01-01
The present understanding of shock-layer radiation in the low density regime, as appropriate to hypersonic vehicles, is surveyed. Based on the relative importance of electron excitation and radiation transport, the hypersonic flows are divided into three groups: weakly ionized, moderately ionized, and highly ionized flows. In the light of this division, the existing laboratory and flight data are scrutinized. Finally, an assessment of the nonequilibrium radiation computation methods for the three regimes in hypersonic flows is presented. The assessment is conducted by comparing experimental data against the values predicted by the physical model.
A new method to compute lunisolar perturbations in satellite motions
NASA Technical Reports Server (NTRS)
Kozai, Y.
1973-01-01
A new method to compute lunisolar perturbations in satellite motion is proposed. The disturbing function is expressed by the orbital elements of the satellite and the geocentric polar coordinates of the moon and the sun. The secular and long periodic perturbations are derived by numerical integrations, and the short periodic perturbations are derived analytically. The perturbations due to the tides can be included in the same way. In the Appendix, the motion of the orbital plane for a synchronous satellite is discussed; it is concluded that the inclination cannot stay below 7 deg.
Fan Flutter Computations Using the Harmonic Balance Method
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Thomas, Jeffrey P.; Reddy, T.S.R.
2009-01-01
An experimental forward-swept fan encountered flutter at part-speed conditions during wind tunnel testing. A new propulsion aeroelasticity code, based on a computational fluid dynamics (CFD) approach, was used to model the aeroelastic behavior of this fan. This three-dimensional code models the unsteady flowfield due to blade vibrations using a harmonic balance method to solve the Navier-Stokes equations. This paper describes the flutter calculations and compares the results to experimental measurements and previous results from a time-accurate propulsion aeroelasticity code.
Method and apparatus for managing transactions with connected computers
Goldsmith, Steven Y. (Albuquerque, NM); Phillips, Laurence R. (Corrales, NM); Spires, Shannon V. (Albuquerque, NM)
2003-01-01
The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.
Immersed boundary conditions method for computational fluid dynamics problems
NASA Astrophysics Data System (ADS)
Husain, Syed Zahid
This dissertation presents implicit spectrally-accurate algorithms based on the concept of immersed boundary conditions (IBC) for solving a range of computational fluid dynamics (CFD) problems where the physical domains involve boundary irregularities. Both fixed and moving irregularities are considered with particular emphasis placed on the two-dimensional moving boundary problems. The physical model problems considered are comprised of the Laplace operator, the biharmonic operator and the Navier-Stokes equations, and thus cover the most commonly encountered types of operators in CFD analyses. The IBC algorithm uses a fixed and regular computational domain with the flow domain immersed inside the computational domain. Boundary conditions along the edges of the time-dependent flow domain enter the algorithm in the form of internal constraints. Spectral spatial discretization for two-dimensional problems is based on Fourier expansions in the stream-wise direction and Chebyshev expansions in the normal-to-the-wall direction. Up to fourth-order implicit temporal discretization methods have been implemented. The IBC algorithm is shown to deliver the theoretically predicted accuracy in both time and space. Construction of the boundary constraints in the IBC algorithm provides degrees of freedom in excess of that required to formulate a closed system of algebraic equations. The 'classical IBC formulation' works by retaining just enough boundary constraints to form a closed system of equations. The use of additional boundary constraints leads to the 'over-determined formulation' of the IBC algorithm. Over-determined systems are explored in order to improve the accuracy of the IBC method and to expand its applicability to more extreme geometries.
Standard direct over-determined solvers based on evaluation of pseudo-inverses of the complete coefficient matrices have been tested on three model problems, namely, the Laplace equation, the biharmonic equation and the Navier-Stokes equations. In all cases tested, the over-determined formulations based on standard solvers were found to improve the accuracy and the range of applicability of the IBC method. Efficient linear solvers suitable for the spectral implementation of the IBC method have been developed and tested in the context of two-dimensional steady and unsteady Stokes flow in the presence of fixed boundary irregularities. These solvers can work with the classical as well as the over-determined formulations of the method. Significant acceleration of the computations as well as significant reduction of the memory requirements have been accomplished by taking advantage of the structure of the coefficient matrix resulting from the implementation of the IBC algorithm. The performance of the new solvers has been compared with that of the standard direct solvers and is shown to be better by up to two orders of magnitude. It has been determined that the new methods are at least an order of magnitude faster than the iterative methods while removing restrictions based on the convergence criteria, thus extending the severity of the geometries that can be dealt with using the IBC algorithm. The performance of the IBC method combined with the new solvers has been compared with that of a method based on the generation of boundary-conforming grids, and is found to be better by at least two orders of magnitude. Application of the new solvers to unsteady problems also results in performance improvements of up to two orders of magnitude. Possible applications of the IBC algorithm for analyzing physical problems are also presented.
The advantage of using the IBC algorithm is illustrated by considering its application to two physical problems: (i) analysis of the effects of distributed roughness on the friction factor and (ii) analysis of traveling wave instability in wavy channels. These examples clearly show the attractiveness of the IBC algorithm for studying the effects of a large array of boundary geometries on the flow field. (Abstract shortened by UMI.)
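The contrast between the classical (square) and over-determined formulations can be illustrated generically. The sketch below uses a hypothetical random constraint matrix, not the dissertation's spectral IBC operators: it solves the same slightly noisy constraint set once with just enough equations and once with all of them in the least-squares sense, which is equivalent to applying the pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# m constraints on n unknowns, m > n, mimicking the case where more
# boundary constraints are available than unknown expansion coefficients.
n, m_exact, m_over = 20, 20, 60
x_true = rng.standard_normal(n)

A = rng.standard_normal((m_over, n))
b = A @ x_true + 1e-3 * rng.standard_normal(m_over)  # slightly noisy constraints

# 'Classical' formulation: keep just enough constraints for a square system.
x_sq = np.linalg.solve(A[:m_exact], b[:m_exact])

# Over-determined formulation: keep all constraints, solve by least squares.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# The least-squares solution averages the constraint noise over the
# redundant equations and is typically the more accurate of the two.
print(np.linalg.norm(x_sq - x_true), np.linalg.norm(x_ls - x_true))
```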
Dosimetry methods for multi-detector computed tomography.
Gancheva, M; Dyakov, I; Vassileva, J; Avramova-Cholakova, S; Taseva, D
2015-07-01
The aim of this study is to compare four dosimetry methods for wide-beam multi-detector computed tomography (MDCT) in terms of computed tomography dose index free in air (CTDIfree-in-air) and CTDI measured in phantom (CTDIphantom). The study was performed with Aquilion One 320-detector row CT (Toshiba), Ingenuity 64-detector row CT (Philips) and Aquilion 64 64-detector row CT (Toshiba). In addition to the standard dosimetry, three other dosimetry methods were also applied. The first method, suggested by the International Electrotechnical Commission (IEC) for MDCT, includes free-in-air measurements with a standard 100-mm CT pencil ion chamber, stepped through the X-ray beam along the z-axis at intervals equal to its sensitive length. Two cases were studied: with an integration length of 200 mm and with a standard polymethyl methacrylate (PMMA) dosimetry phantom. The second approach comprised measurements with a twice-longer phantom and two 100-mm chambers positioned and fixed against each other, forming a detection length of 200 mm. As a third method, phantom measurements were performed to study the real dose profile along the z-axis using thermoluminescent detectors. A fabricated cylindrical PMMA tube with a total length of 300 mm containing LiF detectors was used. CTDIfree-in-air measured with an integration length of 300 mm for the 160-mm-wide beam was 194 % higher than the same quantity measured using the standard method. For an integration length of 200 mm, the difference was 18 % for the 40-mm-wide beam and 14 % for the 32-mm-wide beam in comparison with the standard CTDI measurement. For phantom measurements, the IEC method resulted in a difference of 41 % for the 160-mm beam width, 19 % for the 40-mm beam width and 18 % for the 32-mm beam width compared with the method for CTDIvol. CTDI values from direct measurement in the phantom central hole with two chambers differ by 20 % from the values calculated by the IEC method.
Dose profiles for beam widths of 40, 32 and 16 mm, together with analysis and conclusions, are presented. PMID:25889607
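For readers unfamiliar with the quantities above, the standard relations connecting the measured CTDI values can be written out directly. The numerical values below are hypothetical, for illustration only; the weighting factors and definitions are the standard ones used in CT dosimetry.

```python
# Standard CT dose index relations behind the quantities in the abstract.

def ctdi_100(dose_length_product_mGy_cm, beam_width_cm):
    """CTDI100: integrated dose profile divided by the nominal beam width."""
    return dose_length_product_mGy_cm / beam_width_cm

def ctdi_w(center_mGy, periphery_mGy):
    """Weighted CTDI over the phantom cross-section (1/3 center + 2/3 periphery)."""
    return center_mGy / 3.0 + 2.0 * periphery_mGy / 3.0

def ctdi_vol(ctdi_w_mGy, pitch):
    """Volume CTDI for a helical scan."""
    return ctdi_w_mGy / pitch

# Hypothetical measurements on a body phantom:
c, p = 9.0, 15.0                       # mGy, central and peripheral holes
print(ctdi_100(64.0, 4.0))             # 16.0 mGy for a 4-cm beam
print(ctdi_w(c, p))                    # 13.0 mGy
print(ctdi_vol(ctdi_w(c, p), pitch=1.0))
```

The study's comparisons amount to evaluating CTDI100 with different integration lengths (100, 200 or 300 mm) for beams too wide for the standard 100-mm chamber to capture the full dose profile.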
Fast method for computing pore size distributions of model materials.
Bhattacharya, Supriyo; Gubbins, Keith E
2006-08-29
Recently developed atomistic models of highly disordered nanoporous materials offer hope for a much more realistic description of the pore morphology and topology in such materials; however, a factor limiting their application has been the computationally intensive characterization of the models, particularly determination of the pore size distribution. We report a new technique for fast computation of pore size distributions of model materials from knowledge of the molecular coordinates. The pore size distribution (PSD) is defined as the statistical distribution of the radius of the largest sphere that can be fitted inside a pore at a given point. Using constrained nonlinear optimization, we calculate the maximum radii of test particles at random points inside the pore cavity. The final pore size distribution is then obtained by sampling the test particle radii using Monte Carlo integration. The computation time depends on factors such as the number of atoms, the sampling resolution, and the desired accuracy. However, even for large systems, PSDs with very high accuracy (>99.9%) are obtained in less than 24 h on a 3 GHz Pentium IV processor. The technique is validated by applying it to model structures, whose pore size distributions are already known. We then apply this method to investigate the pore structures of several mesoporous silica models such as SBA-15 and mesostructured cellular foams. PMID:16922556
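The largest-inscribed-sphere definition of the PSD can be illustrated with a simplified Monte Carlo search. The sketch below uses random candidate centers rather than the authors' constrained nonlinear optimization, and a toy "material" consisting of atoms on a cubic lattice; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy material: atoms of radius 0.5 on a cubic lattice, leaving void space.
spacing = 2.0
grid = np.arange(0.0, 8.0, spacing)
atoms = np.array([[x, y, z] for x in grid for y in grid for z in grid])
R_atom = 0.5

def cavity_radius(c):
    """Largest sphere centered at c that does not overlap any atom
    (periodic images ignored for brevity)."""
    d = np.linalg.norm(atoms - c, axis=1)
    return d.min() - R_atom

def pore_radius(p, ntrial=1000, search=1.5):
    """Largest sphere that contains point p and fits in the void: maximize
    cavity_radius(c) over candidate centers c whose sphere still covers p."""
    best = max(cavity_radius(p), 0.0)
    for c in p + search * (rng.random((ntrial, 3)) * 2 - 1):
        r = cavity_radius(c)
        if r > best and np.linalg.norm(c - p) <= r:
            best = r
    return best

# Sample the PSD at random points in the interior void space.
pts = 2.0 + rng.random((100, 3)) * 4.0
radii = [pore_radius(p) for p in pts if cavity_radius(p) > 0]
print(max(radii))   # approaches the lattice's largest cavity radius (~1.23)
```

A histogram of `radii` is the pore size distribution; the paper's contribution is doing this search quickly and accurately enough for large disordered models.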
An experiment in hurricane track prediction using parallel computing methods
NASA Technical Reports Server (NTRS)
Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.
1994-01-01
The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.
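The elliptic solves at the core of this decomposition, such as recovering the streamfunction from the forecasted vorticity, can be illustrated on a doubly periodic model grid. The sketch below uses a spectral Poisson solver rather than the paper's accelerated block cyclic reduction; it is an illustration of the elliptic problem, not of the parallel algorithm.

```python
import numpy as np

# Periodic model problem: recover the streamfunction psi from vorticity zeta
# by solving  laplacian(psi) = zeta.
n, L = 64, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

psi_true = np.sin(X) * np.cos(2.0 * Y)
zeta = -(1.0 + 4.0) * psi_true           # analytic Laplacian of the chosen psi

k = np.fft.fftfreq(n, d=1.0 / n)         # integer wavenumbers on a 2*pi domain
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX ** 2 + KY ** 2
k2[0, 0] = 1.0                           # avoid division by zero for the mean mode

psi_hat = -np.fft.fft2(zeta) / k2        # -(k^2) psi_hat = zeta_hat
psi_hat[0, 0] = 0.0                      # fix the arbitrary constant (zero mean)
psi = np.real(np.fft.ifft2(psi_hat))

print(np.max(np.abs(psi - psi_true)))    # near machine precision
```

The same Poisson structure appears twice in the forecast cycle: once in the wind decomposition that sets the initial nondivergent state, and once per time step when the streamfunction is recovered from vorticity.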
Applications of Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.
2004-01-01
Initial steps in the application of a low-order panel method computational fluid dynamic (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.
Parallel computation of meshless methods for explicit dynamic analysis.
Danielson, K. T.; Hao, S.; Liu, W. K.; Uras, R. A.; Li, S.; Reactor Engineering; Northwestern Univ.; Waterways Experiment Station
2000-03-10
A parallel computational implementation of modern meshless methods is presented for explicit dynamic analysis. The procedures are demonstrated by application of the Reproducing Kernel Particle Method (RKPM). Aspects of a coarse grain parallel paradigm are detailed for a Lagrangian formulation using model partitioning. Integration points are uniquely defined on separate processors and particle definitions are duplicated, as necessary, so that all support particles for each point are defined locally on the corresponding processor. Several partitioning schemes are considered and a reduced graph-based procedure is presented. Partitioning issues are discussed and procedures to accommodate essential boundary conditions in parallel are presented. Explicit MPI message passing statements are used for all communications among partitions on different processors. The effectiveness of the procedure is demonstrated by highly deformable inelastic example problems.
Computing thermal Wigner densities with the phase integration method
NASA Astrophysics Data System (ADS)
Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.
2014-08-01
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
Computational analysis of methods for reduction of induced drag
NASA Technical Reports Server (NTRS)
Janus, J. M.; Chatterjee, Animesh; Cave, Chris
1993-01-01
The purpose of this effort was to perform a computational flow analysis of a design concept centered around induced drag reduction and tip-vortex energy recovery. The flow model solves the unsteady three-dimensional Euler equations, discretized as a finite-volume method, utilizing a high-resolution approximate Riemann solver for cell interface flux definitions. The numerical scheme is an approximately-factored block LU implicit Newton iterative-refinement method. Multiblock domain decomposition is used to partition the field into an ordered arrangement of blocks. Three configurations are analyzed: a baseline fuselage-wing, a fuselage-wing-nacelle, and a fuselage-wing-nacelle-propfan. Aerodynamic force coefficients, propfan performance coefficients, and flowfield maps are used to qualitatively assess design efficacy. Where appropriate, comparisons are made with available experimental data.
Ventricular hemodynamics using cardiac computed tomography and optical flow method.
Lin, Yang-Hsien; Huang, Yung-Hui; Lin, Kang-Ping; Liu, Juhn-Cherng; Huang, Tzung-Chi
2014-01-01
Ventricular hemodynamics plays an important role in assessing cardiac function in clinical practice. The aim of this study was to determine the ventricular hemodynamics based on contrast movement in the left ventricle (LV) between the phases in a cardiac cycle recorded using electrocardiography (ECG) with cardiac computed tomography (CT) and the optical flow method. Cardiac CT data were acquired at 120 kV and 280 mA with a 350 ms gantry rotation, which covered one cardiac cycle, on the 640-slice CT scanner with ECG for a selected patient without heart disease. Ventricular hemodynamics (mm/phase) were calculated using the optical flow method based on contrast changes with ECG phases in the anterior-posterior, lateral and superior-inferior directions. Local hemodynamic information of the LV with color coding was presented. The visualization of the functional information made the hemodynamic observation easy. PMID:24463391
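The optical-flow step can be sketched with a basic least-squares (Lucas-Kanade-style) solve of the brightness-constancy constraint. The synthetic images below stand in for two cardiac phases and are not clinical data; a real pipeline would solve locally over windows in 3-D rather than one global 2-D displacement.

```python
import numpy as np

def gaussian(n, cx, cy, s=6.0):
    y, x = np.indices((n, n)).astype(float)
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s * s))

# Two synthetic 'phases': the same smooth blob shifted by (0.3, 0.2) pixels,
# standing in for contrast motion between cardiac phases.
n, dx, dy = 64, 0.3, 0.2
I1 = gaussian(n, 32.0, 32.0)
I2 = gaussian(n, 32.0 + dx, 32.0 + dy)

# Spatial and temporal brightness gradients (central differences).
Iy, Ix = np.gradient(I1)
It = I2 - I1

# Least-squares solve of the optical flow constraint  Ix*u + Iy*v + It = 0.
A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
b = -It.ravel()
(u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
print(u, v)   # close to the imposed shift
```

Solving the same system per voxel window, per phase pair, yields the local displacement field (mm/phase) that the study color-codes over the LV.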
Training in survey and research methods within a Master of Public Health program.
Neumark, Yehuda; Friedlander, Yechiel
2002-01-01
Sound decision-making and practice in public health, as in other disciplines, is contingent upon information that is properly collected, analyzed, and interpreted. We describe the content and teaching methods of a graduate course in investigative methods in public health taught within the framework of a Master of Public Health (MPH) program. Following the progressive steps of carrying out research, we highlight the main concepts and skills that a student of public health should be exposed to. This includes the formulation of the study purpose and objectives, basic study designs, definition and selection of the study population and study variables, issues related to the actual collection of data in the field including the reliability and validity of the information, and preparing the data for analysis. We describe the teaching methods that are employed including frontal lectures, individual and group-based exercises, and the use of simulated data to develop skills in the critical reading of published literature and data analysis. The integration of the learned concepts and tools into course workshops and dissertation work is also addressed. Together with training in epidemiology, statistics and other quantitative and qualitative methodologies, this course provides a solid basis for MPH graduates to tackle the public health challenges that await them. PMID:12613708
Radiation Transport Computation in Stochastic Media: Method and Application
NASA Astrophysics Data System (ADS)
Liang, Chao
Stochastic media, characterized by the stochastic distribution of inclusions in a background medium, are typical radiation transport media encountered in natural or engineered systems. In the radiation transport computation community, there is a standing demand for accurate and efficient methods that can account for the nature of the stochastic distribution. In this dissertation, we focus on methodology development for radiation transport computation applied to neutronic analyses of nuclear reactor designs characterized by the stochastic distribution of particle fuel. Reactor concepts employing a fuel design that consists of a random heterogeneous mixture of fissile material and non-fissile moderator continue to be proposed. Key physical quantities such as core criticality and power distribution, reactivity control design parameters, depletion, and fuel burn-up need to be carefully evaluated. In order to meet these practical requirements, we first need to develop accurate and fast computational methods that can effectively account for the stochastic nature of the double-heterogeneity configuration. A Monte Carlo based method called Chord Length Sampling (CLS) is considered a promising method for analyzing such TRISO-type fueled reactors. Although the CLS method was proposed more than two decades ago and much research has been conducted to enhance its applicability, further efforts are still needed to address some key research gaps. (1) There is a general lack of thorough investigation of the factors that give rise to the inaccuracy of the CLS method reported by many researchers. The accuracy of the CLS method depends on the optical and geometric properties of the system, and in some specific scenarios considerable inaccuracies have been reported. However, no research has provided a clear interpretation of the reasons responsible for the inaccuracy in the reported scenarios.
Furthermore, no correction methods have been proposed or developed to improve the accuracy of CLS across all applied scenarios. (2) The previous CLS method deals only with on-the-fly sampling of fuel particles when analyzing TRISO-type fueled reactors. Within the fuel particle, which consists of a fuel kernel and a coating, conventional Monte Carlo simulation applies. This strategy may not achieve the highest computational efficiency, since extra simulation time is spent tracking neutrons in the coating region, which has a negligible neutronic effect on overall reactor core performance. This suggests a possible strategy for further increasing computational efficiency: directly sampling fuel kernels on-the-fly in CLS simulations. Testing this strategy requires a new model of the chord-length distribution function, and hence new research effort to develop and validate that model. (3) Previous evaluations and applications of the CLS method have been limited to single-type, single-size fuel-particle systems, i.e., only one type of fuel particle with constant size is assumed in the fuel zone, which is the case for typical VHTR designs. In practice, however, for different application purposes, two or more types of TRISO fuel particles may be loaded in the same fuel zone; e.g., fissile fuel particles and fertile fuel particles are used together for transmutation purposes in some reactors. Moreover, the fuel-particle size may not be constant and can vary within a range. A typical design containing such fuel particles can be found in the FSV reactor. It is therefore desirable to develop a new computational model that treats multi-type, poly-sized particle systems in neutronic analysis. This requires extending the current CLS method to sample on-the-fly not only the location of a fuel particle but also its type and size, so that it can be applied to a broad range of reactor designs in neutronic analyses.
New sampling functions need to be developed for the extended on-the-fly sampling strategy. This Ph.D. dissertation addressed these
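The core of Chord Length Sampling as described in the dissertation abstract above is to sample, on the fly, the distance a neutron travels through the matrix before encountering the next fuel particle, and compare it with the sampled free-flight distance to collision. A minimal sketch, assuming an exponential matrix chord-length distribution (a common modeling choice; function names and parameters are illustrative, not the author's code):

```python
import math
import random

def sample_matrix_chord(mean_chord, rng=random.random):
    """Sample the distance to the next fuel-particle surface, assuming
    matrix chord lengths follow an exponential distribution with the
    given mean chord length."""
    return -mean_chord * math.log(1.0 - rng())

def transport_to_first_event(sigma_t_matrix, mean_chord, rng=random.random):
    """Fly one neutron through the matrix until it either collides or
    reaches a sampled fuel particle, whichever comes first.
    Returns ('collision', distance) or ('particle', distance)."""
    # Free-flight distance to collision in the matrix material.
    d_collision = -math.log(1.0 - rng()) / sigma_t_matrix
    # On-the-fly sampled distance to the next fuel particle.
    d_particle = sample_matrix_chord(mean_chord, rng)
    if d_collision < d_particle:
        return ('collision', d_collision)
    return ('particle', d_particle)
```

The extensions discussed in the abstract would replace the single exponential with chord-length distributions that also select particle type and size at each sampling event.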
Turner, Ken
Kenneth J. Turner. Analysing Interactive Voice Services (pre-publication version). Computing Science and Mathematics, University of Stirling, Stirling FK9. Keywords: Specification, Service, VoiceXML (Voice eXtensible Markup Language). Email address: kjt@cs.stir.ac.uk (Kenneth J. Turner)
A Critical Review of Computer-Assisted Learning in Public Health via the Internet, 1999-2008
ERIC Educational Resources Information Center
Corda, Kirsten W.; Polacek, Georgia N. L. J.
2009-01-01
Computers and the internet have been utilized as viable avenues for public health education delivery. Yet the effectiveness, e.g., behavior change, from use of these tools has been limited. Previous reviews have focused on single health topics such as smoking cessation and weight loss. This review broadens the scope to consider computer-assisted…
Parallel computation of multigroup reactivity coefficient using iterative method
Susmikanti, Mike; Dewayatna, Winter
2013-09-09
One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets take the form of stainless-steel tubes containing layers of high-enriched uranium, and the tubes are irradiated to obtain fission products; this fission material is widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core can disturb its performance, one such disturbance being changes in flux or reactivity. A method is therefore needed for calculating the safety of ongoing configuration changes during the life of the reactor, which makes faster code an absolute necessity. An advantage of the perturbation method is that the neutron safety margin for the research reactor can be reused without modifying the reactivity calculation. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model is computationally complex. Several parallel algorithms with iterative methods have been developed for solving large sparse matrix systems. The black-red (red-black) Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficient. This research developed a code for reactivity calculation, one component of safety analysis, using parallel processing. The calculation can be done more quickly and efficiently by exploiting the parallel processing available on multicore computers. The code was applied to the safety-limit calculation of irradiated FPM targets with increasing uranium content.
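The red-black Gauss-Seidel sweep mentioned in the abstract updates the even ("red") and odd ("black") nodes in alternating passes, so each colour class could be updated in parallel. A minimal serial sketch for a 1-D one-group diffusion equation (illustrative only, not the authors' code; all names are assumptions):

```python
def red_black_gauss_seidel(n, h, sigma_a, D, source, tol=1e-10, max_iter=10000):
    """Solve the discretized 1-D one-group diffusion equation
        -D phi'' + sigma_a phi = source
    on n interior nodes with zero-flux boundaries, using red-black
    Gauss-Seidel: even ("red") nodes are swept first, then odd
    ("black") nodes, so each colour class is independent within a sweep."""
    phi = [0.0] * n
    diag = 2.0 * D / h ** 2 + sigma_a   # diagonal of the tridiagonal system
    off = -D / h ** 2                   # off-diagonal coupling
    for _ in range(max_iter):
        delta = 0.0
        for colour in (0, 1):           # red sweep, then black sweep
            for i in range(colour, n, 2):
                left = phi[i - 1] if i > 0 else 0.0
                right = phi[i + 1] if i < n - 1 else 0.0
                new = (source - off * (left + right)) / diag
                delta = max(delta, abs(new - phi[i]))
                phi[i] = new
        if delta < tol:
            break
    return phi
```

The production use described in the abstract would pair such sweeps with a power iteration on the multigroup system to extract criticality; the colour splitting is what exposes the parallelism on multicore hardware.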
34 CFR 682.304 - Methods for computing interest benefits and special allowance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 4 2011-07-01 2011-07-01 false Methods for computing interest benefits and special allowance. 682.304 Section...Interest and Special Allowance § 682.304 Methods for computing interest benefits and special allowance. (a)...
34 CFR 682.304 - Methods for computing interest benefits and special allowance.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 4 2012-07-01 2012-07-01 false Methods for computing interest benefits and special allowance. 682.304 Section...Interest and Special Allowance § 682.304 Methods for computing interest benefits and special allowance. (a)...
34 CFR 682.304 - Methods for computing interest benefits and special allowance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 4 2013-07-01 2013-07-01 false Methods for computing interest benefits and special allowance. 682.304 Section...Interest and Special Allowance § 682.304 Methods for computing interest benefits and special allowance. (a)...
34 CFR 682.304 - Methods for computing interest benefits and special allowance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 4 2014-07-01 2014-07-01 false Methods for computing interest benefits and special allowance. 682.304 Section...Interest and Special Allowance § 682.304 Methods for computing interest benefits and special allowance. (a)...
PUBLICATIONS --G. CAGINALP--January 2010 Phase Field (Diffuse Interface Model)-Computational
Çaginalp, Gunduz
"Phase field computations of single-needle crystals, crystal growth and motion by mean curvature" (with E...). "...approach to phase boundaries by spreading: single needle, crystal growth and motion by mean curvature," Fields Inst. Comm. 5, 67-83 (1996), in Pattern Formation: Symmetry Methods and Applications, American Math...
Correction to the publication Error analysis of a finite element method for the
Correction to the publication "Error analysis of a finite element method for the Willmore flow". Using a version of the analysis in Lemma 3.2 together with Lemma A.1, it is not difficult to obtain the estimate for e_u ... for pointing out to us the problematic choice of the discrete initial value u0h. Institut für Analysis und...
Relationship of Instructional Methods to Student Engagement in Two Public High Schools
ERIC Educational Resources Information Center
Johnson, Lisa S.
2008-01-01
This study investigated the argument that schools that emphasize relational learning are better able to serve the motivational needs of adolescents. Matched-pair samples (n=80) from two public secondary schools were compared using the experience sampling method (ESM). Students attending a "non-traditional" school (which employed group decision…
Ab initio methods for nuclear properties - a computational physics approach
NASA Astrophysics Data System (ADS)
Maris, Pieter
2011-04-01
A microscopic theory for the structure and reactions of light nuclei poses formidable challenges for high-performance computing. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no-core full configuration (NCFC) approach is based on basis space expansion methods and uses Slater determinants of single-nucleon basis functions to express the nuclear wave function. In this approach, the quantum many-particle problem becomes a large sparse matrix eigenvalue problem. The eigenvalues of this matrix give us the binding energies, and the corresponding eigenvectors the nuclear wave functions. These wave functions can be employed to evaluate experimental quantities. In order to reach numerical convergence for fundamental problems of interest, the matrix dimension often exceeds 1 billion, and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. I discuss different strategies for distributing and solving this large sparse matrix on current multicore computer architectures, including methods to deal with the memory bottleneck. Several of these strategies have been implemented in MFDn, a parallel Fortran code for nuclear structure calculations. I will show scaling behavior and compare the performance of the pure MPI version with the hybrid MPI/OpenMP code on Cray XT4 and XT5 platforms. For large core counts (typically 5,000 and above), the hybrid version is more efficient than pure MPI. With this code, we have been able to predict properties of the unstable nucleus 14F, which have since been confirmed by experiments. I will also give an overview of other recent results for nuclei in the A = 6 to 16 range with 2- and 3-body interactions. Supported in part by US DOE Grant DE-FC02-09ER41582.
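The abstract above reduces nuclear structure to a large sparse symmetric eigenvalue problem; MFDn itself uses Lanczos-type solvers at scale. As a toy illustration of the sparse matrix-vector kernel at the heart of such solvers, here is plain power iteration on a dict-of-rows sparse matrix (all names illustrative, and power iteration finds the dominant rather than the lowest eigenpair):

```python
import math

def spmv(rows, x):
    """Sparse matrix-vector product; rows[i] maps column index -> value.
    This kernel dominates the runtime of Lanczos- and power-type solvers."""
    return [sum(v * x[j] for j, v in r.items()) for r in rows]

def power_iteration(rows, iters=1000):
    """Return (eigenvalue, eigenvector) for the dominant eigenpair of a
    sparse symmetric matrix, via repeated spmv and normalization."""
    n = len(rows)
    x = [1.0 / math.sqrt(n)] * n
    lam = 0.0
    for _ in range(iters):
        y = spmv(rows, x)
        lam = sum(xi * yi for xi, yi in zip(x, y))     # Rayleigh quotient
        norm = math.sqrt(sum(yi * yi for yi in y))
        x = [yi / norm for yi in y]
    return lam, x
```

At the billion-dimension scale quoted in the abstract, the interesting work is in distributing `rows` and `x` across nodes and overlapping communication with the spmv, which is what the MPI/OpenMP strategies address.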
Pfeffermann, Danny; Landsman, Victoria
2011-01-01
In observational studies the assignment of units to treatments is not under control. Consequently, the estimation and comparison of treatment effects based on the empirical distribution of the responses can be biased, since the units exposed to the various treatments could differ in important unknown pretreatment characteristics that are related to the response. An important example studied in this article is the question of whether private schools offer better quality of education than public schools. In order to address this question we use data collected in the year 2000 by OECD for the Programme for International Student Assessment (PISA). Focusing for illustration on mathematics scores of 15-year-old pupils in Ireland, we find that the raw average score of pupils in private schools is higher than that of pupils in public schools. However, application of a newly proposed method for observational studies suggests that less able pupils tend to enroll in public schools, so that their lower scores are not necessarily an indication of poor quality in the public schools. Indeed, when comparing the average score in the two types of schools after adjusting for the enrollment effects, we find, quite surprisingly, that public schools perform better on average. This outcome is supported by the methods of instrumental variables and latent variables, commonly used by econometricians for analyzing and evaluating social programs. PMID:22242110
Computational modeling of multicellular constructs with the material point method.
Guilkey, James E; Hoying, James B; Weiss, Jeffrey A
2006-01-01
Computational modeling of the mechanics of cells and multicellular constructs with standard numerical discretization techniques such as the finite element (FE) method is complicated by the complex geometry, material properties and boundary conditions that are associated with such systems. The objectives of this research were to apply the material point method (MPM), a meshless method, to the modeling of vascularized constructs by adapting the algorithm to accurately handle quasi-static, large deformation mechanics, and to apply the modified MPM algorithm to large-scale simulations using a discretization that was obtained directly from volumetric confocal image data. The standard implicit time integration algorithm for MPM was modified to allow the background computational grid to remain fixed with respect to the spatial distribution of material points during the analysis. This algorithm was used to simulate the 3D mechanics of a vascularized scaffold under tension, consisting of growing microvascular fragments embedded in a collagen gel, by discretizing the construct with over 13.6 million material points. Baseline 3D simulations demonstrated that the modified MPM algorithm was both more accurate and more robust than the standard MPM algorithm. Scaling studies demonstrated the ability of the parallel code to scale to 200 processors. Optimal discretization was established for the simulations of the mechanics of vascularized scaffolds by examining stress distributions and reaction forces. Sensitivity studies demonstrated that the reaction force during simulated extension was highly sensitive to the modulus of the microvessels, despite the fact that they comprised only 10.4% of the volume of the total sample. In contrast, the reaction force was relatively insensitive to the effective Poisson's ratio of the entire sample. 
These results suggest that the MPM simulations could form the basis for estimating the modulus of the embedded microvessels through a parameter estimation scheme. Because of the generality and robustness of the modified MPM algorithm, the relative ease of generating spatial discretizations from volumetric image data, and the ability of the parallel computational implementation to scale to large processor counts, it is anticipated that this modeling approach may be extended to many other applications, including the analysis of other multicellular constructs and investigations of cell mechanics. PMID:16095601
Automatic heart positioning method in computed tomography scout images.
Li, Hong; Liu, Kaihua; Sun, Hang; Bao, Nan; Wang, Xu; Tian, Shi; Qi, Shouliang; Kang, Yan
2014-01-01
Computed tomography (CT) radiation dose can be reduced significantly by region of interest (ROI) CT scanning. Automatically positioning the heart in CT scout images is an essential step in realizing ROI CT scans of the heart. This paper proposed a fully automatic heart positioning method for CT scout images, including the anteroposterior (A-P) scout image and the lateral scout image. The key steps were to determine the feature points of the heart and obtain part of the heart boundary on the A-P scout image, then transform that partial boundary into a polar coordinate system and obtain the whole boundary of the heart using slant-elliptic-equation curve fitting. For heart positioning on the lateral image, the top and bottom boundaries obtained from the A-P image can be inherited. The proposed method was tested on a clinical routine dataset of 30 cases (30 A-P scout images and 30 lateral scout images). Experimental results show that 26 cases of the dataset achieved a very good positioning result of the heart in both the A-P scout image and the lateral scout image. The method may be helpful for ROI CT scanning of the heart. PMID:25227037
Computational methods for the detection of cis-regulatory modules.
Van Loo, Peter; Marynen, Peter
2009-09-01
Metazoan transcription regulation occurs through the concerted action of multiple transcription factors that bind co-operatively to cis-regulatory modules (CRMs). The annotation of these key regulators of transcription is lagging far behind the annotation of the transcriptome itself. Here, we give an overview of existing computational methods to detect these CRMs in metazoan genomes. We subdivide these methods into three classes: CRM scanners screen sequences for CRMs based on predefined models that often consist of multiple position weight matrices (PWMs). CRM builders construct models of similar CRMs controlling a set of co-regulated or co-expressed genes. CRM genome screeners screen sequences or complete genomes for CRMs as homotypic or heterotypic clusters of binding sites for any combination of transcription factors. We believe that CRM scanners are currently the most advanced methods, although their applicability is limited. Finally, we argue that CRM builders that make use of PWM libraries will benefit greatly from future advances and will prove to be most instrumental for the annotation of regulatory regions in metazoan genomes. PMID:19498042
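A CRM scanner's innermost operation, as described in the review above, is scoring sequence windows against position weight matrices (PWMs). A minimal sketch with a hypothetical 3-bp log-odds PWM (the matrix values are illustrative, not drawn from any real transcription-factor database):

```python
# Hypothetical 3-bp position weight matrix: log-odds score of each base
# at each motif position (illustrative values only).
PWM = [
    {'A': 1.0, 'C': -2.0, 'G': -2.0, 'T': 0.5},
    {'A': -2.0, 'C': 1.2, 'G': -1.0, 'T': -2.0},
    {'A': 0.8, 'C': -2.0, 'G': 1.0, 'T': -2.0},
]

def scan(sequence, pwm, threshold):
    """Slide the PWM over the sequence and report (position, score) for
    every window whose summed log-odds score clears the threshold."""
    w = len(pwm)
    hits = []
    for i in range(len(sequence) - w + 1):
        score = sum(pwm[j][sequence[i + j]] for j in range(w))
        if score >= threshold:
            hits.append((i, score))
    return hits
```

A CRM genome screener in the review's taxonomy would then look for homotypic or heterotypic clusters of such hits from several PWMs within a fixed-size window.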
Computer simulations of enzyme catalysis: methods, progress, and insights.
Warshel, Arieh
2003-01-01
Understanding the action of enzymes on an atomistic level is one of the important aims of modern biophysics. This review describes the state of the art in addressing this challenge by simulating enzymatic reactions. It considers different modeling methods including the empirical valence bond (EVB) and more standard molecular orbital quantum mechanics/molecular mechanics (QM/MM) methods. The importance of proper configurational averaging of QM/MM energies is emphasized, pointing out that at present such averages are performed most effectively by the EVB method. It is clarified that all properly conducted simulation studies have identified electrostatic preorganization effects as the source of enzyme catalysis. It is argued that the ability to simulate enzymatic reactions also provides the chance to examine the importance of nonelectrostatic contributions and the validity of the corresponding proposals. In fact, simulation studies have indicated that prominent proposals such as desolvation, steric strain, near attack conformation, entropy traps, and coherent dynamics do not account for a major part of the catalytic power of enzymes. Finally, it is pointed out that although some of the issues are likely to remain controversial for some time, computer modeling approaches can provide a powerful tool for understanding enzyme catalysis. PMID:12574064
Computational methods for the verification of adaptive control systems
NASA Astrophysics Data System (ADS)
Prasanth, Ravi K.; Boskovic, Jovan; Mehra, Raman K.
2004-08-01
Intelligent and adaptive control systems will significantly challenge current verification and validation (V&V) processes, tools, and methods for flight certification. Although traditional certification practices have produced safe and reliable flight systems, they will not be cost effective for next-generation autonomous unmanned air vehicles (UAVs) due to inherent size and complexity increases from added functionality. Affordable V&V of intelligent control systems is by far the most important challenge in the development of UAVs faced by both the commercial and military aerospace industry in the United States. This paper presents a formal modeling framework for a class of adaptive control systems and an associated computational scheme. The class of systems considered includes neural network-based flight control systems and vehicle health management systems. This class of systems, and indeed all adaptive systems, are hybrid systems whose continuum dynamics is nonlinear. Our computational procedure is iterative, and each iteration has two sequential steps. The first step is to derive an approximating finite-state automaton whose behaviors contain the behaviors of the hybrid system. The second step is to check whether the language accepted by the approximating automaton is empty (emptiness checking). The iterations are terminated if the accepted language is empty; otherwise, the approximation is refined and the iteration continues. This procedure will never produce an "error-free" certificate when the actual system contains errors, which is an important requirement in V&V of safety-critical systems.
Computational Methods and Challenges for Large-Scale Circuit Mapping
Helmstaedter, Moritz; Mitra, Partha
2012-01-01
The connectivity architecture of neuronal circuits is essential to understand how brains work, yet our knowledge of neuronal wiring diagrams remains limited and partial. Technical breakthroughs in labeling and imaging methods starting more than a century ago have advanced knowledge in the field. However, the volume of data associated with imaging a whole brain or a significant fraction thereof, with electron or light microscopy, has only recently become amenable to digital storage and analysis. A mouse brain imaged at light microscopic resolution is about a terabyte of data, and 1 mm3 of the brain at EM resolution is about half a petabyte. This has given rise to a new field of research, computational analysis of large-scale neuroanatomical data sets, with goals that include reconstructions of the morphology of individual neurons as well as entire circuits. The problems encountered include large-scale data management, segmentation and 3D reconstruction, computational geometry, and workflow management allowing for hybrid approaches that combine manual and algorithmic processing. Here we review this growing field of neuronal data analysis with emphasis on reconstructing neurons from EM data cubes. PMID:22221862
Inter-Domain Redundancy Path Computation Methods Based on PCE
NASA Astrophysics Data System (ADS)
Hayashi, Rie; Oki, Eiji; Shiomoto, Kohei
This paper evaluates three inter-domain redundancy path computation methods based on PCE (Path Computation Element). Some inter-domain paths carry traffic that must be assured high-quality, high-reliability transfer, such as telephony over IP and premium virtual private networks (VPNs). It is, therefore, important to set up inter-domain redundancy paths, i.e., primary and secondary paths. The first scheme uses an existing protocol and the basic PCE implementation; it needs no extension or modification. In the second scheme, PCEs build a virtual shortest path tree (VSPT) considering only those candidate primary paths that have corresponding secondary paths. The goal is to reduce blocking probability: a corresponding secondary path is found more often after a primary path is decided, and no protocol extension is necessary. In the third scheme, PCEs build a VSPT considering all candidate primary and secondary paths. Blocking probability is further decreased since all possible candidates are located, and the sum of primary and secondary path costs is reduced by choosing the minimum-cost pair among all path pairs. Numerical evaluations show that the second and third schemes offer only a few percent reduction in blocking probability and total path-pair cost, while imposing large overheads from protocol revision and from the increased amount of computation and information to be exchanged. This suggests that the first scheme, the most basic and simple one, is the best choice.
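The primary/secondary path computation discussed above can be approximated by a simple two-step heuristic: compute the primary shortest path, remove its links, then compute a link-disjoint secondary. This is not the PCE/VSPT protocol machinery itself, and a joint optimisation (e.g. Suurballe's algorithm) can succeed where the two-step version fails, but it illustrates the idea (all names illustrative):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over an undirected graph given as {node: {neighbour: cost}};
    returns (cost, [path]) or None if dst is unreachable."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:                      # reconstruct path back to src
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, c in graph.get(u, {}).items():
            nd = d + c
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return None

def redundant_pair(graph, src, dst):
    """Two-step heuristic: compute the primary path, prune its links,
    then compute a link-disjoint secondary on the pruned graph."""
    primary = shortest_path(graph, src, dst)
    if primary is None:
        return None
    pruned = {u: dict(nbrs) for u, nbrs in graph.items()}
    _, path = primary
    for a, b in zip(path, path[1:]):      # remove both directions of each link
        pruned[a].pop(b, None)
        pruned[b].pop(a, None)
    secondary = shortest_path(pruned, src, dst)
    return primary, secondary
```

The blocking behaviour compared in the paper corresponds to how often `secondary` comes back `None` under each scheme's candidate-selection rule.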
Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety
Broadhead, B.L.; Childs, R.L.; Rearden, B.T.
1999-09-20
Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.
Methods and computer readable medium for improved radiotherapy dosimetry planning
Wessol, Daniel E.; Frandsen, Michael W.; Wheeler, Floyd J.; Nigg, David W.
2005-11-15
Methods and computer readable media are disclosed for ultimately developing a dosimetry plan for a treatment volume irradiated during radiation therapy with a radiation source concentrated internally within a patient or incident from an external beam. The dosimetry plan is available in near "real-time" because of the novel geometric model construction of the treatment volume which in turn allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of alpha, beta or gamma emissions emanating from an internal radiation source during various radiotherapies, such as brachytherapy or targeted radionuclide therapy, or they are exemplary representations of high-energy photons, electrons, protons or other ionizing particles incident on the treatment volume from an external source. In a preferred embodiment, a medical image of a treatment volume irradiated during radiotherapy having a plurality of pixels of information is obtained.
Software Defects, Scientific Computation and the Scientific Method
CERN. Geneva
2011-01-01
Computation has rapidly grown in the last 50 years so that in many scientific areas it is the dominant partner in the practice of science. Unfortunately, unlike the experimental sciences, it does not adhere well to the principles of the scientific method as espoused by, for example, the philosopher Karl Popper. Such principles are built around the notions of deniability and reproducibility. Although much research effort has been spent on measuring the density of software defects, much less has been spent on the more difficult problem of measuring their effect on the output of a program. This talk explores these issues with numerous examples suggesting how this situation might be improved to match the demands of modern science. Finally it develops a theoretical model based on an amalgam of statistical mechanics and Hartley/Shannon information theory which suggests that software systems have strong implementation independent behaviour and supports the widely observed phenomenon that defects clust...
ERIC Educational Resources Information Center
Jairam, Dharmananda; Kiewra, Kenneth A.
2010-01-01
This study used self-report and observation techniques to investigate how students study computer-based materials. In addition, it examined if a study method called SOAR can facilitate computer-based learning. SOAR is an acronym that stands for the method's 4 theoretically driven and empirically supported components: select (S), organize (O),…
Brom, Cyril
...Robust than Previously Thought? Computers & Education, advance online publication. The paper was accepted for publication in Computers & Education (2014), DOI: 10.1016/j.compedu.2013.11.013. Changes resulting from the publishing process, such as peer review and editing, may not be reflected in this version.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-13
...by Public Law 100-503; Notice of a Computer Matching Program AGENCY: Office of Financial...Information System (PARIS) notice of a computer matching program between the Department...amended by Public Law 100-503, the Computer Matching and Privacy Protection Act...
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This practice facilitates the interoperability of computed radiography (CR) imaging and data acquisition equipment by specifying image data transfer and archival storage methods in commonly accepted terms. This practice is intended to be used in conjunction with Practice E2339 on Digital Imaging and Communication in Nondestructive Evaluation (DICONDE). Practice E2339 defines an industrial adaptation of the NEMA Standards Publication titled Digital Imaging and Communications in Medicine (DICOM, see http://medical.nema.org), an international standard for image data acquisition, review, storage and archival storage. The goal of Practice E2339, commonly referred to as DICONDE, is to provide a standard that facilitates the display and analysis of NDE results on any system conforming to the DICONDE standard. Toward that end, Practice E2339 provides a data dictionary and a set of information modules that are applicable to all NDE modalities. This practice supplements Practice E2339 by providing information objec...
Development of computational methods for heavy lift launch vehicles
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Ryan, James S.
1993-01-01
The research effort has been focused on the development of an advanced flow solver for complex viscous turbulent flows with shock waves. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. A new computer program named CENS3D has been developed for viscous turbulent flows with discontinuities. Details of the code are described in Appendix A and Appendix B. With the developments of the numerical algorithm and dissipation model, the simulation of three-dimensional viscous compressible flows has become more efficient and accurate. The results of the research are expected to yield a direct impact on the design process of future liquid fueled launch systems.
Gyrokinetic Theory and Computational Methods for Electromagnetic Perturbations in Tokamaks
NASA Astrophysics Data System (ADS)
Qin, H.; Tang, W. M.; Rewoldt, G.
1998-11-01
A general gyrokinetic formalism and computational methods have been developed for electromagnetic perturbations in toroidal plasmas. This formalism and the associated numerical code represent the first self-consistent, comprehensive, fully kinetic model for treating both MHD instabilities and electromagnetic drift waves(H. Qin, W. M. Tang, and G. Rewoldt, Phys. Plasmas 5), 1035 (1998). The gyrokinetic system of equations is derived by phase-space Lagrangian Lie perturbation methods. An important component missing from previous gyrokinetic theories, the gyrokinetic perpendicular dynamics, is identified and developed. The corresponding numerical code, KIN-2DEM, has been systematically benchmarked against the high-n FULL code, the PEST code, and the NOVA-K code for kinetic ballooning modes, internal kink modes, and TAEs, respectively. For the internal kink mode, it is found that kinetic effects due to trapped ions can significantly modify the ? vs. q0 curve. For the destabilization of the TAEs by energetic particles, comparisons have been made between the non-perturbative, fully kinetic KIN-2DEM results and the perturbative hybrid NOVA-K results.
Method of adaptive nodes for coarse-node computations
Tzanos, C.P.
1987-01-01
Analysis with the COMMIX program of liquid-metal reactor intermediate heat exchanger (IHX) transients characterized by low flows, and especially imbalanced low flows, shows that if a coarse-node structure is used, the predicted temperatures are significantly different from those given by a fine-node structure. If a fine-node structure is used for problems that involve a large part of the plant, the computation time becomes excessive. In these IHX problems, high temperature gradients develop along a very short length at one end of the exchanger, while the temperature distribution is practically flat over the remaining length of the unit. For a general-purpose thermal-hydraulic code like COMMIX, a method of general applicability is desirable. This paper presents a method of adaptive node structure that is based on general principles and gives an accurate solution with a few nodes. At this stage it has been applied only to one-dimensional problems; its application to two and three dimensions is expected to be straightforward.
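The abstract gives no formulas. One standard way to realize such an adaptive node structure in one dimension is to equidistribute a monitor function, so that nodes cluster where the temperature varies rapidly and spread out where the profile is flat. The sketch below is illustrative only (the tanh profile and all names are assumptions, not taken from the paper):

```python
import numpy as np

def adaptive_nodes(x_fine, T_fine, n_nodes):
    """Place n_nodes so each interval carries an equal share of the
    monitor m = sqrt(1 + (dT/dx)^2); nodes cluster where T varies fast."""
    m = np.sqrt(1.0 + np.gradient(T_fine, x_fine) ** 2)
    seg = 0.5 * (m[1:] + m[:-1]) * np.diff(x_fine)      # trapezoid pieces
    M = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative monitor
    targets = np.linspace(0.0, M[-1], n_nodes)
    return np.interp(targets, M, x_fine)                 # invert M(x)

# steep front near x = 0.1, flat elsewhere (an IHX-like profile)
x = np.linspace(0.0, 1.0, 2001)
Tprof = np.tanh((x - 0.1) / 0.02)
nodes = adaptive_nodes(x, Tprof, 21)
```

Most of the 21 nodes land inside the thin front region, while only a handful cover the flat remainder of the domain.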
Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA
2011-10-11
Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.
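The patent describes sense selection against a lexical database ontology; a much simpler classic in the same family is the Lesk algorithm, which picks the sense whose gloss shares the most words with the surrounding context. The toy glosses below are hypothetical, not from the patent:

```python
def lesk(context_words, senses):
    """Simplified Lesk: score each sense by the word overlap between its
    gloss and the context, and return the best-scoring sense name."""
    ctx = {w.lower() for w in context_words}

    def overlap(gloss):
        return len(ctx & set(gloss.lower().split()))

    return max(senses, key=lambda name: overlap(senses[name]))

# hypothetical toy glosses; a real system would use a lexical ontology
senses = {
    "bank/finance": "an institution that accepts deposits and lends money",
    "bank/river": "sloping land beside a body of water such as a river",
}
best = lesk("she sat on the sloping land by the river".split(), senses)
```

Here the river sense wins because its gloss shares "sloping", "land", and "river" with the context, while the finance gloss shares nothing.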
Research on assessment methods for urban public transport development in China.
Zou, Linghong; Dai, Hongna; Yao, Enjian; Jiang, Tian; Guo, Hongwei
2014-01-01
In recent years, with the rapid increase in urban population, travel demands in Chinese cities have been increasing dramatically. As a result, developing comprehensive urban transport systems becomes an inevitable choice to meet the growing travel demands. In urban transport systems, public transport plays the leading role in promoting sustainable urban development. This paper aims to establish an assessment index system for the development level of urban public transport consisting of a target layer, a criterion layer, and an index layer. A review of the existing literature shows that the methods used in evaluating urban public transport structure are predominantly qualitative. To overcome this shortcoming, fuzzy mathematics is used to describe qualitative issues quantitatively, and the AHP (analytic hierarchy process) is used to quantify experts' subjective judgment. The assessment model is established based on the fuzzy AHP: the weight of each index is determined through the AHP, and the degree of membership of each index is determined through the fuzzy assessment method, yielding the fuzzy synthetic assessment matrix. Finally, a case study is conducted to verify the rationality and practicability of the assessment system and the proposed assessment method. PMID:25530756
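As a sketch of the AHP weighting step only (the paper's index system and fuzzy membership details are not reproduced; the matrices below are illustrative), the weight of each index is the normalized principal eigenvector of a pairwise comparison matrix, and Saaty's consistency ratio checks the expert judgments:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector of the
    pairwise comparison matrix (Saaty's eigenvector method)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = vecs[:, np.argmax(vals.real)].real
    w = np.abs(principal)
    return w / w.sum()

def consistency_ratio(pairwise):
    """CR = CI / RI; judgments are usually accepted when CR < 0.1."""
    n = pairwise.shape[0]
    lam_max = np.max(np.linalg.eigvals(pairwise).real)
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
    return ci / ri

# a perfectly consistent matrix a_ij = w_i / w_j recovers w exactly
w_true = np.array([0.5, 0.3, 0.2])
A = w_true[:, None] / w_true[None, :]
w = ahp_weights(A)
```

For a consistent matrix the consistency ratio is zero; real expert matrices are only approximately consistent, which is where the CR < 0.1 acceptance rule comes in.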
Zhang, Jun; Hsieh, Jui-Hua; Zhu, Hao
2014-01-01
In vitro bioassays have been developed and are currently being evaluated as potential alternatives to traditional animal toxicity models. Already, the progress of high throughput screening techniques has resulted in an enormous amount of publicly available bioassay data having been generated for a large collection of compounds. When a compound is tested using a collection of various bioassays, all the testing results can be considered as providing a unique bio-profile for this compound, which records the responses induced when the compound interacts with different cellular systems or biological targets. Profiling compounds of environmental or pharmaceutical interest using useful toxicity bioassay data is a promising method to study complex animal toxicity. In this study, we developed an automatic virtual profiling tool to evaluate potential animal toxicants. First, we automatically acquired all PubChem bioassay data for a set of 4,841 compounds with publicly available rat acute toxicity results. Next, we developed a scoring system to evaluate the relevance between these extracted bioassays and animal acute toxicity. Finally, the top ranked bioassays were selected to profile the compounds of interest. The resulting response profiles proved to be useful to prioritize untested compounds for their animal toxicity potentials and form a potential in vitro toxicity testing panel. The protocol developed in this study could be combined with structure-activity approaches and used to explore additional publicly available bioassay datasets for modeling a broader range of animal toxicities. PMID:24950175
A FAST NEW PUBLIC CODE FOR COMPUTING PHOTON ORBITS IN A KERR SPACETIME
Dexter, Jason [Department of Physics, University of Washington, Seattle, WA 98195-1560 (United States); Agol, Eric [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States)], E-mail: jdexter@u.washington.edu
2009-05-10
Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. We present a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semianalytically for the first time. We discuss the method, its implementation in a freely available FORTRAN code, and its application to toy problems from the literature.
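Carlson's symmetric integral R_F, to which the paper reduces its geodesic integrals, can be evaluated with a short duplication-theorem iteration. The sketch below is a generic implementation (not the paper's FORTRAN code), checked against the identity K(m) = R_F(0, 1-m, 1) for the complete elliptic integral of the first kind:

```python
import math

def carlson_rf(x, y, z, rtol=1e-14):
    """Carlson's R_F(x, y, z) via the duplication theorem: each pass
    shrinks the spread of the arguments by a factor of four, until a
    fifth-order Taylor expansion about the mean is accurate."""
    while True:
        sx, sy, sz = math.sqrt(x), math.sqrt(y), math.sqrt(z)
        lam = sx * sy + sy * sz + sz * sx
        x, y, z = (x + lam) / 4, (y + lam) / 4, (z + lam) / 4
        mu = (x + y + z) / 3
        if max(abs(x - mu), abs(y - mu), abs(z - mu)) < rtol * mu:
            break
    X, Y, Z = 1 - x / mu, 1 - y / mu, 1 - z / mu
    e2 = X * Y - Z * Z          # equals XY + YZ + ZX since X + Y + Z = 0
    e3 = X * Y * Z
    s = 1 - e2 / 10 + e3 / 14 + e2 * e2 / 24 - 3 * e2 * e3 / 44
    return s / math.sqrt(mu)

# complete elliptic integral of the first kind: K(m) = R_F(0, 1 - m, 1)
K_half = carlson_rf(0.0, 0.5, 1.0)
```

The same routine with complex arguments underlies the semianalytic coordinate evaluation the abstract mentions; libraries such as SciPy also expose Carlson forms directly.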
Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?
ERIC Educational Resources Information Center
Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.
2011-01-01
This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…
Some Algorithms for Computing Discrete Logarithms in Finite Fields, and a New Public Key Cryptosystem
NASA Astrophysics Data System (ADS)
Trendafilov, Ivan D.; Durcheva, Mariana I.
2010-10-01
Let p be a prime, Fp be a finite field, g be a primitive element of Fp, and let h be a nonzero element of Fp. The discrete logarithm problem (DLP) is the problem of finding an exponent k for which g^k ≡ h (mod p). The well-known problem of computing discrete logarithms has gained additional importance in recent years due to its applicability in cryptography. Several cryptographic systems would become insecure if an efficient discrete logarithm algorithm were discovered. Some known algorithms in this area are discussed in this paper. Most public key cryptosystems have been constructed based on abelian groups. Here we show how the discrete logarithm problem over a group can be seen as a special instance of an action by an abelian semigroup on a finite set. The proposed new public key cryptosystem generalizes the semigroup action problem due to Rosenlicht (see [8]) and shows how every semigroup action by an abelian semigroup gives rise to a Diffie-Hellman key exchange.
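The abstract does not reproduce the surveyed algorithms; as one concrete example of a generic-group method for the DLP, baby-step giant-step solves g^k ≡ h (mod p) in O(√p) time and memory:

```python
from math import isqrt

def bsgs(g, h, p):
    """Baby-step giant-step: find k with pow(g, k, p) == h, if one exists.
    Writes k = i*m + j and meets in the middle via a hash table."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    factor = pow(g, -m, p)                       # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):                           # giant steps h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None                                  # h not in the subgroup <g>

# example with a small prime; real cryptographic moduli are far larger
p, g = 1019, 2
h = pow(g, 711, p)
k = bsgs(g, h, p)
```

The square-root cost is exactly why cryptographic group orders are chosen with at least twice the desired security level in bits.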
Wavelet method for CT colonography computer-aided polyp detection.
Li, Jiang; Van Uitert, Robert; Yao, Jianhua; Petrick, Nicholas; Franaszek, Marek; Huang, Adam; Summers, Ronald M
2008-08-01
Computed tomographic colonography (CTC) computer aided detection (CAD) is a new method to detect colon polyps. Colonic polyps are abnormal growths that may become cancerous. Detection and removal of colonic polyps, particularly larger ones, has been shown to reduce the incidence of colorectal cancer. While high sensitivities and low false positive rates are consistently achieved for the detection of polyps sized 1 cm or larger, lower sensitivities and higher false positive rates occur when the goal of CAD is to identify "medium"-sized polyps, 6-9 mm in diameter. Such medium-sized polyps may be important for clinical patient management. We have developed a wavelet-based postprocessor to reduce false positives for this polyp size range. We applied the wavelet-based postprocessor to CTC CAD findings from 44 patients in whom 45 polyps with sizes of 6-9 mm were found at segmentally unblinded optical colonoscopy and visible on retrospective review of the CT colonography images. Prior to the application of the wavelet-based postprocessor, the CTC CAD system detected 33 of the polyps (sensitivity 73.33%) with 12.4 false positives per patient, a sensitivity comparable to that of expert radiologists. Fourfold cross validation with 5000 bootstraps showed that the wavelet-based postprocessor could reduce the false positives by 56.61% (p <0.001), to 5.38 per patient (95% confidence interval [4.41, 6.34]), without significant sensitivity degradation (32/45, 71.11%, 95% confidence interval [66.39%, 75.74%], p=0.1713). We conclude that this wavelet-based postprocessor can substantially reduce the false positive rate of our CTC CAD for this important polyp size range. PMID:18777913
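The paper's specific wavelet postprocessor is not described in the abstract. For orientation, the elementary building block of any such wavelet feature extractor is a single level of the orthonormal Haar transform, which splits a signal into approximation and detail coefficients; this sketch is generic, not the authors' code:

```python
import numpy as np

def haar_1level(x):
    """One level of the orthonormal Haar wavelet transform:
    pairwise averages (approximation) and differences (detail)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_1level (perfect reconstruction)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```

Detail coefficients are near zero on smooth regions and large near edges, which is what makes them useful as texture features for candidate classification.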
The Repeated Replacement Method: A Pure Lagrangian Meshfree Method for Computational Fluid Dynamics
Walker, Wade A.
2012-01-01
In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling compressible fluids’ tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly “chops out” fluid from active areas and replaces it with new “flattened” fluid cells with the same mass, momentum, and energy. We call the new cells “flattened” because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175
Soft Computing Explains Heuristic Numerical Methods in Data Processing and in Logic Programming
Kreinovich, Vladik
Soft computing approaches explain and justify heuristic numerical methods used in data processing and in logic programming (fixed point theorems, etc.). Introduction: what is soft computing good for? The traditional viewpoint…
NEWTON METHOD FOR RIEMANNIAN CENTROID COMPUTATION IN NATURALLY REDUCTIVE HOMOGENEOUS SPACES
Instituto de Sistemas e Robotica
A Newton scheme for Riemannian centroid computation in naturally reductive homogeneous spaces is presented. This is achieved by exploiting a formula that we introduce. Computer simulation results show the quadratic convergence of the Newton method derived herein.
Method for Computing Protein Binding Affinity CHARLES F. F. KARNEY,1
Karney, Charles
A method to compute the binding affinity of a ligand to a protein. The method involves extending configuration space […] energy of binding. © 2004 Wiley Periodicals, Inc. J. Comput. Chem. 26: 243-251, 2005. Key words: free energy…
Computer Simulation of Aqueous Block Copolymer Assemblies: Length Scales and Methods
Nielsen, Steven O.
(E-mail: discher@seas.upenn.edu) ABSTRACT: Atomistic, coarse-grain, and mesoscopic computer simulation methods […]; the research focuses on the use of computational techniques to study the mechanical properties and drug…
Methods, Metrics and Motivation for a Green Computer Science Program
Way, Thomas
…and, more recently, economically sensible [13]. "Going Green" implies reducing your energy use and pollution footprint. The technology community, specifically computer users, has popularized the term "Green Computing": the reduction of the pollution and energy footprint of computers [19]. While the goal…
Non-unitary probabilistic quantum computing circuit and method
NASA Technical Reports Server (NTRS)
Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)
2009-01-01
A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
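One standard way to realize such a probabilistic non-unitary operation, sketched here as the general unitary-dilation construction rather than the patented circuit itself, is to embed a contraction A into a unitary on system plus ancilla; measuring the ancilla in |0⟩ signals success (A was applied, up to normalization), and the other outcome is the failure branch:

```python
import numpy as np

def herm_sqrt(M):
    """Square root of a Hermitian positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

def dilate(A):
    """Embed a contraction A (operator norm <= 1) into a unitary on a
    doubled space; the ancilla-|0> block of the output carries A."""
    n = A.shape[0]
    I = np.eye(n)
    Da = herm_sqrt(I - A.conj().T @ A)    # defect operators
    Dad = herm_sqrt(I - A @ A.conj().T)
    return np.block([[A, Dad], [Da, -A.conj().T]])

A = np.array([[0.6, 0.2], [0.1, 0.5]])    # an illustrative non-unitary contraction
U = dilate(A)
psi = np.array([1.0, 0.0])
out = U @ np.concatenate([psi, np.zeros(2)])
# out[:2] is the success branch (ancilla |0>), proportional to A @ psi;
# the success probability is the squared norm of A @ psi
```

On failure one is left with the orthogonal branch, to which the circuit can be applied again, matching the repeat-until-success behavior the patent abstract describes.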
Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery
Luttman, A.
2012-03-30
The main long-term goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on the observed spatiotemporal evolution is performed using an objective function encoding expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer for the problem is computed. An application to oceanic flow based on sea surface temperature is presented.
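The abstract only outlines the variational approach. The classic instance of computing a vector field from image data by variational optimization is Horn-Schunck optical flow; the sketch below (parameters, iteration count, and the synthetic blob are all illustrative assumptions, not the authors' formulation) minimizes a data-fidelity term plus a smoothness prior by Jacobi-style iterations:

```python
import numpy as np

def neighbor_avg(f):
    """Four-neighbor average with periodic boundaries."""
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                   + np.roll(f, 1, 1) + np.roll(f, -1, 1))

def horn_schunck(I1, I2, alpha=1.0, n_iter=300):
    """Minimize (Ix*u + Iy*v + It)^2 + alpha^2 * |grad flow|^2 by
    iterating the Euler-Lagrange update equations."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    denom = alpha ** 2 + Ix ** 2 + Iy ** 2
    for _ in range(n_iter):
        ub, vb = neighbor_avg(u), neighbor_avg(v)
        r = (Ix * ub + Iy * vb + It) / denom
        u, v = ub - Ix * r, vb - Iy * r
    return u, v

# synthetic data: a bright blob translated one pixel to the right
y, x = np.mgrid[0:64, 0:64]
I1 = 5.0 * np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 50.0)
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)   # u should point in +x on average
```

The smoothness weight alpha plays the role of the scientific prior: it fills in the flow where the data term is uninformative.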
Pellacini, Fabio
IEEE Computer Graphics and Applications: Toward Evaluating Progressive Rendering Methods (Charles University in Prague; Sapienza University of Rome). A progressive renderer avoids pre-computation completely; instead, it gradually improves […]. Abstract: Progressive rendering is becoming…
NASA Astrophysics Data System (ADS)
Petoussi-Henss, Nina; Bolch, Wesley E.; Eckerman, Keith F.; Endo, Akira; Hertel, Nolan; Hunt, John; Menzel, Hans G.; Pelliccioni, Maurizio; Schlattl, Helmut; Zankl, Maria
2014-09-01
ICRP Publication 116 on ‘Conversion coefficients for radiological protection quantities for external radiation exposures’, provides fluence-to-dose conversion coefficients for organ-absorbed doses and effective dose for various types of external exposures (ICRP 2010 ICRP Publication 116). The publication supersedes the ICRP Publication 74 (ICRP 1996 ICRP Publication 74, ICRU 1998 ICRU Report 57), including new particle types and expanding the energy ranges considered. The coefficients were calculated using the ICRP/ICRU computational phantoms (ICRP 2009 ICRP Publication 110) representing the reference adult male and reference adult female (ICRP 2002 ICRP Publication 89), together with a variety of Monte Carlo codes simulating the radiation transport in the body. Idealized whole-body irradiation from unidirectional and rotational parallel beams as well as isotropic irradiation was considered for a large variety of incident radiations and energy ranges. Comparison of the effective doses with operational quantities revealed that the latter quantities continue to provide a good approximation of effective dose for photons, neutrons and electrons for the ‘conventional’ energy ranges considered previously (ICRP 1996, ICRU 1998), but not at the higher energies of ICRP Publication 116.
Buszynski, M.E.
1996-12-31
Many proponents of gas pipeline studies using the public roadway for their facilities have trouble encouraging public participation. Problems resulting from a lack of public involvement are documented. A public participation process designed to gather meaningful public input is presented through a case study of a public roadway pipeline study in southern Ontario. Techniques are outlined to effectively stimulate public interest and document the public involvement process. Recommendations are made as to the transferability of this process to other jurisdictions.
An efficient method for calculation of cooling in Lagrange computational gas dynamics
E. P. Kurbatov
2007-05-15
A new method for the computation of gas cooling in the Lagrangian approach is suggested. The method is based on precalculation of the cooling law for a known cooling function. Unlike implicit methods, this method is very efficient: it is a one-step method that is even more accurate than implicit methods of the same order.
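The abstract does not give the scheme itself; the sketch below illustrates the general precalculation idea (as in exact-integration cooling schemes, with all names and the test cooling function Λ(T) = T chosen for illustration): tabulate Y(T) = ∫_T^{T_max} dT'/Λ(T') once, then advance dT/dt = -Λ(T) over any step in one shot via Y(T_new) = Y(T_old) + Δt and a table inversion:

```python
import numpy as np

def make_cooling_table(lam, T_min, T_max, n=4000):
    """Precompute Y(T) = integral from T to T_max of dT'/lam(T')
    on a log-spaced temperature grid (one-time cost)."""
    T = np.geomspace(T_min, T_max, n)
    inv = 1.0 / lam(T)
    seg = 0.5 * (inv[1:] + inv[:-1]) * np.diff(T)          # trapezoid pieces
    Y = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])  # Y(T_max) = 0
    return T, Y

def cool(T0, dt, T, Y):
    """One-step, non-iterative update: Y(T_new) = Y(T0) + dt, inverted
    by interpolation (Y decreases with T, so reverse for np.interp)."""
    y_new = np.interp(T0, T, Y) + dt
    return np.interp(y_new, Y[::-1], T[::-1])

# with lam(T) = T the exact solution is T(t) = T0 * exp(-t)
T_grid, Y_grid = make_cooling_table(lambda T: T, 1e-3, 10.0)
T_new = cool(5.0, 1.0, T_grid, Y_grid)
```

The step size never has to resolve the cooling timescale, which is the efficiency advantage over implicit iteration that the abstract claims.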
A method for computing the leading-edge suction in a higher-order panel method
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Manro, M. E.
1984-01-01
Experimental data show that the phenomenon of a separation-induced leading-edge vortex is influenced by the wing thickness and the shape of the leading edge. Both thickness and leading-edge shape (rounded rather than pointed) delay the formation of a vortex. Existing computer programs used to predict the effect of a leading-edge vortex do not include a procedure for determining whether or not a vortex actually exists. Studies under NASA Contract NAS1-15678 have shown that the vortex development can be predicted by using the relationship between the leading-edge suction coefficient and the parabolic nose drag. The linear-theory program FLEXSTAB was used to calculate the leading-edge suction coefficient. This report describes the development of a method for calculating leading-edge suction using the capabilities of higher-order panel methods (exact boundary conditions). For a two-dimensional case, numerical methods were developed using the doublet strength and downwash distribution along the chord. A Gaussian quadrature formula that directly incorporates the logarithmic singularity in the downwash distribution, at all panel edges, was found to be the best method.
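The report's specific formula is not reproduced in the abstract. A generic way to build such a rule is to generate Gaussian nodes and weights for the weight function -ln(x) on [0, 1], whose moments are m_k = 1/(k+1)^2, via the Golub-Welsch procedure on the Hankel moment matrix. This sketch is illustrative and only practical for small n, since the moment matrix becomes ill-conditioned:

```python
import numpy as np

def gauss_log(n):
    """n-point Gaussian quadrature for the weight -ln(x) on [0, 1],
    built from its moments via Cholesky of the Hankel moment matrix."""
    k = np.arange(2 * n + 1)
    m = 1.0 / (k + 1) ** 2                       # moments of -ln(x)
    M = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(M).T                  # upper triangular factor
    alpha = np.empty(n)
    alpha[0] = R[0, 1] / R[0, 0]
    for j in range(1, n):
        alpha[j] = R[j, j + 1] / R[j, j] - R[j - 1, j] / R[j - 1, j - 1]
    beta = np.array([R[j + 1, j + 1] / R[j, j] for j in range(n - 1)])
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)              # Golub-Welsch step
    weights = m[0] * vecs[0, :] ** 2
    return nodes, weights

nodes, weights = gauss_log(3)
```

Such a rule integrates polynomials up to degree 2n-1 against -ln(x) exactly, which is the sense in which it "directly incorporates" the logarithmic singularity instead of resolving it with many ordinary Gauss points.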
Computer technology to assist states, tribes, and clients in the design of wellhead and source water protection areas for public water supply wells is being developed through two distinct SubTasks: (Sub task 1) developing a web-based wellhead decision support system, WellHEDSS, t...
ERIC Educational Resources Information Center
Amodeo, Luiza B.; Martin, Jeanette
To a large extent the Southwest can be described as a rural area. Under these circumstances, programs for public understanding of technology become, first of all, exercises in logistics. In 1982, New Mexico State University introduced a program to inform teachers about computer technology. This program takes microcomputers into rural classrooms…
Bitar, D; Che, D; Capek, I; de Valk, H; Saura, C
2011-02-01
One of the objectives of the surveillance systems implemented by the French National Institute for Public Health Surveillance is to detect communicable diseases and to reduce their impact. For emerging infections, the detection and risk analysis pose specific challenges due to lack of documented criteria for the event. The surveillance systems detect a variety of events, or "signals" which represent a potential risk, such as a novel germ, a pathogen which may disseminate in a non-endemic area, or an abnormal number of cases for a well-known disease. These signals are first verified and analyzed, then classified as: potential public health threat, event to follow-up, or absence of threat. Through various examples, we illustrate the method and criteria which are used to analyze and classify these events considered to be emerging. The examples highlight the importance of host characteristics and exposure in groups at particular risk, such as professionals in veterinarian services, health care workers, travelers, immunodepressed patients, etc. The described method should allow us to identify future needs in terms of surveillance and to improve timeliness, quality of expertise, and feedback information regarding the public health risk posed by events which are insufficiently documented. PMID:21251782
16.901 Computational Methods in Aerospace Engineering, Spring 2003
Darmofal, David L.
Introduction to computational techniques arising in aerospace engineering. Applications drawn from aerospace structures, aerodynamics, dynamics and control, and aerospace systems. Techniques include: numerical integration ...
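As a minimal example of the numerical-integration techniques such a course covers (a generic sketch, not actual course material), the classical fourth-order Runge-Kutta step for y' = f(t, y):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, n):
    """Advance from t0 to t1 in n equal RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# y' = y with y(0) = 1 gives y(1) = e, accurate to O(h^4)
e_approx = integrate(lambda t, y: y, 0.0, 1.0, 1.0, 100)
```

Halving h reduces the global error by roughly a factor of sixteen, the standard convergence check in such a course.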
A front-tracking method for computation of interfacial flows with soluble surfactants
Muradoglu, Metin
A finite-difference/front-tracking method is developed for computations of interfacial flows with soluble surfactants. The method is designed to solve the evolution equations of the interfacial and bulk surfactant concentrations together…
Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.
ERIC Educational Resources Information Center
Heald, Emerson F.
1978-01-01
Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
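The ERIC record gives no formulas. The idea can be sketched for the simplest case, an ideal isomerization A <-> B with a material balance of one mole total: minimizing the mixture free energy reproduces the equilibrium constant K = exp(-ΔG°/RT). The ΔG° value below is illustrative, not from the article:

```python
import numpy as np

R = 8.314       # J/(mol K)
T = 298.15      # K
dG0 = -3000.0   # J/mol, assumed standard free energy change for A -> B

def gibbs(x):
    """Dimensionless free energy G/RT for A <-> B at extent x in (0, 1):
    ideal mixing terms n*ln(mole fraction) plus the standard-state term."""
    nA, nB = 1.0 - x, x
    return nA * np.log(nA) + nB * np.log(nB) + nB * dG0 / (R * T)

# locate the minimum by a fine grid search (a teaching-friendly method)
x = np.linspace(1e-6, 1.0 - 1e-6, 200001)
x_eq = x[np.argmin(gibbs(x))]
K = np.exp(-dG0 / (R * T))
# at the minimum, x_eq / (1 - x_eq) equals the equilibrium constant K
```

The same minimization with element-balance constraints generalizes to multi-species equilibria, which is the approach the article's program takes.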
A Wavefront-Based Gaussian Beam Method for Computing High Frequency Wave Propagation Problems
Runborg, Olof
Abstract: We present a novel wavefront method based on Gaussian beams for computing high-frequency wave propagation problems. The performance of the method is illustrated with two numerical examples. Keywords: wave propagation, high frequency, asymptotic…
3D modeling method for computer animate based on modified weak structured light method
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei
2010-11-01
A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design, and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, while, on the other hand, precision is not as critical a factor in that situation. In this paper, a new, inexpensive 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source, and a straight stick rotating on a fixed axis. In an ordinary weak-structured-light configuration, one or two reference planes are required, and the shadows on these planes must be tracked in the scanning process, which destroys the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of scannable objects is expanded widely. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints into a full description of the object, and after a series of operations, a NURBS surface model is generated in the end. A complex toy bear is used to verify the efficiency of the method, with errors ranging from 0.7783 mm to 1.4326 mm compared with the ground truth measurement.
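The paper's two-stage ICP is not specified further; the core computation inside every ICP iteration is the closed-form least-squares rigid alignment of corresponded points (the Kabsch/Procrustes step), sketched here on synthetic data:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P + t ~= Q,
    for corresponded 3 x N point sets (Kabsch via SVD)."""
    cP = P.mean(axis=1, keepdims=True)
    cQ = Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(P.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # exclude reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# recover a known rotation and translation from noiseless correspondences
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 40))
theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [-2.0], [0.5]])
Q = Rz @ P + t_true
R, t = best_rigid_transform(P, Q)
```

Full ICP alternates this step with nearest-neighbor correspondence search; a two-stage variant typically runs it first on a coarse subsample and then on the full clouds.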
ISE 407: Computational Methods in Optimization Dr. Ted Ralphs
Ralphs, Ted
Data structures, design and analysis of algorithms (sequential and parallel), programming paradigms and languages, applications with a focus on numerical analysis and practical issues that arise in floating-point computation, development tools and environments, and matrix computations. 3. Course Objectives: The goals…
Parallel Computing Environments and Methods for Power Distribution System Simulation
Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.
2005-11-10
The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, using detailed thermal models of appliances. This approach works well for a small power distribution system consisting of a few thousand appliances. As the number of appliances increases, however, the simulation exhausts the PC's memory and its run time grows to the point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.
Evaluating Computer Automated Scoring: Issues, Methods, and an Empirical Illustration
ERIC Educational Resources Information Center
Yang, Yongwei; Buckendahl, Chad W.; Juszkiewicz, Piotr J.; Bhola, Dennison S.
2005-01-01
With the continual progress of computer technologies, computer automated scoring (CAS) has become a popular tool for evaluating writing assessments. Research on applications of these methodologies to new types of performance assessments is still emerging. While research has generally shown high agreement of CAS-system-generated scores with those…
High-order finite-difference methods in computational electromagnetics
D. W. Zingg
1997-01-01
There exists a class of problems in computational electromagnetics (CEM) which require very large computer resources. These problems are characterized by a geometry which has a large electrical size, i.e., the dimensions of the scatterer greatly exceed the wavelength of the incident electromagnetic wave. An example is the radar cross-section analysis of an entire airplane with an incident wave having
Students' Attitudes towards Control Methods in Computer-Assisted Instruction.
ERIC Educational Resources Information Center
Hintze, Hanne; And Others
1988-01-01
Describes study designed to investigate dental students' attitudes toward computer-assisted teaching as applied in programs for oral radiology in Denmark. Programs using personal computers and slide projectors with varying degrees of learner and teacher control are described, and differences in attitudes between male and female students are…
Kepler, Grace Martinelli
Reduced Order Computational Methods for Electromagnetic Material Interrogation Using Pulsed Signals
… of a pulsed planar electromagnetic wave of a dielectric slab with a supraconductive backing. Previous work
ERIC Educational Resources Information Center
Barclay, Donald A.
This book, while necessarily concerning itself with computer technology, approaches technology as a tool for providing public service, and helps librarians and others effectively manage public-access computers. The book is organized to progress from more technological to more managerial topics. The first chapter--which answers the question, "What…
A novel computational method for inferring dynamic genetic regulatory trajectories
Reeder, Christopher Campbell
2008-01-01
We present a novel method called Time Series Affinity Propagation (TSAP) for inferring regulatory states and trajectories from time series genomic data. This method builds on the Affinity Propagation method of Frey and ...
ERIC Educational Resources Information Center
Bachman, Charles A.
2010-01-01
While private sector organizations have implemented enterprise resource planning (ERP) systems since the mid-1990s, ERP implementations within the public sector lagged by several years. This research conducted a mixed-method, comparative assessment of post "go-live" ERP implementations between public and private sector organizations. Based on a…
Chu, Kevin T.
to avoid computation of the full Jacobian. While Jacobian-free methods, such as the Newton-Krylov method, … iteration of the classical formulation of Newton's method. Unfortunately, calculation of the Jacobian can
Keywords: Analytical Jacobian; Numerical methods; Matrix calculus; Newton's method; Integro-differential equations
Edwards, Kyle T
2014-07-01
In recent years, liberal democratic societies have struggled with the question of how best to balance expertise and democratic participation in the regulation of emerging technologies. This study aims to explain how national deliberative ethics committees handle the practical tension between scientific expertise, ethical expertise, expert patient input, and lay public input by explaining two institutions' processes for determining the legitimacy or illegitimacy of reasons in public policy decision-making: that of the United Kingdom's Human Fertilisation and Embryology Authority (HFEA) and the United States' American Society for Reproductive Medicine (ASRM). The articulation of these 'methods of legitimation' draws on 13 in-depth interviews with HFEA and ASRM members and staff conducted in January and February 2012 in London and over Skype, as well as observation of an HFEA deliberation. This study finds that these two institutions employ different methods in rendering certain arguments legitimate and others illegitimate: while the HFEA attempts to 'balance' competing reasons but ultimately legitimizes arguments based on health and welfare concerns, the ASRM seeks to 'filter' out arguments that challenge reproductive autonomy. The notably different structures and missions of each institution may explain these divergent approaches, as may what Sheila Jasanoff (2005) terms the distinctive 'civic epistemologies' of the US and the UK. Significantly for policy makers designing such deliberative committees, each method differs substantially from that explicitly or implicitly endorsed by the institution. PMID:24833251
Hamad, Rita; Pomeranz, Jennifer L.; Siddiqi, Arjumand; Basu, Sanjay
2015-01-01
Objective Analyzing news media allows obesity policy researchers to understand popular conceptions about obesity, which is important for targeting health education and policies. A persistent dilemma is that investigators have to read and manually classify thousands of individual news articles to identify how obesity and obesity-related policy proposals may be described to the public in the media. We demonstrate a novel method called “automated content analysis” that permits researchers to train computers to “read” and classify massive volumes of documents. Methods We identified 14,302 newspaper articles that mentioned the word “obesity” during 2011–2012. We examined four states that vary in obesity prevalence and policy (Alabama, California, New Jersey, and North Carolina). We tested the reliability of an automated program to categorize the media’s “framing” of obesity as an individual-level problem (e.g., diet) and/or an environmental-level problem (e.g., obesogenic environment). Results The automated program performed similarly to human coders. The proportion of articles with individual-level framing (27.7–31.0%) was higher than the proportion with neutral (18.0–22.1%) or environmental-level framing (16.0–16.4%) across all states and over the entire study period (p<0.05). Conclusion We demonstrate a novel approach to the study of how obesity concepts are communicated and propagated in news media. PMID:25522013
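The training-based classification workflow described above can be illustrated with a toy supervised text classifier. This is a minimal sketch only: the study's actual "automated content analysis" software and features are not specified in the abstract, and the multinomial naive Bayes model, the function names, and the hand-coded training labels below are all assumptions for illustration.

```python
from collections import Counter, defaultdict
import math

def train_nb(labeled_docs):
    """Multinomial naive Bayes with add-one smoothing, trained on
    hand-coded (text, label) articles, as a stand-in for the study's
    classifier trained against human coders."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in labeled_docs:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(text, model):
    """Pick the label with the highest smoothed log-posterior."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Once trained on a manually coded subset, such a model can label the remaining thousands of articles automatically, which is the scaling benefit the abstract describes.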
Methodical Approaches to Teaching of Computer Modeling in Computer Science Course
ERIC Educational Resources Information Center
Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina
2015-01-01
The purpose of this study was to justify of the formation technique of representation of modeling methodology at computer science lessons. The necessity of studying computer modeling is that the current trends of strengthening of general education and worldview functions of computer science define the necessity of additional research of the…
A New Method of Building Keyboarding Speed on the Computer.
ERIC Educational Resources Information Center
Sharp, Walter M.
1998-01-01
Use of digraphs (pairs of letters representing single speech sounds) in keyboarding is facilitated by computer technology allowing analysis of speed between keystrokes. New software programs provide a way to develop keyboarding speed. (SK)
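The kind of inter-keystroke analysis described can be sketched directly: given a log of (character, timestamp) pairs, mean latency per digraph identifies which letter pairs to drill. The data format and function names here are assumptions for illustration, not the cited software's design.

```python
from collections import defaultdict

def digraph_latencies(keystrokes):
    """Mean latency (ms) between consecutive keystrokes, grouped by
    digraph. `keystrokes` is a list of (character, timestamp_ms) pairs."""
    sums = defaultdict(lambda: [0.0, 0])   # digraph -> [total_ms, count]
    for (a, t0), (b, t1) in zip(keystrokes, keystrokes[1:]):
        entry = sums[a + b]
        entry[0] += t1 - t0
        entry[1] += 1
    return {dg: total / n for dg, (total, n) in sums.items()}

def slowest_digraphs(keystrokes, k=3):
    """Digraphs to drill first: the k with the highest mean latency."""
    means = digraph_latencies(keystrokes)
    return sorted(means, key=means.get, reverse=True)[:k]
```

Drilling the slowest digraphs, rather than whole words, is the speed-building idea the abstract attributes to the newer software.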
COMPUTATIONAL METHODS FOR ELECTRONIC HEALTH RECORD-DRIVEN PHENOTYPING
Page Jr., C. David
Committee: C. David Page Jr. (Professor, Computer Science, Biostatistics and Medical Informatics); Eneida A. Mendonca (Associate Professor); David L. DeMets (Professor, Biostatistics and Medical Informatics); Murray Brilliant (Senior Scientist). … dollars on patient-related medical research. Accurately classifying patients into categories representing
Teach High School Students Computer Algebra Methods Frank Rioux
Rioux, Frank
The total area (yellow plus green) is 450, so my two equations are given below. This is the most important … term papers and social networking purposes. It's time they were taught how to use the computer to solve
Cluster-based computational methods for mass univariate
Millar, Andrew J.
'…-related brain potentials/fields: A simulation study', published in Journal of Neuroscience Methods (2014), vol. 250, pp. 85–93, http://dx.doi.org/10.1016/j.jneumeth.2014.08.003
Computational Fluid Dynamics. [numerical methods and algorithm development
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12-14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are discussed, along with examples of results obtained with the most recent algorithm developments.
One-to-One Computing in Public Schools: Lessons from "Laptops for All" Programs
ERIC Educational Resources Information Center
Abell Foundation, 2008
2008-01-01
The basic tenet of one-to-one computing is that the student and teacher have Internet-connected, wireless computing devices in the classroom and optimally at home as well. Also known as "ubiquitous computing," this strategy assumes that every teacher and student has her own computing device and obviates the need for moving classes to computer…
An Analysis of Resource Costs in a Public Computing Grid John A. Chandy
Chandy, John A.
has been termed utility computing in the sense that computing cycles are a utility service such as electricity. More recently, the term cloud computing has been adopted to reflect the fact that computing resources are not local to the user but are instead in the Internet "cloud". Users of these cloud computing
Computer Decision-Support Systems for Public Argumentation: Assessing Deliberative Legitimacy
McBurney, Peter
In this paper we are concerned with recent attempts to develop decision-support systems for processes of public policy planning. … the Internet, to enable democratic participation in public policy decision-making processes (e.g. Ess 1996). … -assisted argumentation have drawn on dialectical models of argumentation. When used to assist public policy planning
On the error of computing ab + cd using Cornea, Harrison and Tang's method
Paris-Sud XI, Université de
On the error of computing ab + cd using Cornea, Harrison and Tang's method. Jean-Michel Muller, CNRS, 2013. Abstract: The method under study was introduced by Cornea, Harrison and Tang in their book Scientific Computing on The Itanium [1]. Cornea et al
A Fast Method for Real-time Computation of Approximated Global Illumination
Lv Weiwei; Jian Lu; Xuehui Liu; Enhua Wu
2009-01-01
We present a fast method for real-time computation of approximated global illumination for fully dynamic scenes under area light sources. To accelerate the computation, we use simplified models to calculate the indirect illumination, while rendering the direct illumination with the original complex models. After direct illumination is computed with the convolution soft shadow maps algorithm, color, position and normal textures are generated,
Numerical Modeling of Earth Systems An introduction to computational methods with focus on
Becker, Thorsten W.
Numerical Modeling of Earth Systems: An introduction to computational methods with focus on solid … Contents fragment: Languages; 2.3.3 Elements of a computer program; 2.3.4 Guiding philosophy in writing a computer program; 2.3.5 Guidelines
Computer Help at Home: Methods and Motivations for Informal Technical Support
Edwards, Keith
Computer Help at Home: Methods and Motivations for Informal Technical Support. Erika Shehan Poole … for computer help. But what influences whether and how a "helper" will provide help? To answer this question … identity as a computer expert and accountability to one's social network determine who receives help
The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances
NASA Technical Reports Server (NTRS)
Beltran, Adriana; Salvador, James
1997-01-01
In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.
This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
Ainy, Elaheh; Soori, Hamid; Ganjali, Mojtaba; Baghfalaki, Taban
2015-01-01
Background and Aim: To allocate resources at the national level and ensure road safety with economic efficiency, cost calculation can help determine the size of the problem and demonstrate the economic benefits of preventing such injuries. This study was carried out to elicit the cost of traffic injuries among Iranian drivers of public vehicles. Materials and Methods: In a cross-sectional study, 410 drivers of public vehicles were randomly selected from all such drivers in the city of Tehran, Iran. The research questionnaire was prepared based on the standard willingness-to-pay (WTP) method (stated preference (SP), contingent valuation (CV), and revealed preference (RP) models). Data were collected along with a scenario for vehicle drivers. Inclusion criteria were at least a high school education and an age of 18 to 65 years. Final analysis of willingness to pay was carried out using the Weibull model. Results: Mean WTP was 3,337,130 IRR among drivers of public vehicles. The statistical value of life was estimated at 118,222,552,601,648 IRR based on 4,694 driver deaths, equivalent to $3,940,751,753 at the free-market rate of 30,000 IRR per dollar (purchasing power parity). Injury cost was 108,376,366,437,500 IRR, equivalent to $3,612,545,548. In sum, injury and death cases came to 226,606,472,346,449 IRR, equivalent to $7,553,549,078. Moreover, in 2013 the cost of traffic injuries among drivers of public vehicles constituted 1.25% of gross national income, which was $604,300,000,000. WTP had a significant relationship with gender, daily payment, willingness to pay more for time reduction, willingness to pay more for less traffic, and being a minibus driver. Conclusion: The cost of traffic injuries among drivers of public vehicles amounted to a noticeable 1.25% of gross national income; minibus drivers had less perception of risk reduction than others. PMID:26157655
Opinions of the Dutch public on palliative sedation: a mixed-methods approach
van der Kallen, Hilde TH; Raijmakers, Natasja JH; Rietjens, Judith AC; van der Male, Alex A; Bueving, Herman J; van Delden, Johannes JM; van der Heide, Agnes
2013-01-01
Background Palliative sedation is defined as deliberately lowering a patient’s consciousness, to relieve intolerable suffering from refractory symptoms at the end of life. Palliative sedation is considered a last resort intervention in end-of-life care that should not be confused with euthanasia. Aim To inform healthcare professionals about attitudes of the general public regarding palliative sedation. Design and setting A cross-sectional survey among members of the Dutch general public followed by qualitative interviews. Method One thousand nine hundred and sixty members of the general public completed the questionnaire, which included a vignette describing palliative sedation (response rate 78%); 16 participants were interviewed. Results In total, 22% of the responders indicated knowing the term ‘palliative sedation’. Qualitative data showed a variety of interpretations of the term. Eighty-one per cent of the responders agreed with the provision of sedatives as described in a vignette of a patient with untreatable pain and a life expectancy of <1 week who received sedatives to alleviate his suffering. This percentage was somewhat lower for a patient with a life expectancy of <1 month (74%, P = 0.007) and comparable in the case where the physician gave sedatives with the aim of ending the patient’s life (79%, P = 0.54). Conclusion Most of the general public accept the use of palliative sedation at the end of life, regardless of a potential life-shortening effect. However, confusion exists about what palliative sedation represents. This should be taken into account by healthcare professionals when communicating with patients and their relatives on end-of-life care options. PMID:24152482
Computational solution of acoustic radiation problems by Kussmaul's boundary element method
NASA Astrophysics Data System (ADS)
Kirkup, S. M.; Henwood, D. J.
1992-10-01
The problem of computing the properties of the acoustic field exterior to a vibrating surface for the complete wavenumber range by the boundary element method is considered. A particular computational method based on the Kussmaul formulation is described. The method is derived through approximating the surface by a set of planar triangles and approximating the surface functions by a constant on each element. The method is successfully applied to test problems and to the Ricardo crankcase simulation rig.
Multi-centred mixed-methods PEPFAR HIV care & support public health evaluation: study protocol
2010-01-01
Background A public health response is essential to meet the multidimensional needs of patients and families affected by HIV disease in sub-Saharan Africa. In order to appraise current provision of HIV care and support in East Africa, and to provide evidence-based direction to future care programming, a Public Health Evaluation was commissioned by the PEPFAR programme of the US Government. Methods/Design This paper describes the two-phase international mixed-methods study protocol utilising longitudinal outcome measurement, surveys, patient and family qualitative interviews and focus groups, staff qualitative interviews, health economics and document analysis. Aim 1) To describe the nature and scope of HIV care and support in two African countries, including the types of facilities available, clients seen, and availability of specific components of care [Study Phase 1]. Aim 2) To determine patient health outcomes over time and principal cost drivers [Study Phase 2]. The study objectives are as follows. 1) To undertake a cross-sectional survey of service configuration and activity by sampling 10% of the facilities being funded by PEPFAR to provide HIV care and support in Kenya and Uganda (Phase 1) in order to describe care currently provided, including pharmacy drug reviews to determine availability and supply of essential drugs in HIV management. 2) To conduct patient focus group discussions at each of these facilities (Phase 1) to determine care received. 3) To undertake a longitudinal prospective study of 1200 patients who are newly diagnosed with HIV, or patients with HIV who present with a new problem, attending PEPFAR care and support services. Data collection includes self-reported quality of life, core palliative outcomes and components of care received (Phase 2). 4) To conduct qualitative interviews with staff, patients and carers in order to explore and understand service issues and care provision in more depth (Phase 2). 
5) To undertake document analysis to appraise the clinical care procedures at each facility (Phase 2). 6) To determine principal cost drivers including staff, overhead and laboratory costs (Phase 2). Discussion This novel mixed-methods protocol will permit transparent presentation of the subsequent dataset results, and offers a substantive model of protocol design to measure and integrate key activities and outcomes that underpin a public health approach to disease management in a low-income setting. PMID:20920241
Multi-Level iterative methods in computational plasma physics
Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.
1999-03-01
Plasma physics phenomena occur on a wide range of spatial and time scales. When attempting to model plasma physics problems numerically, the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods, and large time steps can challenge their robustness. To meet these challenges, the authors are developing a hybrid approach in which multigrid methods are used as preconditioners for Krylov-subspace iterative methods such as conjugate gradients or GMRES. For nonlinear problems, they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for the application of these multilevel iterative methods to the field solves in implicit moment method PIC, multidimensional nonlinear Fokker-Planck problems, and initial efforts in particle MHD.
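The hybrid solver structure described, a cheap approximate solve used as a preconditioner inside a Krylov iteration, can be sketched with preconditioned conjugate gradients. Below, a simple Jacobi (diagonal) preconditioner stands in for the multigrid cycle the authors use, and the 1-D Laplacian stands in for their field solves; this is an illustrative sketch, not their implementation.

```python
import numpy as np

def pcg(A, b, M_inv_apply, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a symmetric
    positive-definite A. `M_inv_apply(r)` applies the preconditioner;
    a cheap stand-in (Jacobi here) plays the role multigrid plays
    in the paper."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_apply(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# A 1-D Laplacian as a toy "field solve", with a Jacobi preconditioner.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
d = np.diag(A)
x = pcg(A, np.ones(n), lambda r: r / d)
```

Swapping the lambda for a multigrid V-cycle changes only `M_inv_apply`, which is exactly the modularity that makes the multigrid-as-preconditioner approach attractive on fine grids.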
Health promotion through healthy public policy: the contribution of complementary research methods.
McKinlay, J B
1992-01-01
The focus of the "new public health" is moving from the level of individuals to the level of organizations, communities, and broader social policies. Traditional quantitative methods which are appropriate at the level of individual behaviour change, require adaptation and refinement when sociopolitical change becomes the mechanism for health promotion. Because of their training and experience, health services researchers and health educators, especially psychologists, are understandably resistant to making necessary methodologic changes. Well-designed and carefully conducted qualitative studies, using techniques such as ethnographic interviews, participant observation, case studies, or focus group activities, are required to complement quantitative approaches. These studies can fill gaps where quantitative techniques are suboptimal or even inappropriate. Hard qualitative techniques can also support soft quantitative methods. Their utility in process evaluation is now beyond dispute. Recent work at the New England Research Institute is used to illustrate the role of qualitative research in the evaluation of health promotion through planned sociopolitical change. PMID:1423118