Sample records for methods publications computer

  1. Communicating the Impact of Free Access to Computers and the Internet in Public Libraries: A Mixed Methods Approach to Developing Outcome Indicators

    Microsoft Academic Search

    Samantha Becker; Michael D. Crandall; Karen E. Fisher

    2009-01-01

    The U.S. IMPACT studies have two research projects underway that employ a mixed-method research design to develop and validate performance indicators related specifically to the outcomes of public access computing (PAC) use in public libraries. Through the use of a nationwide telephone survey (n = 1130), four case studies, and a nationwide Internet survey of PAC users administered through…

  2. Logics for Unranked Trees: An Overview (considered for publication in Logical Methods in Computer Science)

    E-print Network

    Libkin, Leonid

    All these formalisms have found numerous applications in verification, program analysis, and logic. Unranked trees are a particular kind of feature structure that has been investigated by computational linguists [Bla94, Car92]. There is a close connection to automata models, and quite often to temporal and modal logics, especially when one describes…

  3. List of Free Computer-Related Publications

    NSDL National Science Digital Library

    The List of Free Computer-Related Publications includes hardcopy magazines, newspapers, and journals related to computing which can be subscribed to free of charge. Each entry contains a brief overview of that publication, including its primary focus, typical content, publication frequency, subscription information, as well as an (admittedly) subjective overall rating. Note that some publications have qualifications you must meet in order for the subscription to be free.

  4. Combinatorial and Computational Geometry MSRI Publications

    E-print Network

    Vempala, Santosh

    From Combinatorial and Computational Geometry, MSRI Publications Volume 52, 2005: "Geometric Random Walks." A random walk returns to its starting point infinitely often for n ≤ 2, but only a finite number of times for n ≥ 3. Random walks also provide a general approach to sampling a geometric distribution: to sample a given distribution, we set up a random walk…

  5. 2008 College of Engineering & Computer Science Publications

    E-print Network

    Wu, Shin-Tson

    2008 College of Engineering & Computer Science Publications. Books: Chebbykin, O., G. Bedny and W. …; Craiger, P., "Training and Education in Digital Forensics," in: J. Barbara (Ed.), Handbook of Digital…; Pollitt, "A Virtual Digital Forensics Lab," in: I. Ray and S. Shenoi (Eds.), Advances in Digital Forensics IV…

  6. Systems Science Methods in Public Health

    PubMed Central

    Luke, Douglas A.; Stamatakis, Katherine A.

    2012-01-01

    Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example in preparing for global pandemics. However, systems science study designs and analytic methods remain underutilized and are not widely featured in public health curricula or training. In this review we present an argument for the utility of systems science methods in public health, introduce three important systems science methods (system dynamics, network analysis, and agent-based modeling), and provide three case studies where these methods have been used to answer important public health science questions in the areas of infectious disease, tobacco control, and obesity. PMID:22224885
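
The three methods named in this abstract are general techniques. As a quick illustration of the first, system dynamics, here is a minimal SIR (susceptible-infectious-recovered) compartmental model of an infectious disease, integrated with simple Euler steps; all parameter values are invented for illustration and are not from the reviewed paper.

```python
def sir_model(s0, i0, r0, beta, gamma, days, dt=0.1):
    """Integrate dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I,
    dR/dt = gamma*I with forward Euler steps."""
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # new infections this step
        new_rec = gamma * i * dt          # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Illustrative run: basic reproduction number R0 = beta/gamma = 3.
s, i, r = sir_model(s0=9990, i0=10, r0=0, beta=0.3, gamma=0.1, days=160)
```

Because each flow is subtracted from one compartment and added to another, the total population is conserved exactly; with R0 = 3 the epidemic runs to completion well within the simulated 160 days.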

  7. Closing the "Digital Divide": Building a Public Computing Center

    ERIC Educational Resources Information Center

    Krebeck, Aaron

    2010-01-01

    The public computing center offers an economical and environmentally friendly model for providing additional public computer access when and where it is needed. Though not intended to be a replacement for a full-service branch, the public computing center does offer a budget-friendly option for quickly expanding high-demand services into the…

  8. Publication Bias in the Computer Science Education Research Literature

    Microsoft Academic Search

    Justus J. Randolph; Roman Bednarik

    2008-01-01

    Publication bias is the tendency for investigations with primarily nonstatistically significant findings to be withheld from the research record. Because publication bias has serious negative consequences for research and practice, we gathered information about the prevalence and predictors of publication bias in the computer science education literature. From an initial random sample of 352 recent computer science education…

  9. Computational Methods Development at Ames

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and the flow physics study gives historical precedents to above research, and speculates on its future course.

  10. Special Publication 500-293 US Government Cloud Computing

    E-print Network

    Special Publication 500-293 (Draft): US Government Cloud Computing Technology Roadmap, Volume II, Release 1.0 (Draft). … and Dawn Leaf, NIST Cloud Computing Program, Information Technology Laboratory.

  11. Public Library Public Access Computing and Internet Access: Factors Which Contribute to Quality Services and Resources

    Microsoft Academic Search

    John Carlo Bertot; Denise M. Davis

    2007-01-01

    This article explores a number of variables which can contribute to the quality of public access computing and Internet services that public libraries provide their communities. Through this exploration, the article offers several insights and implications for the development and implementation of high quality public access computing and Internet services which increasingly technology-savvy users expect from service providers. A key…

  12. Methods for computing color anaglyphs

    Microsoft Academic Search

    David F. McAllister; Ya Zhou; Sophia Sullivan

    2010-01-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances…
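
As a rough illustration of the per-pixel least-squares idea in this abstract, the sketch below chooses an anaglyph pixel so that the left eye, looking through its filter, sees something close to the left-image pixel, and likewise for the right eye. The diagonal filter gains are invented, and a closed-form linear fit replaces the paper's nonlinear program over measured spectral distributions and perceptual color distances.

```python
LEFT_FILTER = (0.9, 0.1, 0.05)   # hypothetical red-filter RGB transmission
RIGHT_FILTER = (0.05, 0.8, 0.9)  # hypothetical cyan-filter RGB transmission

def anaglyph_pixel(left_rgb, right_rgb):
    """Per channel, minimize (l*a - left)^2 + (r*a - right)^2 in closed
    form, then clip to the displayable range [0, 1]."""
    out = []
    for l, r, cl, cr in zip(LEFT_FILTER, RIGHT_FILTER, left_rgb, right_rgb):
        a = (l * cl + r * cr) / (l * l + r * r)   # least-squares minimizer
        out.append(min(1.0, max(0.0, a)))
    return tuple(out)

pixel = anaglyph_pixel((0.5, 0.2, 0.2), (0.2, 0.4, 0.5))
```

With diagonal filter matrices the six constraints decouple into three one-dimensional fits, which is why no iterative solver is needed in this simplified variant.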

  13. Method for Tracking Core-Contributed Publications

    PubMed Central

    Loomis, Cynthia A.; Curchoe, Carol Lynn

    2012-01-01

    Accurately tracking core-contributed publications is an important and often difficult task. Many core laboratories are supported by programmatic grants (such as Cancer Center Support Grant and Clinical Translational Science Awards) or generate data with instruments funded through S10, Major Research Instrumentation, or other granting mechanisms. Core laboratories provide their research communities with state-of-the-art instrumentation and expertise, elevating research. It is crucial to demonstrate the specific projects that have benefited from core services and expertise. We discuss here the method we developed for tracking core contributed publications. PMID:23204927

  14. Secure Mobile Computing Via Public Terminals

    Microsoft Academic Search

    Richard Sharp; James Scott; Alastair R. Beresford

    2006-01-01

    The rich interaction capabilities of public terminals can make them more convenient to use than small personal devices, such as smart phones. However, the use of public terminals to handle personal data may compromise privacy. We present a system that enables users to access their applications and data securely using a combination of public terminals and a more trusted, personal…

  15. Special Publication 500-293 US Government Cloud Computing

    E-print Network

    Special Publication 500-293 (Draft): US Government Cloud Computing Technology Roadmap, Volume I, Release 1.0 (Draft), High-Priority Requirements to Further USG Agency Cloud Computing Adoption. Lee Badger, … Sokol, Jin Tong, Fred Whiteside and Dawn Leaf, NIST Cloud Computing Program, Information Technology Laboratory…

  16. A Computer-Assisted Instruction in Teaching Abstract Statistics to Public Affairs Undergraduates

    ERIC Educational Resources Information Center

    Ozturk, Ali Osman

    2012-01-01

    This article attempts to demonstrate the applicability of a computer-assisted instruction supported with simulated data in teaching abstract statistical concepts to political science and public affairs students in an introductory research methods course. The software is called the Elaboration Model Computer Exercise (EMCE) in that it takes a great…

  17. Computer Game Criticism: A Method for Computer Game Analysis

    Microsoft Academic Search

    Lars Konzack

    2002-01-01

    In this paper, we describe a method to analyse computer games. The analysis method is based on computer games in particular and not some kind of transfer from another field or study, even though it is of course inspired by analysis methods from varying fields of study. The method is based on seven different…

  18. Computational Methods for Simulating Quantum Computers

    E-print Network

    Computational Methods for Simulating Quantum Computers, by H. De Raedt and K. Michielsen. In quantum mechanics and quantum chemistry, it is well known that simulating an interacting quantum many-body system … This paper describes methods to simulate quantum computers, covering the basic concepts of quantum computation and quantum algorithms…

  19. Computational Methods for Gravitational Lensing

    E-print Network

    Charles R. Keeton

    2001-02-20

    Modern applications of strong gravitational lensing require the ability to use precise and varied observational data to constrain complex lens models. I discuss two sets of computational methods for lensing calculations. The first is a new algorithm for solving the lens equation for general mass distributions. This algorithm makes it possible to apply arbitrarily complicated models to observed lenses. The second is an evaluation of techniques for using observational data including positions, fluxes, and time delays of point-like images, as well as maps of extended images, to constrain models of strong lenses. The techniques presented here are implemented in a flexible and user-friendly software package called gravlens, which is made available to the community.
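
The point-mass lens is the textbook special case of the lens equation this abstract refers to, and because it has a closed-form solution it makes a convenient sanity check for a numerical solver. The sketch below is not the gravlens algorithm itself; it finds the positive-parity image of a point lens by bisection (parameter values are arbitrary) and compares it with the exact answer.

```python
def lens_eq(theta, beta, theta_e):
    # Point-mass lens equation residual: beta = theta - theta_E^2 / theta.
    return theta - theta_e**2 / theta - beta

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection root finder; assumes f(lo) and f(hi) bracket a root."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if flo * fmid <= 0:
            hi = mid             # root lies in [lo, mid]
        else:
            lo, flo = mid, fmid  # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

beta, theta_e = 0.3, 1.0
# The positive-parity image lies just outside the Einstein radius.
theta_plus = bisect(lambda t: lens_eq(t, beta, theta_e), theta_e, theta_e + beta)
exact = 0.5 * (beta + (beta**2 + 4 * theta_e**2) ** 0.5)
```

Real lens models have no closed form, which is why general solvers of the kind the abstract describes are needed; the closed-form case only serves to validate the machinery.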

  20. Evolution as Computation Evolutionary Theory (accepted for publication)

    E-print Network

    Mayfield, John

    Evolution as Computation. Evolutionary Theory (accepted for publication). By John E. Mayfield (jemayf@iastate.edu). Key words: Evolution, Computation, Complexity, Depth. Running head: Evolution… of evolution must include life and also non-living processes that change over time in a manner similar…

  1. How You Can Protect Public Access Computers "and" Their Users

    ERIC Educational Resources Information Center

    Huang, Phil

    2007-01-01

    By providing the public with online computing facilities, librarians make available a world of information resources beyond their traditional print materials. Internet-connected computers in libraries greatly enhance the opportunity for patrons to enjoy the benefits of the digital age. Unfortunately, as hackers become more sophisticated and…

  2. Computer-Assisted Management of Instruction in Veterinary Public Health

    ERIC Educational Resources Information Center

    Holt, Elsbeth; And Others

    1975-01-01

    Reviews a course in Food Hygiene and Public Health at the University of Illinois College of Veterinary Medicine in which students are sequenced through a series of computer-based lessons or autotutorial slide-tape lessons, the computer also being used to route, test, and keep records. Since grades indicated mastery of the subject, the course will…

  3. BOOK REVIEW Computational Photography: Methods and Applications.

    E-print Network

    Schettini, Raimondo

    Book review: Computational Photography: Methods and Applications, by Rastislav Lukac, Boca Raton, FL. A definition of computational photography is given by Wikipedia, which is also used by the book's editor to begin his editorial introduction: "Computational photography refers broadly to computational imaging techniques that enhance…"

  4. Component Analysis Methods for Computer Vision and

    E-print Network

    Botea, Adi

    Component Analysis Methods for Computer Vision and Pattern Recognition, Fernando De la Torre. Computer Vision and Pattern Recognition Easter School, March…

  5. Computational methods for stealth design

    Microsoft Academic Search

    Cable

    1992-01-01

    A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development…

  6. Computational methods in wind power meteorology

    E-print Network

    Computational Methods in Wind Power Meteorology, by Bo Hoffmann Jørgensen, Søren Ott, Niels Nørmark, Jakob Mann and Jake Badger. Department: Wind… Prepared in connection with the project called Computational Methods in Wind Power Meteorology, which was supported…

  7. Methods and applications in computational protein design

    E-print Network

    Biddle, Jason Charles

    2010-01-01

    In this thesis, we summarize our work on applications and methods for computational protein design. First, we apply computational protein design to address the problem of degradation in stored proteins. Specifically, we ...

  8. Wildlife software: procedures for publication of computer software

    USGS Publications Warehouse

    Samuel, M.D.

    1990-01-01

    Computers and computer software have become an integral part of the practice of wildlife science. Computers now play an important role in teaching, research, and management applications. Because of the specialized nature of wildlife problems, specific computer software is usually required to address a given problem (e.g., home range analysis). This type of software is not usually available from commercial vendors and therefore must be developed by those wildlife professionals with particular skill in computer programming. Current journal publication practices generally prevent a detailed description of computer software associated with new techniques. In addition, peer review of journal articles does not usually include a review of associated computer software. Thus, many wildlife professionals are usually unaware of computer software that would meet their needs or of major improvements in software they commonly use. Indeed most users of wildlife software learn of new programs or important changes only by word of mouth.

  9. Computational methods for stealth design

    SciTech Connect

    Cable, V.P. (Lockheed Advanced Development Co., Sunland, CA (United States))

    1992-08-01

    A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.

  10. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    32 CFR 310.52 (2013-07-01): Computer matching publication and review requirements. DoD Privacy Program, Computer Matching Program Procedures, § 310.52, Computer matching publication and review…

  11. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    32 CFR 310.52 (2012-07-01): Computer matching publication and review requirements. DoD Privacy Program, Computer Matching Program Procedures, § 310.52, Computer matching publication and review…

  12. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    32 CFR 310.52 (2014-07-01): Computer matching publication and review requirements. DoD Privacy Program, Computer Matching Program Procedures, § 310.52, Computer matching publication and review…

  13. Teaching Practical Public Health Evaluation Methods

    ERIC Educational Resources Information Center

    Davis, Mary V.

    2006-01-01

    Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

  14. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W. (Veguita, NM); Ober, Curtis C. (Los Lunas, NM)

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  15. Theoretical and computational methods in statistical mechanics

    E-print Network

    Friedland, Shmuel

    Theoretical and Computational Methods in Statistical Mechanics, Shmuel Friedland, Univ. Illinois. Slides presented at Berkeley, October 26, 2009; overview and motivation include the Ising model…

  16. Computational methods for reentry trajectories

    Microsoft Academic Search

    L. Anselmo; C. Pardini

    2004-01-01

    The trajectory modeling of uncontrolled satellites close to reentry in the atmosphere is still a challenging activity. Tracking data may be sparse and not particularly accurate, the object's complicated shape and unknown attitude evolution may render the aerodynamic computations quite tricky, and, last but not least, the models used to predict the air density at the altitudes of interest…

  17. BOINC: A System for Public-Resource Computing and Storage

    Microsoft Academic Search

    David P. Anderson

    2004-01-01

    BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. We describe the goals…

  18. Computer mediated communication and publication productivity among faculty

    Microsoft Academic Search

    Joel Cohen

    1996-01-01

    Investigates whether faculty who use computer mediated communication (CMC) achieve greater scholarly productivity as measured by publications and a higher incidence of the following prestige factors: receipt of awards; service on a regional or national committee of a professional organization; service on an editorial board of a refereed journal; service as a principal investigator on an externally funded project; or…

  19. Computer Mediated Communication and Publication Productivity among Faculty.

    ERIC Educational Resources Information Center

    Cohen, Joel

    1996-01-01

    Reports the results of a study that investigated whether faculty who use computer mediated communication (CMC) achieve greater scholarly productivity as measured by publications and numerous other prestige factors. Findings that indicate positive results from CMC are described, implications for faculty and academic libraries are discussed, and…

  20. Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

    ERIC Educational Resources Information Center

    Jayakar, Krishna; Park, Eun-A

    2012-01-01

    The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

  1. Gender and Public Access Computing: An International Perspective

    Microsoft Academic Search

    Allison Terry; Ricardo Gomez

    2011-01-01

    Information and Communication Technologies (ICTs), and public access to computers with Internet connectivity in particular, can assist community development efforts and help bridge the so-called digital divide. However, use of ICT is not gender neutral. Technical, social, and cultural barriers emphasize women’s exclusion from the benefits of ICT for development. This paper offers a qualitative analysis of the benefits of…

  2. Computational and theoretical methods for protein folding.

    PubMed

    Compiani, Mario; Capriotti, Emidio

    2013-12-01

    A computational approach is essential whenever the complexity of the process under study is such that direct theoretical or experimental approaches are not viable. This is the case for protein folding, for which a significant amount of data are being collected. This paper reports on the essential role of in silico methods and the unprecedented interplay of computational and theoretical approaches, which is a defining point of the interdisciplinary investigations of the protein folding process. Besides giving an overview of the available computational methods and tools, we argue that computation plays not merely an ancillary role but has a more constructive function in that computational work may precede theory and experiments. More precisely, computation can provide the primary conceptual clues to inspire subsequent theoretical and experimental work even in a case where no preexisting evidence or theoretical frameworks are available. This is cogently manifested in the application of machine learning methods to come to grips with the folding dynamics. These close relationships suggested complementing the review of computational methods within the appropriate theoretical context to provide a self-contained outlook of the basic concepts that have converged into a unified description of folding and have grown in a synergic relationship with their computational counterpart. Finally, the advantages and limitations of current computational methodologies are discussed to show how the smart analysis of large amounts of data and the development of more effective algorithms can improve our understanding of protein folding. PMID:24187909

  3. Computer Methods (North-Holland)

    E-print Network

    Elperin, Tov

    The Choice of Uniformly Distributed Sequences to Be Used in the Random Choice Method, by T. Elperin and O. Igra. (… by Igra and Gottlieb [5]). The solution was repeated with four different sampling algorithms. The names…

  4. Computational Methods for Failure Analysis and Life Prediction

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (compiler); Harris, Charles E. (compiler); Housner, Jerrold M. (compiler); Hopkins, Dale A. (compiler)

    1993-01-01

    This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

  5. Publicly Accessible Computers: An Exploratory Study of the Determinants of Transactional Website Use in Public Locations

    Microsoft Academic Search

    A. D. Rensel; J. M. Abbas; H. Raghav Rao

    2006-01-01

    Businesses and governments are continuing to expand the use of the internet to provide a wide range of information and transactional services to consumers. These changes present barriers to people without internet connections in their homes. Public libraries provide a source of access to these resources; however, it is not clear whether people are willing to use computers in these…

  6. A direct method to computational acoustics

    Microsoft Academic Search

    R. Rabenstein; A. Zayati

    1999-01-01

    The exact knowledge of the sound field within an enclosure is essential for a number of applications in electro-acoustics. Conventional methods for the assessment of room acoustics model the sound propagation in analogy to the propagation of light. More advanced computational methods rely on the numerical solution of the wave equation. A recently presented method is based on multidimensional wave…

  7. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  8. The Contingent Valuation Method in Public Libraries

    ERIC Educational Resources Information Center

    Chung, Hye-Kyung

    2008-01-01

    This study aims to present a new model measuring the economic value of public libraries, combining the dissonance minimizing (DM) and information bias minimizing (IBM) format in the contingent valuation (CV) surveys. The possible biases which are tied to the conventional CV surveys are reviewed. An empirical study is presented to compare the model…

  9. Methods Towards Invasive Human Brain Computer Interfaces

    E-print Network

    Methods Towards Invasive Human Brain Computer Interfaces, by Thomas Navin Lal, Thilo Hinterberger, et al. … there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has mainly been driven by the needs of completely paralyzed patients to communicate. With a few exceptions…

  10. Semianalytical method of satellite orbit computation

    Microsoft Academic Search

    W.-L. Yang

    1978-01-01

    Artificial satellite orbit calculation is classically analyzed in terms of short-period, long-period, and secular-variation components. While the classical analytical method is sufficiently accurate for the calculation of the short-period perturbation, this paper emphasizes the numerical integration method for computing long-period and secular-variation components. The accuracy achieved by linear extrapolation, the Euler method, and the Runge-Kutta method is discussed.
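
As a concrete illustration of the accuracy comparison this abstract describes, the sketch below (not from the paper) integrates the test oscillator x'' = -x over one period with forward Euler and with classical fourth-order Runge-Kutta, then compares the errors at the end of the period, where the exact solution returns to (x, v) = (1, 0).

```python
import math

def deriv(state):
    # x' = v, v' = -x for the simple harmonic oscillator.
    x, v = state
    return (v, -x)

def euler_step(state, h):
    x, v = state
    dx, dv = deriv(state)
    return (x + h * dx, v + h * dv)

def rk4_step(state, h):
    # Classical 4th-order Runge-Kutta.
    def add(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = deriv(state)
    k2 = deriv(add(state, k1, h / 2))
    k3 = deriv(add(state, k2, h / 2))
    k4 = deriv(add(state, k3, h))
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def integrate(step, h, n):
    s = (1.0, 0.0)
    for _ in range(n):
        s = step(s, h)
    return s

n = 1000
h = 2 * math.pi / n            # one full period in n steps
err_euler = abs(integrate(euler_step, h, n)[0] - 1.0)
err_rk4 = abs(integrate(rk4_step, h, n)[0] - 1.0)
```

At this step size the Euler solution visibly spirals outward (error on the order of a few percent) while the RK4 error is many orders of magnitude smaller, which mirrors the accuracy ranking the abstract discusses for orbit propagation.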

  11. Computational Methods Systems of Nonlinear Equations

    E-print Network

    Huber, Manfred

    Computational Methods: Systems of Nonlinear Equations, © Manfred Huber 2011. Systems of linear equations can often be used to describe … but many systems cannot be modeled linearly and require nonlinear equations. Iterative methods for single equations do not directly…

  12. Plasma Diagnostics Using Computed Tomography Method

    Microsoft Academic Search

    N. Denisova

    2009-01-01

    In the last decade, the range of applications of various low-temperature plasma sources has steadily grown. New applications require development of advanced diagnostic methods to control plasma parameters. Computed tomography is a powerful method which can provide much useful information on the plasma structure and its evolution in time. From a mathematical point of view, tomographic reconstruction is an ill-posed…

  13. Teacher Perspectives on the Current State of Computer Technology Integration into the Public School Classroom

    ERIC Educational Resources Information Center

    Zuniga, Ramiro

    2009-01-01

    Since the introduction of computers into the public school arena over forty years ago, educators have been convinced that the integration of computer technology into the public school classroom will transform education. Joining educators are state and federal governments. Public schools and others involved in the process of computer technology…

  14. Computing discharge using the index velocity method

    USGS Publications Warehouse

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. 
Index ratings are developed by means of regression techniques in which the mean cross-sectional velocity for the standard section is related to the measured index velocity. Most ratings are simple-linear regressions, but more complex ratings may be necessary in some cases. Once the rating is established, validation measurements should be made periodically. Over time, validation measurements may provide additional definition to the rating or result in the creation of a new rating. The computation of discharge is the last step in the index velocity method, and in some ways it is the most straightforward step. This step differs little from the steps used to compute discharge records for stage-discharge gaging stations. The ratings are entered into database software used for records computation, and continuous records of discharge are computed.
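The two-rating structure described above can be sketched in a few lines. Everything here is illustrative: the calibration pairs, the rectangular stage-area rating, and the reading passed to `discharge` are invented numbers; a real index rating is fit to field measurements and may need a multi-parameter or segmented form.

```python
import numpy as np

# Hypothetical calibration data: ADVM index velocity (m/s) paired with the
# mean cross-sectional velocity (m/s) from concurrent discharge measurements.
index_v = np.array([0.20, 0.45, 0.70, 0.95, 1.20])
mean_v = np.array([0.25, 0.52, 0.81, 1.10, 1.38])

# Simple-linear index rating: V = b0 + b1 * Vi, fit by least squares.
b1, b0 = np.polyfit(index_v, mean_v, 1)

def stage_area(stage_m):
    """Toy stage-area rating for a rectangular standard section 10 m wide."""
    return 10.0 * stage_m

def discharge(index_velocity, stage_m):
    """Q = V * A: rated mean velocity times rated cross-sectional area."""
    v = b0 + b1 * index_velocity
    return v * stage_area(stage_m)

q = discharge(0.80, 1.5)  # discharge in m^3/s for one hypothetical reading
```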

  15. Updated Panel-Method Computer Program

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1995-01-01

Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow; a time-stepping wake model for simulating either steady or unsteady motions; a capability for Trefftz-plane computation of induced drag; a capability for computation of off-body and on-body streamlines; and a capability for computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12 and is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.

  16. Computer-intensive methods in statistical analysis

    Microsoft Academic Search

    D. N. Politis

    1998-01-01

    As far back as the late 1970s, the impact of affordable, high-speed computers on the theory and practice of modern statistics was recognized by Efron (1979, 1982). As a result, the bootstrap and other computer-intensive statistical methods (such as subsampling and the jackknife) have been developed extensively since that time and now constitute very powerful (and intuitive) tools to do
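As a concrete illustration of the computer-intensive methods the survey covers, a minimal percentile bootstrap for the mean might look like this; the sample values and resample count are arbitrary.

```python
import random
import statistics

random.seed(0)
data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1, 2.5, 3.7]  # any observed sample

def bootstrap_means(sample, n_boot=2000):
    """Resample with replacement and record the mean of each resample."""
    n = len(sample)
    return sorted(
        statistics.fmean(random.choices(sample, k=n)) for _ in range(n_boot)
    )

means = bootstrap_means(data)
# Percentile 95% confidence interval for the mean.
lo, hi = means[int(0.025 * len(means))], means[int(0.975 * len(means))]
```

The same resampling skeleton yields the jackknife (leave-one-out instead of `random.choices`) and subsampling (resamples smaller than `n`, without replacement).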

  17. Methods for scalable optical quantum computation

    E-print Network

    Tal Mor; Nadav Yoran

    2006-03-14

We propose a scalable method for implementing linear optics quantum computation using the "linked-state" approach. Our method avoids the two-dimensional spread of errors occurring in the preparation of the linked state. Consequently, a proof is given for the scalability of this modified linked-state model, and an exact expression for the efficiency of the method is obtained. Moreover, a considerable improvement in the efficiency, relative to the original linked-state method, is achieved. The proposed method is applicable to Nielsen's optical "cluster-state" approach as well.

  18. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

This report describes the design of architectures and performance measurements for a parallel computing environment that runs Monte Carlo simulations for particle-therapy planning on high-performance computing (HPC) instances within a public cloud-computing infrastructure. Performance measurements showed an approximately 28-fold speedup over a single-threaded architecture, together with improved stability. A study of methods for optimizing system operations also indicated lower cost. PMID:23877155
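The batch-parallel pattern the report measures can be sketched generically. This toy uses threads and a circle-area estimate as a stand-in for independent particle histories; it is not the authors' simulation code or cloud setup, only the split-batches-then-reduce shape of such workloads.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def mc_dose_batch(n_samples, seed):
    """Toy Monte Carlo batch: count random points inside the unit quarter
    circle (a stand-in for particle histories in a dose simulation)."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))

batches = 8
per_batch = 50_000
# Each batch is independent, so workers (here threads; in the paper, HPC
# instances) can run them concurrently and the results are summed at the end.
with ThreadPoolExecutor(max_workers=4) as pool:
    hits = sum(pool.map(mc_dose_batch, [per_batch] * batches, range(batches)))

pi_estimate = 4.0 * hits / (batches * per_batch)
```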

  19. Understanding Nuclear Receptors Using Computational Methods

    PubMed Central

    Ai, Ni; Krasowski, Matthew D.; Welsh, William J; Ekins, Sean

    2010-01-01

    Nuclear receptors (NRs) are important targets for therapeutic drugs. NRs regulate transcriptional activities through binding to ligands and interacting with a number of regulating proteins. Computational methods can provide insights into essential ligand-receptor and protein-protein interactions. These in turn have facilitated the discovery of novel agonists and antagonists with high affinity and specificity as well as aiding in the prediction of toxic side effects of drugs by identifying possible off-target interactions. Here, we review the application of computational methods towards several clinically important NRs (with special emphasis on PXR) and discuss their use for screening and predicting the toxic side effects of xenobiotics. PMID:19429508

  20. Understanding nuclear receptors using computational methods.

    PubMed

    Ai, Ni; Krasowski, Matthew D; Welsh, William J; Ekins, Sean

    2009-05-01

    Nuclear receptors (NRs) are important targets for therapeutic drugs. NRs regulate transcriptional activities through binding to ligands and interacting with several regulating proteins. Computational methods can provide insights into essential ligand-receptor and protein-protein interactions. These in turn have facilitated the discovery of novel agonists and antagonists with high affinity and specificity as well as have aided in the prediction of toxic side effects of drugs by identifying possible off-target interactions. Here, we review the application of computational methods toward several clinically important NRs (with special emphasis on PXR) and discuss their use for screening and predicting the toxic side effects of xenobiotics. PMID:19429508

  1. ComputerTown, USA!! Using Personal Computers in the Public Library.

    ERIC Educational Resources Information Center

    Zamora, Ramon

    1981-01-01

    Describes the development of a project to disseminate information about and provide hands-on experience with personal computers through classes and workshops for both adults and children at the Menlo Park Public Library. Funding and implementation are discussed, as well as how such projects can be started by other libraries. (BK)

  2. 77 FR 26509 - Notice of Public Meeting-Cloud Computing Forum & Workshop V

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-04

    ...Technology Notice of Public Meeting--Cloud Computing Forum & Workshop V AGENCY: National...SUMMARY: NIST announces the Cloud Computing Forum & Workshop V to be held...information on the U.S. Government (USG) Cloud Computing Technology Roadmap...

  3. 76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ...Technology Notice of Public Meeting--Cloud Computing Forum & Workshop IV AGENCY...SUMMARY: NIST announces the Cloud Computing Forum & Workshop IV to be held...information on the U.S. Government (USG) Cloud Computing Technology Roadmap...

  4. Efficient Methods to Compute Genomic Predictions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

  5. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (editor); Housner, Jerrold M. (editor); Tanner, John A. (editor); Hayduk, Robert J. (editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  6. Public Participation GIS: A new method for national park planning

    Microsoft Academic Search

    Greg Brown; Delene Weber

    2011-01-01

    This paper describes research to evaluate the use of a public participation geographic information system (PPGIS) methodology for national park planning. Visitor perceptions of park experiences, environmental impacts, and facility needs were collected via an internet-based mapping method for input into a national park planning decision support system. The PPGIS method presupposes that consistent with the dominant statutory framework, national

  7. COMPUTING DISCRETE LOGARITHMS WITH THE PARALLELIZED KANGAROO METHOD

    E-print Network

    Bernstein, Daniel

COMPUTING DISCRETE LOGARITHMS WITH THE PARALLELIZED KANGAROO METHOD. EDLYN TESKE. Abstract: The Pollard kangaroo method computes discrete logarithms in arbitrary cyclic groups. It is applied when the discrete logarithm is known to lie in a bounded interval; this makes the kangaroo method the most powerful method for solving the discrete logarithm problem in that setting.
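A serial sketch of the kangaroo walk may help. The group, jump table, and step counts below are illustrative choices of ours, and the paper's actual subject, parallelizing many such walks across processors, is not shown.

```python
def kangaroo(g, h, p, a, b, tame_steps=600):
    """Pollard's kangaroo method (serial sketch): find x in [a, b] with
    pow(g, x, p) == h. Jump table and step counts are illustrative."""
    jumps = [2 ** i for i in range(8)]       # pseudo-random jump distances
    step = lambda y: jumps[y % 8]            # jump chosen by current point

    # Tame kangaroo: hops from g^b, leaving a "trap" at its final position.
    y, d = pow(g, b, p), 0
    for _ in range(tame_steps):
        s = step(y)
        y = (y * pow(g, s, p)) % p
        d += s
    trap, trap_dist = y, d

    # Wild kangaroo: hops from h = g^x; once it lands on any point of the
    # tame path the deterministic walks merge, and hitting the trap gives x.
    y, d = h % p, 0
    while d <= (b - a) + trap_dist:
        if y == trap:
            return b + trap_dist - d         # congruent to x mod ord(g)
        s = step(y)
        y = (y * pow(g, s, p)) % p
        d += s
    return None                              # rare: walks never collided

p, g = 1019, 2                               # toy group, never use in practice
secret = 700
h = pow(g, secret, p)
found = kangaroo(g, h, p, 600, 800)
```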

  8. Dynamic Computed Tomography, an algebraic reconstruction method with

    E-print Network

    Promayon, Emmanuel

Dynamic Computed Tomography, an algebraic reconstruction method with deformation compensation. Sofia. Contents: 1.2 Computed Tomography; 1.3 Dynamic Computed Tomography; 2. Basic tools for Computed Tomography; 2.1 The Radon Transform.

  9. Ecological validity and the study of publics: The case for organic public engagement methods.

    PubMed

    Gehrke, Pat J

    2014-01-01

This essay argues for a method of public engagement grounded in the criteria of ecological validity. Motivated by what Hammersley called the responsibility that comes with intellectual authority: "to seek, as far as possible, to ensure the validity of their conclusions and to participate in rational debate about those conclusions" (1993: 29), organic public engagement follows the empirical turn in citizenship theory and in rhetorical studies of actually existing publics. Rather than shaping citizens into either the compliant subjects of the cynical view or the deliberatively disciplined subjects of the idealist view, organic public engagement instead takes Asen's advice that "we should ask: how do people enact citizenship?" (2004: 191). In short, organic engagement methods engage publics in the places where they already exist and through those discourses and social practices by which they enact their status as publics. Such engagements can generate practical middle-range theories that facilitate future actions and decisions that are attentive to the local ecologies of diverse publics. PMID:23887250

  10. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats of formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

  11. Strictly Deterministic Sampling Methods in Computer Graphics

    Microsoft Academic Search

    Alexander Keller

    2001-01-01

We introduce a strictly deterministic (that is, non-random) rendering method that outperforms state-of-the-art Monte Carlo techniques. Its simple and elegant implementation on parallel computer architectures is capable of simulating anti-aliasing, motion blur, depth of field, area light sources, glossy reflection and transmission, participating media, and global illumination. We provide a self-contained exposition of the underlying
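Deterministic sample generation of the kind such renderers rely on can be illustrated with a Halton low-discrepancy sequence, one standard quasi-Monte Carlo construction; the paper's own point sets may differ, and the integrand here is our toy example.

```python
def halton(i, base):
    """i-th element of the van der Corput radical-inverse sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# Strictly deterministic 2-D sample points (bases 2 and 3), usable in place
# of pseudo-random samples for anti-aliasing or area-light sampling.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 1025)]

# Quasi-Monte Carlo estimate of the integral of x*y over the unit square;
# the exact value is 0.25, and the low-discrepancy points converge quickly.
estimate = sum(x * y for x, y in points) / len(points)
```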

  12. Local Search Methods for Quantum Computers

    E-print Network

    Tad Hogg; Mehmet Yanik

    1998-02-16

    Local search algorithms use the neighborhood relations among search states and often perform well for a variety of NP-hard combinatorial search problems. This paper shows how quantum computers can also use these neighborhood relations. An example of such a local quantum search is evaluated empirically for the satisfiability (SAT) problem and shown to be particularly effective for highly constrained instances. For problems with an intermediate number of constraints, it is somewhat less effective at exploiting problem structure than incremental quantum methods, in spite of the much smaller search space used by the local method.

  13. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
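The algebraic flavor of a state-space variance computation can be sketched with a discrete Lyapunov fixed-point iteration. The model matrices below are invented, and this shows only the generic steady-state covariance step, not the Sirlin-San Martin-Lucke jitter weighting the abstract refers to.

```python
import numpy as np

# Toy 2-state pointing model: x[k+1] = A x[k] + w[k], with w ~ N(0, Q).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
Q = np.diag([1e-4, 4e-4])

# The steady-state covariance P solves the discrete Lyapunov equation
# P = A P A^T + Q; fixed-point iteration converges because A is stable
# (all eigenvalues inside the unit circle). No frequency-domain
# numerical integration is needed.
P = np.zeros((2, 2))
for _ in range(500):
    P = A @ P @ A.T + Q

rms_jitter = float(np.sqrt(P[0, 0]))   # rms of the pointing state
```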

  14. Computations of entropy bounds: Multidimensional geometric methods

    SciTech Connect

    Makaruk, H.E.

    1998-02-01

The constructive upper bound on the number of bits needed to solve a dichotomy is represented by the quotient of two multidimensional solid volumes. Minimizing this upper bound requires exact calculation of the volume of this quotient. Three methods for exact computation of the volume of a given nD solid are presented: (1) a general method for calculating any nD volume by slicing it into volumes of decreasing dimension; (2) a method applying an appropriate curvilinear coordinate system, for volumes bounded by symmetrical curvilinear hypersurfaces (spheres, cones, hyperboloids, ellipsoids, cylinders, etc.); and (3) an algorithm for dividing any nD complex into simplices and computing the volumes of those simplices, supplemented by a general formula for the volume of an nD simplex. These mathematical methods enable exact calculation of the volume of any complicated multidimensional solid. They allow calculation of the minimal volume and lead to tighter bounds on the needed number of bits.
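Method (3)'s closing formula, the volume of an nD simplex, is easy to demonstrate; the determinant form below is the standard one, and the tetrahedron used to exercise it is our own example.

```python
import math
import numpy as np

def simplex_volume(vertices):
    """Volume of an n-D simplex given its n+1 vertices:
    V = |det(v1 - v0, ..., vn - v0)| / n!"""
    v = np.asarray(vertices, dtype=float)
    n = v.shape[1]
    edges = v[1:] - v[0]            # n edge vectors emanating from v0
    return abs(np.linalg.det(edges)) / math.factorial(n)

# The unit right tetrahedron in 3-D has volume 1/3! = 1/6.
tetra = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
vol = simplex_volume(tetra)
```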

  15. Accelerated matrix element method with parallel computing

    NASA Astrophysics Data System (ADS)

    Schouten, D.; DeAbreu, A.; Stelzer, B.

    2015-07-01

    The matrix element method utilizes ab initio calculations of probability densities as powerful discriminants for processes of interest in experimental particle physics. The method has already been used successfully at previous and current collider experiments. However, the computational complexity of this method for final states with many particles and degrees of freedom sets it at a disadvantage compared to supervised classification methods such as decision trees, k nearest-neighbor, or neural networks. This note presents a concrete implementation of the matrix element technique using graphics processing units. Due to the intrinsic parallelizability of multidimensional integration, dramatic speedups can be readily achieved, which makes the matrix element technique viable for general usage at collider experiments.

  16. Accelerated Matrix Element Method with Parallel Computing

    E-print Network

    Doug Schouten; Adam DeAbreu; Bernd Stelzer

    2014-07-30

    The matrix element method utilizes ab initio calculations of probability densities as powerful discriminants for processes of interest in experimental particle physics. The method has already been used successfully at previous and current collider experiments. However, the computational complexity of this method for final states with many particles and degrees of freedom sets it at a disadvantage compared to supervised classification methods such as decision trees, k nearest-neighbour, or neural networks. This note presents a concrete implementation of the matrix element technique using graphics processing units. Due to the intrinsic parallelizability of multidimensional integration, dramatic speedups can be readily achieved, which makes the matrix element technique viable for general usage at collider experiments.
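The data-parallel pattern that makes GPU matrix-element integration fast can be mimicked on a CPU with array-wide operations: every phase-space point is evaluated in one vectorized call, the same shape of work a GPU kernel distributes across threads. The integrand below is a toy density of ours, not a physics matrix element.

```python
import numpy as np

rng = np.random.default_rng(42)

def integrand(x):
    """Stand-in 'matrix element' density on [0, 1]^3 (not real physics)."""
    return np.exp(-np.sum(x ** 2, axis=1))

# Plain Monte Carlo over the unit cube: one array op evaluates all samples,
# mirroring the per-thread evaluations of a GPU implementation.
n = 200_000
x = rng.random((n, 3))              # n phase-space points, 3 dimensions
estimate = float(integrand(x).mean())
# Exact value is (integral of exp(-t^2) on [0,1])^3 ~= 0.41654.
```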

  17. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...2014-04-01 false Method and timing for real-time public reporting. 43.3 Section 43...TRADING COMMISSION (CONTINUED) REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a)...

  18. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... false Method and timing for real-time public reporting. 43.3 Section...COMMODITY FUTURES TRADING COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a)...

  19. 17 CFR 43.3 - Method and timing for real-time public reporting.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... false Method and timing for real-time public reporting. 43.3 Section...COMMODITY FUTURES TRADING COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a)...

  20. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  1. 76 FR 67418 - Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ...Special Publication 500-293, US Government Cloud Computing Technology Roadmap, Release...Special Publication 500-293, US Government Cloud Computing Technology Roadmap, Release...agencies to accelerate their adoption of cloud computing. The roadmap has been...

  2. Review of Computational Stirling Analysis Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

Nuclear thermal-to-electric power conversion carries the promise of longer-duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in current designs could be better understood. However, the engines are difficult to instrument, so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of understanding of Stirling losses may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-Fi technique, is presented in detail.

  3. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720

  4. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  5. Computational method for free surface hydrodynamics

    SciTech Connect

    Hirt, C.W.; Nichols, B.D.

    1980-01-01

    There are numerous flow phenomena in pressure vessel and piping systems that involve the dynamics of free fluid surfaces. For example, fluid interfaces must be considered during the draining or filling of tanks, in the formation and collapse of vapor bubbles, and in seismically shaken vessels that are partially filled. To aid in the analysis of these types of flow phenomena, a new technique has been developed for the computation of complicated free-surface motions. This technique is based on the concept of a local average volume of fluid (VOF) and is embodied in a computer program for two-dimensional, transient fluid flow called SOLA-VOF. The basic approach used in the VOF technique is briefly described, and compared to other free-surface methods. Specific capabilities of the SOLA-VOF program are illustrated by generic examples of bubble growth and collapse, flows of immiscible fluid mixtures, and the confinement of spilled liquids.

  6. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
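A minimal version of the retrieval loop, simulated annealing driving a spectral-mismatch fitness toward zero, might look like this. The one-parameter exponential "spectrum", the cooling schedule, and all numbers are invented for illustration; real retrievals fit many parameters against instrument data.

```python
import math
import random

random.seed(1)

# Synthetic "observed spectrum" generated from a known parameter; the
# retrieval then recovers that parameter by minimizing a fitness function.
true_c = 0.7
wavelengths = [0.1 * i for i in range(1, 21)]
observed = [math.exp(-true_c * w) for w in wavelengths]

def fitness(c):
    """Sum-of-squares dissimilarity between observed and synthetic spectra."""
    return sum((obs - math.exp(-c * w)) ** 2
               for obs, w in zip(observed, wavelengths))

c, temp = 0.0, 1.0
while temp > 1e-4:                      # geometric cooling schedule
    candidate = c + random.gauss(0.0, 0.1)
    delta = fitness(candidate) - fitness(c)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        c = candidate                   # accept downhill, sometimes uphill
    temp *= 0.99
```

A genetic-algorithm variant replaces the single walker with a population of candidate parameter sets recombined and mutated between generations.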

  7. Awareness of Accessibility Barriers in Computer-Based Instructional Materials and Faculty Demographics at South Dakota Public Universities

    ERIC Educational Resources Information Center

    Olson, Christopher

    2013-01-01

    Advances in technology and course delivery methods have enabled persons with disabilities to enroll in higher education at an increasing rate. Federal regulations state persons with disabilities must be granted equal access to the information contained in computer-based instructional materials, but faculty at the six public universities in South…

  8. A new spectral method to compute FCN

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Huang, C. L.

    2014-12-01

Free core nutation (FCN) is a rotational mode of an Earth with a fluid core. All traditional theoretical methods produce an FCN period near 460 days with PREM, while precise observations (VLBI + SG tides) indicate it should be near 430 days. To close this large gap, astronomers and geophysicists have proposed various assumptions, e.g., increasing the core-mantle-boundary (CMB) flattening by about 5%, a strong coupling between nutation and the geomagnetic field near the CMB, viscous coupling, or topographic coupling. Do we really need these unproved assumptions, or is the problem with the traditional theoretical methods themselves? Earth models (e.g., PREM) provide accurate and robust profiles of physical parameters, like density and the Lamé parameters, but their radial derivatives, which are also used in all traditional methods to calculate normal modes (e.g., FCN), nutation, and tides of a non-rigid Earth theoretically, are not as trustworthy as the parameters themselves. A new multiple-layer spectral method is proposed and applied to the computation of normal modes to avoid these problems. The new method can handle not only a first-order ellipsoidal model but also irregular, asymmetric 3D Earth models. Our preliminary result for the FCN period is 435 sidereal days.

  9. Review: Computer Methods in Membrane Biomechanics.

    PubMed

    Humphrey, J. D.

    1998-01-01

    The purpose of this paper is twofold: first, to review analytical, experimental, and numerical methods for studying the nonlinear, pseudoelastic behavior of membranes of interest in biomechanics, and second, to present illustrative examples from the literature for a variety of biomembranes (e.g., skin, pericardium, pleura, aneurysms, and cells) as well as elastomeric membranes used in balloon catheters and new cell stretching tests. Although a membrane approach affords great simplifications in comparison to the three-dimensional theory of nonlinear elasticity, associated problems are still challenging. Computer-based methods are essential, therefore, for performing the requisite experiments, analyzing data, and solving boundary and initial value problems. Emphasis is on stable equilibria although material instabilities and elastodynamics are discussed. PMID:11264803

  10. Introduction to the Theory of Computation Public Key Cryptography and RSA

    E-print Network

    Gallier, Jean

CIS511, Introduction to the Theory of Computation: Public Key Cryptography and RSA. Jean Gallier, April 30, 2010. Chapter 1: Public Key Cryptography; The RSA System. 1.1 The RSA System.
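The RSA system these notes introduce can be demonstrated end to end with toy parameters; the tiny primes below are chosen for readability only, and real deployments use primes hundreds of digits long.

```python
# Toy RSA key generation with tiny primes (illustration only).
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e

message = 42
cipher = pow(message, e, n)    # encryption: m^e mod n
plain = pow(cipher, d, n)      # decryption: c^d mod n, recovers m
```

The three-argument `pow` performs fast modular exponentiation, and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse without a hand-written extended Euclid.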

  11. Multispectrum method and the computation of vapor equation

    Microsoft Academic Search

    Zhongzhen Ji; Bin Wang

    1997-01-01

In order to improve the practicality of the spectral method and the efficiency of computation, the multi-spectrum method is proposed on the basis of the multi-grid method. Coarse spectra are used to compute the slow nonlinear part (including physical processes), while fine spectra are used to compute the fast linear part. This method not only can reduce computation time, but also can

  12. Computational Methods in Biomechanics and Physics A Dissertation

    E-print Network

    Lapin, Sergey

Computational Methods in Biomechanics and Physics. A dissertation presented to the faculty for the degree of Doctor of Philosophy by Serguei Lapin, May 2005.

  13. Multiresolution Reproducing Kernel Particle Method for Computational Fluid Dynamics

    E-print Network

    Liu, Wing Kam

Multiresolution Reproducing Kernel Particle Method for Computational Fluid Dynamics. Wing Kam Liu. Abstract: multiresolution analysis based on the Reproducing Kernel Particle Method in fluid dynamics. KEY WORDS: meshless kernel particle method, multiresolution analysis, wavelets

  14. Computational Optimization Methods

    E-print Network

    Cheng, Jianlin Jack

The course covers typical computational optimization methods widely used in many computing domains, such as bioinformatics, data mining, and machine learning, with application in deep learning networks. Topics include programming and its applications in graph theory and sequence alignment, and linear and integer programming. Assignments: there is one reading assignment for each topic.

  15. Computational Optimization Methods

    E-print Network

    Cheng, Jianlin Jack

The course covers computational optimization methods widely used in many computing domains, such as data mining, machine learning and bioinformatics, with application in deep learning networks. There is one reading assignment for each of some topics. Students study real-world applications in one or more domains. An active, problem-solving based teaching and learning format will be applied.

  16. Safe and Reliable Computer Control Systems Concepts and Methods

    Microsoft Academic Search

    Henrik Thane

    1996-01-01

    The introduction of computers into safety-critical control systems lays a heavy burden on the software designers. The public and the legislators demand reliable and safe computer control systems, equal to or better than the mechanical or electromechanical parts they replace. The designers must have a thorough understanding of the system and more accurate software design and verification techniques than have

  17. Parallel computation of a domain decomposition method

    SciTech Connect

    Chin, R.C.Y.; Hedstrom, G.W.; Scroggs, J.S.; Sorensen, D.C.

    1987-04-01

    We present a parallel algorithm for the efficient solution of a singularly perturbed parabolic partial differential equation. The method is based upon domain decomposition that is dictated by singular perturbation analysis. A transformation is made to a coordinate system induced by the characteristics of the reduced hyperbolic equation. Asymptotic analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. This reduces the number and size of the domains where the full equation must be solved. Parallelism is evident at two levels. Domain decomposition provides parallelism at the highest level, and within each domain there is ample opportunity to exploit additional parallelism. Important features of the method include independent solution of the characteristic curves at the lowest level and low communication requirements between processes devoted to solving in the various domains. Tightly coupled processes are only required in domains where the full equation must be solved. We discuss the implementation and some aspects of the performance of this algorithm on existing parallel computers. We also touch upon certain aspects of iterative solution of nonlinear problems.
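The record's asymptotics-driven decomposition is specific to the paper and not reproduced here; as a generic, hedged sketch of the underlying domain-decomposition idea (independent subdomain solves coupled only through interface values), the following alternating Schwarz iteration solves -u'' = 1 on [0, 1] with u(0) = u(1) = 0, whose exact solution is u(x) = x(1 - x)/2. The grid sizes and overlap are illustrative choices, not taken from the paper.

```python
# Alternating Schwarz iteration, the basic domain-decomposition idea:
# two overlapping subdomains repeatedly solve their own boundary-value
# problem, exchanging interface values, for -u'' = 1 on [0, 1] with
# u(0) = u(1) = 0 (exact solution u(x) = x(1 - x)/2).

def solve_bvp(f, left, right, h):
    """Solve -u'' = f on one subdomain (Thomas algorithm, Dirichlet ends)."""
    n = len(f)
    diag = [2.0 / h**2] * n
    rhs = f[:]
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    off = -1.0 / h**2
    for i in range(1, n):                      # forward elimination
        m = off / diag[i - 1]
        diag[i] -= m * off
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        u[i] = (rhs[i] - off * u[i + 1]) / diag[i]
    return u

N = 99                       # interior grid points, h = 0.01
h = 1.0 / (N + 1)
u = [0.0] * (N + 2)          # global iterate, including boundary nodes

s1 = range(1, 70)            # subdomain 1: interior nodes 1..69
s2 = range(40, N + 1)        # subdomain 2: interior nodes 40..99 (overlap 40..69)

for _ in range(50):          # Schwarz sweeps: solve, swap interface data, repeat
    u1 = solve_bvp([1.0] * len(s1), u[0], u[s1[-1] + 1], h)
    for k, i in enumerate(s1):
        u[i] = u1[k]
    u2 = solve_bvp([1.0] * len(s2), u[s2[0] - 1], u[N + 1], h)
    for k, i in enumerate(s2):
        u[i] = u2[k]

print(abs(u[50] - 0.125))    # error at x = 0.5; the 3-point scheme is nodally exact here
```

The two subdomain solves inside each sweep are independent of each other except for the interface exchange, which is where the parallelism the abstract describes comes from.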

  18. Children's in-library use of computers in an urban public library

    Microsoft Academic Search

    Melissa Gross; Eliza T. Dresang; Leslie E. Holtb

    2004-01-01

    This article describes children's use of networked technology in three branches of an urban public library. Direct observations of their use of computers and data gathered from brief interviews with them were recorded using personal digital assistants (PDAs). Findings suggest that (1) the largest proportion of children's use of computers is for access to games, (2) use of computers for

  19. 77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-18

    ...Public Meeting--Cloud Computing and Big Data Forum and Workshop AGENCY: National...announces a Cloud Computing and Big Data Forum and Workshop to be held on Tuesday...workshop. The NIST Cloud Computing and Big Data Forum and Workshop will bring...

  20. Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing

    E-print Network

    Hou, Y. Thomas

Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing. Cong Wang, Qian Wang, et al. The large size of outsourced data makes data integrity protection in Cloud Computing a very challenging task. I. INTRODUCTION: Cloud Computing has been envisioned as the next-generation architecture

  1. Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing

    E-print Network

    Hou, Y. Thomas

Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing. Qian Wang, IEEE, and Jin Li. Abstract: Cloud Computing has been envisioned as the next-generation architecture. This work studies the integrity of data storage in Cloud Computing; in particular, we consider the task of allowing a third party

  3. Studies on the zeros of Bessel functions and methods for their computation

    NASA Astrophysics Data System (ADS)

    Kerimov, M. K.

    2014-09-01

    The zeros of Bessel functions play an important role in computational mathematics, mathematical physics, and other areas of natural sciences. Studies addressing these zeros (their properties, computational methods) can be found in various sources. This paper offers a detailed overview of the results concerning the real zeros of the Bessel functions of the first and second kinds and general cylinder functions. The author intends to publish several overviews on this subject. In this first publication, works dealing with real zeros are analyzed. Primary emphasis is placed on classical results, which are still important. Some of the most recent publications are also discussed.
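The survey covers many computational approaches; as one small, standard illustration (not drawn from the paper itself), the sketch below evaluates J0 by its power series and brackets its first positive zero by bisection. The series form and the bracket (2, 3) are textbook choices.

```python
# Locate the first positive zero of the Bessel function J0 by bisection.
# J0 is evaluated from its power series, which converges quickly for |x| < 10:
#   J0(x) = sum_{m>=0} (-1)^m (x/2)^(2m) / (m!)^2

def j0(x):
    term, total = 1.0, 1.0
    for m in range(1, 40):
        term *= -(x / 2.0) ** 2 / (m * m)   # ratio of consecutive series terms
        total += term
    return total

# J0(2) > 0 and J0(3) < 0, so the first zero lies in (2, 3).
lo, hi = 2.0, 3.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if j0(lo) * j0(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(0.5 * (lo + hi))   # ~2.404825557695773, the classical first zero j_{0,1}
```

Bisection is robust but slow; the literature the survey reviews includes much faster iterations (e.g. Newton-type and asymptotic-expansion methods) built on the same bracketing idea.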

  4. Computational methods for analyzing human genetic variation

    E-print Network

    Bansal, Vikas

    2008-01-01

Computational Methods for Analyzing Human Genetic Variation, by Vikas Bansal. A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Computer Science and Engineering.

  5. SIMULATING ELASTIC LIGHT SCATTERING USING HIGH PERFORMANCE COMPUTING METHODS

    E-print Network

Alfons G. Hoekstra and Peter M. A. Sloot, Parallel Scientific Computing and Simulation Group, Department of Computer Systems. The merits of parallel computing are demonstrated on the basis of CD simulations of systems

  6. A computer method for perspective drawing 

    E-print Network

    Haynes, Herbert Ray

    1966-01-01

to demonstrate the capabilities of the program. All of the drawings were produced on a California Computer Products model 565 digital incremental plotter driven by an IBM 1401 computer. The computing required to prepare the drawings was performed by an IBM 7094 computer, which recorded the commands for controlling the plotter onto magnetic tape. The tape was then read by the IBM 1401, which controlled the plotting of the drawings directly on reproduction masters. Since the plotter is an incremental...
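The geometric core of any computer method for perspective drawing is projecting 3-D points onto a picture plane by similar triangles; the minimal sketch below is a generic illustration, not the thesis's actual program, and the eye distance and coordinates are invented.

```python
# Minimal perspective projection: with the eye at (0, 0, -d) looking toward +z
# and the picture plane at z = 0, similar triangles give the projection of a
# point (x, y, z), z >= 0, as  x' = x * d / (d + z),  y' = y * d / (d + z).

def project(point, d):
    x, y, z = point
    s = d / (d + z)          # similar-triangles scale factor
    return (x * s, y * s)

# Points on the picture plane are unchanged; deeper points shrink toward
# the center of the drawing, producing the perspective effect.
print(project((1.0, 1.0, 0.0), 10.0))    # (1.0, 1.0)
print(project((1.0, 1.0, 10.0), 10.0))   # (0.5, 0.5)
```

A plotter program like the one described would project each edge's endpoints this way and then draw the connecting line segments in 2-D.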

  7. Meshless methods for computational fluid dynamics

    Microsoft Academic Search

    Aaron Jon Katz

    2009-01-01

    While the generation of meshes has always posed challenges for computational scientists, the problem has become more acute in recent years. Increased computational power has enabled scientists to tackle problems of increasing size and complexity. While algorithms have seen great advances, mesh generation has lagged behind, creating a computational bottleneck. For industry and government looking to impact current and future

  8. Computational Methods for Protein Identification from Mass Spectrometry Data

    Microsoft Academic Search

    Leo McHugh; Jonathan W. Arthur; Johanna McEntyre

    2008-01-01

    Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different

  9. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  10. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2' . Finally, a host image was enlarged by extending one pixel into 2×2 pixels and each element in M1 and M2' was multiplied with a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying public-key encryption algorithm, the key distribution was facilitated, and also compared with the image hiding method based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
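The CIFT stage is specific to the paper, but the RSA step it relies on can be shown with a tiny textbook example. The primes below are toy values, far too small for real security, and the integer message stands in for the encoded mask M2.

```python
# Textbook RSA on toy numbers: key generation, encryption, decryption.
# Real deployments use ~2048-bit moduli and padding such as OAEP.

p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (2753)

m = 65                     # a message encoded as an integer < n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypting with the private key recovers m
print(c)                   # 2790
```

The key-distribution advantage the abstract mentions comes from this asymmetry: anyone holding (e, n) can encrypt the mask, while only the holder of d can recover it.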

  11. A public data hub for benchmarking common brain-computer interface algorithms

    NASA Astrophysics Data System (ADS)

    Zander, Thorsten O.; Ihme, Klas; Gärtner, Matti; Rötting, Matthias

    2011-04-01

    Methods of statistical machine learning have recently proven to be very useful in contemporary brain-computer interface (BCI) research based on the discrimination of electroencephalogram (EEG) patterns. Because of this, many research groups develop new algorithms for both feature extraction and classification. However, until now, no large-scale comparison of these algorithms has been accomplished due to the fact that little EEG data is publicly available. Therefore, we at Team PhyPA recorded 32-channel EEGs, electromyograms and electrooculograms of 36 participants during a simple finger movement task. The data are published on our website www.phypa.org and are freely available for downloading. We encourage BCI researchers to test their algorithms on these data and share their results. This work also presents exemplary benchmarking procedures of common feature extraction methods for slow cortical potentials and event-related desynchronization as well as for classification algorithms based on these features.

  12. An Immersed Boundary Method for Computing Anisotropic Permeability of

    E-print Network

    Al Hanbali, Ahmad

An Immersed Boundary Method for Computing Anisotropic Permeability of Structured Porous Media. Outline: averaged transport in porous media, governed by Darcy's law u = -(k/µf) ∇pf, where the permeability tensor k is a measure of flow through the medium.
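The averaged-transport relation in this record is Darcy's law, u = -(1/µf) k ∇pf; a minimal sketch of evaluating it for an anisotropic (tensor) permeability, with all numbers invented for illustration:

```python
# Darcy flux for an anisotropic medium:  u = -(1/mu_f) * K @ grad_p.
# K is the 2x2 permeability tensor; off-diagonal entries couple the
# coordinate directions, so flow need not align with the pressure gradient.

def darcy_flux(K, mu_f, grad_p):
    return [-(K[i][0] * grad_p[0] + K[i][1] * grad_p[1]) / mu_f
            for i in range(2)]

K = [[2.0, 0.5],
     [0.5, 1.0]]            # symmetric positive-definite permeability tensor
u = darcy_flux(K, mu_f=1.0, grad_p=[1.0, 0.0])
print(u)                    # [-2.0, -0.5]: flow is deflected off the gradient axis
```

Computing the entries of K for a structured microgeometry is exactly what the immersed boundary method in the record is for; the sketch only shows how the resulting tensor is used.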

  13. Who's in the Queue? A Demographic Analysis of Public Access Computer Users and Uses in U.S. Public Libraries. Research Brief Number 4

    ERIC Educational Resources Information Center

    Manjarrez, Carlos A.; Schoembs, Kyle

    2011-01-01

    Over the past decade, policy discussions about public access computing in libraries have focused on the role that these institutions play in bridging the digital divide. In these discussions, public access computing services are generally targeted at individuals who either cannot afford a computer and Internet access, or have never received formal…

  14. A Classification of Recent Australasian Computing Education Publications

    ERIC Educational Resources Information Center

    Computer Science Education, 2007

    2007-01-01

    A new classification system for computing education papers is presented and applied to every computing education paper published between January 2004 and January 2007 at the two premier computing education conferences in Australia and New Zealand. We find that while simple reports outnumber other types of paper, a healthy proportion of papers…

  15. COMPUTER MODELING OF MARINE WATERS WITH PUBLIC DOMAIN SOFTWARE

    Microsoft Academic Search

    Bert Rubash; Elizabeth Kilanowski

    Scientific computing developed historically within international scientific traditions governed by copyright law. By contrast, commercial property rights and patent law have regulated other forms of computing—engineering, financial, entertainment, publishing, and personal computing. Within the scientific tradition, collaboration and citation are sufficient motivation and reward, and national and international cooperation is a catalyst and sometimes a necessity. Within the commercial tradition,

  16. METHODS FOR COMPUTING HARMONIC DISTORTION IN LOW FREQUENCY POWER AMPLIFIER

    Microsoft Academic Search

    Adrian Virgil Craciun; Delia Ungureanu; Dominic Mircea Kristaly

This paper presents a few different methods for computing the nonlinear distortion of an amplifier and compares them with one another, in order to find the simplest one that gives acceptable precision. An approximate method for computing the amplifier distortion is introduced. The method is based on the five-point distortion analysis and allows the designer to identify the most important low frequency

  17. Computer-Aided Dispatch System as a Decision Making Tool in Public and Private Sectors

    E-print Network

    Lee, I-Jen

    We describe in detail seven distinct areas in both public and private sectors in which a real-time computer-aided dispatch system is applicable to the allocation of scarce resources. Characteristics of a real-time ...

  18. Atomistic Method Applied to Computational Modeling of Surface Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Abel, Phillip B.

    2000-01-01

    The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. 
The computational tool, which is based on the BFS (Bozzolo, Ferrante, and Smith) method for the calculation of the energetics, consists of a small number of simple PC-based computer codes that deal with the different aspects of surface alloy formation. Two analysis modes are available within this package. The first mode provides an atom-by-atom description of real and virtual stages during the process of surface alloying, based on the construction of catalogues of configurations where each configuration describes one possible atomic distribution. BFS analysis of this catalogue provides information on accessible states, possible ordering patterns, and details of island formation or film growth. More importantly, it provides insight into the evolution of the system. Software developed by the Computational Materials Group allows for the study of an arbitrary number of elements forming surface alloys, including an arbitrary number of surface atomic layers. The second mode involves large-scale temperature-dependent computer simulations that use the BFS method for the energetics and provide information on the dynamic processes during surface alloying. These simulations require the implementation of Monte-Carlo-based codes with high efficiency within current workstation environments. This methodology capitalizes on the advantages of the BFS method: there are no restrictions on the number or type of elements or on the type of crystallographic structure considered. This removes any restrictions in the definition of the configuration catalogues used in the analytical calculations, thus allowing for the study of arbitrary ordering patterns, ultimately leading to the actual surface alloy structure. Moreover, the Monte Carlo numerical technique used for the large-scale simulations allows for a detailed visualization of the simulated process, the main advantage of this type of analysis being the ability to understand the underlying features that drive these processes.
Because of the simplicity of the BFS method for the energetics used in these calculations, a detailed atom-by-atom analysis can be performed at any

  19. A method of billing third generation computer users

    NASA Technical Reports Server (NTRS)

    Anderson, P. N.; Hyter, D. R.

    1973-01-01

A method is presented for charging users for the processing of their applications on third-generation digital computer systems. For background purposes, problems and goals in billing on third-generation systems are discussed. Detailed formulas are derived based on expected utilization and computer component cost. These formulas are then applied to a specific computer system (UNIVAC 1108). The method, although possessing some weaknesses, is presented as a definite improvement over the use of second-generation billing methods.
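The report's actual formulas are not reproduced in the record; a generic utilization-weighted charging scheme of the kind it describes might look like the following sketch, where the component names, rates, and usage figures are all invented for illustration.

```python
# Utilization-based billing: each job is charged for each hardware component
# in proportion to its measured use, at a rate derived from that component's
# cost. All rates and usage numbers below are purely illustrative.

rates = {                        # dollars per unit of use
    "cpu_seconds": 0.50,
    "memory_kword_seconds": 0.002,
    "io_operations": 0.0001,
}

def job_charge(usage):
    """Charge = sum over components of (component rate * measured usage)."""
    return sum(rates[c] * usage.get(c, 0.0) for c in rates)

usage = {"cpu_seconds": 120.0,
         "memory_kword_seconds": 4000.0,
         "io_operations": 2500.0}
print(round(job_charge(usage), 2))   # 68.25
```

Deriving each rate from expected utilization and component cost, rather than from elapsed wall-clock time alone, is what distinguishes the third-generation approach described in the abstract from second-generation billing.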

  20. Improved Computational Methods for Ray Tracing

    Microsoft Academic Search

    Hank Weghorst; Gary Hooper; Donald P. Greenberg

    1984-01-01

This paper describes algorithmic procedures that have been implemented to reduce the computational expense of producing ray-traced images. The selection of bounding volumes is examined to reduce the computational cost of the ray-intersection test. The use of object coherence, which relies on a hierarchical description of the environment, is then presented. Finally, since the building of the ray-intersection trees
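A standard bounding-volume test of the kind the paper evaluates is the ray versus axis-aligned box "slab" intersection; the sketch below is a generic instance, not necessarily the paper's own volume choice, and the box coordinates are invented.

```python
# Ray vs. axis-aligned bounding box (the "slab" method): intersect the ray's
# parameter interval with each axis-aligned slab; the box is hit iff the
# interval stays non-empty. The test is cheap, so it prunes expensive
# ray-object intersection tests.

def hits_box(origin, direction, box_min, box_max):
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not (lo <= o <= hi):     # ray parallel to and outside this slab
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return False
    return True

print(hits_box((0, 0, 0), (1, 1, 0), (2, 1, -1), (4, 3, 1)))   # True
print(hits_box((0, 0, 0), (1, 0, 0), (2, 1, -1), (4, 3, 1)))   # False
```

Organizing such boxes into a hierarchy is the object-coherence idea the abstract mentions: a miss against a parent box rules out every object it contains.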

  1. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

    EPA Science Inventory

    The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

  2. PUBLIC KEY ENCRYPTION CAN BE SECURE AGAINST ENCRYPTION EMULATION ATTACKS BY COMPUTATIONALLY UNBOUNDED

    E-print Network

    Shpilrain, Vladimir

This paper explains why, contrary to a prevalent opinion, public key encryption can be secure against "encryption emulation" attacks by computationally unbounded adversaries. Supported by the NSF grant DMS-0405105. ...cryptographers was the fact that encryption by the sender (Bob) can

  3. Multi-hop Access Pricing in Public Area WLANs Computer Science & Technology

    E-print Network

    Cheng, Xiuzhen "Susan"

Yong Cui, Computer Science & Technology, Tsinghua University. The paper proposes a complete pricing strategy for multi-hop access in public area WLANs to encourage users to cooperate. Cutoff bandwidth allocation is a crucial issue in the pricing strategy; optimal bandwidth allocation schemes

  4. The Development of an Online Course to Teach Public Administrators Computer Utilization

    Microsoft Academic Search

    Janet Gubbins; Melanie Clay; Jerry Perkins

    Although there is a growing requirement that public administrators have technology skills, within the Master of Public Administration programs at most universities there are few accommodations for technology training that are both field specific and meet the demands of non-traditional graduate students. Often times the computer courses that are offered are designed to address the needs of students pursuing careers

  5. The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers

    ERIC Educational Resources Information Center

    DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert

    2013-01-01

    Public libraries play an important part in the development of a community. Today, they are seen as more than store houses of books; they are also responsible for the dissemination of online, and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for internet access. Using a…

  6. Public library computer training for older adults to access high-quality Internet health information

    Microsoft Academic Search

    Bo Xie; Julie M. Bugg

    2009-01-01

    An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at

  7. Computational methods in sequence and structure prediction

    NASA Astrophysics Data System (ADS)

    Lang, Caiyi

This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our results show that there exists a putative cis-regulatory element "AC(C/G)TAC(C)" in the upstream regions of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes, and this element might also be the binding site for the MYB class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1 deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of 876 Arabidopsis genes, and (b) exhaustive scanning of all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, called "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to their occurrence within the genome. (c) We will introduce a new general purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework.
With this framework we have developed a software package which is capable of designing novel protein structures at the atomic resolution. This software package allows us to perform protein structure design with a flexible backbone. The backbone flexibility includes loop region relaxation as well as a secondary structure collective mode relaxation scheme. (Abstract shortened by UMI.)
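The motif-counting step that underlies the dissertation's enrichment estimates can be sketched generically: scan upstream regions for a degenerate motif and tally the occurrences. The sequences below are invented, and the bracket notation simply mirrors the "AC(C/G)TAC" style quoted in the abstract; the dissertation's real pipeline adds the statistical enrichment comparison on top of counts like these.

```python
import re

# Count occurrences of a degenerate DNA motif across a set of upstream
# regions. Only the single (C/G) choice position is handled, matching the
# motif style quoted in the abstract; a full IUPAC mapping would need more.

def motif_count(motif, sequences):
    pattern = re.compile(motif.replace("(C/G)", "[CG]"))
    return sum(len(pattern.findall(seq)) for seq in sequences)

upstream = [
    "TTACCTACGGACGTACAA",   # contains both ACCTAC and ACGTAC
    "GGGGACGTACTT",         # contains ACGTAC
    "TTTTTTTT",             # no match
]
print(motif_count("AC(C/G)TAC", upstream))   # 3
```

Comparing such counts against the genome-wide rate is what turns raw occurrences into the enrichment statistic the abstract describes.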

  8. Fast car/human classification methods in the computer vision tasks

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Boris V.; Malin, Ivan K.; Vizilter, Yuri V.; Huang, Shih-Chia; Kuo, Sy-Yen

    2013-04-01

    In this paper we propose a method for classification of moving objects of "human" and "car" types in computer vision systems using statistical hypotheses and integration of the results using two different decision rules. FAR-FRR graphs for all criteria and the decision rule are plotted. Confusion matrix for both ways of integration is presented. The example of the method application to the public video databases is provided. Ways of accuracy improvement are proposed.
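FAR-FRR curves of the kind plotted in the paper come from sweeping a decision threshold over the scores the classifier assigns to the two classes; a generic sketch with invented scores (not the paper's data or criteria):

```python
# False accept rate (FAR) and false reject rate (FRR) at a given threshold:
# FAR is the fraction of wrong-class scores accepted, FRR the fraction of
# right-class scores rejected. Sweeping the threshold traces the FAR-FRR curve.

def far_frr(genuine, impostor, threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

genuine = [0.9, 0.8, 0.75, 0.6, 0.55]   # scores for true "car" samples (invented)
impostor = [0.7, 0.5, 0.4, 0.3, 0.1]    # scores for "human" samples (invented)

for t in (0.45, 0.65):
    print(t, far_frr(genuine, impostor, t))
```

Raising the threshold trades false accepts for false rejects, which is exactly the trade-off the paper's decision rules are chosen to balance.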

  9. A method for evaluating computer-supported

    E-print Network

    Guerrero, Luis

of communication, the interaction between peers, the reward system and gender differences, among others [1]. International Journal of Computer Applications in Technology, Volume 19, Nos. 3/4, 2004, p. 151.

  10. Novel Methods for Communicating Plasma Science to the General Public

    NASA Astrophysics Data System (ADS)

    Zwicker, Andrew; Merali, Aliya; Wissel, S. A.; Delooper, John

    2012-10-01

The broader implications of Plasma Science remain an elusive topic that the general public rarely discusses, regardless of their relevance to energy, the environment, and technology. Recently, we have looked beyond print media for methods to reach large numbers of people in creative and informative ways. These have included video, art, images, and music. For example, our submission to the ``What is a Flame?'' contest was ranked in the top 15 out of 800 submissions. Images of plasmas have won 3 out of 5 of the Princeton University ``Art of Science'' competitions. We use a plasma speaker to teach students of all ages about sound generation and plasma physics. We report on the details of each of these, as well as on future videos and animations under development.

  11. Methods, Metrics and Motivation for a Green Computer Science Program

    E-print Network

    Way, Thomas

principles and approaches for reducing energy use and, ultimately, the carbon and pollution footprint. The technology community, specifically computer users, has popularized the term "Green Computing," which is the reduction of the pollution and energy footprint of computers [19]. While the goal

  12. Investigation on reconstruction methods applied to 3D terahertz computed

    E-print Network

    Boyer, Edmond

Investigation on reconstruction methods applied to 3D terahertz computed tomography. B. Recur et al. (em.abraham@loma.u-bordeaux1.fr). Abstract: 3D terahertz computed tomography has been performed using... Reference: B. Ferguson, S. Wang, D. Gray, D. Abbot, and X. C. Zhang, "T-ray computed tomography," Opt.

  13. Computational methods for modifying seemingly unrelated regressions models

    Microsoft Academic Search

    Erricos John Kontoghiorghes

    2004-01-01

Computationally efficient methods for updating seemingly unrelated regressions models with new observations are proposed. A recursive algorithm to solve a series of updating problems is developed. The algorithm is based on orthogonal transformations and has as its main computational tool the updated generalized QR decomposition (UGQRD). Strategies to compute the orthogonal factorizations by exploiting the block-sparse structure of the matrices are

  14. PUBLICATIONS --G. CAGINALP--January 2010 Phase Field (Diffuse Interface Model)-Computational

    E-print Network

    Çaginalp, Gunduz

"Phase field computations of single-needle crystals, crystal growth and motion by mean curvature" (with E. ...); "... analysis of phase field alloys" (with W. Xie), in Free Boundary Problems, Theory and Application, Pitman.

  15. Computer-Based National Information Systems. Technology and Public Policy Issues.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    A general introduction to computer based national information systems, and the context and basis for future studies are provided in this report. Chapter One, the introduction, summarizes computers and information systems and their relation to society, the structure of information policy issues, and public policy issues. Chapter Two describes the…

  16. Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing

    E-print Network

Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing. Qian Wang et al., {wjlou}@ece.wpi.edu. Abstract: Cloud Computing has been envisioned as the next-generation architecture, but it brings security challenges which have not been well understood. This work studies the problem of ensuring

  17. The Battle to Secure Our Public Access Computers

    ERIC Educational Resources Information Center

    Sendze, Monique

    2006-01-01

    Securing public access workstations should be a significant part of any library's network and information-security strategy because of the sensitive information patrons enter on these workstations. As the IT manager for the Johnson County Library in Kansas City, Kan., this author is challenged to make sure that thousands of patrons get the access…

  18. Computing methods in applied sciences and engineering. VII

    SciTech Connect

    Glowinski, R.; Lions, J.L.

    1986-01-01

    The design of computers with fast memories, capable of up to one billion floating point operations per second, is important for the attempts being made to solve problems in Scientific Computing. The role of numerical algorithm designers is important due to the architectures and programming necessary to utilize the full potential of these machines. Efficient use of such computers requires sophisticated programming tools, and in the case of parallel computers algorithmic concepts have to be introduced. These new methods and concepts are presented.

  19. A COMPUTATIONAL METHOD FOR CLASSIFYING PHAGES Presented to the

    E-print Network

A Computational Method for Classifying Phages. A Thesis Presented to the Faculty. Robert A. Edwards. With a ratio of 10:1, it is estimated that there exist 10^31 phage particles on the planet; viruses thus are the most

  20. Monte Carlo Methods: A Computational Pattern for Our Pattern Language

    E-print Network

    California at Berkeley, University of

    Monte Carlo Methods: A Computational Pattern for Our Pattern Language. Jike Chong, University of California at Berkeley. The Monte Carlo Methods pattern is a computational software programming pattern in Our Pattern Language (OPL), which captures tacit knowledge about software design; one can construct a pattern language using a set of related patterns.
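    The core of any Monte Carlo pattern is repeated independent sampling followed by aggregation. As an illustration (not taken from the OPL text; the function name is mine), a minimal Python sketch estimating pi by rejection sampling:

    ```python
    import random

    def monte_carlo_pi(n_samples, seed=0):
        """Estimate pi by drawing uniform points in the unit square and
        counting the fraction that lands inside the quarter circle."""
        rng = random.Random(seed)
        inside = 0
        for _ in range(n_samples):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / n_samples
    ```

    Each sample is independent, which is exactly why the pattern maps so well onto large parallel clusters: the loop can be split across workers and only the counts need aggregating.
    
    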

  1. 12 CFR 227.25 - Unfair balance computation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...2010-01-01 2010-01-01 false Unfair balance computation method. 227.25 Section...Account Practices Rule § 227.25 Unfair balance computation method. (a) General rule...bank must not impose finance charges on balances on a consumer credit card account...

  2. The conjugate gradient regularization method in Computed Tomography problems

    Microsoft Academic Search

    Elena Loli Piccolomini; Fabiana Zama

    1999-01-01

    In this work we solve inverse problems from the area of Computed Tomography by means of regularization methods based on conjugate gradient iterations. We develop a stopping criterion which is efficient for the computation of a regularized solution for the least-squares normal equations. The stopping rule can be suitably applied also to the Tikhonov regularization method. We report computational experiences based on different physical…
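    The idea of CG iteration as regularization can be sketched generically: run conjugate gradients on the normal equations A^T A x = A^T b and let the iteration cap act as the regularization parameter. This is a minimal illustration of that mechanism, not the authors' stopping rule; the function name and defaults are mine.

    ```python
    import numpy as np

    def cg_normal_equations(A, b, max_iter=10, tol=1e-10):
        """Conjugate gradient on the normal equations A^T A x = A^T b.
        Truncating the iteration (max_iter) regularizes ill-posed problems."""
        x = np.zeros(A.shape[1])
        r = A.T @ b - A.T @ (A @ x)   # residual of the normal equations
        p = r.copy()
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A.T @ (A @ p)
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x
    ```

    For noisy tomographic data one would stop well before convergence; early iterations capture the smooth components of the solution, later ones amplify noise.
    
    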

  3. Lecture Notes in Computer Science 5333 Commenced Publication in 1973

    E-print Network

    Barbosa, Alberto

    Petrobras Research Center, Ilha do Fundão, 21949-900, Rio de Janeiro, Brazil; Tecgraf Computer Graphics, Brazil. ismaelh@petrobras.com.br, {abraposo,mgattass}@tecgraf.puc-rio.br. Abstract: We present an SOA…

  4. 47 CFR 90.483 - Permissible methods and requirements of interconnecting private and public systems of...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...requirements of interconnecting private and public systems of communications. 90.483 Section... Transmitter Control Interconnected Systems § 90.483 Permissible methods and...requirements of interconnecting private and public systems of communications....

  5. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

  6. Computational methods for the Boltzmann equation

    Microsoft Academic Search

    F. Gropengiesser; H. Neunzert; J. Struckmeier

    1990-01-01

    The basic ideas and practical aspects for numerical methods for solving the Boltzmann equation are presented. The main field of application considered is the reentry of the Space Shuttle in the transition from free molecular flow to continuum flow. The method used is called Finite Pointset Method (FPM) approximating the solution by finite sets of particles. Convergence results and practical

  7. Computational Methods for Analyzing Health News Coverage

    ERIC Educational Resources Information Center

    McFarlane, Delano J.

    2011-01-01

    Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…

  8. Bootstrap methods in computer simulation experiments

    Microsoft Academic Search

    Russell C. H. Cheng

    1995-01-01

    We critically review the work that has been done in applying basic, smoothed and parametric bootstrap methods to simulation experiments. We develop a framework to classify bootstrap methods in this context and use it to compare various bootstrap schemes. Most bootstrap methods are hard to analyse theoretically. An exception is the parametric case for which a detailed analysis can be
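    The basic nonparametric bootstrap that the review classifies can be sketched in a few lines: resample the simulation output with replacement and recompute the statistic to get a sampling distribution. A minimal sketch with illustrative names (not from the paper):

    ```python
    import random
    import statistics

    def bootstrap_ci(samples, stat=statistics.mean, n_boot=2000,
                     alpha=0.05, seed=1):
        """Percentile bootstrap confidence interval for a statistic of
        simulation output: resample with replacement, recompute, sort."""
        rng = random.Random(seed)
        n = len(samples)
        reps = sorted(
            stat([samples[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)
        )
        lo = reps[int((alpha / 2) * n_boot)]
        hi = reps[int((1 - alpha / 2) * n_boot) - 1]
        return lo, hi
    ```

    The smoothed and parametric variants discussed in the paper replace the raw resampling step with draws from a kernel-smoothed or fitted parametric distribution, respectively.
    
    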

  9. Methods for multiphase computational fluid dynamics

    Microsoft Academic Search

    B. G. M. van Wachem; A. E. Almstedt

    2003-01-01

    This paper presents an overview of the physical models for computational fluid dynamic (CFD) predictions of multiphase flows. The governing equations and closure models are derived and presented for fluid–solid flows and fluid–fluid flows, both in an Eulerian and a Lagrangian framework. Some results obtained with these equations are presented. Finally, the capabilities and limitations of multiphase CFD are discussed.

  10. Simulation methods for advanced scientific computing

    Microsoft Academic Search

    T. E. Booth; J. A. Carlson; R. A. Forster

    1998-01-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems,

  11. COMPUTATIONAL METHODS FOR PREDICTING TRANSMEMBRANE ALPHA HELICES

    E-print Network

    Computational Molecular Biology final project, December 6th, 2002. Introduction: Protein crystal structures… References cited include: Jung et al. (2001), "Protein structure prediction," Current Opinion in Chemical Biology 5(1): 51–56; Journal of Structural Biology 134(2–3): 204–218; Rost, B. (2001), "Review: Protein secondary structure…"

  12. Computing with DNA 413 From: Methods in Molecular Biology, vol. 132: Bioinformatics Methods and Protocols

    E-print Network

    Kari, Lila

    Computing with DNA. From: Methods in Molecular Biology, vol. 132: Bioinformatics Methods and Protocols, p. 413. …of molecular biology to solve a difficult computational problem. Adleman's experiment solved an instance… computations. The main idea was the encoding of data in DNA strands and the use of tools from molecular biology…

  13. Computational intelligence methods for information understanding and information management

    Microsoft Academic Search

    Włodzisław Duch; Norbert Jankowski

    Information management relies on knowledge acquisition methods for extraction of knowledge from data. Statistical methods traditionally used for data analysis are satisfied with predictions, while understanding of data and extraction of knowledge from data are challenging tasks that have been pursued using computational intelligence (CI) methods. Recent advances in applications of CI methods to data understanding are presented, implementation of

  14. Consensus methods: review of original methods and their main alternatives used in public health

    PubMed Central

    Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid

    2008-01-01

    Summary Background Consensus-based studies are increasingly used as decision-making methods, for they have lower production cost than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles and methods of the four main methods, Delphi, nominal group, consensus development conference and RAND/UCLA, their use as it appears in peer-reviewed publications and validation studies published in the healthcare literature. Methods A bibliographic search was performed in Pubmed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal and Francis. Keywords, headings and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri, and used in the literature search. A search with the same terms and expressions was performed on Internet using the website Google Scholar. Results All methods, precisely described in the literature, are based on common basic principles such as definition of subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described. Few validation studies have been implemented. Not implementing these basic principles and failing to describe the methods used to reach the consensus were both frequent reasons contributing to raise suspicion regarding the validity of consensus methods. Conclusion When it is applied to a new domain with important consequences in terms of decision making, a consensus method should be first validated. PMID:19013039

  15. COMSAC: Computational Methods for Stability and Control. Part 1

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: a NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?

  16. Computational methods for the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Gropengiesser, F.; Neunzert, H.; Struckmeier, J.

    1990-02-01

    The basic ideas and practical aspects for numerical methods for solving the Boltzmann equation are presented. The main field of application considered is the reentry of the Space Shuttle in the transition from free molecular flow to continuum flow. The method used is called Finite Pointset Method (FPM) approximating the solution by finite sets of particles. Convergence results and practical aspects of the algorithm are emphasized. Ideas for the transition to the Navier-Stokes domain are discussed.

  17. Generalized Multistep Methods in Satellite Orbit Computation

    Microsoft Academic Search

    James Dyer

    1968-01-01

    The theory of generalized multistep methods using an off-grid point is extended to the special second-order equation y'' = f(x, y). New high-order methods for solving this equation, based on quasi-Hermite interpolating polynomials, are shown to exist, as well as new explicit generalized methods for a first-order equation. Some results in the theory of quasi-Hermite interpolation are given, and results…

  18. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  19. Soft computing methods in design of superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1995-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modeled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  20. Computational methods for physical mapping of chromosomes

    SciTech Connect

    Torney, D.C.; Schenk, K.R. (Los Alamos National Lab., NM (USA)); Whittaker, C.C. (International Business Machines Corp., Albuquerque, NM (USA) Los Alamos National Lab., NM (USA)); White, S.W. (International Business Machines Corp., Kingston, NY (USA))

    1990-01-01

    A standard technique for mapping a chromosome is to randomly select pieces, to use restriction enzymes to cut these pieces into fragments, and then to use the fragments for estimating the probability of overlap of these pieces. Typically, the order of the fragments within a piece is not determined, and the observed fragment data from each pair of pieces must be permuted N1 × N2 ways to evaluate the probability of overlap, N1 and N2 being the observed number of fragments in the two selected pieces. We will describe computational approaches used to substantially reduce the computational complexity of the calculation of overlap probability from fragment data. Presently, about 10^-4 CPU seconds on one processor of an IBM 3090 is required for calculation of overlap probability from the fragment data of two randomly selected pieces, with an average of ten fragments per piece. A parallel version has been written using IBM clustered FORTRAN. Parallel measurements for 1, 6, and 12 processors will be presented. This approach has proven promising in the mapping of chromosome 16 at Los Alamos National Laboratory. We will also describe other computational challenges presented by physical mapping. 4 refs., 4 figs., 1 tab.

  1. Statistical and Computational Methods for Genetic Diseases: An Overview

    PubMed Central

    Di Taranto, Maria Donata

    2015-01-01

    The identification of causes of genetic diseases has been carried out by several approaches with increasing complexity. Innovation of genetic methodologies leads to the production of large amounts of data that needs the support of statistical and computational methods to be correctly processed. The aim of the paper is to provide an overview of statistical and computational methods paying attention to methods for the sequence analysis and complex diseases.

  2. Original computer method for the experimental data processing in photoelasticity

    NASA Astrophysics Data System (ADS)

    Oanta, Emil M.; Panait, Cornel; Barhalescu, Mihaela; Sabau, Adrian; Dumitrache, Constantin; Dascalescu, Anca-Elena

    2015-02-01

    Optical methods in experimental mechanics are important because their results are accurate and they may be used for both full field interpretation and analysis of the local rapid variation of the stresses produced by the stress concentrators. Researchers conceived several graphical, analytical and numerical methods for the experimental data reduction. The paper presents an original computer method employed to compute the analytic functions of the isostatics, using the pattern of isoclinics of a photoelastic model or coating. The resulting software instrument may be included in hybrid models consisting of analytical, numerical and experimental studies. The computer-based integration of the results of these studies offers a higher level of understanding of the phenomena. A thorough examination of the sources of inaccuracy of this computer based numerical method was done and the conclusions were tested using the original computer code which implements the algorithm.

  3. Computational Methods for Collisional Plasma Physics

    SciTech Connect

    Lasinski, B F; Larson, D J; Hewett, D W; Langdon, A B; Still, C H

    2004-02-18

    Modeling the high density, high temperature plasmas produced by intense laser or particle beams requires accurate simulation of a large range of plasma collisionality. Current simulation algorithms accurately and efficiently model collisionless and collision-dominated plasmas. The important parameter regime between these extremes, semi-collisional plasmas, has been inadequately addressed to date. LLNL efforts to understand and harness high energy-density physics phenomena for stockpile stewardship require accurate simulation of such plasmas. We have made significant progress towards our goal: building a new modeling capability to accurately simulate the full range of collisional plasma physics phenomena. Our project has developed a computer model using a two-pronged approach that involves a new adaptive-resolution, ''smart'' particle-in-cell algorithm: complex particle kinetics (CPK); and developing a robust 3D massively parallel plasma production code Z3 with collisional extensions. Our new CPK algorithms expand the function of point particles in traditional plasma PIC models by including finite size and internal dynamics. This project has enhanced LLNL's competency in computational plasma physics and contributed to LLNL's expertise and forefront position in plasma modeling. The computational models developed will be applied to plasma problems of interest to LLNL's stockpile stewardship mission. Such problems include semi-collisional behavior in hohlraums, high-energy-density physics experiments, and the physics of high altitude nuclear explosions (HANE). Over the course of this LDRD project, the world's largest fully electromagnetic PIC calculation was run, enabled by the adaptation of Z3 to the Advanced Simulation and Computing (ASCI) White system. This milestone calculation simulated an entire laser illumination speckle, brought new realism to laser-plasma interaction simulations, and was directly applicable to laser target physics. 
For the first time, magnetic fields driven by Raman scatter have been observed. Also, Raman rescatter was observed in 2D. This code and its increased suite of dedicated diagnostics are now playing a key role in studies of short-pulse, high-intensity laser matter interactions. In addition, a momentum-conserving electron collision algorithm was incorporated into Z3. Finally, Z3's portability across diverse MPP platforms enabled it to serve the LLNL computing community as a tool for effectively utilizing new machines.

  4. A Fractional-Step Method Of Computing Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Rosenfeld, Moshe; Vinokur, Marcel

    1993-01-01

    Method of computing time-dependent flow of incompressible, viscous fluid involves numerical solution of Navier-Stokes equations on two- or three-dimensional computational grid based on generalized curvilinear coordinates. Equations of method derived in primitive-variable formulation. Dependent variables are pressure at center of each cell of computational grid and volume fluxes across faces of each cell. Volume fluxes replace Cartesian components of velocity; these fluxes correspond to contravariant components of velocity multiplied by volume of computational cell, in staggered grid. Choice of dependent variables enables simple extension of previously developed staggered-grid approach to generalized curvilinear coordinates and facilitates enforcement of conservation of mass.

  5. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, the informatization of colleges consists mainly of campus network construction and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing and grid computing: data is stored in the cloud, software and services are placed in the cloud and built on top of various standards and protocols, and they can be accessed through all kinds of equipment. This article introduces cloud computing and its functions, analyzes the existing problems of college network resource management, and applies cloud computing technology and methods to the construction of a college information-sharing platform.

  6. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...rules apply to public access use of the Internet on NARA-supplied computers? 1254...rules apply to public access use of the Internet on NARA-supplied computers? (a...computers (workstations) are available for Internet use in all NARA research...

  7. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...rules apply to public access use of the Internet on NARA-supplied computers? 1254...rules apply to public access use of the Internet on NARA-supplied computers? (a...computers (workstations) are available for Internet use in all NARA research...

  8. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...rules apply to public access use of the Internet on NARA-supplied computers? 1254...rules apply to public access use of the Internet on NARA-supplied computers? (a...computers (workstations) are available for Internet use in all NARA research...

  9. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...rules apply to public access use of the Internet on NARA-supplied computers? 1254...rules apply to public access use of the Internet on NARA-supplied computers? (a...computers (workstations) are available for Internet use in all NARA research...

  10. 36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...rules apply to public access use of the Internet on NARA-supplied computers? 1254...rules apply to public access use of the Internet on NARA-supplied computers? (a...computers (workstations) are available for Internet use in all NARA research...

  11. A Meshless Method for Computational Stochastic Mechanics

    Microsoft Academic Search

    S. Rahman; H. Xu

    2005-01-01

    This paper presents a stochastic meshless method for probabilistic analysis of linear-elastic structures with spatially varying random material properties. Using Karhunen-Loeve (K-L) expansion, the homogeneous random field representing material properties was discretized by a set of orthonormal eigenfunctions and uncorrelated random variables. Two numerical methods were developed for solving the integral eigenvalue problem associated with K-L…
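    The discretization step described here can be illustrated numerically: on a grid, the K-L eigenpairs are simply the eigenpairs of the covariance matrix. This sketch assumes a 1-D field with exponential covariance (my choice, not necessarily the paper's); it is not the authors' meshless formulation.

    ```python
    import numpy as np

    def kl_modes(n_points=50, corr_len=0.2, variance=1.0, n_terms=5):
        """Discrete Karhunen-Loeve expansion of a 1-D homogeneous field with
        covariance C(x, y) = variance * exp(-|x - y| / corr_len): the
        eigenpairs of the covariance matrix give the K-L modes."""
        x = np.linspace(0.0, 1.0, n_points)
        C = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        vals, vecs = np.linalg.eigh(C)            # ascending eigenvalues
        order = np.argsort(vals)[::-1][:n_terms]  # keep the largest modes
        return vals[order], vecs[:, order], x

    def sample_field(vals, vecs, rng):
        """One realization: sum_k sqrt(lambda_k) * xi_k * phi_k(x)."""
        xi = rng.standard_normal(len(vals))
        return vecs @ (np.sqrt(vals) * xi)
    ```

    Truncating to the largest eigenvalues is what reduces the random field to the small set of uncorrelated random variables used in the stochastic analysis.
    
    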

  12. Assessing public perceptions of CCS: Benefits, challenges and methods

    Microsoft Academic Search

    Thomas Roberts; Sarah Mander

    2011-01-01

    In recent years, public involvement in the decision-making process for science and technology has been the focus of much debate. This paper uses the deployment of carbon capture and storage technology as a case study to explore both the theoretical and practical reasons why the public need to be consulted on such issues. It concludes that a…

  13. Student Publications Enhance Teaching: Experimental Psychology and Research Methods Courses.

    ERIC Educational Resources Information Center

    Ware, Mark E.; Davis, Stephen F.

    Recent years have witnessed an increased emphasis on the professional development of undergraduate psychology students. One major thrust of this professional development has been on research that results in a convention presentation or journal publication. Research leading to journal publication is becoming a requirement for admission to many…

  14. Lecture Notes in Computer Science 7431 Commenced Publication in 1973

    E-print Network

    Schulze, Jürgen P.

    Editors: Darko Koracin, Charless Fowlkes, Sen Wang, Min-Hyung Choi, Stephan Mantler (step@stephanmantler.com), Jürgen Schulze (jschulze@ucsd.edu), Daniel Acevedo. CR Subject Classification (1998): I.3-5, H.5.2, I.2.10, J.3, F.2.2, I.3.5. LNCS Sublibrary: SL 6 – Image Processing, Computer…

  15. Lecture Notes in Computer Science 7432 Commenced Publication in 1973

    E-print Network

    Schulze, Jürgen P.

    Editors: Darko Koracin, Charless Fowlkes, Sen Wang, Min-Hyung Choi, Stephan Mantler (step@stephanmantler.com), Jürgen Schulze (jschulze@ucsd.edu), Daniel Acevedo. CR Subject Classification (1998): I.3-5, H.5.2, I.2.10, J.3, F.2.2, I.3.5. LNCS Sublibrary: SL 6 – Image Processing, Computer…

  16. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
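    One of the operators named above, differential evolution, can be sketched in its classic DE/rand/1/bin form: mutate with scaled difference vectors, crossover, and keep the better of parent versus trial. This is a generic textbook sketch with conventional parameter defaults, not the ECM framework described in the record.

    ```python
    import random

    def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                               generations=150, seed=0):
        """Minimal DE/rand/1/bin minimizer of f over box bounds."""
        rng = random.Random(seed)
        dim = len(bounds)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(pop_size)]
        cost = [f(ind) for ind in pop]
        for _ in range(generations):
            for i in range(pop_size):
                a, b, c = rng.sample(
                    [j for j in range(pop_size) if j != i], 3)
                j_rand = rng.randrange(dim)  # force one mutated coordinate
                trial = []
                for j in range(dim):
                    if rng.random() < CR or j == j_rand:
                        v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                        lo, hi = bounds[j]
                        v = min(max(v, lo), hi)  # clip to the search box
                    else:
                        v = pop[i][j]
                    trial.append(v)
                tc = f(trial)
                if tc <= cost[i]:            # greedy selection
                    pop[i], cost[i] = trial, tc
        best = min(range(pop_size), key=cost.__getitem__)
        return pop[best], cost[best]
    ```

    The fitness evaluations inside the inner loop are independent, which is what makes the population approach map naturally onto the large parallel clusters mentioned above.
    
    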

  17. Modified multirevolution integration methods for satellite orbit computation

    Microsoft Academic Search

    O. F. Graf; D. G. Bettis

    1975-01-01

    Multirevolution methods allow for the computation of satellite orbits in steps spanning many revolutions. The methods previously discussed in the literature are based on polynomial approximations, and as a result they will integrate exactly (excluding round-off errors) polynomial functions of a discrete independent variable. Modified methods are derived that will integrate exactly products of linear and periodic functions. Numerical examples

  18. Computational Methods for Modification of Metabolic Networks

    PubMed Central

    Tamura, Takeyuki; Lu, Wei; Akutsu, Tatsuya

    2015-01-01

    In metabolic engineering, modification of metabolic networks is an important biotechnology and a challenging computational task. In the metabolic network modification, we should modify metabolic networks by newly adding enzymes or/and knocking-out genes to maximize the biomass production with minimum side-effect. In this mini-review, we briefly review constraint-based formalizations for Minimum Reaction Cut (MRC) problem where the minimum set of reactions is deleted so that the target compound becomes non-producible from the view point of the flux balance analysis (FBA), elementary mode (EM), and Boolean models. Minimum Reaction Insertion (MRI) problem where the minimum set of reactions is added so that the target compound newly becomes producible is also explained with a similar formalization approach. The relation between the accuracy of the models and the risk of overfitting is also discussed. PMID:26106462
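    The Minimum Reaction Cut problem in a Boolean model can be made concrete with a toy brute-force sketch: compounds are producible by forward closure from source compounds, and we search for the smallest reaction set whose deletion blocks the target. This is illustrative only; the constraint-based formalizations the review covers scale far beyond this exhaustive search.

    ```python
    from itertools import combinations

    def producible(reactions, sources, target):
        """Forward closure in a Boolean metabolic model: a reaction fires
        when all of its substrates are available, adding its products."""
        avail = set(sources)
        changed = True
        while changed:
            changed = False
            for subs, prods in reactions:
                if subs <= avail and not prods <= avail:
                    avail |= prods
                    changed = True
        return target in avail

    def minimum_reaction_cut(reactions, sources, target):
        """Smallest set of reaction indices whose deletion makes the
        target non-producible (exhaustive; toy networks only)."""
        n = len(reactions)
        for k in range(n + 1):
            for cut in combinations(range(n), k):
                kept = [r for i, r in enumerate(reactions)
                        if i not in cut]
                if not producible(kept, sources, target):
                    return set(cut)
        return None
    ```

    Minimum Reaction Insertion is the mirror image: search over candidate reactions to *add* until the target first becomes producible.
    
    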

  19. Computational Methods for Modification of Metabolic Networks.

    PubMed

    Tamura, Takeyuki; Lu, Wei; Akutsu, Tatsuya

    2015-01-01

    In metabolic engineering, modification of metabolic networks is an important biotechnology and a challenging computational task. In the metabolic network modification, we should modify metabolic networks by newly adding enzymes or/and knocking-out genes to maximize the biomass production with minimum side-effect. In this mini-review, we briefly review constraint-based formalizations for Minimum Reaction Cut (MRC) problem where the minimum set of reactions is deleted so that the target compound becomes non-producible from the view point of the flux balance analysis (FBA), elementary mode (EM), and Boolean models. Minimum Reaction Insertion (MRI) problem where the minimum set of reactions is added so that the target compound newly becomes producible is also explained with a similar formalization approach. The relation between the accuracy of the models and the risk of overfitting is also discussed. PMID:26106462

  20. Using a portable wireless computer lab to provide outreach training to public health workers.

    PubMed

    Watson, Michael M; Timm, Donna F; Parker, Dawn M; Adams, Mararia; Anderson, Angela D; Pernotto, Dennis A; Comegys, Marianne

    2006-01-01

    Librarians at Louisiana State University Health Sciences Center in Shreveport developed an outreach program for public health workers in north Louisiana. This program provided hands-on training on how to find health information resources on the Web. Several challenges arose during this project. Public health units in the region lacked suitable teaching labs and faced limited travel budgets and tight staffing requirements, which made it impractical for public health workers to travel. One solution to these problems is a portable wireless computer lab that can be set up at each site. The outreach program utilized this approach to present on-site training to public health workers in the region. The paper discusses operational and technical issues encountered in implementing this public health outreach project. PMID:17135147

  1. Computer Methods in Applied Mechanics and Engineering

    E-print Network

    Li, Shaofan

    Moving least-square reproducing kernel methods (I): Methodology and convergence. Wing-Kam Liu, Shaofan Li, Ted Belytschko; Department of Mechanical Engineering, Robert R. McCormick School of Engineering. … estimate is given to assess the convergence rate of the approximation. It is shown that for sufficiently…

  2. Computational Methods for the Fourier Analysis of Sparse

    E-print Network

    Potts, Daniel

    Computational Methods for the Fourier Analysis of Sparse High-Dimensional Functions. Lutz K… …to these thinner discretisations, and we focus on two major topics regarding the Fourier analysis of high… …Fourier analysis is the fast computation of certain trigonometric sums. A straightforward evaluation…

  3. Why do computer methods for grounding analysis produce anomalous results?

    Microsoft Academic Search

    Fermín Navarrina; Ignasi Colominas; Manuel Casteleiro

    2003-01-01

    Grounding systems are designed to guarantee personal security, protection of equipment, and continuity of power supply. Hence, engineers must compute the equivalent resistance of the system and the potential distribution on the earth surface when a fault condition occurs. While very crude approximations were available until the 1970s, several computer methods have been more recently proposed on the basis of

  4. Evaluation of Tracking Methods for Human-Computer Interaction

    Microsoft Academic Search

    Christopher Fagiani; Margrit Betke; James Gips

    2002-01-01

    Tracking methods are evaluated in a real-time feature tracking system used for human-computer interaction (HCI). The Camera Mouse, an HCI system that uses video input to manipulate the mouse cursor, was used as the test platform for this study. The Camera Mouse was developed to assist individuals with severe disabilities in using computers, but the technology may be used

  5. Evolutionary Computational Methods for the Design of Spectral Instruments

    Microsoft Academic Search

    Richard J. Terrile; Seungwon Lee; Giovanna Tinetti; Wolfgang Fink; Paul von Allmen; Terrance L. Huntsberger

    2008-01-01

    We have developed a technique based on evolutionary computational methods (ECM) that allows for the automated optimization of complex computationally modeled systems. We have demonstrated that complex engineering and science models can be automatically inverted by incorporating them into evolutionary frameworks and that these inversions have advantages over conventional searches by not requiring expert starting guesses (designs) and by running

  6. Computer Controlled Oral Test Administration: A Method and Example.

    ERIC Educational Resources Information Center

    Milligan, W. Lloyd

    1978-01-01

    A computer/tape recorder interface was designed that permits automatic oral administration of "true-false" or "multiple-choice" type tests. This paper describes the hardware and control program software developed to implement the method on a DEC PDP 11 computer. (Author/JKS)

  7. Robust Computational Methods for Shape Interrogation Takashi Maekawa

    E-print Network

    Reuter, Martin

    Robust Computational Methods for Shape Interrogation, by Takashi Maekawa. O.E. in Ocean Engineering, Waseda University, March 1976. Submitted to the Department of Ocean Engineering in partial fulfillment. Department of Ocean Engineering, June 18, 1993. Certified by

  8. Platform-independent method for computer aided schematic drawings

    DOEpatents

    Vell, Jeffrey L. (Slingerlands, NY); Siganporia, Darius M. (Clifton Park, NY); Levy, Arthur J. (Fort Lauderdale, FL)

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  9. 2.093 Computer Methods in Dynamics, Fall 2002

    E-print Network

    Bathe, Klaus-Jürgen

    Formulation of finite element methods for analysis of dynamic problems in solids, structures, fluid mechanics, and heat transfer. Computer calculation of matrices and numerical solution of equilibrium equations by direct ...

  10. Transonic Flow Computations Using Nonlinear Potential Methods

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.

  11. Adaptive methods in computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Oden, J. T.

    A review is conducted of the basic components of adaptive methods applicable to very complex problems in fluid dynamics. Attention is given to ways of changing the structure of an approximation to reduce error, techniques for estimating the evolution of error in a CFD calculation, and the range of algorithms that are currently available for mesh-changing functions. Available numerical results which demonstrate the viability of these approaches are discussed.

  12. Computational method for analysis of polyethylene biodegradation

    NASA Astrophysics Data System (ADS)

    Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro

    2003-12-01

    In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into analysis of our model. We first establish its mathematical foundation in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem to determine the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks' cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of the polyethylene biodegradation posed in modeling are indeed appropriate.
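
    The linear-system approximation described above can be sketched numerically. The sketch below is not the authors' fitted model: the number of weight classes, the rates (`beta`, `consume`), and the time step are all hypothetical, with β-oxidation modeled as a first-order transfer of weight from each molecular-weight class to the next-lower one and direct consumption acting only on the smallest class.

```python
# Hypothetical sketch of a linear ODE system for polyethylene weight classes.
# beta-oxidation shifts weight from class i down to class i-1; the smallest
# class is consumed directly.  All rates and class counts are illustrative.

def degrade(weights, beta, consume, dt, steps):
    """Integrate the linear system dw/dt = A w with forward Euler."""
    w = list(weights)
    for _ in range(steps):
        dw = [0.0] * len(w)
        for i in range(1, len(w)):
            flux = beta * w[i]          # weight leaving class i ...
            dw[i] -= flux
            dw[i - 1] += flux           # ... arrives in class i-1
        dw[0] -= consume * w[0]         # direct consumption of small molecules
        w = [wi + dt * dwi for wi, dwi in zip(w, dw)]
    return w

w0 = [1.0, 2.0, 3.0]                    # initial GPC-style weight distribution
w5 = degrade(w0, beta=0.5, consume=0.8, dt=0.01, steps=500)
```

    Fitting `beta` and `consume` so that a simulated distribution matches a measured GPC pattern is, in spirit, the inverse problem the paper solves.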

  13. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  14. Computational methods with vortices - The 1988 Freeman Scholar Lecture

    NASA Astrophysics Data System (ADS)

    Sarpkaya, Turgut

    1989-03-01

    Computational methods based upon Helmholtz's concepts of vortex dynamics are reviewed which employ Lagrangian or mixed Lagrangian-Eulerian schemes, the Biot-Savart law, or vortex-in-cell methods. The theoretical basis of vortex methods is first considered, covering such topics as the evolution equations for a vortex sheet, real vortices and instabilities, smoothing techniques, and body representation. Applications of the method discussed include vortical flows in aerodynamics, separated flows about cylindrical bodies, and general three-dimensional flows.
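
    The Biot-Savart evaluation at the core of the Lagrangian schemes reviewed here can be sketched for 2-D point vortices. This is an illustrative kernel only: the smoothing parameter `eps` is a hypothetical stand-in for the smoothing techniques discussed in the lecture, and a full vortex method would advect markers with these induced velocities.

```python
import math

def induced_velocity(x, y, vortices, eps=1e-6):
    """Biot-Savart velocity at (x, y) induced by 2-D point vortices.

    vortices: list of (xi, yi, gamma) with gamma the circulation.
    eps is a crude core smoothing to avoid the singularity at a vortex center.
    """
    u = v = 0.0
    for xi, yi, g in vortices:
        dx, dy = x - xi, y - yi
        r2 = dx * dx + dy * dy + eps
        u += -g * dy / (2.0 * math.pi * r2)  # counter-clockwise swirl for g > 0
        v += g * dx / (2.0 * math.pi * r2)
    return u, v

# one vortex of unit circulation at (-1, 0): at (1, 0), distance r = 2, it
# induces a purely vertical velocity of magnitude gamma / (2*pi*r) = 1/(4*pi)
u, v = induced_velocity(1.0, 0.0, [(-1.0, 0.0, 1.0)])
```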

  15. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  16. Computer systems and methods for visualizing data

    SciTech Connect

    Stolte, Chris (Palo Alto, CA); Hanrahan, Patrick (Portola Valley, CA)

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.

  17. Method and computer program product for maintenance and modernization backlogging

    SciTech Connect

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.

  18. Review of parallel computing methods and tools for FPGA technology

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radosław; Linczuk, Maciej; Pozniak, Krzysztof; Romaniuk, Ryszard

    2013-10-01

    Parallel computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated using parallel computing techniques. Specialized parallel computer architectures are used for accelerating specific tasks. Measuring systems in high-energy physics experiments often use FPGAs for fine-grained computation. An FPGA combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This paper presents existing methods and tools for fine-grained computation implemented in FPGAs using behavioral description and high-level programming languages.

  19. A computing method for spatial accessibility based on grid partition

    NASA Astrophysics Data System (ADS)

    Ma, Linbing; Zhang, Xinchang

    2007-06-01

    An accessibility computing method and process based on grid partition is put forward in this paper. Two important factors affecting traffic, the density of the road network and the relative spatial resistance of different land uses, are integrated into the traffic cost computed for each grid cell. The A* algorithm is introduced to search for the optimum traffic cost of a path through the grid; a detailed search process and the definition of the heuristic evaluation function are described in the paper. The method can therefore be implemented simply, and its source data are easy to obtain. Moreover, by changing the heuristic search information, more reasonable results can be computed. To validate the research, a software package was developed in C# under the ArcEngine 9 environment. Applying the method, a case study on the accessibility of business districts in Guangzhou was carried out.
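
    The search step described above can be sketched as A* over a grid of per-cell traffic costs. The grid values and the heuristic weighting below are hypothetical, not the paper's calibrated costs; the heuristic (Manhattan steps times the cheapest cell cost) is chosen to be admissible so the returned cost is optimal.

```python
import heapq

def a_star(cost, start, goal):
    """Cheapest 4-connected path over a grid of per-cell traffic costs."""
    rows, cols = len(cost), len(cost[0])
    cmin = min(min(row) for row in cost)
    def h(r, c):                        # admissible: Manhattan steps * cheapest cell
        return cmin * (abs(r - goal[0]) + abs(c - goal[1]))
    best = {start: 0.0}
    frontier = [(h(*start), 0.0, start)]
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g                    # accumulated cost = accessibility measure
        if g > best.get(cell, float("inf")):
            continue                    # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]   # pay the cost of entering the next cell
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h(nr, nc), ng, (nr, nc)))
    return float("inf")

# hypothetical 3x3 grid: the middle column is expensive, so A* detours around it
grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
print(a_star(grid, (0, 0), (0, 2)))     # → 6.0
```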

  20. Reliability-Driven Reputation Based Scheduling for Public-Resource Computing Using GA

    Microsoft Academic Search

    Xiaofeng Wang; Chee Shin Yeo; Rajkumar Buyya; Jinshu Su

    2009-01-01

    For an application in public-resource computing environments, providing reliable scheduling based on resource reliability evaluation is becoming increasingly important. Most existing reputation models used for reliability evaluation ignore the time influence. And very few works use a robust genetic algorithm to optimize both time and reliability for a workflow application. Hence, in this paper, we propose the reliability-driven (RD) reputation,

  1. Under consideration for publication in Formal Aspects of Computing Almost ASAP Semantics: from Timed

    E-print Network

    Doyen, Laurent

    Under consideration for publication in Formal Aspects of Computing Almost ASAP Semantics: from the Almost ASAP semantics. This semantics is a relaxation of the usual ASAP 3 semantics (also called physical device no matter how fast it is. On the contrary, any correct Almost ASAP controller can

  3. DataSteward: Using Dedicated Compute Nodes for Scalable Data Management on Public Clouds

    E-print Network

    Paris-Sud XI, Université de

    DataSteward: Using Dedicated Compute Nodes for Scalable Data Management on Public Clouds Radu on clouds to build on their inherent elasticity and scalability. One of the critical needs in order to deal by cloud providers suffer from high latencies, trading performance for availability. One alternative

  4. Accepted for publication in International Journal of Computer Vision Color Subspaces as Photometric Invariants

    E-print Network

    Jaffe, Jules

    Accepted for publication in International Journal of Computer Vision Color Subspaces as Photometric reflectance phenomena such as specular reflections confound many vision problems since they produce image-based vision techniques to a broad class of specular, non-Lambertian scenes. Using implementations of recent

  5. Accepted for publication in Spatial Cognition and Computation, 2004 Commonsense notions of proximity and

    E-print Network

    Worboys, Mike

    Accepted for publication in Spatial Cognition and Computation, 2004 Commonsense notions of proximity and University of Melbourne, Victoria 3010, Australia It is desirable that formal theories of qualitative. There is a need for formal theories of spatial representation and reasoning to be properly guided by the way

  6. Learning From Engineering and Computer Science About Communicating The Field To The Public

    NASA Astrophysics Data System (ADS)

    Moore, S. L.; Tucek, K.

    2014-12-01

    The engineering and computer science community has taken the lead in actively informing the public about its discipline, including its societal contributions and career opportunities. These efforts have been intensified with regard to informing underrepresented populations in STEM about engineering and computer science. Are there lessons to be learned by the geoscience community in communicating the societal impacts and career opportunities of the geosciences, especially with regard to broadening participation and meeting the Next Generation Science Standards? An estimated 35 percent increase in the number of geoscientist jobs in the United States forecast for the period between 2008 and 2018, combined with majority populations becoming minority populations, makes it imperative that we improve how we increase the public's understanding of the geosciences and how we present our message to targeted populations. This talk will look at recommendations from the National Academy of Engineering's Changing the Conversation: Messages for Improving the Public Understanding of Engineering, and communication strategies by organizations such as Code.org, to highlight practices that the geoscience community can adopt to increase public awareness of the societal contributions of the geosciences, the career opportunities in the geosciences, and the importance of the geosciences in the Next Generation Science Standards. An effort to communicate geoscience to the public, Earth is Calling, will be compared and contrasted to these efforts and used as an example of how geological societies and other organizations can engage the general public and targeted groups about the geosciences.

  7. Assessment of public health computer readiness for 2000--United States, 1999.

    PubMed

    1999-05-01

    Computer software, equipment, and other devices that contain embedded microchips that store and process dates may use two-digit years (e.g., 99 for 1999) to reduce data entry burden and save electronic storage space; these devices may not work properly when the year 2000 (Y2K) arrives. Many aspects of health-care delivery, public health surveillance and research, and critical infrastructure components (e.g., utilities and transportation services) depend on vulnerable computers. To ensure that critical public health functions will not be compromised because of Y2K problems, CDC assessed state public health agency readiness for Y2K. This report describes the findings of the assessment, which indicate that state health agencies that responded are substantially ready for Y2K and plan to reach full readiness in 1999. PMID:10363961
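
    The two-digit-year ambiguity described in this report is easy to demonstrate. In the sketch below, Python's `%y` directive resolves the century by a windowing convention (69-99 map to the 1900s, 00-68 to the 2000s), while a naive device that simply prefixes "19" rolls over to 1900; the date strings are illustrative.

```python
from datetime import datetime

# '%y' forces the parser to guess the century via a windowing convention,
# not from anything in the data itself.
y1999 = datetime.strptime("31/12/99", "%d/%m/%y")
y2000 = datetime.strptime("01/01/00", "%d/%m/%y")
print(y1999.year, y2000.year)   # → 1999 2000

# a device that naively prefixes "19" instead rolls over to 1900:
naive_year = 1900 + int("00")
print(naive_year)               # → 1900
```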

  8. Computational experience with penalty-barrier methods for nonlinear programming

    Microsoft Academic Search

    Marc G. Breitfeld; David F. Shanno

    1996-01-01

    It was recently shown that modified barrier methods are not only theoretically but also computationally superior to classic barrier methods when applied to general nonlinear problems. In this paper, a penalty-barrier function is presented that was designed to overcome particular problems associated with modified log-barrier functions. A quadratic extrapolation of logarithmic terms as well as handling simple bounds separately are

  9. Anisotropic Cartesian grid method for steady inviscid shocked flow computation

    Microsoft Academic Search

    Zi-Niu Wu; Ke Li

    2003-01-01

    The anisotropic Cartesian grid method, initially developed by Z.N. Wu (ICNMFD 15, 1996; CFD Review 1998, pp. 93-113) several years ago for efficiently capturing the anisotropic nature of a viscous boundary layer, is applied here to steady shocked flow computation. A finite-difference method is proposed for treating the slip wall conditions.

  10. Anisotropic Cartesian grid method for steady inviscid shocked flow computation

    NASA Astrophysics Data System (ADS)

    Wu, Zi-Niu; Li, Ke

    2003-04-01

    The anisotropic Cartesian grid method, initially developed by Z.N. Wu (ICNMFD 15, 1996; CFD Review 1998, pp. 93-113) several years ago for efficiently capturing the anisotropic nature of a viscous boundary layer, is applied here to steady shocked flow computation. A finite-difference method is proposed for treating the slip wall conditions.

  11. Multiresolution reproducing kernel particle method for computational fluid dynamics

    Microsoft Academic Search

    Sukky Jun; Dirk Thomas Sihling; Yijung Chen; Wei Hao

    1997-01-01

    Multiresolution analysis based on the reproducing kernel particle method (RKPM) is developed for computational fluid dynamics. An algorithm incorporating multiple-scale adaptive refinement is introduced. The concept of using a wavelet solution as an error indicator is also presented. A few representative numerical examples are solved to illustrate the performance of this new meshless method. Results show that the RKPM is

  12. Formal methods: mathematics, computer science or software engineering?

    Microsoft Academic Search

    Guy Tremblay

    2000-01-01

    Formal methods courses have been taught at Université du Québec à Montréal (UQAM), Montréal, PQ, Canada, since 1996. In the graduate program, the course was initially an INF course (computer science) and later became an MGL one (software engineering). On the other hand, until recently, the undergraduate formal methods course was a MAT course (mathematics). From these various affiliations, one

  13. Floating Points: A method for computing stipple drawings

    Microsoft Academic Search

    Oliver Deussen; Stefan Hiller; Cornelius W. A. M. Van Overveld; Thomas Strothotte

    2000-01-01

    We present a method for computer generated pen-and-ink illustrations by the simulation of stippling. In a stipple drawing, dots are used to represent tone and also material of surfaces. We create such drawings by generating an initial dot set which is then processed by a relaxation method based on Voronoi diagrams. The point patterns generated are approximations of Poisson disc
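
    The relaxation step based on Voronoi diagrams is essentially Lloyd's algorithm: each dot moves to the centroid of its Voronoi region. The sketch below estimates those regions discretely by assigning a dense set of random sample points to their nearest dot; the point counts and seed are hypothetical, and a real stippler would weight samples by image tone.

```python
import random

def relax(dots, samples, iterations=10):
    """One Lloyd pass per iteration: move each dot to the centroid of the
    samples for which it is the nearest dot (a discrete Voronoi region)."""
    for _ in range(iterations):
        acc = [[0.0, 0.0, 0] for _ in dots]
        for sx, sy in samples:
            i = min(range(len(dots)),
                    key=lambda k: (dots[k][0] - sx) ** 2 + (dots[k][1] - sy) ** 2)
            acc[i][0] += sx
            acc[i][1] += sy
            acc[i][2] += 1
        dots = [(a[0] / a[2], a[1] / a[2]) if a[2] else dots[i]
                for i, a in enumerate(acc)]
    return dots

random.seed(1)
samples = [(random.random(), random.random()) for _ in range(2000)]
dots = [(random.random() * 0.1, random.random() * 0.1) for _ in range(8)]
relaxed = relax(dots, samples)     # clustered dots spread toward an even pattern
```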

  14. Computer Experiments with Newton's Method J. Orlando Freitas*

    E-print Network

    Paris-Sud XI, Université de

    Computer Experiments with Newton's Method. J. Orlando Freitas (orlando@uma.pt), Escola Secundária de Francisco Franco, Funchal, Portugal; J. Sousa Ramos, Instituto Superior Técnico, Lisbon, Portugal. Newton's method has served as one of the most fruitful paradigms in the development of complex iteration theory.
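
    The iteration behind such experiments is the classical Newton step z → z − f(z)/f′(z). The sketch below applies it to f(z) = z³ − 1, a standard example in complex iteration theory (not necessarily the one used in this record): which cube root of unity an orbit reaches depends delicately on the starting point.

```python
# Newton's iteration for f(z) = z**3 - 1 in the complex plane.
def newton(z, steps=50):
    for _ in range(steps):
        z = z - (z ** 3 - 1) / (3 * z ** 2)
    return z

root = newton(2 + 1j)
print(abs(root ** 3 - 1))   # effectively zero: converged to a cube root of unity
```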

  15. Toward a method of selecting among computational models of cognition

    Microsoft Academic Search

    Mark A. Pitt; In Jae Myung; Shaobo Zhang

    2002-01-01

    The question of how one should decide among competing explanations of data is at the heart of the scientific enterprise. Computational models of cognition are increasingly being advanced as explanations of behavior. The success of this line of inquiry depends on the development of robust methods to guide the evaluation and selection of these models. This article introduces a method

  16. 42 CFR 447.205 - Public notice of changes in Statewide methods and standards for setting payment rates.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...Public notice of changes in Statewide methods and standards for setting payment... PAYMENTS FOR SERVICES Payment Methods: General Provisions § 447.205 Public notice of changes in Statewide methods and standards for setting...

  17. Constraint methods for neural networks and computer graphics

    SciTech Connect

    Platt, J.C.

    1989-01-01

    Both computer graphics and neural networks are related, in that they model natural phenomena. Physically-based models are used by computer graphics researchers to create realistic, natural animation, and neural models are used by neural network researchers to create new algorithms or new circuits. To exploit successfully these graphical and neural models, engineers want models that fulfill designer-specified goals. These goals are converted into mathematical constraints. This thesis presents constraint methods for computer graphics and neural networks. The mathematical constraint methods modify the differential equations that govern the neural or physically-based models. The constraint methods gradually enforce the constraints exactly. This thesis also describes application of constrained models to real problems. The first half of this thesis discusses constrained neural networks. The desired models and goals are often converted into constrained optimization problems. These optimization problems are solved using first-order differential equations. The applications of constrained neural networks include the creation of constrained circuits, error-correcting codes, symmetric edge detection for computer vision, and heuristics for the traveling salesman problem. The second half of this thesis discusses constrained computer graphics models. In computer graphics, the desired models and goals become constrained mechanical systems, which are typically simulated with second-order differential equations. The Penalty Method adds springs to the mechanical system to penalize violations of the constraints. Rate Controlled Constraints add forces and impulses to the mechanical system to fulfill the constraints with critically damped motion.
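
    The spring idea behind the Penalty Method can be sketched on a toy second-order system: a particle restricted to the unit circle C(x, y) = x² + y² − 1 = 0, pulled back by a force along −∇C whenever C ≠ 0. All constants are hypothetical, and the damping term on C′ is an addition for a stable sketch (in the thesis, critically damped behavior belongs to Rate Controlled Constraints, not the plain Penalty Method).

```python
def simulate(pos, vel, k=400.0, d=20.0, dt=0.001, steps=5000):
    """Semi-implicit Euler with a damped penalty spring enforcing C = 0."""
    x, y = pos
    vx, vy = vel
    for _ in range(steps):
        c = x * x + y * y - 1.0           # constraint violation C(x, y)
        cdot = 2 * x * vx + 2 * y * vy    # dC/dt, used for damping
        fx = -(k * c + d * cdot) * 2 * x  # force = -(k C + d C') * grad C
        fy = -(k * c + d * cdot) * 2 * y
        vx += dt * fx
        vy += dt * fy
        x += dt * vx
        y += dt * vy
    return x, y

# start well off the circle with tangential velocity; the spring pulls it back
x, y = simulate((1.3, 0.0), (0.0, 1.0))
print(abs(x * x + y * y - 1.0))           # small residual constraint violation
```

    A stiffer spring (larger `k`) shrinks the residual violation at the price of a stiffer, harder-to-integrate system, which is the classic trade-off of penalty formulations.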

  18. What happens when someone talks in public to an audience they know to be entirely computer generated?

    E-print Network

    Slater, Mel

    We designed a virtual public speaking scenario, followed by an experimental study. In this work we wanted to study public speaking compared to more general social interactions. A public speaking scenario involves specific stylized

  19. On Improving Qualitative Methods in Public Administration Research

    Microsoft Academic Search

    Ralph S. Brower; Mitchel Y. Abolafia; Jered B. Carr

    2000-01-01

    What do exemplary qualitative accounts look like, and how do they convince readers of their correctness? What sort of standards can be used to assess qualitative research accounts for public administration? To address these questions, the authors examined 72 recent qualitative research journal articles. Proceeding from a set of preliminary guidelines, they worked iteratively between articles and the emergent template

  20. A stochastic method for computing hadronic matrix elements

    DOE PAGESBeta

    Drach, Vincent; Jansen, Karl; Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; Hadjiyiannakou, Kyriakos; Renner, Dru B.

    2014-01-01

    We present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that the stochastic method can be extended to large volumes providing an efficient approach to compute hadronic matrix elements and form factors.

  1. Monte Carlo methods in applied mathematics and computational aerodynamics

    Microsoft Academic Search

    O. M. Belotserkovskii; Yu. I. Khlopkov

    2006-01-01

    A survey of the Monte Carlo methods developed in the computational aerodynamics of rarefied gases is given, and application of these methods in unconventional fields is described. A short history of these methods is presented, and their advantages and drawbacks are discussed. A relationship of the direct statistical simulation of aerodynamical processes with the solution of kinetic equations is established;

  2. Robust regression methods for computer vision: A review

    Microsoft Academic Search

    Peter Meer; Doron Mintz; Azriel Rosenfeld; Dong Yoon Kim

    1991-01-01

    Regression analysis (fitting a model to noisy data) is a basic technique in computer vision. Robust regression methods that remain reliable in the presence of various types of noise are therefore of considerable importance. We review several robust estimation techniques and describe in detail the least-median-of-squares (LMedS) method. The method yields the correct result even when half of the data

  3. A Stochastic Method for Computing Hadronic Matrix Elements

    E-print Network

    Constantia Alexandrou; Simon Dinter; Vincent Drach; Kyriakos Hadjiyiannakou; Karl Jansen; Dru B. Renner

    2014-01-21

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  4. Proposed congestion control method for cloud computing environments

    E-print Network

    Kuribayashi, Shin-ichi

    2012-01-01

    As cloud computing services rapidly expand their customer base, it has become important to share cloud resources, so as to provide them economically. In cloud computing services, multiple types of resources, such as processing ability, bandwidth and storage, need to be allocated simultaneously. If there is a surge of requests, a competition will arise between these requests for the use of cloud resources. This leads to the disruption of the service and it is necessary to consider a measure to avoid or relieve congestion of cloud computing environments. This paper proposes a new congestion control method for cloud computing environments which reduces the size of required resource for congested resource type instead of restricting all service requests as in the existing networks. Next, this paper proposes the user service specifications for the proposed congestion control method, and clarifies the algorithm to decide the optimal size of required resource to be reduced, based on the load offered to the system. I...

  5. Acceptability of computerized visual analog scale, time trade-off and standard gamble rating methods in patients and the public.

    PubMed Central

    Lenert, L. A.; Sturley, A. E.

    2001-01-01

    One technique to enhance patient participation in clinical decision making is the formal measurement of preferences and values. Three commonly applied methods are a visual analog scale (VAS), the standard gamble (SG), and the time trade-off (TTO). We studied participants' subjective experience of computer implementations of these methods using a scale we call the VIBE (for Value Instrument Battery--Evaluation) that measures four aspects of user acceptance (clarity, difficulty, reasonableness, and comfort level). Studies were performed in two groups: patients with HIV infection (n=75) and a convenience sample of the general public (n=640). In the patient study, VIBE scores appeared reliable (Cronbach's alpha of 0.739, 0.826, and 0.716 for VAS, SG, and TTO ratings, respectively). Patients' acceptance of the VAS was the highest, followed by the TTO and the SG method (p<0.05 for all comparisons). Despite significant enhancements in computer software for measuring SG preferences, the observed differences in acceptance between SG and VAS methods were replicated in the general public study (p<0.0001 for differences). The results suggest developers of clinical decision support systems should use VAS and TTO rating methods where these methods are theoretically appropriate. PMID:11825211

  6. Extrapolation methods for accelerating PageRank computations

    Microsoft Academic Search

    Sepandar D. Kamvar; Taher H. Haveliwala; Christopher D. Manning; Gene H. Golub

    2003-01-01

    We present a novel algorithm for the fast computation of PageRank, a hyperlink-based estimate of the ''importance'' of Web pages. The original PageRank algorithm uses the Power Method to compute successive iterates that converge to the principal eigenvector of the Markov matrix representing the Web link graph. The algorithm presented here, called Quadratic Extrapolation, accelerates the convergence of the Power
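The Power Method iteration that Quadratic Extrapolation accelerates can be sketched in a few lines. This is a plain power iteration on a toy three-page graph, not the extrapolation algorithm itself; the damping factor 0.85 is the conventional choice.

```python
def pagerank(links, damping=0.85, tol=1e-10, max_iter=1000):
    """Power-method PageRank on an adjacency dict {page: [outlinks]}.
    Iterates the Markov-matrix update until the L1 change is below tol."""
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:                      # dangling page: spread uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        converged = sum(abs(new[p] - rank[p]) for p in pages) < tol
        rank = new
        if converged:
            break
    return rank

# Toy web graph: A links to B and C, B links to C, C links back to A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

The ranks form a probability distribution (they sum to 1), and C, which collects links from both A and B, ends up with the highest score.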

  7. Computational Methods for Protein Structure Prediction and Fold Recognition

    Microsoft Academic Search

    Iwona Cymerman; Marcin Feder; Marcin Pawłowski; Michal Kurowski; Janusz Bujnicki

    Amino acid sequence analysis provides important insight into the structure of proteins, which in turn greatly facilitates the understanding of their biochemical and cellular function. Efforts to use computational methods in predicting protein structure based only on sequence information started 30 years ago (Nagano 1973; Chou and Fasman 1974). However, only during the last decade has the introduction of new computational techniques

  8. *NIH Public Access Policy: Submission Methods and How to Demonstrate Compliance July 2010 Method A: Author publishes in

    E-print Network

    Subramanian, Venkat

    *NIH Public Access Policy: Submission Methods and How to Demonstrate Compliance July 2010 Method A: Author publishes in a journal that submits all NIH- funded final published articles to PMC; no fee. Method B: Author requests a publisher to submit an individual NIH-funded final published article to PMC

  9. The Direct Lighting Computation in Global Illumination Methods

    NASA Astrophysics Data System (ADS)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.
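The direct lighting term described above is itself an integral over the light sources, which Monte Carlo sampling approximates. Below is a minimal sketch for a geometry with a known closed form: a Lambertian disk light directly above an upward-facing receiver, with no occlusion. It illustrates the estimator idea only, not the thesis's sample generation method.

```python
import math, random

def disk_direct_lighting(L, R, h, n, seed=0):
    """Monte Carlo estimate of the irradiance at a point directly below
    a Lambertian disk light (radius R, height h, radiance L), receiver
    facing up, light facing down, no occlusion.  The closed form for
    this axial geometry is pi * L * R**2 / (R**2 + h**2)."""
    rng = random.Random(seed)
    area = math.pi * R * R
    acc = 0.0
    for _ in range(n):
        s = R * math.sqrt(rng.random())  # uniform sample radius on disk
        r2 = h * h + s * s               # squared distance to the sample
        acc += L * (h * h) / (r2 * r2)   # both cosines equal h / r
    return area * acc / n
```

With L = R = h = 1 the exact answer is pi/2, so the estimate can be checked against it; increasing n shrinks the Monte Carlo error at the usual 1/sqrt(n) rate.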

  10. GRACE: Public Health Recovery Methods following an Environmental Disaster

    PubMed Central

    Svendsen, ER; Whittle, N; Wright, L; McKeown, RE; Sprayberry, K; Heim, M; Caldwell, R; Gibson, JJ; Vena, J.

    2014-01-01

    Different approaches are necessary when Community Based Participatory Research (CBPR) of environmental illness is initiated after an environmental disaster within a community. Often such events are viewed as golden scientific opportunities to do epidemiological studies. However, we believe that in such circumstances, community engagement and empowerment need to be integrated into the public health service efforts in order for both those and any science to be successful, with special care being taken to address the immediate health needs of the community first rather than the pressing needs to answer important scientific questions. We will demonstrate how we have simultaneously provided valuable public health service, embedded generalizable scientific knowledge, and built a successful foundation for supplemental CBPR through our on-going recovery work after the chlorine gas disaster in Graniteville, South Carolina. PMID:20439226

  11. Pesticides and public health: integrated methods of mosquito management.

    PubMed Central

    Rose, R. I.

    2001-01-01

    Pesticides have a role in public health as part of sustainable integrated mosquito management. Other components of such management include surveillance, source reduction or prevention, biological control, repellents, traps, and pesticide-resistance management. We assess the future use of mosquito control pesticides in view of niche markets, incentives for new product development, Environmental Protection Agency registration, the Food Quality Protection Act, and improved pest management strategies for mosquito control. PMID:11266290

  12. Line planning in public transportation: models and methods

    Microsoft Academic Search

    Anita Schöbel

    The problem of defining suitable lines in a public transportation system (bus, railway, tram, or underground) is an important real-world problem that has also been well researched in theory. Driven by applications, it often lacks a clear description, but is rather stated in an informal way. This leads to a variety of different published line planning models. In this paper,

  13. AN ALGEBRAIC METHOD FOR PUBLIC-KEY CRYPTOGRAPHY

    Microsoft Academic Search

    Iris Anshel; Michael Anshel; Dorian Goldfeld

    1999-01-01

    Algebraic key establishment protocols based on the difficulty of solving equations over algebraic structures are described as a theoretical basis for constructing public-key cryptosystems. A protocol is a multi-party algorithm, defined by a sequence of steps, specifying the actions required of two or more parties in order to achieve a specified objective. Furthermore, a key establishment protocol is

  14. eGovernment Services Use and Impact through Public Libraries: Preliminary Findings from a National Study of Public Access Computing in Public Libraries

    Microsoft Academic Search

    Karen E. Fisher; Samantha Becker; Michael Crandall

    2010-01-01

    eGovernment services are delivered in many settings, including public libraries, which have increasingly assumed the role of service provider for users of these services. The U.S. IMPACT Studies are examining use patterns and impacts of eGovernment services (among other uses) in populations using libraries for their primary or secondary means of Internet access. A mixed methods approach - national telephone survey (N = 1130),

  15. Cloud Computing Research and Development Trend

    Microsoft Academic Search

    Shuai Zhang; Shufen Zhang; Xuebin Chen; Xiuzhen Huo

    2010-01-01

    With the development of parallel computing, distributed computing, and grid computing, a new computing model has appeared. The concept of cloud computing comes from grid, public computing, and SaaS. It is a new method that shares basic infrastructure. The basic principle of cloud computing is to distribute the computation across a great number of distributed computers, rather than the local computer or

  16. Computational methods for the analysis of primate mobile elements

    PubMed Central

    Cordaux, Richard; Sen, Shurjo K.; Konkel, Miriam K.; Batzer, Mark A.

    2010-01-01

    Transposable elements (TE), defined as discrete pieces of DNA that can move from one site to another in genomes, represent significant components of eukaryotic genomes, including primates. Comparative genome-wide analyses have revealed the considerable structural and functional impact of TE families on primate genomes. Insights into these questions have come in part from the development of computational methods that allow detailed and reliable identification, annotation and evolutionary analyses of the many TE families that populate primate genomes. Here, we present an overview of these computational methods, and describe efficient data mining strategies for providing a comprehensive picture of TE biology in newly available genome sequences. PMID:20238080

  17. Probability computations using the SIGMA-PI method on a personal computer

    SciTech Connect

    Haskin, F.E.; Lazo, M.S.; Heger, A.S. [Univ. of New Mexico, Albuquerque, NM (US). Dept. of Chemical and Nuclear Engineering

    1990-09-30

    The SIGMA-PI ({Sigma}{Pi}) method, as implemented in the SIGPI computer code, is designed to accurately and efficiently evaluate the probability of Boolean expressions in disjunctive normal form, given the base event probabilities. The method is not limited to problems in which base event probabilities are small, nor to Boolean expressions that exclude the complements of base events, nor to problems in which base events are independent. The feasibility of implementing the {Sigma}{Pi} method on a personal computer has been evaluated, and a version of the SIGPI code capable of quantifying simple Boolean expressions with independent base events on the personal computer has been developed. Tasks required for a fully functional personal computer version of SIGPI have been identified, together with enhancements that could be implemented to improve the utility and efficiency of the code.
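The quantity SIGPI evaluates, the probability that a DNF Boolean expression holds given base event probabilities, can be checked by brute-force enumeration for small problems. The sketch below assumes independent base events (the simple case the personal computer version handles); it does not reproduce the efficient Sigma-Pi scheme itself.

```python
from itertools import product

def dnf_probability(terms, probs):
    """Exact probability that a DNF over independent base events is true.

    terms: list of terms, each a set of event names that must all occur
           (a conjunction); the DNF is the disjunction of the terms.
    probs: dict {event: probability of occurring}.
    Brute-force enumeration -- exponential in the number of events, so
    only suitable for small sanity checks.
    """
    events = sorted(probs)
    total = 0.0
    for outcome in product([False, True], repeat=len(events)):
        state = dict(zip(events, outcome))
        if any(all(state[e] for e in term) for term in terms):
            p = 1.0
            for e in events:
                p *= probs[e] if state[e] else 1.0 - probs[e]
            total += p
    return total

# P(AB or C) with independent events A, B, C:
p = dnf_probability([{"A", "B"}, {"C"}], {"A": 0.5, "B": 0.5, "C": 0.2})
```

For this example P(AB) = 0.25 and C is independent of AB, so the exact answer is 1 - 0.75 * 0.8 = 0.4, which the enumeration reproduces.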

  18. I LIKE Computers versus I LIKERT Computers: Rethinking Methods for Assessing the Gender Gap in Computing.

    ERIC Educational Resources Information Center

    Morse, Frances K.; Daiute, Colette

    There is a burgeoning body of research on gender differences in computing attitudes and behaviors. After a decade of experience, researchers from both inside and outside the field of educational computing research are raising methodological and conceptual issues which suggest that perhaps researchers have shortchanged girls and women in…

  19. PSD computations using Welch's method. [Power Spectral Density (PSD)

    SciTech Connect

    Solomon, Jr, O M

    1991-12-01

    This report describes Welch's method for computing Power Spectral Densities (PSDs). We first describe the bandpass filter method, which uses filtering, squaring, and averaging operations to estimate a PSD. Second, we delineate the relationship of Welch's method to the bandpass filter method. Third, the frequency-domain signal-to-noise ratio for a sine wave in white noise is derived. This derivation includes the computation of the noise floor due to quantization noise. The signal-to-noise ratio and noise floor depend on the FFT length and window. Fourth, the variance of the Welch PSD estimate is discussed via chi-square random variables and degrees of freedom. This report contains many examples, figures and tables to illustrate the concepts. 26 refs.
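Welch's method itself, averaging the periodograms of windowed, overlapping segments, can be sketched directly. This pure-Python illustration uses a Hann window and a naive DFT for self-containment (a real implementation would use an FFT); the segment length, overlap and scaling choices are illustrative, not taken from the report.

```python
import cmath, math

def welch_psd(x, seg_len=64, overlap=32):
    """Welch PSD estimate: split x into overlapping Hann-windowed
    segments and average their periodograms.  Naive O(N^2) DFT for
    clarity; use an FFT library in practice."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / seg_len)
           for n in range(seg_len)]
    u = sum(w * w for w in win)            # window power, for scaling
    step = seg_len - overlap
    psd = [0.0] * (seg_len // 2 + 1)
    count = 0
    for start in range(0, len(x) - seg_len + 1, step):
        seg = [x[start + n] * win[n] for n in range(seg_len)]
        for k in range(len(psd)):
            X = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / seg_len)
                    for n in range(seg_len))
            psd[k] += abs(X) ** 2 / u
        count += 1
    return [p / count for p in psd]

# A sine landing exactly on bin 8 of the 64-point segments:
x = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
psd = welch_psd(x)
```

The averaged spectrum peaks at bin 8, and averaging more segments reduces the variance of the estimate, which is the point of Welch's method over a single periodogram.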

  20. Computational Methods for Structural Mechanics and Dynamics, part 1

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (editor); Housner, Jerrold M. (editor); Tanner, John A. (editor); Hayduk, Robert J. (editor)

    1989-01-01

    The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

  1. Computation of dendritic microstructures using a level set method

    Microsoft Academic Search

    Yung-Tae Kim; Nigel Goldenfeld; Jonathan Dantzig

    2000-01-01

    We compute time-dependent solutions of the sharp-interface model of dendritic solidification in two dimensions by using a level set method. The steady-state results are in agreement with solvability theory. Solutions obtained from the level set algorithm are compared with dendritic growth simulations performed using a phase-field model and the two methods are found to give equivalent results. Furthermore, we perform

  2. Computing elliptic membrane high frequencies by Mathieu and Galerkin methods

    Microsoft Academic Search

    Howard B. Wilson; Robert W. Scharstein

    2007-01-01

    Resonant modes of an elliptic membrane are computed for a wide range of frequencies using a Galerkin formulation. Results are confirmed using Mathieu functions and finite-element methods. Algorithms and their implementations are described to handle Dirichlet or Neumann boundary conditions and draw animations or contour plots of the modal surfaces. The methods agree to four or more digit accuracy for

  3. Fast and Slow Dynamics for the Computational Singular Perturbation Method

    Microsoft Academic Search

    Antonios Zagaris; Hans G. Kaper; Tasso J. Kaper

    2004-01-01

    The Computational Singular Perturbation (CSP) method of Lam and Goussis is an iterative method to reduce the dimensionality of systems of ordinary differential equations with multiple time scales. In [J. Nonlin. Sci., to appear], the authors showed that each iteration of the CSP algorithm improves the approximation of the slow manifold by one order. In this paper, it is shown

  4. Secure encapsulation and publication of biological services in the cloud computing environment.

    PubMed

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906

  5. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2003-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  6. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  7. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Zbigniew (Shoreham, NY); Falkowski, Paul (Stony Brook, NY)

    1990-01-01

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.

  8. Analysis and optimization of cyclic methods in orbit computation

    NASA Technical Reports Server (NTRS)

    Pierce, S.

    1973-01-01

    The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.
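The cyclic methods analysed above are not reproduced here, but the multistep idea they build on, reusing derivative values at previous backpoints, can be illustrated with the classic two-step Adams-Bashforth formula. This is a generic sketch of a linear multistep method, not the K=3..5 cyclic schemes or Cowell's method from the report.

```python
import math

def adams_bashforth2(f, t0, y0, h, steps):
    """Two-step Adams-Bashforth:
        y_{n+1} = y_n + h * (3/2 * f_n - 1/2 * f_{n-1}).
    Bootstrapped with a single Euler step; illustrates how multistep
    methods reuse derivative evaluations at previous backpoints."""
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + h * f_prev             # Euler start-up step
    t += h
    for _ in range(steps - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t += h
    return y

# y' = -y, y(0) = 1, integrated to t = 1; the exact answer is exp(-1).
approx = adams_bashforth2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```

Because each step reuses the previous derivative evaluation, only one new evaluation of f is needed per step, which is the efficiency advantage the cyclic methods extend to higher order.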

  9. Two-channel computer-generated holograms: a simplified method

    Microsoft Academic Search

    M. Araiza-Esquivel; S. Guel-Sandoval

    2003-01-01

    A simplified approach is presented for obtaining two-channel computer-generated holograms of the detour type. The method can effectively handle up to two flat objects (two digitized images) in the x and y directions. The experimental results demonstrate the effectiveness of the suggested procedure.

  10. Hindawi Publishing Corporation Computational and Mathematical Methods in Medicine

    E-print Network

    Hindawi Publishing Corporation, Computational and Mathematical Methods in Medicine, Volume 2013. Dynamic Response to Mental Stress: A Multivariate Time-Frequency Analysis, by Devy Widjaja and Michele Orini. Mental stress is a growing problem in our society. In order to deal with this, it is important to understand the underlying

  11. Computational Methods for Learning Population History from Large Scale Genetic

    E-print Network

    Matsuda, Noboru

    Computational Methods for Learning Population History from Large Scale Genetic Variation Datasets. Doctor of Philosophy thesis, copyright 2013 Ming-Chi Tsai. Keywords: Population Genetics, Minimum Description Length, Population History, Markov chain Monte Carlo, Coalescent Theory, Genome Wide Association Study.

  12. Disequilibration for teaching the scientific method in computer science

    Microsoft Academic Search

    Grant Braught; David W. Reed

    2002-01-01

    We present several introductory computer science laboratory assignments designed to reinforce the use of the scientific method. These assignments require students to make predictions, write simulations, perform experiments, collect data and analyze the results. The assignments are specifically designed to place student predictions in conflict with the observed results, thus producing a disequilibration. As a result, students are motivated to

  13. Multiscale Computations of Fluid Flows Using an Adaptive Wavelet Method

    E-print Network

    Multiscale Computations of Fluid Flows Using an Adaptive Wavelet Method, by D. Wirasaet and S. Paolucci. (Slide fragments: wavelet amplitudes |d_j| of a continuous 1-D function on the unit interval [0, 1]; due to the interpolation property of the basis, there exists a fast wavelet transform (AFWT) with O(N) operations.)

  14. PREFACE: Theory, Modelling and Computational methods for Semiconductors

    Microsoft Academic Search

    Max Migliorato; Matt Probert

    2010-01-01

    These conference proceedings contain the written papers of the contributions presented at the 2nd International Conference on Theory, Modelling and Computational Methods for Semiconductors. The conference was held at St William's College, York, UK, on 13th-15th January 2010. The previous conference in this series took place in 2008 at the University of Manchester, UK. The scope of this conference

  15. COMPUTER-BASED TRIZ - SYSTEMATIC INNOVATION METHODS FOR ARCHITECTURE

    Microsoft Academic Search

    Darrell L Mann; Conall Ó Catháin

    The Russian Theory of Inventive Problem Solving, TRIZ, is the most comprehensive systematic innovation and creativity methodology available. Essentially the method consists of restating a specific design task in a more general way and then selecting generic solutions from databases of patents and solutions from a wide range of technologies. The development of computer databases greatly facilitates this task. Since

  16. Computational and Crowdsourcing Methods for Extracting Ontological Structure from Folksonomy

    Microsoft Academic Search

    Huairen Lin; Joseph Davis

    2010-01-01

    This paper investigates the unification of folksonomies and ontologies in such a way that the resulting structures can better support exploration and search on the World Wide Web. First, an integrated computational method is employed to extract the ontological structures from folksonomies. It exploits the power of low support association rule mining supplemented by an upper ontology such as WordNet.

  17. A bibliography on finite element and related methods analysis in reactor physics computations (1971--1997)

    SciTech Connect

    Carpenter, D.C.

    1998-01-01

    This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, theses/dissertations and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name, with a secondary sort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, theses/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.

  18. Some data and observations on research publication in the areas of numerical computation and programming languages and systems

    Microsoft Academic Search

    John R. Rice

    1976-01-01

    This report contains extensive data on the level of research publications in refereed journals for two areas in Computer Science: Numerical Computation and Programming Languages and Systems. It is concluded that the research output in Numerical Computation is about 5 times that in Programming Languages and Systems (as measured by refereed research articles). Some less complete data on the number

  19. Description of a method to support public health information management: organizational network analysis

    PubMed Central

    Merrill, Jacqueline; Bakken, Suzanne; Rockoff, Maxine; Gebbie, Kristine; Carley, Kathleen

    2007-01-01

    In this case study we describe a method that has potential to provide systematic support for public health information management. Public health agencies depend on specialized information that travels throughout an organization via communication networks among employees. Interactions that occur within these networks are poorly understood and are generally unmanaged. We applied organizational network analysis, a method for studying communication networks, to assess the method’s utility to support decision making for public health managers, and to determine what links existed between information use and agency processes. Data on communication links among a health department’s staff was obtained via survey with a 93% response rate, and analyzed using Organizational Risk Analyzer (ORA) software. The findings described the structure of information flow in the department’s communication networks. The analysis succeeded in providing insights into organizational processes which informed public health managers’ strategies to address problems and to take advantage of network strengths. PMID:17098480
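One of the simplest measures such a network analysis produces is degree centrality, the fraction of other staff each person communicates with. Below is a sketch on an invented five-role health department; the roles and links are hypothetical, and ORA computes far richer measures than this.

```python
def degree_centrality(edges, nodes):
    """Normalized degree centrality on an undirected communication
    network: the fraction of the other staff each person talks to."""
    neighbors = {v: set() for v in nodes}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    n = len(nodes)
    return {v: len(nbrs) / (n - 1) for v, nbrs in neighbors.items()}

# Invented communication links among five hypothetical staff roles:
links = [("epi", "nursing"), ("epi", "records"), ("epi", "director"),
         ("nursing", "records"), ("director", "nursing")]
staff = ["epi", "nursing", "records", "director", "outreach"]
centrality = degree_centrality(links, staff)
# "outreach" reports no links -- exactly the kind of information-flow
# gap this analysis is meant to surface for managers
```

Even this crude measure distinguishes well-connected hubs from isolated staff, which is the kind of insight the case study reports informing managers' strategies.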

  20. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Summary: Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm - probably the most efficient in terms of speed and success rate among gradient-based methods. The Annapolis River catchment was selected as the area of this study due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and presence of river ice - conditions which make flow forecasting more troublesome.
The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural networks training turns out to be superior to other Evolutionary Computation-based algorithms. The Levenberg-Marquardt optimization must be considered as the most efficient one due to its speed. Its drawback due to possible sticking in poor local optimum can be overcome by applying a multi-start approach.
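The first-version Differential Evolution mentioned in the comparison (the classic DE/rand/1/bin scheme) is compact enough to sketch. The control parameters F and CR and the sphere test function below are illustrative choices, not the study's rainfall-runoff setup or network-training objective.

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference of
    two population members, binomially cross with the target vector,
    and keep the trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    cost = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)    # at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clamp to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Minimize the 3-D sphere function as a smoke test:
best, best_cost = differential_evolution(
    lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

Because only function values are used, the same loop works for non-differentiable objectives, which is the property that motivates testing DE variants against gradient-based Levenberg-Marquardt training.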

  1. Computational methods for a class of network models.

    PubMed

    Wang, Junshan; Jasra, Ajay; De Iorio, Maria

    2014-02-01

    In the following article, we provide an exposition of exact computational methods to perform parameter inference from partially observed network models. In particular, we consider the duplication attachment model that has a likelihood function that typically cannot be evaluated in any reasonable computational time. We consider a number of importance sampling (IS) and sequential Monte Carlo (SMC) methods for approximating the likelihood of the network model for a fixed parameter value. It is well-known that, for IS, the relative variance of the likelihood estimate typically grows at an exponential rate in the time parameter (here this is associated with the size of the network); we prove that, under assumptions, the SMC method will have relative variance that can grow only polynomially. In order to perform parameter estimation, we develop particle Markov chain Monte Carlo algorithms to perform Bayesian inference. Such algorithms use the aforementioned SMC algorithms within the transition dynamics. The approaches are illustrated numerically. PMID:24144112
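The importance sampling idea underlying these likelihood estimators can be illustrated on a toy problem where the answer is known: estimating a Gaussian tail probability with a shifted proposal. This is a generic IS sketch, not the paper's network-model estimator.

```python
import math, random

def is_tail_estimate(threshold, n, seed=0):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0,1),
    drawing from the shifted proposal N(threshold, 1).  Each sample is
    weighted by the density ratio phi(x) / phi(x - threshold)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            # phi(x)/phi(x-threshold) = exp(-threshold*x + threshold^2/2)
            acc += math.exp(-threshold * x + threshold * threshold / 2)
    return acc / n

exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))   # P(X > 3), about 1.35e-3
estimate = is_tail_estimate(3.0, 100000)
```

Naive sampling from N(0,1) would see an x > 3 only about once per 740 draws; the shifted proposal hits the rare region on roughly half the draws and corrects with the weights, which is the same variance-reduction motivation behind the paper's IS and SMC schemes.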

  2. Computer-aided methods of determining thyristor thermal transients

    SciTech Connect

    Lu, E.; Bronner, G.

    1988-08-01

    An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs.
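The convolution-integral idea can be sketched for piecewise-constant losses: the junction temperature rise is a superposition of power steps applied to the transient thermal impedance curve. The one-pole Foster model below is a hypothetical Zth for illustration, not the measured PPPL thyristor data.

```python
import math

def junction_temp_rise(power_steps, zth, t):
    """Temperature rise at time t by superposing power steps on a
    transient thermal impedance curve Zth(t) -- the convolution-integral
    approach specialized to piecewise-constant loss.
    power_steps: time-sorted list of (start_time_s, power_level_W)."""
    rise, prev_p = 0.0, 0.0
    for t_step, p in power_steps:
        if t_step > t:
            break
        rise += (p - prev_p) * zth(t - t_step)   # step superposition
        prev_p = p
    return rise

# Hypothetical one-pole Foster model: R = 0.05 K/W, tau = 0.2 s
def zth(t):
    return 0.05 * (1.0 - math.exp(-t / 0.2))

steady = junction_temp_rise([(0.0, 1000.0)], zth, 2.0)   # approaches P*R
pulse = junction_temp_rise([(0.0, 1000.0), (1.0, 0.0)], zth, 2.0)
```

A sustained 1000 W load settles toward P*R = 50 K, while a 1 s pulse observed 1 s after turn-off has mostly decayed, matching the intuition that transient impedance governs short loading cases.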

  3. Three-dimensional cardiac computational modelling: methods, features and applications.

    PubMed

    Lopez-Perez, Alejandro; Sebastian, Rafael; Ferrero, Jose M

    2015-01-01

    The combination of computational models and biophysical simulations can help to interpret an array of experimental data and contribute to the understanding, diagnosis and treatment of complex diseases such as cardiac arrhythmias. For this reason, three-dimensional (3D) cardiac computational modelling is currently a rising field of research. The advance of medical imaging technology over the last decades has allowed the evolution from generic to patient-specific 3D cardiac models that faithfully represent the anatomy and different cardiac features of a given living subject. Here we analyse sixty representative 3D cardiac computational models developed and published during the last fifty years, describing their information sources, features, development methods and online availability. This paper also reviews the components necessary to build a 3D computational model of the heart aimed at biophysical simulation, paying special attention to cardiac electrophysiology (EP), and the existing approaches to incorporate those components. We assess the challenges associated with the different steps of the building process, from the processing of raw clinical or biological data to the final application, including image segmentation, inclusion of substructures and meshing, among others. We briefly outline the personalisation approaches that are currently available in 3D cardiac computational modelling. Finally, we present examples of several specific applications, mainly related to cardiac EP simulation and model-based image analysis, showing the potential usefulness of 3D cardiac computational modelling in clinical environments as a tool to aid in the prevention, diagnosis and treatment of cardiac diseases. PMID:25928297

  4. Methods for the computation of detailed geoids and their accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.; Rummel, R.

    1975-01-01

    Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.

  5. The ensemble switch method for computing interfacial tensions

    NASA Astrophysics Data System (ADS)

    Schmitz, Fabian; Virnau, Peter

    2015-04-01

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.

  6. Digital data storage systems, computers, and data verification methods

    DOEpatents

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
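
    The hash-compare scheme in the claim can be sketched with Python's standard hashlib; this is a minimal illustration of the idea, not the patented apparatus:

```python
import hashlib

def snapshot_hash(records):
    """Digest a snapshot of records from a portion of a dynamic database.
    Sorting gives a canonical order, so the digest is order-independent."""
    h = hashlib.sha256()
    for rec in sorted(records):
        h.update(rec.encode("utf-8"))
    return h.hexdigest()

rows = ["id=1,balance=100", "id=2,balance=250"]
first = snapshot_hash(rows)                    # initial moment in time
second = snapshot_hash(list(reversed(rows)))   # later, same data
tampered = snapshot_hash(rows + ["id=3,balance=999"])
```

    Equal digests indicate the portion is unchanged; any insertion, deletion, or edit yields a different digest (up to hash collisions).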

  7. Nonlinear Piece In Hand Matrix Method for Enhancing Security of Multivariate Public Key Cryptosystems

    Microsoft Academic Search

    Shigeo Tsujii; Kohtaro Tadaki; Ryou Fujita

    Abstract. It is widely believed to take exponential time to find a solution of a system of random multivariate polynomials because of the NP-completeness of such a task. On the other hand, in most of the multivariate public key cryptosystems proposed so far, the computational complexity of cryptanalysis is apt to be polynomial time due to the trapdoor structure. In this paper,

  9. Publications

    NSDL National Science Digital Library

    1969-12-31

    The Nitrogen and Phosphorus Knowledge Web page is offered by Iowa State University Extension and the College of Agriculture. The publications page contains links to various newsletters, articles, publications, PowerPoint presentations, governmental publications, and more. For example, visitors will find articles on phosphorus within the Integrated Crop Management Newsletter, PowerPoint presentations on Nitrogen Management and Carbon Sequestration, and links to other Iowa State University publications on subjects such as nutrient management. Other links on the home page contain soil temperature data, research highlights, and other information relevant to those working in related fields.

  10. Computational methods to determine the structure of hydrogen storage materials

    NASA Astrophysics Data System (ADS)

    Mueller, Tim

    2009-03-01

    To understand the mechanisms and thermodynamics of material-based hydrogen storage, it is important to know the structure of the material and the positions of the hydrogen atoms within the material. Because hydrogen can be difficult to resolve experimentally, computational research has proven to be a valuable tool to address these problems. We discuss different computational methods for identifying the structure of hydrogen storage materials and the positions of hydrogen atoms, and we illustrate the methods with specific examples. Through the use of ab-initio molecular dynamics, we identify molecular hydrogen binding sites in the metal-organic framework commonly known as MOF-5 [1]. We present a method to identify the positions of atomic hydrogen in imide structures using a novel type of effective Hamiltonian. We apply this new method to lithium imide (Li2NH), a potentially important hydrogen storage material, and demonstrate that it predicts a new ground state structure [2]. We also present the results of a recent computational study of the room-temperature structure of lithium imide in which we suggest a new structure that reconciles the differences between previous experimental and theoretical studies. [4pt] [1] T. Mueller and G. Ceder, Journal of Physical Chemistry B 109, 17974 (2005). [0pt] [2] T. Mueller and G. Ceder, Physical Review B 74 (2006).

  11. Approximate Quantum Mechanical Methods for Rate Computation in Complex Systems

    NASA Astrophysics Data System (ADS)

    Schwartz, Steven D.

    The last 20 years have seen qualitative leaps in the complexity of chemical reactions that have been studied using theoretical methods. While methodologies for small molecule scattering are still of great importance and under active development [1], two important trends have allowed the theoretical study of the rates of reaction in complex molecules, condensed phase systems, and biological systems. First, there has been the explicit recognition that the type of state to state information obtained by rigorous scattering theory is not only not possible for complex systems, but more importantly, not meaningful. Thus, methodologies have been developed that compute averaged rate data directly from a Hamiltonian. Perhaps the most influential of these approaches has been the correlation function formalisms developed by Bill Miller et al. [2]. While these formal expressions for rate theories are certainly not the only correlation function descriptions of quantum rates [3, 4], these expressions of rates directly in terms of evolution operators, and in their coordinate space representations as Feynman Propagators, have lent themselves beautifully to complex systems because many of the approximation methods that have been devised are for Feynman propagator computation. This fact brings us to the second contributor to the blossoming of these approximate methods, the development of a wide variety of approximate mathematical methods to compute the time evolution of quantum systems. Thus the marriage of these mathematical developments has created the necessary powerful tools needed to probe systems of complexity unimagined just a few decades ago.

  12. Reducing Total Power Consumption Method in Cloud Computing Environments

    E-print Network

    Kuribayashi, Shin-ichi

    2012-01-01

    The widespread use of cloud computing services is expected to rapidly increase the power consumed by ICT equipment in cloud computing environments. This paper first identifies the need for collaboration among servers, the communication network and the power network in order to reduce the total power consumption of the ICT equipment in cloud computing environments. Five fundamental policies for the collaboration are proposed and the algorithm to realize each collaboration policy is outlined. Next, this paper proposes possible signaling sequences for exchanging information on power consumption between the network and the servers, in order to realize the proposed collaboration policies. Then, in order to reduce the power consumption of the network, this paper proposes a simple method of estimating the power consumed by all network devices and attributing it to individual users.
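
    As a toy illustration of that final step, assume a simple proportional-allocation rule: total device power is attributed to users in proportion to their traffic. This rule is an assumption made for the sketch, not the algorithm proposed in the paper:

```python
def user_power_share(device_power_w, user_traffic_bps, total_traffic_bps):
    """Attribute a share of total network power to one user, in
    proportion to that user's share of the traffic (toy allocation rule)."""
    total_power_w = sum(device_power_w)
    return total_power_w * user_traffic_bps / total_traffic_bps

# Hypothetical device draws (watts) and traffic volumes (bits/s)
devices = [120.0, 80.0, 200.0]
share_w = user_power_share(devices, user_traffic_bps=5e6, total_traffic_bps=1e8)
```

    By construction the shares of all users sum to the total device power, so nothing is double-counted.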

  13. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  14. Library Orientation Methods, Mental Maps, and Public Services Planning.

    ERIC Educational Resources Information Center

    Ridgeway, Trish

    Two library orientation methods, a self-guided cassette walking tour and a slide-tape program, were administered to 202 freshmen students to determine if moving through the library increased students' ability to develop a mental map of the library. An effort was made to ensure that the two orientation programs were equivalent. Results from the 148…

  15. Practical methods to improve the development of computational software

    SciTech Connect

    Osborne, A. G.; Harding, D. W.; Deinert, M. R. [Department of Mechanical Engineering, University of Texas, Austin (United States)

    2013-07-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best-practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  16. Characterization of Meta-Materials Using Computational Electromagnetic Methods

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Shin, Joon

    2005-01-01

    An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, a Finite Element Method (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane wave incidence. For efficient computation of the reflection and transmission through a meta-material over a wide frequency band, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicholson-Ross method and Genetic Algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from the knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.

  17. Computer processing improves hydraulics optimization with new methods

    SciTech Connect

    Gavignet, A.A.; Wick, C.J.

    1987-12-01

    In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
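
    The curve-fitting step described above is straightforward with NumPy; the rheometer readings below are invented for illustration, and any real application would use measured data:

```python
import numpy as np

# Invented rheogram: shear rate (1/s) vs. shear stress (Pa)
shear_rate = np.array([5.1, 10.2, 170.0, 340.0, 511.0, 1022.0])
shear_stress = np.array([3.2, 5.1, 28.0, 46.0, 61.0, 102.0])

# Fit the flow curve to a polynomial, as the abstract describes
coeffs = np.polyfit(shear_rate, shear_stress, deg=2)
flow_curve = np.poly1d(coeffs)

def apparent_viscosity(rate):
    """Apparent viscosity (Pa*s) = stress / shear rate from the fit."""
    return flow_curve(rate) / rate
```

    Friction-loss and settling-velocity correlations can then be written as functions of the fitted coefficients, which is the generalization the paper develops.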

  18. Assessing computational methods of cis-regulatory module prediction.

    PubMed

    Su, Jing; Teichmann, Sarah A; Down, Thomas A

    2010-01-01

    Computational methods attempting to identify instances of cis-regulatory modules (CRMs) in the genome face a challenging problem of searching for potentially interacting transcription factor binding sites while knowledge of the specific interactions involved remains limited. Without a comprehensive comparison of their performance, the reliability and accuracy of these tools remains unclear. Faced with a large number of different tools that address this problem, we summarized and categorized them based on search strategy and input data requirements. Twelve representative methods were chosen and applied to predict CRMs from the Drosophila CRM database REDfly, and across the human ENCODE regions. Our results show that the optimal choice of method varies depending on species and composition of the sequences in question. When discriminating CRMs from non-coding regions, those methods considering evolutionary conservation have a stronger predictive power than methods designed to be run on a single genome. Different CRM representations and search strategies rely on different CRM properties, and different methods can complement one another. For example, some favour homotypical clusters of binding sites, while others perform best on short CRMs. Furthermore, most methods appear to be sensitive to the composition and structure of the genome to which they are applied. We analyze the principal features that distinguish the methods that performed well, identify weaknesses leading to poor performance, and provide a guide for users. We also propose key considerations for the development and evaluation of future CRM-prediction methods. PMID:21152003

  19. How public relations professionals are managing the potential for sabotage, rumors, and misinformation disseminated via the Internet by computer hackers

    Microsoft Academic Search

    Joseph Basso

    1997-01-01

    The paper examines how public relations professionals are dealing with the potential for sabotage, rumors, and misinformation spread via the Internet by computer hackers. The author examines the public relations profession from a systems theory perspective and attempts to outline skills necessary for organizational survival in the new information age. Original data was gathered from a sample population of 41

  20. Fluid history computation methods for reactor safeguards problems using MNODE computer program. [PWR and BWR

    Microsoft Academic Search

    Y. S. Huang; C. W. Savery

    1976-01-01

    A method for predicting the pressure-temperature histories of air, water liquid, and vapor flowing in a zoned containment as a result of high energy pipe rupture is described. The computer code, MNODE, has been developed for 12 connected control volumes and 24 inertia flow paths. Predictions by the code are compared with the results of an analytical gas dynamic problem,

  1. European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS Computational Fluid Dynamics Conference 2001

    E-print Network

    Utah, University of

    METHODS FOR ELASTOHYDRODYNAMIC LUBRICATION. Christopher E. Goodyer, Roger Fairlie, Martin Berzins. Innovation Park, P.O. Box 1, Chester, CH1 3SH, UK. Key words: Elastohydrodynamic lubrication, Multigrid, Adaptive meshing. Abstract: The solution of elastohydrodynamic lubrication problems is both computationally

  2. On a method computing transient wave propagation in ionospheric regions

    NASA Technical Reports Server (NTRS)

    Gray, K. G.; Bowhill, S. A.

    1978-01-01

    A consequence of an exoatmospheric nuclear burst is an electromagnetic pulse (EMP) radiated from it. In a region far enough away from the burst, where nonlinear effects can be ignored, the EMP can be represented by a large-amplitude narrow-time-width plane-wave pulse. If the ionosphere lies between the origin and destination of the EMP, frequency dispersion can cause significant changes in the original pulse upon reception. A method of computing these dispersive effects on transient wave propagation is summarized. The method described is different from the standard transform techniques and provides physical insight into the transient wave process. The method, although exact, can be used in approximating the early-time transient response of an ionospheric region by a simple integration, with only explicit knowledge of the electron density, electron collision frequency, and electron gyrofrequency required. As an illustration, the method is applied to a simple example and contrasted with the corresponding transform solution.

  3. Statistical methods for dealing with publication bias in meta-analysis.

    PubMed

    Jin, Zhi-Chao; Zhou, Xiao-Hua; He, Jia

    2015-01-30

    Publication bias is an inevitable problem in systematic reviews and meta-analysis. It is also one of the main threats to the validity of meta-analysis. Although several statistical methods have been developed to detect and adjust for publication bias since the early 1980s, some of them are not well known and are not being used properly in either the statistical or the clinical literature. In this paper, we provide a critical and extensive discussion of the methods for dealing with publication bias, including statistical principles, implementation, and software, as well as the advantages and limitations of these methods. We illustrate a practical application of these methods in a meta-analysis of continuous support for women during childbirth. PMID:25363575
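
    One classical method in this family is Egger's regression test for funnel-plot asymmetry: regress each study's standardized effect on its precision, and read an intercept far from zero as evidence of small-study (publication) bias. A minimal sketch with hypothetical effect sizes, not data from the paper:

```python
import numpy as np

def egger_regression(effects, std_errors):
    """Egger's test sketch: least-squares fit of effect/SE on 1/SE.
    Returns (intercept, slope); a nonzero intercept suggests asymmetry."""
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    y = effects / se                  # standardized effects
    x = 1.0 / se                      # precisions
    X = np.column_stack([np.ones_like(x), x])
    (intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
    return intercept, slope

# Hypothetical log odds ratios and standard errors for five studies
intercept, slope = egger_regression([0.50, 0.40, 0.60, 0.30, 0.45],
                                    [0.10, 0.15, 0.20, 0.25, 0.12])
```

    A full analysis would also report a t-statistic and p-value for the intercept; packages such as statsmodels (Python) or metafor (R) provide that directly.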

  4. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    SciTech Connect

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for diseases, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, divergence-free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long range electrostatic interactions in ion channels, we have developed an image charge based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up the molecular dynamics simulation of transport in biological ion channels.

  5. On implicit Runge-Kutta methods for parallel computations

    NASA Technical Reports Server (NTRS)

    Keeling, Stephen L.

    1987-01-01

    Implicit Runge-Kutta methods which are well-suited for parallel computations are characterized. It is claimed that such methods are, first of all, those for which the associated rational approximation to the exponential has distinct poles, and these are called multiply implicit (MIRK) methods. Also, because of the so-called order reduction phenomenon, there is reason to require that these poles be real. Then, it is proved that a necessary condition for a q-stage, real MIRK to be A sub 0-stable with maximal order q + 1 is that q = 1, 2, 3, or 5. Nevertheless, it is shown that for every positive integer q, there exists a q-stage, real MIRK which is I-stable with order q. Finally, some useful examples of algebraically stable MIRKs are given.

  6. Prepublication version. Accepted for publication in Neural Computation, 2014. Spine head calcium as a measure of summed postsynaptic activity for

    E-print Network

    Graham, Bruce

    University of Glasgow, Glasgow, G12 8QB, U.K. Running title: Spine head calcium driving synaptic plasticity. We use a computational model of a hippocampal CA1 pyramidal cell to demonstrate that spine head calcium

  7. A Critical Review of Computer-Assisted Learning in Public Health via the Internet, 1999-2008

    ERIC Educational Resources Information Center

    Corda, Kirsten W.; Polacek, Georgia N. L. J.

    2009-01-01

    Computers and the internet have been utilized as viable avenues for public health education delivery. Yet the effectiveness, e.g., behavior change, from use of these tools has been limited. Previous reviews have focused on single health topics such as smoking cessation and weight loss. This review broadens the scope to consider computer-assisted…

  8. CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) is a simple test that

    E-print Network

    Zou, Cliff C.

    Abstract -- CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) is a simple test that is easy for humans but extremely difficult for computers to solve. CAPTCHA has been widely used by web sites to protect their resources from attacks initiated by automatic scripts. By design, CAPTCHA is unable

  9. Adaptive Mesh Refinement in Computational Astrophysics -- Methods and Applications

    E-print Network

    Dinshaw Balsara

    2001-12-06

    The advent of robust, reliable and accurate higher order Godunov schemes for many of the systems of equations of interest in computational astrophysics has made it important to understand how to solve them in multi-scale fashion. This is so because the physics associated with astrophysical phenomena evolves in multi-scale fashion and we wish to arrive at a multi-scale simulational capability to represent the physics. Because astrophysical systems have magnetic fields, multi-scale magnetohydrodynamics (MHD) is of especial interest. In this paper we first discuss general issues in adaptive mesh refinement (AMR). We then focus on the important issues in carrying out divergence-free AMR-MHD and catalogue the progress we have made in that area. We show that AMR methods lend themselves to easy parallelization. We then discuss applications of the RIEMANN framework for AMR-MHD to problems in computational astrophysics.

  10. Using Geostatistical Methods in the Analysis of Public Health Data: The Final Frontier?

    Microsoft Academic Search

    Linda J. Young; Carol A. Gotway

    Geostatistical methods have been demonstrated to be very powerful analytical tools in a variety of disciplines, most notably in mining, agriculture, meteorology, hydrology, geology and environmental science. Unfortunately, their use in public health, medical geography, and spatial epidemiology has languished in favor of Bayesian methods or the analytical methods developed in geography and promoted via geographic information systems. In this

  11. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is FULLY AUTOMATED and requires no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself from other re-slicing solutions through its complete automation and its advanced processing and analysis capabilities.
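
    The unwrap step can be illustrated with a nearest-neighbour polar sampling of a single CT slice; this is a simplified sketch of the general idea, not the NASA software:

```python
import numpy as np

def unwrap_ring(slice2d, center, radius, n_theta=360):
    """Sample a slice along a circle of the given radius, producing one
    row of the 'unwrapped' sheet; stacking rows over radii (and slices
    over height) yields the 2-D sheets described above."""
    cy, cx = center
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    ys = np.round(cy + radius * np.sin(theta)).astype(int)
    xs = np.round(cx + radius * np.cos(theta)).astype(int)
    return slice2d[ys, xs]            # nearest-neighbour lookup

# Synthetic slice: a bright ring of radius 30 in a 101x101 image
yy, xx = np.mgrid[0:101, 0:101]
ring = (np.abs(np.hypot(yy - 50, xx - 50) - 30.0) < 1.5).astype(float)
row = unwrap_ring(ring, (50, 50), 30.0)
```

    Sampling at the ring's radius returns a uniformly bright row, while other radii return background, which is the behaviour that makes defects in a cylindrical wall easy to spot in the unwrapped view.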

  12. High performance computing and the simplex method Julian Hall, Qi Huangfu and Edmund Smith

    E-print Network

    Hall, Julian

    High performance computing and the simplex method. Julian Hall, Qi Huangfu and Edmund Smith, School of Mathematics, University of Edinburgh, 12th April 2011.

  13. Analysis of flavonoids: tandem mass spectrometry, computational methods, and NMR.

    PubMed

    March, Raymond; Brodbelt, Jennifer

    2008-12-01

    Due to the increasing understanding of the health benefits and chemopreventive properties of flavonoids, there continues to be significant effort dedicated to improved analytical methods for characterizing the structures of flavonoids and monitoring their levels in fruits and vegetables, as well as developing new approaches for mapping the interactions of flavonoids with biological molecules. Tandem mass spectrometry (MS/MS), particularly in conjunction with liquid chromatography (LC), is the dominant technique that has been pursued for elucidation of flavonoids. Metal complexation strategies have proven to be especially promising for enhancing the ionization of flavonoids and yielding key diagnostic product ions for differentiation of isomers. Of particular value is the addition of a chromophoric ligand to allow the application of infrared (IR) multiphoton dissociation as an alternative to collision-induced dissociation (CID) for the differentiation of isomers. CID, including energy-resolved methods, and nuclear magnetic resonance (NMR) have also been utilized widely for structural characterization of numerous classes of flavonoids and development of structure/activity relationships. The gas-phase ion chemistry of flavonoids is an active area of research, particularly when combined with accurate mass measurement for distinguishing between isobaric ions. Applications of a variety of ab initio and chemical computation methods to the study of flavonoids have been reported, and the results of computations of ion and molecular structures have been shown together with computations of atomic charges and ion fragmentation. Unambiguous ion structures are obtained rarely using MS alone. Thus, it is necessary to combine MS with spectroscopic techniques such as ultraviolet (UV) and NMR to achieve this objective. The application of NMR data to the mass spectrometric examination of flavonoids is discussed. PMID:18855332

  14. Method and apparatus for managing transactions with connected computers

    DOEpatents

    Goldsmith, Steven Y. (Albuquerque, NM); Phillips, Laurence R. (Corrales, NM); Spires, Shannon V. (Albuquerque, NM)

    2003-01-01

    The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.

  15. Asymptotics and computations for approximation of method of regularization estimators

    E-print Network

    Lee, Sang-Joon

    2005-08-29

    ASYMPTOTICS AND COMPUTATIONS FOR APPROXIMATION OF METHOD OF REGULARIZATION ESTIMATORS. A Dissertation by SANG-JOON LEE. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree... φ ∈ W_2^m[0,1] with φ(t_i) = L_i f, i = 1, ..., n, and consider the model y_i = L_i f + ε_i, i = 1, ..., n, (1.4) where the ε_i are as in (1.2) and f is some unknown element of W_2^m[0,1]. The estimator for f in the generalized model (1.4) can be obtained from MOR. More precisely...
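
    In a finite-dimensional setting the method-of-regularization (MOR) estimator reduces to a ridge-type linear system, minimizing ||y - Lf||^2 + lam * f'Pf. The sketch below uses synthetic data and an identity penalty, a deliberate simplification of the dissertation's smoothing-spline setting:

```python
import numpy as np

def mor_estimate(L, y, lam, penalty=None):
    """Method-of-regularization estimate: solve the normal equations
    (L'L + lam*P) f = L'y of the penalized least-squares criterion."""
    P = np.eye(L.shape[1]) if penalty is None else penalty
    return np.linalg.solve(L.T @ L + lam * P, L.T @ y)

rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5))             # observation functionals L_i
f_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = L @ f_true + 0.01 * rng.standard_normal(50)
f_hat = mor_estimate(L, y, lam=1e-3)
```

    For small lam and low noise the estimate tracks the true coefficients; increasing lam trades variance for bias, which is exactly the tuning question MOR asymptotics address.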

  16. Numerical methods and computers used in elastohydrodynamic lubrication

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Tripp, J. H.

    1982-01-01

    Some of the methods of obtaining approximate numerical solutions to boundary value problems that arise in elastohydrodynamic lubrication are reviewed. The highlights of four general approaches (direct, inverse, quasi-inverse, and Newton-Raphson) are sketched. Advantages and disadvantages of these approaches are presented along with a flow chart showing some of the details of each. The basic question of numerical stability of the elastohydrodynamic lubrication solutions, especially in the pressure spike region, is considered. Computers used to solve this important class of lubrication problems are briefly described, with emphasis on supercomputers.

  17. FAST MARCHING METHOD TO CORRECT FOR REFRACTION IN ULTRASOUND COMPUTED TOMOGRAPHY

    E-print Network

    Mueller, Klaus

    FAST MARCHING METHOD TO CORRECT FOR REFRACTION IN ULTRASOUND COMPUTED TOMOGRAPHY Shengying Li Detection Systems ABSTRACT A significant obstacle in the advancement of Ultrasound Computed Tomography has ultrasound breast phantom. 1. INTRODUCTION Ultrasound computed tomography (UCT) has a long history

  18. Computer-simulation methods in human linkage analysis.

    PubMed Central

    Ott, J

    1989-01-01

    In human linkage analysis, many statistical problems without analytical solution could be solved by ad hoc Monte Carlo procedures were efficient computer-simulation methods available for members of family pedigrees. In this paper, a general method is described for randomly generating genotypes at one or more marker loci, given observed phenotypes at loci linked among themselves and with the markers. The method is based on a well-known expansion of the multivariate probability of genotypes, given phenotypes, into a product of conditional univariate probabilities that may be viewed as corresponding to conditionally independent univariate random variables. This representation allows a recursive evaluation of the univariate probabilities that can be implemented in a surprisingly simple manner by carrying out successive "risk calculations" with respect to marker genotypes, given observed phenotypes and marker genotypes already generated. Potential applications to various unresolved problems are discussed. The method is applied to 28 published families analyzed for genetic linkage between hereditary motor and sensory neuropathy I and the Duffy (FY) blood group locus and confirms heterogeneity of hereditary motor and sensory neuropathy I. An implementation of the simulation methods developed in the LINKAGE program package will be available later in 1989. PMID:2726769
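    The recursive factorization described above — expanding the multivariate genotype probability into a product of univariate conditionals and drawing each genotype given the ones already generated — can be sketched as follows. This toy two-locus version conditions only on previously sampled genotypes (a real pedigree sampler also conditions on observed phenotypes); the function names and probability tables are illustrative, not from the LINKAGE package.

```python
import random

def weighted_choice(probs, rng):
    """Draw one key from a dict {outcome: probability}."""
    r = rng.random()
    acc = 0.0
    for outcome, p in probs.items():
        acc += p
        if r < acc:
            return outcome
    return outcome  # guard against floating-point rounding

def sample_sequential(p_first, cond_tables, rng):
    """Sample (g1, ..., gk) from P(g1) * P(g2|g1) * ... * P(gk|g_{k-1}):
    each genotype is drawn from a univariate conditional given the
    genotypes already generated, mirroring the recursive 'risk
    calculation' idea described in the abstract."""
    g = [weighted_choice(p_first, rng)]
    for table in cond_tables:
        g.append(weighted_choice(table[g[-1]], rng))
    return g
```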

  19. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel method computational fluid dynamic (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  20. Fractional Steps methods for transient problems on commodity computer architectures

    NASA Astrophysics Data System (ADS)

    Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.

    2008-12-01

    Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitates calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specially addressed. High efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in the memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000^3 unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
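    The Locally One-Dimensional idea above replaces one expensive multi-dimensional implicit solve with a sequence of cheap tridiagonal (Thomas algorithm) sweeps, one per coordinate direction. A minimal 2D sketch of one LOD timestep for the heat equation follows; it is a toy illustration of the splitting, not the authors' optimized shared-memory implementation.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp = np.zeros(n); dp = np.zeros(n)
    cp[0] = c[0] / b[0]; dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_1d(row, alpha):
    """One implicit 1D diffusion solve: (1 + 2a)u_i - a(u_{i-1} + u_{i+1}) = rhs,
    with homogeneous Dirichlet values outside the domain."""
    n = len(row)
    a = np.full(n, -alpha); c = np.full(n, -alpha)
    b = np.full(n, 1 + 2 * alpha)
    a[0] = 0.0; c[-1] = 0.0
    return thomas(a, b, c, row)

def lod_step(u, alpha):
    """One LOD timestep for u_t = u_xx + u_yy on a unit grid:
    an implicit sweep along each coordinate direction in turn."""
    for axis in (0, 1):
        u = np.apply_along_axis(implicit_1d, axis, u, alpha)
    return u
```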

  1. Computing thermal Wigner densities with the phase integration method

    NASA Astrophysics Data System (ADS)

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-08-01

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  2. Computational aeroacoustics applications based on a discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Delorme, Philippe; Mazet, Pierre; Peyret, Christophe; Ventribout, Yoan

    2005-09-01

    CAA simulation requires the calculation of the propagation of acoustic waves with low numerical dissipation and dispersion error, and must take into account complex geometries. To answer both challenges at the same time, a Discontinuous Galerkin Method is developed for Computational AeroAcoustics. Euler's linearized equations are solved with the Discontinuous Galerkin Method using flux-splitting techniques. Boundary conditions are established for rigid wall, non-reflective boundary and imposed values. A first validation, for in-duct propagation, is realized. Then, applications illustrate: the Chu and Kovasznay decomposition of perturbation inside uniform flow in terms of independent acoustic and rotational modes, Kelvin-Helmholtz instability and acoustic diffraction by an aircraft wing. To cite this article: Ph. Delorme et al., C. R. Mecanique 333 (2005).

  3. Parallel computation of multigroup reactivity coefficient using iterative method

    SciTech Connect

    Susmikanti, Mike [Center for Development of Nuclear Informatics, National Nuclear Energy Agency of Indonesia PUSPIPTEK Area, Tangerang (Indonesia)]; Dewayatna, Winter [Center for Nuclear Fuel Technology, National Nuclear Energy Agency of Indonesia PUSPIPTEK Area, Tangerang (Indonesia)]

    2013-09-09

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets form a stainless steel tube containing layers of high-enriched uranium, and the tube is irradiated to obtain fission products, which are widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core can disturb its performance; one such disturbance comes from changes in flux, or reactivity. It is therefore necessary to study a method for calculating safety margins under the ongoing configuration changes during the life of the reactor, and making the code faster becomes an absolute necessity. The neutron safety margin for the research reactor can be re-evaluated without a full recalculation of the reactor's reactivity, which is an advantage of the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model is computationally complex, so several parallel algorithms with iterative methods have been developed for solving the resulting large, sparse matrix systems. The black-red Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficient. In this research, a code for reactivity calculation, as one component of safety analysis, was developed with parallel processing; the calculation can be done more quickly and efficiently by utilizing the cores of a multicore computer. The code was applied to the safety-limit calculation of irradiated FPM targets with increasing uranium content.
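    The power iteration named above is the standard outer iteration for extracting the dominant eigenpair (the multiplication factor and the fundamental flux mode) from a discretized diffusion operator. A generic sketch follows; the paper's parallel red-black Gauss-Seidel inner solver is not reproduced, and the matrix here is an arbitrary stand-in for the discretized operator.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=1000):
    """Estimate the dominant eigenvalue/eigenvector of A by repeatedly
    applying A and renormalizing -- the same outer iteration used for
    criticality (k-effective) calculations in multigroup diffusion."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)
        y /= lam_new
        if abs(lam_new - lam) < tol:
            return lam_new, y
        x, lam = y, lam_new
    return lam, x
```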

  4. A fast phase space method for computing creeping rays

    SciTech Connect

    Motamed, Mohammad [Department of Numerical Analysis and Computer Science, Royal Institute of Technology (KTH), Lindstadsvagen 3, 10044 Stockholm (Sweden)]. E-mail: mohamad@nada.kth.se; Runborg, Olof [Department of Numerical Analysis and Computer Science, Royal Institute of Technology (KTH), Lindstadsvagen 3, 10044 Stockholm (Sweden)]. E-mail: olofr@nada.kth.se

    2006-11-20

    Creeping rays can give an important contribution to the solution of medium to high frequency scattering problems. They are generated at the shadow lines of the illuminated scatterer by grazing incident rays and propagate along geodesics on the scatterer surface, continuously shedding diffracted rays in their tangential direction. In this paper, we show how the ray propagation problem can be formulated as a partial differential equation (PDE) in a three-dimensional phase space. To solve the PDE we use a fast marching method. The PDE solution contains information about all possible creeping rays. This information includes the phase and amplitude of the field, which are extracted by a fast post-processing. Computationally, the cost of solving the PDE is less than tracing all rays individually by solving a system of ordinary differential equations. We consider an application to mono-static radar cross section problems where creeping rays from all illumination angles must be computed. The numerical results of the fast phase space method and a comparison with the results of ray tracing are presented.
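    The fast marching ingredient of the method combines a Dijkstra-like causal ordering of grid points with an upwind update of the eikonal equation |∇T| = s. The sketch below is the generic isotropic 2D solver only, not the paper's three-dimensional phase-space formulation for creeping rays.

```python
import heapq
import math

def eikonal_update(T, i, j, slowness):
    """Upwind eikonal update at (i, j) from the smallest neighbour
    values in each coordinate direction (unit grid spacing)."""
    tx = min(T[i][j - 1] if j > 0 else math.inf,
             T[i][j + 1] if j + 1 < len(T[0]) else math.inf)
    ty = min(T[i - 1][j] if i > 0 else math.inf,
             T[i + 1][j] if i + 1 < len(T) else math.inf)
    a, b = sorted((tx, ty))
    if b - a >= slowness:          # only one direction is causally usable
        return a + slowness
    # two-sided quadratic update: (T-a)^2 + (T-b)^2 = slowness^2
    return 0.5 * (a + b + math.sqrt(2 * slowness**2 - (a - b)**2))

def fast_marching(speed, src):
    """First-arrival times T with |grad T| = 1/speed, computed in the
    causal (Dijkstra-like) order characteristic of fast marching."""
    ny, nx = len(speed), len(speed[0])
    T = [[math.inf] * nx for _ in range(ny)]
    T[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    done = set()
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if (i, j) in done:
            continue
        done.add((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and (ni, nj) not in done:
                tn = eikonal_update(T, ni, nj, 1.0 / speed[ni][nj])
                if tn < T[ni][nj]:
                    T[ni][nj] = tn
                    heapq.heappush(heap, (tn, (ni, nj)))
    return T
```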

  5. Evaluation of some methods for the relative assessment of scientific publications

    Microsoft Academic Search

    P. Vinkler

    1986-01-01

    Some bibliometric methods for the assessment of the publication activity of research units are discussed on the basis of impact factors and citations of papers. Average subfield impact factor of periodicals representing subfields in chemistry is suggested. This indicator characterizes the average citedness of a paper in a given subfield. Comparing the total sum of impact factors of corresponding periodicals

  6. Developing a personal computer-based data visualization system using public domain software

    NASA Astrophysics Data System (ADS)

    Chen, Philip C.

    1999-03-01

    The current research will investigate the possibility of developing a computing-visualization system using a public domain software system built on a personal computer. Visualization Toolkit (VTK) is available on UNIX and PC platforms. VTK uses C++ to build an executable. It has abundant programming classes/objects that are contained in the system library. Users can also develop their own classes/objects in addition to those existing in the class library. Users can develop applications with any of the C++, Tcl/Tk, and JAVA environments. The present research will show how a data visualization system can be developed with VTK running on a personal computer. The topics will include: execution efficiency; visual object quality; availability of the user interface design; and exploring the feasibility of the VTK-based World Wide Web data visualization system. The present research will feature a case study showing how to use VTK to visualize meteorological data with techniques including iso-surface, volume rendering, vector display, and composite analysis. The study also shows how the VTK outline, axes, and two-dimensional annotation text and title enhance the data presentation. The present research will also demonstrate how VTK works in an internet environment while accessing an executable with a JAVA application program in a webpage.

  7. Are Private Schools Better Than Public Schools? Appraisal for Ireland by Methods for Observational Studies

    PubMed Central

    Pfeffermann, Danny; Landsman, Victoria

    2011-01-01

    In observational studies the assignment of units to treatments is not under control. Consequently, the estimation and comparison of treatment effects based on the empirical distribution of the responses can be biased, since the units exposed to the various treatments could differ in important unknown pretreatment characteristics, which are related to the response. An important example studied in this article is the question of whether private schools offer better quality of education than public schools. In order to address this question we use data collected in the year 2000 by OECD for the Programme for International Student Assessment (PISA). Focusing for illustration on scores in mathematics of 15-year-old pupils in Ireland, we find that the raw average score of pupils in private schools is higher than that of pupils in public schools. However, application of a newly proposed method for observational studies suggests that the less able pupils tend to enroll in public schools, such that their lower scores are not necessarily an indication of poor quality of the public schools. Indeed, when comparing the average score in the two types of schools after adjusting for the enrollment effects, we find quite surprisingly that public schools perform better on average. This outcome is supported by the methods of instrumental variables and latent variables, commonly used by econometricians for analyzing and evaluating social programs. PMID:22242110

  8. ULO Course Learning Outcome Assessment Method Pedagogy 02-01 Build on what public speaking entails to adv-

    E-print Network

    Barrash, Warren

    COMM441 ULO Course Learning Outcome Assessment Method Pedagogy 02-01 Build on what public speaking, extemporaneous style, and proficient speaking about a specific topic. 02-02 Build on what public speaking entails, extemporaneous style, and proficient speaking about a specific topic. 03-01 Build on what public speaking entails

  9. Characterization of heterogeneous solids via wave methods in computational microelasticity

    NASA Astrophysics Data System (ADS)

    Gonella, Stefano; Steven Greene, M.; Kam Liu, Wing

    2011-05-01

    Real solids are inherently heterogeneous bodies. While the resolution at which they are observed may be disparate from one material to the next, heterogeneities heavily affect the dynamic behavior of all microstructured solids. This work introduces a wave propagation simulation methodology, based on Mindlin's microelastic continuum theory, as a tool to dynamically characterize microstructured solids in a way that naturally accounts for their inherent heterogeneities. Wave motion represents a natural benchmark problem to appreciate the full benefits of the microelastic theory, as it is in high-frequency dynamic regimes that microstructural effects most clearly reveal themselves. Through a finite-element implementation of the microelastic continuum and the interpretation of the resulting computational multiscale wavefields, one can estimate the effect of microstructures upon the wave propagation modes, phase and group velocities. By accounting for microstructures without explicitly modeling them, the method allows reducing the computational time with respect to classical methods based on a direct numerical simulation of the heterogeneities. The numerical method put forth in this research implements the microelastic theory through a finite-element scheme with enriched super-elements featuring microstructural degrees of freedom, and implementing constitutive laws obtained by homogenizing the microstructure characteristics over material meso-domains. It is possible to envision the use of this modeling methodology in support of diverse applications, ranging from structural health monitoring in composite materials to the simulation of biological and geomaterials. From an intellectual point of view, this work offers a mathematical explanation of some of the discrepancies often observed between one-scale models and physical experiments by targeting the area of wave propagation, one area where these discrepancies are most pronounced.

  10. Computational Simulation of Buoyancy-Driven Flows Using Vortex Methods.

    NASA Astrophysics Data System (ADS)

    Egan, Erik Witmer

    A new vortex method for simulating two-dimensional buoyancy-driven flows is presented. This Lagrangian method utilizes a discrete representation of the known density field along with the vorticity transport equation and Boussinesq approximation to yield the baroclinically-generated vorticity field, also in a discrete form. The corresponding velocity field is then computed using a vorticity-streamfunction scheme similar to the vortex-in-cell approach. Complete simulations for a variety of Rayleigh-Taylor stability problems are presented, as are preliminary results for Rayleigh-Bénard flows. The discrete vorticity field is made up of vertically oriented vortex dipole markers. The mutual interactions among these markers are determined by redistributing the dipolar marker vorticity onto a fixed array of true vortices. Standard vortex-in-cell techniques can then be used to generate marker velocities. The vorticity redistribution step is accomplished by matching the far-field velocity of a single dipole marker to that generated by the local grid vortices. The overall simulation method is termed the Dipole-in-Cell approach. Viscous and thermal diffusion effects (for Rayleigh-Bénard flows only) are described using a random walk scheme. Rayleigh-Taylor simulations for both single- and double-interface geometries show the expected linear and nonlinear flow development, including the recirculation associated with the Kelvin-Helmholtz interfacial instability. The double-interface results show the development of an "anti-spike" along the top interface, as seen in other studies. The simulations are also shown to be capable of following the impact of a mass of fluid on solid boundaries and pools of stagnant fluid. The Rayleigh-Bénard results demonstrate the validity of the random walk mechanism for simulating diffusion and the ability to generate rough representations of the classic Bénard convection cells. The accuracy of the Bénard cell results is limited by the long computation times required to reach steady state for small Rayleigh numbers. For the large Rayleigh number flows of greatest interest, no such problems will occur and the method should be well suited to simulating them. Suggestions are made for method improvements, including extensions to three-dimensional flow problems.
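    The random walk treatment of diffusion mentioned above rests on a single matching condition: after time t = n·dt, 1D walker displacements should be Gaussian with variance 2Dt. A minimal sketch with illustrative parameters (this is the generic scheme, not the dissertation's vortex-coupled implementation):

```python
import random
import statistics

def random_walk_positions(n_walkers, n_steps, D, dt, rng):
    """1D Brownian random walk: each step adds a Gaussian increment of
    variance 2*D*dt, the standard random-walk model of diffusion used
    to add viscous/thermal effects in vortex methods."""
    sigma = (2.0 * D * dt) ** 0.5
    positions = []
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
        positions.append(x)
    return positions
```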

  11. Computational Acoustic Methods for the Design of Woodwind Instruments

    NASA Astrophysics Data System (ADS)

    Lefebvre, Antoine

    2011-12-01

    This thesis presents a number of methods for the computational analysis of woodwind instruments. The Transmission-Matrix Method (TMM) for the calculation of the input impedance of an instrument is described. An approach based on the Finite Element Method (FEM) is applied to the determination of the transmission-matrix parameters of woodwind instrument toneholes, from which new formulas are developed that extend the range of validity of current theories. The effect of a hanging keypad is investigated and discrepancies with current theories are found for short toneholes. This approach was applied as well to toneholes on a conical bore, and we conclude that the tonehole transmission matrix parameters developed on a cylindrical bore are equally valid for use on a conical bore. A boundary condition for the approximation of the boundary layer losses for use with the FEM was developed, and it enables the simulation of complete woodwind instruments. The comparison of the simulations of instruments with many open or closed toneholes with calculations using the TMM reveal discrepancies that are most likely attributable to internal or external tonehole interactions. This is not taken into account in the TMM and poses a limit to its accuracy. The maximal error is found to be smaller than 10 cents. The effect of the curvature of the main bore is investigated using the FEM. The radiation impedance of a wind instrument bell is calculated using the FEM and compared to TMM calculations; we conclude that the TMM is not appropriate for the simulation of flaring bells. Finally, a method is presented for the calculation of the tonehole positions and dimensions under various constraints using an optimization algorithm, which is based on the estimation of the playing frequencies using the Transmission-Matrix Method. A number of simple woodwind instruments are designed using this algorithm and prototypes evaluated.
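    The Transmission-Matrix Method evaluated in the thesis chains 2x2 matrices relating pressure and volume flow at a segment's input to those at its output, then transforms the radiation load impedance back to the mouthpiece. A minimal lossless-cylinder sketch follows; a real instrument model needs viscothermal losses, tonehole branches, and a radiation impedance, none of which are included here.

```python
import cmath
import math

def tmm_cylinder(k, L, Zc):
    """Lossless transmission matrix of a cylindrical bore of length L:
    [[cos kL, i Zc sin kL], [i sin(kL)/Zc, cos kL]]."""
    return [[cmath.cos(k * L), 1j * Zc * cmath.sin(k * L)],
            [1j * cmath.sin(k * L) / Zc, cmath.cos(k * L)]]

def input_impedance(matrices, Z_load):
    """Chain the segment matrices (input to output) and map the load
    impedance back to the input: Zin = (A*ZL + B) / (C*ZL + D)."""
    A, B, C, D = 1.0, 0.0, 0.0, 1.0
    for (a, b), (c, d) in matrices:
        A, B, C, D = A * a + B * c, A * b + B * d, C * a + D * c, C * b + D * d
    return (A * Z_load + B) / (C * Z_load + D)
```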

  12. Smart algorithms and adaptive methods in computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Tinsley Oden, J.

    1989-05-01

    A review is presented of the use of smart algorithms which employ adaptive methods in processing large amounts of data in computational fluid dynamics (CFD). Smart algorithms use a rationally based set of criteria for automatic decision making in an attempt to produce optimal simulations of complex fluid dynamics problems. The information needed to make these decisions is not known beforehand and evolves in structure and form during the numerical solution of flow problems. Once the code makes a decision based on the available data, the structure of the data may change, and criteria may be reapplied in order to direct the analysis toward an acceptable end. Intelligent decisions are made by processing vast amounts of data that evolve unpredictably during the calculation. The basic components of adaptive methods and their application to complex problems of fluid dynamics are reviewed. The basic components of adaptive methods are: (1) data structures, that is what approaches are available for modifying data structures of an approximation so as to reduce errors; (2) error estimation, that is what techniques exist for estimating error evolution in a CFD calculation; and (3) solvers, what algorithms are available which can function in changing meshes. Numerical examples which demonstrate the viability of these approaches are presented.

  13. Profiling animal toxicants by automatically mining public bioassay data: a big data approach for computational toxicology.

    PubMed

    Zhang, Jun; Hsieh, Jui-Hua; Zhu, Hao

    2014-01-01

    In vitro bioassays have been developed and are currently being evaluated as potential alternatives to traditional animal toxicity models. Already, the progress of high throughput screening techniques has resulted in an enormous amount of publicly available bioassay data having been generated for a large collection of compounds. When a compound is tested using a collection of various bioassays, all the testing results can be considered as providing a unique bio-profile for this compound, which records the responses induced when the compound interacts with different cellular systems or biological targets. Profiling compounds of environmental or pharmaceutical interest using useful toxicity bioassay data is a promising method to study complex animal toxicity. In this study, we developed an automatic virtual profiling tool to evaluate potential animal toxicants. First, we automatically acquired all PubChem bioassay data for a set of 4,841 compounds with publicly available rat acute toxicity results. Next, we developed a scoring system to evaluate the relevance between these extracted bioassays and animal acute toxicity. Finally, the top ranked bioassays were selected to profile the compounds of interest. The resulting response profiles proved to be useful to prioritize untested compounds for their animal toxicity potentials and form a potential in vitro toxicity testing panel. The protocol developed in this study could be combined with structure-activity approaches and used to explore additional publicly available bioassay datasets for modeling a broader range of animal toxicities. PMID:24950175
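    The profiling workflow above hinges on a scoring system that ranks bioassays by their relevance to animal toxicity. The paper's actual score is not given in this snippet, so the sketch below uses a plain balanced-accuracy stand-in between an assay's active/inactive calls and the toxicity labels; function names and data are illustrative.

```python
def relevance_score(bioassay_hits, toxic_labels):
    """Illustrative relevance score: balanced accuracy (mean of
    sensitivity and specificity) between one bioassay's calls and
    the animal-toxicity labels."""
    tp = sum(1 for h, t in zip(bioassay_hits, toxic_labels) if h and t)
    tn = sum(1 for h, t in zip(bioassay_hits, toxic_labels) if not h and not t)
    pos = sum(toxic_labels)
    neg = len(toxic_labels) - pos
    sens = tp / pos if pos else 0.0
    spec = tn / neg if neg else 0.0
    return 0.5 * (sens + spec)

def rank_bioassays(assays, toxic_labels, top_k):
    """Select the top-k bioassays by relevance, i.e. the profiling panel."""
    scored = sorted(assays.items(),
                    key=lambda kv: relevance_score(kv[1], toxic_labels),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]
```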

  14. Novel computational methods to design protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Zhou, Alice Qinhua; O'Hern, Corey; Regan, Lynne

    2014-03-01

    Despite the abundance of structural data, we still cannot accurately predict the structural and energetic changes resulting from mutations at protein interfaces. The inadequacy of current computational approaches to the analysis and design of protein-protein interactions has hampered the development of novel therapeutic and diagnostic agents. In this work, we apply a simple physical model that includes only a minimal set of geometrical constraints, excluded volume, and attractive van der Waals interactions to 1) rank the binding affinity of mutants of tetratricopeptide repeat proteins with their cognate peptides, 2) rank the energetics of binding of small designed proteins to the hydrophobic stem region of the influenza hemagglutinin protein, and 3) predict the stability of T4 lysozyme and staphylococcal nuclease mutants. This work will not only lead to a fundamental understanding of protein-protein interactions, but also to the development of efficient computational methods to rationally design protein interfaces with tunable specificity and affinity, and numerous applications in biomedicine. NSF DMR-1006537, PHY-1019147, Raymond and Beverly Sackler Institute for Biological, Physical and Engineering Sciences, and Howard Hughes Medical Institute.

  15. Computational Methods and Challenges for Large-Scale Circuit Mapping

    PubMed Central

    Helmstaedter, Moritz; Mitra, Partha

    2012-01-01

    Summary The connectivity architecture of neuronal circuits is essential to understand how brains work, yet our knowledge about the neuronal wiring diagrams remains limited and partial. Technical breakthroughs in labeling and imaging methods starting more than a century ago have advanced knowledge in the field. However, the volume of data associated with imaging a whole brain or a significant fraction thereof, with electron or light microscopy, has only recently become amenable to digital storage and analysis. A mouse brain imaged at light microscopic resolution is about a terabyte of data, and 1 mm3 of the brain at EM resolution is about half a petabyte. This has given rise to a new field of research, computational analysis of large scale neuroanatomical data sets, with goals that include reconstructions of the morphology of individual neurons as well as entire circuits. The problems encountered include large data management, segmentation and 3D reconstruction, computational geometry and workflow management allowing for hybrid approaches combining manual and algorithmic processing. Here we review this growing field of neuronal data analysis with emphasis on reconstructing neurons from EM data cubes. PMID:22221862
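    The data volumes quoted above follow from simple voxel arithmetic. The sketch below reproduces their order of magnitude; the voxel sizes (10 x 10 x 25 nm for EM, 1 um isotropic for light microscopy, one byte per voxel, ~0.5 cm^3 for a mouse brain) are assumptions chosen for illustration, not figures stated in the abstract.

```python
def voxel_count(volume_m3, voxel_dims_m):
    """Number of voxels needed to cover a volume at a given voxel size
    (equals bytes at one byte per voxel)."""
    vx, vy, vz = voxel_dims_m
    return volume_m3 / (vx * vy * vz)

# EM: 1 mm^3 at assumed 10 x 10 x 25 nm voxels -> a few hundred TB
em_bytes = voxel_count(1e-9, (10e-9, 10e-9, 25e-9))

# Light microscopy: assumed ~0.5 cm^3 brain at 1 um isotropic voxels
lm_bytes = voxel_count(0.5e-6, (1e-6, 1e-6, 1e-6))
```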

  16. Computational and analytical methods in nonlinear fluid dynamics

    NASA Astrophysics Data System (ADS)

    Walker, James

    1993-09-01

    The central focus of the program was on the application and development of modern analytical and computational methods to the solution of nonlinear problems in fluid dynamics and reactive gas dynamics. The research was carried out within the Division of Engineering Mathematics in the Department of Mechanical Engineering and Mechanics and principally involved Professors P.A. Blythe, E. Varley and J.D.A. Walker. In addition, the program involved various international collaborations. Professor Blythe completed work on reactive gas dynamics with Professor D. Crighton FRS of Cambridge University in the United Kingdom. Professor Walker and his students carried out joint work with Professor F.T. Smith, of University College London, on various problems in unsteady flow and turbulent boundary layers.

  17. Modern wing flutter analysis by computational fluid dynamics methods

    NASA Technical Reports Server (NTRS)

    Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.

    1987-01-01

    The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data, which is the first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.

  18. Helping Students Soar to Success on Computers: An Investigation of the Soar Study Method for Computer-Based Learning

    ERIC Educational Resources Information Center

    Jairam, Dharmananda; Kiewra, Kenneth A.

    2010-01-01

    This study used self-report and observation techniques to investigate how students study computer-based materials. In addition, it examined if a study method called SOAR can facilitate computer-based learning. SOAR is an acronym that stands for the method's 4 theoretically driven and empirically supported components: select (S), organize (O),…

  19. Public Domain Computer-Aided Surgery (CAS) in Orthodontic and Maxillofacial Surgery

    Microsoft Academic Search

    Thomas Stamm; Ulrich Meyer; Norbert Meier; Ulrike Ehmer; Ulrich Joos

    2002-01-01

    Background: Computer-aided virtual three-dimensional (3D) surgical simulation assists the necessary visual understanding of complex pathological situations but has so far been dependent on expensive hardware and software. Method: For the first time a non-commercial, user-orientated application for orthognathic and craniofacial surgical simulation has been introduced, based on freeware NIH Image 1.62 provided by the National Institute of Mental Health (NIMH).

  20. [Simulation exercises, a problem oriented method of learning public health in medical education].

    PubMed

    Yano, E; Tamiya, N; Hasegawa, T

    1998-03-01

    Using the case method of learning of American business schools, we introduced "Simulation Exercises (SE)," a problem oriented method of public health education for medical students. With SE, a group of students were given simulated cases of patients or situations (SC), and were asked to assume the role of physicians or other public health workers using their skills and knowledge of public health. Students learn on their own, with the aid of tutors, through discussion, role-play, investigation of literature, and a small field survey. A wide variety of SCs have covered most of the current topics in public health: mental health, dental health, industrial health, maternal & child health, elderly care, terminal care and international health. Each SC has 5 to 10 questions which stimulate and direct the students' group discussion. Some of the questions do not have a correct answer, but the criteria used to evaluate the students included clarity, consistency, and comprehensiveness of their ideas in addition to their positive commitment to the group discussion. At the end of the week-long group learning, each group demonstrated the results of their discussion. Role play was often used to demonstrate what they learned. As a result, students participated actively, concentrated, and enjoyed the learning exercise very much. An anonymous survey shortly after SE showed that more than 80% of students felt a positive change in their rating of public health among the many subjects of study. Tutors also changed their rating of the students after observing their positive attitude and sometimes very creative ideas. In conclusion, we found SE to be useful for practical learning by medical students of public health. PMID:9623253

  1. Research on Assessment Methods for Urban Public Transport Development in China

    PubMed Central

    Zou, Linghong; Guo, Hongwei

    2014-01-01

    In recent years, with the rapid increase in urban population, urban travel demands in Chinese cities have been increasing dramatically. As a result, developing comprehensive urban transport systems becomes an inevitable choice to meet the growing urban travel demands. In urban transport systems, public transport plays the leading role in promoting sustainable urban development. This paper aims to establish an assessment index system for the development level of urban public transport consisting of a target layer, a criterion layer, and an index layer. A review of the existing literature shows that the methods used to evaluate urban public transport structure are predominantly qualitative. To overcome this shortcoming, the fuzzy mathematics method is used to describe qualitative issues quantitatively, and the AHP (analytic hierarchy process) is used to quantify experts' subjective judgments. The assessment model is established based on the fuzzy AHP: the weight of each index is determined through the AHP and the degree of membership of each index through the fuzzy assessment method, yielding the fuzzy synthetic assessment matrix. Finally, a case study is conducted to verify the rationality and practicability of the assessment system and the proposed assessment method. PMID:25530756
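The fuzzy-AHP scheme this abstract describes can be sketched in a few lines: the AHP weights come from the principal eigenvector of a pairwise comparison matrix, and the fuzzy synthetic assessment is the weight vector multiplied by the membership matrix. Everything below (the three indices, the pairwise judgments, the membership grades) is invented for illustration; the paper's actual index system is not reproduced here.

```python
# Sketch of a fuzzy-AHP assessment (illustrative numbers throughout).

def ahp_weights(pairwise, iters=100):
    """Principal-eigenvector weights of an AHP pairwise comparison matrix,
    obtained by power iteration and normalized to sum to 1."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

def fuzzy_evaluate(weights, membership):
    """Fuzzy synthetic assessment B = W . R, normalized over the grades."""
    grades = len(membership[0])
    b = [sum(weights[i] * membership[i][g] for i in range(len(weights)))
         for g in range(grades)]
    s = sum(b)
    return [x / s for x in b]

# Three hypothetical indices (say coverage, punctuality, comfort) compared
# pairwise on the usual 1-9 AHP scale.
pairwise = [[1.0, 3.0, 5.0],
            [1.0 / 3.0, 1.0, 2.0],
            [1.0 / 5.0, 1.0 / 2.0, 1.0]]
# Degree of membership of each index in the grades (good, fair, poor).
membership = [[0.6, 0.3, 0.1],
              [0.4, 0.4, 0.2],
              [0.2, 0.5, 0.3]]

w = ahp_weights(pairwise)
b = fuzzy_evaluate(w, membership)
print([round(x, 3) for x in w], [round(x, 3) for x in b])
```

The largest entry of `b` then gives the overall grade of the transport system under this (made-up) judgment matrix.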

  2. A stoichiometric calibration method for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

    2014-04-01

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic reconstruction algorithm (filtered back projection). With a more advanced method (sinogram affirmed iterative technique), the values become 1.0 mm, 0.5 mm and 0.4 mm for protons, helium and carbon ions, respectively. These results allow one to conclude that the present adaptation of the stoichiometric calibration yields a highly accurate method for characterizing tissue with DECT for ion beam therapy and potentially for photon beam therapy.
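The paper's rigorous EAN definition is not reproduced in the abstract. As a rough illustration of the quantities involved, here is the common power-law effective atomic number computed from an elemental composition; the exponent beta = 3.31 and the choice of water are assumptions of this sketch, not the paper's formalism.

```python
# Illustrative effective atomic number (EAN) via the power-law definition
# Z_eff = (sum_i lambda_i * Z_i**beta) ** (1/beta), where lambda_i are
# electron fractions. beta = 3.31 is a commonly quoted value, assumed here.

WATER = {"H": (1, 1.008, 0.1119), "O": (8, 15.999, 0.8881)}  # Z, A, mass fraction

def electron_fractions(composition):
    """Fraction of electrons contributed by each element (w_i * Z_i / A_i)."""
    ne = {el: w * z / a for el, (z, a, w) in composition.items()}
    total = sum(ne.values())
    return {el: v / total for el, v in ne.items()}

def effective_atomic_number(composition, beta=3.31):
    lam = electron_fractions(composition)
    return sum(lam[el] * composition[el][0] ** beta
               for el in composition) ** (1.0 / beta)

ean_water = effective_atomic_number(WATER)
print(round(ean_water, 2))  # about 7.5 for water with this exponent
```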

  3. Interactive computer methods for generating mineral-resource maps

    USGS Publications Warehouse

    Calkins, James Alfred; Crosby, A.S.; Huffman, T.E.; Clark, A.L.; Mason, G.T.; Bascle, R.J.

    1980-01-01

    Inasmuch as maps are a basic tool of geologists, the U.S. Geological Survey's CRIB (Computerized Resources Information Bank) was constructed so that the data it contains can be used to generate mineral-resource maps. However, with the standard methods used (batch processing and off-line plotting), the production of a finished map commonly takes 2-3 weeks. To produce computer-generated maps more rapidly, cheaply, and easily, and also to provide an effective demonstration tool, we have devised two related methods for plotting maps as alternatives to conventional batch methods. These methods are: 1. Quick-Plot, an interactive program whose output appears on a CRT (cathode-ray-tube) device, and 2. The Interactive CAM (Cartographic Automatic Mapping system), which combines batch and interactive runs. The output of the Interactive CAM system is final compilation (not camera-ready) paper copy. Both methods are designed to use data from the CRIB file in conjunction with a map-plotting program. Quick-Plot retrieves a user-selected subset of data from the CRIB file, immediately produces an image of the desired area on a CRT device, and plots data points according to a limited set of user-selected symbols. This method is useful for immediate evaluation of the map and for demonstrating how trial maps can be made quickly. The Interactive CAM system links the output of an interactive CRIB retrieval to a modified version of the CAM program, which runs in the batch mode and stores plotting instructions on a disk, rather than on a tape. The disk can be accessed by a CRT, and, thus, the user can view and evaluate the map output on a CRT immediately after a batch run, without waiting 1-3 days for an off-line plot. The user can, therefore, do most of the layout and design work in a relatively short time by use of the CRT, before generating a plot tape and having the map plotted on an off-line plotter.

  4. Computation of three-dimensional Brinkman flows using regularized methods

    NASA Astrophysics Data System (ADS)

    Cortez, Ricardo; Cummins, Bree; Leiderman, Karin; Varela, Douglas

    2010-10-01

    The Brinkman equations of fluid motion are a model of flows in a porous medium. We develop the exact solution of the Brinkman equations for three-dimensional incompressible flow driven by regularized forces. Two different approaches to the regularization are discussed and compared on test problems. The regularized Brinkman model is also applied to the unsteady Stokes equation for oscillatory flows since the latter leads to the Brinkman equations with complex permeability parameter. We provide validation studies of the method based on the flow and drag of a solid sphere translating in a Brinkman medium and the flow inside a cylindrical channel of circular cross-section. We present a numerical example of a swimming organism in a Brinkman flow which shows that the maximum swimming speed is obtained with a small but non-zero value of the porosity. We also demonstrate that unsteady Stokes flows with oscillatory forcing fall within the same framework and are computed with the same method by applying it to the motion of the oscillating feeding appendage of a copepod.

  5. A determination of antioxidant efficiencies using ESR and computational methods

    NASA Astrophysics Data System (ADS)

    Rhodes, Christopher J.; Tran, Thuy T.; Morris, Harry

    2004-05-01

    Using Transition-State Theory, experimental rate constants, determined over a range of temperatures, for reactions of Vitamin E type antioxidants are analysed in terms of their enthalpies and entropies of activation. It is further shown that computational methods may be employed to calculate enthalpies and entropies, and hence Gibbs free energies, for the overall reactions. Within the linear free energy relationship (LFER) assumption, that the Gibbs free energy of activation is proportional to the overall Gibbs free energy change for the reaction, it is possible to rationalise, and even to predict, the relative contributions of enthalpy and entropy for reactions of interest, involving potential antioxidants. A method is devised, involving a competitive reaction between •CH3 radicals and both the spin-trap PBN and the antioxidant, which enables the relatively rapid determination of a relative ordering of activities for a series of potential antioxidant compounds, and also of their rate constants for scavenging •CH3 radicals (relative to the rate constant for addition of •CH3 to PBN).
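The Transition-State Theory analysis mentioned above rests on the Eyring equation, k = (kB*T/h) * exp(-dG/(R*T)). A minimal sketch of inverting it to extract the Gibbs free energy of activation from a measured rate constant; the rate constant and temperature below are illustrative, not data from the study.

```python
import math

# Physical constants (SI units).
KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618     # gas constant, J/(mol*K)

def gibbs_activation(k, temperature):
    """Gibbs free energy of activation (J/mol) from a first-order rate
    constant k (1/s), by inverting the Eyring equation."""
    return -R * temperature * math.log(k * H / (KB * temperature))

# Hypothetical rate constant of 1e6 1/s at 298.15 K.
dg = gibbs_activation(k=1.0e6, temperature=298.15)
print(round(dg / 1000, 1), "kJ/mol")  # about 38.8 kJ/mol
```

Repeating this over a range of temperatures and fitting dG = dH - T*dS separates the enthalpy and entropy of activation, which is the decomposition the abstract describes.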

  6. Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.

    2011-01-01

    This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…

  7. 26 CFR 1.9001-1 - Change from retirement to straight-line method of computing depreciation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...Change from retirement to straight-line method of computing depreciation. 1...Change from retirement to straight-line method of computing depreciation. ...the retirement to the straight-line method of computing the allowance...

  8. 26 CFR 1.9001-1 - Change from retirement to straight-line method of computing depreciation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...Change from retirement to straight-line method of computing depreciation. 1...Change from retirement to straight-line method of computing depreciation. ...the retirement to the straight-line method of computing the allowance...

  9. 26 CFR 1.9001-1 - Change from retirement to straight-line method of computing depreciation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...Change from retirement to straight-line method of computing depreciation. 1...Change from retirement to straight-line method of computing depreciation. ...the retirement to the straight-line method of computing the allowance...

  10. 26 CFR 1.9001-1 - Change from retirement to straight-line method of computing depreciation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...Change from retirement to straight-line method of computing depreciation. 1...Change from retirement to straight-line method of computing depreciation. ...the retirement to the straight-line method of computing the allowance...

  11. 26 CFR 1.9001-1 - Change from retirement to straight-line method of computing depreciation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...Change from retirement to straight-line method of computing depreciation. 1...Change from retirement to straight-line method of computing depreciation. ...the retirement to the straight-line method of computing the allowance...

  12. Method for Rapid Recovery Path Computation on Mesh IP Network

    Microsoft Academic Search

    T. Masayuki

    2008-01-01

    A collection of slides from the author's seminar presentation is given. These discuss centralized management architecture, path computation and restoration, recovery by descending order, greedy algorithm, heuristic algorithm, computation time, network for simulation, and test for optimality.

  13. Computational intelligence methods for information understanding and information management

    E-print Network

    Jankowski, Norbert

    Wlodzislaw Duch1,2, Norbert Jankowski1 and Krzysztof Grąbczewski1; 1 Department of Informatics, Nicolaus Copernicus University, Torun, Poland, and 2 Department of Computer Science, School of Computer Engineering

  14. Soft Computing Explains Heuristic Numerical Methods in Data Processing and in Logic Programming

    E-print Network

    Kreinovich, Vladik

    Soft Computing Explains Heuristic Numerical Methods in Data Processing and in Logic Programming soft computing approaches explain and justify heuristic numerical methods used in data processing fixed point theorems, etc. Introduction What is soft computing good for? Traditional viewpoint. When

  16. A Statistical Method for Time Synchronization of Computer Clocks with Precisely Frequency-Synchronized Oscillators

    Microsoft Academic Search

    Takao Yamashita; Satoshi Ono

    1998-01-01

    With the growth of computer networks, continuous media communication and processing are expected in distributed systems. Continuous media communication and processing require both time and frequency synchronization between computer clocks. The authors propose a statistical time synchronization method for computer clocks that have precisely frequency-synchronized oscillators. This method not only improves time synchronization accuracy but also prevents degradation of the
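The statistical idea behind this record can be sketched simply: with a drift-free (frequency-synchronized) clock, the time offset is a constant, so averaging many noisy two-way measurements beats trusting any single exchange. The two-way offset formula ((t2 - t1) + (t3 - t4)) / 2 is the classic NTP-style estimate; the delay distributions below are simulated assumptions, not the authors' model.

```python
import random

def two_way_offset(t1, t2, t3, t4):
    """Classic two-way clock offset estimate from the four timestamps of
    one request/response exchange."""
    return ((t2 - t1) + (t3 - t4)) / 2

def estimate_offset(true_offset=0.050, n=500, seed=7):
    """Average many simulated two-way measurements of a constant offset."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        t1 = rng.uniform(0, 1000)       # client send (client clock)
        up = rng.uniform(0.001, 0.020)  # asymmetric, noisy network delays
        down = rng.uniform(0.001, 0.020)
        t2 = t1 + up + true_offset      # server receive (server clock)
        t3 = t2 + 0.0005                # server send after processing
        t4 = t3 - true_offset + down    # client receive (client clock)
        samples.append(two_way_offset(t1, t2, t3, t4))
    return sum(samples) / len(samples)  # statistical averaging step

est = estimate_offset()
print(round(est, 4))
```

Each single sample is biased by the delay asymmetry (up - down)/2; because the true offset is constant, the mean of many samples converges to it.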

  17. Recent developments in the Green's function method. [for aerodynamics computer program]

    NASA Technical Reports Server (NTRS)

    Tseng, K.; Puglise, J. A.; Morino, L.

    1977-01-01

    A recent computational development in the Green's function method (the method used in the computer program SOUSSA: Steady, Oscillatory and Unsteady Subsonic and Supersonic Aerodynamics) is presented. A scheme consisting of combined numerical (Gaussian quadrature) and analytical procedures for the evaluation of the source and doublet integrals used in the program is described. This combination results in an 80 to 90% reduction in computer time.
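The Gaussian-quadrature half of that combined scheme can be illustrated in isolation: 4-point Gauss-Legendre quadrature applied to a 1/r source-like kernel over a unit panel, checked against the closed form. The SOUSSA integrals are surface integrals over panels; this 1-D example shows only the quadrature idea, with a field point chosen arbitrarily.

```python
import math

# 4-point Gauss-Legendre nodes and weights on [-1, 1].
NODES = [-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116]
WEIGHTS = [0.3478548451, 0.6521451549, 0.6521451549, 0.3478548451]

def gauss_legendre(f, a, b):
    """Approximate the integral of f over [a, b] with 4-point Gauss-Legendre."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in zip(NODES, WEIGHTS))

# Source-like kernel 1/r: distance from a point (x, 0) on the panel [0, 1]
# to a field point at (2, 1), so r = sqrt((x - 2)^2 + 1).
num = gauss_legendre(lambda x: 1.0 / math.sqrt((x - 2.0) ** 2 + 1.0), 0.0, 1.0)
exact = math.asinh(2.0) - math.asinh(1.0)  # closed form of the same integral
print(num, exact)
```

For a smooth kernel like this (field point off the panel), four Gauss points already reproduce the analytical value to several digits, which is why mixing quadrature with analytical evaluation of the singular parts pays off.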

  18. The Computer Experience Microvan Program: A Cooperative Endeavor to Improve University-Public School Relations through Technology.

    ERIC Educational Resources Information Center

    Amodeo, Luiza B.; Martin, Jeanette

    To a large extent the Southwest can be described as a rural area. Under these circumstances, programs for public understanding of technology become, first of all, exercises in logistics. In 1982, New Mexico State University introduced a program to inform teachers about computer technology. This program takes microcomputers into rural classrooms…

  19. Preprint version. ACM Journal on Computing and Cultural Heritage, Vol. 5, No. 2, Article 8, Publication date: July 2012.

    E-print Network

    Paris-Sud XI, Université de

    Preprint version. ACM Journal on Computing and Cultural Heritage, Vol. 5, No. 2, Article 8, Publication date: July 2012. Qualitative Evaluation of Cultural Heritage Information Modelling Techniques cultural heritage domain concepts. Evaluations of the modelling techniques were performed by carrying out

  20. Numerical Reliability of Data Analysis Systems Submitted for publication in: Computational Statistics & Data Analysis (ver 1.7)

    E-print Network

    Sawitzki, Günther

    Numerical Reliability of Data Analysis Systems Submitted for publication in: Computational Statistics & Data Analysis (ver 1.7) Testing Numerical Reliability of Data Analysis Systems Günther Sawitzki quality control Introduction Reliable performance is the main reason to use well-established statistical

  1. 78 FR 54453 - Notice of Public Meeting-Intersection of Cloud Computing and Mobility Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ...Notice of Public Meeting--Intersection of Cloud Computing and Mobility Forum and Workshop...NIST) announces the Intersection of Cloud and Mobility Forum and Workshop to be held...held each day. The NIST Intersection of Cloud and Mobility Forum and Workshop will...

  2. A review of data quality assessment methods for public health information systems.

    PubMed

    Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping

    2014-05-01

    High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. Relevant studies were identified in major databases and on well-known institutional websites. We found that the dimension of data was the most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interviews and documentation reviews. The limitations of the reviewed studies included inattentiveness to data use and the data collection process, inconsistency in the definitions of data quality attributes, failure to address data users' concerns, and a lack of systematic procedures in data quality assessment. This review is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research effort should be given to assessing the quality of data use and of the data collection process. PMID:24830450
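The three most-used attributes named in this review (completeness, accuracy, timeliness) lend themselves to simple quantitative checks. A minimal sketch follows; the field names, records, and gold-standard audit values are invented for illustration, not taken from the reviewed systems.

```python
from datetime import date

# Toy surveillance records; record 2 has a missing diagnosis field.
records = [
    {"id": 1, "diagnosis": "A09", "reported": date(2014, 1, 5)},
    {"id": 2, "diagnosis": None,  "reported": date(2014, 1, 20)},
    {"id": 3, "diagnosis": "J10", "reported": date(2014, 2, 2)},
]

def completeness(records, field):
    """Fraction of records with a non-missing value for the field."""
    filled = sum(1 for r in records if r.get(field) is not None)
    return filled / len(records)

def accuracy(records, field, gold):
    """Fraction of non-missing values that match a gold-standard audit."""
    checked = [(r["id"], r[field]) for r in records if r[field] is not None]
    return sum(1 for rid, v in checked if gold.get(rid) == v) / len(checked)

def timeliness(records, deadline):
    """Fraction of records reported on or before the deadline."""
    return sum(1 for r in records if r["reported"] <= deadline) / len(records)

gold = {1: "A09", 3: "J11"}  # audit values; record 3 disagrees
print(completeness(records, "diagnosis"),
      round(accuracy(records, "diagnosis", gold), 2),
      round(timeliness(records, date(2014, 1, 31)), 2))
```

Descriptive surveys and data audits, the main quantitative methods the review found, essentially apply checks of this shape at scale.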

  3. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

    SciTech Connect

    Luttman, A.

    2012-03-30

    The main long-term goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis of the observed spatiotemporal evolution combines an objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis that proceeds from both observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer of the resulting problem is computed. An application to oceanic flow based on sea surface temperature is presented.

  4. Computational methods for reentry trajectories and risk assessment

    NASA Astrophysics Data System (ADS)

    Anselmo, Luciano; Pardini, Carmen

    The trajectory modeling of uncontrolled satellites close to reentry in the atmosphere is still a challenging activity. Tracking data may be sparse and not particularly accurate, the objects' complicated shapes and unknown attitude evolution may complicate the aerodynamic computations and, last but not least, the models used to predict the air density at the altitudes of interest, as a function of solar and geomagnetic activity, are affected by significant uncertainties. After a brief overview of the relevance of the risk related to satellite reentries and debris survival down to the ground, the paper describes some of the methods and techniques developed in support of the reentry predictions carried out for civil protection purposes. An appropriate management of the intrinsic uncertainties of the problem is in fact critical for the dissemination of the information, avoiding, as much as possible, misunderstandings and unjustified alarm. Special attention is paid to the evaluation of the risk, the availability of orbit determinations, the uncertainties of the residual lifetime estimation, and the definition of reentry and risk windows. When possible, the discussion is supported by real data, results and examples, often based on the authors' direct experience and research.

  5. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
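The probability-of-failure idea described above can be illustrated with a plain Monte Carlo sketch for a single limit state g = capacity - demand. The distributions are invented, and the cited work uses far more efficient advanced reliability methods than brute-force sampling; this only shows what is being estimated.

```python
import random

def failure_probability(n=200_000, seed=42):
    """Monte Carlo estimate of P(g < 0) for g = capacity - demand with
    normally distributed capacity and demand (illustrative parameters)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        capacity = rng.gauss(10.0, 1.0)  # e.g. member strength
        demand = rng.gauss(6.0, 1.5)     # e.g. applied load effect
        if capacity - demand < 0.0:      # limit state violated -> failure
            failures += 1
    return failures / n

pf = failure_probability()
print(pf)  # analytic value is Phi(-4 / sqrt(1 + 2.25)), roughly 0.013
```

The safety-factor approach criticized in the abstract would only compare the two means (10 versus 6); the probabilistic view quantifies how often the tails overlap.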

  6. Archives of Computational Methods in Engineering, 2010 (In Press) Proper Generalized Decompositions and separated

    E-print Network

    Boyer, Edmond

    Archives of Computational Methods in Engineering, 2010 (In Press) Proper Generalized Decompositions Received: date / Accepted: date Abstract Uncertainty quantification and propagation in physical systems is the product of deterministic and stochastic approximation spaces. The computation of the approximate

  7. A parallel domain decomposition finite element method for massively parallel computers

    SciTech Connect

    Su, P.S.; Fulton, R.E. [Georgia Institute of Technology, Atlanta, GA (United States)] [Georgia Institute of Technology, Atlanta, GA (United States)

    1993-12-01

    New massively parallel computer architectures have revolutionized the design of computer algorithms, and promise to have significant influence on algorithms for engineering computations. The traditional global model method has a limited benefit for massively parallel computers. An alternative method is to use the domain decomposition approach. This paper explores the potential for the domain decomposition strategy through actual computations. The example of a three-dimensional linear static finite element analysis is presented on the BBN Butterfly TC2000 massively parallel computer with up to 104 processors. The numerical results indicate that the parallel domain decomposition method requires less computation time than the parallel global model method. Also, the parallel domain decomposition approach offers better speed-up than the parallel global model method.
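The domain decomposition strategy can be illustrated on a deliberately tiny problem: the 1-D Laplace equation split into two overlapping subdomains that are relaxed alternately (an overlapping Schwarz iteration). In the paper each subdomain is a large 3-D finite element block on its own processor; here the two subdomain solves run sequentially, purely to show the mechanism.

```python
# Toy overlapping-Schwarz domain decomposition for u'' = 0 on [0, 1]
# with u(0) = 0 and u(1) = 1 (exact solution u(x) = x).

def relax(u, lo, hi, sweeps=500):
    """Gauss-Seidel sweeps over the interior points lo+1..hi-1 of u,
    with u[lo] and u[hi] held fixed as subdomain boundary values."""
    for _ in range(sweeps):
        for i in range(lo + 1, hi):
            u[i] = 0.5 * (u[i - 1] + u[i + 1])

n = 20
u = [0.0] * (n + 1)
u[n] = 1.0
mid, overlap = n // 2, 2
for _ in range(10):               # outer Schwarz iterations
    relax(u, 0, mid + overlap)    # "processor 1": left subdomain
    relax(u, mid - overlap, n)    # "processor 2": right subdomain

err = max(abs(u[i] - i / n) for i in range(n + 1))
print(err)
```

The overlap is what couples the subdomains: each outer iteration propagates boundary information across the interface, and the error contracts geometrically, which is why the approach parallelizes well compared with one global solve.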

  8. Parallel Computing Environments and Methods for Power Distribution System Simulation

    Microsoft Academic Search

    Ning Lu; Z. Todd Taylor; David P. Chassin; Ross T. Guttromson; R. Scott Studham

    2004-01-01

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small

  9. 16.901 Computational Methods in Aerospace Engineering, Spring 2003

    E-print Network

    Darmofal, David L.

    Introduction to computational techniques arising in aerospace engineering. Applications drawn from aerospace structures, aerodynamics, dynamics and control, and aerospace systems. Techniques include: numerical integration ...

  10. Validation of viscous and inviscid computational methods for turbomachinery components

    NASA Technical Reports Server (NTRS)

    Povinelli, L. A.

    1986-01-01

    An assessment of several three-dimensional computer codes used at the NASA Lewis Research Center is presented. Four flow situations are examined, for which both experimental data and computational results are available. The four flows form a basis for the evaluation of the computational procedures. It is concluded that transonic rotor flow at peak efficiency conditions may be calculated with a reasonable degree of accuracy, whereas off-design conditions are not accurately determined. Duct flows and turbine cascade flows may also be computed with reasonable accuracy, whereas radial inflow turbine flow remains a challenging problem.

  11. ACUTRI a computer code for assessing doses to the general public due to acute tritium releases

    E-print Network

    Yokoyama, S; Noguchi, H; Ryufuku, S; Sasaki, T

    2002-01-01

    Tritium, which is used as a fuel of a D-T burning fusion reactor, is the most important radionuclide for the safety assessment of a nuclear fusion experimental reactor such as ITER. Thus, a computer code, ACUTRI, which calculates the radiological impact of tritium released accidentally to the atmosphere, has been developed, aiming to be of use in discussions of the licensing of a fusion experimental reactor and of environmental safety evaluation methods in Japan. ACUTRI calculates an individual tritium dose based on transfer models specific to tritium in the environment and on ICRP dose models. In this calculation it is also possible to perform statistical analysis of the meteorology in the same way as conventional dose assessment methods according to the meteorological guide of the Nuclear Safety Commission of Japan. A Gaussian plume model is used for calculating the atmospheric dispersion of tritium gas (HT) and/or tritiated water (HTO). The environmental pathway model in ACUTRI considers the following internal exposures: i...
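The Gaussian plume model named in this record has a simple closed form for the special case of a ground-level release and a ground-level receptor. A sketch with illustrative source strength, wind speed, and dispersion parameters follows; ACUTRI itself layers tritium-specific transfer and dose models on top of dispersion calculations like this.

```python
import math

def plume_concentration(q, u, y, sigma_y, sigma_z):
    """Ground-level air concentration (Bq/m^3) downwind of a continuous
    ground-level release of strength q (Bq/s) in wind speed u (m/s):
    chi = q / (pi * sigma_y * sigma_z * u) * exp(-y^2 / (2 * sigma_y^2)),
    where y is the crosswind offset (m) and sigma_y, sigma_z (m) are the
    dispersion parameters at the downwind distance of interest."""
    return q / (math.pi * sigma_y * sigma_z * u) * math.exp(
        -y * y / (2.0 * sigma_y * sigma_y))

# Illustrative accidental release: 1e12 Bq/s in a 2 m/s wind, evaluated
# on the plume centerline with sigma_y = 30 m, sigma_z = 15 m.
chi = plume_concentration(q=1.0e12, u=2.0, y=0.0, sigma_y=30.0, sigma_z=15.0)
print(f"{chi:.3e}")
```

An individual dose estimate would then multiply such concentrations by breathing rate and the tritium-specific dose coefficients from the ICRP models mentioned above.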

  12. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
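The free-energy-minimization approach can be sketched for the simplest possible case, an ideal A <-> B isomerization: minimize G(x) over the extent of reaction x and compare the result with the analytic equilibrium composition. The standard-state potentials are illustrative values, and the bracketing search here is just one simple way to minimize; the program described in the record may use a different scheme.

```python
import math

R = 8.314462618  # gas constant, J/(mol*K)

def gibbs(x, mu_a, mu_b, t):
    """Total Gibbs energy per mole for ideal A <-> B at extent x:
    G(x) = (1-x)*mu_A + x*mu_B + R*T*[(1-x)ln(1-x) + x ln x]."""
    return (1 - x) * mu_a + x * mu_b + R * t * (
        (1 - x) * math.log(1 - x) + x * math.log(x))

def minimize_gibbs(mu_a, mu_b, t, lo=1e-9, hi=1 - 1e-9, iters=200):
    """Shrink the bracket around the minimum of the (convex) G(x)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if gibbs(m1, mu_a, mu_b, t) < gibbs(m2, mu_a, mu_b, t):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

t = 298.15
dg0 = -2000.0                              # mu_B - mu_A, J/mol (illustrative)
x = minimize_gibbs(0.0, dg0, t)
k = math.exp(-dg0 / (R * t))               # equilibrium constant from dG0
print(round(x, 4), round(k / (1 + k), 4))  # minimizer matches K/(1+K)
```

The point of the teaching method is visible here: the minimizer of G reproduces the familiar mass-action result x/(1-x) = K without ever writing the equilibrium-constant expression down.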

  13. The Adomian decomposition method for computing eigenelements of Sturm-Liouville two point boundary value problems

    Microsoft Academic Search

    Basem S. Attili

    2005-01-01

    We will consider the Adomian decomposition method for computing the eigenelements of Sturm–Liouville two point boundary value problem. The method proved to be very successful and powerful in computing such elements. Numerical examples showed the competitive nature of the method. Comparison with the results of others will also be presented.

  14. APPROVAL SHEET Title of Dissertation: A Covariance Matrix Method for the Computation

    E-print Network

    Maryland, Baltimore County, University of

    APPROVAL SHEET Title of Dissertation: A Covariance Matrix Method for the Computation of Bit Errors ... matrix method to compute accurate bit error rates in a highly nonlinear dispersion-managed soliton ... , and C. R. Menyuk, "Optimization of the split-step Fourier method in modeling optical fiber

  15. Methods for assessing the quality of data in public health information systems: a critical review.

    PubMed

    Chen, Hong; Yu, Ping; Hailey, David; Wang, Ning

    2014-01-01

    The quality of data in public health information systems can be ensured by effective data quality assessment. In order to conduct effective data quality assessment, measurable data attributes have to be precisely defined. Then reliable and valid measurement methods have to be used to measure each attribute. We conducted a systematic review of data quality assessment methods for public health using major databases and well-known institutional websites. 35 studies were eligible for inclusion. A total of 49 attributes of data quality were identified from the literature. Completeness, accuracy and timeliness were the three most frequently assessed attributes of data quality. Most studies directly examined data values; this was complemented by exploring either data users' perceptions or documentation quality. However, current data quality assessment methods have limitations: a lack of consensus on the attributes measured; inconsistent definitions of the data quality attributes; a lack of mixed methods for assessing data quality; and inadequate attention to reliability and validity. Removal of these limitations is an opportunity for further improvement. PMID:25087521

  16. Complementing computational fluid dynamics methods with classical analytical techniques

    Microsoft Academic Search

    A. Verhoff

    1999-01-01

    New aerospace vehicle designs must have greater performance and versatility at affordable cost. This requires multi-disciplinary analysis and optimization which in turn requires more accurate and efficient numerical simulation tools. The need for greater accuracy and efficiency of computational fluid dynamics (CFD) tools is further amplified by the industry trend toward distributed computing (e.g. workstation clusters) and away from supercomputers.

  17. Analytic and simulation methods in computer network design*

    E-print Network

    Kleinrock, Leonard

    familiar) is the Defense Department's Advanced Research Projects Agency (ARPA) experimental computer of the Advanced Research Projects Agency, who originally conceived this system. Reference 6, which appears Projects Agency of the Department of Defense (DAHC15-69-C-0285). THE ARPA EXPERIMENTAL COMPUTER NETWORK

  18. Stochastic Lagrangian Method for Downscaling Problems in Computational Fluid Dynamics

    E-print Network

    Paris-Sud XI, Université de

    mathematicians as long as deterministic tools are used. Among others, let us quote the Adaptive Mesh Refinement for the downscaling in Computational Fluid Dynamics (CFD). For numerous practical reasons (computational cost, we consider a new approach for the downscaling in CFD, although the authors are particularly

  19. One-to-One Computing in Public Schools: Lessons from "Laptops for All" Programs

    ERIC Educational Resources Information Center

    Abell Foundation, 2008

    2008-01-01

    The basic tenet of one-to-one computing is that the student and teacher have Internet-connected, wireless computing devices in the classroom and optimally at home as well. Also known as "ubiquitous computing," this strategy assumes that every teacher and student has her own computing device and obviates the need for moving classes to computer

  20. Private and Public Sector Enterprise Resource Planning System Post-Implementation Practices: A Comparative Mixed Method Investigation

    ERIC Educational Resources Information Center

    Bachman, Charles A.

    2010-01-01

While private sector organizations have implemented enterprise resource planning (ERP) systems since the mid-1990s, ERP implementations within the public sector lagged by several years. This research conducted a mixed method, comparative assessment of post "go-live" ERP implementations between public and private sector organizations. Based on a…

  1. Methods of legitimation: how ethics committees decide which reasons count in public policy decision-making.

    PubMed

    Edwards, Kyle T

    2014-07-01

    In recent years, liberal democratic societies have struggled with the question of how best to balance expertise and democratic participation in the regulation of emerging technologies. This study aims to explain how national deliberative ethics committees handle the practical tension between scientific expertise, ethical expertise, expert patient input, and lay public input by explaining two institutions' processes for determining the legitimacy or illegitimacy of reasons in public policy decision-making: that of the United Kingdom's Human Fertilisation and Embryology Authority (HFEA) and the United States' American Society for Reproductive Medicine (ASRM). The articulation of these 'methods of legitimation' draws on 13 in-depth interviews with HFEA and ASRM members and staff conducted in January and February 2012 in London and over Skype, as well as observation of an HFEA deliberation. This study finds that these two institutions employ different methods in rendering certain arguments legitimate and others illegitimate: while the HFEA attempts to 'balance' competing reasons but ultimately legitimizes arguments based on health and welfare concerns, the ASRM seeks to 'filter' out arguments that challenge reproductive autonomy. The notably different structures and missions of each institution may explain these divergent approaches, as may what Sheila Jasanoff (2005) terms the distinctive 'civic epistemologies' of the US and the UK. Significantly for policy makers designing such deliberative committees, each method differs substantially from that explicitly or implicitly endorsed by the institution. PMID:24833251

  2. Large-Scale Automated Analysis of News Media: A Novel Computational Method for Obesity Policy Research

    PubMed Central

    Hamad, Rita; Pomeranz, Jennifer L.; Siddiqi, Arjumand; Basu, Sanjay

    2015-01-01

    Objective Analyzing news media allows obesity policy researchers to understand popular conceptions about obesity, which is important for targeting health education and policies. A persistent dilemma is that investigators have to read and manually classify thousands of individual news articles to identify how obesity and obesity-related policy proposals may be described to the public in the media. We demonstrate a novel method called “automated content analysis” that permits researchers to train computers to “read” and classify massive volumes of documents. Methods We identified 14,302 newspaper articles that mentioned the word “obesity” during 2011–2012. We examined four states that vary in obesity prevalence and policy (Alabama, California, New Jersey, and North Carolina). We tested the reliability of an automated program to categorize the media’s “framing” of obesity as an individual-level problem (e.g., diet) and/or an environmental-level problem (e.g., obesogenic environment). Results The automated program performed similarly to human coders. The proportion of articles with individual-level framing (27.7–31.0%) was higher than the proportion with neutral (18.0–22.1%) or environmental-level framing (16.0–16.4%) across all states and over the entire study period (p<0.05). Conclusion We demonstrate a novel approach to the study of how obesity concepts are communicated and propagated in news media. PMID:25522013
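As a rough illustration of what such automated content analysis involves (this is not the authors' code: the training texts, labels, and classifier choice below are hypothetical stand-ins for the hand-coded articles and trained model), a minimal bag-of-words Naive Bayes framing classifier might look like this:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes model on (text, label) pairs."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        label_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return label_counts, word_counts, vocab

def classify(model, text):
    """Return the most probable framing label (Laplace smoothing)."""
    label_counts, word_counts, vocab = model
    total_docs = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total_docs)
        n = sum(word_counts[label].values())
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical hand-coded training snippets (stand-ins for real articles).
training = [
    ("poor diet and lack of exercise cause obesity", "individual"),
    ("personal choices about diet drive weight gain", "individual"),
    ("food deserts and obesogenic environment fuel obesity", "environmental"),
    ("city zoning limits access to healthy food", "environmental"),
]

model = train_nb(training)
print(classify(model, "diet and exercise choices"))          # individual
print(classify(model, "obesogenic environment and zoning"))  # environmental
```

In practice the study's approach trains on thousands of human-coded articles and validates against held-out human coders, but the train-then-classify structure is the same.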

  3. A New Method of Building Keyboarding Speed on the Computer.

    ERIC Educational Resources Information Center

    Sharp, Walter M.

    1998-01-01

    Use of digraphs (pairs of letters representing single speech sounds) in keyboarding is facilitated by computer technology allowing analysis of speed between keystrokes. New software programs provide a way to develop keyboarding speed. (SK)
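The kind of inter-keystroke timing analysis described can be sketched in a few lines; the timestamped keystroke log and helper below are hypothetical, not taken from any of the software mentioned:

```python
from collections import defaultdict

def digraph_latencies(keystrokes):
    """Compute mean latency (ms) for each letter pair (digraph)
    from a list of (timestamp_ms, key) events in typing order."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for (t0, k0), (t1, k1) in zip(keystrokes, keystrokes[1:]):
        sums[k0 + k1] += t1 - t0
        counts[k0 + k1] += 1
    return {dg: sums[dg] / counts[dg] for dg in sums}

# Hypothetical log of a student typing "thethe"
log = [(0, "t"), (120, "h"), (250, "e"), (500, "t"), (590, "h"), (730, "e")]
lat = digraph_latencies(log)
print(lat["th"])  # mean of 120 and 90 -> 105.0
print(lat["he"])  # mean of 130 and 140 -> 135.0
```

Slow digraphs identified this way are the natural targets for focused speed drills.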

  4. Fast Methods for Computing the $p$-Radius of Matrices

    E-print Network

    Jungers, Raphael M.

    The $p$-radius characterizes the average rate of growth of norms of matrices in a multiplicative semigroup. This quantity has found several applications in recent years. We raise the question of its computability. We prove ...
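For context, the $p$-radius of a finite set of $n \times n$ matrices $\mathcal{M} = \{A_1, \dots, A_m\}$ is commonly defined (standard background, not quoted from the abstract) as

```latex
\rho_p(\mathcal{M}) \;=\; \lim_{k \to \infty}
\left( \frac{1}{m^{k}} \sum_{(i_1,\dots,i_k) \in \{1,\dots,m\}^{k}}
\bigl\| A_{i_1} A_{i_2} \cdots A_{i_k} \bigr\|^{p} \right)^{1/(pk)},
```

i.e. the average (in the $L^p$ sense) growth rate of norms of length-$k$ products as $k \to \infty$; as $p \to \infty$ it approaches the joint spectral radius.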

  5. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12-14, 1991. It is an overview of CFD activities at NASA Lewis Research Center, where the main thrust of computational work is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are discussed, along with examples of results obtained with the most recent algorithm developments.

  6. Optimization methods for complex sheet metal stamping computer aided engineering

    Microsoft Academic Search

    Giuseppe Ingarao; Rosa Di Lorenzo

    2010-01-01

Nowadays, the design of sheet metal stamping processes is not a trivial task due to the complex issues to be taken into account (complex shapes forming, conflicting design goals and so on). Therefore, proper design methodologies to reduce times and costs have to be developed, mostly based on computer aided procedures. In this paper, a computer aided approach is proposed with the

  7. Eliciting road traffic injuries cost among Iranian drivers’ public vehicles using willingness to pay method

    PubMed Central

    Ainy, Elaheh; Soori, Hamid; Ganjali, Mojtaba; Baghfalaki, Taban

    2015-01-01

Background and Aim: To allocate resources at the national level and ensure the safety level of roads with the aim of economic efficiency, cost calculation can help determine the size of the problem and demonstrate the economic benefits resulting from preventing such injuries. This study was carried out to elicit the cost of traffic injuries among Iranian drivers of public vehicles. Materials and Methods: In a cross-sectional study, 410 drivers of public vehicles were randomly selected from all the drivers in the city of Tehran, Iran. The research questionnaire was prepared based on the standard for the willingness to pay (WTP) method (stated preference (SP), contingent valuation (CV), and revealed preference (RP) models). Data were collected along with a scenario for vehicle drivers. Inclusion criteria were having at least a high school education and being in the age range of 18 to 65 years old. Final analysis of willingness to pay was carried out using the Weibull model. Results: Mean WTP was 3,337,130 IRR among drivers of public vehicles. The statistical value of life was estimated at 118,222,552,601,648 IRR for 4,694 dead drivers, which was equivalent to $3,940,751,753 based on the free-market dollar rate of 30,000 IRR (purchasing power parity). Injury cost was 108,376,366,437,500 IRR, equivalent to $3,612,545,548. In sum, injury and death cases came to 226,606,472,346,449 IRR, equivalent to $7,553,549,078. Moreover, in 2013, the cost of traffic injuries among the drivers of public vehicles constituted 1.25% of gross national income, which was $604,300,000,000. WTP had a significant relationship with gender, daily payment, more payment for time reduction, more payment for less traffic, and being a minibus driver. Conclusion: The cost of traffic injuries among drivers of public vehicles constituted 1.25% of gross national income, which is considerable; minibus drivers had less perception of risk reduction than others.
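The Weibull model named in the methods links fitted scale and shape parameters to the mean WTP through the Weibull mean formula; a minimal sketch with made-up parameter values (not the study's estimates):

```python
import math

def weibull_mean(scale, shape):
    """Mean of a Weibull(scale, shape) distribution: scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

# Hypothetical fitted parameters, for illustration only.
# With shape = 1 the Weibull reduces to an exponential, so mean == scale.
print(weibull_mean(3_337_130, 1.0))  # 3337130.0
```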

  8. Moving finite elements: A continuously adaptive method for computational fluid dynamics

    Microsoft Academic Search

    A. H. Glasser; K. Miller; N. Carlson

    1991-01-01

    Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields. Examples include meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor

  9. A new era in scientific computing: Domain decomposition methods in hybrid CPU–GPU architectures

    Microsoft Academic Search

    M. Papadrakakis; G. Stavroulakis; A. Karatarakis

    2011-01-01

    Recent advances in graphics processing units (GPUs) technology open a new era in high performance computing. Applications of GPUs to scientific computations are attracting a lot of attention due to their low cost in conjunction with their inherently remarkable performance features and the recently enhanced computational precision and improved programming tools. Domain decomposition methods (DDM) constitute today an important category

  10. Review of the Use of Electroencephalography as an Evaluation Method for Human-Computer Interaction

    E-print Network

    Paris-Sud XI, Université de

Review of the Use of Electroencephalography as an Evaluation Method for Human-Computer Interaction. Keywords: HCI evaluation, EEG, ErrP, workload, attention, emotions. Abstract: Evaluating human-computer interaction… Physiological sensors help to improve the ergonomics of human-computer interaction (HCI) (Fairclough

  11. Using Aesthetic Computing as a Method for Customizing Model Structure: An Empirical Study

    Microsoft Academic Search

Paul Fishwick; Timothy Davis; Jane Douglas

We present empirical results from a new approach, Aesthetic Computing, to customizing model structures when designing models for systems found in mathematics and computer simulation. At the University of Florida, we have taught the methodology of Aesthetic Computing both as a separate class and within the context of a simulation class. Students in the simulation class were taught the method and

  12. Multi-scale problems, high performance computing and hybrid numerical methods

    E-print Network

    Cottet, Georges-Henri

Multi-scale problems, high performance computing and hybrid numerical methods. G. Balarac, LEGI, CNRS and Université de Grenoble, BP 53, 38041 Grenoble. The problem involves different ranges of scales in the fluid and in the scalar and requires important computational resources.

  13. Multi-scale problems, high performance computing and hybrid numerical methods

    E-print Network

    Paris-Sud XI, Université de

Multi-scale problems, high performance computing and hybrid numerical methods. The use of High Performance Computing (HPC) is no longer restricted to academia and scientific grand challenges. The problem involves different ranges of scales in the fluid and in the scalar and requires important computational resources.

  14. Finite volume methods applied to the computational modelling of welding phenomena

    Microsoft Academic Search

    Gareth A. TAYLOR; Michael Hughes; Nadia Strusevich; Koulis Pericleous

    2002-01-01

    This paper presents the computational modelling of welding phenomena within a versatile numerical framework. The framework embraces models from both the fields of computational fluid dynamics (CFD) and computational solid mechanics (CSM). With regard to the CFD modelling of the weld pool fluid dynamics, heat transfer and phase change, cell-centred finite volume (FV) methods are employed. Additionally, novel vertex-based FV

  15. Enforcing Trust-based Intrusion Detection in Cloud Computing Using Algebraic Methods

    E-print Network

    Paris-Sud XI, Université de

Enforcing Trust-based Intrusion Detection in Cloud Computing Using Algebraic Methods (Amira Bradai). An intrusion detection scheme for hybrid cloud computing is proposed. We consider a trust metric based on honesty and cooperation. Keywords: intrusion detection, Perron-Frobenius, cloud computing, hybrid execution, false alarms, security scores.

  16. SUBSPACE METHODS AND EQUILIBRATION IN COMPUTER VISION Matthias Muhlich and Rudolf Mester

    E-print Network

    Mester, Rudolf

SUBSPACE METHODS AND EQUILIBRATION IN COMPUTER VISION. Matthias Mühlich and Rudolf Mester, J. W. Goethe-Universität Frankfurt, Germany. (Muehlich|Mester)@iap.uni-frankfurt.de. ABSTRACT: Many computer vision problems (e.g. the estimation of the homography matrix)… 1. INTRODUCTION: The mathematical core of numerous computer vision problems

  17. Constructing analysis-suitable parameterization of computational domain from CAD boundary by variational harmonic method

    E-print Network

    Paris-Sud XI, Université de

Constructing analysis-suitable parameterization of computational domain from CAD boundary by variational harmonic method. We construct an analysis-suitable parameterization of the computational domain from the CAD boundary for 2D and 3D isogeometric applications. Isogeometric analysis is a computational approach that offers the possibility of seamless integration between CAD and CAE. The method uses

  18. Methods Used to Assess the Susceptibility to Contamination of Transient, Non-Community Public Ground-Water Supplies in Indiana

    USGS Publications Warehouse

    Arihood, Leslie D.; Cohen, David A.

    2006-01-01

    The Safe Water Drinking Act of 1974 as amended in 1996 gave each State the responsibility of developing a Source-Water Assessment Plan (SWAP) that is designed to protect public-water supplies from contamination. Each SWAP must include three elements: (1) a delineation of the source-water protection area, (2) an inventory of potential sources of contaminants within the area, and (3) a determination of the susceptibility of the public-water supply to contamination from the inventoried sources. The Indiana Department of Environmental Management (IDEM) was responsible for preparing a SWAP for all public-water supplies in Indiana, including about 2,400 small public ground-water supplies that are designated transient, non-community (TNC) supplies. In cooperation with IDEM, the U.S. Geological Survey compiled information on conditions near the TNC supplies and helped IDEM complete source-water assessments for each TNC supply. The delineation of a source-water protection area (called the assessment area) for each TNC ground-water supply was defined by IDEM as a circular area enclosed by a 300-foot radius centered at the TNC supply well. Contaminants of concern (COCs) were defined by IDEM as any of the 90 contaminants for which the U.S. Environmental Protection Agency has established primary drinking-water standards. Two of these, nitrate as nitrogen and total coliform bacteria, are Indiana State-regulated contaminants for TNC water supplies. IDEM representatives identified potential point and nonpoint sources of COCs within the assessment area, and computer database retrievals were used to identify potential point sources of COCs in the area outside the assessment area. Two types of methods-subjective and subjective hybrid-were used in the SWAP to determine susceptibility to contamination. 
Subjective methods involve decisions based upon professional judgment, prior experience, and (or) the application of a fundamental understanding of processes without the collection and analysis of data for a specific condition. Subjective hybrid methods combine subjective methods with quantitative hydrologic analyses. The subjective methods included an inventory of potential sources and associated contaminants, and a qualitative description of the inherent susceptibility of the area around the TNC supply. The description relies on a classification of the hydrogeologic and geomorphic characteristics of the general area around the TNC supply in terms of its surficial geology, regional aquifer system, the occurrence of fine- and coarse-grained geologic materials above the screen of the TNC well, and the potential for infiltration of contaminants. The subjective hybrid method combined the results of a logistic regression analysis with a subjective analysis of susceptibility and a subjective set of definitions that classify the thickness of fine-grained geologic materials above the screen of a TNC well in terms of impedance to vertical flow. The logistic regression determined the probability of elevated concentrations of nitrate as nitrogen (greater than or equal to 3 milligrams per liter) in ground water associated with specific thicknesses of fine-grained geologic materials above the screen of a TNC well. In this report, fine-grained geologic materials are referred to as a geologic barrier that generally impedes vertical flow through an aquifer. A geologic barrier was defined to be thin for fine-grained materials between 0 and 45 feet thick, moderate for materials between 45 and 75 feet thick, and thick if the fine-grained materials were greater than 75 feet thick. A flow chart was used to determine the susceptibility rating for each TNC supply. 
The flow chart indicated a susceptibility rating using (1) concentrations of nitrate as nitrogen and total coliform bacteria reported from routine compliance monitoring of the TNC supply, (2) the presence or absence of potential sources of regulated contaminants (nitrate as nitrogen and coliform bac
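The thickness classification described above is a simple threshold rule, and the logistic-regression step maps barrier thickness to a probability of elevated nitrate. A sketch using the report's 45- and 75-foot cutoffs, with hypothetical logistic coefficients and an assumed handling of the boundary values:

```python
import math

def classify_barrier(thickness_ft):
    """Classify fine-grained geologic barrier thickness per the report's
    cutoffs (0-45 ft thin, 45-75 ft moderate, >75 ft thick).
    Treatment of the exact boundary values is an assumption here."""
    if thickness_ft < 45:
        return "thin"
    if thickness_ft <= 75:
        return "moderate"
    return "thick"

def p_elevated_nitrate(thickness_ft, b0=1.2, b1=-0.05):
    """Logistic-regression probability that nitrate as nitrogen >= 3 mg/L.
    Coefficients b0, b1 are hypothetical placeholders, not the study's fit."""
    z = b0 + b1 * thickness_ft
    return 1.0 / (1.0 + math.exp(-z))

print(classify_barrier(30))   # thin
print(classify_barrier(60))   # moderate
print(classify_barrier(80))   # thick
```

With a negative thickness coefficient, the fitted model would assign lower probabilities of elevated nitrate as the geologic barrier thickens, matching the report's notion of impedance to vertical flow.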

  19. Opinions of the Dutch public on palliative sedation: a mixed-methods approach

    PubMed Central

    van der Kallen, Hilde TH; Raijmakers, Natasja JH; Rietjens, Judith AC; van der Male, Alex A; Bueving, Herman J; van Delden, Johannes JM; van der Heide, Agnes

    2013-01-01

    Background Palliative sedation is defined as deliberately lowering a patient’s consciousness, to relieve intolerable suffering from refractory symptoms at the end of life. Palliative sedation is considered a last resort intervention in end-of-life care that should not be confused with euthanasia. Aim To inform healthcare professionals about attitudes of the general public regarding palliative sedation. Design and setting A cross-sectional survey among members of the Dutch general public followed by qualitative interviews. Method One thousand nine hundred and sixty members of the general public completed the questionnaire, which included a vignette describing palliative sedation (response rate 78%); 16 participants were interviewed. Results In total, 22% of the responders indicated knowing the term ‘palliative sedation’. Qualitative data showed a variety of interpretations of the term. Eighty-one per cent of the responders agreed with the provision of sedatives as described in a vignette of a patient with untreatable pain and a life expectancy of <1 week who received sedatives to alleviate his suffering. This percentage was somewhat lower for a patient with a life expectancy of <1 month (74%, P = 0.007) and comparable in the case where the physician gave sedatives with the aim of ending the patient’s life (79%, P = 0.54). Conclusion Most of the general public accept the use of palliative sedation at the end of life, regardless of a potential life-shortening effect. However, confusion exists about what palliative sedation represents. This should be taken into account by healthcare professionals when communicating with patients and their relatives on end-of-life care options. PMID:24152482

  20. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    NASA Technical Reports Server (NTRS)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

  1. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  2. A method of assessing users' vs managers' perceptions of safety and security problems in public beach park settings 

    E-print Network

    Steele, Robert James Scott

    1986-01-01

A METHOD OF ASSESSING USERS' VS MANAGERS' PERCEPTIONS OF SAFETY AND SECURITY PROBLEMS IN PUBLIC BEACH PARK SETTINGS. A Thesis by ROBERT JAMES SCOTT STEELE, Submitted to the Graduate College of Texas A&M University in Partial Fulfillment of the Requirements for the Degree of MASTER OF SCIENCE, August 1986. Major Subject: Recreation and Resource Development.

  3. 47 CFR 90.483 - Permissible methods and requirements of interconnecting private and public systems of...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...private and public systems of communications. 90.483 Section 90.483 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY...private and public systems of communications. Interconnection...

  4. Evaluating and developing parameter optimization and uncertainty analysis methods for a computationally intensive distributed hydrological model

    E-print Network

    Zhang, Xuesong

    2009-05-15

EVALUATING AND DEVELOPING PARAMETER OPTIMIZATION AND UNCERTAINTY ANALYSIS METHODS FOR A COMPUTATIONALLY INTENSIVE DISTRIBUTED HYDROLOGICAL MODEL. A Dissertation by XUESONG ZHANG, Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree…

  5. Variance reduction in computations of neoclassical transport in stellarators using a δf method

    SciTech Connect

    Allmaier, K.; Kernbichler, W.; Leitold, G. O. [Association EURATOM-OeAW, Institut fuer Theoretische Physik-Computational Physics, Technische Universitaet Graz, Petersgasse 16, A-8010 Graz (Austria); Kasilov, S. V. [Institute of Plasma Physics, National Science Center 'Kharkov Institute of Physics and Technology', Akademicheskaya str. 1, 61108 Kharkov (Ukraine)

    2008-07-15

An improved δf Monte Carlo method for the computation of neoclassical transport coefficients in stellarators is presented. Compared to the standard δf method without filtering, the computing time needed for the same statistical error decreases by a factor proportional to the mean free path to the power 3/2.
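The variance-reduction idea behind a δf (control-variate style) estimator can be illustrated on a toy problem: sample only the deviation from an analytically known part, so the sampled quantity has far smaller variance. The distribution and functions below are assumptions for illustration, not the stellarator computation itself:

```python
import random
import statistics

random.seed(1)

def sampler():
    """Draw from the sampling distribution (standard normal, assumed)."""
    return random.gauss(0.0, 1.0)

def f(x):
    """Quantity of interest; E[f] = 1 for X ~ N(0, 1)."""
    return x * x + 0.1 * x

def f0(x):
    """Analytic part with known mean: E[X^2] = 1 for X ~ N(0, 1)."""
    return x * x

def plain_mc(n):
    """Plain Monte Carlo estimate of E[f]."""
    return statistics.fmean(f(sampler()) for _ in range(n))

def deltaf_mc(n):
    """delta-f estimate: known mean of f0 plus sampled mean of the
    small deviation f - f0, whose variance is far lower than f's."""
    devs = []
    for _ in range(n):
        x = sampler()
        devs.append(f(x) - f0(x))
    return 1.0 + statistics.fmean(devs)

print(round(deltaf_mc(2000), 3))  # close to the exact value 1.0
```

Here Var(f - f0) = 0.01 versus Var(f) ≈ 2.01, so for a given sample count the δf estimate is far tighter, which is the same mechanism exploited in the stellarator transport computation.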

  6. A Finite Element Method for Computation of Structural Intensity by the Normal Mode Approach

    Microsoft Academic Search

    L. Gavric; G. Pavic

    1993-01-01

    A method for numerical computation of structural intensity in thin-walled structures is presented. The method is based on structural finite elements (beam, plate and shell type) enabling computation of real eigenvalues and eigenvectors of the undamped structure which then serve in evaluation of complex response. The distributed structural damping is taken into account by using the modal damping concept, while

  7. A general method for the computation of probabilities in systems of first order chemical reactions

    E-print Network

    Djuriæ, Petar M.

A general method for the computation of probabilities in systems of first order chemical reactions. We present a general method for the computation of molecular population distributions in a system of first-order chemical reactions. The method models the chemical reactions in a stochastic way rather than with the traditional differential equations.

  8. Clint Dawson March 2013 PUBLICATIONS

    E-print Network

    Dawson, Clint N.

Clint Dawson, March 2013. PUBLICATIONS: Refereed Journal Publications. 1. Bell, J. B., Dawson, C., …, J. of Comput. Phys., Vol. 74, pp. 1-24, 1988. 2. Dawson, C., Russell, T. F., and Wheeler, M. F., …, Vol. 26, pp. 1487-1512, 1989. 3. Dawson, C., "Godunov-mixed methods for immiscible displacement," …

  9. Multi-Level iterative methods in computational plasma physics

    SciTech Connect

    Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.

    1999-03-01

Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically, the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods, and large time steps can challenge their robustness. To meet these challenges they are developing a hybrid approach in which multigrid methods are used as preconditioners to Krylov-subspace-based iterative methods such as conjugate gradients or GMRES. For nonlinear problems they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for the application of these multilevel iterative methods to the field solves in implicit moment method PIC, multidimensional nonlinear Fokker-Planck problems, and their initial efforts in particle MHD.
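The preconditioned-Krylov structure described above can be sketched with a small preconditioned conjugate-gradient solve; a Jacobi (diagonal) preconditioner stands in for the multigrid preconditioner purely for illustration, and the test system is a made-up 1D Laplacian:

```python
def pcg(A, b, M_inv, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradients for an SPD matrix A
    (dense lists of lists). M_inv applies the preconditioner,
    which in the paper's setting would be a multigrid cycle."""
    n = len(b)
    mv = lambda A, v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                      # residual b - A*x with x = 0
    z = M_inv(r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = M_inv(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# 1D Laplacian-like SPD system with a Jacobi (inverse-diagonal) preconditioner.
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
jacobi = lambda r: [ri / 2.0 for ri in r]
x = pcg(A, b, jacobi)
print([round(v, 6) for v in x])  # [1.0, 1.0, 1.0]
```

Swapping `jacobi` for a multigrid V-cycle leaves the Krylov iteration untouched, which is exactly why multigrid-preconditioned Krylov methods compose so cleanly.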

  10. Computation of electrostatic forces by the virtual work method

    E-print Network

    Hiptmair, Ralf

Common approaches are the Maxwell stress tensor, the virtual work, and the eggshell methods. In this project we study and compare these methods. (Recovered table of contents: 2.2 Virtual Work Method; 2.3 EggShell Method; 2.7 Eggshell implementation; 3. Equivalence between…)

  11. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    NASA Astrophysics Data System (ADS)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

The hydration free energy (HFE) is a crucially important physical quantity to discuss various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨UUV⟩/2 (⟨UUV⟩ is the ensemble average of the sum of pair interaction energy between solute and water molecule) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨UUV⟩ can readily be computed through a MD of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA) which expresses the term as the linear combinations of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA enables us to provide an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with substantial reduction of the computational load.

  12. An accurate and efficient computation method of the hydration free energy of a large, complex molecule.

    PubMed

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

The hydration free energy (HFE) is a crucially important physical quantity to discuss various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨UUV⟩/2 (⟨UUV⟩ is the ensemble average of the sum of pair interaction energy between solute and water molecule) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨UUV⟩ can readily be computed through a MD of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA) which expresses the term as the linear combinations of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA enables us to provide an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with substantial reduction of the computational load. PMID:25956125
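The decomposition described above, pairing half the ensemble-averaged solute-water interaction energy with a morphometric water reorganization term, is simple arithmetic once the four geometric measures and ER-determined coefficients are in hand. The numbers below are made-up placeholders, not values from the paper:

```python
def hydration_free_energy(u_uv_avg, measures, coeffs):
    """HFE = <U_UV>/2 + sum_i c_i * g_i, where the g_i are the four geometric
    measures of the solute (volume, surface area, integrated mean curvature,
    integrated Gaussian curvature) and the c_i are ER-determined coefficients."""
    assert len(measures) == len(coeffs) == 4
    reorganization = sum(c * g for c, g in zip(coeffs, measures))
    return u_uv_avg / 2.0 + reorganization

# Hypothetical numbers, purely to show the arithmetic.
hfe = hydration_free_energy(
    u_uv_avg=-200.0,                      # ensemble-averaged solute-water energy
    measures=[1500.0, 700.0, 50.0, 4.0],  # V, S, C, X in arbitrary units
    coeffs=[0.01, -0.02, 0.3, 1.5],       # placeholder ER coefficients
)
print(hfe)  # -100 + (15 - 14 + 15 + 6), approximately -78.0
```

The speed claim in the abstract follows directly from this form: once the four coefficients are fitted, evaluating the reorganization term is a four-term dot product rather than a solvent simulation.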

  13. Computational Methods for Analyzing and Modeling Gene Regulation Dynamics

    E-print Network

We develop computational methods for analyzing and modeling gene regulation dynamics based on a diverse set of genomic properties, and apply them to yeast, E. coli, and human cells. Our E. coli method is a semi-supervised learning method that uses verified transcription factor-gene interactions. We also present a method, motivated by human genomic data, that combines motif information with a probabilistic

  14. Spring 2014: Computational and Variational Methods for Inverse Problems

    E-print Network

    Ghattas, Omar

The course treats computational and variational methods for inverse problems that are governed by systems of partial differential equations (PDEs); the focus is on variational formulations. Background in numerical linear algebra, partial differential equations, and nonlinear optimization is assumed. Optimization methods covered: line search globalization, steepest descent, Newton method, Gauss-Newton method.

  15. Computational methods for constructing protein structure models from 3D electron microscopy maps

    PubMed Central

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-01-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3 Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. PMID:23796504

  16. Designing for Social Inclusion: Computer Mediation of Trust Relations Between Citizens and Public Service Providers

    Microsoft Academic Search

    Michael Grimsley; Anthony Meehan; Anna Tan

    2004-01-01

    Trust has a direct impact on the extent to which citizens engage with public and community services. This paper advances a framework which seeks to support HCI designers and managers in promoting ICT-mediated citizen engagement with public services through a strategy of trust promotion. The framework is based upon an analysis of evidence from large-scale community surveys which demonstrate a

  17. THE ROLE OF STATISTICAL METHODS IN COMPUTER SCIENCE AND BIOINFORMATICS

    Microsoft Academic Search

    Irina Arhipova

    2006-01-01

    This article discusses the links between computer science, statistics and biology education on the basis of research at the Latvia University of Agriculture. Bioinformatics study is considered from two aspects: biologists learning Information Technologies (IT) to use within their speciality, and IT specialists learning biology so they can apply their skills to biological problems. The

  18. Volume rendering methods for computational fluid dynamics visualization

    Microsoft Academic Search

    David S. Ebert; Roni Yagel; James N. Scott; Yair Kurzion

    1994-01-01

    This paper describes three alternative volume rendering approaches to visualizing computational fluid dynamics (CFD) data. One new approach uses realistic volumetric gas rendering techniques to produce photo-realistic images and animations from scalar CFD data. The second uses ray casting that is based on a simpler illumination model and is mainly centered around a versatile new tool for the design of

  19. Introducing Research Methods to Computer Science Honours Students

    Microsoft Academic Search

    Vashti Galpin; Scott Hazelhurst; Conrad Mueller; Ian Sanders

    Research skills are important for any academic and can be of great benefit to any professional person. These skills are, however, difficult to teach and to learn. In the Department of Computer Science at the University of the Witwatersrand we have for a number of years included the completion of a research report as part of our Honours programme. This

  20. A Method of Computational Correction for Optical Distortion

    E-print Network

    North Carolina at Chapel Hill, University of

    warpings. 1. Introduction: A head-mounted display (HMD), head tracker, and computer graphics system. In order to generate a pair of stereoscopic images for the two display devices in the HMD, the display … at Chapel Hill (UNC-CH). The purpose of the optics used in an HMD is to project equally magnified images

  1. Computer-Graphics and the Literary Construct: A Learning Method.

    ERIC Educational Resources Information Center

    Henry, Avril

    2002-01-01

    Describes an undergraduate student module that was developed at the University of Exeter (United Kingdom) in which students made their own computer graphics to discover and to describe literary structures in texts of their choice. Discusses learning outcomes and refers to the Web site that shows students' course work. (Author/LRW)

  2. Methods for analyzing data from computer simulation experiments

    Microsoft Academic Search

    Thomas H. Naylor; Kenneth Wertz; Thomas H. Wonnacott

    1967-01-01

    This paper addresses itself to the problem of analyzing data generated by computer simulations of economic systems. We first turn to a hypothetical firm, whose operation is represented by a single-channel, multistation queueing model. The firm seeks to maximize total expected profit for the coming period by selecting one of five operating plans, where each plan incorporates a certain marketing

  3. Computational methods for rapid prototyping of analytic solid models

    Microsoft Academic Search

    Rida T. Farouki; Thomas König

    1996-01-01

    Looks at how layered fabrication processes typically entail extensive computations and large memory requirements in the reduction of three-dimensional part descriptions to area-filling paths that cover the interior of each of a sequence of planar slices. Notes that the polyhedral “STL” representation exacerbates this problem by necessitating large input data volumes to describe curved surface models at acceptable levels of

  4. All for One: Integrating Budgetary Methods by Computer.

    ERIC Educational Resources Information Center

    Herman, Jerry J.

    1994-01-01

    With the advent of high speed and sophisticated computer programs, all budgetary systems can be combined in one fiscal management information system. Defines and provides examples for the four budgeting systems: (1) function/object; (2) planning, programming, budgeting system; (3) zero-based budgeting; and (4) site-based budgeting. (MLF)

  5. Verifying a computational method for predicting extreme ground motion

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, B.T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  6. Computer Training and Individual Differences: When Method Matters.

    ERIC Educational Resources Information Center

    Harp, Candace G.; Taylor, Sandra C.; Satzinger, John W.

    1998-01-01

    Interviews were conducted with 263 licensed users of training software, 68 of whom had used computer-based training (CBT), instructor-led training, and video tutorials. Videos were deemed the least useful. Instructor-led training had the most feedback and media richness, but CBT was an effective low-cost alternative. (SK)

  7. Computational methods for reentry trajectories and risk assessment

    Microsoft Academic Search

    Luciano Anselmo; Carmen Pardini

    2005-01-01

    The trajectory modeling of uncontrolled satellites close to reentry in the atmosphere is still a challenging activity. Tracking data may be sparse and not particularly accurate, the objects' complicated shape and unknown attitude evolution may render the aerodynamic computations difficult and, last but not least, the models used to predict the air density at the altitudes of interest, as

  8. Small Scale Distance Education; "The Personal (Computer) Touch"; Tutorial Methods for TMA's Using a Computer.

    ERIC Educational Resources Information Center

    Fritsch, Helmut; And Others

    1989-01-01

    The authors present reports of current research on distance education at the FernUniversitat in West Germany. Fritsch discusses adapting distance education techniques for small classes. Kuffner describes procedures for providing feedback to students using personalized computer-generated letters. Klute discusses using a computer with tutorial…

  9. Recursive method for computing matrix elements for two-body interactions

    NASA Astrophysics Data System (ADS)

    Hyvärinen, Juhani; Suhonen, Jouni

    2015-05-01

    A recursive method for the efficient computation of two-body matrix elements is presented. The method consists of a set of recursion relations for the computationally demanding radial integral and adds one more tool to the set of computational methods introduced by Horie and Sasaki [H. Horie and K. Sasaki, Prog. Theor. Phys. 25, 475 (1961), 10.1143/PTP.25.475]. The neutrinoless double-β decay will serve as the primary application and example, but the method is general and can be applied equally well to other kinds of nuclear structure calculations involving matrix elements of two-body interactions.

  10. Public Health Ethics Education in a Competency-Based Curriculum: A Method of Programmatic Assessment

    Microsoft Academic Search

    Cynthia L. Chappell; Nathan Carlin

    2011-01-01

    Public health ethics began to emerge in the 1990s as a development within bioethics. Public health ethics education has been implemented in schools of public health in recent years, and specific professionalism and ethics competencies were included in the Master of Public Health (MPH) competency set developed nationally and adapted by individual schools of public health around the country. The

  11. A strong coupled CFD-CSD method on computational aeroelastity

    Microsoft Academic Search

    Rui Xi; Hongguang Jia

    2011-01-01

    In this paper, a strong coupled CFD-CSD method is developed to simulate the aeroelastic phenomena. The CFD solver is based on the finite-volume algorithm for the Navier-Stokes equations on unstructured grid. The CSD solver solves the aeroelastic governing equations in the modal space. Their coupling is realized by a dual-time method. The spring-based smoothing method is adopted to deform and

  12. Density-Weighted Nyström Method for Computing Large Kernel Eigensystems

    Microsoft Academic Search

    Kai Zhang; James T. Kwok

    2009-01-01

    The Nyström method is a well-known sampling-based technique for approximating the eigensystem of large kernel matrices. However, the chosen samples in the Nyström method are all assumed to be of equal importance, which deviates from the integral equation that defines the kernel eigenfunctions. Motivated by this observation, we extend the Nyström method to a more general, density-weighted version. We show
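For context, the plain (unweighted) Nyström approximation that the density-weighted version generalizes can be sketched in a few lines: approximate a large kernel matrix K from a subset of landmark columns, from which approximate eigenvectors follow. This is a generic textbook sketch with arbitrary data, kernel, and landmark choices, not the authors' code:

```python
import numpy as np

def nystrom(K, landmarks):
    """Approximate K ~= C @ pinv(W) @ C.T, where C holds the sampled
    columns and W is the landmark-landmark submatrix."""
    C = K[:, landmarks]                   # n x m block of sampled columns
    W = K[np.ix_(landmarks, landmarks)]   # m x m landmark submatrix
    return C @ np.linalg.pinv(W) @ C.T

# Demo: RBF kernel on 50 random 3-D points, 25 evenly spaced landmarks.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 4.0)
K_hat = nystrom(K, landmarks=np.arange(0, 50, 2))
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

The density-weighted extension discussed in the abstract additionally weights samples to better match the integral equation defining the kernel eigenfunctions; that weighting is not shown here.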

  13. One-eighth look-up table method for effectively generating computer-generated hologram patterns

    NASA Astrophysics Data System (ADS)

    Cho, Sungjin; Ju, Byeong-Kwon; Kim, Nam-Young; Park, Min-Chul

    2014-05-01

    To generate ideal digital holograms, the computer-generated hologram (CGH) has been regarded as a solution. However, it has an unavoidable problem in that the computational burden of generating a CGH is very large. Recently, many studies have investigated solutions for reducing the computational complexity of CGH by using particular methods such as look-up tables (LUTs) and parallel processing. Each method is effective at reducing the computational time for generating a CGH. However, it appears to be difficult to apply both methods simultaneously because of the heavy memory consumption of the LUT technique. Therefore, we proposed a one-eighth LUT method in which the memory usage of the LUT is reduced, making it possible to simultaneously apply both of the fast computing methods for the computation of CGH. With the one-eighth LUT method, only one-eighth of the zone plates are stored in the LUT. All of the zone plates are accessed by an indexing method. Through this method, we significantly reduced the memory usage of the LUT. Also, we confirmed the feasibility of reducing the computational time of the CGH by using general-purpose graphics processing units while reducing the memory usage.
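The memory saving rests on the 8-fold symmetry of a centered zone plate: its value depends only on x² + y², so it is unchanged by sign flips and by swapping x with y, and one octant of samples suffices. A toy sketch of that indexing idea (the cosine kernel and constants below are schematic stand-ins, not the paper's CGH kernel):

```python
import math

N = 64     # half-width of the (2N+1) x (2N+1) zone-plate pattern
k = 0.05   # schematic phase constant (stand-in, not a CGH parameter)

# Build the look-up table for one octant only (0 <= y <= x).
lut = {}
for x in range(N + 1):
    for y in range(x + 1):
        lut[(x, y)] = math.cos(k * (x * x + y * y))

def zone_plate(x, y):
    """Recover the value at any (x, y) by folding into the stored octant:
    the pattern depends only on x^2 + y^2, so sign flips and an x<->y
    swap leave it unchanged."""
    ax, ay = abs(x), abs(y)
    if ay > ax:
        ax, ay = ay, ax
    return lut[(ax, ay)]
```

The table holds (N+1)(N+2)/2 entries instead of (2N+1)², roughly a factor of eight fewer, which is what frees enough memory to combine the LUT with GPU parallelism.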

  14. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of: (1) deterministic structural analyses with fine (convergent) finite element meshes, (2) probabilistic structural analyses with coarse finite element meshes, (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes, and (4) a probabilistic mapping. The results show that the scatter of the probabilistic structural responses and structural reliability can be accurately predicted using a coarse finite element model with proper mapping methods. Therefore, large structures can be analyzed probabilistically using finite element methods.

  15. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of the following: (1) deterministic structural analyses with fine (convergent) finite element meshes; (2) probabilistic structural analyses with coarse finite element meshes; (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes; and (4) a probabilistic mapping. The results show that the scatter in the probabilistic structural responses and structural reliability can be efficiently predicted using a coarse finite element model and proper mapping methods with good accuracy. Therefore, large structures can be efficiently analyzed probabilistically using finite element methods.

  16. Computational experiments on the weighted linear discontinuous method

    E-print Network

    Rodriguez, Gabriel

    1994-01-01

    Discontinuous Methods. II. CLOF Methods and the LOCFES Code: Basic Linear Functionals and LOF Methods; Fully Discrete Source Iteration by Closed Linear One-Cell Functional Methods; Closure Approximations. List of Figures: 1.1 Linear Discontinuous Representation for Angular Flux; II.1 A Mesh of Cells; II.2 A Single Cell; II.3 A Sample Input File; II.4 Sample Last Page of the Main LOCFES Output; III.1 BLF Subroutine for WLD...

  17. Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing

    Microsoft Academic Search

    Cong Wang; Qian Wang; Kui Ren; Wenjing Lou

    2010-01-01

    Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact

  18. The understanding of brain computations requires methods that read out neural activity on different spatial and temporal scales (Nature Methods, Vol. 9, No. 2, February 2012, p. 201)

    E-print Network

    Cai, Long

    The understanding of brain computations requires methods that read out neural activity on different spatial and temporal scales. Following signal in mouse brain slices. We also performed volumetric random-access scanning calcium imaging of spontaneous

  19. The Direct Lighting Computation in Global Illumination Methods

    Microsoft Academic Search

    Changyaw Wang

    This document addresses several important issues regarding image synthesis for complex scenes. It pays particular attention to the "direct lighting computation", where the brightness of an object that is due to light that comes directly from the source (without reflection) is calculated as in Figure 1.1. Generating an image involves three major steps. The initial step, scene specification, defines geometry, material, lighting, texture, movement, camera,

  20. Introducing research methods to computer science Honours students

    Microsoft Academic Search

    V. C. Galpin; S. Hazelhurst; C. Mueller; I. Sanders

    1999-01-01

    Research skills are important for any academic and can be of great benefit to any professional person. These skills are, however, difficult to teach and to learn. In the Department of Computer Science at the University of the Witwatersrand we have for a number of years included the completion of a research report as part of our Honours programme. This paper is

  1. Computational Methods for the Analysis of Array Comparative Genomic Hybridization

    PubMed Central

    Chari, Raj; Lockwood, William W.; Lam, Wan L.

    2006-01-01

    Array comparative genomic hybridization (array CGH) is a technique for assaying the copy number status of cancer genomes. The widespread use of this technology has led to a rapid accumulation of high throughput data, which in turn has prompted the development of computational strategies for the analysis of array CGH data. Here we explain the principles behind array image processing, data visualization and genomic profile analysis, review currently available software packages, and raise considerations for future software development. PMID:17992253

  2. Computational methods to dissect cis-regulatory transcriptional networks

    Microsoft Academic Search

    Vibha Rani

    2007-01-01

    The formation of diverse cell types from an invariant set of genes is governed by biochemical and molecular processes that regulate gene activity. A complete understanding of the regulatory mechanisms of gene expression is the major function of genomics. Computational genomics is a rapidly emerging area for deciphering the regulation of metazoan genes as well as interpreting the results of

  3. A performance computing and monitoring method for monopropellant propulsion systems

    NASA Astrophysics Data System (ADS)

    Corchero, G.

    1992-07-01

    A simplified model for performance computation of monopropellant (hydrazine) propulsion systems is presented. The model allows prediction of thrust and mass flow rate for steady firing as well as for unsteady pulse-mode operations, for a given thruster. Inputs to the model are the thruster performance data at one inlet pressure: thrust, mass flow rate, and chamber stagnation pressure. Chamber pressure can be replaced by the mass flow rate ratio for a second inlet pressure.

  4. Methods for the Accurate Computations of Hypersonic Flows

    Microsoft Academic Search

    Kyu Hong Kim; Chongam Kim; Oh-Hyun Rho

    2001-01-01

    In order to overcome some difficulties observed in the computation of hypersonic flows, a robust, accurate and efficient numerical scheme based on AUSM-type splitting is developed. Typical symptoms appearing in the application of AUSM-type schemes for high-speed flows, such as pressure wiggles near a wall and overshoots across a strong shock, are cured by introducing weighting functions based on pressure

  5. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    SciTech Connect

    Alex J. Dragt

    2012-08-31

    Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  6. Using Zone Graph Method for Computing the State Space of a Time Petri Net

    Microsoft Academic Search

    Guillaume Gardey; Olivier H. Roux; Olivier F. Roux

    2003-01-01

    Presently, the method to verify quantitative time properties on Time Petri Nets is the use of observers. The state space is then computed to test the reachability of a given marking. The main method to compute the state space of a Time Petri Net has been introduced by Berthomieu and Diaz. It is known as the "state class method". We present in

  7. A New Method to Compute Standard-Weight Equations That Reduces Length-Related Bias

    Microsoft Academic Search

    Kenneth G. Gerow; Richard C. Anderson-Sprecher; Wayne A. Hubert

    2005-01-01

    We propose a new method for developing standard-weight (Ws) equations for use in the computation of relative weight (Wr) because the regression line–percentile (RLP) method often leads to length-related biases in Ws equations. We studied the structural properties of Ws equations developed by the RLP method through simulations, identified reasons for biases, and compared Ws equations computed by the RLP

  8. A new computational method for Volterra-Fredholm integral equations

    Microsoft Academic Search

    K. Maleknejad; M. Hadizadeh

    1999-01-01

    The main purpose of this article is to demonstrate the use of the Adomian decomposition method for mixed nonlinear Volterra-Fredholm integral equations. A bound is also given for the Adomian decomposition series. Finally, numerical examples are presented to illustrate the implementation and accuracy of the decomposition method.

  9. Computational Method for Electrical Potential and Other Field Problems

    ERIC Educational Resources Information Center

    Hastings, David A.

    1975-01-01

    Proposes the finite differences relaxation method as a teaching tool in secondary and university level courses discussing electrical potential, temperature distribution in a region, and similar problems. Outlines the theory and operating procedures of the method, and discusses examples of teaching applications, including possible laboratory…
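The relaxation method the article proposes can be demonstrated in a few lines: a sketch of Jacobi relaxation for Laplace's equation on a square region, with one boundary edge held at a fixed potential (grid size, voltages, and tolerance are arbitrary teaching choices, not from the article):

```python
def relax(grid, tol=1e-4):
    """Jacobi sweeps: each interior point becomes the average of its four
    neighbors; boundary rows/columns stay fixed. Stops when the largest
    single-point update falls below tol."""
    n = len(grid)
    while True:
        biggest = 0.0
        new = [row[:] for row in grid]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
                biggest = max(biggest, abs(new[i][j] - grid[i][j]))
        grid = new
        if biggest < tol:
            return grid

# Square region, top edge held at 10 V, other edges grounded at 0 V:
n = 11
v = [[0.0] * n for _ in range(n)]
v[0] = [10.0] * n
v = relax(v)
```

By superposition over the four rotations of this configuration, the converged potential at the center approaches one quarter of the driven edge's value (2.5 V here), a handy classroom check on the computation.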

  10. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results are included.
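The 11-species system of the report is beyond a short example, but the core idea of combining an equilibrium relation algebraically with elemental mass balance can be shown on a single dissociation reaction. A toy model (O2 ⇌ 2 O at fixed total pressure; not the report's method or data):

```python
import math

def dissociation_fraction(kp, p=1.0):
    """For O2 <-> 2 O starting from pure O2 at total pressure p, the mole
    fractions give Kp = 4*a^2*p / (1 - a^2), where a is the dissociated
    fraction; this inverts in closed form to a = sqrt(Kp / (Kp + 4p))."""
    return math.sqrt(kp / (kp + 4.0 * p))
```

The closed-form inversion is the one-species analogue of the report's strategy: algebraic elimination replaces an iterative free-energy minimization, which is why the method is fast.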

  11. Using quality improvement methods to improve public health emergency preparedness: PREPARE for Pandemic Influenza.

    PubMed

    Lotstein, Debra; Seid, Michael; Ricci, Karen; Leuschner, Kristin; Margolis, Peter; Lurie, Nicole

    2008-01-01

    Many public health departments seek to improve their capability to respond to large-scale events such as an influenza pandemic. Quality improvement (QI), a structured approach to improving performance, has not been widely applied in public health. We developed and tested a pilot QI collaborative to explore whether QI could help public health departments improve their pandemic preparedness. We demonstrated that this is a promising model for improving public health preparedness and may be useful for improving public health performance overall. Further efforts are needed, however, to encourage the robust implementation of QI in public health. PMID:18628274

  12. Epidemiologic Methods Lessons Learned from Environmental Public Health Disasters: Chernobyl, the World Trade Center, Bhopal, and Graniteville, South Carolina

    PubMed Central

    Svendsen, Erik R.; Runkle, Jennifer R.; Dhara, Venkata Ramana; Lin, Shao; Naboka, Marina; Mousseau, Timothy A.; Bennett, Charles

    2012-01-01

    Background: Environmental public health disasters involving hazardous contaminants may have devastating effects. While much is known about their immediate devastation, far less is known about long-term impacts of these disasters. Extensive latent and chronic long-term public health effects may occur. Careful evaluation of contaminant exposures and long-term health outcomes within the constraints imposed by limited financial resources is essential. Methods: Here, we review epidemiologic methods lessons learned from conducting long-term evaluations of four environmental public health disasters involving hazardous contaminants at Chernobyl, the World Trade Center, Bhopal, and Graniteville (South Carolina, USA). Findings: We found several lessons learned which have direct implications for the on-going disaster recovery work following the Fukushima radiation disaster or for future disasters. Interpretation: These lessons should prove useful in understanding and mitigating latent health effects that may result from the nuclear reactor accident in Japan or future environmental public health disasters. PMID:23066404

  13. Comparison of computation methods for CBM production performance

    E-print Network

    Mora, Carlos A.

    2009-06-02

    methane production is somewhat complicated and has led to numerous methods of approximating production performance. Many CBM reservoirs go through a dewatering period before significant gas production occurs. With dewatering, desorption of gas...

  14. A memory based method for computing robot-arm configuration

    E-print Network

    Karimjee, Saleem

    1985-01-01

    TO CMAC AND PRELIMINARY WORK: 2.1 The Basic Structure of CMAC; 2.2 CMAC and Learning; 2.3 Memory, Input Resolution and Quantizing Levels; 2.4 Implementation of Albus' Algorithm; 2.5 Tests and Results; 2.6 Conclusions. III … of computation time, and can sometimes experience numerical instabilities. As a result, there is a lot of interest in techniques that offer a way to quickly and accurately approximate the inverse solution. In 1975, Albus [7, 8, 9] proposed an algorithm...

  15. The homological reduction method for computing cocyclic Hadamard matrices

    Microsoft Academic Search

    Víctor Álvarez; José Ándrés Armario; María Dolores Frau; P. Real

    2009-01-01

    An alternate method for constructing (Hadamard) cocyclic matrices over a finite group G is described. Provided that a homological model B(Z[G]) ⇒ hG for G is known, the homological reduction method automatically generates a full basis for 2-cocycles over G (including 2-coboundaries). From these data, either an exhaustive or a heuristic search for Hadamard cocyclic matrices is then developed. The knowledge of an

  16. Geographical Information Systems (GIS): Their Use as Decision Support Tools in Public Libraries and the Integration of GIS with Other Computer Technology

    Microsoft Academic Search

    Andrew M. Hawkins

    1994-01-01

    Describes the use of Geographical Information Systems (GIS) as decision support tools in public libraries in England. A GIS is a computer software system that represents data in a geographic dimension. GIS as a decision support tool in public libraries is in its infancy; only seven out of 40 libraries contacted in the survey have GIS projects, three of

  17. Publications Publications

    E-print Network

    Seybold, Steven J.

    Publications of Richard Karban (journals). 1977: Karban, Richard. Growth form and interleaf shading by Costus lima in a Costa Rican rainforest. Biotropica. 1983: Karban, Richard. Induced responses of cherry trees to periodical cicada… Society, 56(3): 229-394.

  18. The Voronoi Implicit Interface Method for computing multiphase physics

    PubMed Central

    Saye, Robert I.; Sethian, James A.

    2011-01-01

    We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces. PMID:22106269

  19. A new method to compute standard-weight equations that reduces length-related bias

    USGS Publications Warehouse

    Gerow, K.G.; Anderson-Sprecher, R. C.; Hubert, W.A.

    2005-01-01

    We propose a new method for developing standard-weight (Ws) equations for use in the computation of relative weight (Wr) because the regression line-percentile (RLP) method often leads to length-related biases in Ws equations. We studied the structural properties of Ws equations developed by the RLP method through simulations, identified reasons for biases, and compared Ws equations computed by the RLP method and the new method. The new method is similar to the RLP method but is based on means of measured weights rather than on means of weights predicted from regression models. The new method also models curvilinear Ws relationships not accounted for by the RLP method. For some length-classes in some species, the relative weights computed from Ws equations developed by the new method were more than 20 Wr units different from those using Ws equations developed by the RLP method. We recommend assessment of published Ws equations developed by the RLP method for length-related bias and use of the new method for computing new Ws equations when bias is identified. © Copyright by the American Fisheries Society 2005.
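Whichever fitting method produces the Ws coefficients, applying them is a one-line computation. A sketch of the standard log-linear form, log10(Ws) = a + b·log10(L), and the resulting relative weight (the coefficients a and b below are illustrative placeholders, not values from this paper):

```python
import math

# Placeholder coefficients for a hypothetical species (not from the paper):
A, B = -5.528, 3.273

def standard_weight(length_mm, a=A, b=B):
    """log10(Ws) = a + b * log10(L); returns Ws in grams."""
    return 10.0 ** (a + b * math.log10(length_mm))

def relative_weight(weight_g, length_mm):
    """Wr = 100 * W / Ws: measured weight as a percentage of standard weight."""
    return 100.0 * weight_g / standard_weight(length_mm)
```

The length-related bias at issue in the paper lives entirely in how a and b are estimated: a fish of exactly standard weight should score Wr = 100 at every length, which biased Ws equations fail to deliver consistently across length-classes.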

  20. Interior-point methods and their applications to power systems: a classification of publications and software codes

    Microsoft Academic Search

    Victor H. Quintana; Geraldo L. Torres; Jose Medina-Palomo

    2000-01-01

    Since Karmarkar's first successful interior-point algorithm for linear programming in 1984, the interest and consequently the number of publications in the area have increased tremendously, leaving newcomers to the field trapped in a jungle of papers and reports. In this paper, the authors review and classify major publications on interior-point method theory and on the practical implementation of the most successful

  1. Study and Application of Establishment Method Based on TransCAD for Urban Public Transit Basic Data System

    Microsoft Academic Search

    Xinhuan Zhang; Kefei Yan

    2009-01-01

    A public transit network involves a wide range of data, and its storage, display, and management are complicated. The use of TransCAD software helps to establish an intuitive public transit basic data system with strong operability and a clear layer structure. In this paper, the construction method and pertinent attribute settings of a basic data system based on TransCAD software are discussed, which

  2. A low computation cost method for seizure prediction.

    PubMed

    Zhang, Yanli; Zhou, Weidong; Yuan, Qi; Wu, Qi

    2014-10-01

    The dynamic changes of electroencephalograph (EEG) signals in the period prior to epileptic seizures play a major role in seizure prediction. This paper proposes a low computation seizure prediction algorithm that combines a fractal dimension with a machine learning algorithm. The presented seizure prediction algorithm extracts the Higuchi fractal dimension (HFD) of EEG signals as features to classify the patient's preictal or interictal state with Bayesian linear discriminant analysis (BLDA) as a classifier. The outputs of BLDA are smoothed by a Kalman filter to reduce possible sporadic and isolated false alarms, and the final prediction results are then produced using a thresholding procedure. The algorithm was evaluated on the intracranial EEG recordings of 21 patients in the Freiburg EEG database. For seizure occurrence periods of 30 min and 50 min, our algorithm obtained an average sensitivity of 86.95% and 89.33%, an average false prediction rate of 0.20/h, and an average prediction time of 24.47 min and 39.39 min, respectively. The results confirm that changes in HFD can serve as a precursor of ictal activities and be used for distinguishing between interictal and preictal epochs. Both HFD and the BLDA classifier have a low computational complexity. All of these make the proposed algorithm suitable for real-time seizure prediction. PMID:25062892
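The Higuchi fractal dimension used as the feature here has a compact definition: for each delay k, average the normalized curve lengths L(k) over the k offset subseries, then take the slope of log L(k) versus log(1/k). A minimal stdlib sketch follows; the choice of `kmax` and the least-squares details are illustrative, not taken from the paper.

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal."""
    n = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            n_max = (n - m - 1) // k
            if n_max < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, n_max + 1))
            # Normalization factor per Higuchi's definition.
            lengths.append(dist * (n - 1) / (n_max * k) / k)
        if lengths:
            log_inv_k.append(math.log(1.0 / k))
            log_len.append(math.log(sum(lengths) / len(lengths)))
    # Least-squares slope of log L(k) on log(1/k) gives the dimension.
    mx = sum(log_inv_k) / len(log_inv_k)
    my = sum(log_len) / len(log_len)
    num = sum((a - mx) * (b - my) for a, b in zip(log_inv_k, log_len))
    den = sum((a - mx) ** 2 for a in log_inv_k)
    return num / den

# A straight line has dimension 1; uncorrelated noise approaches 2.
print(higuchi_fd([float(i) for i in range(200)]))  # ≈ 1.0
```

For a linear ramp every k-step difference equals k, so L(k) is exactly proportional to 1/k and the fitted slope is 1, which is a handy sanity check before applying the estimator to EEG epochs.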

  3. Comparison of Monte Carlo methods for fluorescence molecular tomography—computational efficiency

    PubMed Central

    Chen, Jin; Intes, Xavier

    2011-01-01

    Purpose: The Monte Carlo method is an accurate model for time-resolved quantitative fluorescence tomography. However, this method suffers from low computational efficiency due to the large number of photons required for reliable statistics. This paper presents a comparison study on the computational efficiency of three Monte Carlo-based methods for time-domain fluorescence molecular tomography. Methods: The methods investigated to generate time-gated Jacobians were the perturbation Monte Carlo (pMC) method, the adjoint Monte Carlo (aMC) method and the mid-way Monte Carlo (mMC) method. The effects of the different parameters that affect the computation time and statistics reliability were evaluated. Also, the methods were applied to a set of experimental data for tomographic application. Results: In silico results establish that the investigated parameters affect the computational time for the three methods differently (linearly, quadratically, or not significantly). Moreover, the noise level of the Jacobian varies when these parameters change. The experimental results in preclinical settings demonstrate the feasibility of using both aMC and pMC methods for time-resolved whole body studies in small animals within a few hours. Conclusions: Among the three Monte Carlo methods, the mMC method is a computationally prohibitive technique that is not well suited for time-domain fluorescence tomography applications. The pMC method is advantageous over the aMC method when early gates are employed and a large number of detectors is present. Alternatively, the aMC method is the method of choice when a small number of source-detector pairs are used. PMID:21992393

  4. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth-order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.

  5. Speech Pedagogy beyond the Basics: A Study of Instructional Methods in the Advanced Public Speaking Course

    ERIC Educational Resources Information Center

    Levasseur, David; Dean, Kevin; Pfaff, Julie

    2004-01-01

    Although the class in advanced public speaking is a mainstay of communication instruction, little scholarship has addressed the nature of expertise in public speaking or the instructional techniques by which it is imparted. The present study conducted in-depth interviews with 23 active college teachers of advanced public speaking, inquiring…

  6. Control of structural problems in cultural heritage monuments using close-range photogrammetry and computer methods

    Microsoft Academic Search

    P. Arias; J. Herráez; H. Lorenzo; C. Ordóñez

    2005-01-01

    This paper deals with the conservation of monumental buildings. Several methods used for architectonic documentation are analysed in this study. Computer methods and close-range photogrammetry are proposed as a preventive approach that allows detecting, measuring and tracking the temporal evolution of structural problems, and also assessing the degree of conservation of the materials employed. A

  7. Analysis of the Computational Singular Perturbation Reduction Method for Chemical Kinetics

    Microsoft Academic Search

    A. Zagaris; H. G. Kaper; T. J. Kaper

    2004-01-01

    This article is concerned with the asymptotic accuracy of the Computational Singular Perturbation (CSP) method developed by Lam and Goussis [The CSP method for simplifying kinetics, Int. J. Chem. Kin. 26 (1994) 461–486] to reduce the dimensionality of a system of chemical kinetics equations. The method, which is generally applicable to multiple-time scale problems arising in a broad array of

  8. Use of fast-Fourier-transform computational methods in radiation transport

    Microsoft Academic Search

    Burke Ritchie; Pieter G. Dykema; Dennis Braddy

    1997-01-01

    Fast-Fourier-transform computational methods are used to solve the radiation-transport equation. Results are presented in nondiffusive and diffusive regimes. In the latter regime the method is benchmarked against the Schrödinger equation, which has the form of a diffusion equation in imaginary time. The method is further tested against prototypical problems in radiation transport.

  9. A Monte Carlo method to compute the exchange coefficient in the double porosity model

    E-print Network

    Paris-Sud XI, Université de

    A Monte Carlo method to compute the exchange coefficient in the double porosity model. Keywords: Monte Carlo methods, double porosity model, random walk on squares, fissured media. AMS Classification: 76S05 (65C05 76M35). Published in Monte Carlo Methods Appl., Proc. of Monte Carlo and probabilistic

  10. A computational method for solving singularly perturbed turning point problems exhibiting twin boundary layers

    Microsoft Academic Search

    S. Natesan; N. Ramanujam

    1998-01-01

    Singularly perturbed turning point problems (TPPs) for second-order ordinary differential equations (DEs) exhibiting twin boundary layers are considered. In order to obtain a numerical solution of these problems, a computational method is suggested in which exponentially fitted difference schemes are combined with classical numerical methods. In this method, the given interval (the domain of definition of the differential equation) is divided into

  11. Fuzzy neural net and computer simulation method for fabricating holographic plastic material

    NASA Astrophysics Data System (ADS)

    Chang, Rong-Seng; Lay, Yun L.; Lin, Chern-Sheng

    1995-12-01

    Injection and embossing is an economical method of producing holographic optical elements (HOEs) in large or small quantities and in variable types. We use a CIM (computer integrated manufacturing) system to obtain the proper parameters of the manufacturing system. The CAEMOLD computer simulation software and photoelasticity measurement can assist in calculating and modifying the injection and embossing process. These techniques yield a uniform distribution and a lower value of residual stress in the plastic HOE product, based on the results of the computer simulation.

  12. A method to validate gravimetric-geoid computation software based on Stokes's integral formula

    Microsoft Academic Search

    W. E. Featherstone; J. G. Olliver

    1997-01-01

    A method is presented with which to verify that the computer software used to compute a gravimetric geoid is capable of producing the correct results, assuming accurate input data. The Stokes, gravimetric terrain correction and indirect effect formulae are integrated analytically after applying a transformation to surface spherical coordinates centred on each computation point. These analytical results can be

  13. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  14. Computer capillaroscopy as a new cardiological diagnostics method

    NASA Astrophysics Data System (ADS)

    Gurfinkel, Youri I.; Korol, Oleg A.; Kufal, George E.

    1998-04-01

    The blood flow in capillary vessels plays an important role in sustaining the vital activity of the human organism. The computerized capillaroscope is used for investigations of nailfold (eponychium) capillary blood flow. An important advantage of the instrument is the possibility of performing non-invasive investigations, i.e., without damage to skin or vessels and causing no pain or unpleasant sensations. The high-class equipment and software allow direct observation of capillary blood flow dynamics on a computer screen at 700-1300 times magnification. For the first time in clinical practice, it has become possible to precisely measure the speed of capillary blood flow, as well as the frequency of aggregate formation (blood particles glued together into clots). In addition, provision is made for automatic measurement of capillary size and wall thickness and automatic recording of blood aggregate images for further visual study, documentation, and electronic database management.

  15. Computation of Spectroscopic Factors with the Coupled-Cluster Method

    SciTech Connect

    Jensen, O. [University of Bergen, Bergen, Norway; Hagen, Gaute [ORNL; Papenbrock, T. [University of Tennessee, Knoxville (UTK) & Oak Ridge National Laboratory (ORNL); Dean, David Jarvis [ORNL; Vaagen, J. S. [University of Bergen, Bergen, Norway

    2010-01-01

    We present a calculation of spectroscopic factors within coupled-cluster theory. Our derivation of algebraic equations for the one-body overlap functions are based on coupled-cluster equation-of-motion solutions for the ground and excited states of the doubly magic nucleus with mass number A and the odd-mass neighbor with mass A-1. As a proof-of-principle calculation, we consider ^{16}O and the odd neighbors ^{15}O and ^{15}N, and compute the spectroscopic factor for nucleon removal from ^{16}O. We employ a renormalized low-momentum interaction of the V_{low-k} type derived from a chiral interaction at next-to-next-to-next-to-leading order. We study the sensitivity of our results by variation of the momentum cutoff, and then discuss the treatment of the center of mass.

  16. The GLEaMviz computational tool, a publicly available software to explore realistic epidemic spreading scenarios at the global scale

    PubMed Central

    2011-01-01

    Background Computational models play an increasingly important role in the assessment and control of public health crises, as demonstrated during the 2009 H1N1 influenza pandemic. Much research has been done in recent years in the development of sophisticated data-driven models for realistic computer-based simulations of infectious disease spreading. However, only a few computational tools are presently available for assessing scenarios, predicting epidemic evolutions, and managing health emergencies that can benefit a broad audience of users including policy makers and health institutions. Results We present "GLEaMviz", a publicly available software system that simulates the spread of emerging human-to-human infectious diseases across the world. The GLEaMviz tool comprises three components: the client application, the proxy middleware, and the simulation engine. The latter two components constitute the GLEaMviz server. The simulation engine leverages the Global Epidemic and Mobility (GLEaM) framework, a stochastic computational scheme that integrates worldwide high-resolution demographic and mobility data to simulate disease spread on the global scale. The GLEaMviz design aims at maximizing flexibility in defining the disease compartmental model and configuring the simulation scenario; it allows the user to set a variety of parameters including: compartment-specific features, transition values, and environmental effects. The output is a dynamic map and a corresponding set of charts that quantitatively describe the geo-temporal evolution of the disease. The software is designed as a client-server system. The multi-platform client, which can be installed on the user's local machine, is used to set up simulations that will be executed on the server, thus avoiding specific requirements for large computational capabilities on the user side.
Conclusions The user-friendly graphical interface of the GLEaMviz tool, along with its high level of detail and the realism of its embedded modeling approach, opens up the platform to simulate realistic epidemic scenarios. These features make the GLEaMviz computational tool a convenient teaching/training tool as well as a first step toward the development of a computational tool aimed at facilitating the use and exploitation of computational models for the policy making and scenario analysis of infectious disease outbreaks. PMID:21288355
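The compartmental mechanics that GLEaM builds on can be illustrated with a single-population SIR model. This is only a deterministic Euler sketch of the idea (GLEaM itself is stochastic and metapopulation-based); the transition rates `beta` and `gamma` below are made-up illustrative values.

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the SIR compartmental model:
    S' = -beta*S*I/N,  I' = beta*S*I/N - gamma*I,  R' = gamma*I."""
    n = s + i + r
    new_inf = beta * s * i / n * dt  # S -> I transitions
    new_rec = gamma * i * dt         # I -> R transitions
    return s - new_inf, i + new_inf - new_rec, r + new_rec

# Seed 10 infectious individuals in a population of 10,000.
s, i, r = 9990.0, 10.0, 0.0
for _ in range(100):
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1)
print(round(s + i + r, 6))  # compartments only exchange mass: 10000.0
```

Each transition moves mass between compartments without creating or destroying it, which is why the total stays constant; GLEaM layers stochastic draws and mobility-coupled subpopulations on top of this same structure.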

  17. Fast methods for computing scene raw signals in millimeter-wave sensor simulations

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.; Reynolds, Terry M.; Satterfield, H. Dewayne

    2010-04-01

    Modern millimeter wave (mmW) radar sensor systems employ wideband transmit waveforms and efficient receiver signal processing methods for resolving accurate measurements of targets embedded in complex backgrounds. Fast Fourier Transform processing of pulse return signal samples is used to resolve range and Doppler locations, and amplitudes of scattered RF energy. Angle glint from RF scattering centers can be measured by performing monopulse arithmetic on signals resolved in both delta and sum antenna channels. Environment simulations for these sensors - including all-digital and hardware-in-the-loop (HWIL) scene generators - require fast, efficient methods for computing radar receiver input signals to support accurate simulations with acceptable execution time and computer cost. Although all-digital and HWIL simulations differ in their representations of the radar sensor (which is itself a simulation in the all-digital case), the signal computations for mmW scene modeling are closely related for both types. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) have developed various fast methods for computing mmW scene raw signals to support both HWIL scene projection and all-digital receiver model input signal synthesis. These methods range from high level methods of decomposing radar scenes for accurate application of spatially-dependent nonlinear scatterer phase history, to low-level methods of efficiently computing individual scatterer complex signals and single precision transcendental functions. The efficiencies of these computations are intimately tied to math and memory resources provided by computer architectures. The paper concludes with a summary of radar scene computing performance on available computer architectures, and an estimate of future growth potential for this computational performance.
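The slow-time processing described above — a Fourier transform across pulse returns to resolve Doppler — can be shown in miniature. This is a toy sketch, not the AMRDEC simulation code: a pure-Python DFT for clarity, with a made-up pulse count and Doppler bin.

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) DFT; a real simulator would use an FFT."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

# Slow-time samples from M pulse returns of one scatterer whose
# Doppler shift advances the phase by d bins over the coherent interval.
M, d = 32, 5
slow_time = [cmath.exp(2j * math.pi * d * m / M) for m in range(M)]

spectrum = dft(slow_time)
doppler_bin = max(range(M), key=lambda k: abs(spectrum[k]))
print(doppler_bin)  # the scatterer's energy resolves at bin 5
```

The same transform applied across fast-time samples resolves range; a scene generator repeats this per scatterer, which is why the per-scatterer complex-signal cost discussed in the paper dominates.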

  18. Computation of three-dimensional Brinkman flows using regularized methods

    E-print Network

    Cortez, Ricardo

    Computation of three-dimensional Brinkman flows using regularized methods, Ricardo Cortez et al. In the Brinkman model, p* is the average fluid pressure, μ is the dynamic viscosity, and K is the Darcy permeability of the medium. Article history: received 8 January 2010; received in revised form

  19. New developments in adaptive methods for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Oden, J. T.; Bass, Jon M.

    1990-01-01

    New developments in a posteriori error estimates, smart algorithms, and h- and h-p adaptive finite element methods are discussed in the context of two- and three-dimensional compressible and incompressible flow simulations. Applications to rotor-stator interaction, rotorcraft aerodynamics, shock and viscous boundary layer interaction and fluid-structure interaction problems are discussed.

  20. Computer Game Scene-Generation Projects Using "Particle Methods"

    E-print Network

    Hawick, Ken

    trajectories of particles in a system. Particles can be used directly to model the flow of fluids such as water; movement of the material, as would occur from flow in air or water currents, is simulated by applying forces to the particles, as in modern games. The recent Star Wars movie Attack of the Clones makes use of particle methods in generating

  1. Hindawi Publishing Corporation Computational and Mathematical Methods in Medicine

    E-print Network

    O'Toole, Alice J.

    "Brain reading" by Cox and Savoy [1], and more generally multi-voxel pattern analysis (MVPA). Multiple Subject Barycentric Discriminant Analysis (MUSUBADA): how to assign scans to categories without using spatial normalization. We present a new discriminant analysis (DA) method called Multiple Subject

  2. Approximation and Noise Filtering Methods in Computer Vision

    Microsoft Academic Search

    We extend our previous work and consider the task of filtering the noise from images and other types of inputs which are assumed to be piecewise continuous and piecewise monotone. We show that nonlinear diffusion of the data, a powerful filtering method, is too restrictive for such a case, leading to piecewise constant functions. We claim that the piecewise monotonicity

  3. A FILTER METHOD WITH UNIFIED STEP COMPUTATION FOR ...

    E-print Network

    2013-05-09

    This contrasts traditional filter methods that use a (separate) restoration phase designed to ... move the initial guess into the strict interior of the feasible region. It is from this interior location ... By design, the trial step sk is a descent direction for.

  4. An equivalent block method for computing fatigue crack growth

    Microsoft Academic Search

    R. Jones; L. Molent; K. Krishnapillai

    2008-01-01

    This paper builds on a development in the science of fatigue crack growth to present an equivalent spectrum block method for predicting fatigue crack growth under variable amplitude loading. This approach is based on the generalised Frost–Dugdale model and forms an analytical basis for the observation of a near exponential relationship between crack length and fatigue life under variable amplitude

  5. A Novel Computational Method for Solving Finite QBD Processes \\Lambda

    E-print Network

    Akar, Nail

    is constructed with a time complexity of O(m^3 log2 K). Therefore, the effect of the number of levels on the overall complexity is minimal. Besides its numerical efficiency, the proposed method is numerically stable, and, in discrete time, has an irreducible transition probability matrix of the canonical

  6. Mathematical and computational methods in seismic exploration and reservoir modeling

    SciTech Connect

    Fitzgibbon, W.E.

    1986-01-01

    This book presents the papers given at a conference on the modeling of petroleum and natural gas deposits. Topics considered at the conference included the simulation of thermal oil recovery processes, iterative techniques for mixed finite element methods, miscible-phase displacement, stability problems, seismic modeling, image processing, pattern recognition, seismic stratigraphic traps, seismic wave propagation, vector processors, parallel processors, multiprocessors, and velocity inversion.

  7. Hierarchical process control by combining SPC and soft computing methods

    Microsoft Academic Search

    Wonoh Kim; G. Vachtsevanos

    2000-01-01

    Statistical process control (SPC) provides methods of monitoring a system to improve the quality of the product. However, using SPC alone has limitations since it does not control a system but rather monitors it to remove the root causes. SPC and feedback control are combined in a hierarchical structure to monitor a system and use the information to compensate for

  8. Computations for Group Sequential Boundaries Using the Lan-DeMets Spending Function Method

    Microsoft Academic Search

    David M. Reboussin; David L. DeMets; KyungMann Kim; K. K. Gordon Lan

    2000-01-01

    We describe an interactive Fortran program which performs computations related to the design and analysis of group sequential clinical trials using Lan-DeMets spending functions. Many clinical trials include interim analyses of accumulating data and rely on group sequential methods to avoid consequent inflation of the type I error rate. The computations are appropriate for interim test statistics whose distribution or
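The spending-function idea behind the program is easy to state: at information fraction t of the trial, a function α*(t) fixes how much of the overall type I error α may have been "spent" on interim looks so far. A minimal stdlib sketch of two standard Lan-DeMets forms follows; this is an illustration, not the interactive Fortran program the abstract describes, and the bisection quantile is a simple stand-in for a proper inverse normal CDF.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_quantile(p: float) -> float:
    """Upper standard-normal quantile by bisection on norm_cdf."""
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def obrien_fleming_spend(t: float, alpha: float = 0.05) -> float:
    """O'Brien-Fleming-type spending: alpha*(t) = 2 - 2*Phi(z_{alpha/2}/sqrt(t))."""
    z = z_quantile(1.0 - alpha / 2.0)
    return 2.0 - 2.0 * norm_cdf(z / math.sqrt(t))

def pocock_spend(t: float, alpha: float = 0.05) -> float:
    """Pocock-type spending: alpha*(t) = alpha * ln(1 + (e - 1) * t)."""
    return alpha * math.log(1.0 + (math.e - 1.0) * t)

# Both families spend conservatively early and the full alpha at t = 1.
print(round(obrien_fleming_spend(1.0), 6), round(pocock_spend(1.0), 6))
```

The O'Brien-Fleming-type function spends almost nothing at early looks, preserving power for the final analysis, while the Pocock-type spends more evenly; group sequential boundaries are then back-computed so the cumulative rejection probability at each look matches α*(t).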

  9. On the anomalous asymptotic performance of the regular computer methods for grounding analysis

    Microsoft Academic Search

    I. Colominas; F. Navarrina; M. Casteleiro

    Grounding systems are designed to guarantee personal security, protection of equipment and continuity of power supply. Hence, engineers must compute the equivalent resistance of the system and the potential distribution on the earth surface when a fault condition occurs (1, 2, 3). While very crude approximations were available until the 70's, several computer methods have been more recently

  10. Nonlinear dynamic simulation of single- and multispool core engines, part 1: Computational method

    Microsoft Academic Search

    M. T. Schobeiri; M. Attia; C. Lippke

    1994-01-01

    A new computational method for accurate simulation of the nonlinear, dynamic behavior of single- and multispool core engines, turbofan engines, and power-generation gas turbine engines is presented in part 1. In order to perform the simulation, a modularly structured computer code has been developed that includes individual mathematical modules representing various engine components. The generic structure of the code enables

  11. A numerical method for the computation of profile loss of turbine blades

    Microsoft Academic Search

    A. L. Chandraker

    1985-01-01

    Two schemes are presented for computing the profile loss of turbine blades. The first, a generalized 'loss-correlation' scheme based on a set of semiempirical expressions, is an extension of the Ainley and Mathieson (1951) method. It predicts the profile loss closer to experimental results than existing similar schemes and is easy to implement on a small computing

  12. Conformal Mapping by Computationally Efficient Methods Stefan Pintilie and Ali Ghodsi

    E-print Network

    Zhu, Mu

    Conformal Mapping by Computationally Efficient Methods, Stefan Pintilie and Ali Ghodsi. Dimensionality-reduction methods such as Laplacian Eigenmaps (LEM) do not produce conformal maps. Post-processing techniques formulated as instances of SDP can produce a conformal map. However, the effectiveness of this approach is limited by the computational complexity of SDP

  13. A New Analytical Method for Computing Solvent-Accessible Surface Area of Macromolecules

    E-print Network

    A New Analytical Method for Computing Solvent-Accessible Surface Area of Macromolecules: In the calculation of thermodynamic properties and three-dimensional structures of macromolecules, such as proteins, it is important to have an efficient algorithm for computing the solvent-accessible surface area of macromolecules

  14. A Fast Method for Local Penetration Depth Computation Stephane Redon and Ming C. Lin

    E-print Network

    Paris-Sud XI, Université de

    A Fast Method for Local Penetration Depth Computation, Stephane Redon and Ming C. Lin, Department for determining an approximation of the local penetration information for intersecting polyhedral models the computation of the corresponding local penetration depths: for any pair of intersecting objects, we partition

  15. A Fast Method for Local Penetration Depth Computation Stephane Redon and Ming C. Lin

    E-print Network

    North Carolina at Chapel Hill, University of

    A Fast Method for Local Penetration Depth Computation, Stephane Redon and Ming C. Lin, Department for determining an approximation of the local penetration information for intersecting polyhedral models, non-convex models, we decouple the computation of the local penetration directions from

  16. Constructing analysis-suitable parameterization of computational domain from CAD boundary by variational harmonic method

    E-print Network

    Paris-Sud XI, Université de

    Constructing analysis-suitable parameterization of computational domain from CAD boundary for 2D and 3D isogeometric applications, which offers the possibility of seamless integration between CAD and CAE. The method uses the same type

  17. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  18. DEPARTMENT OF COMPUTER SCIENCE 2011 RESEARCH PUBLICATIONS, CREATIVE WORKS AND OTHER

    E-print Network

    Sun, Jing

    , P; GIMEL'FARB, G; KHALIFA, F; ELNAKIB, A; FALK, R; ABO EL-GHAR, M; SURI, J. 'Validation of a New; Suri, J. (ed.) Lung Imaging and Computer Aided Diagnosis, New York, USA, Taylor & Francis, 2011, J. (ed.) Lung Imaging and Computer Aided Diagnosis, New York, USA, Taylor & Francis, 2011, (Accepted

  19. VerSum: Verifiable Computations over Large Public Logs Jelle van den Hooff

    E-print Network

    Gummadi, Ramakrishna

    expensive computations over large and frequently changing data structures, such as the Bitcoin or Namecoin clients, and that it can easily keep up with Bitcoin's rate of new blocks with transactions. Keywords: Verifiable Computation; Cloud Computing. Systems such as Bitcoin [15] provide

  20. Toward dynamic and attribute based publication, discovery and selection for cloud computing

    Microsoft Academic Search

    Andrzej Goscinski; Michael Brock

    2010-01-01

    Cloud computing is an emerging paradigm where computing resources are offered over the Internet as scalable, on-demand (Web) services. While cloud vendors have concentrated their efforts on the improvement of performance, resource consumption and scalability, other cloud characteristics have been neglected. On the one hand cloud service providers face difficult problems of publishing services that expose resources, and on the

  1. CLOUD COMPUTING TECHNOLOGIES PROGRAM An eleven-week in-depth program in the principles, methods, and technologies of Cloud Computing

    E-print Network

    Schaefer, Marcus

    CLOUD COMPUTING TECHNOLOGIES PROGRAM An eleven-week in-depth program in the principles, methods, and technologies of Cloud Computing DePaul University's Cloud Computing Technologies Program provides a broad understanding of the different leading Cloud Computing technologies. The program is designed to quickly educate

  2. CLOUD COMPUTING FUNDAMENTALS PROGRAM An eleven-week in-depth program in the principles, methods, and technologies of Cloud Computing

    E-print Network

    Schaefer, Marcus

    CLOUD COMPUTING FUNDAMENTALS PROGRAM An eleven-week in-depth program in the principles, methods, and technologies of Cloud Computing DePaul University's Cloud Computing Fundamentals Program provides a comprehensive introduction to essential aspects of Cloud Computing. The program is designed to quickly educate

  3. Novel methods in computational analysis and design of protein-protein interactions : applications to phosphoregulated interactions

    E-print Network

    Joughin, Brian Alan

    2007-01-01

    This thesis presents a number of novel computational methods for the analysis and design of protein-protein complexes, and their application to the study of the interactions of phosphopeptides with phosphopeptide-binding ...

  4. COMPUTATIONAL METHODS FOR STUDYING THE INTERACTION BETWEEN POLYCYCLIC AROMATIC HYDROCARBONS AND BIOLOGICAL MACROMOLECULES

    EPA Science Inventory

    Computational Methods for Studying the Interaction between Polycyclic Aromatic Hydrocarbons and Biological Macromolecules. The mechanisms for the processes that result in significant biological activity of PAHs depend on the interaction of these molecules or their metabol...

  5. 29 CFR 2530.204-3 - Alternative computation methods for benefit accrual.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Alternative computation methods for benefit accrual. 2530.204-3 Section 2530...Relating to Labor (Continued) EMPLOYEE BENEFITS SECURITY ADMINISTRATION, DEPARTMENT OF...MINIMUM STANDARDS FOR EMPLOYEE PENSION BENEFIT PLANS UNDER THE EMPLOYEE RETIREMENT...

  6. Computational Method for Drug Target Search and Application in Drug Discovery

    E-print Network

    Chen, Yuzong

    Ligand-protein inverse docking has recently been introduced as a computer method for identification of potential protein targets of a drug. A protein structure database is searched to find proteins to which a drug can bind ...

  7. Principled computational methods for the validation discovery of genetic regulatory networks

    E-print Network

    Hartemink, Alexander J. (Alexander John), 1972-

    2001-01-01

    As molecular biology continues to evolve in the direction of high-throughput collection of data, it has become increasingly necessary to develop computational methods for analyzing observed data that are at once both ...

  8. Computational studies of hydrogen storage materials and the development of related methods

    E-print Network

    Mueller, Timothy Keith

    2007-01-01

    Computational methods, including density functional theory and the cluster expansion formalism, are used to study materials for hydrogen storage. The storage of molecular hydrogen in the metal-organic framework with formula ...

  9. Subsonic Flow over Unstalled Pitching Airfoil Computed by Euler Method

    Microsoft Academic Search

    Shuchi Yang; Shijun Luo; Feng Liu

    Subsonic flow about a sinusoidally pitching airfoil with mean angle of attack is studied with an unsteady compressible Euler flow solver. Fully attached flows are considered. The Euler method is evaluated extensively against wind-tunnel test data for two no-stall cases of the pitching-oscillating NACA 0012 airfoil in the literature. The wind tunnel walls are not considered in the
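The pitching motion in this record, sinusoidal oscillation about a nonzero mean angle of attack, has the standard form (the symbols below are ours, not from the record):

```latex
\alpha(t) = \alpha_m + \Delta\alpha \, \sin(\omega t)
```

Here \(\alpha_m\) is the mean angle of attack, \(\Delta\alpha\) the pitch amplitude, and \(\omega\) the circular frequency of the oscillation.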

  10. A spatial kinetics computational method for large fast reactors

    SciTech Connect

    Fletcher, J.K. (United Kingdom Atomic Energy Authority, Risley Technical Services, Risley, Warrington, Cheshire (GB))

    1989-12-01

    A method for solving the time-dependent diffusion and transport equation is described in which the flux Φ(r, t) at position r and time t takes the approximate form α(t)ψ(r, t), where α(t) depends solely on time. The treatment includes a heat transfer model, thus enabling temperature and expansion feedback effects to be incorporated into the solution.
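The factorization described in this abstract is the standard amplitude/shape splitting of spatial kinetics (our notation):

```latex
\Phi(\mathbf{r}, t) \approx \alpha(t)\, \psi(\mathbf{r}, t)
```

The amplitude α(t) carries the rapid time variation of the flux, while the shape function ψ(r, t) is assumed to vary slowly, which is what makes the time-dependent problem tractable for large reactors.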

  11. COMPUTATIONAL METHODS IN DECISION-MAKING ECONOMICS AND FINANCE

    Microsoft Academic Search

    Erricos John Kontoghiorghes; Berc Rustem; Stavros Siokos

    The value function of an American put option defined in a discrete domain may be given as the solution of a Linear Complementarity Problem (LCP). However, state-of-the-art methods that solve LCPs converge slowly. Recently, Dempster, Hutton & Richards have proposed a Linear Program (LP) formulation of the American put and a special simplex algorithm that exploits the option
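The LCP structure mentioned in this abstract can be illustrated with a minimal projected-SOR solver, a classical (and, as the abstract notes, slowly converging) method of the kind the LP formulation is meant to improve on. The matrix, obstacle vector, and tolerances below are toy values, not taken from the cited work:

```python
import numpy as np

def projected_sor(A, b, g, omega=1.2, tol=1e-10, max_iter=10_000):
    """Solve the LCP min(A x - b, x - g) = 0 by projected SOR.

    Each Gauss-Seidel sweep is followed by projection onto x >= g,
    which enforces the early-exercise (obstacle) constraint of an
    American option.
    """
    x = np.maximum(b / np.diag(A), g)  # crude feasible starting guess
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel update for component i ...
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            # ... over-relaxed, then projected onto the obstacle
            x[i] = max(g[i], x[i] + omega * (gs - x[i]))
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Toy instance: SPD tridiagonal A, as arises from an implicit
# finite-difference step of the Black-Scholes operator.
n = 5
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -0.5), 1)
     + np.diag(np.full(n - 1, -0.5), -1))
b = np.zeros(n)
g = np.array([1.0, 0.5, 0.2, 0.0, 0.0])  # obstacle (payoff) vector
x = projected_sor(A, b, g)
```

At the solution, each component either sits on the obstacle (x_i = g_i, "exercise") or satisfies the discretized PDE ((A x - b)_i = 0, "hold"), which is exactly the complementarity condition.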

  12. Computer method for design of acoustic liners for turbofan engines

    NASA Technical Reports Server (NTRS)

    Minner, G. L.; Rice, E. J.

    1976-01-01

    A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated which was the difference between the estimated generation spectrum and a flat annoyance-weighted goal attenuated spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (which is a result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation was carried through in order to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.
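The target-spectrum step of the design package, subtracting the annoyance-weighted goal from the estimated generation spectrum, can be sketched as follows; the band frequencies and levels are hypothetical, not from the report:

```python
import numpy as np

# Hypothetical 1/3-octave band center frequencies (Hz) and levels (dB);
# the real inputs would come from the fan-noise correlations.
bands = np.array([500, 1000, 2000, 4000])
generated_spl = np.array([95.0, 100.0, 98.0, 92.0])  # estimated source spectrum
goal_spl = np.array([88.0, 88.0, 88.0, 88.0])        # flat annoyance-weighted goal

# Target attenuation: how much the liner must remove in each band
# (clipped at zero where the goal is already met).
target_attenuation = np.maximum(generated_spl - goal_spl, 0.0)
```

This target spectrum is then matched against liner attenuation as a function of the design variables to pick the required impedance, which in turn fixes the perforate/honeycomb geometry.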

  13. Computer simulations of 2-dimensional photonic crystal waveguide by method of moment

    Microsoft Academic Search

    Masahiro Tanaka; Kazuo Tanaka

    2010-01-01

    In this paper, we perform computer simulations of a 2-dimensional photonic crystal waveguide composed of photonic crystal and slab waveguides. The slab waveguides work as input/output ports. The simulations use the method of moments based on the guided-mode extracted integral equations proposed by the authors [1], [2]. We also apply the fast multipole method [3] on

  14. [Text mining, a method for computer-assisted analysis of scientific texts, demonstrated by an analysis of author networks].

    PubMed

    Hahn, P; Dullweber, F; Unglaub, F; Spies, C K

    2014-06-01

    Searching for relevant publications is becoming more difficult with the increasing number of scientific articles. Text mining, a specific form of computer-based data analysis, may be helpful in this context. Two worked examples illustrate graphically how text analysis programs can highlight relations between authors and find relevant publications on a specific subject. PMID:24810335
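A minimal sketch of the author-network idea behind this record: count co-authorship pairs across a bibliography and report the strongest link. The records below are hypothetical, loosely echoing the article's own author list:

```python
from collections import Counter
from itertools import combinations

# Hypothetical bibliography: each record lists its authors.
records = [
    ["Hahn P", "Dullweber F", "Unglaub F"],
    ["Hahn P", "Spies C K"],
    ["Hahn P", "Unglaub F", "Spies C K"],
]

# Build a weighted co-authorship graph: one edge per author pair,
# weighted by the number of joint publications.
edges = Counter()
for authors in records:
    for a, b in combinations(sorted(authors), 2):
        edges[(a, b)] += 1

strongest = edges.most_common(1)[0]
```

Real text-mining pipelines would extract the author lists from a database such as PubMed and pass the weighted edge list to a graph-visualization tool.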

  15. The Role of Analytic Methods in Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Posey, J. W.

    2003-01-01

    As air traffic grows, annoyance produced by aircraft noise will grow unless new aircraft produce no objectionable noise outside airport boundaries. Such ultra-quiet aircraft must be of revolutionary design, having unconventional planforms and most likely with propulsion systems highly integrated with the airframe. Sophisticated source and propagation modeling will be required to properly account for effects of the airframe on noise generation, reflection, scattering, and radiation. It is tempting to say that since all the effects are included in the Navier-Stokes equations, time-accurate CFD can provide all the answers. Unfortunately, the computational time required to solve a full aircraft noise problem will be prohibitive for many years to come. On the other hand, closed form solutions are not available for such complicated problems. Therefore, a hybrid approach is recommended in which analysis is taken as far as possible without omitting relevant physics or geometry. Three examples are given of recently reported work in broadband noise prediction, ducted fan noise propagation and radiation, and noise prediction for complex three-dimensional jets.

  16. COMPUTER SUPPORTED COOPERATIVE LEARNING AND KNOWING FOR PUBLIC ADMINISTRATIONS ENGAGING IN EGOVERNMENT PROJECTS

    Microsoft Academic Search

    Maurizio Marchese; Filippo Bonella; Andrea Silli; Gianni Jacucci

    2004-01-01

    This paper reports on current research aimed at the definition of a coherent theoretical and architectural approach to support the development of effective Knowledge Management environments capable of assisting Public Administrations in the design and deployment of services to citizens and enterprises. We briefly present an overview of Knowledge Management conceptual frameworks useful, in our opinion, in the management, deployment

  17. Computational Methods for Stability and Control (COMSAC): The Time Has Come

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.

    2005-01-01

    Powerful computational fluid dynamics (CFD) tools have emerged that offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable. This paper summarizes the efforts of four organizations to apply high-end CFD tools to the challenges of the stability and control arena. The general motivation and backdrop for these efforts are summarized, along with examples of current applications.

  18. The Use of Qsar and Computational Methods in Drug Design

    Microsoft Academic Search

    Fania Bajot

    2010-01-01

    The application of quantitative structure–activity relationships (QSARs) has significantly impacted the paradigm of drug discovery. Following the successful utilization of linear solvation free-energy relationships (LSERs), numerous 2D- and 3D-QSAR methods have been developed, most of them based on descriptors for hydrophobicity, polarizability, ionic interactions, and hydrogen bonding. QSAR models allow for the calculation of physicochemical properties (e.g., lipophilicity), the prediction
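A 2D-QSAR model of the kind this abstract describes reduces, in its simplest form, to a linear fit of activity against molecular descriptors. The descriptor values and activities below are invented for illustration, not taken from any dataset:

```python
import numpy as np

# Hypothetical training set: each row holds descriptor values for one
# compound (logP for hydrophobicity, polarizability, H-bond donor count);
# y is a measured activity such as pIC50. All numbers are illustrative.
X = np.array([
    [1.2, 10.5, 2],
    [2.8, 12.1, 1],
    [0.5,  9.8, 3],
    [3.1, 13.0, 0],
    [1.9, 11.2, 2],
], dtype=float)
y = np.array([5.1, 6.4, 4.6, 6.9, 5.7])

# Classical 2D-QSAR: a linear free-energy-style model
#   activity ~ w0 + w . descriptors
# fitted by ordinary least squares (intercept via a column of ones).
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

predicted = A @ w
```

In practice, such a fitted model is validated on held-out compounds before being used to predict properties like lipophilicity or activity for untested structures.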

  19. Virtual Space Exploration: Let's Use Web-Based Computer Game Technology to Boost IYA 2009 Public Interest

    NASA Astrophysics Data System (ADS)

    Hussey, K.; Doronila, P.; Kulikov, A.; Lane, K.; Upchurch, P.; Howard, J.; Harvey, S.; Woodmansee, L.

    2008-09-01

    With the recent releases of both Google's "Sky" and Microsoft's "WorldWide Telescope" and the large and increasing popularity of video games, the time is now for using these tools, and those crafted at NASA's Jet Propulsion Laboratory, to engage the public in astronomy like never before. This presentation will use "Cassini at Saturn Interactive Explorer" (CASSIE) to demonstrate the power of web-based video-game engine technology in providing the public a "first-person" look at space exploration. The concept of virtual space exploration is to allow the public to "see" objects in space as if they were either riding aboard or "flying" next to an ESA/NASA spacecraft. Using this technology, people are able to immediately "look" in any direction from their virtual location in space and "zoom in" at will. Users can position themselves near Saturn's moons and observe the Cassini Spacecraft's "encounters" as they happened. Whenever real data for their "view" exists, it is incorporated into the scene. Where data is missing, a high-fidelity simulation of the view is generated to fill in the scene. The observer can also change the time of observation into the past or future. Our approach is to utilize and extend the Unity 3d game development tool, currently in use by the computer gaming industry, along with JPL mission-specific telemetry and instrument data to build our virtual explorer. The potential of game technology for the development of educational curricula and public engagement is huge. We believe this technology can revolutionize the way the general public and the planetary science community view ESA/NASA missions, and it provides an educational context that is attractive to the younger generation. This technology is currently under development and application at JPL to assist our missions in viewing their data, communicating with the public and visualizing future mission plans.
Real-time demonstrations of CASSIE and other applications in development will be shown. Astronomy is one of the oldest basic sciences. We should use one of today's newest communications technologies available to engage the public. We should embrace the use of web-based gaming technology to prepare the world for the International Year of Astronomy 2009.

  20. Computational observers and visualization methods for stereoscopic medical imaging.

    PubMed

    Zafar, Fahad; Yesha, Yaacov; Badano, Aldo

    2014-09-22

    As stereoscopic display devices become common, their image quality assessment becomes increasingly important. Most studies conducted on 3D displays are based on psychophysics experiments in which humans rate their experience on detection tasks. The physical measurements do not map to effects on signal detection performance, and human observer study results are often subjective and difficult to generalize. We designed a computational stereoscopic observer approach, inspired by the mechanisms of stereopsis in human vision, for task-based image assessment that makes binary decisions based on a set of image pairs. The stereo observer is constrained to a left and a right image generated using a visualization operator to render voxel datasets. We analyze white noise and lumpy backgrounds using volume rendering techniques. Our simulation framework generalizes many different types of model observers, including existing 2D and 3D observers, and provides the flexibility to formulate a stereo model observer following the principles of stereoscopic viewing. This methodology has the potential to replace human observer studies when exploring issues with stereo display devices to be used in medical imaging. We show results quantifying the changes in performance when varying the stereo angle, as measured by an ideal linear stereoscopic observer. Our findings indicate an increase in performance of about 13-18% for white noise and 20-46% for lumpy backgrounds as the stereo angle is varied from 0 to 30. The applicability of this observer extends to stereoscopic displays used in medical and entertainment imaging applications. PMID:25321697
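The ideal linear observer mentioned in this abstract can be illustrated for the white-noise case, where it reduces to a matched filter applied to the concatenated left/right pair. The signal shape, image size, and trial counts below are our toy choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# A known 1-D signal embedded in white noise, rendered as a left/right
# image pair; for white noise the ideal linear observer is the matched
# filter applied to the concatenated stereo pair.
n_pix = 64
signal = np.zeros(n_pix)
signal[28:36] = 1.0
stereo_signal = np.concatenate([signal, signal])  # same signal in both views

def trial(signal_present):
    noise = rng.normal(0.0, 1.0, 2 * n_pix)
    image = noise + (stereo_signal if signal_present else 0.0)
    return stereo_signal @ image  # matched-filter test statistic

scores_present = np.array([trial(True) for _ in range(2000)])
scores_absent = np.array([trial(False) for _ in range(2000)])

# Detectability index d': separation of the two score distributions.
# For this setup the theoretical value is sqrt(signal energy) = 4.
d_prime = (scores_present.mean() - scores_absent.mean()) / np.sqrt(
    0.5 * (scores_present.var() + scores_absent.var()))
```

Varying the stereo geometry (e.g., rendering the two views at different stereo angles) changes the effective signal in each eye's image, which is how such a model measures the performance differences the abstract reports.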