For comprehensive and current results, perform a real-time search at Science.gov.

1

Nordic School of Public Health. Computer Software in Epidemiology / Statistical Methods in Epidemiology: Open Source Solutions. Mark Myatt, December 2001. Copyright Mark Myatt. Permission is granted ... the R environment for data analysis and graphics to work with epidemiological data. Topics covered...

Gallagher, Colin

2

spaces such as the real plane. We investigate the expressive power and computational complexity of logics obtained in this way. It turns out that our modal logics have the same expressive power as the two results regarding the expressive power, but weaker results regarding the complexity. 1. Introduction

Wolter, Frank

3

Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation of high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. The singleton early premature county birth rate, in counties with population size over 100,000 persons, provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including black proportion, obesity and diabetes, sexually transmitted infection rates, mother's age, income, marriage rates, pollution, and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for ways to intervene and reduce health disparities for preterm births. PMID:25464130

Kershenbaum, Anne D; Langston, Michael A; Levine, Robert S; Saxton, Arnold M; Oyana, Tonny J; Kilbourne, Barbara J; Rogers, Gary L; Gittner, Lisaann S; Baktash, Suzanne H; Matthews-Juarez, Patricia; Juarez, Paul D

2014-12-01
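The pipeline this abstract describes (correlation graph, dense subgraph, latent factor, regression) can be sketched in miniature. Everything below, from the toy data to the 0.9 correlation threshold, is our own illustration, not the study's actual paraclique algorithm or data:

```python
from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# toy "county-level" predictors (columns) and outcome
variables = {
    "x1": [1.0, 2.0, 3.0, 4.0, 5.0],
    "x2": [1.1, 2.1, 2.9, 4.2, 4.8],   # highly correlated with x1
    "x3": [5.0, 1.0, 4.0, 2.0, 3.0],   # unrelated noise
}
y = [2.0, 4.1, 5.9, 8.2, 9.9]

# 1. correlation graph: connect variables whose |r| exceeds a threshold
edges = [{a, b} for a, b in combinations(variables, 2)
         if abs(pearson(variables[a], variables[b])) > 0.9]

# 2. a densely connected group of variables stands in for a paraclique;
#    collapse it into one latent factor by averaging its members
group = sorted(set().union(*edges))
factor = [sum(c) / len(group) for c in zip(*(variables[g] for g in group))]

# 3. ordinary least squares of y on the single latent factor (closed form)
fm, ym = sum(factor) / len(factor), sum(y) / len(y)
beta = sum((f - fm) * (v - ym) for f, v in zip(factor, y)) / \
       sum((f - fm) ** 2 for f in factor)
print(group, round(beta, 2))
```

The real study resolves many such paracliques into factors and fits a multivariable model; the residual-based county lists then come from comparing observed to predicted rates.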

4

Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation of high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. The singleton early premature county birth rate, in counties with population size over 100,000 persons, provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including black proportion, obesity and diabetes, sexually transmitted infection rates, mother's age, income, marriage rates, pollution, and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for ways to intervene and reduce health disparities for preterm births. PMID:25464130

Kershenbaum, Anne D.; Langston, Michael A.; Levine, Robert S.; Saxton, Arnold M.; Oyana, Tonny J.; Kilbourne, Barbara J.; Rogers, Gary L.; Gittner, Lisaann S.; Baktash, Suzanne H.; Matthews-Juarez, Patricia; Juarez, Paul D.

2014-01-01

5

The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

Sellers, C.; Fox, B.; Paulz, J.

1996-03-01

6

47 CFR 80.771 - Method of computing coverage.

Code of Federal Regulations, 2011 CFR

... 2011-10-01 false Method of computing coverage. 80.771 Section 80...THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method of computing coverage. Compute the +17 dBu...

2011-10-01

7

47 CFR 80.771 - Method of computing coverage.

Code of Federal Regulations, 2010 CFR

... 2010-10-01 false Method of computing coverage. 80.771 Section 80...THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method of computing coverage. Compute the +17 dBu...

2010-10-01

8

Computer Methods in Applied Mechanics and Engineering

Computer Methods in Applied Mechanics and Engineering (Elsevier), Comput. Methods Appl. Mech. Engrg. 129 (1996) 349-370. Superconvergent extraction of flux intensity factors ... and first investigations. Most of the work has been concerned with estimation and control of error in the energy norm (see [1]...

Yosibash, Zohar

9

Publications of Martin Henz School of Computing

Publications of Martin Henz, School of Computing, National University of Singapore, 3 Science Drive 2, Singapore 117543. Email: henz@comp.nus.edu.sg. September 4, 2007. This is the list of publications of Martin Henz: http://www.comp.nus.edu.sg/~henz/publications/. Books: [1] Martin Henz. Objects for Concurrent Constraint Programming. The Kluwer International Series...

Henz, Martin

10

Public Databases Supporting Computational Toxicology

A major goal of the emerging field of computational toxicology is the development of screening-level models that predict potential toxicity of chemicals from a combination of mechanistic in vitro assay data and chemical structure descriptors. In order to build these models, resea...

11

Computational Methods for Crashworthiness

NASA Technical Reports Server (NTRS)

Presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Crashworthiness held at Langley Research Center on 2-3 Sep. 1992 are included. The presentations addressed activities in the area of impact dynamics. Workshop attendees represented NASA, the Army and Air Force, the Lawrence Livermore and Sandia National Laboratories, the aircraft and automotive industries, and academia. The workshop objectives were to assess the state-of-technology in the numerical simulation of crash and to provide guidelines for future research.

Noor, Ahmed K. (compiler); Carden, Huey D. (compiler)

1993-01-01

12

Computational Methods and Stochastic Models

Computational Methods and Stochastic Models in Proteomics. Boguslaw Kluge, Faculty of Mathematics. Keywords: proteomics, Metropolis-Hastings algorithm, Expectation-Maximization, liquid chromatography mass spectrometry. ... problems arising from mass spectrometry (MS) data processing. It describes computational methods for solving them.

Bechler, Pawel

13

Protecting Public-Access Computers in Libraries.

ERIC Educational Resources Information Center

Describes one public library's development of a computer-security plan, along with helpful products used. Discussion includes Internet policy, physical protection of hardware, basic protection of the operating system and software on the network, browser dilemmas and maintenance, creating clear intuitive interface, and administering fair use and…

King, Monica

1999-01-01

14

Computational Methods for Simulating Quantum Computers

Computational Methods for Simulating Quantum Computers. H. De Raedt and K. Michielsen. This review gives a survey of numerical algorithms and software to simulate quantum computers. It covers the basic concepts of quantum computation and quantum algorithms, and the use of simulation software for ideal and physical models of quantum computers. Keywords: quantum computation, computer simulation, time-integration algorithms.

15

On computational methods for crashworthiness

NASA Technical Reports Server (NTRS)

The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

Belytschko, T.

1992-01-01

16

Closing the "Digital Divide": Building a Public Computing Center

ERIC Educational Resources Information Center

The public computing center offers an economical and environmentally friendly model for providing additional public computer access when and where it is needed. Though not intended to be a replacement for a full-service branch, the public computing center does offer a budget-friendly option for quickly expanding high-demand services into the…

Krebeck, Aaron

2010-01-01

17

Systems Science Methods in Public Health

Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example in preparing for global pandemics. However, systems science study designs and analytic methods remain underutilized and are not widely featured in public health curricula or training. In this review we present an argument for the utility of systems science methods in public health, introduce three important systems science methods (system dynamics, network analysis, and agent-based modeling), and provide three case studies where these methods have been used to answer important public health science questions in the areas of infectious disease, tobacco control, and obesity. PMID:22224885

Luke, Douglas A.; Stamatakis, Katherine A.

2012-01-01
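As a flavor of the "system dynamics" method this review names, here is the classic SIR compartmental model of an epidemic integrated with explicit Euler steps. The parameter values are purely illustrative and are not taken from the article:

```python
# SIR model: susceptible -> infected -> recovered, as coupled rate equations
beta, gamma = 0.3, 0.1      # transmission and recovery rates (per day), assumed
s, i, r = 0.99, 0.01, 0.0   # population fractions; 1% initially infected
dt, days = 0.1, 160

for _ in range(int(days / dt)):
    new_infections = beta * s * i * dt   # mass-action contact term
    recoveries = gamma * i * dt
    s -= new_infections
    i += new_infections - recoveries
    r += recoveries

print(round(s, 3), round(i, 3), round(r, 3))
```

With beta/gamma = 3, the epidemic burns through most of the susceptible pool before dying out, which is the kind of aggregate insight system-dynamics models provide for pandemic preparedness.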

18

77 FR 4568 - Annual Computational Science Symposium; Public Conference

Federal Register 2010, 2011, 2012, 2013, 2014

...Annual Computational Science Symposium; Public...Annual Computational Science Symposium...advance computational science. At the conference...specific challenges in accessing and reviewing data to support product development. These...

2012-01-30

19

Code of Federal Regulations, 2011 CFR

...and donation of public domain computer software. 201.26 Section 201.26 Patents...and donation of public domain computer software. (a) General. This section...the deposit of public domain computer software under section 805 of Public...

2011-07-01

20

Code of Federal Regulations, 2012 CFR

...and donation of public domain computer software. 201.26 Section 201.26 Patents...and donation of public domain computer software. (a) General. This section...the deposit of public domain computer software under section 805 of Public...

2012-07-01

21

Code of Federal Regulations, 2014 CFR

...and donation of public domain computer software. 201.26 Section 201.26 Patents...and donation of public domain computer software. (a) General. This section...the deposit of public domain computer software under section 805 of Public...

2014-07-01

22

Code of Federal Regulations, 2010 CFR

2010-07-01

23

Code of Federal Regulations, 2013 CFR

2013-07-01

24

Computational Methods in Drug Discovery

Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity, depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from the literature. PMID:24381236

Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

2014-01-01
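One of the ligand-based similarity ideas surveyed here, fingerprint comparison via the Tanimoto coefficient, fits in a few lines. The bit sets below are hand-made stand-ins, not real chemical fingerprints, and the ligand names are invented:

```python
def tanimoto(a, b):
    """Tanimoto coefficient: |A & B| / |A | B| over sets of 'on' bits."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# hypothetical binary fingerprints: each set lists the "on" bit positions
known_actives = {
    "ligand_A": {1, 4, 7, 9, 15},
    "ligand_B": {2, 4, 8, 9, 16},
}
candidate = {1, 4, 7, 9, 12}

# score the candidate against every known active and pick the closest
scores = {name: round(tanimoto(candidate, fp), 3)
          for name, fp in known_actives.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

In a real campaign the fingerprints would come from a cheminformatics toolkit and the similarity threshold for calling a candidate "likely active" would be calibrated against known data.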

25

Computational Methods Development at Ames

NASA Technical Reports Server (NTRS)

This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents for the above research and speculates on its future course.

Kwak, Dochan; Smith, Charles A. (Technical Monitor)

1998-01-01

26

Cryptography Challenges for Computational Privacy in Public Clouds

Cryptography Challenges for Computational Privacy in Public Clouds. Sashank Dara, Cisco Systems. ... security, but its readiness for this new generational shift of computing platform, i.e. Cloud Computing ... into the underpinnings of Computational Privacy and lead to better solutions. I. INTRODUCTION. Cloud computing came out...

27

Public Review Draft: A Method for Assessing Carbon Stocks, Carbon

Public Review Draft: A Method for Assessing Carbon Stocks, Carbon Sequestration, and Greenhouse ... Zhu, Zhiliang, 2010, Public review draft: A method for assessing carbon stocks, carbon sequestration...

28

NIST Special Publication 500-299: NIST Cloud Computing Security Reference Architecture

NIST Special Publication 500-299, NIST Cloud Computing Security Reference Architecture. NIST Cloud Computing Security Working Group, NIST Cloud Computing Program. Acknowledgements: NIST gratefully acknowledges the broad contributions of the NIST Cloud Computing Security Working Group...

29

Access Control in Publicly Verifiable Outsourced Computation James Alderman

Publicly Verifiable Outsourced Computation (PVC) allows devices with restricted resources to delegate ... Thus there is a need to apply access control mechanisms in PVC environments. In this work, we define a new framework for Publicly Verifiable Outsourced Computation with Access Control (PVC-AC) that applies...

30

Computing Access in Public Spaces: A Case Study

Computing access in public space (CAPS) is described as bringing a computing system into a public place, such as a library, an airport terminal, a hospital waiting room, or a museum. These environments have unique characteristics that make designing applications for CAPS a challenging experience. This article presents the case study of one such CAPS environment. The interactive application for the...

Rachelle S. Heller; Jon McKeeby

1993-01-01

31

Research on optimal computing model based on public crisis management

With public crises occurring frequently at home and abroad, public crisis management has become a hot research topic. The crisis-management capability of the public crisis management sector, and the design of a scientific evaluation system for it, are affected by many complex factors. An appropriate computational model and analysis system can help achieve the goal of ensuring social security. The...

Dai Bibo; Jia Niyan

2011-01-01

32

Computational Methods for Simulating Quantum Computers

This review gives a survey of numerical algorithms and software to simulate quantum computers. It covers the basic concepts of quantum computation and quantum algorithms and includes a few examples that illustrate the use of simulation software for ideal and physical models of quantum computers.

H. De Raedt; K. Michielsen

2004-08-02
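The core of the numerical simulation this survey covers is linear algebra on a vector of 2^n complex amplitudes. A minimal sketch (our own toy code, not the authors' software) that prepares a Bell state with a Hadamard and a CNOT:

```python
import math

def apply_1q(state, gate, qubit):
    """Apply a 2x2 unitary to one qubit (0 = least-significant bit)."""
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> qubit) & 1
        for new_bit in (0, 1):
            j = i ^ ((bit ^ new_bit) << qubit)
            out[j] += gate[new_bit][bit] * amp
    return out

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1 (a basis-state permutation)."""
    out = list(state)
    for i in range(len(state)):
        if (i >> control) & 1:
            out[i] = state[i ^ (1 << target)]
    return out

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [1 + 0j, 0j, 0j, 0j]        # two qubits in |00>
state = apply_1q(state, H, 0)       # Hadamard on qubit 0
state = apply_cnot(state, 0, 1)     # entangle: CNOT(control=0, target=1)
probs = [abs(a) ** 2 for a in state]
print(probs)   # only |00> and |11> survive, each with probability 1/2
```

The exponential 2^n memory cost of the state vector is exactly why such simulators, as the review discusses, are limited to modest numbers of qubits.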

33

[Public health risk maps using geostatistical methods].

The purpose of this paper was to demonstrate an application of geostatistical methods to public health risk maps through the identification of areas with elevated concentrations of heavy metals. The study focused on the element lead (Pb) from aerial transportation or loading of particles due to soil leaching in an area with major urban and industrial concentration in the Baixada Santista on the coastland of São Paulo State, Brazil. Maps with the spatial distribution of lead were produced using ordinary kriging; subsequently, indicator kriging was performed to identify soil sites with contamination levels higher than the maximum acceptable level defined by the São Paulo State Environmental Control Agency. The resulting maps showed areas with increased probability of public health risk. The methodology proved to be a promising approach for decision-making related to public health policies and environmental planning. PMID:15692648

Lourenço, Roberto Wagner; Landim, Paulo Milton Barbosa

2005-01-01
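The ordinary-kriging step this paper uses can be illustrated in miniature: the weights for one unsampled location solve a variogram system with an unbiasedness constraint forcing them to sum to 1. The exponential variogram, its parameters, and the three sample points below are all invented for illustration; they are not the study's data:

```python
import math

def gamma(h, sill=1.0, rng=3.0):
    """Exponential variogram model (assumed form, zero nugget)."""
    return sill * (1.0 - math.exp(-h / rng))

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small kriging system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# sample locations (x, y) with measured lead concentrations (made up)
pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
vals = [120.0, 80.0, 100.0]
target = (1.0, 1.0)

d = lambda p, q: math.dist(p, q)
n = len(pts)
# ordinary kriging system: variogram matrix bordered by the unbiasedness row
A = [[gamma(d(pts[i], pts[j])) for j in range(n)] + [1.0] for i in range(n)]
A.append([1.0] * n + [0.0])
b = [gamma(d(p, target)) for p in pts] + [1.0]
w = solve(A, b)[:n]                       # kriging weights (Lagrange term dropped)
estimate = sum(wi * vi for wi, vi in zip(w, vals))
print(round(sum(w), 6), round(estimate, 1))
```

The estimate leans toward the nearest sample, as expected; indicator kriging then repeats this machinery on 0/1 exceedance indicators to map the probability of crossing the regulatory threshold.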

34

Special Publication 500-293 US Government Cloud Computing

Special Publication 500-293 (Draft): US Government Cloud Computing Technology Roadmap, Volume I, Release 1.0 (Draft). High-Priority Requirements to Further USG Agency Cloud Computing Adoption. Lee Badger ... Standards Acceleration to Jumpstart the Adoption of Cloud Computing (SAJACC), Security, and Standards working groups. We...

35

Special Publication 500-293 US Government Cloud Computing

Special Publication 500-293 (Draft): US Government Cloud Computing Technology Roadmap, Volume II ... and Dawn Leaf, NIST Cloud Computing Program, Information Technology Laboratory. ... Standards Acceleration to Jumpstart the Adoption of Cloud Computing (SAJACC), Security, and Standards Roadmap Working Groups. We especially...

36

Computers and Curricula in the New Jersey Secondary Public Schools.

ERIC Educational Resources Information Center

This 1978 survey of 408 New Jersey secondary public schools found that 242 of them use a computer for instructional purposes. Primarily the computer is used in teaching students to write programs and as a computational aid to problem solving, with mathematics departments making the greatest use of these applications. The survey results justify the…

Versteegh-Limberg, Joyce E. A.

37

Computational Methods Minor Department of Computer Science

to Bioinformatics; 3. sample courses in cognate departments: · AVA-160 Introduction to Digital Art · AVA-270 Processed Pixel · AVA-363 3D Computer Modeling · BIO-320 Ecology · BIO-384 Molecular Genetics · ECO-352...

Barr, Valerie

38

A Computer-Assisted Instruction in Teaching Abstract Statistics to Public Affairs Undergraduates

ERIC Educational Resources Information Center

This article attempts to demonstrate the applicability of a computer-assisted instruction supported with simulated data in teaching abstract statistical concepts to political science and public affairs students in an introductory research methods course. The software is called the Elaboration Model Computer Exercise (EMCE) in that it takes a great…

Ozturk, Ali Osman

2012-01-01

39

Tejp: Ubiquitous Computing as Expressive Means of Personalising Public Space

We present the project Tejp, which aims at exploring the potential of ubiquitous computing as an expressive means of personalising public space. The project consists of a series of experiments in which users deploy open low-tech prototypes in urban settings to create layers of personal information and meaning in public space through the parasiting of physical environments. Focusing the experiments

Margot Jacobs; Lalya Gaye; Lars Erik Holmquist

2003-01-01

40

Public Pervasive Computing: Making the Invisible Visible

The increasing deployment of pervasive computing technologies in urban environments has inspired researchers to explore the intersections between physical, social, and digital domains. The multidisciplinary Just-for-Us project is developing a mobile Web service designed to facilitate new forms of interaction by adapting content to the user's physical and social context. These trends have motivated researchers within the human-computer interaction (HCI)

Jesper Kjeldskov; Jeni Paay

2006-01-01

41

How You Can Protect Public Access Computers "and" Their Users

ERIC Educational Resources Information Center

By providing the public with online computing facilities, librarians make available a world of information resources beyond their traditional print materials. Internet-connected computers in libraries greatly enhance the opportunity for patrons to enjoy the benefits of the digital age. Unfortunately, as hackers become more sophisticated and…

Huang, Phil

2007-01-01

42

Evolution as Computation Evolutionary Theory (accepted for publication)

1/21/05. Evolution as Computation. Evolutionary Theory (accepted for publication). By: John E. Mayfield, jemayf@iastate.edu. Key words: Evolution, Computation, Complexity, Depth. Running head: Evolution ... of evolution must include life and also non-living processes that change over time in a manner similar...

Mayfield, John

43

Wildlife software: procedures for publication of computer software

Computers and computer software have become an integral part of the practice of wildlife science. Computers now play an important role in teaching, research, and management applications. Because of the specialized nature of wildlife problems, specific computer software is usually required to address a given problem (e.g., home range analysis). This type of software is not usually available from commercial vendors and therefore must be developed by those wildlife professionals with particular skill in computer programming. Current journal publication practices generally prevent a detailed description of computer software associated with new techniques. In addition, peer review of journal articles does not usually include a review of associated computer software. Thus, many wildlife professionals are usually unaware of computer software that would meet their needs or of major improvements in software they commonly use. Indeed most users of wildlife software learn of new programs or important changes only by word of mouth.

Samuel, M.D.

1990-01-01

44

Methods and applications in computational protein design

In this thesis, we summarize our work on applications and methods for computational protein design. First, we apply computational protein design to address the problem of degradation in stored proteins. Specifically, we ...

Biddle, Jason Charles

2010-01-01

45

The Public-Access Computer Systems Forum: A Computer Conference on BITNET.

ERIC Educational Resources Information Center

Describes the Public Access Computer Systems Forum (PACS), a computer conference that deals with all computer systems that libraries make available to patrons. Areas discussed include how to subscribe to PACS, message distribution and retrieval capabilities, subscriber lists, documentation, generic list server capabilities, and other…

Bailey, Charles W., Jr.

1990-01-01

46

Optimization Methods for Computer Animation.

ERIC Educational Resources Information Center

Emphasizing the importance of economy and efficiency in the production of computer animation, this master's thesis outlines methodologies that can be used to develop animated sequences with the highest quality images for the least expenditure. It is assumed that if computer animators are to be able to fully exploit the available resources, they…

Donkin, John Caldwell

47

32 CFR 310.52 - Computer matching publication and review requirements.

Code of Federal Regulations, 2014 CFR

...2014-07-01 2014-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

2014-07-01

48

32 CFR 310.52 - Computer matching publication and review requirements.

Code of Federal Regulations, 2013 CFR

...2013-07-01 2013-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

2013-07-01

49

32 CFR 310.52 - Computer matching publication and review requirements.

Code of Federal Regulations, 2012 CFR

...2012-07-01 2012-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

2012-07-01

50

32 CFR 310.52 - Computer matching publication and review requirements.

Code of Federal Regulations, 2011 CFR

...2011-07-01 2011-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

2011-07-01

51

32 CFR 310.52 - Computer matching publication and review requirements.

Code of Federal Regulations, 2010 CFR

...2010-07-01 2010-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

2010-07-01

52

Gender and Public Access Computing: An International Perspective

Information and Communication Technologies (ICTs), and public access to computers with Internet connectivity in particular, can assist community development efforts and help bridge the so-called digital divide. However, use of ICT is not gender neutral. Technical, social, and cultural barriers emphasize women’s exclusion from the benefits of ICT for development. This paper offers a qualitative analysis of the benefits of

Allison Terry; Ricardo Gomez

2011-01-01

53

Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

ERIC Educational Resources Information Center

The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

Jayakar, Krishna; Park, Eun-A

2012-01-01

54

BOINC: A System for Public-Resource Computing and Storage

BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. We describe the goals...

David P. Anderson

2004-01-01

55

Computational methods for unsteady transonic flows

NASA Technical Reports Server (NTRS)

Computational methods for unsteady transonic flows are surveyed with emphasis upon applications to aeroelastic analysis and flutter prediction. Computational difficulty is discussed with respect to type of unsteady flow; attached, mixed (attached/separated) and separated. Significant early computations of shock motions, aileron buzz and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

Edwards, John W.; Thomas, James L.

1987-01-01

56

Computational methods for unsteady transonic flows

NASA Technical Reports Server (NTRS)

Computational methods for unsteady transonic flows are surveyed with emphasis on prediction. Computational difficulty is discussed with respect to type of unsteady flow; attached, mixed (attached/separated) and separated. Significant early computations of shock motions, aileron buzz and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed, and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

Edwards, John W.; Thomas, J. L.

1987-01-01

57

Teaching Practical Public Health Evaluation Methods

ERIC Educational Resources Information Center

Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

Davis, Mary V.

2006-01-01

58

Distributed Data Mining using a Public Resource Computing Framework

NASA Astrophysics Data System (ADS)

The public resource computing paradigm is often used as a successful and low cost mechanism for the management of several classes of scientific and commercial applications that require the execution of a large number of independent tasks. Public computing frameworks, also known as “Desktop Grids”, exploit the computational power and storage facilities of private computers, or “workers”. Despite the inherent decentralized nature of the applications for which they are devoted, these systems often adopt a centralized mechanism for the assignment of jobs and distribution of input data, as is the case for BOINC, the most popular framework in this realm. We present a decentralized framework that aims at increasing the flexibility and robustness of public computing applications, thanks to two basic features: (i) the adoption of a P2P protocol for dynamically matching the job specifications with the worker characteristics, without relying on centralized resources; (ii) the use of distributed cache servers for an efficient dissemination and reutilization of data files. This framework is exploitable for a wide set of applications. In this work, we describe how a Java prototype of the framework was used to tackle the problem of mining frequent itemsets from a transactional dataset, and show some preliminary yet interesting performance results that prove the efficiency improvements that can derive from the presented architecture.

Cesario, Eugenio; de Caria, Nicola; Mastroianni, Carlo; Talia, Domenico
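The mining task this framework was evaluated on, frequent-itemset discovery, can be shown in a centralized miniature: a plain Apriori pass in Python. The paper's P2P distribution of this work is omitted, and the toy transactions are our own:

```python
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]
min_support = 2   # frequent = appears in at least 2 transactions

def frequent_itemsets(transactions, min_support):
    """Apriori: count candidates level by level, growing new candidates
    only from itemsets that were frequent at the previous level."""
    items = {i for t in transactions for i in t}
    candidates = [frozenset([i]) for i in sorted(items)]
    result, k = {}, 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = [c for c, n in counts.items() if n >= min_support]
        result.update((c, counts[c]) for c in frequent)
        k += 1
        candidates = sorted({a | b for a, b in combinations(frequent, 2)
                             if len(a | b) == k},
                            key=lambda s: tuple(sorted(s)))
    return result

found = frequent_itemsets(transactions, min_support)
for itemset, count in sorted(found.items(),
                             key=lambda kv: (len(kv[0]), tuple(sorted(kv[0])))):
    print(sorted(itemset), count)
```

In the decentralized setting the abstract describes, the support counting over partitions of the transactions is what gets farmed out to workers, with candidate generation coordinated between rounds.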

59

Computational methods for fluid flow

NASA Astrophysics Data System (ADS)

Numerical approaches are discussed, taking into account general equations, finite-difference methods, integral and spectral methods, the relationship between numerical approaches, and specialized methods. A description of incompressible flows is provided, giving attention to finite-difference solutions of the Navier-Stokes equations, finite-element methods applied to incompressible flows, spectral method solutions for incompressible flows, and turbulent-flow models and calculations. In a discussion of compressible flows, inviscid compressible flows are considered along with viscous compressible flows. Attention is given to the potential flow solution technique, Green's functions and stream-function vorticity formulation, the discrete vortex method, the cloud-in-cell method, the method of characteristics, turbulence closure equations, a large-eddy simulation model, turbulent-flow calculations with a closure model, and direct simulations of turbulence.

Peyret, R.; Taylor, T. D.

60

Computational Methods for High-Dimensional Rotations

Computational Methods for High-Dimensional Rotations in Data Visualization. Andreas Buja, Dianne Cook, Daniel Asimov, Catherine Hurley. March 31, 2004. There exist many methods for visualizing high-dimensional data by showing projections thereof on a computer screen; human interfaces for controlling 3-D data rotations

Buja, Andreas

61

Theoretical and computational methods in statistical mechanics

Theoretical and computational methods in statistical mechanics. Shmuel Friedland, Univ. Illinois. Slides presented at Berkeley, October 26, 2009 (32 slides); the overview and motivation cover the Ising model in statistical mechanics.

Friedland, Shmuel

62

Teaching Formal Methods in Computer Science Undergraduates

Formal Methods refer to a variety of mathematical modeling techniques, which are used both to model the behaviour of a computer system and to verify that the system satisfies design, safety and functional properties. The incorporation of a Formal Methods course in the undergraduate Computer Science curriculum is strongly suggested by scientific societies such as ACM, IEEE and BCS.

A. SOTIRIADOU; P. KEFALAS

63

Multiprocessor computer overset grid method and apparatus

A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

Barnette, Daniel W. (Veguita, NM); Ober, Curtis C. (Los Lunas, NM)

2003-01-01

64

Computational methods of neutron transport

This book presents a balanced overview of the major methods currently available for obtaining numerical solutions in neutron and gamma ray transport. It focuses on methods particularly suited to the complex problems encountered in the analysis of reactors, fusion devices, radiation shielding, and other nuclear systems. Derivations are given for each of the methods showing how the transport equation is

E. E. Lewis; W. F. Miller

1984-01-01

65

A direct method to computational acoustics

The exact knowledge of the sound field within an enclosure is essential for a number of applications in electro-acoustics. Conventional methods for the assessment of room acoustics model the sound propagation in analogy to the propagation of light. More advanced computational methods rely on the numerical solution of the wave equation. A recently presented method is based on multidimensional wave

R. Rabenstein; A. Zayati

1999-01-01

66

The Contingent Valuation Method in Public Libraries

ERIC Educational Resources Information Center

This study aims to present a new model measuring the economic value of public libraries, combining the dissonance minimizing (DM) and information bias minimizing (IBM) format in the contingent valuation (CV) surveys. The possible biases which are tied to the conventional CV surveys are reviewed. An empirical study is presented to compare the model…

Chung, Hye-Kyung

2008-01-01

67

Computational Methods for Failure Analysis and Life Prediction

NASA Technical Reports Server (NTRS)

This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

Noor, Ahmed K. (compiler); Harris, Charles E. (compiler); Housner, Jerrold M. (compiler); Hopkins, Dale A. (compiler)

1993-01-01

68

Computational Chemistry Using Modern Electronic Structure Methods

ERIC Educational Resources Information Center

Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now be easily used even for large molecules.

Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

2007-01-01

69

Computational methods for discontinuities in fluids

NASA Astrophysics Data System (ADS)

In the present paper, a description is provided of a range of computational methods for fluid flows in which shocks or discontinuities occur. Collectively, these methods constitute the method of front tracking. Front tracking refers to certain computational methods in which special degrees of freedom are placed at fronts or discontinuities in order to obtain increased resolution in those areas. In shock tracking, an extra lower-dimensional grid is introduced, geometrically fitting the shocks. Attention is given to discontinuities in fluid physics, numerical computations, shock tracking methods, current applications of tracking, efficient elliptic solution methods, finite element and discontinuities, high-order solutions by ADI (Alternating Direction Implicit) preconditioning, multigrid solutions, and interfaces and discontinuities.

McBryan, O. A.

70

Computer methods (Comput. Methods Appl. Mech. Engrg. 172 (1999) 273-291)

Comput. Methods Appl. Mech. Engrg. 172 (1999) 273-291. Tools which assist in the automatic extraction, construction and linking of model geometry do not directly permit reliable computation of local stresses near constituent boundaries. The approach described

Fish, Jacob

1999-01-01

71

Computational Methods for Ideal Magnetohydrodynamics

NASA Astrophysics Data System (ADS)

Numerical schemes for the ideal magnetohydrodynamics (MHD) are widely used for modeling space weather and astrophysical flows. They are designed to resolve the different waves that propagate through a magnetohydrodynamic fluid, namely, the fast, Alfven, slow, and entropy waves. Numerical schemes for ideal magnetohydrodynamics that are based on the standard finite volume (FV) discretization exhibit pseudo-convergence, in which non-regular waves disappear only after heavy grid refinement. A method is described for obtaining solutions for coplanar and near-coplanar cases that consist of only regular waves, independent of grid refinement. The method, referred to as Compound Wave Modification (CWM), involves removing the flux associated with non-regular structures and can be used for simulations in two and three dimensions because it does not require explicitly tracking an Alfven wave. For a near-coplanar case, and for grids with 2^13 points or fewer, we find root-mean-square errors (RMSEs) that are as much as 6 times smaller. For the coplanar case, in which non-regular structures will exist at all levels of grid refinement for standard FV schemes, the RMSE is as much as 25 times smaller. A multidimensional ideal MHD code has been implemented for simulations on graphics processing units (GPUs). Performance measurements were conducted for both the NVIDIA GeForce GTX Titan and the Intel Xeon E5645 processor. The GPU is shown to perform one to two orders of magnitude faster than a single CPU core, and two to three times faster than the CPU run in parallel with OpenMP. Performance comparisons are made for two methods of storing data on the GPU. The first approach stores data as an Array of Structures (AoS); e.g., a point coordinate array of size 3 x n is iterated over. The second approach stores data as a Structure of Arrays (SoA); e.g., three separate arrays of size n are iterated over simultaneously. For an AoS, coalescing does not occur, reducing memory efficiency.
All results are given for Cartesian grids, but the algorithms are implemented for a general geometry on unstructured grids.

Kercher, Andrew D.

72

Survey of Public IaaS Cloud Computing API

NASA Astrophysics Data System (ADS)

Recently, Cloud computing has spread rapidly and many Cloud providers have started Cloud services. One of the problems with Cloud computing is provider “lock-in” for users. Cloud computing management APIs such as ordering or provisioning differ between Cloud providers, so users need to study and implement new APIs when they change Cloud providers. OGF and DMTF have started discussions on the standardization of Cloud computing APIs, but there is no standard yet. In this technical note, to clarify what APIs Cloud providers should provide, we study common APIs for Cloud computing. We survey and compare Cloud computing APIs such as Rackspace Cloud Server, Sun Cloud, GoGrid, ElasticHosts, Amazon EC2 and FlexiScale, which are currently provided as public IaaS Cloud APIs in the market. From the survey, the common APIs should support a REST access style and provide account management, virtual server management, storage management, network management and resource usage management capabilities. We also show an example of OSS to provide these common APIs, compared to normal hosting-service OSS.

Yamato, Yoji; Moriya, Takaaki; Ogawa, Takeshi; Akahani, Junichi

73

Computational Methods for Rough Classification and Discovery.

ERIC Educational Resources Information Center

Rough set theory is a new mathematical tool to deal with vagueness and uncertainty. Computational methods are presented for using rough sets to identify classes in datasets, finding dependencies in relations, and discovering rules which are hidden in databases. The methods are illustrated with a running example from a database of car test results.…

Bell, D. A.; Guan, J. W.

1998-01-01

74

Computing discharge using the index velocity method

Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. 
Index ratings are developed by means of regression techniques in which the mean cross-sectional velocity for the standard section is related to the measured index velocity. Most ratings are simple linear regressions, but more complex ratings may be necessary in some cases. Once the rating is established, validation measurements should be made periodically. Over time, validation measurements may provide additional definition to the rating or result in the creation of a new rating. The computation of discharge is the last step in the index velocity method, and in some ways it is the most straightforward step. This step differs little from the steps used to compute discharge records for stage-discharge gaging stations. The ratings are entered into database software used for records computation, and continuous records of discharge are computed.
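The two-rating structure described above can be sketched in a few lines. This is a minimal illustration with hypothetical calibration numbers and a made-up rectangular standard cross section, not USGS software:

```python
import numpy as np

# Hypothetical calibration data: index velocity from an ADVM paired with
# mean channel velocity from discharge measurements (m/s).
v_index = np.array([0.20, 0.45, 0.70, 0.95, 1.20])
v_mean  = np.array([0.25, 0.52, 0.80, 1.07, 1.33])

# Index rating: simple linear regression V = a + b * v_index.
# np.polyfit returns coefficients highest degree first (slope, intercept).
b, a = np.polyfit(v_index, v_mean, 1)

def stage_area(stage_m):
    """Stage-area rating for a hypothetical rectangular standard
    cross section 20 m wide (area in m^2)."""
    return 20.0 * stage_m

def discharge(v_index_obs, stage_m):
    """Discharge Q = V * A, multiplying the outputs of the two ratings."""
    v = a + b * v_index_obs       # mean channel velocity from index rating
    return v * stage_area(stage_m)

print(round(discharge(0.60, 1.5), 2))
```

In practice the regression is calibrated and periodically revalidated against new discharge measurements, as the report describes.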

Levesque, Victor A.; Oberg, Kevin A.

2012-01-01

75

Updated Panel-Method Computer Program

NASA Technical Reports Server (NTRS)

Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow, a time-stepping wake model for simulating either steady or unsteady motions, capability for Trefftz-plane computation of induced drag, capability for computation of off-body and on-body streamlines, and capability for computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.

Ashby, Dale L.

1995-01-01

76

77 FR 26509 - Notice of Public Meeting-Cloud Computing Forum & Workshop V

Federal Register 2010, 2011, 2012, 2013, 2014

...Technology Notice of Public Meeting--Cloud Computing Forum & Workshop V AGENCY: National...SUMMARY: NIST announces the Cloud Computing Forum & Workshop V to be held on Tuesday...the U.S. Government (USG) Cloud Computing Technology Roadmap initiative....

2012-05-04

77

76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV

Federal Register 2010, 2011, 2012, 2013, 2014

...Technology Notice of Public Meeting--Cloud Computing Forum & Workshop IV AGENCY: National...SUMMARY: NIST announces the Cloud Computing Forum & Workshop IV to be held on November...the U.S. Government (USG) Cloud Computing Technology Roadmap initiative....

2011-10-07

78

An Efficient Method for Computing All Reducts

NASA Astrophysics Data System (ADS)

In data mining of decision tables using the Rough Sets methodology, the main computational effort is associated with the determination of the reducts. Computing all reducts is a combinatorial NP-hard problem. Therefore the only way to achieve faster execution is to provide an algorithm, with a better constant factor, which may solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms to compute reducts in information systems. The proposed algorithms are based on properties of reducts and the relation between reducts and the discernibility matrix. Experiments have been conducted on some real-world domains to measure execution time. The results show the proposed algorithms improve execution time when compared with other methods. In real applications, the two proposed algorithms can be combined.
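For context, the brute-force baseline that such algorithms improve upon can be sketched as follows. This toy example checks every attribute subset of a small hypothetical decision table for consistency and keeps the minimal ones; it is not the paper's algorithm, which exploits the discernibility matrix to avoid this exponential search:

```python
from itertools import combinations

# Toy decision table: rows are objects; a1..a3 are condition attributes
# and d is the decision attribute.
table = [
    {"a1": 1, "a2": 0, "a3": 1, "d": "yes"},
    {"a1": 1, "a2": 1, "a3": 0, "d": "no"},
    {"a1": 0, "a2": 0, "a3": 1, "d": "yes"},
    {"a1": 0, "a2": 1, "a3": 1, "d": "no"},
]
conds = ["a1", "a2", "a3"]

def consistent(attrs):
    """True if objects that agree on `attrs` also agree on the decision."""
    seen = {}
    for row in table:
        key = tuple(row[a] for a in attrs)
        if key in seen and seen[key] != row["d"]:
            return False
        seen[key] = row["d"]
    return True

def all_reducts():
    """All minimal attribute subsets that preserve the classification."""
    reducts = []
    for k in range(1, len(conds) + 1):
        for subset in combinations(conds, k):
            # Skip supersets of an already-found reduct (not minimal).
            if consistent(subset) and not any(set(r) <= set(subset) for r in reducts):
                reducts.append(subset)
    return reducts

print(all_reducts())  # here a2 alone determines d
```

Because the subset lattice has 2^n elements, this baseline is only feasible for tiny attribute sets, which is exactly why better algorithms matter.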

Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro

79

Network analysis in public health: history, methods, and applications.

Network analysis is an approach to research that is uniquely suited to describing, exploring, and understanding structural and relational aspects of health. It is both a methodological tool and a theoretical paradigm that allows us to pose and answer important ecological questions in public health. In this review we trace the history of network analysis, provide a methodological overview of network techniques, and discuss where and how network analysis has been used in public health. We show how network analysis has its roots in mathematics, statistics, sociology, anthropology, psychology, biology, physics, and computer science. In public health, network analysis has been used primarily to study disease transmission, especially for HIV/AIDS and other sexually transmitted diseases; information transmission, particularly for diffusion of innovations; the role of social support and social capital; the influence of personal and social networks on health behavior; and the interorganizational structure of health systems. We conclude with future directions for network analysis in public health. PMID:17222078

Luke, Douglas A; Harris, Jenine K

2007-01-01

80

Efficient Methods to Compute Genomic Predictions

Technology Transfer Automated Retrieval System (TEKTRAN)

Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

81

Design methods for ethical persuasive computing

Value Sensitive Design and Participatory Design are two methodological frameworks that account for ethical issues throughout the process of technology design. Through analysis and case studies, this paper argues that such methods should be applied to persuasive technology: computer systems that are intended to change behaviors and attitudes.

Janet Davis

2009-01-01

82

NONLOCAL COMPUTATIONAL METHODS APPLIED TO COMPOSITE STRUCTURES

NONLOCAL COMPUTATIONAL METHODS APPLIED TO COMPOSITE STRUCTURES. N. Germain, F. Feyel and J. Besson. Methods to model the degradation of organic or ceramic matrix composite (OMC or CMC) structures, and the description of heterogeneous materials such as organic or ceramic matrix composites.

Boyer, Edmond

83

Shifted power method for computing tensor eigenvalues.

Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ‖x‖ = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
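The core SS-HOPM iteration, x ← normalize(A x^(m-1) + α x), is simple enough to sketch. The following is a minimal illustration for a small order-3 symmetric tensor built from rank-1 terms; the tensor, the shift value, and the iteration count are illustrative choices, not values from the paper:

```python
import numpy as np

def sym_tensor_apply(A, x):
    """Compute (A x^{m-1})_i = sum_{j,k} A[i,j,k] x[j] x[k] for m = 3."""
    return np.einsum("ijk,j,k->i", A, x, x)

def ss_hopm(A, alpha=2.0, iters=200, seed=0):
    """Shifted symmetric higher-order power method (sketch):
    repeatedly normalize A x^{m-1} + alpha * x; the shift is chosen
    large enough for this small example to guarantee convergence."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = sym_tensor_apply(A, x) + alpha * x
        x = y / np.linalg.norm(y)
    lam = x @ sym_tensor_apply(A, x)   # Rayleigh-quotient-style eigenvalue
    return lam, x

# Small symmetric order-3 tensor from rank-1 terms v (x) v (x) v.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
A = 3.0 * np.einsum("i,j,k->ijk", v1, v1, v1) \
  + 1.0 * np.einsum("i,j,k->ijk", v2, v2, v2)

lam, x = ss_hopm(A)
# At convergence (lam, x) approximately satisfies A x^2 = lam * x, ||x|| = 1.
print(round(lam, 4))
```

Which eigenpair is reached depends on the starting vector; the paper's fixed-point analysis characterizes exactly which eigenpairs are reachable.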

Mayo, Jackson R.; Kolda, Tamara Gibson

2010-07-01

84

Shifted power method for computing tensor eigenpairs.

Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ‖x‖ = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.

Mayo, Jackson R.; Kolda, Tamara Gibson

2010-10-01

85

29 CFR 548.500 - Methods of computation.

Code of Federal Regulations, 2010 CFR

...AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Computation...Methods of computation. The methods of computing overtime pay on the basic rates for...employees are the same as the methods of computing overtime pay at the regular rate....
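As a rough illustration of the computation this regulation addresses, weekly pay with an overtime premium can be sketched as below. The 40-hour threshold and time-and-one-half multiplier reflect the usual FLSA regular-rate rule; this is an illustrative sketch, not legal guidance on established basic rates:

```python
def overtime_pay(basic_rate, hours, threshold=40.0, multiplier=1.5):
    """Weekly pay at an established basic rate: straight time up to the
    threshold, time-and-one-half for hours beyond it."""
    regular = min(hours, threshold) * basic_rate
    overtime = max(0.0, hours - threshold) * basic_rate * multiplier
    return regular + overtime

# 45 hours at $20/hour: 40 * 20 straight time + 5 * 20 * 1.5 overtime.
print(overtime_pay(20.0, 45))
```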

2010-07-01

86

Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing

Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing. Qian Wang et al. The proposed scheme is provably secure. Index terms: data storage, public auditability, data dynamics, cloud computing. Presented in part at the 14th European Symposium on Research in Computer Security (ESORICS'09).

Hou, Y. Thomas

87

Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing

Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing. Qian Wang, ..., IEEE, and Jin Li. Cloud Computing has been envisioned as the next-generation architecture. The proposed scheme is provably secure. Index terms: data storage, public auditability, data dynamics, cloud computing.

Hou, Y. Thomas

88

A method to compute periodic sums

NASA Astrophysics Data System (ADS)

In a number of problems in computational physics, a finite sum of kernel functions centered at N particle locations located in a box in three dimensions must be extended by imposing periodic boundary conditions on box boundaries. Even though the finite sum can be efficiently computed via fast summation algorithms, such as the fast multipole method (FMM), the periodized extension is usually treated via a different algorithm, Ewald summation, accelerated via the fast Fourier transform (FFT). A different approach to compute this periodized sum just using a blackbox finite fast summation algorithm is presented in this paper. The method splits the periodized sum into two parts. The first, comprising the contribution of all points outside a large sphere enclosing the box, and some of its neighbors, is approximated inside the box by a collection of kernel functions (“sources”) placed on the surface of the sphere or using an expansion in terms of spectrally convergent local basis functions. The second part, comprising the part inside the sphere, and including the box and its immediate neighborhood, is treated via available summation algorithms. The coefficients of the sources are determined by least squares collocation of the periodicity condition of the total potential, imposed on a circumspherical surface for the box. While the method is presented in general, details are worked out for the case of evaluating electrostatic potentials and forces. Results show that when used with the FMM, the periodized sum can be computed to any specified accuracy, at an additional cost of the order of the free-space FMM. Several technical details and efficient algorithms for auxiliary computations are provided, as are numerical comparisons.
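The quantity being periodized can be illustrated with the brute-force reference computation: a direct sum of a 1/r kernel over source images in neighboring boxes. The geometry and charges below are made up, and this naive triple loop over image boxes is exactly the kind of far-field work that the paper's sphere-of-sources construction replaces:

```python
import numpy as np

def periodized_potential(target, sources, charges, box=1.0, shells=3):
    """Direct periodized Coulomb-type sum: add q/r contributions from
    all source images in boxes within `shells` of the primary box.
    Converges for a charge-neutral source set; cost grows cubically
    with the number of shells, motivating fast summation methods."""
    phi = 0.0
    rng = range(-shells, shells + 1)
    for ix in rng:
        for iy in rng:
            for iz in rng:
                shift = box * np.array([ix, iy, iz], float)
                for s, q in zip(sources, charges):
                    r = np.linalg.norm(target - (s + shift))
                    if r > 1e-12:          # skip self-interaction
                        phi += q / r
    return phi

# Neutral pair of unit charges in a unit box; the midpoint target sees
# a potential of zero by symmetry, a handy sanity check.
sources = np.array([[0.25, 0.5, 0.5], [0.75, 0.5, 0.5]])
charges = [1.0, -1.0]
target = np.array([0.5, 0.5, 0.5])
print(periodized_potential(target, sources, charges))
```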

Gumerov, Nail A.; Duraiswami, Ramani

2014-09-01

89

Computational methods for industrial radiation measurement applications

Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a “black box” mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments.

Gardner, R.P.; Guo, P.; Ao, Q. [North Carolina State Univ., Raleigh, NC (United States)

1996-12-31

90

Computational Thermochemistry and Benchmarking of Reliable Methods

During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

2006-06-20

91

Parallel computer methods for eigenvalue extraction

NASA Technical Reports Server (NTRS)

A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is used in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. The advantages of this new algorithm in parallel computer architecture are discussed.

Akl, Fred

1988-01-01

92

Analytic Method for Computing Instrument Pointing Jitter

NASA Technical Reports Server (NTRS)

A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor- output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
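A generic state-space variance computation of this kind can be sketched as follows. This solves a continuous-time Lyapunov equation for the steady-state covariance of white-noise-driven dynamics and reads off the output RMS; it illustrates the idea of an exact, integration-free analytic evaluation, not the paper's specific Sirlin-San Martin-Lucke jitter definition:

```python
import numpy as np

def steady_state_cov(A, B, W):
    """Solve the Lyapunov equation A P + P A^T + B W B^T = 0 for the
    steady-state covariance P, via Kronecker vectorization."""
    n = A.shape[0]
    I = np.eye(n)
    Q = B @ W @ B.T
    M = np.kron(I, A) + np.kron(A, I)
    p = np.linalg.solve(M, -Q.reshape(-1))
    return p.reshape(n, n)

def rms_output(A, B, C, W):
    """RMS of the output y = C x for white-noise-driven dynamics
    dx = A x dt + B dw, with E[dw dw^T] = W dt."""
    P = steady_state_cov(A, B, W)
    return np.sqrt(C @ P @ C.T).item()

# Hypothetical 1-state check: dx = -a x dt + sqrt(q) dw has the
# standard closed-form steady-state variance q / (2 a).
a, q = 2.0, 0.5
A = np.array([[-a]]); B = np.array([[1.0]]); C = np.array([[1.0]])
W = np.array([[q]])
print(round(rms_output(A, B, C, W), 6))
```

For a stable A, the linear solve replaces the frequency-domain numerical integration mentioned above with a closed-form matrix computation.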

Bayard, David

2003-01-01

93

An analytic method to compute star cluster luminosity statistics

NASA Astrophysics Data System (ADS)

The luminosity distribution of the brightest star clusters in a population of galaxies encodes critical pieces of information about how clusters form, evolve and disperse, and whether and how these processes depend on the large-scale galactic environment. However, extracting constraints on models from these data is challenging, in part because comparisons between theory and observation have traditionally required computationally intensive Monte Carlo methods to generate mock data that can be compared to observations. We introduce a new method that circumvents this limitation by allowing analytic computation of cluster order statistics, i.e. the luminosity distribution of the Nth most luminous cluster in a population. Our method is flexible and requires few assumptions, allowing for parametrized variations in the initial cluster mass function and its upper and lower cutoffs, variations in the cluster age distribution, stellar evolution and dust extinction, as well as observational uncertainties in both the properties of star clusters and their underlying host galaxies. The method is fast enough to make it feasible for the first time to use Markov chain Monte Carlo methods to search parameter space to find best-fitting values for the parameters describing cluster formation and disruption, and to obtain rigorous confidence intervals on the inferred values. We implement our method in a software package called the Cluster Luminosity Order-Statistic Code, which we have made publicly available.

da Silva, Robert L.; Krumholz, Mark R.; Fumagalli, Michele; Fall, S. Michael

2014-03-01

94

Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

95

Delamination detection using methods of computational intelligence

NASA Astrophysics Data System (ADS)

A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.

Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

2012-11-01

96

Methods of Public-Key Cryptography Emilie Wheeler

Contents (fragment): 1 Introduction (2); … on Elliptic Curves (16); 3.1 Elliptic Curve Background (16); 3.2 Elliptic … (19); 3.4 Elliptic Curve Variation on the RSA Cryptosystem (22); 4 Conclusion (23); 5 References. Methods of Public-Key Cryptography, Émilie Wheeler, December 10, 2012.

Salmasian, Hadi

97

The Diffusion of Evaluation Methods among Public Relations Practitioners.

ERIC Educational Resources Information Center

A study explored the relationships between public relations practitioners' organizational roles and the type of evaluation methods they used on the job. Based on factor analysis of role data obtained from an earlier study, four organizational roles were defined and ranked: communication manager, media relations specialist, communication liaison,…

Dozier, David M.

98

Computational Statistical Methods for Social Network Models

We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720
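An ERGM assigns a graph probability proportional to exp(θ · s(G)) for a vector of network statistics s(G), commonly including edge and triangle counts. As a minimal sketch (a toy undirected graph, not Sampson's monks data), those two statistics can be computed directly from an adjacency-set representation:

```python
def edge_count(adj):
    """Number of undirected edges; each edge appears in two neighbor sets."""
    return sum(len(nbrs) for nbrs in adj.values()) // 2

def triangle_count(adj):
    """Number of triangles; u < v < w ensures each triangle is counted once."""
    t = 0
    for u in adj:
        for v in adj[u]:
            for w in adj[v]:
                if w in adj[u] and u < v < w:
                    t += 1
    return t

# Hypothetical toy graph: P(G) would be proportional to
# exp(theta1 * edges + theta2 * triangles) under a simple ERGM.
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}
stats = (edge_count(adj), triangle_count(adj))
```

Fitting θ to observed statistics is the hard part (the MCMC machinery the review surveys); computing s(G) itself is this simple.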

Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

2013-01-01

99

Review of Computational Stirling Analysis Methods

NASA Technical Reports Server (NTRS)

Nuclear thermal to electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in their current designs could be better understood. However, these engines are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of Stirling loss understanding may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique, is presented in detail.

Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

2004-01-01

100

ELSEVIER Comput. Methods Appl. Mech. Engrg. 174 (1999) 371-391 Computer methods

One of these specific classes is 3D simulation of unsteady wake flow generated by a primary (leading) object. In this paper, we present a multi-domain parallel computational method for simulation of unsteady flows involving a primary object and a long wake region…

Tezduyar, Tayfun E.

101

17 CFR 43.3 - Method and timing for real-time public reporting.

Code of Federal Regulations, 2014 CFR

...2014-04-01 false Method and timing for real-time public reporting. 43.3 Section 43...TRADING COMMISSION (CONTINUED) REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a)...

2014-04-01

102

Evolutionary Computing Methods for Spectral Retrieval

NASA Technical Reports Server (NTRS)

A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
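Of the two ECMs named, simulated annealing is the easier to sketch: perturb a candidate parameter, always accept moves that reduce the fitness (dissimilarity) function, and accept worse moves with a temperature-dependent probability. The one-parameter toy retrieval below is hypothetical, not the article's software: an "observed" spectrum is generated from a toy exponential-absorption model with gas concentration 0.7, and annealing recovers that value.

```python
import math
import random

def anneal(fitness, x0, steps=2000, t0=1.0, seed=1):
    """Simulated annealing over a scalar parameter.

    Worse moves are accepted with Boltzmann probability exp(-df/t);
    the best point ever visited is returned.
    """
    rng = random.Random(seed)
    x, fx = x0, fitness(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9       # linear cooling schedule
        cand = x + rng.uniform(-0.1, 0.1)     # small random perturbation
        fc = fitness(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best

# Hypothetical one-parameter retrieval: fitness is the summed squared residual
# between the observed spectrum and the toy model exp(-c * w).
obs = [math.exp(-0.7 * w) for w in (1, 2, 3, 4)]
fitness = lambda c: sum((math.exp(-c * w) - o) ** 2
                        for w, o in zip((1, 2, 3, 4), obs))
c_hat = anneal(fitness, x0=0.0)
```

A genetic algorithm would replace the single walker with a population, but the fitness function plays the same role in both ECMs.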

Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

2009-01-01

103

Invariant subspace method for eigenvalue computation

The dynamic system being studied is first divided into subsystems, with each subsystem representing some physical part of the total system. The eigenvalues and eigenvectors of the subsystems are computed using standard library routines. The change in the eigenvalues between the subsystems and the total system, caused by the interconnection between the subsystems, is found using a method based on invariant subspaces. The greatest change occurs in the global eigenvalues, those which influence the response of more than one of the subsystems. These eigenvalues are of particular interest as they are the type that could cause interarea oscillations.
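As a minimal illustration of the kind of eigenvalue computation involved (a textbook routine, not the paper's invariant-subspace method), power iteration recovers the dominant eigenvalue of a subsystem matrix. The 2x2 symmetric matrix below is hypothetical; its eigenvalues are 3 and 1.

```python
def power_iteration(A, iters=200):
    """Dominant eigenvalue and eigenvector of a square matrix (pure Python)."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        # Multiply: w = A v
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        # Normalize by the max-magnitude entry; that scale converges to |lambda|.
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam, v

# Hypothetical 2x2 "subsystem" matrix with eigenvalues 3 (eigenvector [1, 1]) and 1.
A = [[2.0, 1.0], [1.0, 2.0]]
lam, v = power_iteration(A)
```

Library routines (e.g. LAPACK-backed solvers) compute the full spectrum; power iteration only shows the principle of extracting one mode.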

Stadnicki, D.J. (ESCA Corp., Bellevue, WA (United States)); Ness, J.E. Van (Northwestern Univ., Evanston, IL (United States))

1993-05-01

104

ERIC Educational Resources Information Center

Advances in technology and course delivery methods have enabled persons with disabilities to enroll in higher education at an increasing rate. Federal regulations state that persons with disabilities must be granted equal access to the information contained in computer-based instructional materials, but faculty at the six public universities in South…

Olson, Christopher

2013-01-01

105

Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing

The large size of outsourced data makes data integrity protection in Cloud Computing very challenging; enabling public auditability for cloud data storage security is of critical importance so that users can…

Hou, Y. Thomas

106

Publics in Practice: Ubiquitous Computing at a Shelter for Homeless Mothers

Our system, deployed at a shelter for homeless mothers, connects mobile phones, a shared display, and a Web application… organizational coordination. Author keywords: Constructed Publics, Homeless, Urban Computing, Longitu…

Edwards, Keith

107

User's guide to SAC, a computer program for computing discharge by slope-area method

This user's guide contains information on using the slope-area program, SAC. SAC can be used to compute peak flood discharges from measurements of high-water marks along a stream reach. The slope-area method used by the program is the U.S. Geological Survey (USGS) procedure presented in Techniques of Water Resources Investigations of the U.S. Geological Survey, book 3, chapter A2, "Measurement of Peak Discharge by the Slope-Area Method." The program uses input files that have formats compatible with those used by the water-surface profile program (WSPRO) described in the Federal Highways Administration publication FHWA-IP-89-027. The guide briefly describes the slope-area method, documents the input requirements and the output produced, and demonstrates use of SAC.
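The hydraulic core of the slope-area method is Manning's equation. A single-section sketch is shown below; SAC itself implements the full multi-section USGS procedure, and the reach geometry, slope, and roughness values here are hypothetical.

```python
def slope_area_discharge(area_sqft, wetted_perimeter_ft, slope, n):
    """Single-section slope-area estimate via Manning's equation (US units):

        Q = (1.49 / n) * A * R**(2/3) * S**(1/2),  with R = A / P

    where A is flow area (ft^2), P wetted perimeter (ft), S the water-surface
    slope (dimensionless), and n Manning's roughness coefficient.
    """
    R = area_sqft / wetted_perimeter_ft          # hydraulic radius, ft
    return (1.49 / n) * area_sqft * R ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical reach: A = 200 ft^2, P = 60 ft, slope 0.002, n = 0.035.
Q = slope_area_discharge(200.0, 60.0, 0.002, 0.035)  # discharge in ft^3/s
```

The multi-section procedure additionally accounts for velocity-head changes and expansion losses between cross sections, which this single-section sketch omits.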

Fulford, Janice M.

1994-01-01

108

A large number of private networks have appeared because of the inefficiency of the public telephone network in carrying data. This article tries to estimate the impacts of computer communications on the public network, in terms of its structure and transmission capacity, by employing retrospective technology assessment. It is found that the public telephone network will be the backbone…

Hokyu Lee

109

Computational methods for optical molecular imaging

A new computational technique, the matched interface and boundary (MIB) method, is presented to model photon propagation in biological tissue for optical molecular imaging. Optical properties differ significantly between the organs of small animals, resulting in discontinuous coefficients in the diffusion equation model. The complex organ shapes of small animals also induce singularities in the geometric model. The MIB method is designed as a dimension-splitting approach that decomposes a multidimensional interface problem into one-dimensional ones. The methodology simplifies the topological relation near an interface and is able to handle discontinuous coefficients and complex interfaces with geometric singularities. In the present MIB method, both the interface jump condition and the photon flux jump conditions are rigorously enforced at the interface location by using only the lowest-order jump conditions. The solution near the interface is smoothly extended across the interface so that central finite difference schemes can be employed without loss of accuracy. A wide range of numerical experiments are carried out to validate the proposed MIB method. Second-order convergence is maintained in all benchmark problems, and fourth-order convergence is also demonstrated for some three-dimensional problems. The robustness of the proposed method with respect to the strength of the linear term of the diffusion equation is also examined. The performance of the present approach is compared with that of the standard finite element method. The numerical study indicates that the proposed method is a potentially efficient and robust approach for optical molecular imaging. PMID:20485461
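The second-order convergence reported for the benchmark problems is the generic behavior of central differences on smooth data. A small self-check of the observed order (a textbook illustration of the convergence test, not the MIB scheme itself):

```python
import math

def second_diff_error(f, d2f, x, h):
    """Error of the central second difference (f(x+h) - 2 f(x) + f(x-h)) / h^2."""
    approx = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)
    return abs(approx - d2f(x))

# Smooth test function: f = sin, so f'' = -sin; evaluate at x = 1.
e1 = second_diff_error(math.sin, lambda x: -math.sin(x), 1.0, 0.1)
e2 = second_diff_error(math.sin, lambda x: -math.sin(x), 1.0, 0.05)

# Halving h should quarter the error for a second-order scheme.
order = math.log(e1 / e2, 2)
```

The point of the MIB extension step is precisely to preserve this order when the coefficient jumps at an interface, where a naive stencil would degrade to first order or worse.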

Chen, Duan; Wei, Guo-Wei; Cong, Wen-Xiang; Wang, Ge

2010-01-01

110

Saving lives: a computer simulation game for public education about emergencies

One facet of the Information Revolution in which the nation finds itself involves the utilization of computers, video systems, and a variety of telecommunications capabilities by those who must cope with emergency situations. Such technologies possess a significant potential for performing emergency public education and transmitting key information that is essential for survival. An ''Emergency Public Information Competitive Challenge Grant,'' under the aegis of the Federal Emergency Management Agency (FEMA), has sponsored an effort to use computer technology - both large, time-sharing systems and small personal computers - to develop computer games which will help teach techniques of emergency management to the public at large. 24 references.

Morentz, J.W.

1985-01-01

111

Comparison of computation methods for CBM production performance

I Introduction (1): 1.1 Problem Description (1); 1.2 Literature Review (3). II Computation Methods for Modeling CBM (7): 2.1 Description of Computation Methods (7); 2.2 Differences in the Input Data (8); 2…

Mora, Carlos A.

2009-06-02

112

Computational predictive methods for fracture and fatigue

NASA Astrophysics Data System (ADS)

The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

1994-09-01

113

Computational predictive methods for fracture and fatigue

NASA Technical Reports Server (NTRS)

The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

1994-01-01

114

Public health surveillance: historical origins, methods and evaluation.

In the last three decades, disease surveillance has grown into a complete discipline, quite distinct from epidemiology. This expansion into a separate scientific area within public health has not been accompanied by parallel growth in the literature about its principles and methods. The development of the fundamental concepts of surveillance systems provides a basis on which to build a better understanding of the subject. In addition, the concepts have practical value as they can be used in designing new systems as well as understanding or evaluating currently operating systems. This article reviews the principles of surveillance, beginning with a historical survey of the roots and evolution of surveillance, and discusses the goals of public health surveillance. Methods for data collection, data analysis, interpretation, and dissemination are presented, together with proposed procedures for evaluating and improving a surveillance system. Finally, some points to be considered in establishing a new surveillance system are presented. PMID:8205649

Declich, S.; Carter, A. O.

1994-01-01

115

Computer-Assisted Writing Instruction in Public Community Colleges.

ERIC Educational Resources Information Center

A study explored the status of computer aided instruction (CAI) in writing programs offered by community colleges identified as using CAI. The questionnaire, completed in the winter of 1986 by 198 English instructors, measured accessibility of computers to faculty and students, types of hardware and software, computer locations, computer facility…

Saunders, Pearl I.

116

Domain decomposition methods in computational fluid dynamics

NASA Technical Reports Server (NTRS)

The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
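A minimal flavor of the substructuring idea (a 1D toy problem, not the backstep flow computation): two overlapping subdomains of a Poisson problem are relaxed alternately, each using the other subdomain's latest iterate as interface data. The grid size, overlap, and sweep count below are arbitrary choices for illustration.

```python
def schwarz_poisson(n=19, overlap=4, sweeps=400):
    """Alternating-Schwarz sketch for -u'' = 1 on (0, 1), u(0) = u(1) = 0.

    Two overlapping subdomains are relaxed in turn by Gauss-Seidel sweeps;
    interface values come from the other subdomain's most recent iterate.
    The exact solution is u(x) = x (1 - x) / 2.
    """
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                    # includes boundary nodes u[0], u[n+1]
    mid = (n + 1) // 2
    sub1 = range(1, mid + overlap)         # left subdomain interior nodes
    sub2 = range(mid - overlap, n + 1)     # right subdomain interior nodes
    for _ in range(sweeps):
        for dom in (sub1, sub2):
            for i in dom:
                # Gauss-Seidel update of the standard 3-point stencil.
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h)
    return u, h

u, h = schwarz_poisson()
u_mid = u[10]   # grid point x = 0.5; exact value is 0.125
```

In practice each subdomain solve is done (approximately) in parallel and accelerated by a Krylov method with the Schwarz sweep as preconditioner; the toy above only shows the data exchange pattern.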

Gropp, William D.; Keyes, David E.

1991-01-01

117

Modules and methods for all photonic computing

A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two-dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating light from the second optical/electro-optical element through the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

Schultz, David R. (Knoxville, TN); Ma, Chao Hung (Oak Ridge, TN)

2001-01-01

118

Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing

Cloud Computing has been envisioned as the next-generation architecture… the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third…

119

Publication Series of the John von Neumann Institute for Computing (NIC) NIC Series Volume 8

Publication Series of the John von Neumann Institute for Computing (NIC), NIC Series Volume 8, ISBN 3-00-008236-0. Preface: Computational Physics is today well…

Marro, Joaquín

120

77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop

Federal Register 2010, 2011, 2012, 2013, 2014

...Technology Notice of Public Meeting--Cloud Computing and Big Data Forum and Workshop AGENCY...Technology (NIST) announces a Cloud Computing and Big Data Forum and Workshop to be...hands-on workshop. The NIST Cloud Computing and Big Data Forum and Workshop...

2012-12-18

121

Judging the Impact of Conference and Journal Publications in High Performance Computing

…along the dimensions that count most, conferences are superior. This is particularly true in high performance computing… are never published in journals. The area of high performance computing is broad, and we divide venues…

Zhou, Yuanyuan

122

Checklist and Pollard Walk butterfly survey methods on public lands

Checklist and Pollard Walk butterfly survey methods were contemporaneously applied to seven public sites in North Dakota during the summer of 1995. Results were compared for effect of method and site on total number of butterflies and total number of species detected per hour. Checklist searching produced significantly more butterfly detections per hour than Pollard Walks at all sites. Number of species detected per hour did not differ significantly either among sites or between methods. Many species were detected by only one method, and at most sites generalist and invader species were more likely to be observed during checklist searches than during Pollard Walks. Results indicate that checklist surveys are a more efficient means for initial determination of a species list for a site, whereas for long-term monitoring the Pollard Walk is more practical and statistically manageable. Pollard Walk transects are thus recommended once a prairie butterfly fauna has been defined for a site by checklist surveys.

Royer, R.A.; Austin, J.E.; Newton, W.E.

1998-01-01

123

Potentials and Requirements of Mobile Ubiquitous Computing for Public Transport

Public transport plays an important role in our society, which is characterized by mobility, individuality, comfort and ecological constraints. It is commonly held that public transport offers a high level of comfort but lacks individual flexibility compared to individual transport. While navigation systems and other context-aware services enhance the feeling of self-determination for car drivers, no comparable means for…

Holger Mügge; Karl-heinz Lüke; Matthias Eisemann

2007-01-01

124

A method for evaluating computer-supported

…in the field of computer-aided education is that of Computer-Supported Collaborative Learning (CSCL)…

Guerrero, Luis

125

Postdoctoral Research Position in Computational Methods for Seismic Imaging: …applying numerical methods and high-performance computing techniques to seismic imaging. Ideal candidates… elastic models used in industrial seismic inversion. Successful candidates will almost certainly have…

126

Computational Evaluation of the Traceback Method

ERIC Educational Resources Information Center

Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

Kol, Sheli; Nir, Bracha; Wintner, Shuly

2014-01-01

127

Advances in turbulent flow computations using high-resolution methods

The paper reviews research activity in connection with the use of high-resolution methods in turbulent flow computations. High-resolution methods have proven to successfully compute a number of turbulent flows without the need to resort to an explicit turbulence model. Here, we review the basic properties of these methods, present evidence from the successful implementation of these methods in turbulent flows, and…

Dimitris Drikakis

2003-01-01

128

Scientific Methods in Computer Science Gordana Dodig-Crnkovic

Scientific Methods in Computer Science. Gordana Dodig-Crnkovic, Department of Computer Science. This paper analyzes scientific aspects of Computer Science. First it defines science and scientific method in general…

Cunningham, Conrad

129

Weaving Formal Methods into the Undergraduate Computer Science Curriculum

Weaving Formal Methods into the Undergraduate Computer Science Curriculum (Extended Abstract). Jeannette M. Wing, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA. …infrastructure of an undergraduate computer science curriculum. In so doing, we would be teaching formal methods…

Wing, Jeannette M.

130

to appear in Behavior Research Methods, Instruments and Computers A Computational Model

To appear in Behavior Research Methods, Instruments and Computers. Author manuscript, published in "Behavior Research Methods 38, 4 (2006) 628-637". This paper describes…

Paris-Sud XI, Université de

131

Computer-Aided Dispatch System as a Decision Making Tool in Public and Private Sectors

We describe in detail seven distinct areas in both public and private sectors in which a real-time computer-aided dispatch system is applicable to the allocation of scarce resources. Characteristics of a real-time ...

Lee, I-Jen

132

A PreComputation Scheme for Speeding Up PublicKey Cryptosystems

…in these public-key schemes. The constructions use random walks on Cayley (expander) graphs over Abelian groups…

Goldwasser, Shafi

133

Universal Tailored Access: Automating Setup of Public and Classroom Computers.

ERIC Educational Resources Information Center

This article describes a setup smart access card that enables users with visual impairments to customize magnifiers and screen readers on computers by loading the floppy disk into the computer and finding and pressing two successive keys. A trial with four elderly users found instruction took about 15 minutes. (Contains 3 references.) (CR)

Whittaker, Stephen G.; Young, Ted; Toth-Cohen, Susan

2002-01-01

134

Strengthening Computer Technology Programs. Special Publication Series No. 49.

ERIC Educational Resources Information Center

Three papers present examples of strategies used by developing institutions and historically black colleges to strengthen computer technology programs. "Promoting Industry Support in Developing a Computer Technology Program" (Albert D. Robinson) describes how the Washtenaw Community College (Ann Arbor, Michigan) Electrical/Electronics Department…

McKinney, Floyd L., Comp.

135

Public Key Encryption Can Be Secure Against Encryption Emulation Attacks by Computationally… why, contrary to a prevalent opinion, public key encryption can be secure against "encryption emulation" attacks… Supported by the NSF grant DMS-0405105. …cryptographers was the fact that encryption by the sender (Bob) can…

Shpilrain, Vladimir

136

The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers

ERIC Educational Resources Information Center

Public libraries play an important part in the development of a community. Today, they are seen as more than storehouses of books; they are also responsible for the dissemination of online and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for internet access. Using a…

DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert

2013-01-01

137

Public library computer training for older adults to access high-quality Internet health information

An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at

Bo Xie; Julie M. Bugg

2009-01-01

138

Alternative methods for computing sound radiation from vibrating surfaces

NASA Technical Reports Server (NTRS)

The merits of various numerical and experimental methods for computing sound fields radiated from vibrating structures are examined. The finite difference method, the finite element method, the direct boundary element method, the indirect boundary element method, near-field acoustic holography, two-microphone methods, and spatial transformation of sound fields are considered. The proper utilization of these methods is discussed.

Bernhard, R. J.; Gardner, B. K.; Smith, D. C.

1987-01-01

139

ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

140

Computational methods in sequence and structure prediction

NASA Astrophysics Data System (ADS)

This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our results show that a putative cis-regulatory element "AC(C/G)TAC(C)" exists upstream of these enzyme genes. We propose that this cis-regulatory element is responsible for the genetic regulation of these three enzymes and that it might also be the binding site for the MYB-class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning for the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of the 876 Arabidopsis genes, and (b) exhaustively scanning for all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, the "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to its occurrence within the genome. (c) We will introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework.
With this framework we have developed a software package which is capable of designing novel protein structures at the atomic resolution. This software package allows us to perform protein structure design with a flexible backbone. The backbone flexibility includes loop region relaxation as well as a secondary structure collective mode relaxation scheme. (Abstract shortened by UMI.)

Lang, Caiyi

141

Atomistic Method Applied to Computational Modeling of Surface Alloys

NASA Technical Reports Server (NTRS)

The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces, J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. 
The computational tool, which is based on the BFS (Bozzolo, Ferrante, and Smith) method for the calculation of the energetics, consists of a small number of simple PC-based computer codes that deal with the different aspects of surface alloy formation. Two analysis modes are available within this package. The first mode provides an atom-by-atom description of real and virtual stages during the process of surface alloying, based on the construction of catalogues of configurations where each configuration describes one possible atomic distribution. BFS analysis of this catalogue provides information on accessible states, possible ordering patterns, and details of island formation or film growth. More importantly, it provides insight into the evolution of the system. Software developed by the Computational Materials Group allows for the study of an arbitrary number of elements forming surface alloys, including an arbitrary number of surface atomic layers. The second mode involves large-scale temperature-dependent computer simulations that use the BFS method for the energetics and provide information on the dynamic processes during surface alloying. These simulations require the implementation of Monte-Carlo-based codes with high efficiency within current workstation environments. This methodology capitalizes on the advantages of the BFS method: there are no restrictions on the number or type of elements or on the type of crystallographic structure considered. This removes any restrictions in the definition of the configuration catalogues used in the analytical calculations, thus allowing for the study of arbitrary ordering patterns, ultimately leading to the actual surface alloy structure. Moreover, the Monte Carlo numerical technique used for the large-scale simulations allows for a detailed visualization of the simulated process, the main advantage of this type of analysis being the ability to understand the underlying features that drive these processes. 
Because of the simplicity of the BFS method for the energetics used in these calculations, a detailed atom-by-atom analysis can be performed at any

Bozzolo, Guillermo H.; Abel, Phillip B.

2000-01-01

142

The Battle to Secure Our Public Access Computers

ERIC Educational Resources Information Center

Securing public access workstations should be a significant part of any library's network and information-security strategy because of the sensitive information patrons enter on these workstations. As the IT manager for the Johnson County Library in Kansas City, Kan., this author is challenged to make sure that thousands of patrons get the access…

Sendze, Monique

2006-01-01

143

When computing a square root, computers still, in effect, use an iterative algorithm developed by the Babylonians millennia ago. This is a very unusual phenomenon, because for most other computations, better algorithms have been invented - even division is performed, in the computer, by an algorithm which is much more efficient than the division methods that we have all learned in
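
The Babylonian recurrence the excerpt refers to is the familiar "average the guess with a/guess" rule (equivalently, Newton's method applied to x² - a); a minimal sketch:

```python
def babylonian_sqrt(a, tol=1e-12):
    """Approximate sqrt(a) by the Babylonian (Heron's) rule, i.e.
    Newton's method applied to f(x) = x**2 - a."""
    x = a if a > 1 else 1.0          # any positive initial guess works
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)        # average the guess with a / guess
    return x

print(babylonian_sqrt(2.0))  # converges to ~1.41421356 in a handful of steps
```

The iteration converges quadratically: the number of correct digits roughly doubles at every step, which is why the ancient rule is still competitive on modern hardware.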

Olga Kosheleva

2009-01-01

144

Federal Register 2010, 2011, 2012, 2013, 2014

...Public Meeting--Intersection of Cloud Computing and Mobility Forum and Workshop AGENCY...INFORMATION: NIST hosted six prior Cloud Computing Forum & Workshop events in May 2010...Government's experience with cloud computing, report on the status of the NIST...

2013-09-04

145

Fourth International Symposium Computational Methods in Toxicology and Pharmacology

Fourth International Symposium Computational Methods in Toxicology and Pharmacology Integrating-Chemical Pharmacology of RAS, Russia) A. Lagunin (Institute of Biomedical Chemistry of RAMS, Russia ) A. Lisitsa

Ferreira, Márcia M. C.

146

ERIC Educational Resources Information Center

This publication documents the revised Alaska Finance Foundation Simulation Program, a computer finance simulation package for the Alaska School District Foundation Formula. The introduction briefly describes the program, which was written in Fortran for a Honeywell '66' computer located at the University of Alaska, Fairbanks, and allows…

Fullam, T. J.

147

Computational complexity for the two-point block method

NASA Astrophysics Data System (ADS)

In this paper, we discussed and compared the computational complexity of the two-point block method and the one-point method of Adams type. The computational complexity for both methods is determined based on the number of arithmetic operations performed and expressed in O(n). These two methods will be used to solve a two-point second-order boundary value problem directly, implemented using a variable step size strategy adapted with the multiple shooting technique via a three-step iterative method. Two numerical examples will be tested. The results show that the computational complexity reliably estimates the cost of these methods in terms of execution time. We conclude that the two-point block method has better computational performance compared to the one-point method when the total number of steps is larger.

See, Phang Pei; Majid, Zanariah Abdul

2014-12-01

148

Computer methods for structural neurological systems analysis

computer science and neuroscience are placing extensive effort into the research of neural networks. The concept of the neural network has existed since the 1950's, almost as long as computer science itself. In the beginning, these networks were... of models of the saccadic eye movement system, an extremely complex sensory-motor system. An anatomical approach has also been taken by Arbib and the Systems Neuroscience Group at the University of Massachusetts, Amherst. Lara [21] discusses a...

López, Roberto Eugenio

1991-01-01

149

A Comparison of Computational Methods for Identifying Virulence Factors

Bacterial pathogens continue to threaten public health worldwide today. Identification of bacterial virulence factors can help to find novel drug/vaccine targets against pathogenicity. It can also help to reveal the mechanisms of the related diseases at the molecular level. With the explosive growth in protein sequences generated in the postgenomic age, it is highly desired to develop computational methods for rapidly and effectively identifying virulence factors according to their sequence information alone. In this study, based on the protein-protein interaction networks from the STRING database, a novel network-based method was proposed for identifying the virulence factors in the proteomes of UPEC 536, UPEC CFT073, P. aeruginosa PAO1, L. pneumophila Philadelphia 1, C. jejuni NCTC 11168 and M. tuberculosis H37Rv. Evaluated on the same benchmark datasets derived from the aforementioned species, the identification accuracies achieved by the network-based method were around 0.9, significantly higher than those by the sequence-based methods such as BLAST, feature selection and VirulentPred. Further analysis showed that the functional associations such as the gene neighborhood and co-occurrence were the primary associations between these virulence factors in the STRING database. The high success rates indicate that the network-based method is quite promising. The approach holds high potential for identifying virulence factors in many other organisms as well, because it can be easily extended to other bacterial species as long as the relevant statistical data are available for them. PMID:22880014

Zheng, Lu-Lu; Li, Yi-Xue; Ding, Juan; Guo, Xiao-Kui; Feng, Kai-Yan; Wang, Ya-Jun; Hu, Le-Le; Cai, Yu-Dong; Hao, Pei; Chou, Kuo-Chen

2012-01-01

150

Fast car/human classification methods in the computer vision tasks

NASA Astrophysics Data System (ADS)

In this paper we propose a method for classifying moving objects of "human" and "car" types in computer vision systems using statistical hypotheses and integration of the results using two different decision rules. FAR-FRR graphs for all criteria and the decision rules are plotted. Confusion matrices for both ways of integration are presented. An example of applying the method to public video databases is provided. Ways of improving accuracy are proposed.
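
The FAR-FRR trade-off mentioned in the abstract can be computed directly from classifier scores; the scores below are invented, and the threshold sweep is a generic sketch rather than the authors' criteria:

```python
import numpy as np

# Invented classifier scores: higher means "more car-like".
car_scores   = np.array([0.9, 0.8, 0.75, 0.6, 0.95])
human_scores = np.array([0.1, 0.3, 0.35, 0.2, 0.4])

def far_frr(threshold):
    # FAR: fraction of "human" samples accepted as "car";
    # FRR: fraction of "car" samples rejected.
    far = float(np.mean(human_scores >= threshold))
    frr = float(np.mean(car_scores < threshold))
    return far, frr

# Sweep thresholds to trace the FAR-FRR trade-off curve.
thresholds = np.linspace(0.0, 1.0, 101)
curve = [far_frr(t) for t in thresholds]
eer_t = thresholds[np.argmin([abs(f - r) for f, r in curve])]
print("threshold near equal error rate:", eer_t)
```

Plotting FAR and FRR against the threshold yields exactly the kind of graphs the paper reports; the crossing point is the equal error rate.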

Vishnyakov, Boris V.; Malin, Ivan K.; Vizilter, Yuri V.; Huang, Shih-Chia; Kuo, Sy-Yen

2013-04-01

151

Analytic and simulation methods in computer network design*

Analytic and simulation methods in computer network design* by LEONARD KLEINROCK University of California Los Angeles, California INTRODUCTION The Seventies are here and so are computer networks! The time sharing industry dominated the Sixties and it appears that computer networks will play a similar role

Kleinrock, Leonard

152

DEVELOPING METHODS FOR COMPUTER PROGRAMMING BY MUSICAL PERFORMANCE AND COMPOSITION

DEVELOPING METHODS FOR COMPUTER PROGRAMMING BY MUSICAL PERFORMANCE AND COMPOSITION Alexis Kirke successful work in sonifying computer program code to help debugging. This paper investigates the reverse process, allowing music to be used to write computer programs. Such an approach would be less language

Miranda, Eduardo Reck

153

Novel Methods for Communicating Plasma Science to the General Public

NASA Astrophysics Data System (ADS)

The broader implications of Plasma Science remain an elusive topic that the general public rarely discusses, despite their relevance to energy, the environment, and technology. Recently, we have looked beyond print media for methods to reach large numbers of people in creative and informative ways. These have included video, art, images, and music. For example, our submission to the ``What is a Flame?'' contest was ranked in the top 15 out of 800 submissions. Images of plasmas have won 3 out of 5 of the Princeton University ``Art of Science'' competitions. We use a plasma speaker to teach students of all ages about sound generation and plasma physics. We report on the details of each of these and future videos and animations under development.

Zwicker, Andrew; Merali, Aliya; Wissel, S. A.; Delooper, John

2012-10-01

154

Computer conferencing using the Canadian system CoSy is presented, and three related projects are discussed. 1. An extramural university course in epidemiology and medical statistics was taught using CoSy. Computer conferencing can be a useful vehicle for distance education, enabling health professionals to attend "classes" independent of geographical and time constraints. The subjects taught are well suited to this medium. 2. Internet was used to establish a small network of public health researchers and teachers. Participants are from Canada, Hungary, Israel, Norway, and Australia. Networks of this type not only facilitate international collaboration within public health, they also enable international collaborative research and teaching projects that would have been too cumbersome and time consuming to initiate and conduct without this communication facility. 3. "Development of Medical Education for a New Public Health in Hungary," a project funded by the European Community's TEMPUS program, is established with a view to developing the undergraduate and graduate education of public health professionals. It is a joint program between the five Hungarian medical schools and ten universities in the G24 countries. The TEMPUS listserver functions as an important vehicle for communication within this project. PMID:1309104

1992-12-17

155

Computers in Public Schools: Changing the Image with Image Processing.

ERIC Educational Resources Information Center

The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

Raphael, Jacqueline; Greenberg, Richard

1995-01-01

156

Computer-Aided Method Engineering: An Analysis of Existing Environments

Analogous to Computer-Aided Software Engineering (CASE), which aims to facilitate Software Engineering through specialized tools, Computer-Aided Method Engineering (CAME) strives to support a wide range of activities carried out by method engineers. Although there is consensus on the importance of tool support in method engineering, existing CAME environments are incomplete prototypes, each covering just a few steps of the method

Ali Niknafs; Raman Ramsin

2008-01-01

157

Method of performing computational aeroelastic analyses

NASA Technical Reports Server (NTRS)

Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

Silva, Walter A. (Inventor)

2011-01-01

158

An efficient text input method for pen-based computers

Pen-based computing has not yet taken off, partly because of the lack of fast and easy text input methods. The situation is even worse for people using East Asian languages, where thousands of characters are used and handwriting recognition is extremely difficult. In this paper, we propose a new fast text input method for pen-based computers, where text is

Toshiyuki Masui

1998-01-01

159

A Numerical Method for Computing an SVD-like Decomposition

We present a numerical method for computing the SVD-like decomposition B = QDS^(-1), where Q is orthogonal, S is symplectic, and D is a permuted diagonal matrix. The method can be applied directly to compute the canonical form of the Hamiltonian...

Xu, Hongguo

2005-09-05

160

Overview of computational structural methods for modern military aircraft

NASA Technical Reports Server (NTRS)

Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

Kudva, J. N.

1992-01-01

161

Domain identification in impedance computed tomography by spline collocation method

NASA Technical Reports Server (NTRS)

A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
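
The "integral equations of the second kind" formulation can be illustrated with a minimal collocation (Nyström-style) solve; the separable kernel K(x,s) = x·s and right-hand side f(x) = x are made-up test data with exact solution u(x) = 1.5x, and this is not the paper's spline scheme:

```python
import numpy as np

n = 50
t, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre nodes/weights on [-1, 1]
t, w = 0.5 * (t + 1.0), 0.5 * w             # map quadrature to [0, 1]

# Solve u(x) = f(x) + int_0^1 K(x,s) u(s) ds by collocating at the nodes:
# (I - K_ij * w_j) u_j = f_i.
K = np.outer(t, t)                          # toy kernel K(x,s) = x*s
f = t.copy()                                # toy right-hand side f(x) = x
u = np.linalg.solve(np.eye(n) - K * w, f)

print(np.max(np.abs(u - 1.5 * t)))          # error vs. the exact solution 1.5x
```

Because the second-kind operator I - K is well conditioned, the discretized linear system inherits that stability, which is what makes this formulation attractive for inverse problems.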

Kojima, Fumio

1990-01-01

162

Computational Methods for Analyzing Health News Coverage

ERIC Educational Resources Information Center

Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…

McFarlane, Delano J.

2011-01-01

163

17 CFR 43.3 - Method and timing for real-time public reporting.

Code of Federal Regulations, 2013 CFR

...2013-04-01 false Method and timing for real-time public reporting. 43.3 Section 43...COMMODITY FUTURES TRADING COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a)...

2013-04-01

164

17 CFR 43.3 - Method and timing for real-time public reporting.

Code of Federal Regulations, 2012 CFR

...2012-04-01 false Method and timing for real-time public reporting. 43.3 Section 43...COMMODITY FUTURES TRADING COMMISSION REAL-TIME PUBLIC REPORTING § 43.3 Method and timing for real-time public reporting. (a)...

2012-04-01

165

Computational Methods to Predict Protein Interaction Partners

NASA Astrophysics Data System (ADS)

In the new paradigm for studying biological phenomena represented by Systems Biology, cellular components are not considered in isolation but as forming complex networks of relationships. Protein interaction networks are among the first objects studied from this new point of view. Deciphering the interactome (the whole network of interactions for a given proteome) has been shown to be a very complex task. Computational techniques for detecting protein interactions have become standard tools for dealing with this problem, helping and complementing their experimental counterparts. Most of these techniques use genomic or sequence features intuitively related to protein interactions and are based on "first principles" in the sense that they do not involve training with examples. There are also other computational techniques that use other sources of information (i.e. structural information or even experimental data) or are based on training with examples.

Valencia, Alfonso; Pazos, Florencio

166

Soft computing methods in design of superalloys

NASA Technical Reports Server (NTRS)

Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modeled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.
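
The neural-network-plus-genetic-algorithm loop can be sketched with a toy surrogate standing in for the trained K(sub a) model (which is not public); the quadratic below and its optimum at (0.3, 0.7) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def ka_surrogate(x):
    # Invented stand-in for the trained K_a(chemistry) model:
    # a smooth bowl with its minimum at the fictitious composition (0.3, 0.7).
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 0.05

def genetic_minimize(f, pop_size=40, generations=60, sigma=0.1):
    pop = rng.random((pop_size, 2))                 # random compositions in [0, 1]^2
    for _ in range(generations):
        fitness = np.array([f(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]    # keep the best half
        children = parents[rng.integers(len(parents), size=pop_size)]
        pop = np.clip(children + rng.normal(0.0, sigma, children.shape), 0.0, 1.0)
        sigma *= 0.95                               # anneal the mutation step
    fitness = np.array([f(ind) for ind in pop])
    return pop[np.argmin(fitness)]

best = genetic_minimize(ka_surrogate)
print(best, ka_surrogate(best))   # composition near (0.3, 0.7), value near 0.05
```

In the paper's setting the surrogate is a neural network fitted to cyclic oxidation test data; the genetic algorithm's role, searching composition space for low predicted K(sub a), is the same.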

Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

1995-01-01

167

Soft Computing Methods in Design of Superalloys

NASA Technical Reports Server (NTRS)

Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

1996-01-01

168

COMSAC: Computational Methods for Stability and Control. Part 1

NASA Technical Reports Server (NTRS)

Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: A NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: The Past, Today, and Future?

Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

2004-01-01

169

Computing with DNA. From: Methods in Molecular Biology, vol. 132: Bioinformatics Methods ... of molecular biology to solve a difficult computational problem. Adleman's experiment solved an instance ... computations. The main idea was the encoding of data in DNA strands and the use of tools from molecular biology

Kari, Lila

170

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

Code of Federal Regulations, 2010 CFR

...rules apply to public access use of the Internet on NARA-supplied computers? 1254...rules apply to public access use of the Internet on NARA-supplied computers? (a...computers (workstations) are available for Internet use in all NARA research...

2010-07-01

171

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

Code of Federal Regulations, 2013 CFR

...rules apply to public access use of the Internet on NARA-supplied computers? 1254...rules apply to public access use of the Internet on NARA-supplied computers? (a...computers (workstations) are available for Internet use in all NARA research...

2013-07-01

172

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

Code of Federal Regulations, 2012 CFR

...rules apply to public access use of the Internet on NARA-supplied computers? 1254...rules apply to public access use of the Internet on NARA-supplied computers? (a...computers (workstations) are available for Internet use in all NARA research...

2012-07-01

173

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

Code of Federal Regulations, 2011 CFR

2011-07-01

174

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

Code of Federal Regulations, 2014 CFR

2014-07-01

175

Three-dimensional protein structure prediction: Methods and computational strategies.

A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. PMID:25462334

Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

2014-10-12

176

function in the Sobolev norms. As a meshless method, the convergence rate is measured by a new control in developing meshless approximations for Galerkin procedures to solve partial differential equations. Several. [8] and Liu [9]. It seems to us that the moving least square interpolant-based meshless method has

Li, Shaofan

177

Discontinuous Galerkin Methods: Theory, Computation and Applications

This volume contains a survey article for Discontinuous Galerkin Methods (DGM) by the editors as well as 16 papers by invited speakers and 32 papers by contributed speakers of the First International Symposium on Discontinuous Galerkin Methods. It covers theory, applications, and implementation aspects of DGM.

Cockburn, B.; Karniadakis, G. E.; Shu, C-W (Eds.)

2000-12-31

178

ERIC Educational Resources Information Center

The aim of this study was to conduct a comparative assessment of computer literacy of private and public secondary school students. Although the definition of computer literacy varies widely, this study treated computer literacy in terms of access to, and use of, computers and the internet, basic knowledge and skills required to use computers and…

Osunwusi, Adeyinka Olumuyiwa; Abifarin, Michael Segun

2013-01-01

179

Computational methods for internal flows with emphasis on turbomachinery

NASA Technical Reports Server (NTRS)

Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. The viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

Mcnally, W. D.; Sockol, P. M.

1981-01-01

180

Consensus methods: review of original methods and their main alternatives used in public health

Background: Consensus-based studies are increasingly used as decision-making methods, for they have lower production cost than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles and methods of the four main methods (Delphi, nominal group, consensus development conference and RAND/UCLA), their use as it appears in peer-reviewed publications, and validation studies published in the healthcare literature. Methods: A bibliographic search was performed in Pubmed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal and Francis. Keywords, headings and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri and used in the literature search. A search with the same terms and expressions was performed on the Internet using Google Scholar. Results: All methods, precisely described in the literature, are based on common basic principles such as definition of subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described. Few validation studies have been implemented. Not implementing these basic principles and failing to describe the methods used to reach the consensus were both frequent reasons contributing to suspicion regarding the validity of consensus methods. Conclusion: When applied to a new domain with important consequences for decision making, a consensus method should first be validated. PMID:19013039

Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid

2008-01-01

181

Phase Field Method: Spinodal Decomposition Computer Laboratory

NSDL National Science Digital Library

In this lab, spinodal decomposition is numerically implemented in FiPy. A simple example python script (spinodal.py) summarizes the concepts. This lab is intended to complement the "Phase Field Method: An Introduction" lecture
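
The physics the FiPy script exercises can be sketched without FiPy; this is a plain NumPy, 1-D explicit-Euler Cahn-Hilliard solve with made-up parameters, not the lab's spinodal.py:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D Cahn-Hilliard dynamics on a periodic grid:
#     dc/dt = M * lap(c**3 - c - kappa * lap(c))
N, dx, dt, M, kappa = 64, 1.0, 0.01, 1.0, 1.0
c = 0.1 * (rng.random(N) - 0.5)        # near-critical mixture plus noise

def lap(u):                            # periodic second difference
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

c0_mean, c0_var = c.mean(), c.var()
for _ in range(10000):
    mu = c**3 - c - kappa * lap(mu_arg := c)  # chemical potential of the mixture
    c = c + dt * M * lap(mu)                  # explicit Euler step

# Composition is conserved while the variance grows as the phases separate.
print(c.mean() - c0_mean, c.var() / c0_var)
```

The uniform state is linearly unstable inside the spinodal region, so the initial noise amplifies until the mixture coarsens into domains near c = ±1, which is exactly what the FiPy lab visualizes in 2-D.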

García, R. Edwin

2008-08-25

182

Using a portable wireless computer lab to provide outreach training to public health workers.

Librarians at Louisiana State University Health Sciences Center in Shreveport developed an outreach program for public health workers in north Louisiana. This program provided hands-on training on how to find health information resources on the Web. Several challenges arose during this project. Public health units in the region lacked suitable teaching labs and faced limited travel budgets and tight staffing requirements, which made it impractical for public health workers to travel. One solution to these problems is a portable wireless computer lab that can be set up at each site. The outreach program utilized this approach to present on-site training to public health workers in the region. The paper discusses operational and technical issues encountered in implementing this public health outreach project. PMID:17135147

Watson, Michael M; Timm, Donna F; Parker, Dawn M; Adams, Mararia; Anderson, Angela D; Pernotto, Dennis A; Comegys, Marianne

2006-01-01

183

Low-Rank Incremental Methods for Computing Dominant Singular Subspaces

Computing the singular values and vectors of a matrix is a crucial kernel in numerous scientific and industrial applications. As such, numerous methods have been proposed to handle this problem in a computationally efficient way. This paper considers a family of methods for incrementally computing the dominant SVD of a large matrix A. Specifically, we describe a unification of a number of previously disparate methods for approximating the dominant SVD via a single pass through A. We tie the behavior of these methods to that of a class of optimization-based iterative eigensolvers on A'*A. An iterative procedure is proposed which allows the computation of an accurate dominant SVD via multiple passes through A. We present an analysis of the convergence of this iteration, and provide empirical demonstration of the proposed method on both synthetic and benchmark data.
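
One common single-pass scheme of the family the paper unifies can be sketched as follows (the details differ between the unified methods; this block-update variant is illustrative, not the paper's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

def incremental_svd(A, k, block=20):
    """Approximate the dominant rank-k SVD of A in a single pass over
    its columns, absorbing one block at a time."""
    U = s = None
    for j in range(0, A.shape[1], block):
        B = A[:, j:j + block]
        if U is None:
            U, s, _ = np.linalg.svd(B, full_matrices=False)
        else:
            P = U.T @ B                        # part of B inside the current subspace
            R = B - U @ P                      # residual outside it
            Q, _ = np.linalg.qr(R)             # orthonormal basis for the residual
            K = np.block([[np.diag(s), P],
                          [np.zeros((Q.shape[1], s.size)), Q.T @ R]])
            Uk, s, _ = np.linalg.svd(K, full_matrices=False)
            U = np.hstack([U, Q]) @ Uk
        U, s = U[:, :k], s[:k]                 # truncate back to rank k
    return U, s

# On an exactly rank-5 matrix the single pass recovers the true singular values.
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 200))
U, s = incremental_svd(A, k=5)
print(np.max(np.abs(s - np.linalg.svd(A, compute_uv=False)[:5])))
```

Each update works on a small (k + block) sized core matrix, so the cost per block is independent of the number of columns already seen, which is what makes the single-pass formulation attractive for large A.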

Baker, Christopher G. [ORNL]; Gallivan, Kyle A. [Florida State University]; Van Dooren, Paul [Université Catholique de Louvain]

2012-01-01

184

Developing a multimodal biometric authentication system using soft computing methods.

Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize the applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision. PMID:25502384

Malcangi, Mario

2015-01-01

185

Comparison of Different Methods for Computing Lyapunov Exponents

Different discrete and continuous methods for computing the Lyapunov exponents of dynamical systems are compared for their efficiency and accuracy. All methods are based on either the QR or the singular value decomposition. The relationship between the discrete methods is discussed in terms of the iteration algorithms and the decomposition procedures used. We give simple derivations of the differential equations…
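The discrete QR approach the abstract compares can be sketched on a concrete two-dimensional system, the Hénon map: propagate two tangent vectors with the Jacobian and re-orthonormalize them with Gram-Schmidt (a 2x2 QR step) at every iteration, then average the logarithms of the diagonal entries. A minimal sketch; the function name and defaults are illustrative, not from the paper.

```python
import math

def henon_lyapunov(a=1.4, b=0.3, n_iter=20000, n_skip=100):
    """Estimate both Lyapunov exponents of the Henon map
    (x, y) -> (1 - a*x**2 + y, b*x) by the discrete QR method."""
    x, y = 0.1, 0.1
    v1, v2 = (1.0, 0.0), (0.0, 1.0)   # tangent vectors
    s1 = s2 = 0.0
    for k in range(n_skip + n_iter):
        j11, j12, j21, j22 = -2.0 * a * x, 1.0, b, 0.0   # Jacobian at (x, y)
        x, y = 1.0 - a * x * x + y, b * x                # map step
        # propagate tangent vectors with the Jacobian
        w1 = (j11 * v1[0] + j12 * v1[1], j21 * v1[0] + j22 * v1[1])
        w2 = (j11 * v2[0] + j12 * v2[1], j21 * v2[0] + j22 * v2[1])
        # QR step via Gram-Schmidt re-orthonormalization
        r11 = math.hypot(w1[0], w1[1])
        e1 = (w1[0] / r11, w1[1] / r11)
        r12 = e1[0] * w2[0] + e1[1] * w2[1]
        u2 = (w2[0] - r12 * e1[0], w2[1] - r12 * e1[1])
        r22 = math.hypot(u2[0], u2[1])
        v1, v2 = e1, (u2[0] / r22, u2[1] / r22)
        if k >= n_skip:                                  # skip the transient
            s1 += math.log(r11)
            s2 += math.log(r22)
    return s1 / n_iter, s2 / n_iter
```

A useful check: since the Hénon Jacobian has constant determinant -b, the two exponents must sum to ln b at every step, up to rounding.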

Karlheinz Geist; Ulrich Parlitz; Werner Lauterborn

1990-01-01

186

Computer Experiments with Newton's Method

J. Orlando Freitas (orlando@uma.pt), Escola Secundária de… "Newton's method has served as one of the most fruitful paradigms in the development of complex iteration theory." (H.-O. Peitgen, 1988) We study the chaotic behaviour of Newton's method with graphic calculators…
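The kind of experiment the abstract describes can be reproduced in a few lines rather than on a graphic calculator: iterate Newton's method for f(z) = z**3 - 1 from a complex seed and record which cube root of unity it reaches. The basin boundaries form the familiar Newton fractal, so nearby seeds can land on different roots. A minimal sketch; names are illustrative.

```python
import cmath

def newton_root(z, max_iter=100, tol=1e-12):
    """Run Newton's method on f(z) = z**3 - 1 from a complex seed and
    return the index (0, 1 or 2) of the cube root of unity it converges
    to, or None (e.g. the critical point z = 0, or no convergence)."""
    roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    for _ in range(max_iter):
        df = 3 * z ** 2
        if df == 0:
            return None            # Newton step undefined at the critical point
        z = z - (z ** 3 - 1) / df  # Newton update z -> z - f(z)/f'(z)
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i
    return None
```

Coloring each pixel of the complex plane by the returned index draws the three intertwined basins whose chaotic boundary motivates the study.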

Paris-Sud XI, Université de

187

A New Highly Convergent Monte Carlo Method for Matrix Computations

I.T. Dimov, V.N. Alexandrov. In this paper a second degree iterative Monte Carlo method for solving systems of linear… is shown to be at least c2N times less than the number of realizations Nc of the existing Monte Carlo method…
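The abstract is truncated, but the classical scheme such methods build on, the von Neumann-Ulam random-walk solver for x = Lx + f, can be sketched. This is the basic first-degree estimator, not the paper's second-degree iterative variant; the function name, the uniform transition probabilities, and the termination probability q are illustrative choices.

```python
import random

def mc_solve(L, f, walks=20000, q=0.5, seed=1):
    """Von Neumann-Ulam Monte Carlo estimate of the solution of x = L x + f
    (valid when the Neumann series sum_k L^k f converges).  Each walk stops
    with probability q at every step and otherwise jumps to a uniformly
    random state; the weight w keeps the estimator unbiased."""
    n = len(f)
    rng = random.Random(seed)
    x = []
    for i0 in range(n):
        total = 0.0
        for _ in range(walks):
            i, w, score = i0, 1.0, f[i0]
            while rng.random() > q:           # continue with probability 1 - q
                j = rng.randrange(n)          # uniform transition i -> j
                w *= L[i][j] * n / (1.0 - q)  # importance weight correction
                i = j
                score += w * f[i]
            total += score
        x.append(total / walks)
    return x
```

Each walk scores one random term of the Neumann series; averaging over many walks estimates every component of the solution without ever factorizing the matrix.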

Dimov, Ivan

188

METHODOLOGICAL NOTES: Computer viruses and methods of combatting them

NASA Astrophysics Data System (ADS)

This article examines the current virus situation for personal computers and time-sharing computers. Basic methods of combatting viruses are presented. Specific recommendations are given to eliminate the most widespread viruses. A short description is given of a universal antiviral system, PHENIX, which has been developed.

Landsberg, G. L.

1991-02-01

189

An Immersed Boundary Method for Computing Anisotropic Permeability of Structured Porous Media

Slide presentation on an immersed boundary method for computing the anisotropic permeability of structured porous media; example materials include amorphous nano-porous material, e.g. porous glass (image source: http://gubbins.ncsu.edu/research.html). Outline: averaged transport in porous…

Al Hanbali, Ahmad

190

Computational methods for Traditional Chinese Medicine: A survey

Traditional Chinese Medicine (TCM) has been actively researched through various approaches, including computational techniques. A review on basic elements of TCM is provided to illuminate various challenges and progresses in its study using computational methods. Information on various TCM formulations, in particular resources on databases of TCM formulations and their integration with Western medicine, are analyzed in several facets.

Suryani Lukman; Yulan He; Siu-Cheung Hui

2007-01-01

191

Investigation on reconstruction methods applied to 3D terahertz computed tomography

B. Recur et al. (em.abraham@loma.u-bordeaux1.fr). 3D terahertz computed tomography has been performed using…

Boyer, Edmond

192

Theory and Computational Methods for Dynamic Projections in High-Dimensional Data Visualization

Andreas Buja, Dianne Cook, Daniel Asimov, Catherine Hurley. March 31, 2004. Projections are a common… the resulting 3-D pointclouds, and presents a 2-D projection thereof to the viewer of a computer screen. Human…

Buja, Andreas

193

The Application Based on Bracket Method for Planar Computational Geometry

In this paper, we study two basic problems in planar computational geometry with the bracket method. One is how to judge whether a point is inside a given convex polygon; the other is how to compute the convex hull of planar points. The key idea of our criteria is to use the bracket, which is made up of the homogeneous coordinates
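The bracket of three planar points is the determinant of their homogeneous coordinates (x, y, 1), i.e. twice the signed area of the triangle they form. A minimal sketch of the first problem, point-in-convex-polygon, using that bracket as the only primitive (function names are illustrative, not the paper's):

```python
def bracket(p, a, b):
    """Bracket [p a b]: determinant of the homogeneous coordinates (x, y, 1)
    of the three points, equal to twice the signed area of triangle pab."""
    return (a[0] - p[0]) * (b[1] - p[1]) - (a[1] - p[1]) * (b[0] - p[0])

def inside_convex(p, poly):
    """p is inside (or on the boundary of) the convex polygon iff the
    brackets [p v_i v_{i+1}] all share one sign for the ordered vertices
    (all >= 0 for a counter-clockwise vertex list)."""
    k = len(poly)
    signs = [bracket(p, poly[i], poly[(i + 1) % k]) for i in range(k)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
```

The same sign test is the orientation predicate behind convex-hull algorithms such as Andrew's monotone chain, which addresses the paper's second problem.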

Ying Chen; Yaogang Du; Chunming Yuan

2009-01-01

194

A Fast Method for Local Penetration depth Computation

This paper presents a fast method for determining an approximation of the local penetration information for intersecting polyhedral models. As opposed to most techniques, this algorithm requires no specific knowledge of the object's geometry or topology, or any preprocessing computations. In order to achieve real-time performance even for complex, nonconvex models, we decouple the computation of the local penetration directions

Stephane Redon; Ming C. Lin

2006-01-01

195

Transonic Flow Computations Using Nonlinear Potential Methods

NASA Technical Reports Server (NTRS)

This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.

Holst, Terry L.; Kwak, Dochan (Technical Monitor)

2000-01-01

196

A comparison of skyshine computational methods.

A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes. PMID:16604692

Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

2005-01-01

197

2.093 Computer Methods in Dynamics, Fall 2002

Formulation of finite element methods for analysis of dynamic problems in solids, structures, fluid mechanics, and heat transfer. Computer calculation of matrices and numerical solution of equilibrium equations by direct ...

Bathe, Klaus-Jürgen

198

Platform-independent method for computer aided schematic drawings

A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

Vell, Jeffrey L. (Slingerlands, NY); Siganporia, Darius M. (Clifton Park, NY); Levy, Arthur J. (Fort Lauderdale, FL)

2012-02-14

199

12 CFR 227.25 - Unfair balance computation method.

Code of Federal Regulations, 2010 CFR

...RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM UNFAIR OR DECEPTIVE ACTS OR PRACTICES (REGULATION AA) Consumer Credit Card Account Practices Rule § 227.25 Unfair balance computation method. (a) General...

2010-01-01

200

A memory based method for computing robot-arm configuration

A thesis by Saleem Karimjee, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 1985. The overall purpose of this research was to investigate a method for accurately approximating the inverse kinematic solution… with the ultimate goal of providing high-accuracy approximations with low computation time.

Karimjee, Saleem

1985-01-01

201

Computational Methods in Quantum Field Theory

After a brief introduction to the statistical description of data, these lecture notes focus on quantum field theories as they emerge from lattice models in the critical limit. For the simulation of these lattice models, Markov chain Monte-Carlo methods are widely used. We discuss the heat bath and, more modern, cluster algorithms. The Ising model is used as a concrete illustration of important concepts such as correspondence between a theory of branes and quantum field theory or the duality map between strong and weak couplings. The notes then discuss the inclusion of gauge symmetries in lattice models and, in particular, the continuum limit in which quantum Yang-Mills theories arise.

Kurt Langfeld

2007-11-19

202

A publication for College of Computing and Digital Media alumni IN THE LOOP

For individuals living with mental health disorders, the stigmas associated with their illness can compound an already difficult and painful experience. "There's a lack of understanding among…"

Schaefer, Marcus

203

Computer crime and abuse: A survey of public attitudes and awareness

In recent years, a number of surveys have indicated a significant escalation in reported incidents of computer crime and abuse. This rise is coupled with increasing attention to the issue in the mass media, which has the effect of heightening public perceptions of problems with IT and may represent a barrier to the adoption of technologies such as the Internet

Paul Dowland; Steven Furnell; H. M. Illingworth; Paul L. Reynolds

1999-01-01

204

Despite the rich literature on disciplinary knowledge construction and multilingual scholars’ academic literacy practices, little is known about how novice scholars are engaged in knowledge construction in negotiation with various target discourse communities. In this case study, with a focused analysis of a Chinese computer science doctoral student's alternate forms of one paper, i.e., its Chinese version aimed at publication

Yongyan Li

2006-01-01

205

Accepted for publication in Spatial Cognition and Computation, 2004. Commonsense notions… of how humans actually think about space, i.e., to be cognitively informed. Proximity and direction are two aspects of these categories. The spaces investigated are at the environmental scale, that is, at the scale…

Worboys, Mike

206

Perceptions of Connectedness: Public Access Computing and Social Inclusion in Colombia

Of all the benefits public access computers (PAC) offer users, one stands apart: stronger personal connections with friends and family. A closer look at the results of a qualitative study among users of libraries, telecenters, and cybercafes in Colombia, South America, shows that social media and personal relationships can also have an important community and sociopolitical dimension. By fostering a

Luis Fernando Baron; Ricardo Gomez

2012-01-01

207

ERIC Educational Resources Information Center

The purpose of this study was to determine what recent progress had been made in Georgia public elementary school library media centers regarding access to advanced telecommunications and computer technologies as a result of special funding. A questionnaire addressed the following areas: automation and networking of the school library media center…

Rogers, Jackie L.

208

ERIC Educational Resources Information Center

Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

2013-01-01

209

Pretrial Publicity and the Jury: Research and Methods

Research conducted over the past 40 years demonstrates that pretrial publicity (PTP) can negatively influence jurors' perceptions of parties in criminal and civil cases receiving substantial news coverage. Changes in the news media over the same period of time have made news coverage more accessible to the public as traditional media including newspapers, television, and radio are complemented with new…

Lisa M. Spano; Jennifer L. Groscup; Steven D. Penrod

210

Methods for operating parallel computing systems employing sequenced communications

A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system. 15 figs.

Benner, R.E.; Gustafson, J.L.; Montry, G.R.

1999-08-10

211

Methods for operating parallel computing systems employing sequenced communications

A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system.

Benner, Robert E. (Albuquerque, NM); Gustafson, John L. (Albuquerque, NM); Montry, Gary R. (Albuquerque, NM)

1999-01-01

212

Convergence acceleration of the Proteus computer code with multigrid methods

NASA Technical Reports Server (NTRS)

Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.

Demuren, A. O.; Ibraheem, S. O.

1992-01-01

213

Computational Simulations and the Scientific Method

NASA Technical Reports Server (NTRS)

As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

Kleb, Bil; Wood, Bill

2005-01-01

214

Computational methods to identify new antibacterial targets.

The development of resistance to all current antibiotics in the clinic means there is an urgent unmet need for novel antibacterial agents with new modes of action. One of the best ways of finding these is to identify new essential bacterial enzymes to target. The advent of a number of in silico tools has aided classical methods of discovering new antibacterial targets, and these programs are the subject of this review. Many of these tools apply a cheminformatic approach, utilizing the structural information of either ligand or protein, chemogenomic databases, and docking algorithms to identify putative antibacterial targets. Considering the wealth of potential drug targets identified from genomic research, these approaches are perfectly placed to mine this rich resource and complement drug discovery programs. PMID:24974974

McPhillie, Martin J; Cain, Ricky M; Narramore, Sarah; Fishwick, Colin W G; Simmons, Katie J

2015-01-01

215

Computer systems and methods for visualizing data

A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.

Stolte, Chris (Palo Alto, CA); Hanrahan, Patrick (Portola Valley, CA)

2010-07-13

216

Method and computer program product for maintenance and modernization backlogging

According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.

Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

2013-02-19

217

Comparison of methods for computing streamflow statistics for Pennsylvania streams

Methods for computing streamflow statistics intended for use on ungaged locations on Pennsylvania streams are presented and compared to frequency distributions of gaged streamflow data. The streamflow statistics used in the comparisons include the 7-day 10-year low flow, 50-year flood flow, and the 100-year flood flow; additional statistics are presented. Streamflow statistics for gaged locations on streams in Pennsylvania were computed using three methods for the comparisons: 1) Log-Pearson type III frequency distribution (Log-Pearson) of continuous-record streamflow data, 2) regional regression equations developed by the U.S. Geological Survey in 1982 (WRI 82-21), and 3) regional regression equations developed by the Pennsylvania State University in 1981 (PSU-IV). Log-Pearson distribution was considered the reference method for evaluation of the regional regression equations. Low-flow statistics were computed using the Log-Pearson distribution and WRI 82-21, whereas flood-flow statistics were computed using all three methods. The urban adjustment for PSU-IV was modified from the recommended computation to exclude Philadelphia and the surrounding areas (region 1) from the adjustment. Adjustments for storage area for PSU-IV were also slightly modified. A comparison of the 7-day 10-year low flow computed from Log-Pearson distribution and WRI 82-21 showed that the methods produced significantly different values for about 7 percent of the state. The same methods produced 50-year and 100-year flood flows that were significantly different for about 24 percent of the state. Flood-flow statistics computed using Log-Pearson distribution and PSU-IV were not significantly different in any regions of the state. These findings are based on a statistical comparison using the t-test on signed ranks and graphical methods.
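The Log-Pearson type III reference method can be sketched in a few lines: take base-10 logs of the annual series, fit mean, standard deviation and skew, and apply a Pearson III frequency factor. This is an illustrative simplification using the Wilson-Hilferty approximation, not the WRI 82-21 / PSU-IV regression procedures or the full federal guideline computation; all names and the sample data are assumptions.

```python
import math
from statistics import NormalDist, mean, stdev

def frequency_factor(g, p_exceed):
    """Wilson-Hilferty approximation of the Pearson III frequency factor
    for skew g and annual exceedance probability p_exceed."""
    z = NormalDist().inv_cdf(1.0 - p_exceed)
    if abs(g) < 1e-12:
        return z                      # zero skew reduces to the normal deviate
    t = 1.0 + g * z / 6.0 - g * g / 36.0
    return (2.0 / g) * (t ** 3 - 1.0)

def lp3_quantile(flows, p_exceed):
    """Log-Pearson III flood quantile, e.g. p_exceed = 0.01 for the
    100-year flood of an annual peak-flow series."""
    logs = [math.log10(q) for q in flows]
    m, s, n = mean(logs), stdev(logs), len(logs)
    # sample skew of the logs with the standard bias correction
    g = (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in logs)
    return 10 ** (m + frequency_factor(g, p_exceed) * s)
```

With zero skew the factor is simply the standard normal deviate, which is a convenient sanity check on any implementation.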

Ehlke, Marla H.; Reed, Lloyd A.

1999-01-01

218

Analysing Privacy-Invasive Software Using Computer Forensic Methods

User privacy is widely affected by the occurrence of privacy-invasive software (PIS) on the Internet. We present a computer forensic investigation method for detecting and analysing PIS. In an experiment we use this method to evaluate both the evolution of PIS and associated countermeasures, over a four-year period. Background information on both PIS and countermeasures…

Martin Boldt; Bengt Carlsson

219

Making sense of teaching methods in computing education

The goal of this paper is to provide an initial guidepost for computer science (CS) educators seeking to make better use of educational theory research. The authors provide an overview of the latest and most prominent methods for science teaching, and discuss their application to CS education. They provide examples of current applications. For those methods which have not yet

Kris D. Powers; Daniel T. Powers

1999-01-01

220

A computer method for thermal power cycle calculation

This paper describes a highly flexible computer method for thermodynamic power cycle calculations (PCC). With this method the user can model any cycle scheme by selecting components from a library and connecting them in an appropriate way. The flexibility is not restricted to predefined cycle schemes. A power cycle is mathematically represented by a system of algebraic equations.

E. Perz

1991-01-01

221

Monte Carlo Methods: A Computational Pattern for Our Pattern Language

Jike Chong and Kurt Keutzer, University of California, Berkeley. The Monte Carlo… for a particular data working set. This paper presents the Monte Carlo Methods software programming pattern…

California at Berkeley, University of

222

Computer Subroutines for Analytic Rotation by Two Gradient Methods.

ERIC Educational Resources Information Center

Two computer subroutine packages for the analytic rotation of a factor matrix, A(p x m), are described. The first program uses the Fletcher (1970) gradient method, and the second uses the Polak-Ribiere (Polak, 1971) gradient method. The calculations in both programs involve the optimization of a function of free parameters. The result is a…

van Thillo, Marielle

223

Calculating PI Using Historical Methods and Your Personal Computer.

ERIC Educational Resources Information Center

Provides a software program for determining PI to the 15th place after the decimal. Explores the history of determining the value of PI from Archimedes to present computer methods. Investigates Wallis's, Liebniz's, and Buffon's methods. Written for Tandy GW-BASIC (IBM compatible) with 384K. Suggestions for Apple II's are given. (MVL)
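The historical methods the article covers translate directly from GW-BASIC to any modern language. A hedged Python sketch of three of them, Archimedes' polygon doubling, the Leibniz series, and the Wallis product (function names and iteration counts are illustrative; the doubling recurrence is written in the algebraically equivalent form that avoids the catastrophic cancellation of sqrt(2 - sqrt(4 - s*s))):

```python
import math

def archimedes_pi(doublings=20):
    """Archimedes: half-perimeter of an inscribed regular polygon,
    starting from a hexagon and repeatedly doubling the side count."""
    n, s = 6, 1.0        # hexagon inscribed in a unit circle has side 1
    for _ in range(doublings):
        s = s / math.sqrt(2.0 + math.sqrt(4.0 - s * s))  # stable doubling step
        n *= 2
    return n * s / 2.0   # perimeter / diameter -> pi

def leibniz_pi(terms=100000):
    """Leibniz: pi/4 = 1 - 1/3 + 1/5 - ...  (very slow convergence)."""
    return 4.0 * sum((-1.0) ** k / (2 * k + 1) for k in range(terms))

def wallis_pi(terms=10000):
    """Wallis: pi/2 = product over k of 4k^2 / (4k^2 - 1)."""
    prod = 1.0
    for k in range(1, terms + 1):
        prod *= 4.0 * k * k / (4.0 * k * k - 1.0)
    return 2.0 * prod
```

Comparing the three makes the article's pedagogical point concrete: twenty polygon doublings beat a hundred thousand Leibniz terms by many digits.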

Mandell, Alan

1989-01-01

224

Method for implementation of recursive hierarchical segmentation on parallel computers

NASA Technical Reports Server (NTRS)

A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.

Tilton, James C. (Inventor)

2005-01-01

225

IACMM - Israel Association for Computational Methods in Mechanics, 29th Israel Symposium. Technion - Israel Institute of Technology, Faculty of Aerospace Engineering and Faculty of Mechanical Engineering.

Adler, Joan

226

Public library computer training for older adults to access high-quality Internet health information

An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at a state university. One hundred and thirty-one older adults aged 54–89 participated in the study between September 2007 and July 2008. Key findings include: a) participants had overwhelmingly positive perceptions of the training program; b) after learning about two NIH websites (http://nihseniorhealth.gov and http://medlineplus.gov) from the training, many participants started using these online resources to find high quality health and medical information and, further, to guide their decision-making regarding a health- or medically-related matter; and c) computer anxiety significantly decreased (p < .001) while computer interest and efficacy significantly increased (p = .001 and p < .001, respectively) from pre- to post-training, suggesting statistically significant improvements in computer attitudes between pre- and post-training. The findings have implications for public libraries, LIS academic programs, and other organizations interested in providing similar programs in their communities. PMID:20161649

Xie, Bo; Bugg, Julie M.

2010-01-01

227

Pedagogical Methods of Teaching "Women in Public Speaking."

ERIC Educational Resources Information Center

A course on women in public speaking, developed at the University of Cincinnati, focuses on the rhetoric of selected women who have been involved in various movements and causes in the United States in the twentieth century. Women studied include educator Mary McLeod Bethune, Congresswoman Jeannette Rankin, suffragette Carrie Chapman Catt, Helen…

Pederson, Lucille M.

228

Public Experiments and Their Analysis with the Replication Method

ERIC Educational Resources Information Center

One of those who failed to establish himself as a natural philosopher in 18th century Paris was the future revolutionary Jean Paul Marat. He did not only publish several monographs on heat, optics and electricity in which he attempted to characterise his work as being purely empirical but he also tried to establish himself as a public lecturer.…

Heering, Peter

2007-01-01

229

Demand for public transport services: Integrating qualitative and quantitative methods

This paper presents an integrated discrete choice and latent variable model (which also includes psychometric indicators for attitudes, perceptions and lifestyle preferences) capturing factors that are important in mode choice behavior, such as lifestyle preferences and personal attitudes or perceptions, and their effect on the market share of public transport.

Bierlaire, Michel

230

Methods and systems for providing reconfigurable and recoverable computing resources

NASA Technical Reports Server (NTRS)

A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.

Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

2010-01-01

231

A stochastic method for computing hadronic matrix elements

We present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal to noise ratio suggesting that the stochastic method can be extended to large volumes providing an efficient approach to compute hadronic matrix elements and form factors.

Drach, Vincent [DESY, Zeuthen (Germany); Jansen, Karl [DESY, Zeuthen (Germany); Alexandrou, Constantia [University of Cyprus, Nicosia (Cyprus); Constantinou, Marth [University of Cyprus, Nicosia (Cyprus); Dinter, Simon [DESY, Zeuthen (Germany); Hadjiyiannakou, Kyriakos [University of Cyprus, Nicosia (Cyprus); Renner, Dru B. [JLAB, Newport News, VA (United States)

2014-01-01

232

A Reduced Order Cyclic Method for Computation of Limit Cycles

A reduced order cyclic method was developed to compute limit-cycle oscillations for large, nonlinear, multidisciplinary systems of equations. Method efficacy was demonstrated for two simplified models: a typical-section airfoil with nonlinear structural coupling and a nonlinear panel in high-speed flow. The cyclic method was verified to maintain second-order temporal accuracy, yield converged limit cycles in about 10 Newton iterates, and

P. S. Beran; D. J. Lucia

2005-01-01

233

The Spectral-Element Method, Beowulf Computing, and Global Seismology

The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables

Dimitri Komatitsch; Jeroen Ritsema; Jeroen Tromp

2002-01-01

234

Robust regression methods for computer vision: A review

Regression analysis (fitting a model to noisy data) is a basic technique in computer vision. Robust regression methods that remain reliable in the presence of various types of noise are therefore of considerable importance. We review several robust estimation techniques and describe in detail the least-median-of-squares (LMedS) method. The method yields the correct result even when half of the data
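The LMedS idea can be sketched for line fitting: fit exact models through random minimal subsets of the data and keep the model that minimizes the median squared residual. This is an illustrative sketch; the trial count and the line model are assumptions, not details from the review.

```python
import random

def lmeds_line(points, n_trials=200, seed=0):
    """Fit y = a*x + b by least-median-of-squares: repeatedly draw
    2-point samples, fit the exact line through them, and keep the
    line whose squared residuals have the smallest median."""
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        res = sorted((y - (a * x + b)) ** 2 for x, y in points)
        med = res[len(res) // 2]          # median squared residual
        if med < best_med:
            best_med, best = med, (a, b)
    return best

# Just over half the data lies on y = 2x + 1; the rest is gross outliers.
inliers = [(x, 2 * x + 1) for x in range(10)]
outliers = [(x, 50 + 7 * x) for x in range(9)]
a, b = lmeds_line(inliers + outliers)
```

Because the median ignores the worst half of the residuals, the gross outliers never influence which candidate line wins, which is the breakdown-point property the review highlights.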

Peter Meer; Doron Mintz; Azriel Rosenfeld; Dong Yoon Kim

1991-01-01

235

Fully consistent CFD methods for incompressible flow computations

NASA Astrophysics Data System (ADS)

Nowadays, collocated-grid-based CFD methods are among the most efficient tools for computing flows past wind turbines. To ensure robustness, these methods require special attention to the well-known problem of pressure-velocity coupling. To ensure pressure-velocity coupling on collocated grids, many commercial codes use the so-called momentum interpolation method of Rhie and Chow [1]. As is known, the method and some of its widespread modifications yield solutions that depend on the time step at convergence. In this paper the magnitude of this dependence is shown to contribute about 0.5% of the total error in a typical turbulent flow computation. Nevertheless, if coarse grids are used, the standard interpolation methods exhibit much stronger inconsistent behavior. To overcome this problem, a recently developed interpolation method that is independent of the time step is used. It is shown that, in comparison to another time-step-independent method, this method may enhance the convergence rate of the SIMPLEC algorithm by up to 25%. The method is verified using turbulent flow computations around a NACA 64618 airfoil and the roll-up of a shear layer, which may appear in wind turbine wakes.

Kolmogorov, D. K.; Shen, W. Z.; Sørensen, N. N.; Sørensen, J. N.

2014-06-01

236

Data analysis through interactive computer animation method (DATICAM)

DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process.

Curtis, J.N.; Schwieder, D.H.

1983-01-01

237

26 CFR 1.167(b)-0 - Methods of computing depreciation.

Code of Federal Regulations, 2011 CFR

... 2011-04-01 false Methods of computing depreciation. 1.167(b)-0 ... Corporations § 1.167(b)-0 Methods of computing depreciation. (a) In general ... reasonable and consistently applied method of computing depreciation may be used or ...
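The regulation permits any reasonable, consistently applied method; the most common such method, straight-line depreciation, can be computed as in this illustrative sketch (the figures are hypothetical and are not taken from the regulation):

```python
def straight_line_depreciation(cost, salvage, useful_life_years):
    """Annual straight-line depreciation: spread the depreciable
    basis (cost minus salvage value) evenly over the useful life."""
    return (cost - salvage) / useful_life_years

# A $10,000 asset with a $1,000 salvage value over a 6-year life:
annual = straight_line_depreciation(10_000, 1_000, 6)  # 1500.0 per year
```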

2011-04-01

238

26 CFR 1.167(b)-0 - Methods of computing depreciation.

Code of Federal Regulations, 2010 CFR

... 2010-04-01 false Methods of computing depreciation. 1.167(b)-0 ... Corporations § 1.167(b)-0 Methods of computing depreciation. (a) In general ... reasonable and consistently applied method of computing depreciation may be used or ...

2010-04-01

239

A spectral method for computing complete synthetic seismograms

NASA Astrophysics Data System (ADS)

Much attention has been paid to the problem of the efficient computation of complete solution synthetic seismograms for flat, plane layered, laterally homogeneous elastic media and for high frequency bandwidths in the field of solid-earth geophysics. The only available methods for computing complete synthetic seismograms are computationally expensive and often suffer from numerical instabilities which limit their ranges of applicability. This report presents a new method based on a spectral representation of the solution of the elastic wave equation. Reformulating eigenvalue and eigenfunction computations avoids the numerical instabilities. A mode searching algorithm is developed which makes it possible to find large numbers of Rayleigh and Love dispersion curves efficiently and reliably. The locked mode approximation allows nearly complete synthetic seismograms to be computed using only the discrete part of the spectrum and using only normal modes with real eigen wave numbers. This is achieved by adding a high velocity cap layer to the bottom of the elastic model and by using phase velocity filtering to attenuate the spurious scattering caused by the cap layer. Comparisons of the results of the locked mode approximation with exact results obtained from other synthesis methods are presented and the limitations of the method are demonstrated and discussed. Examples of synthetic seismograms using the locked mode approximation are given along with a comparison of observed and synthetic seismograms.

Harvey, Danny J.

1987-03-01

240

GRACE: Public Health Recovery Methods following an Environmental Disaster

Different approaches are necessary when Community Based Participatory Research (CBPR) of environmental illness is initiated after an environmental disaster within a community. Often such events are viewed as golden scientific opportunities for epidemiological studies. However, we believe that in such circumstances community engagement and empowerment need to be integrated into the public health service efforts in order for both those efforts and any science to be successful, with special care taken to address the immediate health needs of the community first, rather than the pressing need to answer important scientific questions. We will demonstrate how we have simultaneously provided valuable public health service, embedded generalizable scientific knowledge, and built a successful foundation for supplemental CBPR through our ongoing recovery work after the chlorine gas disaster in Graniteville, South Carolina. PMID:20439226

Svendsen, ER; Whittle, N; Wright, L; McKeown, RE; Sprayberry, K; Heim, M; Caldwell, R; Gibson, JJ; Vena, J.

2014-01-01

241

Measuring coherence of computer-assisted likelihood ratio methods.

Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. PMID:25698513

Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

2015-04-01

242

Advances in turbulent flow computations using high-resolution methods

NASA Astrophysics Data System (ADS)

The paper reviews research activity in connection with the use of high-resolution methods in turbulent flow computations. High-resolution methods have proven able to compute a number of turbulent flows successfully without the need to resort to an explicit turbulence model. Here, we review the basic properties of these methods, present evidence from their successful application to turbulent flows, and discuss theoretical arguments and recent research aiming at justifying their use as an implicit turbulence model. Further, we discuss numerical issues that still need to be addressed. These include the relation of the dissipation and dispersion properties to turbulence properties such as turbulence anisotropy, as well as further validation of the methods in under-resolved simulations of near-wall turbulent attached and separated flows.

Drikakis, Dimitris

2003-10-01

243

The continuous slope-area method for computing event hydrographs

The continuous slope-area (CSA) method expands the slope-area method of computing peak discharge to a complete flow event. Continuously recording pressure transducers installed at three or more cross sections provide water-surface slopes and stage during an event that can be used with cross-section surveys and estimates of channel roughness to compute a continuous discharge hydrograph. The CSA method has been made feasible by the availability of low-cost recording pressure transducers that provide a continuous record of stage. The CSA method was implemented on the Babocomari River in Arizona in 2002 to monitor streamflow in the channel reach by installing eight pressure transducers in four cross sections within the reach. Continuous discharge hydrographs were constructed from five streamflow events during 2002-2006. Results from this study indicate that the CSA method can be used to obtain continuous hydrographs and rating curves can be generated from streamflow events.
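The slope-area computation rests on a uniform-flow formula; a single-section sketch using Manning's equation is shown below. The full USGS procedure averages conveyance across multiple surveyed cross sections, and the numbers here are hypothetical.

```python
def manning_discharge(area_m2, wetted_perimeter_m, slope, n):
    """Discharge from Manning's equation in SI units:
    Q = (1/n) * A * R**(2/3) * sqrt(S), with hydraulic radius R = A/P."""
    R = area_m2 / wetted_perimeter_m
    return (1.0 / n) * area_m2 * R ** (2.0 / 3.0) * slope ** 0.5

# A 20 m^2 flow area with a 12 m wetted perimeter, a water-surface
# slope of 0.002, and a roughness coefficient n = 0.035:
q = manning_discharge(20.0, 12.0, 0.002, 0.035)  # roughly 36 m^3/s
```

With pressure transducers recording stage continuously, the same computation can be repeated at each time step to build the event hydrograph the abstract describes.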

Smith, Christopher F.; Cordova, Jeffrey T.; Wiele, Stephen M.

2010-01-01

244

AN ALGEBRAIC METHOD FOR PUBLIC-KEY CRYPTOGRAPHY

Algebraic key establishment protocols based on the difficulty of solving equations over algebraic structures are described as a theoretical basis for constructing public-key cryptosystems. A protocol is a multi-party algorithm, defined by a sequence of steps, specifying the actions required of two or more parties in order to achieve a specified objective. Furthermore, a key establishment protocol is

Iris Anshel; Michael Anshel; Dorian Goldfeld

1999-01-01

245

Secure encapsulation and publication of biological services in the cloud computing environment.

Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906

Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

2013-01-01

246

Computational methods for the analysis of primate mobile elements

Transposable elements (TEs), defined as discrete pieces of DNA that can move from one site to another within genomes, represent significant components of eukaryotic genomes, including primates. Comparative genome-wide analyses have revealed the considerable structural and functional impact of TE families on primate genomes. Insights into these questions have come in part from the development of computational methods that allow detailed and reliable identification, annotation and evolutionary analyses of the many TE families that populate primate genomes. Here, we present an overview of these computational methods, and describe efficient data mining strategies for providing a comprehensive picture of TE biology in newly available genome sequences. PMID:20238080

Cordaux, Richard; Sen, Shurjo K.; Konkel, Miriam K.; Batzer, Mark A.

2010-01-01

247

Software for computing eigenvalue bounds for iterative subspace matrix methods

NASA Astrophysics Data System (ADS)

This paper describes software for computing eigenvalue bounds for the standard and generalized Hermitian eigenvalue problems, as described in [Y. Zhou, R. Shepard, M. Minkoff, Computing eigenvalue bounds for iterative subspace matrix methods, Comput. Phys. Comm. 167 (2005) 90-102]. The software applies to any subspace method, including Lanczos, Davidson, SPAM, Generalized Davidson Inverse Iteration, Jacobi-Davidson, and the Generalized Jacobi-Davidson methods, and to either outer or inner eigenvalues. It can be applied during the subspace iterations to truncate the iterative process and avoid unnecessary effort once specific eigenvalues have converged to a required target accuracy, and it can be applied to the final set of Ritz values to assess the accuracy of the converged results. Program summary. Title of program: SUBROUTINE BOUNDS_OPT. Catalogue identifier: ADVE. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVE. Computers: any computer that supports a Fortran 90 compiler. Operating systems: any operating system that supports a Fortran 90 compiler. Programming language: Standard Fortran 90. High-speed storage required: 5m+5 working-precision words and 2m+7 integers for m Ritz values. No. of bits in a word: the floating-point working precision is parameterized with the symbolic constant WP. No. of lines in distributed program, including test data, etc.: 2452. No. of bytes in distributed program, including test data, etc.: 281 543. Distribution format: tar.gz. Nature of physical problem: the computational solution of eigenvalue problems using iterative subspace methods has widespread applications in the physical sciences and engineering as well as other areas of mathematical modeling (economics, social sciences, etc.).
Estimating the accuracy of such solutions is a fundamental problem: the bounds give the modeler information about the reliability of the computational results, and they can be used to terminate the iterative procedure at specified accuracy limits. Method of solution: the Ritz values and their residual norms are computed and used as input for the procedure. While knowledge of the exact eigenvalues is not required, we require that the Ritz values are isolated from the exact eigenvalues outside of the Ritz spectrum and that there are no skipped eigenvalues within the Ritz spectrum. Using a multipass refinement approach, upper and lower bounds are computed for each Ritz value. Typical running time: while typical applications would deal with m<20, for m=100000 the running time is 0.12 s on an Apple PowerBook.
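The simplest bound of the kind this software refines states that, for a Hermitian problem, each Ritz value θ with residual norm ‖r‖ brackets an exact eigenvalue in [θ - ‖r‖, θ + ‖r‖]. A minimal NumPy sketch of that basic bound (not the BOUNDS_OPT multipass refinement) is:

```python
import numpy as np

def ritz_with_bounds(A, subspace):
    """Rayleigh-Ritz on a subspace of a Hermitian matrix A, returning
    each Ritz value theta with the residual-norm interval
    [theta - ||r||, theta + ||r||] guaranteed to contain an exact
    eigenvalue of A."""
    Q, _ = np.linalg.qr(subspace)        # orthonormal basis of the subspace
    H = Q.T @ A @ Q                      # projected (Rayleigh) matrix
    thetas, Y = np.linalg.eigh(H)
    out = []
    for theta, y in zip(thetas, Y.T):
        x = Q @ y                        # Ritz vector in the full space
        rnorm = np.linalg.norm(A @ x - theta * x)
        out.append((theta, theta - rnorm, theta + rnorm))
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = (A + A.T) / 2                        # Hermitian test matrix
bounds = ritz_with_bounds(A, rng.standard_normal((50, 5)))
eigs = np.linalg.eigvalsh(A)
# Every interval contains at least one exact eigenvalue.
ok = all(any(lo <= e <= hi for e in eigs) for _, lo, hi in bounds)
```

The paper's refinement tightens these crude intervals substantially when the Ritz values are well separated, which is exactly the isolation assumption stated in the method of solution.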

Shepard, Ron; Minkoff, Michael; Zhou, Yunkai

2005-07-01

248

Informed public choices for low-carbon electricity portfolios using a computer decision tool.

Reducing CO2 emissions from the electricity sector will likely require policies that encourage the widespread deployment of a diverse mix of low-carbon electricity generation technologies. Public discourse informs such policies. To make informed decisions and to productively engage in public discourse, citizens need to understand the trade-offs between electricity technologies proposed for widespread deployment. Building on previous paper-and-pencil studies, we developed a computer tool that aimed to help nonexperts make informed decisions about the challenges faced in achieving a low-carbon energy future. We report on an initial usability study of this interactive computer tool. After providing participants with comparative and balanced information about 10 electricity technologies, we asked them to design a low-carbon electricity portfolio. Participants used the interactive computer tool, which constrained portfolio designs to be realistic and yield low CO2 emissions. As they changed their portfolios, the tool updated information about projected CO2 emissions, electricity costs, and specific environmental impacts. As in the previous paper-and-pencil studies, most participants designed diverse portfolios that included energy efficiency, nuclear, coal with carbon capture and sequestration, natural gas, and wind. Our results suggest that participants understood the tool and used it consistently. The tool may be downloaded from http://cedmcenter.org/tools-for-cedm/informing-the-public-about-low-carbon-technologies/ . PMID:24564708

Mayer, Lauren A Fleishman; Bruine de Bruin, Wändi; Morgan, M Granger

2014-04-01

249

Cloud Computing Research and Development Trend

With the development of parallel computing, distributed computing, and grid computing, a new computing model has appeared. The concept of cloud computing comes from grid computing, public computing, and SaaS. It is a new method that shares a basic framework. The basic principle of cloud computing is to assign the computation to a great number of distributed computers, rather than the local computer or

Shuai Zhang; Shufen Zhang; Xuebin Chen; Xiuzhen Huo

2010-01-01

250

ERIC Educational Resources Information Center

The "Convince Me" computer environment supports critical thinking by allowing users to create and evaluate computer-based representations of arguments. This study investigates theoretical and design considerations pertinent to using "Convince Me" as an educational tool to support reasoning about public policy issues. Among computer environments…

Adams, Stephen T.

2003-01-01

251

Computer controlled fluorometer device and method of operating same

A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.

Kolber, Z.; Falkowski, P.

1990-07-17

252

Computational Methods for CLIP-seq Data Processing.

RNA-binding proteins (RBPs) are at the core of post-transcriptional regulation and thus of gene expression control at the RNA level. One of the principal challenges in the field of gene expression regulation is to understand the RBPs' mechanisms of action. As a result of the recent evolution of experimental techniques, it is now possible to obtain the RNA regions recognized by RBPs on a transcriptome-wide scale. In fact, CLIP-seq protocols use the joint action of CLIP (crosslinking immunoprecipitation) and high-throughput sequencing to recover the transcriptome-wide set of interaction regions for a particular protein. Nevertheless, computational methods are necessary to process CLIP-seq experimental data and are key to advancing the understanding of gene regulatory mechanisms. Considering the importance of computational methods in this area, we present a review of the current status of computational approaches used and proposed for CLIP-seq data. PMID:25336930

Reyes-Herrera, Paula H; Ficarra, Elisa

2014-01-01

253

Computational Methods for CLIP-seq Data Processing

RNA-binding proteins (RBPs) are at the core of post-transcriptional regulation and thus of gene expression control at the RNA level. One of the principal challenges in the field of gene expression regulation is to understand the RBPs' mechanisms of action. As a result of the recent evolution of experimental techniques, it is now possible to obtain the RNA regions recognized by RBPs on a transcriptome-wide scale. In fact, CLIP-seq protocols use the joint action of CLIP (crosslinking immunoprecipitation) and high-throughput sequencing to recover the transcriptome-wide set of interaction regions for a particular protein. Nevertheless, computational methods are necessary to process CLIP-seq experimental data and are key to advancing the understanding of gene regulatory mechanisms. Considering the importance of computational methods in this area, we present a review of the current status of computational approaches used and proposed for CLIP-seq data. PMID:25336930

Reyes-Herrera, Paula H; Ficarra, Elisa

2014-01-01

254

Computer controlled fluorometer device and method of operating same

A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.

Kolber, Zbigniew (Shoreham, NY); Falkowski, Paul (Stony Brook, NY)

1990-01-01

255

Computational Methods for Dynamic Stability and Control Derivatives

NASA Technical Reports Server (NTRS)

Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

2003-01-01

256

Computational Methods for Dynamic Stability and Control Derivatives

NASA Technical Reports Server (NTRS)

Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

2004-01-01

257

Fast and Slow Dynamics for the Computational Singular Perturbation Method

The Computational Singular Perturbation (CSP) method of Lam and Goussis is an iterative method to reduce the dimensionality of systems of ordinary differential equations with multiple time scales. In [J. Nonlin. Sci., to appear], the authors showed that each iteration of the CSP algorithm improves the approximation of the slow manifold by one order. In this paper, it is shown

Antonios Zagaris; Hans G. Kaper; Tasso J. Kaper

2004-01-01

258

Method and system for environmentally adaptive fault tolerant computing

NASA Technical Reports Server (NTRS)

A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

2010-01-01

259

Stress intensity estimates by a computer assisted photoelastic method

NASA Technical Reports Server (NTRS)

Following an introductory history, the frozen stress photoelastic method is reviewed together with analytical and experimental aspects of cracks in photoelastic models. Analytical foundations are then presented upon which a computer assisted frozen stress photoelastic technique is based for extracting estimates of stress intensity factors from three-dimensional cracked body problems. The use of the method is demonstrated for two currently important three-dimensional crack problems.

Smith, C. W.

1977-01-01

260

A new mixed finite element method for computing viscoelastic flows

A new mixed finite element method for computing viscoelastic flows is presented. The mixed formulation is based on the introduction of the rate of deformation tensor as an additional unknown. Contrary to the popular EVSS method [D. Rajagopalan, R.A. Brown and R.C. Armstrong, J. Non-Newtonian Fluid Mech., 36 (1990) 159], no change of variable is performed into the constitutive equation.

Robert Guénette; Michel Fortin

1995-01-01

261

A new mixed finite element method for computing viscoelastic flows

A new mixed finite element method for computing viscoelastic flows is presented. The mixed formulation is based on the introduction of the rate of deformation tensor as an additional unknown. Contrary to the popular EVSS method [D. Rajagopalan, R.A. Brown and R.C. Armstrong, J. Non-Newtonian Fluid Mech., 36 (1990) 159], no change of variable is performed into the constitutive equation.

Robert Guénette; Michel Fortin

1995-01-01

262

Computational Methods for Structural Mechanics and Dynamics, part 1

NASA Technical Reports Server (NTRS)

The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

Stroud, W. Jefferson (editor); Housner, Jerrold M. (editor); Tanner, John A. (editor); Hayduk, Robert J. (editor)

1989-01-01

263

Statistical methods in rehabilitation literature: A survey of recent publications

Objective: To document the use of statistical methods in the recent rehabilitation research literature. Design: Descriptive survey study. Methods: All 1,039 articles published between January 1990 and December 1993 in the American Journal of Physical Medicine and Rehabilitation and the Archives of Physical Medicine and Rehabilitation were reviewed for the use of statistical methods. Results: There were 682 (66%) research articles in the

Staci J. Schwartz; Marianne Sturr; Gary Goldberg

1996-01-01

264

Experience Papers Introducing Research Methods to Computer Science Honours

Research skills are important for any academic and can be of great benefit to any professional person. At the University of the Witwatersrand we have for a number of years included the completion of a research report

Galpin, Vashti

265

15th April 2005 Analysis of Newton's Method to Compute Travelling Waves in Discrete Media

Analysis of Newton's Method to Compute Travelling Waves in Discrete Media, 15th April 2005. CNN Pattern Recognition, Line Detection: the coupling constants Ai,j should ... Networks can be implemented as electronic circuits; couplings Ak,l can be set by changing impedances
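The core iteration behind such an analysis, generic Newton's method, can be sketched in scalar form. This is an illustrative sketch only; the work above applies Newton's method to systems of travelling-wave equations in discrete media, not to a scalar root-finding problem.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Basic Newton iteration x <- x - f(x)/f'(x) for a scalar equation,
    stopping when the Newton step is smaller than tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# sqrt(2) as the positive root of f(x) = x**2 - 2, starting from x0 = 1:
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

For travelling-wave problems the scalar division becomes a linear solve with the Jacobian of the discretized wave equations, but the quadratic local convergence carries over.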

Hupkes, Hermen Jan

266

Computational Methods for Learning Population History from Large Scale Genetic Variation Datasets

Computational Methods for Learning Population History from Large Scale Genetic Variation Datasets. Doctor of Philosophy thesis, copyright 2013, Ming-Chi Tsai. Keywords: Population Genetics, Minimum Description Length. ... is a fundamental question in population genetics with numerous implications for basic and applied research

Matsuda, Noboru

267

Logical Methods in Computer Science Vol. ? (?:?) 2???, ? pages

... of security and functionality. 2000 ACM Subject Classification: D.3, K.6.5. Key words and phrases: role... (Logical Methods in Computer Science, www.lmcs-online.org). ... permit access controls to be expressed, in situ, as part of the code realizing basic functionality

Riely, James

268

A Survey of Computer Methods in Forensic Handwritten Document Examination

A Survey of Computer Methods in Forensic Handwritten Document Examination (Sargur Srihari, Graham Leedham). Forensic document examination is at a cross-roads due to challenges posed to its recent efforts in the areas of establishing a scientific basis of forensic handwriting examination

269

Computer lessons for a social psychology research methods course

The development and use of three computer-based lessons for an undergraduate course on research methods in social psychology are described. One lesson is essentially a tutorial on main effects and interactions. Two additional lessons are simulations demonstrating the concepts of experimental power and survey sampling. A rationale and description of each lesson is provided.

Russell H. Fazio; Martin H. Backler

1983-01-01

270

Computer-Aided Method of Visual Absorption Capacity Estimation

In this paper, a novel method of computer-aided VAC (Visual Absorption Capacity) estimation is presented, which was applied to an investigation in the Bad Muskau - Łęknica region. The English-style Muskauer Park, located in the vicinity, possesses world-renowned values, a fact confirmed by its inscription on the UNESCO list in 2004. Therefore, during the process of

Agnieszka OZIMEK; Pawel OZIMEK

271

Convergence acceleration of the Proteus computer code with multigrid methods

NASA Technical Reports Server (NTRS)

This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
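The multigrid concept implemented in Proteus can be illustrated in miniature by a two-grid cycle for the 1D Poisson equation. This is an illustrative sketch, not the Proteus scheme; the smoothing counts, damping factor, and grid size are assumptions.

```python
import numpy as np

def poisson1d(n, h):
    """Matrix of -u'' with homogeneous Dirichlet BCs on n interior points."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def two_grid(A, Ac, u, f, n_smooth=3, omega=2.0 / 3.0):
    """One two-grid cycle: damped-Jacobi pre-smoothing, full-weighting
    restriction of the residual, exact coarse solve, linear-interpolation
    correction, then post-smoothing."""
    d = np.diag(A)
    for _ in range(n_smooth):                       # pre-smooth
        u = u + omega * (f - A @ u) / d
    r = f - A @ u                                   # fine-grid residual
    rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])  # restrict
    ec = np.linalg.solve(Ac, rc)                    # coarse correction
    e = np.zeros_like(u)
    e[1::2] = ec                                    # coarse points
    pad = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (pad[:-1] + pad[1:])            # interpolate between
    u = u + e
    for _ in range(n_smooth):                       # post-smooth
        u = u + omega * (f - A @ u) / d
    return u

n = 63                                              # fine interior points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = poisson1d(n, h)
Ac = poisson1d((n - 1) // 2, 2.0 * h)
f = np.pi**2 * np.sin(np.pi * x)                    # exact u = sin(pi*x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(A, Ac, u, f)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The smoother damps high-frequency error on the fine grid while the coarse solve removes the smooth components, which is why the cycle count needed for convergence stays roughly independent of the grid size, the property the Proteus study exploits.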

Demuren, A. O.; Ibraheem, S. O.

1995-01-01

272

EQUILIBRIUM AND NONEQUILIBRIUM FOUNDATIONS OF FREE ENERGY COMPUTATIONAL METHODS

Statistical mechanics provides a rigorous framework for the numerical estimation of free energy differences in complex systems such as biomolecules. This paper presents a brief review of the statistical mechanical identities underlying a number of techniques for computing free energy differences. Both equilibrium and nonequilibrium methods are covered.
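A concrete instance of the nonequilibrium identities reviewed here is the Jarzynski equality, ΔF = -kT ln⟨exp(-W/kT)⟩, which recovers an equilibrium free energy difference from an average over nonequilibrium work values. The sketch below uses synthetic Gaussian work data, for which the exact answer μ - σ²/(2kT) is known; real work values would come from repeated pulling simulations.

```python
import numpy as np

kT = 1.0
rng = np.random.default_rng(0)

# hypothetical work values from repeated nonequilibrium "pulling" runs,
# drawn here from a Gaussian so the exact free energy difference is known
mu, sigma = 5.0, 1.0
W = rng.normal(mu, sigma, 200_000)

# Jarzynski estimator: exponential average of the work
dF_jarzynski = -kT * np.log(np.mean(np.exp(-W / kT)))

# for Gaussian work distributions, dF = mu - sigma^2 / (2 kT) exactly
dF_exact = mu - sigma ** 2 / (2.0 * kT)
```

Note that ⟨W⟩ exceeds the estimate, consistent with the second law; the dissipated work ⟨W⟩ - ΔF is what the exponential average corrects for.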

C. JARZYNSKI

2001-03-01

273

Computational methods for calculating geometric parameters of tectonic plates

Present day and ancient plate tectonic configurations can be modelled in terms of non-overlapping polygonal regions, separated by plate boundaries, on the unit sphere. The computational methods described in this article allow an evaluation of the area and the inertial tensor components of a polygonal region on the unit sphere, as well as an estimation of the associated errors. These
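For instance, the area of such a polygonal plate can be obtained from the spherical excess: the sum of the polygon's interior angles minus (n - 2)π. The sketch below represents vertices as unit vectors; it is an illustrative implementation of that classical formula, not the author's exact method (which also evaluates inertia tensor components and error estimates).

```python
import numpy as np

def spherical_polygon_area(vertices):
    """Area of a polygon on the unit sphere from the spherical excess:
    sum of interior angles minus (n - 2) * pi. `vertices` is a sequence
    of 3-vectors listed in order along the boundary."""
    v = np.array(vertices, dtype=float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    n = len(v)
    angle_sum = 0.0
    for i in range(n):
        a, b, c = v[i - 1], v[i], v[(i + 1) % n]
        ta = a - b * (a @ b)   # direction toward a in the tangent plane at b
        tc = c - b * (c @ b)   # direction toward c in the tangent plane at b
        cosang = ta @ tc / (np.linalg.norm(ta) * np.linalg.norm(tc))
        angle_sum += np.arccos(np.clip(cosang, -1.0, 1.0))
    return angle_sum - (n - 2) * np.pi

# one octant of the sphere: a triangle with three right angles, area pi/2
octant = spherical_polygon_area([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```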

Antonio Schettino

1999-01-01

274

Fast BEM computations with the adaptive multilevel fast multipole method

Since the storage requirements of the BEM are proportional to N², only relatively small problems can be solved on a PC or a workstation. In this paper we present an adaptive multilevel fast multipole method for the solution of electrostatic problems with the BEM. We will show that, in practice, the storage requirements and the computational costs are approximately

André Buchau; Christian J. Huber; Wolfgang Rieger; Wolfgang M. Rucker

2000-01-01

275

Disequilibration for teaching the scientific method in computer science

We present several introductory computer science laboratory assignments designed to reinforce the use of the scientific method. These assignments require students to make predictions, write simulations, perform experiments, collect data and analyze the results. The assignments are specifically designed to place student predictions in conflict with the observed results, thus producing a disequilibration. As a result, students are motivated to

Grant Braught; David W. Reed

2002-01-01

276

Designing and reporting on computational experiments with heuristic methods

This article discusses the design of computational experiments to test heuristic methods and provides reporting guidelines for such experimentation. The goal is to promote thoughtful, well-planned, and extensive testing of heuristics, full disclosure of experimental conditions, and integrity in and reproducibility of the reported results.

Richard S. Barr; Bruce L. Golden; James P. Kelly; Mauricio G. C. Resende

1995-01-01

277

Computer vision methods for DNA microarray spotting and quality

Slide outline: Computer vision methods for DNA microarray spotting and quality control (Javier Cabrera, Rutgers). Topics: DNA microarrays; data processing steps; spotting the raw image; spotted-image microarray quality; the process DNA → mRNA → protein (transcription, translation).

Cabrera, Javier

278

Computational Theory and Methods for Finding Multiple Critical Points

gradient. The objective of this research is to develop computational theory and methods for finding multiple critical points, i.e., solutions to the Euler-Lagrange equation J′(u) = 0. A critical point u is nondegenerate if J″(u) is invertible; otherwise u is degenerate. The first candidates for a critical point are the local extrema, to which the classical critical point

Jianxin Zhou

279

Interactive method for computation of viscous flow with recirculation

NASA Technical Reports Server (NTRS)

An interactive method is proposed for the solution of two-dimensional, laminar flow fields with identifiable regions of recirculation, such as the shear-layer-driven cavity flow. The method treats the flow field as composed of two regions, with an appropriate mathematical model adopted for each region. The shear layer is computed by the compressible boundary layer equations, and the slowly recirculating flow by the incompressible Navier-Stokes equations. The flow field is solved iteratively by matching the local solutions in the two regions. For this purpose a new matching method utilizing an overlap between the two computational regions is developed, and shown to be most satisfactory. Matching of the two velocity components, as well as the change in velocity with respect to depth, is accomplished well using the present approach, and the stagnation points corresponding to separation and reattachment of the dividing streamline are computed as part of the interactive solution. The interactive method is applied to the test problem of a shear-layer-driven cavity. The computational results are used to show the validity and applicability of the present approach.

Brandeis, J.; Rom, J.

1981-01-01

280

NSDL National Science Digital Library

The Nitrogen and Phosphorus Knowledge Web page is offered by Iowa State University Extension and the College of Agriculture. The publications page contains links to various newsletters, articles, publications, power point presentations, links to governmental publications, and more. For example, visitors will find articles written on phosphorus within the Integrated Crop Management Newsletter, power point presentations on Nitrogen Management and Carbon Sequestration, and links to other Iowa State University publications on various subjects such as nutrient management. Other links on the home page of the site contain soil temperature data, research highlights, and other relevant information for those in related fields.


281

Variational-moment method for computing magnetohydrodynamic equilibria

A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduce the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method has since been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.

Lao, L.L.

1983-08-01

282

Computation of Pressurized Gas Bearings Using CE/SE Method

NASA Technical Reports Server (NTRS)

The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for such problems.

Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

2003-01-01

283

This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, theses/dissertations and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name, with a subsort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, theses/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.

Carpenter, D.C.

1998-01-01

284

The spectral-element method, Beowulf computing, and global seismology.

The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables us to simulate broadband seismograms without intrinsic restrictions on the level of heterogeneity or the frequency content. PMID:12459579

Komatitsch, Dimitri; Ritsema, Jeroen; Tromp, Jeroen

2002-11-29

285

Computational methods and experiments in analytic number theory

We cover some useful techniques in computational aspects of analytic number theory, with specific emphasis on ideas relevant to the evaluation of L-functions. These techniques overlap considerably with basic methods from analytic number theory. On the elementary side, summation by parts, Euler-Maclaurin summation, and Möbius inversion play a prominent role. In the slightly less elementary sphere, we find tools from analysis, such as Poisson summation, generating function methods, Cauchy's residue theorem, asymptotic methods, and the fast Fourier transform. We then describe conjectures and experiments that connect number theory and random matrix theory.
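As a small worked example of one of these elementary tools, Euler-Maclaurin summation turns the slowly convergent tail of Σ 1/n² into a short asymptotic expansion, a standard trick when evaluating ζ- and L-function-type series numerically:

```python
import math

def zeta2_euler_maclaurin(N=10):
    """Approximate zeta(2) = pi^2/6: direct sum of 1/n^2 up to N-1, plus
    the Euler-Maclaurin expansion of the tail sum over n >= N:
        1/N + 1/(2 N^2) + 1/(6 N^3) - 1/(30 N^5) + O(N^-7)."""
    head = sum(1.0 / (n * n) for n in range(1, N))
    tail = 1.0 / N + 1.0 / (2 * N ** 2) + 1.0 / (6 * N ** 3) - 1.0 / (30 * N ** 5)
    return head + tail
```

With only ten direct terms the result already agrees with π²/6 to about nine digits, whereas the raw partial sum to N = 10 is off by roughly 0.1.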

Michael O. Rubinstein

2004-12-08

286

Computing the crystal growth rate by the interface pinning method

NASA Astrophysics Data System (ADS)

An essential parameter for crystal growth is the kinetic coefficient given by the proportionality between supercooling and average growth velocity. Here, we show that this coefficient can be computed in a single equilibrium simulation using the interface pinning method where two-phase configurations are stabilized by adding a spring-like bias field coupling to an order-parameter that discriminates between the two phases. Crystal growth is a Smoluchowski process and the crystal growth rate can, therefore, be computed from the terminal exponential relaxation of the order parameter. The approach is investigated in detail for the Lennard-Jones model. We find that the kinetic coefficient scales as the inverse square-root of temperature along the high temperature part of the melting line. The practical usability of the method is demonstrated by computing the kinetic coefficient of the elements Na and Si from first principles. A generalized version of the method may be used for computing the rates of crystal nucleation or other rare events.
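The rate extraction the authors describe, namely reading off the terminal exponential relaxation of the order parameter, can be illustrated on synthetic data. Everything below (relaxation time, equilibrium value, noise level) is made up for the sketch; in the actual method the trace Q(t) comes from the pinned two-phase simulation.

```python
import numpy as np

rng = np.random.default_rng(2)
tau_true, q_eq, amp = 2.5, 0.4, 0.5   # hypothetical values for the sketch

# synthetic order-parameter trace relaxing toward its equilibrium value
t = np.linspace(0.0, 5.0, 100)
q = q_eq + amp * np.exp(-t / tau_true) + 0.005 * rng.standard_normal(t.size)

# the terminal relaxation is exponential, so a log-linear fit of the
# deviation from equilibrium yields the relaxation time (and hence the rate)
y = np.log(np.clip(q - q_eq, 1e-9, None))
slope, intercept = np.polyfit(t, y, 1)
tau_est = -1.0 / slope
```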

Pedersen, Ulf R.; Hummel, Felix; Dellago, Christoph

2015-01-01

287

NASA Astrophysics Data System (ADS)

Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have developed rapidly as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm - probably the most efficient gradient-based method in terms of speed and success rate. The Annapolis River catchment was selected as the study area due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and the presence of river ice - conditions which make flow forecasting more troublesome.
The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural network training turns out to be superior to that of the other Evolutionary Computation-based algorithms. The Levenberg-Marquardt optimization must be considered the most efficient due to its speed. Its drawback, possible trapping in a poor local optimum, can be overcome by applying a multi-start approach.

Piotrowski, Adam P.; Napiorkowski, Jarosław J.

2011-09-01

288

Illumination invariant method to detect and track left luggage in public areas

NASA Astrophysics Data System (ADS)

Surveillance and its security applications have been critical subjects recently, with various studies placing a high demand on robust computer vision solutions that can work effectively and efficiently in complex environments without human intervention. In this paper, an efficient illumination-invariant template generation and tracking method to identify and track abandoned objects (bags) in public areas is described. Intensity and chromaticity distortion parameters are initially used to generate a binary mask containing all the moving objects in the scene. The binary blobs in the mask are tracked, and those found static through the use of a 'centroid-range' method are segregated. A Laplacian of Gaussian (LoG) filter is then applied to the parts of the current frame and the average background frame, encompassed by the static blobs, to pick up the high-frequency components. The total energy is calculated for both frames, current and background, covered by the detected edge map to ensure that illumination change has not resulted in false segmentation. Finally, the resultant edge map is registered and tracked through the use of a correlation-based matching process. The algorithm has been successfully tested on the i-LIDS dataset, with results presented in this paper.
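The key property the LoG step relies on is that edge energy survives a uniform change in illumination while raw intensity does not. The sketch below approximates an LoG response with a separable 1-2-1 binomial blur followed by the 5-point Laplacian, all in NumPy; it is a stand-in for the paper's filter, not its implementation.

```python
import numpy as np

def log_energy(patch):
    """Energy of a discrete Laplacian-of-Gaussian-style response: a
    separable 1-2-1 binomial blur (small Gaussian approximation)
    followed by the 5-point Laplacian. Edge structure contributes;
    uniform illumination offsets cancel exactly."""
    p = patch.astype(float)
    p = (p[:-2, :] + 2.0 * p[1:-1, :] + p[2:, :]) / 4.0   # vertical blur
    p = (p[:, :-2] + 2.0 * p[:, 1:-1] + p[:, 2:]) / 4.0   # horizontal blur
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return float(np.sum(lap ** 2))

edge = np.zeros((20, 20)); edge[:, 10:] = 100.0   # patch with a vertical edge
flat = np.full((20, 20), 50.0)                    # structureless patch
```

The edge patch yields a large energy and the flat patch essentially zero; adding a constant brightness offset to the edge patch leaves its energy unchanged, which is what makes the comparison robust to illumination change.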

Hassan, Waqas; Mitra, Bhargav; Chatwin, Chris; Young, Rupert; Birch, Philip

2010-04-01

289

A new method for computing Moore-Penrose inverse matrices

NASA Astrophysics Data System (ADS)

The Moore-Penrose inverse of an arbitrary matrix (including singular and rectangular) has many applications in statistics, prediction theory, control system analysis, curve fitting and numerical analysis. In this paper, an algorithm based on the conjugate Gram-Schmidt process and the Moore-Penrose inverse of partitioned matrices is proposed for computing the pseudoinverse of an m×n real matrix A with m ≥ n and rank r ≤ n. Numerical experiments show that the resulting pseudoinverse matrix is reasonably accurate and its computation time is significantly less than that of pseudoinverses obtained by the other methods for large sparse matrices.
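The four Penrose conditions characterize A⁺ uniquely and so make a convenient correctness check for any pseudoinverse algorithm. As a sketch (not the paper's conjugate Gram-Schmidt scheme, which also handles rank-deficient matrices), the full-column-rank case reduces to the normal equations:

```python
import numpy as np

def pinv_full_rank(A):
    """A+ = (A^T A)^{-1} A^T for a real m x n matrix with m >= n and
    full column rank. Rank-deficient inputs are not handled here."""
    return np.linalg.solve(A.T @ A, A.T)

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))   # generic, hence full column rank
Ap = pinv_full_rank(A)
```

A matrix X equals A⁺ exactly when AXA = A, XAX = X, and both AX and XA are symmetric, which is what the checks below verify.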

Toutounian, F.; Ataei, A.

2009-06-01

290

Digital data storage systems, computers, and data verification methods

Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
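The claimed verification scheme reduces to hashing an ordered snapshot of the monitored portion at two moments in time and comparing the digests. A minimal sketch follows; SHA-256 and the byte-level row encoding are illustrative choices, not fixed by the source.

```python
import hashlib

def snapshot_hash(rows):
    """SHA-256 digest over an ordered snapshot of the monitored portion
    of a database (rows serialized as bytes). Any later change to that
    portion yields a different digest."""
    h = hashlib.sha256()
    for row in rows:
        h.update(row)
        h.update(b"\x00")  # separator keeps row boundaries unambiguous
    return h.hexdigest()

rows_t0 = [b"alice,100", b"bob,250"]   # snapshot at the initial moment
rows_t1 = [b"alice,100", b"bob,251"]   # snapshot after one field changed
```

Comparing `snapshot_hash(rows_t0)` with `snapshot_hash(rows_t1)` detects the modification without storing or re-reading the full first snapshot.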

Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

2005-12-27

291

The ensemble switch method for computing interfacial tensions.

We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The cases of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension. PMID:25877563

Schmitz, Fabian; Virnau, Peter

2015-04-14

292

29 CFR 779.266 - Methods of computing annual volume of sales or business.

Code of Federal Regulations, 2012 CFR

...2012-07-01 false Methods of computing annual volume of sales or business. 779.266 Section...Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No...

2012-07-01

293

29 CFR 779.266 - Methods of computing annual volume of sales or business.

Code of Federal Regulations, 2013 CFR

...2013-07-01 false Methods of computing annual volume of sales or business. 779.266 Section...Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No...

2013-07-01

294

29 CFR 779.342 - Methods of computing annual volume of sales.

Code of Federal Regulations, 2012 CFR

...false Methods of computing annual volume of sales. 779.342 Section 779...Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...

2012-07-01

295

29 CFR 779.342 - Methods of computing annual volume of sales.

Code of Federal Regulations, 2013 CFR

...false Methods of computing annual volume of sales. 779.342 Section 779...Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...

2013-07-01

296

29 CFR 779.266 - Methods of computing annual volume of sales or business.

Code of Federal Regulations, 2014 CFR

...2014-07-01 false Methods of computing annual volume of sales or business. 779.266 Section...Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business. (a) No...

2014-07-01

297

29 CFR 779.342 - Methods of computing annual volume of sales.

Code of Federal Regulations, 2014 CFR

...false Methods of computing annual volume of sales. 779.342 Section 779...Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...

2014-07-01

298

29 CFR 779.342 - Methods of computing annual volume of sales.

Code of Federal Regulations, 2010 CFR

...2010-07-01 false Methods of computing annual volume of sales. 779...Retail or Service Establishments Computing Annual Dollar Volume and Combination...Exemptions § 779.342 Methods of computing annual volume of sales....

2010-07-01

299

29 CFR 779.266 - Methods of computing annual volume of sales or business.

Code of Federal Regulations, 2011 CFR

... 2011-07-01 false Methods of computing annual volume of sales or business...the Act May Apply; Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business....

2011-07-01

300

29 CFR 779.266 - Methods of computing annual volume of sales or business.

Code of Federal Regulations, 2010 CFR

... 2010-07-01 false Methods of computing annual volume of sales or business...the Act May Apply; Enterprise Coverage Computing the Annual Volume § 779.266 Methods of computing annual volume of sales or business....

2010-07-01

301

29 CFR 779.342 - Methods of computing annual volume of sales.

Code of Federal Regulations, 2011 CFR

...2011-07-01 false Methods of computing annual volume of sales. 779...Retail or Service Establishments Computing Annual Dollar Volume and Combination...Exemptions § 779.342 Methods of computing annual volume of sales....

2011-07-01

302

Benchmarking Gas Path Diagnostic Methods: A Public Approach

NASA Technical Reports Server (NTRS)

Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

2008-01-01

303

Slide outline: Research Support in Hungary; machine scheduling; LED public lighting; microsimulation in public; Industrial Innovation Problems (2011).

BalÃ¡zs, BÃ¡nhelyi

304

Applications of meshless methods for damage computations with finite strains

NASA Astrophysics Data System (ADS)

Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied in the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general-purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified against experimental data. Computational results reveal that damage which initiates in the interior of specimens extends to the exterior and causes fracture of the specimens; the damage is a fast process relative to the whole tension test. The EFG method provides a more stable and robust numerical solution compared with the FEM analysis.

Pan, Xiaofei; Yuan, Huang

2009-06-01

305

Sensitivity of solutions computed through the Asymptotic Numerical Method

NASA Astrophysics Data System (ADS)

The Asymptotic Numerical Method (ANM) allows one to compute solution branches of sufficiently smooth non-linear PDE problems using truncated Taylor expansions. The Diamant approach of the ANM has been proposed for hiding definitively the differentiation aspects to the user. In this Note, this significant improvement in terms of genericity is exploited to compute the sensitivity of ANM solutions with respect to modelling parameters. The differentiation in the parameters is discussed at both the equation and code level to highlight the Automatic Differentiation (AD) purposes. A numerical example proves the interest of such techniques for a generic and efficient implementation of sensitivity computations. To cite this article: I. Charpentier, C. R. Mecanique 336 (2008).

Charpentier, Isabelle

2008-10-01

306

Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

2000-04-01

307

Computational Property of Hybrid Methods with PSO and DE

NASA Astrophysics Data System (ADS)

In this paper, we present a new type of hybrid method for global optimization combining Particle Swarm Optimization (PSO) and Differential Evolution (DE), both of which have recently attracted interest as heuristic global optimization methods. Concretely, the “p-best solutions” that serve as the targets of PSO's particles are actuated by DE's evolutionary mechanism in order to promote PSO's global searching ability. The presented hybrid method works effectively because PSO acts as a local optimizer while DE plays the role of a global optimizer. To evaluate the performance of the hybridization, our method is applied to some benchmarks and compared with PSO and DE separately. Through computer simulations, it is confirmed that the proposed hybrid method performs considerably better than the separate algorithms.
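The division of labor described above can be sketched in a toy hybrid: standard PSO velocity updates do the local refinement, while the personal-best ("p-best") population is additionally evolved by DE/rand/1/bin each iteration for global exploration. All parameter values below are common textbook choices, not the authors' settings.

```python
import numpy as np

def hybrid_pso_de(f, dim=5, n=30, iters=300, seed=0):
    """Toy PSO/DE hybrid: DE evolves the p-best population (global
    search), PSO refines particle positions around it (local search).
    Illustrative only; not the authors' exact algorithm."""
    rng = np.random.default_rng(seed)
    w, c1, c2, F, CR = 0.7, 1.5, 1.5, 0.5, 0.9
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in pbest])
    for _ in range(iters):
        # DE step on the p-best population (global exploration)
        for i in range(n):
            a, b, c = pbest[rng.choice(n, 3, replace=False)]
            trial = np.where(rng.random(dim) < CR, a + F * (b - c), pbest[i])
            fv = f(trial)
            if fv < pval[i]:            # greedy DE selection
                pbest[i], pval[i] = trial, fv
        g = pbest[pval.argmin()]        # current global best
        # PSO step (local refinement toward p-bests and global best)
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
    i = pval.argmin()
    return pbest[i], pval[i]

# usage: minimize the 5-D sphere function
g_best, f_best = hybrid_pso_de(lambda z: float(np.sum(z * z)))
```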

Muranaka, Kenichi; Aiyoshi, Eitaro

308

Experiences using DAKOTA stochastic expansion methods in computational simulations.

Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels that may be needed for the methodologies to achieve convergence.

Templeton, Jeremy Alan; Ruthruff, Joseph R.

2012-01-01

309

Library Orientation Methods, Mental Maps, and Public Services Planning.

ERIC Educational Resources Information Center

Two library orientation methods, a self-guided cassette walking tour and a slide-tape program, were administered to 202 freshmen students to determine if moving through the library increased students' ability to develop a mental map of the library. An effort was made to ensure that the two orientation programs were equivalent. Results from the 148…

Ridgeway, Trish

310

ELSEVIER Comput. Methods Appl. Mech. Engrg. 167 (1998) 261-273

method is the use of a Dirichlet-to-Neumann (DtN) map to replace the infinite layer [5]. Local DtN maps accurate way of modeling the effect of the infinite layer would be the use of a DtN map which is global

Guddati, Murthy N.

311

An Overview of a Decade of Journal Publications about Culture and Human-Computer Interaction (HCI)

NASA Astrophysics Data System (ADS)

In this paper, we analyze the concept of human-computer interaction in cultural and national contexts. Building and extending upon the framework for understanding research in usability and culture by Honold [3], we give an overview of publications on culture and HCI between 1998 and 2008, with a narrow focus on high-level journal publications only. The purpose is to review current practice in how cultural HCI issues are studied, and to analyse problems with the measures and interpretation of these studies. We find that Hofstede's cultural dimensions have been the dominating model of culture, participants have been picked because they could speak English, and most studies have been large-scale quantitative studies. In order to balance this situation, we recommend that more researchers and practitioners do qualitative, empirical studies.

Clemmensen, Torkil; Roese, Kerstin

312

The paper examines how public relations professionals are dealing with the potential for sabotage, rumors, and misinformation spread via the Internet by computer hackers. The author examines the public relations profession from a systems theory perspective and attempts to outline skills necessary for organizational survival in the new information age. Original data was gathered from a sample population of 41

Joseph Basso

1997-01-01

313

A hierarchical method for molecular docking using cloud computing.

Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small-molecule databases using cloud computing is proposed. This method is called the hierarchical method for molecular docking and can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on their different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs. PMID:23017886

Kang, Ling; Guo, Quan; Wang, Xicheng

2012-11-01

314

Computational biology in the cloud: methods and new insights from computing at scale.

The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available. PMID:23424149

Kasson, Peter M

2013-01-01

315

Implementation of an ADI method on parallel computers

NASA Technical Reports Server (NTRS)

The implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the FLEX/32 and CRAY/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
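
The tridiagonal solves on the FLEX/32 and CRAY/2 use standard Gaussian elimination, commonly written as the Thomas algorithm. A minimal sketch (not the paper's code) for a single system:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by forward elimination and back
    substitution: a = sub-diagonal, b = main diagonal, c = super-diagonal,
    d = right-hand side (a[0] and c[-1] are unused)."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A small diffusion-like system: -x[i-1] + 4*x[i] - x[i+1] = 5.
x = thomas([0, -1, -1, -1], [4, 4, 4, 4], [-1, -1, -1, 0], [5, 5, 5, 5])
```

This recurrence is inherently serial, which is why the paper switches to cyclic elimination on the massively parallel MPP, where each reduction step can be applied to all equations at once.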

Fatoohi, Raad A.; Grosch, Chester E.

1987-01-01

316

Exploring the antiviral activity of juglone by computational method.

Nature has long been the best source of medicines, and many plant extracts have been used as drugs. Juglone occurs in all parts of plants of the Juglandaceae family and is found extensively in black walnut. It possesses antifungal, antimalarial, antibacterial and antiviral properties, besides exhibiting cytotoxic effects, and has gained interest among researchers for its anticancer properties. This article elucidates the antiviral activity of juglone by a computational method. PMID:24846583

Vardhini, Shailima R D

2014-12-01

317

Plane Couette Flow Computations by TRMC and MFS Methods

NASA Astrophysics Data System (ADS)

A new class of schemes of the DSMC type for computing near-continuum flows has been recently suggested: the time-relaxed Monte Carlo (TRMC) methods. An important step preceding the wide use of these schemes is their validation on classical homogeneous and one-dimensional problems of gas dynamics. For this purpose, a plane Couette flow is considered in the present paper. A comparison of TRMC results with the data obtained by time-proven schemes of the DSMC method (here we used the Majorant Frequency Scheme) over a wide range of Knudsen numbers and for different values of wall velocity is presented.

Russo, G.; Pareschi, L.; Trazzi, S.; Shevyrin, A. A.; Bondar, Ye. A.; Ivanov, M. S.

2005-05-01

318

Integration of viscous effects into inviscid computational methods

NASA Technical Reports Server (NTRS)

A variety of practical fluid dynamic problems related to the low-speed, high Reynolds number flow over aircraft and ground vehicles fall into a category where some simplified mathematical models become applicable. This provides the fluid dynamicist with a more economical computational tool, compared to the alternative solution of the Navier-Stokes equations. The objective was to provide a brief survey of some of the viscous boundary layer solution methods and to propose a method for coupling between the inviscid outer flow and the viscous boundary layer solutions. Results of this survey and details of the viscous/inviscid flow coupling efforts are presented.

Katz, Joseph

1990-01-01

319

Computer method for identification of boiler transfer functions

NASA Technical Reports Server (NTRS)

An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single-tube forced-flow boiler with inserts are given.
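
A much-simplified illustration of the fitting idea: given measured frequency response points, choose the parameters of a candidate transfer function that minimize the squared model-data mismatch. The first-order model G(jw) = K/(1 + jw*tau), the synthetic data, and the brute-force grid search below are stand-ins for the paper's penalized objective and nonlinear minimization technique, not the original algorithm.

```python
# Synthetic "measured" frequency response generated from K = 2.0, tau = 0.5.
K_true, tau_true = 2.0, 0.5
freqs = [0.1 * k for k in range(1, 50)]
measured = [K_true / (1 + 1j * w * tau_true) for w in freqs]

def sse(K, tau):
    """Sum of squared complex errors between model and measured response."""
    return sum(abs(K / (1 + 1j * w * tau) - m) ** 2
               for w, m in zip(freqs, measured))

# Grid search over (K, tau); a real code would use a nonlinear minimizer.
best = min(((sse(K, tau), K, tau)
            for K in [0.5 + 0.05 * i for i in range(60)]
            for tau in [0.1 + 0.02 * j for j in range(50)]),
           key=lambda t: t[0])
_, K_fit, tau_fit = best
```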

Miles, J. H.

1971-01-01

320

A method of ultrasonic 3-D computed velocimetry.

A method of ultrasonic three-dimensional (3-D) vector velocimetry, derived by extending the previously proposed two-dimensional (2-D) vector velocimetry, is presented. In the proposed method, the three vector components of velocity to be measured are defined as the velocity in the beam axial direction and angular velocities in two transverse directions. To derive the three components of velocity, signals detected by a 2-D array transducer are first 2-D Fourier transformed in the spatial domain parallel to the 2-D array transducer and then one-dimensional (1-D) Fourier transformed in the time domain. The advantage of the proposed method is that it uses linear signal processing, so it can simultaneously measure individual velocities of multiple scatterers. Computer simulations clearly demonstrate the effectiveness of the proposed method. PMID:9282474
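
The temporal (slow-time) Fourier step can be sketched in one dimension: a scatterer moving along the beam axis shifts the echo phase from pulse to pulse, and a Fourier transform over pulses yields the Doppler frequency and hence the axial velocity. The parameter values and the reduction to a single scatterer are illustrative assumptions, not the paper's full 2-D-array processing.

```python
import numpy as np

c = 1540.0        # assumed speed of sound in tissue, m/s
f0 = 5e6          # assumed carrier frequency, Hz
prf = 4e3         # assumed pulse repetition frequency, Hz
v_true = 0.3      # axial scatterer velocity to recover, m/s

n = np.arange(256)
fd_true = 2 * v_true * f0 / c               # Doppler shift, Hz
echoes = np.exp(2j * np.pi * fd_true * n / prf)   # slow-time echo phase

spectrum = np.fft.fft(echoes)
freqs = np.fft.fftfreq(n.size, d=1 / prf)
fd_est = freqs[np.argmax(np.abs(spectrum))]  # peak bin gives Doppler freq
v_est = fd_est * c / (2 * f0)                # invert the Doppler relation
```

The estimate is quantized to the FFT bin spacing (prf/256 here); the linearity of the transform is what lets several scatterers appear as separate spectral peaks and be measured simultaneously.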

Ogura, Y; Katakura, K; Okujima, M

1997-09-01

321

Publicity and public relations

NASA Technical Reports Server (NTRS)

This paper addresses approaches to using publicity and public relations to meet the goals of the NASA Space Grant College. Methods universities and colleges can use to publicize space activities are presented.

Fosha, Charles E.

1990-01-01

322

Computation of multi-material interactions using point method

Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are applied to problems of solid deformation, the state variables, such as stress and damage, need to be advected, causing significant numerical diffusion error. When Lagrangian methods are applied to problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error, and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method, the Eulerian mesh stays fixed and the Lagrangian particles move through it as the material deforms. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun. v. 87, pp. 236) provides a mathematical foundation for an improved version of the PIC method, the material point method (MPM). The unique advantages of the MPM method have led to many attempts to apply it to problems involving interaction of different materials, such as fluid-structure interactions. These are multiphase flow or multimaterial deformation problems, in which pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the pressure is calculated with the same scheme used in Eulerian methods for multiphase flows. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to higher order of accuracy in the sense of weak solutions of the continuity equations.
Numerical examples are given to demonstrate the new scheme.
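
One building block shared by PIC and MPM can be shown in one dimension: mapping particle mass and momentum onto a fixed Eulerian grid with linear (hat) shape functions. This sketch illustrates the particle-grid transfer only, under assumed particle data; the paper's higher-order continuity-preserving pressure scheme is not reproduced here.

```python
import numpy as np

dx = 1.0
nodes = np.arange(6, dtype=float)          # fixed Eulerian grid nodes 0..5
xp = np.array([0.3, 1.7, 2.2, 3.9])        # Lagrangian particle positions
mp = np.array([1.0, 2.0, 1.5, 0.5])        # particle masses
vp = np.array([0.1, -0.2, 0.3, 0.0])       # particle velocities

m_grid = np.zeros(nodes.size)              # nodal mass
mv_grid = np.zeros(nodes.size)             # nodal momentum
for x, m, v in zip(xp, mp, vp):
    i = int(x // dx)                       # left node of the particle's cell
    w_right = (x - nodes[i]) / dx          # linear shape-function weight
    m_grid[i] += m * (1 - w_right)
    m_grid[i + 1] += m * w_right
    mv_grid[i] += m * v * (1 - w_right)
    mv_grid[i + 1] += m * v * w_right
```

Because the hat functions form a partition of unity, the transfer conserves total mass and momentum exactly; the particles carry the history-dependent state, so nothing needs to be advected on the mesh.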

Zhang, Duan Z [Los Alamos National Laboratory; Ma, Xia [Los Alamos National Laboratory; Giguere, Paul T [Los Alamos National Laboratory

2009-01-01

323

A method to encrypt information with DNA computing

In this paper we investigate how biomolecular automata encrypt information. The biomolecular automaton based on DNA computing is a kind of nano-computer. This DNA computing model differs from the original DNA computing in that it can realize the basic functions of an automaton. Having the advantages of both DNA computing and electronic computing, the biomolecular automaton can improve the practicability of DNA computers and

Zheng Zhang; Xiaolong Shi; Jie Liu

2008-01-01

324

Computers, Ethics, and Social Responsibility. Related courses: MED250A & B: Medical Ethics 1 & 2, David Magnus; Research Ethics Resources at Stanford University; CS181: Computers, Ethics, and Public Policy, Margaret Johnson; LAW288: Governance and Ethics: Anti-corruption Law, Compliance

325

Evolutionary computational methods to predict oral bioavailability QSPRs.

This review discusses evolutionary and adaptive methods for predicting oral bioavailability (OB) from chemical structure. Genetic Programming (GP), a specific form of evolutionary computing, is compared with some other advanced computational methods for OB prediction. The results show that classifying drugs into 'high' and 'low' OB classes on the basis of their structure alone is a solvable problem, and initial models are already producing output that would be useful for pharmaceutical research. The results also suggest that quantitative prediction of OB will be tractable. Critical aspects of the solution will involve the use of techniques that can: (i) handle problems with a very large number of variables (high dimensionality); (ii) cope with 'noisy' data; and (iii) implement binary choices to sub-classify molecules whose behavior is qualitatively different. Detailed quantitative predictions will emerge from more refined models that are hybrids derived from mechanistic models of the biology of oral absorption and the power of advanced computing techniques to predict the behavior of the components of those models in silico. PMID:11865672

Bains, William; Gilbert, Richard; Sviridenko, Lilya; Gascon, Jose-Miguel; Scoffin, Robert; Birchall, Kris; Harvey, Inman; Caldwell, John

2002-01-01

326

Review methods for image segmentation from computed tomography images

NASA Astrophysics Data System (ADS)

Image segmentation is a challenging process in which accuracy, automation and robustness are difficult to achieve, especially for medical images. Many segmentation methods can be applied to medical images, but not all of them are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring and visual noise. The details of each method, its strengths and the problems it incurs are defined and explained. Knowing the suitable segmentation method is necessary to obtain accurate segmentation. This paper can serve as a guide for researchers choosing a suitable method for segmenting images from CT scans.

Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

2014-12-01

327

Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

Understanding electromagnetic phenomena is key to many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels in disease, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high-order numerical methods to simulate evanescent electromagnetic waves occurring in plasmonic solar cells and biological ion channels, where local field enhancement within random media in the former and long-range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high-order numerical methods for solving the Maxwell equations, including high-order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, a divergence-free finite element basis for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmonic solar cells. To treat long-range electrostatic interactions in ion channels, we have developed an image-charge-based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up molecular dynamics simulations of transport in biological ion channels.

Cai, Wei

2014-05-15

328

An analytical method for computing atomic contact areas in biomolecules.

We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc. PMID:22965816
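
The per-neighbor contribution in such contact-area computations reduces to spherical cap geometry: the plane where spheres i and j intersect cuts a cap of area 2*pi*r_i*h out of the surface of i. The sketch below shows only this textbook step; BallContact additionally partitions overlapping caps with a spherical Laguerre Voronoi diagram, which is not reproduced here.

```python
import math

def cap_area(r_i, r_j, d):
    """Area of the cap cut from sphere i by intersecting sphere j, for
    center distance d (returns 0 if the spheres do not intersect; assumes
    neither sphere contains the other)."""
    if d >= r_i + r_j:
        return 0.0
    x = (d * d + r_i * r_i - r_j * r_j) / (2 * d)  # distance to cut plane
    h = r_i - x                                    # cap height on sphere i
    return 2 * math.pi * r_i * h

# Two unit spheres at distance 1: the cut plane sits at x = 0.5,
# so the cap height is 0.5 and the cap area is pi.
area = cap_area(1.0, 1.0, 1.0)
```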

Mach, Paul; Koehl, Patrice

2013-01-15

329

A novel computational method for comparing vibrational circular dichroism spectra

NASA Astrophysics Data System (ADS)

A novel method, SimIR/VCD, for comparing experimental and calculated VCD (vibrational circular dichroism) spectra is developed, based on newly defined spectra similarities. With computationally optimized frequency scaling and shifting, a calculated spectrum can be easily identified to match an observed spectrum, which leads to an unbiased molecular chirality assignment. The time-consuming manual band-fitting work is greatly reduced. With (1S)-(-)-α-pinene as an example, it demonstrates that the calculated VCD similarity is correlated with VCD spectra matching quality and has enough sensitivity to identify variations in the spectra. The study also compares spectra calculated using different DFT methods and basis sets. Using this method should facilitate the spectra matching, reduce human error and provide a confidence measure in the chiral assignment using VCD spectroscopy.
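
A toy stand-in for spectral matching with an optimized frequency shift: the similarity below is the cosine overlap of two discretized spectra, maximized over integer grid shifts. SimIR/VCD's actual similarity definition and its continuous frequency scaling and shifting are not reproduced here; the Lorentzian band shapes and positions are assumed for illustration.

```python
import numpy as np

grid = np.linspace(900, 1800, 901)            # wavenumber grid, 1 cm^-1 step

def lorentz(center, sign, gamma=8.0):
    """A single signed Lorentzian VCD band."""
    return sign * gamma**2 / ((grid - center) ** 2 + gamma**2)

# "Observed" spectrum, and a "calculated" one uniformly shifted by 15 cm^-1.
observed = lorentz(1200, +1) + lorentz(1450, -1)
calculated = lorentz(1215, +1) + lorentz(1465, -1)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Scan integer shifts and keep the one with the best overlap.
shifts = list(range(-40, 41))
scores = [cosine(observed, np.roll(calculated, -s)) for s in shifts]
best_shift = shifts[int(np.argmax(scores))]
```

Recovering the 15 cm^-1 shift automatically is the point: the band assignment no longer depends on manual band fitting.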

Shen, Jian; Zhu, Chengyue; Reiling, Stephan; Vaz, Roy

2010-08-01

330

A modified Henyey method for computing radiative transfer hydrodynamics

NASA Technical Reports Server (NTRS)

The implicit hydrodynamic code of Kutter and Sparks (1972), which is limited to optically thick regions and employs the diffusion approximation for radiative transfer, is modified to include radiative transfer effects in the optically thin regions of a model star. A modified Henyey method is used to include the solution of the radiative transfer equation in this implicit code, and the convergence properties of this method are proven. A comparison is made between two hydrodynamic models of a classical Cepheid with a 12-day period, one of which was computed with the diffusion approximation and the other with the modified Henyey method. It is found that the two models produce nearly identical light and velocity curves, but differ in the fact that the former never has temperature inversions in the atmosphere while the latter does when sufficiently strong shocks are present.

Karp, A. H.

1975-01-01

331

Ray casting method for integral calculation of computational flow imaging

NASA Astrophysics Data System (ADS)

Computational flow imaging (CFI) uses theoretical predictions of the interaction and transmission of optical waves through a theoretical flowfield to generate digital pictures that simulate real observations, but the phase data produced by interferometric, Schlieren or shadowgraph techniques result from the integration of the refractive index or refractive index gradient along the line of sight. The goal of this research is to develop a fast, efficient ray casting method for the integral calculation of computational flow imaging. We use a preprocessing procedure in which the whole flow region is divided into new regular blocks, with the voxels distributed by x and y coordinates, to increase processing speed. The algorithms are general enough to handle non-convex, curvilinear or irregular grids and cells, and grids constructed from multiple grids of CFD solutions.

Wu, Yingchuan; Le, Jialing; He, Anzhi

2003-04-01

332

COMSAC: Computational Methods for Stability and Control. Part 2

NASA Technical Reports Server (NTRS)

The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

2004-01-01

333

Computing in Education There is a wide variety of computer uses in education today. The significant uses for elementary and secondary schools can be categorized in roughly three ways: (1) Administrative Data Processing, (2) Instructional Support and (3) Instructional Computing. Instructional computing can be delivered by many methods, but the main vehicle for supplying it, and the one having the

John C Storlie; U. W.-La Crosse

1975-01-01

334

[A new method of measuring temporal resolution for computed tomography].

In this study, we proposed a new method of measuring temporal resolution in computed tomography (CT) using an impulse signal in the time domain. We employed a metal ball with a diameter of 11 mm as the source of the impulse signal; during scanning, it was shot through the slice plane at very high speed, perpendicular to the plane. A 4-slice multi-detector-row CT (MDCT) system was employed to evaluate the new method, and images for region-of-interest (ROI) measurement were reconstructed with a z-increment corresponding to a very short time (≤ 0.03 s). Temporal sensitivity profiles (TSPs) for various helical beam-pitches were obtained by plotting the averaged CT values within the ROIs against the temporal axis. The accuracy of the method was examined by comparing the measured TSPs with the theoretical TSPs for the respective helical beam-pitches. As a result, the theoretical and measured TSPs agreed closely at all beam-pitches. Since the measured TSPs showed sharp profiles faithful to the theoretical ones, the new method was proved to have sufficient inherent temporal resolution, indicating that it would be an effective method for evaluating temporal resolution in CT. PMID:18840955
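
A measured temporal sensitivity profile is typically reduced to a single temporal-resolution figure such as its full width at half maximum (FWHM). The sketch below computes the FWHM by linear interpolation on a synthetic Gaussian profile; the sampling and width are assumed for illustration, not taken from the paper's measurements.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 201)            # time axis, s (10 ms steps)
sigma = 0.15
tsp = np.exp(-0.5 * (t / sigma) ** 2)      # synthetic TSP

half = tsp.max() / 2
above = np.where(tsp >= half)[0]           # samples at or above half max
i0, i1 = above[0], above[-1]

def crossing(i_lo, i_hi):
    """Linearly interpolate the time where the profile crosses half max
    between two adjacent samples."""
    t0, t1, y0, y1 = t[i_lo], t[i_hi], tsp[i_lo], tsp[i_hi]
    return t0 + (half - y0) * (t1 - t0) / (y1 - y0)

fwhm = crossing(i1, i1 + 1) - crossing(i0, i0 - 1)
# For a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma, about 2.355*sigma.
```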

Ichikawa, Katsuhiro; Takada, Tadanori; Hara, Takanori; Ohashi, Kazuya; Niwa, Shinji

2008-09-20

335

Proliferation of degrees-of-freedom has plagued discontinuous Galerkin methodology from its inception over 30 years ago. This paper develops a new computational formulation that combines the advantages of discontinuous Galerkin methods with the data structure of their continuous Galerkin counterparts. The new method uses local, element-wise problems to project a continuous finite element space into a given discontinuous space, and then applies a discontinuous Galerkin formulation. The projection leads to parameterization of the discontinuous degrees-of-freedom by their continuous counterparts and has a variational multiscale interpretation. This significantly reduces the computational burden and, at the same time, little or no degradation of the solution occurs. In fact, the new method produces improved solutions compared with the traditional discontinuous Galerkin method in some situations.

Buffa, Annalisa (IMATI - Consiglio Nazionale delle Ricerche, Pavia, Italy); Bochev, Pavel Blagoveston; Scovazzi, Guglielmo; Hughes, Thomas J. R. (The University of Texas at Austin)

2005-03-01

336

Proliferation of degrees-of-freedom has plagued discontinuous Galerkin methodology from its inception over 30 years ago. This paper develops a new computational formulation that combines the advantages of discontinuous Galerkin methods with the data structure of their continuous Galerkin counterparts. The new method uses local, element-wise problems to project a continuous finite element space into a given discontinuous space, and then applies a discontinuous Galerkin formulation. The projection leads to parameterization of the discontinuous degrees-of-freedom by their continuous counterparts and has a variational multiscale interpretation. This significantly reduces the computational burden and, at the same time, little or no degradation of the solution occurs. In fact, the new method produces improved solutions compared with the traditional discontinuous Galerkin method in some situations.

Sangalli, Giancarlo (University of Pavia, Italy); Buffa, Annalisa (University of Pavia, Italy); Bochev, Pavel Blagoveston; Scovazzi, Guglielmo; Hughes, Thomas J. R. (The University of Texas at Austin)

2005-07-01

337

A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

NASA Technical Reports Server (NTRS)

A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself from other possible re-slicing software solutions through complete automation and advanced processing and analysis capabilities.

Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

2011-01-01

338

Proliferation of degrees-of-freedom has plagued discontinuous Galerkin methodology from its inception over 30 years ago. This paper develops a new computational formulation that combines the advantages of discontinuous Galerkin methods with the data structure of their continuous Galerkin counterparts. The new method uses local, element-wise problems to project a continuous finite element space into a given discontinuous space, and then applies

Thomas J. R. Hughes; Guglielmo Scovazzi; Pavel B. Bochev; Annalisa Buffa

2006-01-01

339

Federal Register 2010, 2011, 2012, 2013, 2014

...Privacy Act of 1974, as Amended by Public Law 100-503; Notice of a Computer Matching...Privacy Act of 1974, as amended by Public Law 100-503. SUMMARY: In compliance with the Privacy Act of 1974, as amended by Public Law 100-503, the Computer Matching and...

2012-04-13

340

Fan Flutter Computations Using the Harmonic Balance Method

NASA Technical Reports Server (NTRS)

An experimental forward-swept fan encountered flutter at part-speed conditions during wind tunnel testing. A new propulsion aeroelasticity code, based on a computational fluid dynamics (CFD) approach, was used to model the aeroelastic behavior of this fan. This three-dimensional code models the unsteady flowfield due to blade vibrations using a harmonic balance method to solve the Navier-Stokes equations. This paper describes the flutter calculations and compares the results to experimental measurements and previous results from a time-accurate propulsion aeroelasticity code.
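
The core idea of harmonic balance can be illustrated on a problem far simpler than the Navier-Stokes equations treated by the paper: for the Duffing oscillator x'' + x + eps*x**3 = F*cos(w*t), substituting the single-harmonic ansatz x = A*cos(w*t) and matching the cos(w*t) terms turns the differential equation into the algebraic residual below, solved here by bisection. Parameter values are assumed for illustration.

```python
eps, F, w = 0.1, 1.0, 1.5

def residual(A):
    # Coefficient of cos(w*t) after substituting x = A*cos(w*t):
    # uses cos^3 = (3/4)cos + (1/4)cos(3wt), dropping the third harmonic.
    return (1 - w * w) * A + 0.75 * eps * A ** 3 - F

# Bisection on a bracket where the residual changes sign.
lo, hi = -2.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
A = 0.5 * (lo + hi)
```

In the CFD setting the unknown is the set of Fourier coefficients of the flow at the blade vibration frequency and its harmonics, but the structure is the same: a periodic ansatz converts a time-accurate problem into a coupled steady one.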

Bakhle, Milind A.; Thomas, Jeffrey P.; Reddy, T.S.R.

2009-01-01

341

Assessment of nonequilibrium radiation computation methods for hypersonic flows

NASA Technical Reports Server (NTRS)

The present understanding of shock-layer radiation in the low density regime, as appropriate to hypersonic vehicles, is surveyed. Based on the relative importance of electron excitation and radiation transport, the hypersonic flows are divided into three groups: weakly ionized, moderately ionized, and highly ionized flows. In the light of this division, the existing laboratory and flight data are scrutinized. Finally, an assessment of the nonequilibrium radiation computation methods for the three regimes in hypersonic flows is presented. The assessment is conducted by comparing experimental data against the values predicted by the physical model.

Sharma, Surendra

1993-01-01

342

Method and apparatus for managing transactions with connected computers

The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.

Goldsmith, Steven Y. (Albuquerque, NM); Phillips, Laurence R. (Corrales, NM); Spires, Shannon V. (Albuquerque, NM)

2003-01-01

343

Computation of Sound Propagation by Boundary Element Method

NASA Technical Reports Server (NTRS)

This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable: treating the gradients as additional unknowns greatly increases the size of the matrix equation, while approximating them by numerical differentiation introduces numerical error into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple to implement numerically. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation using sub-triangles, so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flow, and the sound propagation over a hump of irregular shape in uniform flow.
The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA); all of these are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.

Guo, Yueping

2005-01-01

344

ERIC Educational Resources Information Center

Graphical methods for displaying data, as well as available computer software and hardware, are reviewed. The authors have emphasized the types of graphs which are most relevant to the needs of the National Center for Education Statistics (NCES) and its readers. The following types of graphs are described: tabulations, stem-and-leaf displays,…

Bessey, Barbara L.; And Others

345

An experiment in hurricane track prediction using parallel computing methods

NASA Technical Reports Server (NTRS)

The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.
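
The elliptic solves at the heart of this forecast include recovering the streamfunction ψ from the forecasted vorticity ζ, i.e. solving ∇²ψ = ζ. The paper uses accelerated block cyclic reduction, which is not reproduced here; as a sketch of the same recovery step, the doubly periodic case can be solved directly with an FFT-based Poisson solver (an assumed boundary condition chosen for the illustration).

```python
import numpy as np

n = 64
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

psi_true = np.sin(X) * np.cos(2 * Y)        # known test streamfunction
zeta = -5.0 * psi_true                      # its Laplacian: -(1+4)*psi

k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi  # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                              # avoid divide-by-zero at the mean

zeta_hat = np.fft.fft2(zeta)
psi_hat = -zeta_hat / k2                    # invert zeta_hat = -k^2 psi_hat
psi_hat[0, 0] = 0.0                         # fix the arbitrary mean of psi
psi = np.real(np.fft.ifft2(psi_hat))
```

The spectral inversion is direct, like cyclic reduction, and each wavenumber is independent, which is what makes such solvers attractive on parallel machines.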

Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

1994-01-01

346

Applications of Computational Methods for Dynamic Stability and Control Derivatives

NASA Technical Reports Server (NTRS)

Initial steps in the application of a low-order panel method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

Green, Lawrence L.; Spence, Angela M.

2004-01-01

347

Parallel computation of meshless methods for explicit dynamic analysis.

A parallel computational implementation of modern meshless methods is presented for explicit dynamic analysis. The procedures are demonstrated by application of the Reproducing Kernel Particle Method (RKPM). Aspects of a coarse grain parallel paradigm are detailed for a Lagrangian formulation using model partitioning. Integration points are uniquely defined on separate processors and particle definitions are duplicated, as necessary, so that all support particles for each point are defined locally on the corresponding processor. Several partitioning schemes are considered and a reduced graph-based procedure is presented. Partitioning issues are discussed and procedures to accommodate essential boundary conditions in parallel are presented. Explicit MPI message passing statements are used for all communications among partitions on different processors. The effectiveness of the procedure is demonstrated by highly deformable inelastic example problems.

Danielson, K. T.; Hao, S.; Liu, W. K.; Uras, R. A.; Li, S.; Reactor Engineering; Northwestern Univ.; Waterways Experiment Station

2000-03-10

348

Ventricular hemodynamics using cardiac computed tomography and optical flow method.

Ventricular hemodynamics plays an important role in assessing cardiac function in clinical practice. The aim of this study was to determine ventricular hemodynamics based on contrast movement in the left ventricle (LV) between the phases of a cardiac cycle, recorded using electrocardiography (ECG)-gated cardiac computed tomography (CT) and the optical flow method. Cardiac CT data were acquired at 120 kV and 280 mA with a 350 ms gantry rotation, covering one cardiac cycle, on a 640-slice CT scanner with ECG for a selected patient without heart disease. Ventricular hemodynamics (mm/phase) were calculated using the optical flow method based on contrast changes across ECG phases in the anterior-posterior, lateral and superior-inferior directions. Local hemodynamic information of the LV was presented with color coding. The visualization of this functional information made hemodynamic observation easy. PMID:24463391
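Optical flow estimates the displacement field that best explains intensity changes between successive images. As a minimal sketch of the idea (the classic Lucas-Kanade least-squares formulation, not the study's implementation), the function below estimates the motion of one pixel between two grayscale frames:

```python
import numpy as np

def lucas_kanade_flow(frame1, frame2, x, y, win=7):
    """Estimate the (u, v) displacement at pixel (x, y) between two grayscale
    frames by solving the brightness-constancy equations Ix*u + Iy*v = -It
    in least squares over a small window (classic Lucas-Kanade)."""
    Ix = np.gradient(frame1.astype(float), axis=1)   # spatial gradients
    Iy = np.gradient(frame1.astype(float), axis=0)
    It = frame2.astype(float) - frame1.astype(float) # temporal difference
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v) in pixels per frame (here, per cardiac phase step)
```

Applied phase-to-phase over the contrast-filled LV, such per-pixel displacements are what yield the mm/phase hemodynamic maps described in the abstract.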

Lin, Yang-Hsien; Huang, Yung-Hui; Lin, Kang-Ping; Liu, Juhn-Cherng; Huang, Tzung-Chi

2014-01-01

349

A computational design method for transonic turbomachinery cascades

NASA Technical Reports Server (NTRS)

This paper describes a systematic computational procedure for finding the configuration changes necessary to make the flow past turbomachinery cascades, channels and nozzles shock-free at prescribed transonic operating conditions. The method is based on a finite area transonic analysis technique and the fictitious gas approach. This design scheme has two major areas of application. First, it can be used for the design of supercritical cascades, with applications mainly in compressor blade design. Second, it provides subsonic inlet shapes, including sonic surfaces with suitable initial data, for the design of supersonic (accelerated) exits such as nozzles and turbine cascade shapes. This fast, accurate and economical method, with a proven potential for application to three-dimensional flows, is illustrated by some design examples.

Sobieczky, H.; Dulikravich, D. S.

1982-01-01

350

Radiation Transport Computation in Stochastic Media: Method and Application

NASA Astrophysics Data System (ADS)

Stochastic media, characterized by the stochastic distribution of inclusions in a background medium, are typical radiation transport media encountered in natural or engineering systems. In the community of radiation transport computation, there is a constant demand for accurate and efficient methods that can account for the nature of the stochastic distribution. In this dissertation, we focus on methodology development for radiation transport computation applied to neutronic analyses of nuclear reactor designs characterized by the stochastic distribution of particle fuel. Reactor concepts employing a fuel design consisting of a random heterogeneous mixture of fissile material and non-fissile moderator are regularly proposed. Key physical quantities such as core criticality and power distribution, reactivity control design parameters, depletion and fuel burn-up need to be carefully evaluated. In order to meet these practical requirements, we first need to develop accurate and fast computational methods that can effectively account for the stochastic nature of the double-heterogeneity configuration. A Monte Carlo based method called Chord Length Sampling (CLS) is considered to be a promising method for analyzing such TRISO-type fueled reactors. Although the CLS method was proposed more than two decades ago and much research has been conducted to enhance its applicability, further efforts are still needed to address some key research gaps. (1) The factors that give rise to the inaccuracy of the CLS method reported by many researchers have not been thoroughly investigated. The accuracy of the CLS method depends on the optical and geometric properties of the system. In some specific scenarios, considerable inaccuracies have been reported. However, no study has provided a clear interpretation of the causes of the inaccuracy in the reported scenarios.
Furthermore, no correction methods have been proposed or developed to improve the accuracy of the CLS method across all applied scenarios. (2) The previous CLS method deals only with on-the-fly sampling of whole fuel particles in analyzing TRISO-type fueled reactors. Within the fuel particle, which consists of a fuel kernel and a coating, conventional Monte Carlo simulations apply. This strategy may not achieve the highest computational efficiency, since extra simulation time is spent tracking neutrons in the coating region, which has a negligible neutronic effect on overall reactor core performance. This suggests a possible strategy for further increasing computational efficiency: directly sampling fuel kernels on the fly in the CLS simulations. Testing this new strategy requires a new model of the chord length distribution function, which in turn requires new research effort to develop and validate. (3) Previous evaluations and applications of the CLS method have been limited to single-type, single-size fuel particle systems, i.e. only one type of fuel particle with constant size is assumed in the fuel zone, which is the case for typical VHTR designs. In practice, however, two or more types of TRISO fuel particles may be loaded in the same fuel zone for different application purposes; for example, fissile and fertile fuel particles are used together for transmutation purposes in some reactors. Moreover, the fuel particle size may not be constant and can vary within a range; a typical design containing such fuel particles can be found in the FSV reactor. It is therefore desirable to develop a new computational model that treats multi-type, poly-sized particle systems in neutronic analysis. This requires extending the current CLS method to sample on the fly not only the location of each fuel particle, but also its type and size, so that it can be applied to a broad range of reactor designs in neutronic analyses.
New sampling functions need to be developed for the extended on-the-fly sampling strategy. This Ph.D. dissertation addressed these
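The core of chord length sampling is drawing, on the fly, the flight distance through the background matrix to the next fuel-particle surface from an assumed chord length distribution. A minimal sketch follows; the exponential form and the mean-chord expression 4r(1-pf)/(3pf) for randomly packed spheres of radius r at packing fraction pf are the commonly used assumptions in the CLS literature, not formulas taken from this dissertation:

```python
import math
import random

def sample_distance_to_next_particle(packing_fraction, particle_radius, rng):
    """CLS-style on-the-fly sampling: draw the distance through the matrix
    to the next fuel-particle surface from an exponential distribution whose
    mean is the matrix mean chord length (standard form, assumed here)."""
    mean_chord = (4.0 * particle_radius * (1.0 - packing_fraction)
                  / (3.0 * packing_fraction))
    u = 1.0 - rng.random()               # u in (0, 1], avoids log(0)
    return -mean_chord * math.log(u)
```

In a full CLS simulation this draw replaces explicit geometry tracking through thousands of stochastically placed particles, which is the source of the method's speed advantage.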

Liang, Chao

351

Parallel computation of multigroup reactivity coefficient using iterative method

One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets take the form of a stainless-steel tube in which layers of high-enriched uranium are superimposed. Irradiating the FPM tube is intended to produce fission products; the fission material is widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core can, however, interfere with core performance, and one such disturbance comes from changes in flux or reactivity. A method is therefore needed for calculating safety margins under the ongoing configuration changes that occur during the life of the reactor, and making the code faster becomes essential. The perturbation method has the advantage that the neutron safety margin of the research reactor can be reused in the reactivity calculation without modification. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model is computationally demanding. Several parallel algorithms with iterative methods have been developed for solving large sparse matrix systems. The Red-Black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficient. In this research, a code for reactivity calculation was developed as part of the safety analysis, using parallel processing; the calculation can be performed more quickly and efficiently by utilizing parallel processing on a multicore computer. The code was applied to the calculation of safety limits for irradiated FPM targets with increasing uranium content.
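The power iteration mentioned in the abstract is the standard outer iteration for criticality problems: it extracts the dominant eigenvalue (the multiplication factor) of the fission-source operator. A minimal serial sketch on a generic matrix, not the authors' parallel code:

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=10_000):
    """Dominant eigenvalue and eigenvector of A by power iteration -- the
    outer (fission source) iteration commonly used to obtain k_eff when A
    represents the combined diffusion/fission operator."""
    x = np.arange(1.0, A.shape[0] + 1.0)   # any start not orthogonal to the mode
    k = 0.0
    for _ in range(max_iter):
        y = A @ x
        k_new = np.linalg.norm(y) / np.linalg.norm(x)  # eigenvalue estimate
        x = y / np.linalg.norm(y)                      # renormalize the source
        if abs(k_new - k) < tol:
            break
        k = k_new
    return k_new, x
```

In a multigroup diffusion code, each application of `A` would itself be an inner solve (e.g. Red-Black Gauss-Seidel), which is where parallelism pays off.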

Susmikanti, Mike [Center for Development of Nuclear Informatics, National Nuclear Energy Agency of Indonesia PUSPIPTEK Area, Tangerang (Indonesia)]; Dewayatna, Winter [Center for Nuclear Fuel Technology, National Nuclear Energy Agency of Indonesia PUSPIPTEK Area, Tangerang (Indonesia)]

2013-09-09

352

A FAST NEW PUBLIC CODE FOR COMPUTING PHOTON ORBITS IN A KERR SPACETIME

Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. We present a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semianalytically for the first time. We discuss the method, its implementation in a freely available FORTRAN code, and its application to toy problems from the literature.
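The reduction to Carlson's symmetric elliptic integrals is the key step. As a self-contained illustration (the standard duplication-theorem algorithm for Carlson's R_F, not the authors' FORTRAN code), the function below evaluates R_F(x, y, z) with no special-function library:

```python
import math

def carlson_rf(x, y, z, tol=1e-10):
    """Carlson's symmetric elliptic integral R_F(x, y, z), evaluated by the
    duplication theorem: repeatedly average the arguments until they are
    nearly equal, then apply a fifth-order series expansion."""
    while True:
        sx, sy, sz = math.sqrt(x), math.sqrt(y), math.sqrt(z)
        lam = sx * sy + sy * sz + sz * sx
        x, y, z = (x + lam) / 4, (y + lam) / 4, (z + lam) / 4
        mu = (x + y + z) / 3
        dx, dy, dz = 1 - x / mu, 1 - y / mu, 1 - z / mu
        if max(abs(dx), abs(dy), abs(dz)) < tol:
            break
    e2 = dx * dy + dy * dz + dz * dx
    e3 = dx * dy * dz
    # series tail (Carlson 1979)
    return (1 - e2 / 10 + e3 / 14 + e2 * e2 / 24 - 3 * e2 * e3 / 44) / math.sqrt(mu)
```

Useful checks: R_F(0, 1, 1) = pi/2 (a complete elliptic integral) and R_F(a, a, a) = 1/sqrt(a).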

Dexter, Jason [Department of Physics, University of Washington, Seattle, WA 98195-1560 (United States); Agol, Eric [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States)], E-mail: jdexter@u.washington.edu

2009-05-10

353

34 CFR 682.304 - Methods for computing interest benefits and special allowance.

Code of Federal Regulations, 2010 CFR

... 3 2010-07-01 2010-07-01 false Methods for computing interest benefits and special allowance. 682.304 Section...Interest and Special Allowance § 682.304 Methods for computing interest benefits and special allowance. (a)...

2010-07-01

354

34 CFR 682.304 - Methods for computing interest benefits and special allowance.

Code of Federal Regulations, 2011 CFR

... 4 2011-07-01 2011-07-01 false Methods for computing interest benefits and special allowance. 682.304 Section...Interest and Special Allowance § 682.304 Methods for computing interest benefits and special allowance. (a)...

2011-07-01

355

Characterization of heterogeneous solids via wave methods in computational microelasticity

NASA Astrophysics Data System (ADS)

Real solids are inherently heterogeneous bodies. While the resolution at which they are observed may be disparate from one material to the next, heterogeneities heavily affect the dynamic behavior of all microstructured solids. This work introduces a wave propagation simulation methodology, based on Mindlin's microelastic continuum theory, as a tool to dynamically characterize microstructured solids in a way that naturally accounts for their inherent heterogeneities. Wave motion represents a natural benchmark problem for appreciating the full benefits of the microelastic theory, since it is in high-frequency dynamic regimes that microstructural effects most clearly reveal themselves. Through a finite-element implementation of the microelastic continuum and the interpretation of the resulting computational multiscale wavefields, one can estimate the effect of microstructures upon the wave propagation modes, phase and group velocities. By accounting for microstructures without explicitly modeling them, the method reduces the computational time with respect to classical methods based on direct numerical simulation of the heterogeneities. The numerical method put forth in this research implements the microelastic theory through a finite-element scheme with enriched super-elements featuring microstructural degrees of freedom, and implementing constitutive laws obtained by homogenizing the microstructure characteristics over material meso-domains. It is possible to envision the use of this modeling methodology in support of diverse applications, ranging from structural health monitoring in composite materials to the simulation of biological and geomaterials. From an intellectual point of view, this work offers a mathematical explanation of some of the discrepancies often observed between one-scale models and physical experiments by targeting the area of wave propagation, one area where these discrepancies are most pronounced.

Gonella, Stefano; Steven Greene, M.; Kam Liu, Wing

2011-05-01

356

Matching wind turbine rotors and loads: computational methods for designers

This report provides a comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications. The user must supply: (1) turbine aerodynamic efficiency as a function of tipspeed ratio; (2) mechanical load torque as a function of rotation speed; (3) useful delivered power as a function of incoming mechanical power; (4) site average windspeed and, for maximum accuracy, distribution data. The description of the data includes governing limits consistent with the capacities of components. The report develops a step-by-step method for converting the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) a decision is made as to how turbine power is to be governed (it may self-govern) to ensure safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics come into play to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach: Computer-generated families of coefficient curves provide data for algebraic scaling formulas. The method leads not only to energy predictions, but also to insight into the processes being modeled. Direct use of a computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
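Step (1) of the matching procedure, predicting turbine power from the aerodynamic efficiency curve, reduces to P = (1/2) * rho * A * v^3 * Cp(lambda), where lambda is the tip-speed ratio. A hedged sketch follows; the rotor radius and the Cp curve shape below are illustrative assumptions, not data from the report:

```python
import numpy as np

RHO = 1.225      # air density, kg/m^3
R = 5.0          # rotor radius, m (illustrative, not from the report)

def cp(tsr):
    """Illustrative aerodynamic efficiency curve peaking near tsr = 7."""
    return np.clip(0.45 * np.exp(-((tsr - 7.0) / 3.0) ** 2), 0.0, None)

def turbine_power(v, omega):
    """Shaft power (W) at windspeed v (m/s) and rotor speed omega (rad/s):
    P = 0.5 * rho * A * v^3 * Cp(tip-speed ratio)."""
    tsr = omega * R / v               # tip-speed ratio lambda = omega*R/v
    area = np.pi * R ** 2
    return 0.5 * RHO * area * v ** 3 * cp(tsr)
```

Intersecting this turbine power curve with the load torque-speed curve at each windspeed yields the operating point, which is what the report's graph-and-calculator procedure tabulates.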

Seale, J.B.

1983-04-01

357

The Role of Public Extension in Introducing Environment-Friendly Farming Methods in Turkey.

ERIC Educational Resources Information Center

Currently, the Turkish extension service plays a minimal role in reducing adverse environmental effects of farming methods. Public investment in research and extension on sustainable agriculture is needed to ensure long-term production practices that maintain the food supply without damaging the environment. (SK)

Kumuk, T.; Akgungor, S.

1995-01-01

358

Relationship of Instructional Methods to Student Engagement in Two Public High Schools

ERIC Educational Resources Information Center

This study investigated the argument that schools that emphasize relational learning are better able to serve the motivational needs of adolescents. Matched-pair samples (n=80) from two public secondary schools were compared using the experience sampling method (ESM). Students attending a "non-traditional" school (which employed group decision…

Johnson, Lisa S.

2008-01-01

359

Evaluation of some methods for the relative assessment of scientific publications

Some bibliometric methods for assessing the publication activity of research units are discussed on the basis of impact factors and citations of papers. An average subfield impact factor of periodicals representing subfields in chemistry is proposed. This indicator characterizes the average citedness of a paper in a given subfield. Comparing the total sum of impact factors of corresponding periodicals

P. Vinkler

1986-01-01

360

Land-use planning and public preferences: What can we learn from choice experiment method?

In this article we discuss the economic approach to evaluate landscape preferences for land-use planning. We then use the choice experiment method to examine public preferences for three landscape features – hedgerows, farm buildings and scrubland – in the Monts d’Arrée region (in Brittany, France), in the context of re-design of landscape conservation policy by the local environmental institute. Surveys

Mbolatiana Rambonilaza; Jeanne Dachary-Bernard

2007-01-01

361

Computational modeling of multicellular constructs with the material point method.

Computational modeling of the mechanics of cells and multicellular constructs with standard numerical discretization techniques such as the finite element (FE) method is complicated by the complex geometry, material properties and boundary conditions that are associated with such systems. The objectives of this research were to apply the material point method (MPM), a meshless method, to the modeling of vascularized constructs by adapting the algorithm to accurately handle quasi-static, large deformation mechanics, and to apply the modified MPM algorithm to large-scale simulations using a discretization that was obtained directly from volumetric confocal image data. The standard implicit time integration algorithm for MPM was modified to allow the background computational grid to remain fixed with respect to the spatial distribution of material points during the analysis. This algorithm was used to simulate the 3D mechanics of a vascularized scaffold under tension, consisting of growing microvascular fragments embedded in a collagen gel, by discretizing the construct with over 13.6 million material points. Baseline 3D simulations demonstrated that the modified MPM algorithm was both more accurate and more robust than the standard MPM algorithm. Scaling studies demonstrated the ability of the parallel code to scale to 200 processors. Optimal discretization was established for the simulations of the mechanics of vascularized scaffolds by examining stress distributions and reaction forces. Sensitivity studies demonstrated that the reaction force during simulated extension was highly sensitive to the modulus of the microvessels, despite the fact that they comprised only 10.4% of the volume of the total sample. In contrast, the reaction force was relatively insensitive to the effective Poisson's ratio of the entire sample. 
These results suggest that the MPM simulations could form the basis for estimating the modulus of the embedded microvessels through a parameter estimation scheme. Because of the generality and robustness of the modified MPM algorithm, the relative ease of generating spatial discretizations from volumetric image data, and the ability of the parallel computational implementation to scale to large processor counts, it is anticipated that this modeling approach may be extended to many other applications, including the analysis of other multicellular constructs and investigations of cell mechanics. PMID:16095601

Guilkey, James E; Hoying, James B; Weiss, Jeffrey A

2006-01-01

362

A simple method to estimate renal volume from computed tomography

Introduction: Renal parenchymal volume can be used clinically to estimate differential renal function. Unfortunately, conventional methods to determine renal volume from computed tomography (CT) are time-consuming or difficult due to software limitations. We evaluated the accuracy of simple renal measurements to estimate renal volume as compared with estimates made using specialized CT volumetric software. Methods: We reviewed 28 patients with contrast-enhanced abdominal CT. Using a standardized technique, one urologist and one urology resident independently measured renal length, lateral diameter and anterior-posterior diameter. Using the ellipsoid method, the products of the linear measurements were compared to 3D volume measurements made by a radiologist using specialized volumetric software. Results: Linear kidney measurements were highly consistent between the urologist and the urology resident (intraclass correlation coefficients: 0.97 for length, 0.96 for lateral diameter, and 0.90 for anterior-posterior diameter). Average renal volume was 170 (SD: 36) cm3 using the ellipsoid method compared with 186 (SD 37) cm3 using volumetric software, for a mean absolute bias of −15.2 (SD 15.0) cm3 and a relative volume bias of −8.2% (p < 0.001). Thirty-one of 56 (55.3%) estimated volumes were within 10% of the 3D measured volume and 54 of 56 (96.4%) were within 30%. Conclusion: Renal volume can be easily approximated from contrast-enhanced CT scans using the ellipsoid method. These findings may obviate the need for 3D volumetric software analysis in certain cases. Prospective validation is warranted. PMID:23826046
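The ellipsoid method referred to in the abstract conventionally takes the product of the three orthogonal diameters scaled by pi/6; the abstract does not spell out the coefficient, so the standard ellipsoid formula is assumed here:

```python
import math

def ellipsoid_volume(length, lateral, ap):
    """Ellipsoid approximation of renal volume (cm^3) from three orthogonal
    diameters in cm: V = (pi/6) * L * W * D (standard formula, assumed)."""
    return math.pi / 6.0 * length * lateral * ap
```

For example, an 11 x 5 x 4 cm kidney gives roughly 115 cm3, within the range of the average volumes reported in the study.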

Breau, Rodney H.; Clark, Edward; Bruner, Bryan; Cervini, Patrick; Atwell, Thomas; Knoll, Greg; Leibovich, Bradley C.

2013-01-01

363

Computational Acoustic Methods for the Design of Woodwind Instruments

NASA Astrophysics Data System (ADS)

This thesis presents a number of methods for the computational analysis of woodwind instruments. The Transmission-Matrix Method (TMM) for the calculation of the input impedance of an instrument is described. An approach based on the Finite Element Method (FEM) is applied to the determination of the transmission-matrix parameters of woodwind instrument toneholes, from which new formulas are developed that extend the range of validity of current theories. The effect of a hanging keypad is investigated and discrepancies with current theories are found for short toneholes. This approach was applied as well to toneholes on a conical bore, and we conclude that the tonehole transmission matrix parameters developed on a cylindrical bore are equally valid for use on a conical bore. A boundary condition for the approximation of the boundary layer losses for use with the FEM was developed, and it enables the simulation of complete woodwind instruments. The comparison of the simulations of instruments with many open or closed toneholes with calculations using the TMM reveals discrepancies that are most likely attributable to internal or external tonehole interactions. This is not taken into account in the TMM and poses a limit to its accuracy. The maximal error is found to be smaller than 10 cents. The effect of the curvature of the main bore is investigated using the FEM. The radiation impedance of a wind instrument bell is calculated using the FEM and compared to TMM calculations; we conclude that the TMM is not appropriate for the simulation of flaring bells. Finally, a method is presented for the calculation of the tonehole positions and dimensions under various constraints using an optimization algorithm, which is based on the estimation of the playing frequencies using the Transmission-Matrix Method. A number of simple woodwind instruments are designed using this algorithm and prototypes are evaluated.
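In the TMM, each bore segment is represented by a 2x2 matrix relating pressure and volume flow at its two ends; chaining the matrices and folding in the termination impedance yields the input impedance. A minimal lossless sketch (no boundary-layer losses or tonehole branches, which the thesis treats in detail):

```python
import numpy as np

def cylinder_matrix(length, radius, freq, c=343.0, rho=1.204):
    """Lossless transmission matrix of a cylindrical bore segment."""
    k = 2 * np.pi * freq / c
    zc = rho * c / (np.pi * radius ** 2)     # characteristic impedance
    kl = k * length
    return np.array([[np.cos(kl), 1j * zc * np.sin(kl)],
                     [1j * np.sin(kl) / zc, np.cos(kl)]])

def input_impedance(segments, freq, z_load=0.0):
    """Chain segment matrices (length, radius pairs) and fold in the
    termination impedance: Zin = (A*ZL + B) / (C*ZL + D)."""
    m = np.eye(2)
    for (length, radius) in segments:
        m = m @ cylinder_matrix(length, radius, freq)
    a, b = m[0]
    c_, d = m[1]
    return (a * z_load + b) / (c_ * z_load + d)
```

With an ideal open end (ZL = 0) a single cylinder gives Zin = j*Zc*tan(kL), whose maxima locate the playing frequencies that the thesis's optimization algorithm targets.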

Lefebvre, Antoine

2011-12-01

364

Computational Methods for MicroRNA Target Prediction

MicroRNAs (miRNAs) have been identified as one of the most important molecules that regulate gene expression in various organisms. miRNAs are short, 21–23 nucleotide-long, single-stranded RNA molecules that bind to 3' untranslated regions (3' UTRs) of their target mRNAs. In general, they silence the expression of their target genes via degradation of the mRNA or by translational repression. The expression of miRNAs, on the other hand, also varies in different tissues based on their functions. It is therefore important to predict the targets of miRNAs by computational approaches in order to understand their effects on the regulation of gene expression. Various computational methods have been developed for miRNA target prediction, but the resulting lists of candidate target genes from different algorithms often do not overlap. It is crucial to tune these bioinformatics tools for more accurate predictions, and equally important to validate the predicted target genes experimentally. PMID:25153283
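Most target-prediction algorithms start from seed matching: finding 3' UTR sites that are reverse-complementary to the miRNA seed (nucleotides 2-8). A minimal sketch of that common first step (a generic illustration, not any particular tool's algorithm):

```python
def seed_match_sites(mirna, utr):
    """Return start positions of candidate 7mer target sites in an RNA
    3' UTR: stretches reverse-complementary to miRNA seed nucleotides 2-8."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]                                  # nucleotides 2-8
    site = "".join(comp[b] for b in reversed(seed))    # reverse complement
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]
```

Real predictors then layer on conservation, site accessibility and free-energy scoring, which is where the algorithms diverge and their candidate lists stop overlapping.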

Ekimler, Semih; Sahin, Kaniye

2014-01-01

365

ERIC Educational Resources Information Center

This study used self-report and observation techniques to investigate how students study computer-based materials. In addition, it examined if a study method called SOAR can facilitate computer-based learning. SOAR is an acronym that stands for the method's 4 theoretically driven and empirically supported components: select (S), organize (O),…

Jairam, Dharmananda; Kiewra, Kenneth A.

2010-01-01

366

Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

1999-09-20

367

Methods and computer readable medium for improved radiotherapy dosimetry planning

Methods and computer readable media are disclosed for ultimately developing a dosimetry plan for a treatment volume irradiated during radiation therapy with a radiation source concentrated internally within a patient or incident from an external beam. The dosimetry plan is available in near "real-time" because of the novel geometric model construction of the treatment volume which in turn allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of alpha, beta or gamma emissions emanating from an internal radiation source during various radiotherapies, such as brachytherapy or targeted radionuclide therapy, or they are exemplary representations of high-energy photons, electrons, protons or other ionizing particles incident on the treatment volume from an external source. In a preferred embodiment, a medical image of a treatment volume irradiated during radiotherapy having a plurality of pixels of information is obtained.
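Simulating particle movements along tracks rests on sampling free-path lengths from the exponential attenuation law. As a generic one-dimensional illustration of that kernel (not the geometric-model construction the disclosure describes), the sketch below estimates uncollided transmission through a slab:

```python
import math
import random

def transmitted_fraction(mu, thickness, n=100_000, seed=1):
    """Monte Carlo estimate of uncollided transmission through a slab of
    given thickness: sample free paths s = -ln(U)/mu (mu = attenuation
    coefficient) and count the fraction exceeding the thickness."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if -math.log(1.0 - rng.random()) / mu > thickness)
    return hits / n
```

The analytic answer is exp(-mu * thickness); fast dosimetry planning hinges on evaluating many such track samples quickly through the patient geometry.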

Wessol, Daniel E.; Frandsen, Michael W.; Wheeler, Floyd J.; Nigg, David W.

2005-11-15

368

Computational and experimental methods to decipher the epigenetic code

A multi-layered set of epigenetic marks, including post-translational modifications of histones and methylation of DNA, is finely tuned to define the epigenetic state of chromatin in any given cell type under specific conditions. Recently, the knowledge about the combinations of epigenetic marks occurring in the genome of different cell types under various conditions is rapidly increasing. Computational methods were developed for the identification of these states, unraveling the combinatorial nature of epigenetic marks and their association to genomic functional elements and transcriptional states. Nevertheless, the precise rules defining the interplay between all these marks remain poorly characterized. In this perspective we review the current state of this research field, illustrating the power and the limitations of current approaches. Finally, we sketch future avenues of research illustrating how the adoption of specific experimental designs coupled with available experimental approaches could be critical for a significant progress in this area. PMID:25295054

de Pretis, Stefano; Pelizzola, Mattia

2014-01-01

369

1.1 This practice facilitates the interoperability of computed radiography (CR) imaging and data acquisition equipment by specifying image data transfer and archival storage methods in commonly accepted terms. This practice is intended to be used in conjunction with Practice E2339 on Digital Imaging and Communication in Nondestructive Evaluation (DICONDE). Practice E2339 defines an industrial adaptation of the NEMA Standards Publication titled Digital Imaging and Communications in Medicine (DICOM, see http://medical.nema.org), an international standard for image data acquisition, review, storage and archival storage. The goal of Practice E2339, commonly referred to as DICONDE, is to provide a standard that facilitates the display and analysis of NDE results on any system conforming to the DICONDE standard. Toward that end, Practice E2339 provides a data dictionary and a set of information modules that are applicable to all NDE modalities. This practice supplements Practice E2339 by providing information objec...

American Society for Testing and Materials. Philadelphia

2010-01-01

370

A comparison of non-standard solvers for ODEs describing cellular reactions in the heart. Computer Methods in Biomechanics and Biomedical Engineering, Vol. ?, No. ?, pp. 1-19 (preprint dated January 28, 2007). Mary C. MacLachlan, Joakim Sundnes, Raymond J. Spiteri

Spiteri, Raymond J.

371

NASA Astrophysics Data System (ADS)

ICRP Publication 116 on ‘Conversion coefficients for radiological protection quantities for external radiation exposures’, provides fluence-to-dose conversion coefficients for organ-absorbed doses and effective dose for various types of external exposures (ICRP 2010 ICRP Publication 116). The publication supersedes the ICRP Publication 74 (ICRP 1996 ICRP Publication 74, ICRU 1998 ICRU Report 57), including new particle types and expanding the energy ranges considered. The coefficients were calculated using the ICRP/ICRU computational phantoms (ICRP 2009 ICRP Publication 110) representing the reference adult male and reference adult female (ICRP 2002 ICRP Publication 89), together with a variety of Monte Carlo codes simulating the radiation transport in the body. Idealized whole-body irradiation from unidirectional and rotational parallel beams as well as isotropic irradiation was considered for a large variety of incident radiations and energy ranges. Comparison of the effective doses with operational quantities revealed that the latter quantities continue to provide a good approximation of effective dose for photons, neutrons and electrons for the ‘conventional’ energy ranges considered previously (ICRP 1996, ICRU 1998), but not at the higher energies of ICRP Publication 116.

Petoussi-Henss, Nina; Bolch, Wesley E.; Eckerman, Keith F.; Endo, Akira; Hertel, Nolan; Hunt, John; Menzel, Hans G.; Pelliccioni, Maurizio; Schlattl, Helmut; Zankl, Maria

2014-09-01

372

Development of computational methods for heavy lift launch vehicles

NASA Technical Reports Server (NTRS)

The research effort has been focused on the development of an advanced flow solver for complex viscous turbulent flows with shock waves. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. A new computer program named CENS3D has been developed for viscous turbulent flows with discontinuities. Details of the code are described in Appendix A and Appendix B. With the developments of the numerical algorithm and dissipation model, the simulation of three-dimensional viscous compressible flows has become more efficient and accurate. The results of the research are expected to yield a direct impact on the design process of future liquid fueled launch systems.

Yoon, Seokkwan; Ryan, James S.

1993-01-01

373

A stoichiometric calibration method for dual energy computed tomography

NASA Astrophysics Data System (ADS)

The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. 
In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic reconstruction algorithm (filtered back projection). With a more advanced method (sinogram affirmed iterative technique), the values become 1.0 mm, 0.5 mm and 0.4 mm for protons, helium and carbon ions, respectively. These results allow one to conclude that the present adaptation of the stoichiometric calibration yields a highly accurate method for characterizing tissue with DECT for ion beam therapy and potentially for photon beam therapy.

Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

2014-04-01

374

Interactive computer methods for generating mineral-resource maps

Inasmuch as maps are a basic tool of geologists, the U.S. Geological Survey's CRIB (Computerized Resources Information Bank) was constructed so that the data it contains can be used to generate mineral-resource maps. However, by the standard methods used (batch processing and off-line plotting), the production of a finished map commonly takes 2-3 weeks. To produce computer-generated maps more rapidly, cheaply, and easily, and also to provide an effective demonstration tool, we have devised two related methods for plotting maps as alternatives to conventional batch methods. These methods are: 1. Quick-Plot, an interactive program whose output appears on a CRT (cathode-ray-tube) device, and 2. The Interactive CAM (Cartographic Automatic Mapping system), which combines batch and interactive runs. The output of the Interactive CAM system is final compilation (not camera-ready) paper copy. Both methods are designed to use data from the CRIB file in conjunction with a map-plotting program. Quick-Plot retrieves a user-selected subset of data from the CRIB file, immediately produces an image of the desired area on a CRT device, and plots data points according to a limited set of user-selected symbols. This method is useful for immediate evaluation of the map and for demonstrating how trial maps can be made quickly. The Interactive CAM system links the output of an interactive CRIB retrieval to a modified version of the CAM program, which runs in the batch mode and stores plotting instructions on a disk, rather than on a tape. The disk can be accessed by a CRT, and, thus, the user can view and evaluate the map output on a CRT immediately after a batch run, without waiting 1-3 days for an off-line plot. The user can, therefore, do most of the layout and design work in a relatively short time by use of the CRT, before generating a plot tape and having the map plotted on an off-line plotter.

Calkins, James Alfred; Crosby, A.S.; Huffman, T.E.; Clark, A.L.; Mason, G.T.; Bascle, R.J.

1980-01-01

375

Accepted for publication in Computers and Operations Research (2010). Faster Integer-Feasibility in Mixed-Integer Linear Programs by Branching to Force Change. Jennifer Pryor (jpryor@sce.carleton.ca) and John W. Chinneck, Carleton University, Ontario K1S 5B6, Canada. October 22, 2010. Abstract: Branching in mixed-integer (or integer) linear

Chinneck, J.W.

2010-01-01

376

Publications. Forrest M. Hoffman and William W. Hargrove. Cluster computing: Linux taken to the extreme. Linux Magazine, 1(1):56–59, 1999. Forrest M. Hoffman. Concepts in Beowulfery. Linux Magazine, 4(1):40–41, January 2002a. Forrest M. Hoffman. Configuring a Beowulf Cluster. Linux Magazine, 4(2):42–45, February

Hoffman, Forrest M.

377

ERIC Educational Resources Information Center

To a large extent the Southwest can be described as a rural area. Under these circumstances, programs for public understanding of technology become, first of all, exercises in logistics. In 1982, New Mexico State University introduced a program to inform teachers about computer technology. This program takes microcomputers into rural classrooms…

Amodeo, Luiza B.; Martin, Jeanette

378

Deception in Speeches of Candidates for Public Office. D.B. Skillicorn, School of Computing, Queen's University. … by candidates in the 2008 U.S. presidential election; and the observation of both short-term and medium …; deception is used as a lens through which to observe the self-perceptions of candidates and campaigns

Paris-Sud XI, Université de

379

Computer technology to assist states, tribes, and clients in the design of wellhead and source water protection areas for public water supply wells is being developed through two distinct SubTasks: (Sub task 1) developing a web-based wellhead decision support system, WellHEDSS, t...

380

Computational intelligence methods for information understanding and information management

Włodzisław Duch1,2, Norbert Jankowski1 and Krzysztof Grąbczewski1. 1 Department of Informatics, Nicolaus Copernicus University, Toruń, Poland, and 2 Department of Computer Science, School of Computer Engineering

Jankowski, Norbert

381

Computational Methods for Domain Partitioning of Protein Structures

NASA Astrophysics Data System (ADS)

Analysis of protein structures typically begins with decomposition of structure into more basic units, called "structural domains". The underlying goal is to reduce a complex protein structure to a set of simpler yet structurally meaningful units, each of which can be analyzed independently. Structural semi-independence of domains is their hallmark: domains often have compact structure and can fold or function independently. Domains can undergo so-called "domain shuffling" when they reappear in different combinations in different proteins thus implementing different biological functions (Doolittle, 1995). Proteins can then be conceived as being built of such basic blocks: some, especially small proteins, consist usually of just one domain, while other proteins possess a more complex architecture containing multiple domains. Therefore, the methods for partitioning a structure into domains are of critical importance: their outcome defines the set of basic units upon which structural classifications are built and evolutionary analysis is performed. This is especially true nowadays in the era of structural genomics. Today there are many methods that decompose the structure into domains: some of them are manual (i.e., based on human judgment), others are semiautomatic, and still others are completely automatic (based on algorithms implemented as software). Overall there is a high level of consistency and robustness in the process of partitioning a structure into domains (for ~80% of proteins); at least for structures where domain location is obvious. The picture is less bright when we consider proteins with more complex architectures—neither human experts nor computational methods can reach consistent partitioning in many such cases. This is a rather accurate reflection of biological phenomena in general since domains are formed by different mechanisms, hence it is nearly impossible to come up with a set of well-defined rules that captures all of the observed cases.

Veretnik, Stella; Shindyalov, Ilya

382

Analytic and simulation methods in computer network design

The Seventies are here and so are computer networks! The time sharing industry dominated the Sixties and it appears that computer networks will play a similar role in the Seventies. The need has now arisen for many of these time-shared systems to share each others' resources by coupling them together over a communication network thereby creating a computer network. The

Leonard Kleinrock

1970-01-01

383

Research on assessment methods for urban public transport development in China.

In recent years, with the rapid increase in urban population, the urban travel demands in Chinese cities have been increasing dramatically. As a result, developing comprehensive urban transport systems becomes an inevitable choice to meet the growing urban travel demands. In urban transport systems, public transport plays the leading role to promote sustainable urban development. This paper aims to establish an assessment index system for the development level of urban public transport consisting of a target layer, a criterion layer, and an index layer. Review on existing literature shows that methods used in evaluating urban public transport structure are dominantly qualitative. To overcome this shortcoming, fuzzy mathematics method is used for describing qualitative issues quantitatively, and AHP (analytic hierarchy process) is used to quantify expert's subjective judgment. The assessment model is established based on the fuzzy AHP. The weight of each index is determined through the AHP and the degree of membership of each index through the fuzzy assessment method to obtain the fuzzy synthetic assessment matrix. Finally, a case study is conducted to verify the rationality and practicability of the assessment system and the proposed assessment method. PMID:25530756
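The fuzzy-AHP pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' model: the three criteria, the pairwise comparison matrix, and the membership matrix `R` are invented for the example.

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical criteria
# (coverage, frequency, affordability) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# AHP weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Fuzzy membership matrix R: row i gives the degree to which
# criterion i belongs to each rating level (good, fair, poor),
# e.g. elicited from expert scoring.
R = np.array([
    [0.6, 0.3, 0.1],
    [0.4, 0.4, 0.2],
    [0.2, 0.5, 0.3],
])

# Fuzzy synthetic assessment: B = w @ R, then pick the level with
# the highest membership (maximum-membership principle).
B = w @ R
levels = ["good", "fair", "poor"]
verdict = levels[int(np.argmax(B))]
print(w.round(3), B.round(3), verdict)
```

Because each row of `R` and the weight vector both sum to 1, the synthetic assessment `B` is itself a membership distribution over the rating levels.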

Zou, Linghong; Dai, Hongna; Yao, Enjian; Jiang, Tian; Guo, Hongwei

2014-01-01

385

Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.
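The sense-selection step described above can be illustrated with a much simpler stand-in. The sketch below uses a basic Lesk-style gloss-overlap score, not the patented method; the sense inventory and event-class names are invented.

```python
# Toy sense inventory: each sense has a gloss; senses map to event
# classes (all names here are invented for illustration).
SENSES = {
    "bank": [
        ("bank.n.01", "financial institution that accepts deposits",
         "CommerceEvent"),
        ("bank.n.02", "sloping land beside a body of water",
         "NaturalFeature"),
    ],
}

def disambiguate(word, context):
    """Pick the sense whose gloss shares the most words with the
    context (a simplified Lesk overlap score)."""
    ctx = set(context.lower().split())
    def overlap(sense):
        return len(ctx & set(sense[1].split()))
    return max(SENSES[word], key=overlap)

sense_id, gloss, event_class = disambiguate(
    "bank", "she sat on the bank of the river watching the water")
print(sense_id, event_class)
```

Once a sense is chosen, associating its event class with the text is a lookup, which is the general shape of the method in the claim.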

Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

2011-10-11

386

Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

The main goal (long term) of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed by objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding with observations and priors. A mathematical formulation for computing flow fields is set up for computing the minimizer for the problem. An application to oceanic flow based on sea surface temperature is presented.

Luttman, A.

2012-03-30

387

Non-unitary probabilistic quantum computing circuit and method

NASA Technical Reports Server (NTRS)

A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
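One standard way to realize such a circuit numerically is the unitary dilation of a non-unitary contraction: embed the operator A into a larger unitary acting on the system plus one ancilla qubit, and keep the result only when the ancilla measurement succeeds. A small numpy sketch, under the assumption that A has been rescaled to a contraction (largest singular value at most 1); this illustrates the general construction, not the specific patented circuit.

```python
import numpy as np

def herm_sqrt(M):
    """Square root of a Hermitian positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh(M)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def dilate(A):
    """Embed a contraction A into a unitary on system + one ancilla
    qubit (Halmos dilation)."""
    n = A.shape[0]
    I = np.eye(n)
    Da  = herm_sqrt(I - A.conj().T @ A)   # sqrt(I - A†A)
    Dad = herm_sqrt(I - A @ A.conj().T)   # sqrt(I - AA†)
    return np.block([[A, Dad], [Da, -A.conj().T]])

rng = np.random.default_rng(0)
# A hypothetical non-unitary operator, rescaled to a contraction.
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A /= np.linalg.svd(A, compute_uv=False)[0]

U = dilate(A)
psi = np.array([1.0, 0.0], dtype=complex)   # initial qubit state
full = np.concatenate([psi, np.zeros(2)])   # ancilla starts in |0>
out = U @ full

p_success = np.linalg.norm(out[:2]) ** 2    # ancilla measured as 0
post = out[:2] / np.linalg.norm(out[:2])    # state proportional to A|psi>
print(p_success)
```

On failure (ancilla measured 1) the procedure is repeated on the resulting state, exactly as the abstract describes, until the success condition is obtained.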

Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

2009-01-01

388

Computer Physics Communications 66 (1991) 243–258. Function parametrization (FP) is a method to invert computer models that map physical parameters describing the state … requiring little computing time to evaluate. The major advantages of FP over other analysis methods are

van Milligen, Boudewijn

389

A method for online adaptation of computer-game AI rulebase

It is not an easy task to balance the level of computer controlled characters that play computer games against human players. In this paper, we focus on a method called Dynamic Scripting (DS) that has been recently proposed for this task. This method online updates rule weights in rule-base that describe the behavior of the computer controlled character. However, the

Ruck Thawonmas; Syota Osaka

2006-01-01

390

PUBLICATIONS [1] M. J. Ward, F. M. Odeh, D. S. Cohen, Asymptotic Methods for MOSFET Modeling, NASEC-

PUBLICATIONS [1] M. J. Ward, F. M. Odeh, D. S. Cohen, Asymptotic Methods for MOSFET Modeling, NASEC400. [4] M. J. Ward, F. M. Odeh, D. S. Cohen, Asymptotic Methods for MOSFET Modeling, SIAM J. Appl. Math

Jellinek, Mark

391

The Repeated Replacement Method: A Pure Lagrangian Meshfree Method for Computational Fluid Dynamics

In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling compressible fluids’ tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly “chops out” fluid from active areas and replaces it with new “flattened” fluid cells with the same mass, momentum, and energy. We call the new cells “flattened” because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175

Walker, Wade A.

2012-01-01

392

One-to-One Computing in Public Schools: Lessons from "Laptops for All" Programs

ERIC Educational Resources Information Center

The basic tenet of one-to-one computing is that the student and teacher have Internet-connected, wireless computing devices in the classroom and optimally at home as well. Also known as "ubiquitous computing," this strategy assumes that every teacher and student has her own computing device and obviates the need for moving classes to computer…

Abell Foundation, 2008

2008-01-01

393

Improving diet is one important pathway for addressing cancer disparities. We conducted mixed-method analyses of 468 24-h dietary recalls from 156 African-American women residents of Washington DC public housing to better understand dietary patterns. Recalls were rated for five cancer-related preventive characteristics (adequate fruits/vegetables, moderate fat, moderate calories, no alcohol, and adequate Healthy Eating Index score), combined as a scale.

Ann C. Klassen; Katherine Clegg Smith; Maureen M. Black; Laura E. Caulfield

2009-01-01

394

Recent advances in computational structural reliability analysis methods

NASA Technical Reports Server (NTRS)

The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

1993-01-01

395

A Review of Data Quality Assessment Methods for Public Health Information Systems

High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. The relevant study was identified in major databases and well-known institutional websites. We found the dimension of data was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interview and documentation review. The limitations of the reviewed studies included inattentiveness to data use and data collection process, inconsistency in the definition of attributes of data quality, failure to address data users’ concerns and a lack of systematic procedures in data quality assessment. This review study is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research efforts should be given to assess the quality of data use and the quality of data collection process. PMID:24830450
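Two of the attributes named above, completeness and timeliness, reduce to simple record-level computations. A minimal sketch; the field names, records, and 7-day threshold are invented for illustration.

```python
from datetime import date

# Hypothetical disease-notification records.
records = [
    {"id": 1, "onset": date(2014, 3, 1), "reported": date(2014, 3, 3), "age": 34},
    {"id": 2, "onset": date(2014, 3, 2), "reported": date(2014, 3, 20), "age": None},
    {"id": 3, "onset": date(2014, 3, 5), "reported": date(2014, 3, 6), "age": 61},
]

def completeness(records, field):
    """Share of records with a non-missing value for `field`."""
    return sum(r[field] is not None for r in records) / len(records)

def timeliness(records, max_days=7):
    """Share of records reported within `max_days` of symptom onset."""
    ok = sum((r["reported"] - r["onset"]).days <= max_days for r in records)
    return ok / len(records)

print(completeness(records, "age"))  # 2 of 3 records have an age
print(timeliness(records))           # record 2 took 18 days to report
```

Accuracy, by contrast, requires an external gold standard (e.g. a data audit against source documents), which is why the reviewed studies treat it with audits rather than in-system checks.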

Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping

2014-01-01

396

Social Networking Services for Public Spaces. Simo Hosio, University of Oulu, Finland. … between smartphones and public displays [2]. Best practices for social applications on public interactive displays and smartphones as the interaction mediums. Collaboration with local authorities to understand

Chaudhuri, Surajit

397

A Marching Method for Computing Intersection Curves of Two Subdivision Solids

A Marching Method for Computing Intersection Curves of Two Subdivision Solids Xu-Ping Zhu1 , Shi-Min Hu1 , Chiew-Lan Tai2 , and Ralph R. Martin3 1 Department of Computer Science and Technology, Tsinghua

Martin, Ralph R.

398

Lanczos eigensolution method for high-performance computers

NASA Technical Reports Server (NTRS)

The theory, computational analysis, and applications are presented of a Lanczos algorithm on high performance computers. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix vector multiples. These computational steps are optimized to exploit the vector and parallel capabilities of high performance computers. The savings in computational time from applying optimization techniques such as: variable band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large scale structural analysis applications are described: the buckling of a composite blade stiffened panel with a cutout, and the vibration analysis of a high speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer of 181.6 seconds was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was on the Cray Y-MP using an average of 3.63 processors.
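The core Lanczos recurrence underlying such eigensolvers is short. The sketch below is a generic textbook version with full reorthogonalization on a dense random symmetric matrix, not the optimized vector/parallel implementation described in the report:

```python
import numpy as np

def lanczos(A, v0, m):
    """Run m steps of the Lanczos iteration for symmetric A and
    return the tridiagonal coefficients (alpha, beta)."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        V[:, j] = v
        w = A @ v - b * v_prev
        alpha[j] = v @ w
        w -= alpha[j] * v
        # Full reorthogonalization: cheap insurance at small m.
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            v_prev, v = v, w / b
    return alpha, beta

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2                      # symmetric test matrix
alpha, beta = lanczos(A, rng.standard_normal(200), 40)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
ritz = np.linalg.eigvalsh(T)           # Ritz values of the 40x40 T
exact = np.linalg.eigvalsh(A)
print(ritz[-1], exact[-1])             # extreme eigenvalues converge first
```

The three expensive kernels named in the abstract (factorization, triangular solves, matrix-vector products) all live inside the `A @ v` step and its shift-invert variants, which is why they are the targets for vectorization.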

Bostic, Susan W.

1991-01-01

399

High performance computing and quantum trajectory method in CPU and GPU systems

NASA Astrophysics Data System (ADS)

Nowadays, a dynamic progress in computational techniques allows for development of various methods, which offer significant speed-up of computations, especially those related to the problems of quantum optics and quantum computing. In this work, we propose computational solutions which re-implement the quantum trajectory method (QTM) algorithm in modern parallel computation environments in which multi-core CPUs and modern many-core GPUs can be used. In consequence, new computational routines are developed in more effective way than those applied in other commonly used packages, such as Quantum Optics Toolbox (QOT) for Matlab or QuTIP for Python.

Wiśniewska, Joanna; Sawerwain, Marek; Leoński, Wiesław

2015-01-01

400

Evaluating Computer Automated Scoring: Issues, Methods, and an Empirical Illustration

ERIC Educational Resources Information Center

With the continual progress of computer technologies, computer automated scoring (CAS) has become a popular tool for evaluating writing assessments. Research of applications of these methodologies to new types of performance assessments is still emerging. While research has generally shown a high agreement of CAS system generated scores with those…

Yang, Yongwei; Buckendahl, Chad W.; Juszkiewicz, Piotr J.; Bhola, Dennison S.

2005-01-01

401

Students' Attitudes towards Control Methods in Computer-Assisted Instruction.

ERIC Educational Resources Information Center

Describes study designed to investigate dental students' attitudes toward computer-assisted teaching as applied in programs for oral radiology in Denmark. Programs using personal computers and slide projectors with varying degrees of learner and teacher control are described, and differences in attitudes between male and female students are…

Hintze, Hanne; And Others

1988-01-01

402

We present a primal-dual interior point method (IPM) for solving smooth convex optimization problems which arise during the placement of integrated circuits. The interior point method represents a substantial enhancement in flexibility versus other methods while having similar computational requirements. We illustrate that iterative solvers are efficient for calculation of search directions during optimization. Computational results are presented on a

Andrew Kennings; Mark Frazer; Anthony Vannelli

1998-01-01

403

A collaborative framework for integrating Modelica models and computational design methods

The Modelica language has been developed to model complex product systems. However, Modelica models are usually used to simulate performance, not to optimize it. Computational design methods are very important in the design process of complex product systems, but they are not included in Modelica-based tools. In this paper, a computational design method integrating design optimization and probabilistic analysis is discussed. The design

Yuming Zhu; Jihong Liu; Bo Li

2012-01-01

404

Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

ERIC Educational Resources Information Center

Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for calculating chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
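The idea of finding equilibrium by minimizing free energy subject to a material balance can be shown on the smallest possible case: a single ideal reaction A ⇌ B starting from 1 mol of A, where the balance reduces to one extent-of-reaction variable. The ΔG° value below is chosen arbitrarily so that the analytic answer (K = 2, hence ξ = 2/3) is easy to verify; this is an illustration, not the program from the article.

```python
import math

R, T = 8.314, 298.15          # J/(mol*K), K
dG0 = -R * T * math.log(2.0)  # standard dG chosen so that K = 2

def gibbs(xi):
    """Total Gibbs energy for A <-> B at extent xi, starting from
    1 mol of pure A (ideal mixture; total moles stay at 1, and the
    standard chemical potential of A is taken as the zero point)."""
    nA, nB = 1.0 - xi, xi
    g = nA * R * T * math.log(nA)
    g += nB * (dG0 + R * T * math.log(nB))
    return g

# G(xi) is strictly convex on (0, 1), so ternary search finds
# the unique minimum without needing derivatives.
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(200):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if gibbs(m1) < gibbs(m2):
        hi = m2
    else:
        lo = m1
xi_eq = (lo + hi) / 2
print(xi_eq)   # analytic answer: K / (1 + K) = 2/3
```

Setting dG/dξ = 0 gives ΔG° + RT·ln(ξ/(1-ξ)) = 0, i.e. ξ/(1-ξ) = K, so the numerical minimizer and the familiar equilibrium-constant condition agree.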

Heald, Emerson F.

1978-01-01

405

One of the objectives of the surveillance systems implemented by the French National Institute for Public Health Surveillance is to detect communicable diseases and to reduce their impact. For emerging infections, the detection and risk analysis pose specific challenges due to lack of documented criteria for the event. The surveillance systems detect a variety of events, or "signals" which represent a potential risk, such as a novel germ, a pathogen which may disseminate in a non-endemic area, or an abnormal number of cases for a well-known disease. These signals are first verified and analyzed, then classified as: potential public health threat, event to follow-up, or absence of threat. Through various examples, we illustrate the method and criteria which are used to analyze and classify these events considered to be emerging. The examples highlight the importance of host characteristics and exposure in groups at particular risk, such as professionals in veterinarian services, health care workers, travelers, immunodepressed patients, etc. The described method should allow us to identify future needs in terms of surveillance and to improve timeliness, quality of expertise, and feedback information regarding the public health risk posed by events which are insufficiently documented. PMID:21251782

Bitar, D; Che, D; Capek, I; de Valk, H; Saura, C

2011-02-01

406

3D modeling method for computer animation based on modified weak structured light method

NASA Astrophysics Data System (ADS)

A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such an optical measurement device is too expensive to be widely adopted, while, on the other hand, precision is not as critical a factor in that situation. In this paper, a new, inexpensive 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. An ordinary weak structured light configuration requires one or two reference planes, and the shadows on these planes must be tracked during scanning, which destroys the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is expanded considerably. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints into a full description of the object, and after a series of operations, a NURBS surface model is generated in the end. A complex toy bear is used to verify the efficiency of the method, with errors ranging from 0.7783 mm to 1.4326 mm compared with the ground truth measurement.

Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

2010-11-01

407

The philosopher Karl Popper was instrumental in delineating the scientific method by laying down the notions that falsification requires quantification of computational modelling error, and that deniability is at the heart

Hatton, Les

408

A New Method of Building Keyboarding Speed on the Computer.

ERIC Educational Resources Information Center

Use of digraphs (pairs of letters representing single speech sounds) in keyboarding is facilitated by computer technology allowing analysis of speed between keystrokes. New software programs provide a way to develop keyboarding speed. (SK)
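As a hedged illustration of the kind of analysis the abstract describes (not the software it reviews), inter-keystroke intervals can be grouped by the two-letter sequence typed. The `digraph_latencies` helper and the sample timestamps below are hypothetical.

```python
from collections import defaultdict

def digraph_latencies(keystrokes):
    """Group inter-keystroke intervals by the digraph (letter pair) typed.

    keystrokes: list of (char, timestamp_ms) in the order typed.
    Returns {digraph: mean latency in ms}.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for (a, t0), (b, t1) in zip(keystrokes, keystrokes[1:]):
        sums[a + b] += t1 - t0
        counts[a + b] += 1
    return {dg: sums[dg] / counts[dg] for dg in sums}

# Hypothetical capture of the word "that" typed twice:
strokes = [("t", 0), ("h", 120), ("a", 260), ("t", 390),
           ("t", 1000), ("h", 1100), ("a", 1230), ("t", 1340)]
print(digraph_latencies(strokes))
```

Slow digraphs (large mean latencies) are the natural targets for the speed-building drills the article discusses.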

Sharp, Walter M.

1998-01-01

409

Optimization methods for complex sheet metal stamping computer aided engineering

Nowadays, sheet metal stamping process design is not a trivial task due to the complex issues to be taken into account (complex shape forming, conflicting design goals and so on). Therefore, proper design methodologies to reduce times and costs have to be developed, mostly based on computer-aided procedures. In this paper, a computer-aided approach is proposed with the

Giuseppe Ingarao; Rosa Di Lorenzo

2010-01-01

410

Leading Computational Methods on Scalar and Vector HEC Platforms

The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than

Leonid Oliker; Jonathan Carter; Michael Wehner; Andrew Canning; Stéphane Ethier; Arthur Mirin; David Parks; Patrick H. Worley; Shigemune Kitawaki; Yoshinori Tsuda

2005-01-01

411

ACM Journal of Educational Resources in Computing, Vol. 7, No. 3, Art. 2. An application in the domain of cyber security education. Categories and Subject Descriptors: I.6 [Simulation and Modeling]: Animation, Evaluation/Methodology; K.3 [Computers and Education]: Computer Uses

412

Constructing analysis-suitable parameterization of the computational domain from the CAD boundary for 2D and 3D isogeometric applications. Isogeometric analysis is a computational approach that offers the possibility of seamless integration between CAD and CAE. The method uses

Paris-Sud XI, Université de

413

A New Analytical Method for Computing Solvent-Accessible Surface Area of Macromolecules

It is important to have an efficient algorithm for computing the solvent-accessible surface area of macromolecules. The computation of the solvent-accessible surface area is reduced to the problem of computing the corresponding curve integrals on the plane

414

European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2012)

Schillinger (1), Thomas J. R. Hughes (2), Ernst Rank (1). (1) Lehrstuhl für Computation in Engineering, Technische Universität München, Arcisstr. 21, 80333 München, Germany; e-mail: {schillinger,rank}@bv.tum.de. (2) Institute for Computational

Evans, John A.

415

Distributed privacy-preserving network size computation: A system-identification based method

Federica Garin and Ye Yuan. Abstract: In this study, we propose an algorithm for computing the network size … to correctly compute the number of nodes in the network. Moreover, numerical implementation has been taken

Paris-Sud XI, Université de

416

Thermal radiation view factor: Methods, accuracy and computer-aided procedures

NASA Technical Reports Server (NTRS)

The computer-aided thermal analysis programs which predict whether orbiting equipment will remain within a predetermined acceptable temperature range when stationed in various attitudes with respect to the Sun and the Earth are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and the standard methods which form the basis for the various digital computer methods, along with various numerical methods, are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and the weaknesses of the methods employed, the accuracy of the calculations and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and the future choices for efficient use of digital computers are included in the recommendations.
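None of the surveyed programs are reproduced in the abstract; as an illustrative sketch of one standard numerical scheme for view factors, the double-area integral can be evaluated by Monte Carlo sampling for two directly opposed parallel squares. The function name and sample counts are assumptions.

```python
import random, math

def viewfactor_parallel_squares(h, side=1.0, n=200_000, seed=1):
    """Monte Carlo estimate of the view factor F12 between two directly
    opposed, coaxial parallel squares of the given side, separated by h.

    F12 = (1/A1) * double integral over A1, A2 of cos(t1)cos(t2)/(pi r^2);
    for parallel plates cos(t1) = cos(t2) = h/r, so the integrand is
    h^2 / (pi r^4), sampled at uniform point pairs on the two plates.
    """
    rng = random.Random(seed)
    a2 = side * side
    acc = 0.0
    for _ in range(n):
        x1, y1 = rng.uniform(0, side), rng.uniform(0, side)
        x2, y2 = rng.uniform(0, side), rng.uniform(0, side)
        r2 = (x2 - x1) ** 2 + (y2 - y1) ** 2 + h * h
        acc += h * h / (math.pi * r2 * r2)
    return a2 * acc / n

print(viewfactor_parallel_squares(1.0))  # ~0.1998 for h = side (tabulated value)
```

Bringing the plates closer (smaller h) drives the view factor toward 1, which gives a quick sanity check on the estimator.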

Kadaba, P. V.

1982-01-01

417

The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

NASA Technical Reports Server (NTRS)

In this paper, we show how methods developed for solving a theoretical computer science problem, graph isomorphism, are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

Beltran, Adriana; Salvador, James

1997-01-01

418

This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
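A minimal sketch of the polynomial chaos idea behind SRSM, assuming a scalar model of one standard normal input: the output is projected onto probabilists' Hermite polynomials, and the mean and variance are read off the coefficients. The names and the toy model are illustrative, and plain seeded Monte Carlo stands in for the collocation scheme SRSM actually uses.

```python
import math, random

HERMITE = [
    lambda x: 1.0,             # He0
    lambda x: x,               # He1
    lambda x: x * x - 1.0,     # He2
    lambda x: x ** 3 - 3 * x,  # He3
]
FACT = [1.0, 1.0, 2.0, 6.0]    # k! = E[He_k(X)^2], the normalization

def pce_coefficients(model, order=3, n=200_000, seed=7):
    """Project model(X), X ~ N(0,1), onto probabilists' Hermite polynomials.

    By orthogonality, a_k = E[model(X) He_k(X)] / k!, estimated here by
    seeded Monte Carlo sampling of the standard normal input.
    """
    rng = random.Random(seed)
    sums = [0.0] * (order + 1)
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        fx = model(x)
        for k in range(order + 1):
            sums[k] += fx * HERMITE[k](x)
    return [s / (n * FACT[k]) for k, s in enumerate(sums)]

def pce_mean_var(coeffs):
    """Mean and variance follow directly from the expansion coefficients."""
    mean = coeffs[0]
    var = sum(FACT[k] * coeffs[k] ** 2 for k in range(1, len(coeffs)))
    return mean, var

model = lambda x: math.exp(0.3 * x)  # toy 'model output': a lognormal response
a = pce_coefficients(model)
mean, var = pce_mean_var(a)
print(mean, var)  # compare with the exact lognormal mean exp(0.045)
```

For this lognormal toy case the third-order truncation already recovers the exact mean and nearly all of the variance, which is the efficiency argument made for SRSM.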

419

Four-stage computational technology with adaptive numerical methods for computational aerodynamics

NASA Astrophysics Data System (ADS)

Computational aerodynamics is a key technology in aircraft design which runs ahead of physical experiment and complements it. All three components of computational modeling are actively developed: mathematical models of real aerodynamic processes, numerical algorithms, and high-performance computing. The most impressive progress has been made in the field of computing, though with a considerable complication of computer architecture. Numerical algorithms develop more conservatively; more precisely, they are proposed and theoretically justified for simpler mathematical problems. Nevertheless, computational mathematics has now amassed a whole palette of numerical algorithms that can provide acceptable accuracy and an interface between modern mathematical models in aerodynamics and high-performance computers. A significant step in this direction was the European project ADIGMA, whose positive experience will be used in the international project TRISTAM for further movement in the field of computational technologies for aerodynamics. This paper gives a general overview of the objectives and the approaches intended for use, and a description of the recommended four-stage computer technology.

Shaydurov, V.; Liu, T.; Zheng, Z.

2012-10-01

420

Permeability computation on a REV with an immersed finite element method

An efficient method to compute the permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale, and the flow motion is computed with a stabilized mixed finite element method. The Stokes equations are thus solved on the whole domain (including the solid part) using a penalty method. Accuracy is controlled by refining the mesh around the solid-fluid interface, defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the literature.

Laure, P. [Laboratoire J.-A. Dieudonne, CNRS UMR 6621, Universite de Nice-Sophia Antipolis, Parc Valrose, 06108 Nice, Cedex 02 (France); Puaux, G.; Silva, L.; Vincent, M. [MINES ParisTech, CEMEF-Centre de Mise en Forme des Materiaux, CNRS UMR 7635, BP 207 1 rue Claude, Daunesse 06904 Sophia Antipolis cedex (France)

2011-05-04

421

Progress Towards Computational Method for Circulation Control Airfoils

NASA Technical Reports Server (NTRS)

The compressible Reynolds-averaged Navier-Stokes equations are solved for circulation control airfoil flows. Numerical solutions are computed with both structured and unstructured grid solvers. Several turbulence models are considered, including the Spalart-Allmaras model with and without curvature corrections, the shear stress transport model of Menter, and the k-enstrophy model. Circulation control flows with jet momentum coefficients of 0.03, 0.10, and 0.226 are considered. Comparisons are made between computed and experimental pressure distributions, velocity profiles, Reynolds stress profiles, and streamline patterns. Including curvature effects yields the closest agreement with the measured data.

Swanson, R. C.; Rumsey, C. L.; Anders, S. G.

2005-01-01

422

Code of Federal Regulations, 2010 CFR

...private and public systems of communications. 90.483 Section 90.483 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY...private and public systems of communications. Interconnection...

2010-10-01

423

Code of Federal Regulations, 2013 CFR

...private and public systems of communications. 90.483 Section 90.483 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY...private and public systems of communications. Interconnection...

2013-10-01

424

Code of Federal Regulations, 2011 CFR

...private and public systems of communications. 90.483 Section 90.483 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY...private and public systems of communications. Interconnection...

2011-10-01

425

Code of Federal Regulations, 2012 CFR

2012-10-01

426

Real-Time Optic Flow Computation with Variational Methods

of the Lucas–Kanade method. For the linear system of equations resulting from the discretised Euler–Lagrange equations of the Horn and Schunck approach [8], and the local least-squares technique of Lucas and Kanade [9]. For the CLG method we

427

29 CFR 794.123 - Method of computing annual volume of sales.

Code of Federal Regulations, 2013 CFR

...2013-07-01 false Method of computing annual volume of sales. 794.123 Section 794.123...7(b)(3) of the Act Annual Gross Volume of Sales § 794.123 Method of computing annual volume of sales. (a) Where the...

2013-07-01

428

29 CFR 794.123 - Method of computing annual volume of sales.

Code of Federal Regulations, 2012 CFR

...2012-07-01 false Method of computing annual volume of sales. 794.123 Section 794.123...7(b)(3) of the Act Annual Gross Volume of Sales § 794.123 Method of computing annual volume of sales. (a) Where the...

2012-07-01

429

29 CFR 794.123 - Method of computing annual volume of sales.

Code of Federal Regulations, 2014 CFR

...2014-07-01 false Method of computing annual volume of sales. 794.123 Section 794.123...7(b)(3) of the Act Annual Gross Volume of Sales § 794.123 Method of computing annual volume of sales. (a) Where the...

2014-07-01

430

To improve introductory computer science courses and to update the teaching of computer programming, new teaching methods emphasizing structured programming and top-down design have been presented, and a variety of automated instructional tools have been developed. The purposes of this paper are: (1) to survey a number of methods and tools used in the teaching of programming; (2) to present,

Miguel Ulloa

1980-01-01

431

A finite element method for the computation of transonic flow past airfoils

NASA Technical Reports Server (NTRS)

A finite element method for the computation of transonic flow with shocks past airfoils is presented, using the artificial viscosity concept for the local supersonic regime. Generally, the classic element types do not meet the accuracy requirements of advanced numerical aerodynamics, requiring special attention to the choice of an appropriate element. A series of computed pressure distributions exhibits the usefulness of the method.

Eberle, A.

1980-01-01

432

29 CFR 794.123 - Method of computing annual volume of sales.

Code of Federal Regulations, 2010 CFR

... 3 2010-07-01 2010-07-01 false Method of computing annual volume of sales. 794.123 Section 794.123...Act Annual Gross Volume of Sales § 794.123 Method of computing annual volume of sales. (a) Where the...

2010-07-01

433

29 CFR 794.123 - Method of computing annual volume of sales.

Code of Federal Regulations, 2011 CFR

... 3 2011-07-01 2011-07-01 false Method of computing annual volume of sales. 794.123 Section 794.123...Act Annual Gross Volume of Sales § 794.123 Method of computing annual volume of sales. (a) Where the...

2011-07-01

434

FINITE VOLUME METHODS APPLIED TO THE COMPUTATIONAL MODELLING OF WELDING PHENOMENA

Computational Fluid Dynamics (CFD) and Computational Solid Mechanics (CSM). With regard to the CFD modelling of the weld pool fluid dynamics, heat … of reference solutions. KEYWORDS: Welding Phenomena, Fluid Dynamics, Solid Mechanics, Finite Volume Methods

Taylor, Gary

435

ERIC Educational Resources Information Center

The authors present reports of current research on distance education at the FernUniversitat in West Germany. Fritsch discusses adapting distance education techniques for small classes. Kuffner describes procedures for providing feedback to students using personalized computer-generated letters. Klute discusses using a computer with tutorial…

Fritsch, Helmut; And Others

1989-01-01

436

Multi-centred mixed-methods PEPFAR HIV care & support public health evaluation: study protocol

Background A public health response is essential to meet the multidimensional needs of patients and families affected by HIV disease in sub-Saharan Africa. In order to appraise current provision of HIV care and support in East Africa, and to provide evidence-based direction to future care programming, a Public Health Evaluation was commissioned by the PEPFAR programme of the US Government. Methods/Design This paper describes the two-phase international mixed-methods study protocol utilising longitudinal outcome measurement, surveys, patient and family qualitative interviews and focus groups, staff qualitative interviews, health economics and document analysis. Aim 1) To describe the nature and scope of HIV care and support in two African countries, including the types of facilities available, clients seen, and availability of specific components of care [Study Phase 1]. Aim 2) To determine patient health outcomes over time and principal cost drivers [Study Phase 2]. The study objectives are as follows. 1) To undertake a cross-sectional survey of service configuration and activity by sampling 10% of the facilities being funded by PEPFAR to provide HIV care and support in Kenya and Uganda (Phase 1) in order to describe care currently provided, including pharmacy drug reviews to determine availability and supply of essential drugs in HIV management. 2) To conduct patient focus group discussions at each of these facilities (Phase 1) to determine care received. 3) To undertake a longitudinal prospective study of 1200 patients who are newly diagnosed with HIV or patients with HIV who present with a new problem attending PEPFAR care and support services. Data collection includes self-reported quality of life, core palliative outcomes and components of care received (Phase 2). 4) To conduct qualitative interviews with staff, patients and carers in order to explore and understand service issues and care provision in more depth (Phase 2).
5) To undertake document analysis to appraise the clinical care procedures at each facility (Phase 2). 6) To determine principal cost drivers including staff, overhead and laboratory costs (Phase 2). Discussion This novel mixed-methods protocol will permit transparent presentation of subsequent dataset results for publication, and offers a substantive model of protocol design to measure and integrate key activities and outcomes that underpin a public health approach to disease management in a low-income setting. PMID:20920241

2010-01-01

437

Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Methods: Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Results: Interview analysis identified twenty-five information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create twenty-five scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. Conclusion: The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design. PMID:21807120

Reeder, Blaine; Turner, Anne M

2011-01-01

438

Analytical methods for computing the polar curves of airplanes

NASA Technical Reports Server (NTRS)

This report presents a method of calculating polar curves which is at least as precise as graphical methods but more rapid. Knowing the wind tunnel test results for a wing and the performance of an airplane with the same profile, it is easy to verify the characteristic coefficients and, at the same time, the methods for determining induced resistances.

LE SUEUR

1921-01-01

439

A rational interpolation method to compute frequency response

NASA Technical Reports Server (NTRS)

A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.
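The product-formulation details are not given in the abstract; for orientation, the following sketch shows the direct (naive) evaluation of a frequency response, the baseline that interpolation methods of this kind aim to approximate more cheaply. The polynomial coefficients and helper name are illustrative assumptions.

```python
import math

def freq_response(num, den, omegas):
    """Directly evaluate H(jw) = num(jw)/den(jw).

    num, den: polynomial coefficient lists, highest power first;
    each polynomial is evaluated at s = jw by Horner's rule.
    """
    def poly(c, s):
        acc = 0j
        for coef in c:
            acc = acc * s + coef
        return acc
    return [poly(num, 1j * w) / poly(den, 1j * w) for w in omegas]

# Lightly damped second-order system: H(s) = 1 / (s^2 + 0.2 s + 1), zeta = 0.1
omegas = [i / 1000 for i in range(1, 3000)]
H = freq_response([1.0], [1.0, 0.2, 1.0], omegas)
peak_w = omegas[max(range(len(H)), key=lambda i: abs(H[i]))]
print(peak_w)  # resonance at w_n * sqrt(1 - 2*zeta^2), just below w_n = 1
```

A sharp resonance like this is exactly where the abstract's pole/zero cancellation schemes matter: naive evaluation near a lightly damped pole is numerically delicate, which motivates the product formulation.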

Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

1993-01-01

440

Method for measuring the public's appreciation and knowledge of bank notes

NASA Astrophysics Data System (ADS)

No matter how sophisticated a banknote's security features are, they are only effective if the public uses them. Surveys conducted by De Nederlandsche Bank (the Dutch central bank, hereinafter: DNB) in the period 1989-1999 have shown that: the more people like a banknote, the more they know about it, including its security features; there is a positive correlation between the appreciation of a banknote (beautiful or ugly) and knowledge of its security features, its picture and text elements; hardly anybody from the general public knows more than 4 security features by heart, which is why the number of security features for the public should be confined to a maximum of 4; the average number of security features known to a Dutchman was about 1.7 in 1999; over the years, awareness of banknote security features gradually increased from 1.03 in 1983 to 1.7 in 1999, as a result of new banknote design and information campaigns. In 1999, DNB conducted its last opinion poll on NLG-notes. After the introduction of the euro banknotes on 1 January 2002, a new era of measurements will start. It is DNB's intention to apply the same method to the euro notes as it used for the NLG-notes, as this will permit: a comparison of the results of surveys on Dutch banknotes with those of surveys on the new euro notes (NLG) x (EUR); a comparison between the results of similar surveys conducted in other euro countries: (EUR1) x (EUR2). Furthermore, it will enable third parties to compare their banknote model XXX with the euro: (XXX) x (EUR). This article deals with the survey and the results regarding the NLG-notes and is, moreover, intended as an invitation to use the survey method described.

de Heij, Hans A. M.

2002-04-01

441

Verifying a computational method for predicting extreme ground motion

In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, B.T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

2011-01-01

442

New Methods of Mobile Computing: From Smartphones to Smart Education

ERIC Educational Resources Information Center

Every aspect of our daily lives has been touched by the ubiquitous nature of mobile devices. We have experienced an exponential growth of mobile computing--a trend that seems to have no limit. This paper provides a report on the findings of a recent offering of an iPhone Application Development course at Sheridan College, Ontario, Canada. It…

Sykes, Edward R.

2014-01-01

443

A Method for Attenuation Correction in Radionuclide Computed Tomography

The development of algorithms for Radionuclide Computed Tomography (RCT) is complicated by the presence of attenuation of gamma-rays inside the body. Some of the existing RCT reconstruction algorithms apply approximation formulas to the projection data for attenuation correction, while others take attenuation into account through some iterative procedures. The drawbacks of these algorithms are that the approximation formulas commonly used

Lee-Tzuu Chang

1978-01-01

444

Simple computer method provides contours for radiological images

NASA Technical Reports Server (NTRS)

The computer is provided with information concerning boundaries in the total image. The gradient of each point in the digitized image is calculated with the aid of a threshold technique; then a set of algorithms is invoked, designed to reduce the number of gradient elements and to retain only the major ones for definition of the contour.
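The gradient-plus-threshold step the brief describes can be sketched as follows; the original contour-reduction algorithms are not reproduced, and the function name and synthetic image are assumptions.

```python
def contour_points(img, thresh):
    """Mark pixels whose gradient magnitude exceeds a threshold.

    img: 2D list of intensities; central differences approximate the
    gradient, and thresholding keeps only the strong (contour) elements.
    """
    h, w = len(img), len(img[0])
    pts = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                pts.append((x, y))
    return pts

# Synthetic 'radiological' image: a bright 3x3 disc on a dark background.
img = [[100 if 2 <= x <= 4 and 2 <= y <= 4 else 0 for x in range(7)]
       for y in range(7)]
edges = contour_points(img, thresh=20.0)
print(edges)  # only pixels on or beside the disc boundary pass the threshold
```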

Newell, J. D.; Keller, R. A.; Baily, N. A.

1975-01-01

445

A Pathsearch Damped Newton Method for Computing General Equilibria

Science Foundation Grant CCR-9157632 and the Air Force Office of Scientific Research Grant F49620 … as a GAMS subsystem, using an interface library developed for this purpose. Computational results obtained … as the optimality conditions for nonlinear programming. VIs are common in mathematical programming, game theory

Ferris, Michael C.

446

Discovery of Regulatory Elements by a Computational Method

β-globin (Tagle et al. 1988), rbcL (Manen et al. 1994), cystic fibrosis transmembrane conductance regulator … and complex mechanisms that regulate gene expression. We focus on one important aspect of this challenge: the identification of binding sites for the factors involved in such regulation. A number of computer algorithms have

Batzoglou, Serafim

447

A Method of Assessing Users' vs Managers' Perceptions of Safety and Security Problems in Public Beach Park Settings. A thesis by Robert James Scott Steele, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 1986. Major subject: Recreation and Resource Development.

Steele, Robert James Scott

1986-01-01

448

Vectorization on the star computer of several numerical methods for a fluid flow problem

NASA Technical Reports Server (NTRS)

A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady-state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes, and a comparison is made of the methods for serial computation.
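The paper's Navier–Stokes solvers are not reproduced here; as a toy stand-in for the kind of serial-efficiency comparison it makes, two classic relaxation schemes can be applied to a 1D Laplace model problem and their iteration counts compared. All names and parameters are illustrative.

```python
def solve_laplace_1d(n, method, tol=1e-8, max_iter=100_000):
    """Iteratively solve u'' = 0 on n interior points with u(0)=0, u(1)=1.

    A stand-in model problem (not the cavity flow of the paper) used to
    compare iteration counts of two classic relaxation schemes.
    """
    u = [0.0] * (n + 2)
    u[-1] = 1.0
    for it in range(1, max_iter + 1):
        if method == "jacobi":
            new = u[:]
            for i in range(1, n + 1):
                new[i] = 0.5 * (u[i - 1] + u[i + 1])
            diff = max(abs(a - b) for a, b in zip(new, u))
            u = new
        else:  # gauss-seidel: updates in place, so it uses fresh neighbours
            diff = 0.0
            for i in range(1, n + 1):
                v = 0.5 * (u[i - 1] + u[i + 1])
                diff = max(diff, abs(v - u[i]))
                u[i] = v
        if diff < tol:
            return u, it
    return u, max_iter

_, it_j = solve_laplace_1d(31, "jacobi")
_, it_gs = solve_laplace_1d(31, "gauss-seidel")
print(it_j, it_gs)  # Gauss-Seidel needs roughly half the Jacobi iterations
```

Note the trade-off the paper studies: Jacobi's independent updates vectorize naturally, while Gauss-Seidel's in-place sweep is serially faster but data-dependent.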

Lambiotte, J. J., Jr.; Howser, L. M.

1974-01-01

449

Computational methods for constructing protein structure models from 3D electron microscopy maps

Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3 Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. PMID:23796504

Esquivel-Rodríguez, Juan; Kihara, Daisuke

2013-01-01

450

An historical survey of computational methods in optimal control.

NASA Technical Reports Server (NTRS)

Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

Polak, E.

1973-01-01

451

ERIC Educational Resources Information Center

The performance of methods for detecting publication bias in meta-analysis was evaluated using Monte Carlo methods. Four methods of bias detection were investigated: Begg's rank correlation, Egger's regression, funnel plot regression, and trim and fill. Five factors were included in the simulation design: number of primary studies in each…

Kromrey, Jeffrey D.; Rendina-Gobioff, Gianna

2006-01-01

452

Comparing subjective image quality measurement methods for the creation of public databases

NASA Astrophysics Data System (ADS)

The Single Stimulus (SS) method is often chosen to collect subjective data for testing no-reference objective metrics, as it is straightforward to implement and well standardized. At the same time, it exhibits some drawbacks: the spread between different assessors is relatively large, and the measured ratings depend on the quality range spanned by the test samples, hence the results from different experiments cannot easily be merged. The Quality Ruler (QR) method has been proposed to overcome these inconveniences. This paper compares the performance of the SS and QR methods for pictures impaired by Gaussian blur. The research goal is, on the one hand, to analyze the advantages and disadvantages of both methods for quality assessment and, on the other, to make quality data for blur-impaired images publicly available. The obtained results show that the confidence intervals of the QR scores are narrower than those of the SS scores. This indicates that the QR method enhances consistency across assessors. Moreover, QR scores exhibit a higher linear correlation with the distortion applied. In summary, for the purpose of building datasets of subjective quality, the QR approach seems promising from the viewpoint of both consistency and repeatability.

Redi, Judith; Liu, Hantao; Alers, Hani; Zunino, Rodolfo; Heynderickx, Ingrid

2010-01-01

453

Using Mixed Methods and Collaboration to Evaluate an Education and Public Outreach Program (Invited)

NASA Astrophysics Data System (ADS)

Traditional indicators (such as the number of participants or Likert-type ratings of participant perceptions) are often used to provide stakeholders with basic information about program outputs and to justify funding decisions. However, use of qualitative methods can strengthen the reliability of these data and provide stakeholders with more meaningful information about program challenges, successes, and ultimate impacts (Stern, Stame, Mayne, Forss, Davies & Befani, 2012). In this session, presenters will discuss how they used a mixed methods evaluation to determine the impact of an education and public outreach (EPO) program. EPO efforts were intended to foster more effective, sustainable, and efficient utilization of science discoveries and learning experiences through three main goals: 1) increase engagement and support by leveraging resources, expertise, and best practices; 2) organize a portfolio of resources for accessibility, connectivity, and strategic growth; and 3) develop an infrastructure to support coordination. The evaluation team used a mixed methods design to conduct the evaluation. Presenters will first discuss five potential benefits of mixed methods designs: triangulation of findings, development, complementarity, initiation, and value diversity (Greene, Caracelli & Graham, 1989). They will next demonstrate how a 'mix' of methods, including artifact collection, surveys, interviews, focus groups, and vignettes, was included in the EPO project's evaluation design, providing specific examples of how alignment between the program theory and the evaluation plan was best achieved with a mixed methods approach. The presentation will also include an overview of different mixed methods approaches and information about important considerations when using a mixed methods design, such as selection of data collection methods and sources, and the timing and weighting of quantitative and qualitative methods (Creswell, 2003).
Ultimately, this presentation will provide insight into how a mixed methods approach was used to provide stakeholders with important information about progress toward program goals. Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage. Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11(3), 255-274. Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the range of designs and methods for impact evaluation. Department for International Development.

Shebby, S.; Shipp, S. S.

2013-12-01

454

Systematic Methods for the Computation of the Directional Fields and Singular Points of Fingerprints

The first subject of the paper is the estimation of a high resolution directional field of fingerprints. Traditional methods are discussed and a method, based on principal component analysis, is proposed. The method not only computes the direction in any pixel location, but its coherence as well. It is proven that this method provides exactly the same results as the
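A generic version of the principal-component idea can be sketched as follows: within each block, the eigenvector belonging to the smaller eigenvalue of the gradient covariance gives the ridge orientation (the dominant gradient axis is orthogonal to the ridges), and the eigenvalue spread gives a coherence measure. The block size and the exact coherence formula below are assumptions of this sketch, not the authors' precise algorithm.

```python
import numpy as np

def directional_field(img, w=8):
    """Blockwise ridge orientation and coherence from the 2x2 covariance
    of image gradients (block size w is an assumption of this sketch)."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, cols
    H, W = img.shape
    dirs = np.zeros((H // w, W // w))
    coh = np.zeros_like(dirs)
    for i in range(H // w):
        for j in range(W // w):
            bx = gx[i * w:(i + 1) * w, j * w:(j + 1) * w].ravel()
            by = gy[i * w:(i + 1) * w, j * w:(j + 1) * w].ravel()
            C = np.cov(np.vstack([bx, by]))   # gradient covariance
            evals, evecs = np.linalg.eigh(C)  # eigenvalues in ascending order
            v = evecs[:, 0]                   # minor axis = ridge direction
            dirs[i, j] = np.arctan2(v[1], v[0])
            # Coherence in [0, 1]: 1 means one dominant local orientation.
            coh[i, j] = (evals[1] - evals[0]) / (evals[1] + evals[0] + 1e-12)
    return dirs, coh
```

On a synthetic vertical-stripe image the recovered orientation is vertical and coherence is near 1 in every block.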

Asker M. Bazen; Sabih H. Gerez

2002-01-01

455

An efficient method for computer aided analysis of noisy electromagnetic fields

This work presents an efficient method for the numerical simulation of noisy electromagnetic fields, accounting for arbitrary correlations between the noise radiation sources. It allows us to compute the spatial distribution of the spectral energy density. The method of moments is applied to model noisy electromagnetic fields by network methods using correlation matrix techniques. The method can be combined with available

Johannes A. Russer; Peter Russer

2011-01-01

456

Computational Intelligence, Volume 11, Number 1, 1995 A NEW METHOD FOR INFLUENCE DIAGRAM EVALUATION

This paper presents a two-phase method for influence diagram evaluation. In our method, an influence diagram is first … Unlike other methods in the literature, our method also provides a clean interface between influence diagram evaluation and Bayesian net…

Poole, David

457

Computational simulation of worker exposure using a particle trajectory method.

The velocity field downstream of a worker is approximated with a discrete vortex algorithm. This information is used to calculate trajectories of massless tracer 'particles' released from a point-source of contaminant. Concentrations in the plane of this source are estimated by averaging over a number of such trajectories. Approximations include: (1) representing the worker by a two-dimensional elliptical cylinder; and (2) representing tracer gas contaminant by massless particles generated without momentum. These particles are transported by both vortex shedding and turbulent diffusion. Computer-predicted mean concentrations in the near-wake region downstream of the worker compare well with results from wind-tunnel tracer gas experiments employing a mannequin. Subsequently, the concept of a computational breathing zone is introduced, and predictions of worker exposure are made. These simulations of time-integrated breathing zone concentration also compare well with measured values. PMID:7793747

Flynn, M R; Chen, M M; Kim, T; Muthedath, P

1995-06-01

458

Adaptive computational methods for SSME internal flow analysis

NASA Technical Reports Server (NTRS)

Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods) in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.
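The h-method idea can be illustrated with a toy 1-D sketch (not the project's SSME flow solver): bisect every interval whose local error indicator exceeds a tolerance, and stop once the mesh meets it. The midpoint-interpolation indicator here is a stand-in for the residual error estimators the abstract mentions.

```python
def adaptive_h_refine(f, xs, tol, max_iter=20):
    # Toy 1-D h-refinement: bisect any interval whose midpoint value
    # deviates from the linear interpolant by more than tol.
    xs = list(xs)
    for _ in range(max_iter):
        new, refined = [], False
        for left, right in zip(xs, xs[1:]):
            new.append(left)
            mid = 0.5 * (left + right)
            if abs(f(mid) - 0.5 * (f(left) + f(right))) > tol:
                new.append(mid)               # h-refine this interval
                refined = True
        new.append(xs[-1])
        xs = new
        if not refined:                       # mesh meets the tolerance
            break
    return xs
```

For f(x) = x^2 the indicator is h^2/4 on an interval of width h, so refinement proceeds uniformly until h^2/4 falls below the tolerance.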

Oden, J. T.

1986-01-01

459

A MULTI-SCALE METHOD FOR MASONRY COMPUTATIONS

This contribution presents a multi-scale framework for the representation of the non-linear behaviour of planar masonry structures based on computational homogenization techniques. In order to avoid the troublesome formulation of closed-form constitutive equations, the first-order multi-level finite element scheme is enhanced to capture the non-linear macroscopic behaviour of brick masonry in the presence of quasi-brittle damage. This multi-scale technique

T. J. Massart; R. H. J. Peerlings; M. G. D. Geers

460

Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

Alex J. Dragt

2012-08-31

461

Frequency response modeling and control of flexible structures: Computational methods

NASA Technical Reports Server (NTRS)

The dynamics of vibrations in flexible structures can be conveniently modeled in terms of frequency response models. For structural control such models capture the distributed parameter dynamics of the elastic structural response as an irrational transfer function. For most flexible structures arising in aerospace applications the irrational transfer functions which arise belong to a special class of pseudo-meromorphic functions which have only a finite number of right half plane poles. Computational algorithms are demonstrated for design of multiloop control laws for such models based on optimal Wiener-Hopf control of the frequency responses. The algorithms employ a sampled-data representation of irrational transfer functions which is particularly attractive for numerical computation. One key algorithm for the solution of the optimal control problem is the spectral factorization of an irrational transfer function. The basis for the spectral factorization algorithm is highlighted together with associated computational issues arising in optimal regulator design. Options for implementation of wide-band vibration control for flexible structures based on the sampled-data frequency response models are also highlighted. A simple flexible structure control example is considered to demonstrate the combined frequency response modeling and control algorithms.

Bennett, William H.

1989-01-01

462

Advanced Computational Methods for Security Constrained Financial Transmission Rights

Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver is also easily parallelizable, enabling further computational improvement.

Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria; Zhou, Ning; Huang, Zhenyu

2012-07-26

463

Computationally Efficient Method of Simulating Creation of Electropores

NASA Astrophysics Data System (ADS)

Electroporation, in which electric pulses create transient pores in the cell membrane, is an important technique for drug and DNA delivery. Electroporation kinetics is described by an advection-diffusion boundary value problem. This problem must be solved numerically with very small time and space steps, in order to resolve very fast processes occurring during pore creation. This study derives a reduced description of the pore creation transient. This description consists of a single integrodifferential equation for the transmembrane voltage V(t) and collateral formulas for computing the number of pores and the distribution of their radii from V(t). For pulse strengths corresponding to those used in drug and DNA delivery, relative differences in predictions of the reduced versus original problem are: voltage V(t), below 1%; number of pores, below 10%; pore radii, below 6%. Computational efficiency increases with the number of pores and thus with the pulse strength. For the strongest pulses, the run time of the reduced problem was below 1% of the original one. Such time savings can bridge the gap between problems that can be simulated on today's computers and problems that are of practical importance.

Neu, John; Krassowska, Wanda

2006-03-01

464

Meshless methods: A review and computer implementation aspects

The aim of this manuscript is to give a practical overview of meshless methods (for solid mechanics) based on global weak forms through a simple and well-structured MATLAB code, to illustrate our discourse. The source code is available for download on our website and should help students and researchers get started with some of the basic meshless methods; it includes

Vinh Phu Nguyen; Timon Rabczuk; Stéphane Bordas; Marc Duflot

2008-01-01

465

Computational Method for Electrical Potential and Other Field Problems

ERIC Educational Resources Information Center

Proposes the finite differences relaxation method as a teaching tool in secondary and university level courses discussing electrical potential, temperature distribution in a region, and similar problems. Outlines the theory and operating procedures of the method, and discusses examples of teaching applications, including possible laboratory…

Hastings, David A.

1975-01-01

466

Bayesian methods in bioinformatics and computational systems biology

Bayesian methods are valuable, inter alia, whenever there is a need to extract information from data that are uncertain or subject to any kind of error or noise (including measurement error and experimental error, as well as noise or random variation intrinsic to the process of interest). Bayesian methods offer a number of advantages over more conventional statistical techniques that

Darren J. Wilkinson

2007-01-01

467

Computational methods in metallic alloys within multiple scattering theory

Designing materials, particularly at the nano-scale, is an important scientific research area. It includes a large spectrum of basic science and technological developments. In order to provide results that are relevant to real materials, quantum mechanical simulations involving thousands to millions of atoms must be carried out. The locally self-consistent multiple scattering (LSMS) method is the method of choice for

Aurelian Rusanu

2005-01-01

468

Computational method for general multicenter electronic structure calculations

NASA Astrophysics Data System (ADS)

Here a three-dimensional fully numerical (i.e., chemical-basis-set-free) method [P. F. Batcho, Phys. Rev. A 57, 6 (1998)] is formulated and applied to the calculation of the electronic structure of general multicenter Hamiltonian systems. The numerical method is presented and applied to the solution of Schrödinger-type operators, where a given number of nuclear point singularities is present in the potential field. The numerical method combines the rapid "exponential" convergence rates of modern spectral methods with the multiresolution flexibility of finite element methods, and can be viewed as an extension of the spectral element method. The approximation of cusps in the wave function and the formulation of multicenter nuclear singularities are efficiently dealt with by the combination of a coordinate transformation and a piecewise variational spectral approximation. The complete system can be efficiently inverted by established iterative methods for elliptic partial differential equations; an application of the method is presented for atomic, diatomic, and triatomic systems, and comparisons are made to the literature when possible. In particular, local density approximations are studied within the context of Kohn-Sham density functional theory, and results are presented for selected subsets of atomic and diatomic molecules as well as the ozone molecule.

Batcho, P. F.

2000-06-01

469

New Computational Methods for the Prediction and Analysis of Helicopter Noise

NASA Technical Reports Server (NTRS)

This paper describes several new methods to predict and analyze rotorcraft noise. These methods are: 1) a combined computational fluid dynamics and Kirchhoff scheme for far-field noise predictions, 2) parallel computer implementation of the Kirchhoff integrations, 3) audio and visual rendering of the computed acoustic predictions over large far-field regions, and 4) acoustic tracebacks to the Kirchhoff surface to pinpoint the sources of the rotor noise. The paper describes each method and presents sample results for three test cases. The first case consists of in-plane high-speed impulsive noise and the other two cases show idealized parallel and oblique blade-vortex interactions. The computed results show good agreement with available experimental data but convey much more information about the far-field noise propagation. When taken together, these new analysis methods exploit the power of new computer technologies and offer the potential to significantly improve our prediction and understanding of rotorcraft noise.

Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

1996-01-01

470

Tracking Replicability as a Method of Post-Publication Open Evaluation

Recent reports have suggested that many published results are unreliable. To increase the reliability and accuracy of published papers, multiple changes have been proposed, such as changes in statistical methods. We support such reforms. However, we believe that the incentive structure of scientific publishing must change for such reforms to be successful. Under the current system, the quality of individual scientists is judged on the basis of their number of publications and citations, with journals similarly judged via numbers of citations. Neither of these measures takes into account the replicability of the published findings, as false or controversial results are often particularly widely cited. We propose tracking replications as a means of post-publication evaluation, both to help researchers identify reliable findings and to incentivize the publication of reliable results. Tracking replications requires a database linking published studies that replicate one another. As any such database is limited by the number of replication attempts published, we propose establishing an open-access journal dedicated to publishing replication attempts. Data quality of both the database and the affiliated journal would be ensured through a combination of crowd-sourcing and peer review. As reports in the database are aggregated, ultimately it will be possible to calculate replicability scores, which may be used alongside citation counts to evaluate the quality of work published in individual journals. In this paper, we lay out a detailed description of how this system could be implemented, including mechanisms for compiling the information, ensuring data quality, and incentivizing the research community to participate. PMID:22403538

Hartshorne, Joshua K.; Schachner, Adena

2011-01-01

471

ERIC Educational Resources Information Center

Information is provided on a practicum that addressed the lack of access to computer-aided instruction by elementary level students with learning disabilities, due to lack of diverse software, limited funding, and insufficient teacher training. The strategies to improve the amount of access time included: increasing the number of computer programs…

McInturff, Johanna R.

472

A New Method to Compute Standard-Weight Equations That Reduces Length-Related Bias

We propose a new method for developing standard-weight (Ws) equations for use in the computation of relative weight (Wr) because the regression line–percentile (RLP) method often leads to length-related biases in Ws equations. We studied the structural properties of Ws equations developed by the RLP method through simulations, identified reasons for biases, and compared Ws equations computed by the RLP
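For context, a standard-weight equation is conventionally log-linear, log10(Ws) = A + B·log10(L), and relative weight is Wr = 100·W/Ws. The coefficients below are hypothetical placeholders for illustration only, not output of the RLP method or of the new method proposed here.

```python
import math

# Hypothetical standard-weight coefficients (placeholders, not a published
# equation): log10(Ws) = A + B * log10(total length in mm).
A, B = -5.0, 3.1

def standard_weight(length_mm):
    # Ws: the benchmark weight for a fish of this length.
    return 10 ** (A + B * math.log10(length_mm))

def relative_weight(weight_g, length_mm):
    # Wr = 100 * W / Ws: a fish at exactly standard weight scores 100.
    return 100.0 * weight_g / standard_weight(length_mm)
```

The length-related bias the paper addresses shows up as systematic drift of mean Wr with L; an unbiased Ws equation keeps it flat across length classes.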

Kenneth G. Gerow; Richard C. Anderson-Sprecher; Wayne A. Hubert

2005-01-01

473

A rapid method for the computation of equilibrium chemical composition of air to 15000 K

NASA Technical Reports Server (NTRS)

A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+, are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results is included.
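The flavor of combining a nonlinear equilibrium relation with mass balance admits a one-reaction analogue, O2 ⇌ 2 O, which even has a closed form. This toy is far simpler than the paper's 11-species system, and the equilibrium constant Kp is an assumed input, not a value from the report.

```python
import math

def o2_dissociation(Kp, p=1.0):
    """Mole fraction of atomic O for O2 <-> 2 O at pressure p (atm), given
    Kp = x_O^2 * p / x_O2. With alpha the degree of dissociation,
    x_O = 2a/(1+a) and x_O2 = (1-a)/(1+a), so Kp = 4 a^2 p / (1 - a^2)
    and a = sqrt(Kp / (Kp + 4 p))."""
    a = math.sqrt(Kp / (Kp + 4.0 * p))
    return 2.0 * a / (1.0 + a)
```

With eleven species, such substitutions no longer close in one line, which is why the paper reduces the system algebraically before iterating.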

Prabhu, Ramadas K.; Erickson, Wayne D.

1988-01-01

474

Describes the use of Geographical Information Systems (GIS) as decision support tools in public libraries in England. A GIS is a computer software system that represents data in a geographic dimension. GIS as a decision support tool in public libraries is in its infancy; only seven out of 40 libraries contacted in the survey have GIS projects, three of

Andrew M. Hawkins

1994-01-01

475

Shielding analysis methods available in the scale computational system

Computational tools have been included in the SCALE system to allow shielding analysis to be performed using both discrete-ordinates and Monte Carlo techniques. One-dimensional discrete ordinates analyses are performed with the XSDRNPM-S module, and point dose rates outside the shield are calculated with the XSDOSE module. Multidimensional analyses are performed with the MORSE-SGC/S Monte Carlo module. This paper will review the above modules and the four Shielding Analysis Sequences (SAS) developed for the SCALE system. 7 refs., 8 figs.

Parks, C.V.; Tang, J.S.; Hermann, O.W.; Bucholz, J.A.; Emmett, M.B.

1986-01-01

476

A method of computational magnetohydrodynamics defining stable Scyllac equilibria

A computer code has been developed for the numerical calculation of sharp boundary equilibria of a toroidal plasma with diffuse pressure profile. This generalizes earlier work that was done separately on the sharp boundary and diffuse models, and it allows for large amplitude distortions of the plasma in three-dimensional space. By running the code, equilibria that are stable to the so-called m = 1, k = 0 mode have been found for Scyllac, which is a high beta toroidal confinement device of very large aspect ratio. PMID:16592383

Betancourt, Octavio; Garabedian, Paul

1977-01-01

477

Combinatorial Methods for Computing: Plethysms of Schur Functions.

NASA Astrophysics Data System (ADS)

The plethysm of two Schur functions $s_\lambda[s_\mu]$ was first introduced by Littlewood (13). Littlewood showed that for any partitions $\lambda$ of $m$ and $\mu$ of $n$, $s_\lambda[s_\mu] = \sum_\nu c_{\lambda,\mu}^{\nu} s_\nu$, where the sum runs over all partitions $\nu$ of $mn$ and the $c_{\lambda,\mu}^{\nu}$ are nonnegative integers. The problem of computing the coefficients $c_{\lambda,\mu}^{\nu}$ is one of the fundamental open problems in the theory of symmetric functions. In this thesis, we focus on the problem of computing the plethysms $s_2[s_\mu]$ and $s_{1^2}[s_\mu]$. The problem of computing the Schur function expansion of $s_2[s_{(n)}]$ and $s_{1^2}[s_{(n)}]$ was solved by Littlewood. Recently, Carbonara, Remmel, and Yang (3) gave explicit formulas for the Schur function expansion of the plethysms $s_2[s_\mu]$ and $s_{1^2}[s_\mu]$ where $\mu$ is a hook shape, i.e., $\mu$ is of the form $(1^k, l)$ where $1 \le l$. Building on the ideas of Carbonara, Remmel, and Yang, we show that one can develop efficient algorithms for computing the Schur function expansion of the plethysms $s_2[s_\mu]$ and $s_{1^2}[s_\mu]$ where $\mu$ is of the form $(1^r, n^k)$ where $1 \le n$, $(n^r, s)$ where $n \le s$, $(r, n^k)$ where $r \le n$, or $((n-1)^r, n^k)$. The significance of these shapes is that they form a complete list of all shapes $\mu$ such that $p_2[s_\mu]$ is multiplicity free, i.e., those shapes for which the coefficients $\langle p_2[s_\mu], s_\lambda \rangle \in \{0, \pm 1\}$. As an application of our algorithms, we derive explicit formulas for the Schur function expansion of the plethysms $s_2[s_\mu]$ and $s_{1^2}[s_\mu]$ where $\mu$ has either two rows or two columns.

Carini, Luisa

1995-01-01

478

Computation of nonparametric convex hazard estimators via profile methods

This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females. PMID:20300560
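The second step, maximising a quasi-concave profile over the location of the antimode, can be sketched with a generic unimodal line search. The golden-section routine below is a variant of the bisection idea the paper describes, and the objective used in the test is a stand-in for the profile log-likelihood, not the convex-hazard likelihood itself.

```python
import math

def golden_max(f, lo, hi, tol=1e-8):
    # Golden-section search for the maximizer of a quasi-concave
    # (unimodal) function on [lo, hi]; reuses one function value per step.
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:            # maximizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + g * (b - a)
            fd = f(d)
        else:                  # maximizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - g * (b - a)
            fc = f(c)
    return (a + b) / 2
```

Quasi-concavity of the profile is exactly what licenses this kind of interval-shrinking search: every discarded subinterval provably excludes the maximizer.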

Jankowski, Hanna K.; Wellner, Jon A.

2010-01-01

479

Imaging and Computational Methods for Exploring Sub-cellular Anatomy

of the requirements for the degree of DOCTOR OF PHILOSOPHY Approved by: Chair of Committee, John Keyser Committee Members, Yoonsuck Choe Louise Abbott Donald House Head of Department, Valerie Taylor May 2009 Major Subject: Computer Science iii ABSTRACT Imaging... that doesn't always cooperate; Louise for walking me through everything I've ever learned about biology and neuroanatomy; and Don for being the best lecturer I've had and teaching me everything I know about Physically Based Modeling. I would also like...

Mayerich, David

2010-01-16

480

3D point vortex methods for parallel flow computations

This paper considers the use of discrete point vortices in 3-D (or "vortons"), to compute the development of unsteady incompressible flows, using a parallel distributed memory architecture. The implementation of the basic inviscid vortex algorithm is described, and it is shown that high efficiencies are readily achieved (using a 32-transputer MIMD machine as a testbed). Procedures to model the vortex stretching and tilting, to approximate the far-field interactions, and to allow viscous diffusion to be modelled are discussed.

Doorly, D.J.; Hilka, M. [Imperial College, London (United Kingdom)]

1993-12-31

481

NASA Technical Reports Server (NTRS)

Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

2012-01-01

482

A FILTER METHOD WITH UNIFIED STEP COMPUTATION FOR ...

“infeasible” interior-point methods avoid this defect to some degree. … solving multiple quadratic programs during each iteration. … are linear and quadratic model approximations, respectively, of the objective function f for a given…

2013-05-09

483

Statistical and Computational Methods for Comparative Proteomic Profiling Using Liquid

…analysis, with an emphasis on methods that can be applied to improve the dependability of biological … the search for interesting and relevant proteomic patterns remains a challenging task. Capillary-scale HPLC…

Roweis, Sam

484

Computational methods for high-throughput pooled genetic experiments

Advances in high-throughput DNA sequencing have created new avenues of attack for classical genetics problems. This thesis develops and applies principled methods for analyzing DNA sequencing data from multiple pools of ...

Edwards, Matthew Douglas

2011-01-01

485

The Voronoi Implicit Interface Method for computing multiphase physics

We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces. PMID:22106269

Saye, Robert I.; Sethian, James A.

2011-01-01

486

Comparison of Two Numerical Methods for Computing Fractal Dimensions

NASA Astrophysics Data System (ADS)

From cosmology to economics, examples of fractals can be found virtually everywhere. However, since few fractals permit the analytical evaluation of generalized fractal dimensions or Rényi dimensions, the search for effective numerical methods is inevitable. In this project two promising numerical methods for obtaining generalized fractal dimensions, based on the distribution of distances within a set, are examined. They can be applied, in principle, to any set even if no closed-form expression is available. The biggest advantage of these methods is their ability to generate a spectrum of generalized dimensions almost simultaneously. It should be noted that this feature is essential to the analysis of multifractals. As a test of their effectiveness, here the methods were applied to the generalized Cantor set and the multiplicative binomial process. The generalized dimensions of both sets can be readily derived analytically, thus enabling the accuracy of the numerical methods to be verified. Here we will present a comparison of the analytical results and the predictions of the methods. We will show that, while they are effective, care must be taken in their interpretation.
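One distance-distribution estimator of this kind is the correlation (Rényi q = 2) dimension: the slope of log C(r) versus log r, where C(r) is the fraction of point pairs closer than r. This is a hedged sketch, not the project's exact method; the scaling range is treated as a user choice, and the test set is the ordinary middle-third Cantor set rather than the generalized one studied in the project.

```python
import numpy as np

def correlation_dimension(points, r_small, r_large, n_r=10):
    """Estimate the correlation dimension from the slope of
    log C(r) vs log r over the chosen scaling range [r_small, r_large]."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    pair_d = dist[np.triu_indices(len(points), k=1)]   # unique pairs
    rs = np.geomspace(r_small, r_large, n_r)
    C = np.array([(pair_d < r).mean() for r in rs])    # correlation sum
    slope, _ = np.polyfit(np.log(rs), np.log(C), 1)
    return slope
```

For the middle-third Cantor set the exact value is log 2 / log 3 ≈ 0.631; finite depth and finite sample size bias the estimate, which is the kind of interpretation caveat the abstract warns about.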

Shiozawa, Yui; Miller, Bruce; Rouet, Jean-Louis

2012-10-01

487

Molecular solutions of the RSA public-key cryptosystem on a DNA-based computer

The RSA public-key cryptosystem is an algorithm that converts a plain-text to its corresponding cipher-text, and then converts the cipher-text back into its corresponding plain-text. In this article, we propose five DNA-based algorithms—parallel adder, parallel subtractor, parallel multiplier, parallel comparator, and parallel modular arithmetic—that construct molecular solutions for any (plain-text, cipher-text) pair for the RSA public-key cryptosystem. Furthermore, we demonstrate
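For orientation, the arithmetic being parallelized is ordinary textbook RSA. Below is a minimal conventional (non-DNA) round trip with toy parameters far too small for real use; it only illustrates the modular exponentiation that the paper's molecular adders, multipliers, and modular-arithmetic constructions implement.

```python
# Textbook RSA with toy parameters (illustrative only, not secure).
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e

m = 42                       # a plain-text block, 0 <= m < n
c = pow(m, e, n)             # encrypt: c = m^e mod n
m_back = pow(c, d, n)        # decrypt: m = c^d mod n
```

The three-argument `pow` performs fast modular exponentiation; `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse.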

Weng-Long Chang; Kawuu Weicheng Lin; Ju-Chin Chen; Chih-Chiang Wang; Lai Chin Lu; Minyi Guo

488

Integrated method of stereo matching for computer vision

NASA Astrophysics Data System (ADS)

Matching a stereo image pair is an important problem in computer vision: only when the stereo matching problem is solved can objects be accurately located or measured. In this paper, an integrated stereo matching approach is presented. Unlike most stereo matching approaches, it integrates area-based and feature-based primitives, allowing it to take advantage of the unique attributes of each technique. The feature-based process matches image features and provides a precise sparse disparity map together with accurate locations of discontinuities. The area-based process matches the continuous surfaces and provides a dense disparity map; adaptive matching windows are adopted to make the area-based results highly precise. An integration step then combines the results of the feature-based and area-based processes, so that the approach provides not only a dense disparity map but also accurate locations of discontinuities. The approach has been tested on synthetic and natural images. The results for a matched wedding cake and a matched aircraft model show that surfaces and configurations are well reconstructed. The integrated stereo matching approach can be used for 3D part recognition in intelligent assembly systems and computer vision.
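
The area-based half of such an approach can be sketched as a sum-of-squared-differences (SSD) window match on a synthetic rectified pair; the window size and search range below are arbitrary illustrative choices, not the paper's adaptive-window scheme.

```python
import random

# Synthetic rectified stereo pair: the left image is the right image
# shifted horizontally by SHIFT pixels, so the true disparity is SHIFT.
W, H, SHIFT = 64, 16, 5
random.seed(0)
right = [[random.random() for _ in range(W)] for _ in range(H)]
left = [[right[y][max(x - SHIFT, 0)] for x in range(W)] for y in range(H)]

def ssd(y, x, d, win=3):
    """Sum of squared differences between (2*win+1)^2 windows at disparity d."""
    total = 0.0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            total += (left[y + dy][x + dx] - right[y + dy][x + dx - d]) ** 2
    return total

# Area-based matching at one pixel: pick the disparity minimizing window SSD.
y, x = H // 2, W // 2
best = min(range(10), key=lambda d: ssd(y, x, d))
print("estimated disparity:", best)   # recovers SHIFT = 5
```

A feature-based process would instead match sparse edges or corners, and the integration step would reconcile the two disparity estimates.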

Xiong, Yingen; Wang, Dezong; Zhang, Guangzhao

1996-11-01

489

Infodemiology can be defined as the science of the distribution and determinants of information in an electronic medium, specifically the Internet, or in a population, with the ultimate aim to inform public health and public policy. Infodemiology data can be collected and analyzed in near real time. Examples of infodemiology applications include: the analysis of queries from Internet search engines to predict disease outbreaks (e.g., influenza); monitoring people's status updates on microblogs such as Twitter for syndromic surveillance; detecting and quantifying disparities in health information availability; identifying and monitoring public-health-relevant publications on the Internet (e.g., anti-vaccination sites, but also news articles or expert-curated outbreak reports); automated tools to measure information diffusion and knowledge translation; and tracking the effectiveness of health marketing campaigns. Moreover, analyzing how people search and navigate the Internet for health-related information, as well as how they communicate and share this information, can provide valuable insights into the health-related behavior of populations. Seven years after the infodemiology concept was first introduced, this paper revisits the emerging fields of infodemiology and infoveillance and proposes an expanded framework, introducing some basic metrics such as information prevalence, concept occurrence ratios, and information incidence. The framework distinguishes supply-based applications (analyzing what is being published on the Internet, e.g., on Web sites, newsgroups, blogs, microblogs, and social media) from demand-based methods (search and navigation behavior), and further distinguishes passive from active infoveillance methods. Infodemiology metrics follow population-health-relevant events or predict them. Thus, these metrics and methods are potentially useful for public health practice and research, and should be further developed and standardized. PMID:19329408

2009-01-01

490

NASA Astrophysics Data System (ADS)

Governments worldwide are concerned with the efficient production of services for their customers. To improve the quality of services and to make service production more efficient, information and communication technology (ICT) is widely exploited in public administration (PA). Succeeding in this exploitation calls for large-scale planning that embraces issues from the strategic to the technological level. In this planning, the notion of enterprise architecture (EA) is commonly applied. One of the sub-architectures of EA is business architecture (BA). BA planning is challenging in PA due to the large number of stakeholders, the wide set of customers, and the solid, hierarchical structures of organizations. To support EA planning in Finland, a project to engineer a government EA (GEA) method was launched. In this chapter, we analyze the discussions and outputs of the project workshops and reflect on the issues that emerged in light of the current e-government literature. We offer insights into, and suggestions for, government BA and its development.

Valtonen, Katariina; Leppänen, Mauri

491

A new method to compute standard-weight equations that reduces length-related bias

We propose a new method for developing standard-weight (Ws) equations for use in the computation of relative weight (Wr) because the regression line-percentile (RLP) method often leads to length-related biases in Ws equations. We studied the structural properties of Ws equations developed by the RLP method through simulations, identified reasons for biases, and compared Ws equations computed by the RLP method and the new method. The new method is similar to the RLP method but is based on means of measured weights rather than on means of weights predicted from regression models. The new method also models curvilinear Ws relationships not accounted for by the RLP method. For some length-classes in some species, the relative weights computed from Ws equations developed by the new method were more than 20 Wr units different from those using Ws equations developed by the RLP method. We recommend assessment of published Ws equations developed by the RLP method for length-related bias and use of the new method for computing new Ws equations when bias is identified. © Copyright by the American Fisheries Society 2005.
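
For context, relative weight is computed as Wr = 100 W / Ws, where Ws typically comes from an equation of the form log10(Ws) = a + b log10(L). The coefficients below are hypothetical, chosen only to illustrate the computation (they come from neither the RLP method nor the new method):

```python
import math

# Hypothetical standard-weight coefficients (length in mm, weight in g):
# log10(Ws) = a + b * log10(L). Illustrative values only.
a, b = -5.0, 3.0

def standard_weight(length_mm):
    """Ws predicted by the (hypothetical) standard-weight equation."""
    return 10 ** (a + b * math.log10(length_mm))

def relative_weight(weight_g, length_mm):
    """Relative weight Wr = 100 * W / Ws."""
    return 100 * weight_g / standard_weight(length_mm)

# A 400 mm fish weighing 700 g against this hypothetical standard:
print(round(relative_weight(700, 400), 1))   # -> 109.4
```

The length-related bias the authors describe means that, for a poorly developed Ws equation, Wr values drift systematically with L even for fish in identical condition.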

Gerow, K.G.; Anderson-Sprecher, R. C.; Hubert, W.A.

2005-01-01

492

Computation of molecular electrostatics with boundary element methods.

In continuum approaches to molecular electrostatics, the boundary element method (BEM) can provide accurate solutions to the Poisson-Boltzmann equation. However, the numerical aspects of this method pose significant problems. We describe our approach, applying an alpha shape-based method to generate a high-quality mesh, which represents the shape and topology of the molecule precisely. We also describe an analytical method for mapping points from the planar mesh to their exact locations on the surface of the molecule. We demonstrate that the derivative boundary integral formulation has numerical advantages over the nonderivative formulation: the well-conditioned influence matrix can be maintained without deterioration of the condition number as the number of mesh elements scales up. Singular integrand kernels are characteristic of the BEM, and their accurate integration is an important issue. We describe variable transformations that allow accurate numerical integration; this is the only plausible integral evaluation method when using curve-shaped boundary elements. PMID:9336178
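
The role of variable transformations for singular kernels can be seen in a one-dimensional analogue: substituting x = t^2 removes a 1/sqrt(x) singularity, because dx = 2t dt cancels it and leaves a smooth integrand. This toy example is not the paper's BEM scheme, only the underlying idea.

```python
import math

# Composite midpoint rule, used for both versions below.
def midpoint(f, a, b, n=1000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Target: integral of cos(x)/sqrt(x) over [0, 1] (about 1.80905).
# Naive quadrature struggles near the x = 0 singularity ...
naive = midpoint(lambda x: math.cos(x) / math.sqrt(x), 0.0, 1.0)
# ... but after x = t**2 the integrand becomes 2*cos(t**2), smooth on [0, 1].
transformed = midpoint(lambda t: 2 * math.cos(t * t), 0.0, 1.0)

print(f"naive = {naive:.5f}, transformed = {transformed:.5f}")
```

With the same number of function evaluations, the transformed version is accurate to a few parts in a million, while the naive version is off by roughly one percent.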

Liang, J; Subramaniam, S

1997-01-01

493

An Overview of Public Access Computer Software Management Tools for Libraries

ERIC Educational Resources Information Center

An IT decision maker gives an overview of public access PC software that's useful in controlling session length and scheduling, Internet access, print output, security, and the latest headaches: spyware and adware. In this article, the author describes a representative sample of software tools in several important categories such as setup…

Wayne, Richard

2004-01-01

494

Computational methods of robust controller design for aerodynamic flutter suppression

NASA Technical Reports Server (NTRS)

The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth-order random examples. A literature review of robust controller design methods follows, including a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
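
In its simplest form, Riccati iteration is a fixed-point sweep of the Riccati recursion. A scalar discrete-time sketch (illustrative only; the report treats high-order systems with its own numerical method):

```python
# Scalar discrete-time system x' = a*x + b*u with cost weights q, r.
a, b, q, r = 0.9, 1.0, 1.0, 1.0

# Riccati iteration: sweep the Riccati recursion to a fixed point.
p = q
for _ in range(200):
    p = a * p * a - (a * p * b) ** 2 / (r + b * p * b) + q

# At convergence, p satisfies the discrete algebraic Riccati equation,
# and the optimal state-feedback gain follows from it.
residual = a * p * a - (a * p * b) ** 2 / (r + b * p * b) + q - p
gain = a * p * b / (r + b * p * b)
print(f"p = {p:.4f}, gain = {gain:.4f}, residual = {residual:.2e}")
```

For matrix-valued systems the same sweep applies with matrix inverses in place of the scalar division, which is where the numerical care discussed in the report becomes important.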

Anderson, L. R.

1981-01-01

495

Solution NMR and Computational Methods for Understanding Protein Allostery

Allosterism is an essential biological regulatory mechanism. In enzymes, allosteric regulation results in an activation or inhibition of catalytic turnover. The mechanisms by which this is accomplished are unclear and vary significantly depending on the enzyme. It is commonly the case that a metabolite binds to the enzyme at a site distant from the catalytic site yet its binding is coupled to and sensed by the active site. This coupling can manifest in changes in structure, dynamics, or both at the active site. These interactions between allosteric and active site, which are often quite distant from one another involve numerous atoms as well as complex conformational rearrangements of the protein secondary and tertiary structure. Interrogation of this complex biological phenomenon necessitates multiple experimental approaches. In this article, we outline a combined solution NMR spectroscopic and computational approach using molecular dynamics and network models to uncover mechanistic aspects of allostery in the enzyme imidazole glycerol phosphate synthase. PMID:23445323

Manley, Gregory; Rivalta, Ivan

2014-01-01

496

Standardized development of computer software. Part 1: Methods

NASA Technical Reports Server (NTRS)

This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

Tausworthe, R. C.

1976-01-01

497

Comparison of Monte Carlo methods for fluorescence molecular tomography—computational efficiency

Purpose: The Monte Carlo method is an accurate model for time-resolved quantitative fluorescence tomography. However, this method suffers from low computational efficiency due to the large number of photons required for reliable statistics. This paper presents a comparison study of the computational efficiency of three Monte Carlo-based methods for time-domain fluorescence molecular tomography. Methods: The methods investigated to generate time-gated Jacobians were the perturbation Monte Carlo (pMC) method, the adjoint Monte Carlo (aMC) method, and the mid-way Monte Carlo (mMC) method. The effects of the different parameters that affect the computation time and statistical reliability were evaluated. The methods were also applied to a set of experimental data for tomographic application. Results: In silico results establish that the investigated parameters affect the computational time of the three methods differently (linearly, quadratically, or not significantly). Moreover, the noise level of the Jacobian varies when these parameters change. The experimental results in preclinical settings demonstrate the feasibility of using both the aMC and pMC methods for time-resolved whole-body studies in small animals within a few hours. Conclusions: Among the three Monte Carlo methods, the mMC method is computationally prohibitive and not well suited for time-domain fluorescence tomography applications. The pMC method is advantageous over the aMC method when early gates are employed and a large number of detectors is present. Alternatively, the aMC method is the method of choice when a small number of source-detector pairs are used. PMID:21992393
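
The statistics problem the paper addresses can be seen in a minimal photon Monte Carlo: estimating transmission through a purely absorbing slab and comparing against the Beer-Lambert law. This is far simpler than the time-resolved fluorescence models in the paper, and all parameters are illustrative.

```python
import math
import random

# Photon budget and optical properties (illustrative values).
random.seed(1)
mu_a, thickness, n_photons = 1.0, 1.0, 200_000

transmitted = 0
for _ in range(n_photons):
    # Free path before absorption, sampled from an exponential distribution.
    path = -math.log(1.0 - random.random()) / mu_a
    if path > thickness:
        transmitted += 1

estimate = transmitted / n_photons
exact = math.exp(-mu_a * thickness)   # Beer-Lambert: about 0.3679
print(f"MC estimate = {estimate:.4f}, exact = {exact:.4f}")
```

The relative error of such an estimate shrinks only as the inverse square root of the photon count, which is why techniques like pMC and aMC that reuse photon histories are attractive.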

Chen, Jin; Intes, Xavier

2011-01-01

498

models for studying trade-offs in the area of renewable energy. This Expedition has been organized, atmospheric science, materials science, renewable energy, and biological and environmental engineering, and dynamical systems. Furthermore, it requires the design and development of models that enable computationally

Gomes, Carla P.

499

A method to decrease computation time for fourth order Lucas sequence

NASA Astrophysics Data System (ADS)

The fourth-order Lucas sequence is a linear recurrence relation related to a quartic polynomial and based on the Lucas function. This sequence has been used to develop the LUC4,6 cryptosystem. Efficiency is one of the crucial aspects of a cryptosystem, and here it depends on the computation time of the Lucas sequence used in the encryption and decryption processes of the LUC4,6 cryptosystem. In this paper, a method is proposed to decrease the computation time of the fourth-order Lucas sequence by omitting some terms of the sequence. Thus, if the LUC4,6 cryptosystem uses this method to compute plaintexts and ciphertexts, the computation time is decreased.
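
For orientation, a fourth-order linear recurrence modulo n can be evaluated by the plain term-by-term iteration below; this is the baseline that term-omitting methods speed up. The coefficients and initial terms here are illustrative (checked against the tetranacci numbers), not the LUC4,6 values, which come from its quartic characteristic polynomial.

```python
# V(k) = p*V(k-1) - q*V(k-2) + r*V(k-3) - s*V(k-4)  (mod n)
def fourth_order_sequence(p, q, r, s, init, k, n):
    """Return V(k) mod n, given the four initial terms V(0)..V(3) in init."""
    v = list(init)
    if k < 4:
        return v[k] % n
    for _ in range(k - 3):
        v.append((p * v[-1] - q * v[-2] + r * v[-3] - s * v[-4]) % n)
        v.pop(0)   # keep only the last four terms
    return v[-1]

# Sanity check with the tetranacci numbers 0, 0, 0, 1, 1, 2, 4, 8, 15, 29, 56, ...
# (each term is the sum of the previous four):
print(fourth_order_sequence(1, -1, 1, -1, (0, 0, 0, 1), 10, 10**9))   # -> 56
```

The plain iteration costs one recurrence step per index, so any scheme that safely omits terms reduces the dominant cost of encryption and decryption.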

Jin, Wong Tze; Said, Mohd. Rushdan Md.; Othman, Mohamed; Feng, Koo Lee

2013-09-01