Note: This page contains sample records for the topic methods publications computer from Science.gov.
While these samples are representative of the content of Science.gov,
they are neither comprehensive nor the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results.
Last update: August 15, 2014.
1

Computer Science and Technology Publications.  

National Technical Information Service (NTIS)

This bibliography lists publications of the Institute for Computer Sciences and Technology of the National Bureau of Standards. Publications are listed by subject in the areas of computer security, computer networking, and automation technology. Sections ...

1977-01-01

2

Communicating the Impact of Free Access to Computers and the Internet in Public Libraries: A Mixed Methods Approach to Developing Outcome Indicators  

Microsoft Academic Search

The U.S. IMPACT studies have two research projects underway that employ a mixed method research design to develop and validate performance indicators related specifically to the outcomes of public access computing (PAC) use in public libraries. Through the use of a nationwide telephone survey (n = 1130), four case studies, and a nationwide Internet survey of PAC users administered through

Samantha Becker; Michael D. Crandall; Karen E. Fisher

2009-01-01

3

BPO crude oil analysis data base user's guide: Methods, publications, computer access correlations, uses, availability  

SciTech Connect

The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

Sellers, C.; Fox, B.; Paulz, J.

1996-03-01

4

List of Free Computer-Related Publications  

NSDL National Science Digital Library

The List of Free Computer-Related Publications includes hardcopy magazines, newspapers, and journals related to computing which can be subscribed to free of charge. Each entry contains a brief overview of that publication, including its primary focus, typical content, publication frequency, subscription information, as well as an (admittedly) subjective overall rating. Note that some publications have qualifications you must meet in order for the subscription to be free.

5

Managing Computers for Public Service.  

National Technical Information Service (NTIS)

The size of our population, the variety of public services offered and public demands for accountability by government have increased (1) the recordkeeping functions of government, (2) the real-time control functions of government and (3) the amount of ma...

R. M. Davis

1973-01-01

6

Computing in Public Administration: Practice and Education.  

ERIC Educational Resources Information Center

Presents a survey of common and leading-edge computer use practices followed by municipal government personnel and the directors of 12 master's degree programs in public administration. Concludes by suggesting directions for future developments both in public agencies and in the academy. (GEA)

Norris, Donald F.; Thompson, Lyke

1988-01-01

7

Public Databases Supporting Computational Toxicology  

EPA Science Inventory

A major goal of the emerging field of computational toxicology is the development of screening-level models that predict potential toxicity of chemicals from a combination of mechanistic in vitro assay data and chemical structure descriptors. In order to build these models, resea...

8

Protecting Public-Access Computers in Libraries.  

ERIC Educational Resources Information Center

Describes one public library's development of a computer-security plan, along with helpful products used. Discussion includes Internet policy, physical protection of hardware, basic protection of the operating system and software on the network, browser dilemmas and maintenance, creating clear intuitive interface, and administering fair use and…

King, Monica

1999-01-01

9

Acquisition of Computing Literacy on Shared Public Computers: Children and the "Hole in the Wall"  

ERIC Educational Resources Information Center

Earlier work, often referred to as the "hole in the wall" experiments, has shown that groups of children can learn to use public computers on their own. This paper presents the method and results of an experiment conducted to investigate whether such unsupervised group learning in shared public spaces is universal. The experiment was conducted…

Mitra, Sugata; Dangwal, Ritu; Chatterjee, Shiffon; Jha, Swati; Bisht, Ravinder S.; Kapur, Preeti

2005-01-01

10

Computer intensive statistical methods  

NASA Astrophysics Data System (ADS)

The special session “Computer-Intensive Statistical Methods” was held in morning and afternoon parts at the 1985 AGU Fall Meeting in San Francisco, Calif. Its mission was to provide a forum for hydrologists and statisticians who are active in bringing unconventional, algorithmic-oriented statistical techniques to bear on problems of hydrology. Statistician Emanuel Parzen (Texas A&M University, College Station, Tex.) opened the session by relating recent developments in quantile estimation methods and showing how properties of such methods can be used to advantage to categorize runoff data previously analyzed by I. Rodriguez-Iturbe (Universidad Simon Bolivar, Caracas, Venezuela). Statistician Eugene Schuster (University of Texas, El Paso) discussed recent developments in nonparametric density estimation which enlarge the framework for convenient incorporation of prior and ancillary information. These extensions were motivated by peak annual flow analysis. Mathematician D. Myers (University of Arizona, Tucson) gave a brief overview of “kriging” and outlined some recently developed methodology.

Yakowitz, S.

11

Closing the "Digital Divide": Building a Public Computing Center  

ERIC Educational Resources Information Center

The public computing center offers an economical and environmentally friendly model for providing additional public computer access when and where it is needed. Though not intended to be a replacement for a full-service branch, the public computing center does offer a budget-friendly option for quickly expanding high-demand services into the…

Krebeck, Aaron

2010-01-01

12

Computational methods for multiphase flow  

Microsoft Academic Search

The balance equations of multiphase flows are classified and techniques are reviewed for rendering these partial differential equations into partially or completely discretized equations. Numerical methods are presented for integrating the partially, and for solving the completely discretized, equations. The issues of computing accuracy and economy are discussed. Numerical methods used in major computer codes for multiphase flow analyses are reviewed,

Wulff

1987-01-01

13

Computers in Public Education: The Second Time Around.  

ERIC Educational Resources Information Center

Suggests that computers today have even more uses in schools than they did in the sixties. Presents the plan used in the Lexington (Massachusetts) Public Schools to employ computers in education. (JM)

DiGiammarino, Frank P.

1980-01-01

14

Relative status of journal and conference publications in computer science  

Microsoft Academic Search

Though computer scientists agree that conference publications enjoy greater status in computer science than in other disciplines, there is little quantitative evidence to support this view. The importance of journal publication in academic promotion makes it a highly personal issue, since focusing exclusively on journal papers misses many significant papers published by CS conferences. Here, we aim to quantify the

Jill Freyne; Lorcan Coyle; Barry Smyth; Padraig Cunningham

2010-01-01

15

Computational Methods Development at Ames  

NASA Technical Reports Server (NTRS)

This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and the study of flow physics. The presentation gives historical precedents for the above research and speculates on its future course.

Kwak, Dochan; Smith, Charles A. (Technical Monitor)

1998-01-01

16

Computational Modeling Method for Superalloys  

NASA Technical Reports Server (NTRS)

Computer modeling based on theoretical quantum techniques has been largely inefficient due to limitations on the methods or the computer needs associated with such calculations, thus perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys, which, together with the experimental results from previous and current research that validate its use for large-scale simulations, provides the ideal grounds for developing a computationally economical and physically sound procedure for supplementing the experimental work at great cost and time savings.

Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John

1997-01-01

17

Meshless methods in computational mechanics  

NASA Astrophysics Data System (ADS)

In this thesis, two meshless methods, the element free Galerkin (EFG) method and the meshless method based on the local boundary integral equation (LBIE), are described. The first part (Chapters II and III) of the thesis proposes efficient methods for enforcing the essential boundary conditions in the EFG method. A modified collocation method has been proposed for enforcing the discrete essential boundary conditions in the element free Galerkin (EFG) method. The present method substantially improves the computational accuracy as compared to the previous direct collocation method reported in the literature. A penalty method has also been developed for imposing the essential boundary conditions in the EFG method. The penalty method is easy to implement and can reduce computational costs while retaining high accuracy. Compared with the Lagrange multiplier method, which increases the number of unknowns and results in a non-positive-definite system stiffness matrix, the present penalty method is able to yield a banded, symmetric and positive definite system matrix. In the second part (Chapters IV--VI), new local boundary integral equation (LBIE) formulations, for linear and nonlinear potential problems, and for problems in linear elasticity, have been proposed, and the new meshless method based on the LBIE formulations and the moving least squares approximation has been developed. This new method is a true meshless method as no boundary and domain elements are needed in the implementation. The essential boundary conditions can be easily and directly imposed even when a non-interpolative approximation scheme such as the MLS approximation is used to approximate the trial function. Numerical examples show that the present LBIE formulation possesses tremendous potential in solving linear and non-linear problems in computational mechanics.

Zhu, Tulong

1998-11-01

18

Methods for computing color anaglyphs  

NASA Astrophysics Data System (ADS)

A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.

McAllister, David F.; Zhou, Ya; Sullivan, Sophia

2010-02-01
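
Among the techniques compared above, the Photoshop method is the simplest to sketch: the anaglyph takes its red channel from the left-eye image and its green and blue channels from the right-eye image. The array shapes and values below are illustrative only, not from the paper.

```python
import numpy as np

def photoshop_anaglyph(left, right):
    """Classic red/cyan anaglyph: red channel from the left image,
    green and blue channels from the right image."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out

# Tiny synthetic stereo pair (2x2 RGB images, floats in [0, 1]).
left = np.zeros((2, 2, 3)); left[..., 0] = 1.0    # pure red
right = np.zeros((2, 2, 3)); right[..., 1] = 0.5  # dim green
ana = photoshop_anaglyph(left, right)
```

This channel swap ignores the display spectra and filter transmissions that the paper's least-squares method accounts for, which is why it serves only as a baseline in such comparisons.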

19

Systems Science Methods in Public Health  

PubMed Central

Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example in preparing for global pandemics. However, systems science study designs and analytic methods remain underutilized and are not widely featured in public health curricula or training. In this review we present an argument for the utility of systems science methods in public health, introduce three important systems science methods (system dynamics, network analysis, and agent-based modeling), and provide three case studies where these methods have been used to answer important public health science questions in the areas of infectious disease, tobacco control, and obesity.

Luke, Douglas A.; Stamatakis, Katherine A.

2012-01-01
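
Of the three systems science methods named above, system dynamics is the simplest to sketch: a compartmental SIR (susceptible-infected-recovered) model advanced by Euler integration, of the kind used in pandemic preparedness work. The parameter values below are illustrative, not taken from the review.

```python
def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=1.0):
    """Deterministic SIR compartmental model, Euler-integrated.
    beta: transmission rate per day; gamma: recovery rate per day."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # flow S -> I
        new_rec = gamma * i * dt      # flow I -> R
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

s, i, r = sir()  # population fractions after 160 days
```

With these parameters the basic reproduction number is beta/gamma = 3, so the epidemic infects most of the population before burning out; network analysis and agent-based modeling replace the homogeneous-mixing assumption built into the flows above.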

20

Listing of Japanese Periodical Publications in Computer Science.  

National Technical Information Service (NTIS)

Many Western researchers remain unaware of Japanese developments in computer science. Although there are many reasons for this, one fundamental problem is that much of the work gets reported in publications widely distributed only within Japan. Some of th...

R. Jacoby; R. D. Schlichting

1991-01-01

21

Computer-Assisted Management of Instruction in Veterinary Public Health  

ERIC Educational Resources Information Center

Reviews a course in Food Hygiene and Public Health at the University of Illinois College of Veterinary Medicine in which students are sequenced through a series of computer-based lessons or autotutorial slide-tape lessons, the computer also being used to route, test, and keep records. Since grades indicated mastery of the subject, the course will…

Holt, Elsbeth; And Others

1975-01-01

22

Computational methods for stellarator configurations  

NASA Astrophysics Data System (ADS)

This project had two main objectives. The first one was to continue to develop computational methods for the study of three dimensional magnetic confinement configurations. The second one was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three dimensional nonlinear effects in Tokamak configurations, these self consistent computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.

Betancourt, O.

23

Wildlife software: procedures for publication of computer software  

USGS Publications Warehouse

Computers and computer software have become an integral part of the practice of wildlife science. Computers now play an important role in teaching, research, and management applications. Because of the specialized nature of wildlife problems, specific computer software is usually required to address a given problem (e.g., home range analysis). This type of software is not usually available from commercial vendors and therefore must be developed by those wildlife professionals with particular skill in computer programming. Current journal publication practices generally prevent a detailed description of computer software associated with new techniques. In addition, peer review of journal articles does not usually include a review of associated computer software. Thus, many wildlife professionals are usually unaware of computer software that would meet their needs or of major improvements in software they commonly use. Indeed most users of wildlife software learn of new programs or important changes only by word of mouth.

Samuel, M. D.

1990-01-01

24

A Computer-Assisted Instruction in Teaching Abstract Statistics to Public Affairs Undergraduates  

ERIC Educational Resources Information Center

This article attempts to demonstrate the applicability of a computer-assisted instruction supported with simulated data in teaching abstract statistical concepts to political science and public affairs students in an introductory research methods course. The software is called the Elaboration Model Computer Exercise (EMCE) in that it takes a great…

Ozturk, Ali Osman

2012-01-01

25

The Public-Access Computer Systems Forum: A Computer Conference on BITNET.  

ERIC Educational Resources Information Center

Describes the Public Access Computer Systems Forum (PACS), a computer conference that deals with all computer systems that libraries make available to patrons. Areas discussed include how to subscribe to PACS, message distribution and retrieval capabilities, subscriber lists, documentation, generic list server capabilities, and other…

Bailey, Charles W., Jr.

1990-01-01

26

Wavelet Methods for Radiance Computations  

Microsoft Academic Search

This paper describes a new algorithm to compute radiance in a synthetic environment. Motivated by the success of wavelet methods for radiosity computations we have applied multiwavelet bases to the computation of radiance in the presence of glossy reflectors. We have implemented this algorithm and report on some experiments performed with it. In particular we show that the

Peter Schröder; Pat Hanrahan

1994-01-01

27

BOINC: A System for Public-Resource Computing and Storage  

Microsoft Academic Search

BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. We describe the goals

David P. Anderson

2004-01-01

28

Computer-Based Test Interpretation and the Public Interest.  

ERIC Educational Resources Information Center

Computer-based test interpretation (CBTI) is discussed in terms of its potential dangers to the public interest, problems with professional review of CBTI systems, and needed policies for these systems. Several problems with CBTI systems are outlined: (1) they may be nicely packaged, but it is difficult to establish their value; (2) they do not…

Mitchell, James V., Jr.

29

User Access of Public Shared Devices in Pervasive Computing Environments  

Microsoft Academic Search

To allow for an efficient usage of a device in pervasive computing environments when a user intends to selectively utilize multiple devices within his/her vicinity, reliable and yet convenient authentication is an important requirement. The problem becomes more complex when the accessed device is shared by the public with many different individuals. This paper first illustrates the issues of establishing

David Jea; Ian Yap; Mani Srivastava

2007-01-01

30

47 CFR 61.14 - Method of filing publications.  

Code of Federal Regulations, 2013 CFR

... 2013-10-01 false Method of filing publications. 61.14 Section 61.14 Telecommunication...Electronic Filing § 61.14 Method of filing publications. (a) Publications filed electronically must be addressed...

2013-10-01

31

47 CFR 61.20 - Method of filing publications.  

Code of Federal Regulations, 2013 CFR

...2013-10-01 false Method of filing publications. 61.20 Section 61.20... § 61.20 Method of filing publications. (a) All issuing carriers...tariffs shall file all tariff publications and associated...

2013-10-01

32

Distributed Data Mining using a Public Resource Computing Framework  

NASA Astrophysics Data System (ADS)

The public resource computing paradigm is often used as a successful and low cost mechanism for the management of several classes of scientific and commercial applications that require the execution of a large number of independent tasks. Public computing frameworks, also known as “Desktop Grids”, exploit the computational power and storage facilities of private computers, or “workers”. Despite the inherent decentralized nature of the applications to which they are devoted, these systems often adopt a centralized mechanism for the assignment of jobs and distribution of input data, as is the case for BOINC, the most popular framework in this realm. We present a decentralized framework that aims at increasing the flexibility and robustness of public computing applications, thanks to two basic features: (i) the adoption of a P2P protocol for dynamically matching the job specifications with the worker characteristics, without relying on centralized resources; (ii) the use of distributed cache servers for an efficient dissemination and reutilization of data files. This framework is exploitable for a wide set of applications. In this work, we describe how a Java prototype of the framework was used to tackle the problem of mining frequent itemsets from a transactional dataset, and show some preliminary yet interesting performance results that prove the efficiency improvements that can derive from the presented architecture.

Cesario, Eugenio; de Caria, Nicola; Mastroianni, Carlo; Talia, Domenico

33

Computational methods for stealth design  

SciTech Connect

A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.

Cable, V.P. (Lockheed Advanced Development Co., Sunland, CA (United States))

1992-08-01

34

Optimization Methods for Computer Animation.  

ERIC Educational Resources Information Center

Emphasizing the importance of economy and efficiency in the production of computer animation, this master's thesis outlines methodologies that can be used to develop animated sequences with the highest quality images for the least expenditure. It is assumed that if computer animators are to be able to fully exploit the available resources, they…

Donkin, John Caldwell

35

Soft Computing Methods for Control and Instrumentation.  

National Technical Information Service (NTIS)

In this work, the existing soft computing techniques have been enhanced, and applied to control and instrumentation areas. First, new soft computing methods are proposed. A Modified Elman Neural Network (MENN) is introduced to provide fast convergence spe...

X. Z. Gao

1999-01-01

36

Computational methods for probability of instability calculations  

NASA Technical Reports Server (NTRS)

This paper summarizes the development of the methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the roots of the characteristic equation or Routh-Hurwitz test functions are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.

Wu, Y.-T.; Burnside, O. H.

1990-01-01
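
The general idea of such a probabilistic instability analysis can be sketched for a single-degree-of-freedom system using plain Monte Carlo (not the authors' importance-sampling approach): sample uncertain damping and stiffness, and count samples whose characteristic roots have a positive real part. The parameter distributions below are invented for illustration.

```python
import numpy as np

def prob_instability(n_samples=10000, seed=0):
    """Monte Carlo estimate of the probability that m*x'' + c*x' + k*x = 0
    is unstable, i.e. a root of m*s^2 + c*s + k = 0 has positive real part."""
    rng = np.random.default_rng(seed)
    m = 1.0
    c = rng.normal(0.1, 0.2, n_samples)  # damping: may go negative
    k = rng.normal(1.0, 0.1, n_samples)  # stiffness
    unstable = 0
    for ci, ki in zip(c, k):
        roots = np.roots([m, ci, ki])
        if np.any(roots.real > 0):
            unstable += 1
    return unstable / n_samples

p = prob_instability()
```

With these distributions instability is driven almost entirely by negative damping, so the estimate clusters near the probability that c < 0; importance sampling, as proposed in the paper, concentrates samples in that tail to reduce the number of simulations needed.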

37

Meshless methods in computational mechanics  

Microsoft Academic Search

In this thesis, two meshless methods, the element free Galerkin (EFG) method and the meshless method based on the local boundary integral equation (LBIE), are described. The first part (Chapters II and III) of the thesis proposes efficient methods for enforcing the essential boundary conditions in the EFG method. A modified collocation method has been proposed for enforcing

Tulong Zhu

1998-01-01

38

Publicly Accessible Computers: An Exploratory Study of the Determinants of Transactional Website Use in Public Locations  

Microsoft Academic Search

Businesses and governments are continuing to expand the use of the internet to provide a wide range of information and transactional services to consumers. These changes present barriers to people without internet connections in their homes. Public libraries provide a source of access to these resources however it is not clear if people are willing to use computers in these

A. D. Rensel; J. M. Abbas; H. Raghav Rao

2006-01-01

39

Seek Alternative Methods to Provide School Publications  

ERIC Educational Resources Information Center

Presents convincing arguments in favor of secondary school publications, lists titles and brief descriptions of student jobs in putting out various publications, and suggests means of getting out student publications even when no journalism classes are offered. (RB)

Mercer, Linda M.

1974-01-01

40

Computational methods for passive solar simulation  

Microsoft Academic Search

The use of network models and a range of computational methods for the simulation of passive solar buildings are described, compared, and assessed. The following methods are considered: (i) steady-state methods; (ii) finite difference methods, explicit and implicit; (iii) modal or analytic spectral methods; (iv) Fourier series methods. Methods (ii) to (iv) are compared by applying them to a series

C CARTER

1990-01-01

41

Computational method for testing computer-generated holograms  

Microsoft Academic Search

A procedure based entirely on computer simulation is developed for testing the quality of computer-generated holographic optical elements (CGHOEs). This new testing method may be of help in optimization techniques in the future. We introduce a mathematical quantity, the correlation, to characterize the performance of the studied CGHOE. Our procedure examines how the correlation between the object and the reconstructed

Nandor Bokor; Zsolt Papp

1996-01-01

42

Survey of Public IaaS Cloud Computing API  

NASA Astrophysics Data System (ADS)

Recently, Cloud computing has spread rapidly and many Cloud providers have started their own Cloud services. One of the problems with Cloud computing is “Cloud provider lock-in” for users. Cloud computing management APIs, such as those for ordering or provisioning, differ between Cloud providers, so users need to study and implement new APIs when they change Cloud providers. OGF and DMTF have started discussions on the standardization of Cloud computing APIs, but there is no standard yet. In this technical note, to clarify what APIs Cloud providers should offer, we study common APIs for Cloud computing. We survey and compare Cloud computing APIs such as Rackspace Cloud Server, Sun Cloud, GoGrid, ElasticHosts, Amazon EC2 and FlexiScale, which are currently provided in the market as public IaaS Cloud APIs. From the survey, the common APIs should support a REST access style and provide account management, virtual server management, storage management, network management and resource usage management capabilities. We also show an example of OSS that provides these common APIs, compared with the OSS of normal hosting services.

Yamato, Yoji; Moriya, Takaaki; Ogawa, Takeshi; Akahani, Junichi

43

Ant Inspired Methods for Organic Computing  

Microsoft Academic Search

In recent years social insects have been a major inspiration in the design of new computational methods. This chapter describes three examples of the application of ant-inspired methods in the domain of Organic Computing. The first example illustrates implications of theoretical findings in response-threshold models that explain division of labour in ants for Organic Computing systems. The second example outlines

Alexander Scheidler; Arne Brutschy; Konrad Diwold; Daniel Merkle; Martin Middendorf

44

Multiprocessor computer overset grid method and apparatus  

DOEpatents

A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

Barnette, Daniel W. (Veguita, NM); Ober, Curtis C. (Los Lunas, NM)

2003-01-01
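
The mapped interpolation described above can be sketched as a precomputed table of weights: each target point on one overset grid receives a value interpolated from base points owned by (possibly) another processor. The grid and processor names below are hypothetical, and a real implementation would exchange the values via message passing rather than a shared dictionary.

```python
# Hypothetical mapping: target point -> (owner of its base points,
# list of (base point, interpolation weight) pairs).
interp_map = {
    "gridB:7": ("proc0", [("gridA:3", 0.25), ("gridA:4", 0.75)]),
}

# Solution values held by each processor for the grid points it owns.
values = {"proc0": {"gridA:3": 2.0, "gridA:4": 6.0}}

def interpolate(target):
    """Apply the precomputed interpolation stencil for one target point."""
    owner, stencil = interp_map[target]
    return sum(w * values[owner][base] for base, w in stencil)

v = interpolate("gridB:7")  # 0.25*2.0 + 0.75*6.0
```

Precomputing which processor owns each base-point stencil is what lets the patented method balance load: points, not whole grids, are assigned to processors.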

45

Computational methods in radionuclide dosimetry  

NASA Astrophysics Data System (ADS)

The various approaches in radionuclide dosimetry depend on the size and spatial relation of the sources and targets considered in conjunction with the emission range of the radionuclide used. We present some of the frequently reported computational techniques on the basis of the source/target size. For whole organs, or for sources or targets bigger than some centimetres, the acknowledged standard was introduced 30 years ago by the MIRD committee and is still being updated. That approach, based on the absorbed fraction concept, is mainly used for radioprotection purposes but has been updated to take into account the dosimetric challenge raised by therapeutic use of vectored radiopharmaceuticals. At this level, the most important computational effort is in the field of photon dosimetry. On the millimetre scale, photons can often be disregarded, and β or electron dosimetry is generally reported. Heterogeneities at this level are mainly above the cell level, involving groups of cells or a part of an organ. The dose distribution pattern is often calculated by generalizing a point source dose distribution, but direct calculation by Monte Carlo techniques is also frequently reported because it allows media of inhomogeneous density to be considered. At the cell level, β and electron (low-range or Auger) emissions are the predominant ones examined. Heterogeneities in the dose distribution are taken into account, mainly to determine the mean dose at the nucleus. At the DNA level, Auger electrons or α-particles are considered from a microdosimetric point of view. These studies are often connected with radiobiological experiments on radionuclide toxicity.

Bardiès, M.; Myers, M. J.

1996-10-01

46

Numerical Methods for Computing Casimir Interactions  

Microsoft Academic Search

We review several different approaches for computing Casimir forces and related fluctuation-induced interactions between bodies of arbitrary shapes and materials. The relationships between this problem and well-known computational techniques from classical electromagnetism are emphasized. We also review the basic principles of standard computational methods, categorizing them according to three criteria (choice of problem, basis, and solution technique) that can be used

Steven G. Johnson

47

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?  

Code of Federal Regulations, 2010 CFR

...public access use of the Internet on NARA-supplied computers? 1254.32 Section...public access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for Internet use in all...

2009-07-01

48

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?  

Code of Federal Regulations, 2010 CFR

...public access use of the Internet on NARA-supplied computers? 1254.32 Section...public access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for Internet use in all...

2010-07-01

49

Computing Discharge Using the Index Velocity Method.  

National Technical Information Service (NTIS)

Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity met...

K. A. Oberg V. A. Levesque

2012-01-01
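The index velocity method described in this record computes discharge as the product of a stage-derived channel area and a mean velocity estimated from the meter's index velocity through a rating. A minimal sketch follows; the linear rating coefficients, rectangular channel, and numeric values are all assumptions for illustration, not values from the report.

```python
# Hypothetical sketch of the index velocity method: mean channel velocity is
# estimated from an ADVM "index" velocity via a linear rating, then multiplied
# by the stage-derived channel area. All coefficients below are illustrative.

def mean_velocity(v_index, a=0.05, b=1.08):
    """Index-velocity rating V_mean = a + b * V_index (coefficients assumed)."""
    return a + b * v_index

def channel_area(stage, width=30.0):
    """Stage-area rating for an idealized rectangular channel (assumed)."""
    return width * stage

def discharge(stage, v_index):
    """Discharge Q = A(stage) * V_mean(v_index), m^3/s for metric inputs."""
    return channel_area(stage) * mean_velocity(v_index)

q = discharge(2.0, 0.85)   # ~58.1 m^3/s for these assumed ratings
```

In practice both ratings are calibrated against periodic direct discharge measurements; the linear form above is the simplest commonly used shape.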

50

Computational Chemistry Using Modern Electronic Structure Methods  

ERIC Educational Resources Information Center

Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now easily be applied even to large molecules.

Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

2007-01-01

51

An Introduction To Computer Simulation Methods Examples  

NSDL National Science Digital Library

Ready to run Launcher package containing examples for An Introduction to Computer Simulation Methods by Harvey Gould, Jan Tobochnik, and Wolfgang Christian. Source code for examples in this textbook is distributed in the Open Source Physics Eclipse Workspace.

Christian, Wolfgang; Gould, Harvey; Tobochnik, Jan

2008-05-17

52

Teaching Practical Public Health Evaluation Methods  

ERIC Educational Resources Information Center

Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

Davis, Mary V.

2006-01-01

53

Monte Carlo method in computer holography  

Microsoft Academic Search

A method based on the Monte Carlo procedure is suggested to simulate the reconstruction of non-Fourier-type computer-generated holograms (CGHs). The cases of amplitude holograms (CGAHs) and phase holograms (CGPHs), or 'kinoform lenses,' are investigated. A method to model the finite pixel size of the hologram is suggested. An importance sampling method is proposed to simulate the reconstruction of CGAHs.

Nandor Bokor; Zsolt Papp

1997-01-01

54

Computational methods for global/local analysis  

NASA Technical Reports Server (NTRS)

Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

1992-01-01

55

32 CFR 310.52 - Computer matching publication and review requirements.  

Code of Federal Regulations, 2013 CFR

...2013-07-01 false Computer matching publication and review requirements. 310.52...Procedures § 310.52 Computer matching publication and review requirements. (a...will be used in the match to ensure the publication requirements of subpart G have...

2013-07-01

56

Method for computing protein binding affinity  

Microsoft Academic Search

A Monte Carlo method is given to compute the binding affinity of a ligand to a protein. The method involves extending configuration space by a discrete variable indicating whether the ligand is bound to the protein and a special Monte Carlo move which allows transitions between the unbound and bound states. Provided that an accurate protein structure is given,

Charles F. F. Karney; Jason E. Ferrara; Stephan Brunner

2005-01-01
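The extended-ensemble idea in this abstract can be illustrated with a toy one-dimensional model, not the paper's actual protein-ligand system: a continuous coordinate x is augmented with a discrete state s (0 = unbound, 1 = bound), and a dedicated Monte Carlo move toggles s. The potential, well depth, and step sizes below are all assumed for illustration.

```python
# Toy extended-ensemble Metropolis sampler. The time spent bound yields a
# free-energy difference dF = -kT * ln(P_bound / P_unbound).
import math
import random

def energy(x, s):
    """Toy potential: unbound (s=0) is flat; bound (s=1) is a harmonic well
    of assumed depth 2 kT (all energies in units of kT)."""
    return 0.5 * 10.0 * x * x - 2.0 if s == 1 else 0.0

def accept(d_e, rng):
    """Metropolis criterion, written to avoid overflow for large d_e."""
    return d_e <= 0.0 or rng.random() < math.exp(-d_e)

def bound_fraction(steps=20000, seed=1):
    rng = random.Random(seed)
    x, s, bound = 0.0, 0, 0
    for _ in range(steps):
        if rng.random() < 0.5:                     # displacement move in x
            xn = x + rng.uniform(-0.5, 0.5)
            if accept(energy(xn, s) - energy(x, s), rng):
                x = xn
        else:                                      # binding/unbinding move
            if accept(energy(x, 1 - s) - energy(x, s), rng):
                s = 1 - s
        bound += s
    return bound / steps

frac = bound_fraction()
d_f = -math.log(frac / (1.0 - frac))   # toy binding free energy, in kT
```

The key design point is the discrete toggle move: it lets a single Markov chain visit both ensembles so their relative weights, and hence the affinity, come directly from occupation statistics.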

57

Accurate computational method for solving electromagnetic wave  

Microsoft Academic Search

This paper studies the solution of monochromatic electromagnetic waves. For an axially symmetric, source-free electromagnetic wave that is finite and differentiable on the symmetry axis, a new approximation theory and an evolutionary computing method are derived from the Maxwell equations. With this method, the off-axis electromagnetic wave can be expressed as a series. This series contains the electromagnetic wave on the symmetric

Li Zijun; Fang Benying

2009-01-01

58

Computational Methods for Rough Classification and Discovery.  

ERIC Educational Resources Information Center

Rough set theory is a new mathematical tool to deal with vagueness and uncertainty. Computational methods are presented for using rough sets to identify classes in datasets, finding dependencies in relations, and discovering rules which are hidden in databases. The methods are illustrated with a running example from a database of car test results.…

Bell, D. A.; Guan, J. W.

1998-01-01

59

The Methodical Avoidance of Experiments in Public Relations Research  

Microsoft Academic Search

Despite its ranking and reputation as the most rigorous of the available research methods, the experiment is rarely used in public relations research. In contrast, our colleagues in advertising and marketing have long accepted and effectively applied the experimental method to better explore and understand consumer behaviour and media effects. The 'methodical avoidance' of experiments is manifest in the public

Lois Boynton; Elizabeth Dougall

60

Soft computing methods for geoidal height transformation  

NASA Astrophysics Data System (ADS)

Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.

Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.

2009-07-01
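The conventional polynomial model mentioned in this abstract approximates geoid height N (ellipsoidal height minus levelled height) as a low-order surface in latitude and longitude, fitted by least squares to known points. A minimal sketch, using synthetic sample points and assumed coefficients rather than the paper's data:

```python
# Second-order polynomial geoid-height surface fitted by least squares.
# Coordinates are centered (u = lat - lat0, v = lon - lon0) to keep the
# design matrix well conditioned; lat0/lon0 and all data are illustrative.
import numpy as np

def design_matrix(lat, lon, lat0=41.0, lon0=29.0):
    """Surface terms in centered coordinates: 1, u, v, u*v, u^2, v^2."""
    u, v = lat - lat0, lon - lon0
    return np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])

rng = np.random.default_rng(0)
lat = rng.uniform(40.5, 41.5, 50)                 # synthetic control points
lon = rng.uniform(28.5, 29.5, 50)
true_coef = np.array([36.0, 0.4, -0.2, 0.01, -0.05, 0.03])   # assumed, metres
N = design_matrix(lat, lon) @ true_coef           # synthetic geoid heights

coef, *_ = np.linalg.lstsq(design_matrix(lat, lon), N, rcond=None)
```

The ANFIS and ANN models in the paper replace this fixed polynomial form with learned nonlinear surfaces, but the fitting-and-prediction workflow is the same.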

61

The Contingent Valuation Method in Public Libraries  

ERIC Educational Resources Information Center

This study aims to present a new model measuring the economic value of public libraries, combining the dissonance minimizing (DM) and information bias minimizing (IBM) format in the contingent valuation (CV) surveys. The possible biases which are tied to the conventional CV surveys are reviewed. An empirical study is presented to compare the model…

Chung, Hye-Kyung

2008-01-01

62

Methods for Scalable Optical Quantum Computation  

NASA Astrophysics Data System (ADS)

We propose a scalable method for implementing linear optics quantum computation using the “linked-state” approach. Our method avoids the two-dimensional spread of errors occurring in the preparation of the linked state. Consequently, a proof is given for the scalability of this modified linked-state model, and an exact expression for the efficiency of the method is obtained. Moreover, a considerable improvement in the efficiency, relative to the original linked-state method, is achieved. The proposed method is applicable to Nielsen’s optical “cluster-state” approach as well.

Mor, Tal; Yoran, Nadav

2006-09-01

63

Methods for scalable optical quantum computation  

NASA Astrophysics Data System (ADS)

We propose a scalable method for implementing linear optics quantum computation using the "linked-state" approach. Our method avoids the two-dimensional spread of errors occurring in the preparation of the linked-state. Consequently, a proof is given for the scalability of this modified linked-state model, and an exact expression for the efficiency of the method is obtained. Moreover, a considerable improvement in the efficiency is achieved. The proposed method is applicable to the "cluster-state" approach as well.

Mor, Tal; Yoran, Nadav

2005-05-01

64

Computationally efficient method to construct scar functions.  

PubMed

The performance of a simple method [E. L. Sibert III, E. Vergini, R. M. Benito, and F. Borondo, New J. Phys. 10, 053016 (2008)] to efficiently compute scar functions along unstable periodic orbits with complicated trajectories in configuration space is discussed, using a classically chaotic two-dimensional quartic oscillator as an illustration. PMID:22463306

Revuelta, F; Vergini, E G; Benito, R M; Borondo, F

2012-02-01

65

Interior-Point Methods in Parallel Computation.  

National Technical Information Service (NTIS)

In this paper the authors use interior-point methods for linear programming, developed in the context of sequential computation, to obtain a parallel algorithm for the bipartite matching problem. The algorithm runs in O(√m) time. The resul...

A. V. Goldberg S. A. Plotkin D. B. Shmoys E. Tardos

1989-01-01

66

Predicting the Number of Public Computer Terminals Needed for an On-Line Catalog: A Queuing Theory Approach.  

ERIC Educational Resources Information Center

Describes a method for estimating the number of cathode ray tube terminals needed for public use of an online library catalog. Authors claim method could also be used to estimate needed numbers of microform readers for a computer output microform (COM) catalog. Formulae are included. (Author/JD)

Knox, A. Whitney; Miller, Bruce A.

1980-01-01
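The queuing-theory sizing idea in this record can be sketched by modeling catalog terminals as an M/M/c queue and increasing the number of terminals c until the Erlang-C probability that a patron must wait falls below a target. The arrival rate, session length, and wait target below are assumptions for illustration, not figures from the article.

```python
# Erlang-C terminal sizing for an M/M/c queue.
import math

def erlang_c(c, a):
    """Probability an arrival must wait in an M/M/c queue with offered load
    a = lambda/mu erlangs (requires a < c for stability)."""
    s = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / (math.factorial(c) * (1.0 - a / c))
    return tail / (s + tail)

def terminals_needed(lam, mu, p_wait_max):
    """Smallest number of terminals c keeping P(wait) below p_wait_max."""
    a = lam / mu
    c = math.floor(a) + 1                 # minimum c for a stable queue
    while erlang_c(c, a) > p_wait_max:
        c += 1
    return c

# Assumed demand: 30 patrons/hour, 6-minute average sessions (mu = 10/hour),
# and a design target that at most 20% of patrons ever wait:
n = terminals_needed(30, 10, 0.20)        # -> 6 terminals for these assumptions
```

The same calculation works for microform readers by swapping in their service rate, which is presumably why the authors claim the method transfers to COM catalogs.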

67

Validating New Tuberculosis Computational Models with Public Whole Cell Screening Aerobic Activity Datasets  

Microsoft Academic Search

Purpose: The search for small molecules with activity against Mycobacterium tuberculosis (Mtb) increasingly uses high throughput screening and computational methods. Several public datasets from the Collaborative Drug Discovery Tuberculosis (CDD TB) database have been evaluated with cheminformatics approaches to validate their utility and suggest compounds for testing. Methods: Previously reported Bayesian classification models were used to predict a set of 283 Novartis

Sean Ekins; Joel S. Freundlich

68

Shifted power method for computing tensor eigenvalues.  

SciTech Connect

Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax{sup m-1} = lambda x subject to ||x||=1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.

Mayo, Jackson R.; Kolda, Tamara Gibson

2010-07-01
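The SS-HOPM iteration summarized in this abstract is short enough to sketch directly for order m = 3: repeatedly set x to the normalization of A x^{m-1} + α x, then read off λ = x · (A x^{m-1}) at the fixed point. The random test tensor and the shift α below are illustrative; the paper derives the lower bound on α that guarantees convergence, which this sketch does not check.

```python
# Shifted symmetric higher-order power method (SS-HOPM) for an order-3 tensor.
import numpy as np

def txx(A, x):
    """Contraction (A x^{m-1})_i = sum_{jk} A_ijk x_j x_k for order m = 3."""
    return np.einsum('ijk,j,k->i', A, x, x)

def ss_hopm(A, alpha=8.0, iters=5000, seed=0):
    """Iterate x <- normalize(A x^2 + alpha * x); return (x, lambda)."""
    x = np.random.default_rng(seed).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = txx(A, x) + alpha * x
        x = y / np.linalg.norm(y)
    return x, x @ txx(A, x)

# Build a fully symmetric random tensor by averaging over index permutations:
rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3, 3))
perms = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
A = sum(T.transpose(p) for p in perms) / 6.0

x, lam = ss_hopm(A)        # eigenpair: A x^2 ~= lam * x with ||x|| = 1
```

With α = 0 this reduces to the plain symmetric higher-order power method, which the paper notes need not converge; the shift is what buys the convergence guarantee.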

69

Computational Thermochemistry and Benchmarking of Reliable Methods  

SciTech Connect

During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

2006-06-20

70

A method to compute periodic sums  

NASA Astrophysics Data System (ADS)

In a number of problems in computational physics, a finite sum of kernel functions centered at N particle locations located in a box in three dimensions must be extended by imposing periodic boundary conditions on box boundaries. Even though the finite sum can be efficiently computed via fast summation algorithms, such as the fast multipole method (FMM), the periodized extension is usually treated via a different algorithm, Ewald summation, accelerated via the fast Fourier transform (FFT). A different approach to compute this periodized sum just using a blackbox finite fast summation algorithm is presented in this paper. The method splits the periodized sum into two parts. The first, comprising the contribution of all points outside a large sphere enclosing the box, and some of its neighbors, is approximated inside the box by a collection of kernel functions (“sources”) placed on the surface of the sphere or using an expansion in terms of spectrally convergent local basis functions. The second part, comprising the part inside the sphere, and including the box and its immediate neighborhood, is treated via available summation algorithms. The coefficients of the sources are determined by least squares collocation of the periodicity condition of the total potential, imposed on a circumspherical surface for the box. While the method is presented in general, details are worked out for the case of evaluating electrostatic potentials and forces. Results show that when used with the FMM, the periodized sum can be computed to any specified accuracy, at an additional cost of the order of the free-space FMM. Several technical details and efficient algorithms for auxiliary computations are provided, as are numerical comparisons.

Gumerov, Nail A.; Duraiswami, Ramani

2014-09-01

71

Efficient methods to compute genomic predictions.  

PubMed

Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and to estimate thousands of marker effects simultaneously. Algorithms were derived and computer programs tested with simulated data for 2,967 bulls and 50,000 markers distributed randomly across 30 chromosomes. Estimation of genomic inbreeding coefficients required accurate estimates of allele frequencies in the base population. Linear model predictions of breeding values were computed by 3 equivalent methods: 1) iteration for individual allele effects followed by summation across loci to obtain estimated breeding values, 2) selection index including a genomic relationship matrix, and 3) mixed model equations including the inverse of genomic relationships. A blend of first- and second-order Jacobi iteration using 2 separate relaxation factors converged well for allele frequencies and effects. Reliability of predicted net merit for young bulls was 63% compared with 32% using the traditional relationship matrix. Nonlinear predictions were also computed using iteration on data and nonlinear regression on marker deviations; an additional (about 3%) gain in reliability for young bulls increased average reliability to 66%. Computing times increased linearly with number of genotypes. Estimation of allele frequencies required 2 processor days, and genomic predictions required <1 d per trait, and traits were processed in parallel. Information from genotyping was equivalent to about 20 daughters with phenotypic records. Actual gains may differ because the simulation did not account for linkage disequilibrium in the base population or selection in subsequent generations. PMID:18946147

VanRaden, P M

2008-11-01
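The genomic relationship matrix used in the selection-index and mixed-model formulations above can be built by centering the 0/1/2 genotype matrix by twice the allele frequencies and scaling: G = ZZ' / (2 Σ p_i(1 - p_i)). A toy sketch follows; the genotype data are invented, and observed allele frequencies stand in for the base-population frequencies the abstract says are actually required.

```python
# Genomic relationship matrix from a 0/1/2-coded genotype matrix.
import numpy as np

def genomic_relationship(M):
    """M: animals x markers matrix, entries 0/1/2 copies of the counted allele.
    Uses observed allele frequencies as a stand-in for base-population ones."""
    p = M.mean(axis=0) / 2.0              # allele frequencies per marker
    Z = M - 2.0 * p                       # center each marker column
    return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

M = np.array([[0, 1, 2, 1],               # toy data: 3 animals, 4 markers
              [1, 1, 2, 0],
              [2, 0, 1, 1]], dtype=float)
G = genomic_relationship(M)               # analogue of the pedigree A matrix
```

In the paper's method 2, G (or its inverse, in method 3) simply replaces the pedigree relationship matrix in standard selection-index or mixed-model equations.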

72

Parallel computer methods for eigenvalue extraction  

NASA Technical Reports Server (NTRS)

A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is used in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. The advantages of this new algorithm in parallel computer architecture are discussed.

Akl, Fred

1988-01-01

73

Advanced computational methods for nonlinear spin dynamics  

NASA Astrophysics Data System (ADS)

We survey methods for the accurate computation of the dynamics of spin in general nonlinear accelerator lattices. Specifically, we show how it is possible to compute high-order nonlinear spin transfer maps in SO(3) or SU(2) representations in parallel with the corresponding orbit transfer maps. Using suitable invariant subspaces of the coupled spin-orbit dynamics, it is possible to develop a differential algebraic flow operator in a similar way as in the symplectic case of the orbit dynamics. The resulting high-order maps can be utilized for a variety of applications, including long-term spin-orbit tracking under preservation of the symplectic-orthonormal structure and the associated determination of depolarization rates. Using normal form methods, it is also possible to determine spin-orbit invariants of the motion, in particular the nonlinear invariant axis as well as the associated spin-orbit tune shifts. The methods are implemented in the code COSY INFINITY [1] and available for spin-orbit computations for general accelerator lattices, including conventional particle optical elements and their fringe fields, as well as user-specified field arrangements.

Berz, Martin; Makino, Kyoko

2011-05-01

74

Computational methods for vortex dominated compressible flows  

NASA Technical Reports Server (NTRS)

The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

Murman, Earll M.

1987-01-01

75

Saving lives: a computer simulation game for public education about emergencies  

SciTech Connect

One facet of the Information Revolution in which the nation finds itself involves the utilization of computers, video systems, and a variety of telecommunications capabilities by those who must cope with emergency situations. Such technologies possess a significant potential for performing emergency public education and transmitting key information that is essential for survival. An ''Emergency Public Information Competitive Challenge Grant,'' under the aegis of the Federal Emergency Management Agency (FEMA), has sponsored an effort to use computer technology - both large, time-sharing systems and small personal computers - to develop computer games which will help teach techniques of emergency management to the public at large. 24 references.

Morentz, J.W.

1985-01-01

76

Numerical methods for problems in computational aeroacoustics  

NASA Astrophysics Data System (ADS)

A goal of computational aeroacoustics is the accurate calculation of noise from a jet in the far field. This work concerns the numerical aspects of accurately calculating acoustic waves over large distances and long time. More specifically, the stability, efficiency, accuracy, dispersion and dissipation in spatial discretizations, time stepping schemes, and absorbing boundaries for the direct solution of wave propagation problems are determined. Efficient finite difference methods developed by Tam and Webb, which minimize dispersion and dissipation, are commonly used for the spatial and temporal discretization. Alternatively, high order pseudospectral methods can be made more efficient by using the grid transformation introduced by Kosloff and Tal-Ezer. Work in this dissertation confirms that the grid transformation introduced by Kosloff and Tal-Ezer is not spectrally accurate because, in the limit, the grid transformation forces zero derivatives at the boundaries. If a small number of grid points are used, it is shown that approximations with the Chebyshev pseudospectral method with the Kosloff and Tal-Ezer grid transformation are as accurate as with the Chebyshev pseudospectral method. This result is based on the analysis of the phase and amplitude errors of these methods, and their use for the solution of a benchmark problem in computational aeroacoustics. For the grid transformed Chebyshev method with a small number of grid points it is, however, more appropriate to compare its accuracy with that of high-order finite difference methods. This comparison, for an order of accuracy 10^-3 for a benchmark problem in computational aeroacoustics, is performed for the grid transformed Chebyshev method and the fourth order finite difference method of Tam. Solutions with the finite difference method are as accurate as, and the finite difference method is more efficient than, the Chebyshev pseudospectral method with the grid transformation.
The efficiency of the Chebyshev pseudospectral method is further improved by developing Runge-Kutta methods for the temporal discretization which maximize imaginary stability intervals. Two new Runge-Kutta methods, which allow time steps almost twice as large as the maximal order schemes, while holding dissipation and dispersion fixed, are developed. In the process of studying dispersion and dissipation, it is determined that maximizing dispersion minimizes dissipation, and vice versa. In order to determine accurate and efficient absorbing boundary conditions, absorbing layers are studied and compared with one way wave equations. The matched layer technique for Maxwell equations is equivalent to the absorbing layer technique for the acoustic wave equation introduced by Kosloff and Kosloff. The numerical implementation of the perfectly matched layer for the acoustic wave equation with a large damping parameter results in a small portion of the wave transmitting into the absorbing layer. A large damping parameter also results in a large portion of the wave reflecting back into the domain. The perfectly matched layer is implemented on a single domain for the solution of the second order wave equation, and when implemented in this manner shows no advantage over the matched layer. Solutions of the second order wave equation, with the absorbing boundary condition imposed either by the matched layer or by the one way wave equations, are compared. The comparison shows no advantage of the matched layer over the one way wave equation for the absorbing boundary condition. Hence there is no benefit to be gained by using the matched layer, which necessarily increases the size of the computational domain.

Mead, Jodi Lorraine

1998-12-01

77

Network analysis in public health: history, methods, and applications.  

PubMed

Network analysis is an approach to research that is uniquely suited to describing, exploring, and understanding structural and relational aspects of health. It is both a methodological tool and a theoretical paradigm that allows us to pose and answer important ecological questions in public health. In this review we trace the history of network analysis, provide a methodological overview of network techniques, and discuss where and how network analysis has been used in public health. We show how network analysis has its roots in mathematics, statistics, sociology, anthropology, psychology, biology, physics, and computer science. In public health, network analysis has been used to study primarily disease transmission, especially for HIV/AIDS and other sexually transmitted diseases; information transmission, particularly for diffusion of innovations; the role of social support and social capital; the influence of personal and social networks on health behavior; and the interorganizational structure of health systems. We conclude with future directions for network analysis in public health. PMID:17222078

Luke, Douglas A; Harris, Jenine K

2007-01-01

78

Meshfree methods for computational fluid dynamics  

NASA Astrophysics Data System (ADS)

The paper deals with the convergence problem of the SPH (Smoothed Particle Hydrodynamics) meshfree method for the solution of fluid dynamics tasks. In the introductory part, fundamental aspects of meshfree methods, their definition, computational approaches and classification are discussed. In the following part, the methods of local integral representation, to which SPH belongs, are analyzed, and specifically the RKPM (Reproducing Kernel Particle Method) is described. The contribution also analyzes the influence of boundary conditions on the consistency of the SPH approximation, which has a direct impact on the convergence of the method. A classical boundary condition in the form of virtual particles does not ensure a sufficient order of consistency near the boundary of the definition domain of the task. This problem is solved by using ghost particles as a boundary condition, which was implemented into the SPH code as part of this work. Further, several numerical aspects linked with the SPH method are described. In the concluding part, results are presented of the application of the SPH method with ghost particles to the 2D shock tube example. Also shown are results of tests of several parameters and modifications of the SPH code.

Niedoba, P.; Čermák, L.; Jícha, M.

2013-04-01
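The ghost-particle boundary condition discussed in this abstract can be illustrated in one dimension: SPH density is a kernel-weighted sum of neighbor masses, and mirroring a few particles across each wall restores the kernel support that is otherwise truncated at the boundary. The kernel is the standard 1-D cubic spline; the particle spacing, smoothing length, and number of ghost layers below are assumptions for illustration.

```python
# 1-D SPH density summation with mirrored ghost particles at the boundaries.
import numpy as np

def cubic_spline_1d(r, h):
    """Standard 1-D cubic spline kernel, support radius 2h, normalization 2/(3h)."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / (3.0 * h)) * w

def density(x_eval, x_src, m, h):
    """SPH density rho_i = sum_j m_j W(x_i - x_j, h); naive O(N^2) sum."""
    return np.array([np.sum(m * cubic_spline_1d(xi - x_src, h)) for xi in x_eval])

dx = 0.02
x = np.arange(0.0, 1.0 + dx / 2, dx)      # uniform particles on [0, 1], rho0 = 1
m = np.full_like(x, dx)                    # particle mass = rho0 * dx
h = 1.3 * dx

# Mirror a few particles across each wall so kernels near x=0 and x=1
# keep full support (the ghost-particle boundary condition):
left = -x[1:5][::-1]
right = 2.0 - x[-5:-1][::-1]
xg = np.concatenate([left, x, right])
mg = np.full_like(xg, dx)

rho = density(x, xg, mg, h)                # ~1 everywhere, including boundaries
```

Without the ghosts, the summed density at x = 0 drops to roughly three quarters of its interior value, which is exactly the loss of consistency near the boundary that the paper attributes to the classical virtual-particle treatment.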

79

Motivation for Green computer, methods used in computer science program  

Microsoft Academic Search

Computer science educators are uniquely positioned to promote greater awareness of Green computing, using the academic setting to encourage environmentally conscious use of technology. This paper reports on practical techniques that can engage faculty and students, enabling Green computing to be integrated into the classroom and research laboratory. Analysis and empirical evaluation of each reported technique is given, comparing the

V. Chauhan; A. Chauhan; S. Kapoor; S. Agrawal; R. R. Singh

2011-01-01

80

Solar heating and cooling computer analysis - A simplified sizing design method for non-thermal specialists  

Microsoft Academic Search

Emphasis on solar energy for use in space heating and cooling presents a problem for many architects, heating, ventilating and air conditioning engineers, and contractors because they lack expertise in solar applications. This paper describes two public-domain computer design programs, written for use by the solar community. SOLCOST, a simplified sizing design method for nonthermal specialist users, computes an optimum

M. Connolly; R. Giellis; C. Jensen; R. McMordie

1976-01-01

81

Review of Computational Stirling Analysis Methods  

NASA Technical Reports Server (NTRS)

Nuclear thermal to electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in its current designs could be better understood. However, they are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of Stirling loss understanding may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique, is presented in detail.

Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

2004-01-01

82

Universal Tailored Access: Automating Setup of Public and Classroom Computers.  

ERIC Educational Resources Information Center

This article describes a setup smart access card that enables users with visual impairments to customize magnifiers and screen readers on computers by loading the floppy disk into the computer and finding and pressing two successive keys. A trial with four elderly users found instruction took about 15 minutes. (Contains 3 references.) (CR)

Whittaker, Stephen G.; Young, Ted; Toth-Cohen, Susan

2002-01-01

83

A Bibliography of Selected Rand Publications; Computing Technology.  

National Technical Information Service (NTIS)

The bibliography contains 308 abstracts of unclassified Rand studies dealing with various aspects of computing technology. The studies selected have all been issued during the period January 1963 through August 1971. The intention is to revise the bibliog...

1971-01-01

84

Guidelines for Improving Security and Privacy in Public Cloud Computing.  

National Technical Information Service (NTIS)

Cloud computing is an emerging technology that can help organizations become more efficient and agile, and respond more quickly and reliably to their customers' needs. Many government and private sector organizations are currently using or considering the...

2012-01-01

85

Soils Tests Computer Programs. A Water Resources Technical Publication.  

National Technical Information Service (NTIS)

Electronic computer programs have been written to perform the required calculations for numerous soils tests performed in the Soils Engineering Laboratory. The test programs written and currently being used by the Soils Laboratory are: natural moisture-den...

P. C. Knodel

1966-01-01

86

Computational methods for optical molecular imaging.  

PubMed

A new computational technique, the matched interface and boundary (MIB) method, is presented to model photon propagation in biological tissue for optical molecular imaging. Optical properties differ significantly among the organs of small animals, resulting in discontinuous coefficients in the diffusion equation model. The complex organ shapes of small animals induce singularities in the geometric model as well. The MIB method is designed as a dimension-splitting approach that decomposes a multidimensional interface problem into one-dimensional ones. The methodology simplifies the topological relation near an interface and is able to handle discontinuous coefficients and complex interfaces with geometric singularities. In the present MIB method, both the interface jump condition and the photon flux jump conditions are rigorously enforced at the interface location by using only the lowest-order jump conditions. The solution near the interface is smoothly extended across the interface so that central finite difference schemes can be employed without loss of accuracy. A wide range of numerical experiments are carried out to validate the proposed MIB method. Second-order convergence is maintained in all benchmark problems, and fourth-order convergence is also demonstrated for some three-dimensional problems. The robustness of the proposed method with respect to the strength of the linear term of the diffusion equation is also examined. The performance of the present approach is compared with that of the standard finite element method. The numerical study indicates that the proposed method is a potentially efficient and robust approach for optical molecular imaging. PMID:20485461

Chen, Duan; Wei, Guo-Wei; Cong, Wen-Xiang; Wang, Ge

2009-01-01

87

Computational predictive methods for fracture and fatigue  

NASA Technical Reports Server (NTRS)

The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F-16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F-16 showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

1994-01-01

88

Modules and methods for all photonic computing  

DOEpatents

A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.
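The claim above leaves the iterative algorithm abstract. As a purely hypothetical illustration of the kind of "iterative algorithm useful for solving a partial differential equation" that a recirculating element could embody, the sketch below performs Jacobi relaxation on a 1D Poisson problem, where each loop pass plays the role of one recirculation through the second element.

```python
import numpy as np

# Toy illustration (not the patented optical hardware): Jacobi relaxation
# for -u'' = f on [0,1] with u(0) = u(1) = 0.  The encoded input data is
# the initial field; each sweep is one "recirculation" through a fixed
# element; after a predetermined number of iterations the output field
# approximates the PDE solution.
n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                      # constant source term
u = np.zeros(n)                     # initial encoded field
for _ in range(20000):              # predetermined number of iterations
    u[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
exact = 0.5 * x * (1.0 - x)         # analytic solution of -u'' = 1
err = np.max(np.abs(u - exact))
```

The NumPy slice assignment evaluates the right-hand side before writing, so this is a true Jacobi (simultaneous) update rather than Gauss-Seidel.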

Schultz, David R. (Knoxville, TN); Ma, Chao Hung (Oak Ridge, TN)

2001-01-01

89

Optical design teaching by computing graphic methods  

NASA Astrophysics Data System (ADS)

One of the key challenges in the teaching of Optics is that students need to know not only the mathematics of optical design but also, and more importantly, to grasp and understand the optics in three-dimensional space. Having a clear image of the problem to solve is the first step toward solving it. Therefore, students must not only know the equation of the refraction law but also understand how its main parameters interact; this should be a major goal of the teaching course. Optical graphic methods are a valuable tool here, since they combine the advantage of visual information with the accuracy of computer calculation.

Vazquez-Molini, D.; Muñoz-Luna, J.; Fernandez-Balbuena, A. A.; Garcia-Botella, A.; Belloni, P.; Alda, J.

2012-10-01

90

Domain decomposition methods in computational fluid dynamics  

NASA Astrophysics Data System (ADS)

The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
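A minimal sketch of the divide-and-conquer idea (alternating Schwarz on a 1D Poisson problem with two overlapping subdomains; an illustrative toy, not the authors' Newton iteration or multiple-discretization preconditioner):

```python
import numpy as np

def subdomain_solve(f, ua, ub, h):
    """Direct solve of -u'' = f on one subdomain with Dirichlet values ua, ub."""
    m = len(f)
    A = (np.diag(2.0 * np.ones(m))
         + np.diag(-np.ones(m - 1), 1)
         + np.diag(-np.ones(m - 1), -1))
    rhs = h * h * f
    rhs[0] += ua
    rhs[-1] += ub
    return np.linalg.solve(A, rhs)

n = 41                              # global grid on [0,1], u(0) = u(1) = 0
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)
u = np.zeros(n)                     # global iterate
l1, r1 = 1, 26                      # subdomain 1: interior nodes 1..25
l2, r2 = 15, 40                     # subdomain 2: interior nodes 15..39 (overlap 15..25)
for _ in range(30):                 # alternating Schwarz sweeps
    u[l1:r1] = subdomain_solve(f[l1:r1], u[l1 - 1], u[r1], h)
    u[l2:r2] = subdomain_solve(f[l2:r2], u[l2 - 1], u[r2], h)
exact = 0.5 * x * (1.0 - x)         # analytic solution of -u'' = 1
err = np.max(np.abs(u - exact))
```

Each subdomain is solved independently using the latest interface values from its neighbor; the overlap region is what drives the geometric convergence, and in parallel practice the subdomain solves are distributed across processors.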

Gropp, William D.; Keyes, David E.

1991-02-01

91

Domain decomposition methods in computational fluid dynamics  

NASA Astrophysics Data System (ADS)

The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

Gropp, William D.; Keyes, David E.

1992-01-01

92

The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers  

ERIC Educational Resources Information Center

Public libraries play an important part in the development of a community. Today, they are seen as more than storehouses of books; they are also responsible for the dissemination of online and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for Internet access. Using a…

DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert

2013-01-01

93

Public library computer training for older adults to access high-quality Internet health information  

Microsoft Academic Search

An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at

Bo Xie; Julie M. Bugg

2009-01-01

94

Computer Technology Standards of Learning for Virginia's Public Schools  

ERIC Educational Resources Information Center

The Computer/Technology Standards of Learning identify and define the progressive development of essential knowledge and skills necessary for students to access, evaluate, use, and create information using technology. They provide a framework for technology literacy and demonstrate a progression from physical manipulation skills for the use of…

Virginia Department of Education, 2005

2005-01-01

95

The ACLS Survey of Scholars: Views on Publications, Computers, Libraries.  

ERIC Educational Resources Information Center

Reviews results of a survey by the American Council of Learned Societies (ACLS) of 3,835 scholars in the humanities and social sciences who are working both in colleges and universities and outside the academic community. Areas highlighted include professional reading, authorship patterns, computer use, and library use. (LRW)

Morton, Herbert C.; Price, Anne Jamieson

1986-01-01

96

Computers in Public Schools: Changing the Image with Image Processing.  

ERIC Educational Resources Information Center

The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

Raphael, Jacqueline; Greenberg, Richard

1995-01-01

97

Program Documentation of a Computer Model for Variable Calculations of the Public School Foundation Program. Revised.  

ERIC Educational Resources Information Center

This publication documents the revised Alaska Finance Foundation Simulation Program, a computer finance simulation package for the Alaska School District Foundation Formula. The introduction briefly describes the program, which was written in Fortran for a Honeywell '66' computer located at the University of Alaska, Fairbanks, and allows…

Fullam, T. J.

98

Adding It Up: Is Computer Use Associated with Higher Achievement in Public Elementary Mathematics Classrooms?  

ERIC Educational Resources Information Center

Despite support for technology in schools, there is little evidence indicating whether using computers in public elementary mathematics classrooms is associated with improved outcomes for students. This exploratory study examined data from the Early Childhood Longitudinal Study, investigating whether students' frequency of computer use was related…

Kao, Linda Lee

2009-01-01

99

The Administrative Impact of Computers on the British Columbia Public School System.  

ERIC Educational Resources Information Center

This case study analyzes and evaluates the administrative computer systems in the British Columbia public school organization in order to investigate the costs and benefits of computers, their impact on managerial work, their influence on centralization in organizations, and the relationship between organizational objectives and the design of…

Gibbens, Trevor P.

100

Statistical Backgrounds and Computing Needs of Graduate Students in Political Science and Public Administration Programs.  

ERIC Educational Resources Information Center

The article integrates information on three topics--the quantitative backgrounds and computing needs of social science students; cooperation among social science instructors, students, and computer center user consultants; and attitudes of instructors in public administration and political science doctoral programs. (Author/DB)

Hy, Ronald John; And Others

1981-01-01

101

Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results  

NASA Technical Reports Server (NTRS)

Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

2013-01-01

102

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?  

Code of Federal Regulations, 2013 CFR

...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...

2013-07-01

103

Computational and design methods for advanced imaging  

NASA Astrophysics Data System (ADS)

This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second part discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage this method permits is the ability for the illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques of raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

Birch, Gabriel C.

104

Key management of the double random-phase-encoding method using public-key encryption  

NASA Astrophysics Data System (ADS)

Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image is encrypted by using the double random-phase-encoding method with an extended fractional Fourier transform. The keys of the encryption process are encoded by using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key is then transmitted to the receiver side along with the encrypted image. In the decryption process, the encoded key is first decrypted using the secret key, and then the encrypted image is decrypted by using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the key is eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
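The double random-phase step can be sketched with an ordinary FFT (the paper uses an extended fractional Fourier transform and additionally RSA-encrypts the keys; both are omitted here, so this is only an illustrative simplification in which the RNG seed plays the role of the key):

```python
import numpy as np

rng = np.random.default_rng(42)     # the seed stands in for the secret keys
f = rng.random((64, 64))            # hypothetical input "image"

# Classic double random-phase encoding: one random phase mask in the
# input plane, one in the Fourier plane.
m1 = np.exp(2j * np.pi * rng.random(f.shape))
m2 = np.exp(2j * np.pi * rng.random(f.shape))
encrypted = np.fft.ifft2(np.fft.fft2(f * m1) * m2)

# Decryption with the conjugate masks inverts each step exactly.
decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(m2)) * np.conj(m1)
err = np.max(np.abs(decrypted.real - f))
```

Without both masks the encrypted field is stationary white noise; the scheme the record describes then protects the mask keys themselves with RSA for transmission.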

Saini, Nirmala; Sinha, Aloka

2010-03-01

105

ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING  

EPA Science Inventory

The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

106

Computer telephony integration (CTI) systems and methods for enhancing school safety  

US Patent & Trademark Office Database

Methods and systems are disclosed for enabling a dynamic computer telephony integration campus call center that leverages the assets of a school communications system including internal telecommunications networks, information systems, data networks, and applications, of public telecommunications networks, of public data networks, and/or of various communications devices to facilitate improved access, sharing, notification, and/or management of communications (e.g., external and internal communications) and associated data to enhance school safety services.

2012-04-03

107

Element-free Galerkin method for electromagnetic field computations  

Microsoft Academic Search

Although numerically very efficient, the finite element method exhibits difficulties whenever remeshing of the analysis domain must be performed. For such problems, utilizing meshless computation methods is very promising. In this paper, a meshless method called the element-free Galerkin method is introduced for electromagnetic field computation. The mathematical background for the moving least square approximation employed in

Vlatko CINGOSKI; Naoki MIYAMOTO; Hideo Yamashita

1998-01-01

108

A typology of health marketing research methods--combining public relations methods with organizational concern.  

PubMed

Research plays a critical role throughout virtually every conduit of the health services industry. The key terms of research, public relations, and organizational interests are discussed. Combining public relations as a strategic methodology with the organizational concern as a factor, a typology of four different research methods emerges. These four health marketing research methods are: investigative, strategic, informative, and verification. The implications of these distinct and contrasting research methods are examined. PMID:19042536

Rotarius, Timothy; Wan, Thomas T H; Liberman, Aaron

2007-01-01

109

Python for Education: Computational Methods for Nonlinear Systems  

NSDL National Science Digital Library

The authors' interdisciplinary computational methods course uses Python and associated numerical and visualization libraries to enable students to implement simulations for several different course modules, which highlight the breadth and flexibility of Python-powered computational environments.
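As a hypothetical flavor of such a course module (not taken from the authors' materials), a few lines of Python suffice to explore a classic nonlinear system, the logistic map:

```python
# Iterating the logistic map x -> r*x*(1-x), a standard introductory
# model of nonlinear dynamics.
def logistic_orbit(r, x0, n):
    """Return the first n iterates of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# For r = 2.5 the orbit settles onto the stable fixed point 1 - 1/r = 0.6;
# for r near 4 the same three lines exhibit chaos.
orbit = logistic_orbit(2.5, 0.1, 200)
```

Plotting such orbits with matplotlib, and sweeping r to build a bifurcation diagram, is the kind of short exercise the record's Python-powered modules enable.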

Myers, Christopher; Sethna, James

2008-07-23

110

Computational Methods in Advanced Imaging Sciences.  

National Technical Information Service (NTIS)

The broad objective of this project was the development of efficient computational algorithms to solve important problems in optical imaging. This provided support for the Air Force's Partnerships for Research Excellence and Transition (PRET) in Advanced ...

C. R. Vogel

2006-01-01

111

Methods of Astrodynamics, a Computer Approach.  

National Technical Information Service (NTIS)

A library of PASCAL and FORTRAN computer routines for solving various astrodynamic problems is presented. Diagrams and equations are given for each routine. The main part of the document is the actual code. Rigorous documentation and coding discipline were used ...

D. A. Vallado

1991-01-01

112

76 FR 67418 - Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing...  

Federal Register 2010, 2011, 2012, 2013

...Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing...comments on the first draft of Special Publication 500-293, US Government Cloud Computing...service provider interaction.'' Special Publication 800-145 (Draft). \\2\\ Office...

2011-11-01

113

New computational methods for full and subset Zernike moments  

Microsoft Academic Search

The computation of Zernike radial polynomials contributes most of the computation time in computing the Zernike moments due to the involvement of factorial terms. The common approaches used in fast computation of Zernike moments are Kintner’s, Prata’s, coefficient and q-recursive methods. In this paper, we propose faster methods to derive the full set of Zernike moments as well as a
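For reference, the direct factorial evaluation that these recursive schemes are designed to avoid can be sketched as follows (standard Zernike radial polynomial definition; the paper's Kintner, Prata, coefficient and q-recursive methods are not reproduced here):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Direct evaluation of the Zernike radial polynomial R_n^m(rho) from
    its factorial definition -- the factorial-heavy computation that the
    fast recursive methods accelerate."""
    m = abs(m)
    if (n - m) % 2:                 # R_n^m vanishes when n - m is odd
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = (-1) ** k * factorial(n - k) / (
            factorial(k)
            * factorial((n + m) // 2 - k)
            * factorial((n - m) // 2 - k)
        )
        total += coeff * rho ** (n - 2 * k)
    return total
```

For example, the formula reproduces R_4^0(rho) = 6 rho^4 - 6 rho^2 + 1; the recursive methods obtain the same values while sharing work across the full set of moments.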

Chong-yaw Wee; Paramesran Raveendran; Fumiaki Takeda

2004-01-01

114

Public health surveillance: historical origins, methods and evaluation.  

PubMed Central

In the last three decades, disease surveillance has grown into a complete discipline, quite distinct from epidemiology. This expansion into a separate scientific area within public health has not been accompanied by parallel growth in the literature about its principles and methods. The development of the fundamental concepts of surveillance systems provides a basis on which to build a better understanding of the subject. In addition, the concepts have practical value as they can be used in designing new systems as well as understanding or evaluating currently operating systems. This article reviews the principles of surveillance, beginning with a historical survey of the roots and evolution of surveillance, and discusses the goals of public health surveillance. Methods for data collection, data analysis, interpretation, and dissemination are presented, together with proposed procedures for evaluating and improving a surveillance system. Finally, some points to be considered in establishing a new surveillance system are presented.

Declich, S.; Carter, A. O.

1994-01-01

115

Computational aeroacoustics: Its methods and applications  

NASA Astrophysics Data System (ADS)

The first part of this thesis deals with the methodology of computational aeroacoustics (CAA). It is shown that although the overall accuracy of a broadband optimized upwind scheme can be improved to some degree, a scheme that is accurate everywhere over a wide range is not possible, because increasing the accuracy for large wavenumbers always comes at the expense of decreasing it for smaller wavenumbers. Partly to avoid this dilemma, optimized multi-component schemes are proposed that are superior to optimized broadband schemes for a sound field with dominant wavenumbers. Fourier analysis shows that even for broadband waves an optimized central multi-component scheme is at least comparable to an optimized central broadband scheme. Numerical implementation of the impedance boundary condition in the time domain is a unique and challenging topic in CAA. A benchmark problem is proposed for such implementation and its analytical solution is derived. A CAA code using Tam and Auriault's formulation of the broadband time-domain impedance boundary condition accurately reproduces the analytical solution. For the duct environment, the code also accurately predicts the analytical solution of a semi-infinite impedance duct problem and the experimental data from the NASA Langley Flow Impedance Tube Facility. The second part of the thesis presents applications of the developed CAA codes. A time-domain method is formulated to separate the instability waves from the acoustic waves of the linearized Euler equations in a critical sheared mean flow; its effectiveness is demonstrated with the CAA code solving a test problem. Other applications concern optimization using the CAA codes. A noise prediction and optimization system for turbofan engine inlet duct design is developed and applied in three scenarios: liner impedance optimization, duct geometry optimization, and liner layout optimization. The results show that the system is effective in finding design variable values in favor of a given objective. In a different context of optimization, a conceptual design for adaptive noise control is developed. It consists of a liner with controllable impedance and an expert system realized with an optimizer coupled with the CAA code. The expert system is shown to be able to find impedance properties that minimize the difference between the current and the desired acoustic fields.
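The large-wavenumber dilemma described above can be made concrete with a standard modified-wavenumber (Fourier) analysis. The sketch below uses the plain second-order central difference, for which the resolved wavenumber satisfies k*h = sin(kh), rather than the thesis's optimized schemes:

```python
import numpy as np

# Modified-wavenumber analysis of the second-order central difference
# (u_{j+1} - u_{j-1}) / (2h): substituting a Fourier mode exp(i k x_j)
# shows the scheme propagates an effective wavenumber k* with
# k* h = sin(k h).  Error is small for well-resolved waves and grows
# toward the grid-scale limit k h = pi.
kh = np.linspace(0.01, np.pi, 500)          # nondimensional wavenumber
kh_modified = np.sin(kh)                    # effective wavenumber of the scheme
rel_err = np.abs(kh_modified - kh) / kh

low = rel_err[kh < 0.2].max()               # well-resolved waves: < 1% error
high = rel_err[-1]                          # grid-scale waves: ~100% error
```

Optimized (dispersion-relation-preserving) schemes reshape this error curve to widen the accurate band, but, as the abstract notes, flattening it at large kh necessarily worsens the small-kh end.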

Zheng, Shi

116

A Comparative Assessment of Computer Literacy of Private and Public Secondary School Students in Lagos State, Nigeria  

ERIC Educational Resources Information Center

The aim of this study was to conduct a comparative assessment of computer literacy of private and public secondary school students. Although the definition of computer literacy varies widely, this study treated computer literacy in terms of access to, and use of, computers and the internet, basic knowledge and skills required to use computers and…

Osunwusi, Adeyinka Olumuyiwa; Abifarin, Michael Segun

2013-01-01

117

Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment  

PubMed Central

Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved.

Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

2013-01-01

118

Method of performing computational aeroelastic analyses  

NASA Technical Reports Server (NTRS)

Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

Silva, Walter A. (Inventor)

2011-01-01

119

Methods for multiphase computational fluid dynamics  

Microsoft Academic Search

This paper presents an overview of the physical models for computational fluid dynamic (CFD) predictions of multiphase flows. The governing equations and closure models are derived and presented for fluid–solid flows and fluid–fluid flows, both in an Eulerian and a Lagrangian framework. Some results obtained with these equations are presented. Finally, the capabilities and limitations of multiphase CFD are discussed.

B. G. M. van Wachem; A. E. Almstedt

2003-01-01

120

Seismic borehole tomography - Theory and computational methods  

Microsoft Academic Search

Tomographic inversion can be applied to seismic travel time data to obtain a map of seismic velocities in a rock volume, although the field geometries will not, in general, allow a complete ray sampling. This paper deals with theoretical and computational aspects of tomographic inversion under such circumstances. It is shown that some usual measurement geometries could, in theory, be

S. Ivansson

1986-01-01

121

Computational aeroacoustics: Its methods and applications  

Microsoft Academic Search

The first part of this thesis deals with the methodology of computational aeroacoustics (CAA). It is shown that although the overall accuracy of a broadband optimized upwind scheme can be improved to some degree, a scheme that is accurate everywhere in a wide range is not possible because increasing the accuracy for large wavenumbers is always at the expense of

Shi Zheng

2006-01-01

122

Computer Virus Strategies and Detection Methods  

Microsoft Academic Search

The typical antivirus approach consists of waiting for a number of computers to be infected, detecting the virus, designing a solution, and delivering and deploying that solution. In such a situation, it is very difficult to prevent every machine from being compromised by the virus. This paper shows that to develop new reliable antivirus software some problems must be solved, such as:

Essam Al Daoud; Iqbal H. Jebril; Belal Zaqaibeh

2008-01-01

123

Methods towards invasive human brain computer interfaces  

Microsoft Academic Search

During the last ten years there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has mainly been driven by the needs of completely paralyzed patients to communicate. With a few exceptions, most human BCIs are based on extracranial electroencephalography (EEG). However, reported bit rates are still low. One reason for this is the low

T. N. Lal; T. Hinterberger; G. Widman; N. J. Hill; W. Rosenstiel; C. E. Elger; N. Birbaum

2005-01-01

124

Methods Towards Invasive Human Brain Computer Interfaces  

Microsoft Academic Search

During the last ten years there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has mainly been driven by the needs of completely paralyzed patients to communicate. With a few exceptions, most human BCIs are based on extracranial electroencephalography (EEG). However, reported bit rates are still low. One reason for this is

Thomas Navin Lal; Thilo Hinterberger; Guido Widman; Michael Schröder; N. Jeremy Hill; Wolfgang Rosenstiel; Christian Erich Elger; Bernhard Schölkopf; Niels Birbaumer

2004-01-01

125

Performance of the IDS Method as a Soft Computing Tool  

Microsoft Academic Search

Performance factors such as robustness, speed, and tractability are important for the realization of practical computing systems. The aim of soft computing is to achieve these factors in practice by tolerating imprecision and uncertainty instead of depending on exact mathematical computations. The ink drop spread (IDS) method is a modeling technique that has been proposed as a new approach to

Masayuki Murakami; Nakaji Honda

2008-01-01

126

Checklist and Pollard Walk butterfly survey methods on public lands  

USGS Publications Warehouse

Checklist and Pollard Walk butterfly survey methods were contemporaneously applied to seven public sites in North Dakota during the summer of 1995. Results were compared for effect of method and site on total number of butterflies and total number of species detected per hour. Checklist searching produced significantly more butterfly detections per hour than Pollard Walks at all sites. Number of species detected per hour did not differ significantly either among sites or between methods. Many species were detected by only one method, and at most sites generalist and invader species were more likely to be observed during checklist searches than during Pollard Walks. Results indicate that checklist surveys are a more efficient means for initial determination of a species list for a site, whereas for long-term monitoring the Pollard Walk is more practical and statistically manageable. Pollard Walk transects are thus recommended once a prairie butterfly fauna has been defined for a site by checklist surveys.

Royer, R. A.; Austin, J. E.; Newton, W. E.

1998-01-01

127

Public participation GIS: a method for identifying ecosystems services  

USGS Publications Warehouse

This study evaluated the use of an Internet-based public participation geographic information system (PPGIS) to identify ecosystem services in Grand County, Colorado. Specific research objectives were to examine the distribution of ecosystem services, identify the characteristics of participants in the study, explore potential relationships between ecosystem services and land use and land cover (LULC) classifications, and assess the methodological strengths and weakness of the PPGIS approach for identifying ecosystem services. Key findings include: (1) Cultural ecosystem service opportunities were easiest to identify while supporting and regulatory services most challenging, (2) participants were highly educated, knowledgeable about nature and science, and have a strong connection to the outdoors, (3) some LULC classifications were logically and spatially associated with ecosystem services, and (4) despite limitations, the PPGIS method demonstrates potential for identifying ecosystem services to augment expert judgment and to inform public or environmental policy decisions regarding land use trade-offs.

Brown, Greg; Montag, Jessica; Lyon, Katie

2012-01-01

128

Computational methods for physical mapping of chromosomes  

SciTech Connect

A standard technique for mapping a chromosome is to randomly select pieces, to use restriction enzymes to cut these pieces into fragments, and then to use the fragments for estimating the probability of overlap of these pieces. Typically, the order of the fragments within a piece is not determined, and the observed fragment data from each pair of pieces must be permuted N1 × N2 ways to evaluate the probability of overlap, N1 and N2 being the observed number of fragments in the two selected pieces. We will describe computational approaches used to substantially reduce the computational complexity of the calculation of overlap probability from fragment data. Presently, about 10^-4 CPU seconds on one processor of an IBM 3090 is required for calculation of overlap probability from the fragment data of two randomly selected pieces, with an average of ten fragments per piece. A parallel version has been written using IBM clustered FORTRAN. Parallel measurements for 1, 6, and 12 processors will be presented. This approach has proven promising in the mapping of chromosome 16 at Los Alamos National Laboratory. We will also describe other computational challenges presented by physical mapping. 4 refs., 4 figs., 1 tab.

Torney, D.C.; Schenk, K.R. (Los Alamos National Lab., NM (USA)); Whittaker, C.C. (International Business Machines Corp., Albuquerque, NM (USA); Los Alamos National Lab., NM (USA)); White, S.W. (International Business Machines Corp., Kingston, NY (USA))

1990-01-01

129

A Comparison of Computational Methods for Identifying Virulence Factors  

PubMed Central

Bacterial pathogens continue to threaten public health worldwide today. Identification of bacterial virulence factors can help to find novel drug/vaccine targets against pathogenicity. It can also help to reveal the mechanisms of the related diseases at the molecular level. With the explosive growth in protein sequences generated in the postgenomic age, it is highly desired to develop computational methods for rapidly and effectively identifying virulence factors according to their sequence information alone. In this study, based on the protein-protein interaction networks from the STRING database, a novel network-based method was proposed for identifying the virulence factors in the proteomes of UPEC 536, UPEC CFT073, P. aeruginosa PAO1, L. pneumophila Philadelphia 1, C. jejuni NCTC 11168 and M. tuberculosis H37Rv. Evaluated on the same benchmark datasets derived from the aforementioned species, the identification accuracies achieved by the network-based method were around 0.9, significantly higher than those by sequence-based methods such as BLAST, feature selection and VirulentPred. Further analysis showed that functional associations such as gene neighborhood and co-occurrence were the primary associations between these virulence factors in the STRING database. The high success rates indicate that the network-based method is quite promising: it can be readily extended to identify virulence factors in other bacterial species, as long as the relevant statistical data are available for them.

Zheng, Lu-Lu; Li, Yi-Xue; Ding, Juan; Guo, Xiao-Kui; Feng, Kai-Yan; Wang, Ya-Jun; Hu, Le-Le; Cai, Yu-Dong; Hao, Pei; Chou, Kuo-Chen

2012-01-01

130

Trends in Access to Computing Technology and Its Use in Chicago Public Schools, 2001-2005  

ERIC Educational Resources Information Center

Five years after Consortium on Chicago School Research (CCSR) research revealed a "digital divide" among Chicago Public Schools (CPS) and limited computer usage by staff and students, this new study shows that district schools have overcome many of these obstacles, particularly in terms of technology access and use among teachers and…

Coca, Vanessa; Allensworth, Elaine M.

2007-01-01

131

An application of a computational ecology model to a routing method in computer networks  

Microsoft Academic Search

The paper proposes a network routing method based on a computational ecology model. The computational ecology model is a mathematical model proposed by B.A. Huberman and T. Hogg (1988), which represents a macro action of multi-agent systems. We formulate routing on a computer network as a resource allocation problem, where packets and links are regarded as agents and resources, respectively.

Tatsushi Yamasaki; Toshimitsu Ushio

2002-01-01

132

Computational Method Speeds Mapping of Cell Signaling Networks  

NSF Publications Database

... Release 05-062: Computational Method Speeds Mapping of Cell Signaling Networks. Method helps decode ... cell signaling networks are so complex that mapping them has been a slow, arduous process. Now, a ...

133

Computational methods. [Calculation of dynamic loading to offshore platforms  

Microsoft Academic Search

With regard to the computational methods for hydrodynamic forces, the role of marine hydrodynamics in offshore technology is first identified. General computational methods are then surveyed, and the state of the art and remaining uncertainties in offshore flow problems are discussed, with problems categorized as developed, developing or undeveloped; future work is outlined. Marine hydrodynamics consists of water-surface and underwater fluid dynamics.

Maeda

1993-01-01

134

Graphical electromagnetic computing method combined with IGES files import  

Microsoft Academic Search

In this paper, the graphical electromagnetic computing (GRECO) method combined with an interface that can identify and read IGES files is presented. GRECO is an effective method of predicting the radar cross section (RCS) of complex targets, but a model file from which the shape parameters can be easily obtained must be available beforehand.

Fang Xiang; Su Donglin; Liu Yan

2007-01-01

135

The conjugate gradient regularization method in Computed Tomography problems  

Microsoft Academic Search

In this work we solve inverse problems coming from the area of Computed Tomography by means of regularization methods based on conjugate gradient iterations. We develop a stopping criterion which is efficient for the computation of a regularized solution for the least-squares normal equations. The stopping rule can be suitably applied also to the Tikhonov regularization method. We report computational experiences based on different physical...
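
The truncated-iteration idea in this abstract can be sketched in a few lines: run conjugate gradients on the normal equations A^T A x = A^T b and stop after a limited number of iterations, so that early stopping plays the role of the regularizer. The matrix, the right-hand side, and the simple iteration-count/tolerance stopping rule below are illustrative assumptions, not the stopping criterion developed in the paper.

```python
# Toy sketch: CG on the normal equations A^T A x = A^T b, truncated
# early. All data here are illustrative, not from the paper.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg_normal_equations(A, b, max_iter=10, tol=1e-12):
    """Solve A^T A x = A^T b by conjugate gradients, truncated early."""
    At = transpose(A)
    rhs = matvec(At, b)                    # A^T b
    x = [0.0] * len(rhs)
    r = rhs[:]                             # residual of the normal equations
    p = r[:]
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(At, matvec(A, p))      # (A^T A) p without forming A^T A
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:                   # stand-in stopping rule
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Tiny well-posed example: least-squares solution is x = (1, 2).
A = [[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, 2.0, 3.0]
x = cg_normal_equations(A, b)
```

Note that the product (A^T A) p is applied as two matrix-vector products, which is the standard way to avoid forming A^T A explicitly.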

Elena Loli Piccolomini; Fabiana Zama

1999-01-01

136

Computational Methods for Jet Noise Simulation  

NASA Technical Reports Server (NTRS)

The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.

Goodrich, John W. (Technical Monitor); Hagstrom, Thomas

2003-01-01

137

Reducing MHT computational requirements through use of cheap JPDA methods  

NASA Astrophysics Data System (ADS)

Hypothesis formation is a major computational burden for any multiple hypotheses tracking (MHT) method. In particular, a track-oriented MHT method defines compatible tracks to be tracks not sharing common observations and then re-forms hypotheses from compatible tracks after each new scan of data is received. The Cheap Joint Probabilistic Data Association (CJPDA) method provides an efficient means for computing approximate hypothesis probabilities. This paper presents a method of extending CJPDA calculations in order to eliminate low probability track branches in a track-oriented MHT method. The method is tested using IRST data. This approach reduces the number of tracks in a cluster and the resultant computations required for hypothesis formation. It is also suggested that the use of CJPDA methods can reduce assignment matrix sizes and resultant computations for the hypothesis-oriented (Reid's algorithm) MHT implementation.

Quevedo, Hector A.; Blackman, Samuel S.; Nichols, T.; Dempster, Robert J.; Wenski, R.

2001-11-01

138

COMSAC: Computational Methods for Stability and Control. Part 1  

NASA Technical Reports Server (NTRS)

Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: a NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?

Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

2004-01-01

139

Computational Methods for Antenna Pattern Synthesis.  

National Technical Information Service (NTIS)

Some general numerical methods for antenna pattern synthesis, with and without constraints, are developed in this report. Particular cases considered are (1) field pattern specified in amplitude and phase, (2) field pattern specified in amplitude only, (3...

J. R. Mautz; R. F. Harrington

1973-01-01

140

Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS.  

PubMed

As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities. PMID:23569644

Jalali, Arash; Olabode, Olusegun A; Bell, Christopher M

2012-01-01

141

Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS  

PubMed Central

As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities.

Jalali, Arash; Olabode, Olusegun A.; Bell, Christopher M.

2012-01-01

142

System, method and computer program product for aquatic environment assessment  

US Patent & Trademark Office Database

A system, method and computer program product are provided for assessing aquatic environments. Initially, the user measures biological, geomorphological and physiological parameters quantitatively, semi-quantitatively and qualitatively in the field, guided by the computer software of the present invention, and enters the data obtained from such measurements into a handheld computer processing means running the computer software of the present invention. Then, the data is transferred into a desktop computer processing means for automated analysis, processing and reporting of field data, and production of various user reports through the synchronized, compatible desktop software of the present invention.

2008-04-01

143

A Novel College Network Resource Management Method using Cloud Computing  

NASA Astrophysics Data System (ADS)

At present, college information construction consists mainly of campus network construction and management information systems, and many problems arise in the process. Cloud computing is a development of distributed processing, parallel processing and grid computing: data are stored in the cloud, software and services are placed in the cloud and built on top of various standards and protocols, and both can be accessed through all kinds of equipment. This article introduces cloud computing and its functions, analyzes the existing problems of college network resource management, and applies cloud computing technology and methods to the construction of a college information sharing platform.

Lin, Chen

144

Method for transferring data from an unsecured computer to a secured computer  

DOEpatents

A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
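
The transmit-twice scheme described in this patent abstract can be sketched concretely: send the data twice, detect corruption on each pass, and emit the warning only when both the transmission and the retransmission arrive damaged. The CRC32 framing and all function names below are illustrative assumptions; the abstract does not specify how errors are detected.

```python
# Minimal sketch of the double-transmission idea: warn only when both
# passes are corrupted. The checksum mechanism is an assumption.
import zlib

def frame(data: bytes) -> bytes:
    """Sender side: append a CRC32 so the receiver can detect corruption."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def corrupted(framed: bytes) -> bool:
    """Receiver side: True if the payload no longer matches its CRC32."""
    data, crc = framed[:-4], int.from_bytes(framed[-4:], "big")
    return zlib.crc32(data) != crc

def receive_twice(first_pass: bytes, second_pass: bytes):
    """Return (data, warn): data from the first clean pass; warn=True
    when both the transmission and the retransmission were corrupted."""
    bad_first, bad_second = corrupted(first_pass), corrupted(second_pass)
    if not bad_first:
        return first_pass[:-4], False
    if not bad_second:
        return second_pass[:-4], False
    return None, True          # both passes damaged: emit the warning

payload = frame(b"sensor log 42")
clean, warn = receive_twice(payload, payload)
# Flip a byte in each copy to simulate errors on both passes.
damaged = bytes([payload[0] ^ 0xFF]) + payload[1:]
_, warn_both = receive_twice(damaged, damaged)
```

In this sketch the retransmission doubles as the recovery path: a single corrupted pass is silently repaired from the other copy, matching the abstract's requirement that the warning fires only when both passes show errors.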

Nilsen, Curt A. (Castro Valley, CA)

1997-01-01

145

Computational method for analysis of polyethylene biodegradation  

NASA Astrophysics Data System (ADS)

In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into analysis of our model. We first establish its mathematical foundation in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem to determine the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of the polyethylene biodegradation posed in modeling are indeed appropriate.
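
The kind of linear ODE system described here can be sketched with a toy forward-Euler integration: beta-oxidation shifts weight from each molecular-weight class to the next-lower one, while the lowest class is also consumed directly. The class structure, rates, and step size below are illustrative assumptions, not the paper's fitted model.

```python
# Toy Euler sketch of a weight-class degradation model. All rates and
# initial weights are illustrative assumptions.

def step(w, beta, alpha, dt):
    """One Euler step. w[i] is total weight in class i (ascending size)."""
    n = len(w)
    dw = [0.0] * n
    for i in range(n):
        dw[i] -= beta * w[i]          # mass leaving class i by oxidation
        if i + 1 < n:
            dw[i] += beta * w[i + 1]  # mass arriving from the class above
    dw[0] -= alpha * w[0]             # direct microbial consumption
    return [wi + dt * dwi for wi, dwi in zip(w, dw)]

def simulate(w0, beta, alpha, dt, steps):
    w = list(w0)
    for _ in range(steps):
        w = step(w, beta, alpha, dt)
    return w

w_final = simulate([1.0, 1.0, 1.0], beta=0.1, alpha=0.05, dt=0.1, steps=100)
total0, total_final = 3.0, sum(w_final)
```

Because mass shifted between classes is conserved and only the lowest class loses mass to consumption, the total weight decreases monotonically, mirroring the observed GPC weight loss.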

Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro

2003-12-01

146

Computational genetics: finding protein function by nonhomology methods  

Microsoft Academic Search

During the past year, computational methods have been developed that use the rapidly accumulating genomic data to discover protein function. The methods rely on properties shared by functionally related proteins other than sequence or structural similarity. Instead, these ‘nonhomology’ methods analyze patterns such as domain fusion, conserved gene position and gene co-inheritance and coexpression to identify protein–protein relationships.

Edward M Marcotte

2000-01-01

147

Iterated Runge-Kutta Methods on Parallel Computers.  

National Technical Information Service (NTIS)

Diagonally implicit iteration methods for solving implicit Runge-Kutta methods with high stage order on parallel computers are studied. These iteration methods are such that after a finite number of m iterations, the iterated Runge-Kutta method belongs to...
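
The core idea, replacing the exact solve of an implicit Runge-Kutta stage equation with a fixed number m of iterations, can be illustrated on the implicit midpoint rule. Plain fixed-point iteration stands in below for the diagonally implicit iteration scheme studied in the report; the test equation and parameters are illustrative assumptions.

```python
# Sketch: implicit midpoint rule with the stage equation
# k = f(y + (h/2) k) solved by m fixed-point sweeps instead of exactly.

def midpoint_step(f, y, h, m):
    """One implicit-midpoint step with m iterative corrections."""
    k = f(y)                      # explicit predictor for the stage value
    for _ in range(m):
        k = f(y + 0.5 * h * k)    # correct the stage m times
    return y + h * k

def integrate(f, y0, h, steps, m):
    y = y0
    for _ in range(steps):
        y = midpoint_step(f, y, h, m)
    return y

# Test problem y' = -y, y(0) = 1; exact value at t = 1 is exp(-1).
approx = integrate(lambda y: -y, 1.0, h=0.01, steps=100, m=3)
```

On parallel hardware the appeal of such schemes is that each iteration evaluates all stages of the underlying Runge-Kutta method independently, so the stage evaluations can be distributed across processors.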

P. J. van der Houwen; B. P. Sommeijer

1990-01-01

148

Transonic Flow Computations Using Nonlinear Potential Methods  

NASA Technical Reports Server (NTRS)

This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.

Holst, Terry L.; Kwak, Dochan (Technical Monitor)

2000-01-01

149

A Stochastic Approximation Method for Reachability Computations  

Microsoft Academic Search

We develop a grid-based method for estimating the probability that the trajectories of a given stochastic system will eventually enter a certain target set during a – possibly infinite – look-ahead time horizon. The distinguishing feature of the proposed methodology is that it rests on the approximation of the solution to stochastic differential equations by using Markov chains.
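
The Markov-chain flavor of such reachability computations can be illustrated on the simplest possible chain: a symmetric random walk on a one-dimensional grid, where the probability of reaching the target state before the opposite boundary satisfies a fixed-point equation that can be iterated to convergence. The chain below is an illustrative stand-in for the grid approximation of a stochastic differential equation, not the paper's construction.

```python
# Toy reachability computation: p[i] = P(hit state n-1 before state 0
# | start at i) for a symmetric random walk, by fixed-point sweeps.

def hitting_probability(n_states, sweeps=20000):
    last = n_states - 1
    p = [0.0] * n_states
    p[last] = 1.0                        # target set: already reached
    for _ in range(sweeps):
        for i in range(1, last):         # absorbing states 0 and last fixed
            p[i] = 0.5 * p[i - 1] + 0.5 * p[i + 1]
    return p

p = hitting_probability(11)
# For this walk the exact answer from state i is i/10.
```

The sweep above is the discrete analogue of solving the backward equation for the reach probability; finer grids and general transition kernels follow the same pattern.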

Maria Prandini; Jianghai Hu

150

Public involvement in multi-objective water level regulation development projects-evaluating the applicability of public involvement methods  

SciTech Connect

Public involvement is a process that involves the public in the decision making of an organization, for example a municipality or a corporation. It has developed into a widely accepted and recommended policy in environment altering projects. The EU Water Framework Directive (WFD) took force in 2000 and stresses the importance of public involvement in composing river basin management plans. Therefore, the need to develop public involvement methods for different situations and circumstances is evident. This paper describes how various public involvement methods have been applied in a development project involving the most heavily regulated lake in Finland. The objective of the project was to assess the positive and negative impacts of regulation and to find possibilities for alleviating the adverse impacts on recreational use and the aquatic ecosystem. An exceptional effort was made towards public involvement, which was closely connected to planning and decision making. The applied methods were (1) steering group work, (2) survey, (3) dialogue, (4) theme interviews, (5) public meeting and (6) workshops. The information gathered using these methods was utilized in different stages of the project, e.g., in identifying the regulation impacts, comparing alternatives and compiling the recommendations for regulation development. After describing our case and the results from the applied public involvement methods, we will discuss our experiences and the feedback from the public. We will also critically evaluate our own success in coping with public involvement challenges. In addition to that, we present general recommendations for dealing with these problematic issues based on our experiences, which provide new insights for applying various public involvement methods in multi-objective decision making projects.

Väntänen, Ari [Department of Sociology, FIN 20014 University of Turku (Finland)]. E-mail: armiva@utu.fi; Marttunen, Mika [Department for Expert Services, Finnish Environment Institute, P.O. Box 140 FIN 00251 Helsinki (Finland)]. E-mail: Mika.Marttunen@ymparisto.fi

2005-04-15

151

Low-Rank Incremental Methods for Computing Dominant Singular Subspaces  

SciTech Connect

Computing the singular values and vectors of a matrix is a crucial kernel in numerous scientific and industrial applications. As such, numerous methods have been proposed to handle this problem in a computationally efficient way. This paper considers a family of methods for incrementally computing the dominant SVD of a large matrix A. Specifically, we describe a unification of a number of previously disparate methods for approximating the dominant SVD via a single pass through A. We tie the behavior of these methods to that of a class of optimization-based iterative eigensolvers on A'*A. An iterative procedure is proposed which allows the computation of an accurate dominant SVD via multiple passes through A. We present an analysis of the convergence of this iteration, and provide empirical demonstration of the proposed method on both synthetic and benchmark data.
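
The connection the paper draws to iterative eigensolvers on A'*A can be sketched directly: the dominant right singular vector of A is the dominant eigenvector of A'A, so it can be found by power iteration that touches A only through matrix-vector products (conceptually one pass over A per iteration). The tiny dense matrix and plain power iteration below are illustrative assumptions, much simpler than the incremental methods the paper unifies.

```python
# Sketch: dominant singular pair of A via power iteration on A'A,
# using only products with A and its transpose.

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def dominant_singular(A, iters=200):
    """Return (sigma, v): largest singular value and right singular vector."""
    At = [list(col) for col in zip(*A)]
    v = [1.0] * len(A[0])
    for _ in range(iters):
        w = matvec(At, matvec(A, v))      # one application of A'A
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Av = matvec(A, v)
    sigma = sum(x * x for x in Av) ** 0.5
    return sigma, v

# diag(3, 1) embedded in a tall matrix: largest singular value is 3.
A = [[3.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
sigma, v = dominant_singular(A)
```
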

Baker, Christopher G [ORNL]; Gallivan, Dr. Kyle A [Florida State University]; Van Dooren, Dr. Paul [Universite Catholique de Louvain]

2012-01-01

152

Computational methods for internal flows with emphasis on turbomachinery  

NASA Technical Reports Server (NTRS)

Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications, while viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

Mcnally, W. D.; Sockol, P. M.

1981-01-01

153

Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems  

NASA Technical Reports Server (NTRS)

A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.

Terrile, Richard J.; Guillaume, Alexandre

2011-01-01

154

A method for computing spectral reflectance.  

PubMed

Psychophysical experiments show that the perceived colour of an object is relatively independent of the spectrum of the incident illumination and mainly depends on the surface spectral reflectance. We first demonstrate a possible solution to this underdetermined problem for a Mondrian world of flat rectangular patches. We expand the illumination and surface reflectances in terms of a finite number of basis functions. We assume that the number of colour receptors is greater than the number of basis functions. This yields a set of nonlinear equations for each colour patch. Number counting arguments show that, given a sufficient number of surface patches with the same illumination, there are enough equations to determine the surface reflectances up to an overall scaling factor. This theory is similar to previous and independent work by Maloney and Wandell (Maloney 1985). We demonstrate a simple method of solving these non-linear equations. We generalize to situations where the illumination varies in space and the objects are three dimensional shapes. To do this we define a method for detecting material changes, a colour edge detector, and illustrate a way of detecting the colour of a material at its boundaries and propagating it inwards. PMID:3593787
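
The basis-function setup can be made concrete in the easy special case where the illuminant is known: each receptor response is then linear in the patch's reflectance coefficients, and with more receptors than basis functions the coefficients follow from a least-squares solve. All numbers below are illustrative assumptions; the paper treats the harder nonlinear case where the illuminant is unknown too.

```python
# Toy sketch: recover two reflectance basis weights from three receptor
# responses by solving the 2x2 normal equations (Cramer's rule).

def recover_weights(M, r):
    """Least-squares solve of the 3x2 system M w = r."""
    c0 = [row[0] for row in M]
    c1 = [row[1] for row in M]
    a = sum(x * x for x in c0)
    b = sum(x * y for x, y in zip(c0, c1))
    d = sum(y * y for y in c1)
    e = sum(x * ri for x, ri in zip(c0, r))
    f = sum(y * ri for y, ri in zip(c1, r))
    det = a * d - b * b
    return [(e * d - b * f) / det, (a * f - b * e) / det]

# Rows: receptor sensitivities to the two basis reflectances, already
# folded together with a fixed, known illuminant (an assumption).
M = [[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]]
true_w = [0.7, 0.4]
r = [sum(M[i][j] * true_w[j] for j in range(2)) for i in range(3)]
w = recover_weights(M, r)
```
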

Yuille, A

1987-01-01

155

Coarse-graining methods for computational biology.  

PubMed

Connecting the molecular world to biology requires understanding how molecular-scale dynamics propagate upward in scale to define the function of biological structures. To address this challenge, multiscale approaches, including coarse-graining methods, become necessary. We discuss here the theoretical underpinnings and history of coarse-graining and summarize the state of the field, organizing key methodologies based on an emerging paradigm for multiscale theory and modeling of biomolecular systems. This framework involves an integrated, iterative approach to couple information from different scales. The primary steps, which coincide with key areas of method development, include developing first-pass coarse-grained models guided by experimental results, performing numerous large-scale coarse-grained simulations, identifying important interactions that drive emergent behaviors, and finally reconnecting to the molecular scale by performing all-atom molecular dynamics simulations guided by the coarse-grained results. The coarse-grained modeling can then be extended and refined, with the entire loop repeated iteratively if necessary. PMID:23451897

Saunders, Marissa G; Voth, Gregory A

2013-01-01

156

Computer Methods in the Teaching of Library and Information Studies.  

ERIC Educational Resources Information Center

Presents results of a survey of library and information studies (LIS) departments in the United Kingdom that investigated computer methods currently in use in the teaching of LIS courses. Highlights include the Computers in Teaching Initiative (CTI), where LIS departments fit within academic institutions, and the software used. (16 references)…

Rowland, Fytton; Tseng, Gwyneth M.

1991-01-01

157

Computer Simulation: A Method for Training Educational Diagnosticians.  

ERIC Educational Resources Information Center

Northwestern University's Learning Disabilities Program conducted a study to explore and develop ways of applying computer technology to the fields of reading and learning disabilities and to train specialists who are familiar with computer methods. This study was designed to simulate the actual conditions of the Diagnostic Clinic at Northwestern…

Lerner, Janet W.

158

Computer Controlled Oral Test Administration: A Method and Example.  

ERIC Educational Resources Information Center

A computer/tape recorder interface was designed, which permits automatic oral adminstration of "true-false" or "multiple-choice" type tests. This paper describes the hardware and control program software, which were developed to implement the method on a DEC PDP 11 computer. (Author/JKS)

Milligan, W. Lloyd

1978-01-01

159

Computer systems and methods for visualizing data  

SciTech Connect

A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.

Stolte, Chris (Palo Alto, CA); Hanrahan, Patrick (Portola Valley, CA)

2010-07-13

160

Computational Simulations and the Scientific Method  

NASA Technical Reports Server (NTRS)

As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

Kleb, Bil; Wood, Bill

2005-01-01

161

I LIKE Computers versus I LIKERT Computers: Rethinking Methods for Assessing the Gender Gap in Computing.  

ERIC Educational Resources Information Center

There is a burgeoning body of research on gender differences in computing attitudes and behaviors. After a decade of experience, researchers from both inside and outside the field of educational computing research are raising methodological and conceptual issues which suggest that perhaps researchers have shortchanged girls and women in…

Morse, Frances K.; Daiute, Colette

162

SAR/QSAR methods in public health practice  

SciTech Connect

Methods of (Quantitative) Structure-Activity Relationship ((Q)SAR) modeling play an important and active role in ATSDR programs in support of the Agency mission to protect human populations from exposure to environmental contaminants. They are used for cross-chemical extrapolation to complement the traditional toxicological approach when chemical-specific information is unavailable. SAR and QSAR methods are used to investigate adverse health effects and exposure levels, bioavailability, and pharmacokinetic properties of hazardous chemical compounds. They are applied as a part of an integrated systematic approach in the development of Health Guidance Values (HGVs), such as ATSDR Minimal Risk Levels, which are used to protect populations exposed to toxic chemicals at hazardous waste sites. (Q)SAR analyses are incorporated into ATSDR documents (such as the toxicological profiles and chemical-specific health consultations) to support environmental health assessments, prioritization of environmental chemical hazards, and to improve study design, when filling the priority data needs (PDNs) as mandated by Congress, in instances when experimental information is insufficient. These cases are illustrated by several examples, which explain how ATSDR applies (Q)SAR methods in public health practice.

Demchuk, Eugene, E-mail: edemchuk@cdc.gov; Ruiz, Patricia; Chou, Selene; Fowler, Bruce A.

2011-07-15

163

Computer based safety training: an investigation of methods  

PubMed Central

Background: Computer based methods are increasingly being used for training workers, although our understanding of how to structure this training has not kept pace with the changing abilities of computers. Information on a computer can be presented in many different ways and the style of presentation can greatly affect learning outcomes and the effectiveness of the learning intervention. Many questions about how adults learn from different types of presentations and which methods best support learning remain unanswered. Aims: To determine if computer based methods, which have been shown to be effective on younger students, can also be effective for older workers in occupational health and safety training. Methods: Three versions of a computer based respirator training module were developed and presented to manufacturing workers: one consisting of text only; one with text, pictures, and animation; and one with narration, pictures, and animation. After instruction, participants were given two tests: a multiple choice test measuring low level, rote learning; and a transfer test measuring higher level learning. Results: Participants receiving the concurrent narration with pictures and animation scored significantly higher on the transfer test than did workers receiving the other two types of instruction. There were no significant differences between groups on the multiple choice test. Conclusions: Narration with pictures and text may be a more effective method for training workers about respirator safety than other popular methods of computer based training. Further study is needed to determine the conditions for the effective use of this technology.

Wallen, E; Mulloy, K

2005-01-01

164

Platform-independent method for computer aided schematic drawings  

DOEpatents

A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

Vell, Jeffrey L. (Slingerlands, NY); Siganporia, Darius M. (Clifton Park, NY); Levy, Arthur J. (Fort Lauderdale, FL)

2012-02-14

165

Computer method for identification of boiler transfer functions  

NASA Technical Reports Server (NTRS)

Iterative computer aided procedure was developed which provides for identification of boiler transfer functions using frequency response data. Method uses frequency response data to obtain satisfactory transfer function for both high and low vapor exit quality data.

Miles, J. H.

1972-01-01

166

Anti-HIV Drug Development Through Computational Methods.  

PubMed

Although highly active antiretroviral therapy (HAART) is effective in controlling the progression of AIDS, the emergence of drug-resistant strains increases the difficulty of successfully treating patients with HIV infection. Increasing numbers of patients face the dilemma of running out of drug combinations for HAART. Computational methods play a key role in anti-HIV drug development, and a substantial number of studies have applied methods such as virtual screening, QSAR, molecular docking, and homology modeling to the problem. In this review, we summarize recent advances in the application of computational methods to anti-HIV drug development for five key targets: reverse transcriptase, protease, integrase, CCR5, and CXCR4. We hope that this review will stimulate researchers from multiple disciplines to consider computational methods in the anti-HIV drug development process. PMID:24760437

Gu, Wan-Gang; Zhang, Xuan; Yuan, Jun-Fa

2014-07-01

167

Computer Simulation Methods for Defect Configurations and Nanoscale Structures  

SciTech Connect

This chapter will describe general computer simulation methods, including ab initio calculations, molecular dynamics, and the kinetic Monte Carlo method, and their applications to the calculation of defect configurations in various materials (metals, ceramics, and oxides) and the simulation of nanoscale structures due to ion-solid interactions. The multiscale theory, modeling, and simulation techniques (both time scale and space scale) will be emphasized, and comparisons between computer simulation results and experimental observations will be made.

Gao, Fei

2010-01-01

168

A new iterative method to compute nonlinear equations  

Microsoft Academic Search

The aim of this paper is to construct a new efficient iterative method to solve nonlinear equations. The new method is based on the proposals of Abbasbandy for improving the order of accuracy of the Newton–Raphson method [S. Abbasbandy, Improving Newton–Raphson method for nonlinear equations by modified Adomian decomposition method, Applied Mathematics and Computation 145 (2003) 887–893] and on the proposals
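The baseline this abstract builds on is the classical Newton–Raphson iteration. The sketch below shows that baseline only, not the paper's improved method (whose description is truncated above); the equation solved is an illustrative classic, not one from the paper.

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# Example: solve x**3 - 2*x - 5 = 0, a classic test equation
root = newton_raphson(lambda x: x**3 - 2*x - 5,
                      lambda x: 3*x**2 - 2,
                      x0=2.0)
```

The quadratic convergence of this iteration is what modified-Adomian-style schemes such as Abbasbandy's aim to improve upon.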

Mário Basto; Viriato Semião; Francisco L. Calheiros

2006-01-01

169

Method and computer program product for maintenance and modernization backlogging  

SciTech Connect

According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
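The calculation the patent abstract describes is a straight sum of three period-specific quantities. A minimal sketch, with a hypothetical function name and illustrative figures (none of these values appear in the patent):

```python
def future_facility_condition(maintenance_cost, modernization_factor, backlog_factor):
    """Per the abstract: future facility conditions equal the time-period-specific
    maintenance cost plus the modernization factor plus the backlog factor."""
    return maintenance_cost + modernization_factor + backlog_factor

# Hypothetical figures for one time period, in dollars
total = future_facility_condition(120_000.0, 35_000.0, 18_500.0)
```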

Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

2013-02-19

170

Review of parallel computing methods and tools for FPGA technology  

NASA Astrophysics Data System (ADS)

Parallel computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated using parallel computing techniques. Specialized parallel computer architectures are used for accelerating specific tasks. The measuring systems of high-energy physics experiments often use FPGAs for fine-grained computation. The FPGA combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software by bypassing the fetch-decode-execute operations of traditional processors, and possibly by exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This paper presents existing methods and tools for fine-grained computation implemented in FPGAs using behavioral description and high-level programming languages.

Cieszewski, Radosław; Linczuk, Maciej; Pozniak, Krzysztof; Romaniuk, Ryszard

2013-10-01

171

Methods for operating parallel computing systems employing sequenced communications  

DOEpatents

A parallel computing system and method are disclosed having improved performance, in which a program is run concurrently on a plurality of nodes to reduce total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing efficient communication between the processors and between the system and input and output devices. A method is also disclosed that can locate defective nodes within the computing system. 15 figs.

Benner, R.E.; Gustafson, J.L.; Montry, G.R.

1999-08-10

172

Convergence acceleration of the Proteus computer code with multigrid methods  

NASA Technical Reports Server (NTRS)

Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.

Demuren, A. O.; Ibraheem, S. O.

1992-01-01

173

An efficient method for computation of the manipulator inertia matrix  

NASA Technical Reports Server (NTRS)

An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.

Fijany, Amir; Bejczy, Antal K.

1989-01-01

174

Some data and observations on research publication in the areas of numerical computation and programming languages and systems  

Microsoft Academic Search

This report contains extensive data on the level of research publications in refereed journals for two areas in Computer Science: Numerical Computation and Programming Languages and Systems. It is concluded that the research output in Numerical Computation is about 5 times that in Programming Languages and Systems (as measured by refereed research articles). Some less complete data on the number

John R. Rice

1976-01-01

175

A literature review of neck pain associated with computer use: public health implications  

PubMed Central

Prolonged use of computers during daily work activities and recreation is often cited as a cause of neck pain. This review of the literature identifies public health aspects of neck pain as associated with computer use. While some retrospective studies support the hypothesis that frequent computer operation is associated with neck pain, few prospective studies reveal causal relationships. Many risk factors are identified in the literature. Primary prevention strategies have largely been confined to addressing environmental exposure to ergonomic risk factors, since to date, no clear cause for this work-related neck pain has been acknowledged. Future research should include identifying causes of work related neck pain so that appropriate primary prevention strategies may be developed and to make policy recommendations pertaining to prevention.

Green, Bart N

2008-01-01

176

Method and device for processing a computer document in a computer system  

US Patent & Trademark Office Database

A method for processing a portion of a computer document in a computer system, the content of the computer document being represented by a markup language, each tag of which has a name and a value, the computer document being associated with a second computer document, referred to as a "schema document", the content of which is represented in a schema type markup language, the schema document defining the structure of the portion under consideration of the computer document. This processing method comprises the steps of: selection (S13) of a tag, referred to as the "current tag", in the portion of the computer document; searching (S15-S21) in the schema document for at least one declarative tag of a function associated with the selected tag of the computer document; creation (S23, S25) of a list of functions applicable to the current tag of the computer document from at least one declarative tag of a function, found in the schema document.

2007-08-21

177

Method for implementation of recursive hierarchical segmentation on parallel computers  

NASA Technical Reports Server (NTRS)

A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.

Tilton, James C. (Inventor)

2005-01-01

178

Multiresolution reproducing kernel particle method for computational fluid dynamics  

Microsoft Academic Search

Multiresolution analysis based on the reproducing kernel particle method (RKPM) is developed for computational fluid dynamics. An algorithm incorporating multiple-scale adaptive refinement is introduced. The concept of using a wavelet solution as an error indicator is also presented. A few representative numerical examples are solved to illustrate the performance of this new meshless method. Results show that the RKPM is

Sukky Jun; Dirk Thomas Sihling; Yijung Chen; Wei Hao

1997-01-01

179

Bolus chasing computed tomography angiography using local maximum tracking method  

Microsoft Academic Search

Tracking bolus peak position is a crucial issue for computed tomography angiography (CTA). In this paper, a local maximum tracking method is proposed and tested using real patient data. The method uses a second-order polynomial to approximate the bolus density function locally. By estimating the local maximum density position at the next sampling time, the bolus peak trajectory is closely followed. Experi-

Zhijun Cai; Robert McCabe; Ge Wang; Er Wei Bai

2007-01-01

180

Element-Free Galerkin Method in Electromagnetic Scattering Field Computation  

Microsoft Academic Search

In this paper, the element-free Galerkin method, a meshless method, is employed for the first time to analyze electromagnetic scattering problems. Moving least-squares interpolations are used to construct the trial and test functions for the weak form. Numerical studies show that the method works and performs well in electromagnetic scattering computations.

X. Liu; B.-Z. Wang; S. Lai

2007-01-01

181

Model-potential method for computation of autoionization state widths  

SciTech Connect

The model-potential method (MPM), previously developed for describing single-electron excitations, is generalized to the case of two-electron transitions in atoms. A method is proposed for computing autoionization state widths with the help of the MPM, and the autoionization probabilities of He and Li atoms are calculated.

Zapryagaev, S.A.; Ovsyannikov, V.D.

1982-02-01

182

Gyrokinetic theory and computational methods for electromagnetic perturbations in tokamaks  

Microsoft Academic Search

A general gyrokinetic formalism and appropriate computational methods have been developed for electromagnetic perturbations in toroidal plasmas. This formalism and the associated numerical code represent the first self-consistent, comprehensive, fully kinetic model for treating both magnetohydrodynamic (MHD) instabilities and electromagnetic drift waves. The gyrokinetic system of equations is derived by phase-space Lagrangian Lie perturbation methods which enable applications to modes

Hong Qin

1998-01-01

183

An Area Computation Based Method for RAIM Holes Assessment  

Microsoft Academic Search

Receiver Autonomous Integrity Monitoring (RAIM) is a method implemented within the receiver to protect users against satellite navigation system failures. Research has shown that traditional methods for the determination of RAIM holes (i.e. places where less than five satellites are visible and available) based on spatial and temporal intervals (grids) compromise accuracy due to the constraint of computation load. Research

Shaojun Feng; Washington Y. Ochieng; Rainer Mautz

2006-01-01

184

Computer-Simulation Methods in Human Linkage Analysis  

Microsoft Academic Search

In human linkage analysis, many statistical problems without analytical solution could be solved by ad hoc Monte Carlo procedures were efficient computer-simulation methods available for members of family pedigrees. In this paper, a general method is described for randomly generating genotypes at one or more marker loci, given observed phenotypes at loci linked among themselves and with the markers. The

Jurg Ott

1989-01-01

185

Floating Points: A method for computing stipple drawings  

Microsoft Academic Search

We present a method for computer-generated pen-and-ink illustrations by the simulation of stippling. In a stipple drawing, dots are used to represent tone and also the material of surfaces. We create such drawings by generating an initial dot set, which is then processed by a relaxation method based on Voronoi diagrams. The point patterns generated are approximations of Poisson disc
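The Voronoi-based relaxation the abstract describes can be approximated by discrete Lloyd relaxation: each dot moves to the centroid of the random sample points nearest to it. The sketch below is an illustrative approximation under that assumption, not the paper's algorithm, and omits its tone-dependent weighting.

```python
import numpy as np

def lloyd_relax(points, iterations=10, samples=20000, seed=0):
    """Even out a stipple dot set on the unit square by discrete Lloyd relaxation.
    Random sample points stand in for the continuous page; each dot is replaced
    by the centroid of the samples in its (discretized) Voronoi cell."""
    rng = np.random.default_rng(seed)
    pts = np.array(points, dtype=float)
    grid = rng.random((samples, 2))
    for _ in range(iterations):
        # index of the nearest stipple dot for every sample point
        d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
        owner = d2.argmin(axis=1)
        for i in range(len(pts)):
            cell = grid[owner == i]
            if len(cell):
                pts[i] = cell.mean(axis=0)  # move dot to its cell centroid
    return pts

dots = lloyd_relax(np.random.default_rng(1).random((50, 2)))
```

After a few iterations the dots spread into the even, blue-noise-like patterns that make stippling look hand-drawn.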

Oliver Deussen; Stefan Hiller; Cornelius W. A. M. Van Overveld; Thomas Strothotte

2000-01-01

186

Boundary method for attenuation correction in positron computed tomography  

Microsoft Academic Search

A new method for attenuation correction in positron computed tomography (PCT) has been developed, and it can improve the quality of PCT images. The method requires a short transmission scan by the PCT system. Then boundaries between tissues with significantly different attenuation coefficients are determined from the transmission image by edge-finding techniques. Attenuation correction factors (ACF) are then calculated using

S. C. Huang; R. E. Carson; M. E. Phelps; E. J. Hoffman; H. R. Schelbert; D. E. Kuhl

1981-01-01

187

Calculating PI Using Historical Methods and Your Personal Computer.  

ERIC Educational Resources Information Center

Provides a software program for determining PI to the 15th place after the decimal. Explores the history of determining the value of PI from Archimedes to present computer methods. Investigates Wallis's, Leibniz's, and Buffon's methods. Written for Tandy GW-BASIC (IBM compatible) with 384K. Suggestions for Apple II's are given. (MVL)
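The original program targeted Tandy GW-BASIC, but two of the historical formulas it investigates translate directly to a few lines of modern code; Buffon's probabilistic needle method is omitted here, and the function names are ours.

```python
def leibniz_pi(terms):
    """Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... (very slow convergence)."""
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

def wallis_pi(terms):
    """Wallis product: pi/2 = (2/1)(2/3) * (4/3)(4/5) * (6/5)(6/7) * ..."""
    prod = 1.0
    for n in range(1, terms + 1):
        prod *= (2 * n / (2 * n - 1)) * (2 * n / (2 * n + 1))
    return 2.0 * prod
```

Both formulas converge only linearly, which is exactly why the article's historical tour makes such a good classroom exercise: even hundreds of thousands of terms yield only five or six correct digits.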

Mandell, Alan

1989-01-01

188

Linear and nonlinear methods for brain-computer interfaces  

Microsoft Academic Search

At the recent Second International Meeting on Brain-Computer Interfaces (BCIs) held in June 2002 in Rensselaerville, NY, a formal debate was held on the pros and cons of linear and nonlinear methods in BCI research. Specific examples applying EEG data sets to linear and nonlinear methods are given and an overview of the various pros and cons of each approach

Klaus-Robert Müller; Charles W. Anderson; Gary E. Birch

2003-01-01

189

Probability computations using the SIGMA-PI method on a personal computer  

SciTech Connect

The SIGMA-PI (ΣΠ) method, as implemented in the SIGPI computer code, is designed to accurately and efficiently evaluate the probability of Boolean expressions in disjunctive normal form, given the base event probabilities. The method is not limited to problems in which base event probabilities are small, nor to Boolean expressions that exclude the complements of base events, nor to problems in which base events are independent. The feasibility of implementing the ΣΠ method on a personal computer has been evaluated, and a version of the SIGPI code capable of quantifying simple Boolean expressions with independent base events on the personal computer has been developed. Tasks required for a fully functional personal-computer version of SIGPI have been identified, together with enhancements that could be implemented to improve the utility and efficiency of the code.
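For the simple case the personal-computer SIGPI version handles, independent base events in disjunctive normal form, the probability can be computed by inclusion-exclusion over the disjunctive terms. The sketch below illustrates that quantification task; it is not the ΣΠ algorithm itself, and the event names are illustrative.

```python
from itertools import combinations

def dnf_probability(terms, p):
    """Probability that a DNF Boolean expression is true, given independent
    base events. `terms` is a list of sets, each set one conjunction of base
    events; `p` maps each event to its probability. Inclusion-exclusion over
    the terms: P(union of T_i) = sum over nonempty subsets S of
    (-1)^(|S|+1) * P(conjunction of all events appearing in S)."""
    total = 0.0
    for k in range(1, len(terms) + 1):
        sign = (-1) ** (k + 1)
        for subset in combinations(terms, k):
            events = set().union(*subset)
            prob = 1.0
            for e in events:
                prob *= p[e]  # independence: conjunction = product
            total += sign * prob
    return total

# Example: (A and B) or (B and C), all events with probability 0.5
prob = dnf_probability([{"A", "B"}, {"B", "C"}], {"A": 0.5, "B": 0.5, "C": 0.5})
```

Here P = P(AB) + P(BC) - P(ABC) = 0.25 + 0.25 - 0.125 = 0.375; the cost grows exponentially in the number of terms, which is why efficient schemes like ΣΠ matter.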

Haskin, F.E.; Lazo, M.S.; Heger, A.S. [Univ. of New Mexico, Albuquerque, NM (US). Dept. of Chemical and Nuclear Engineering]

1990-09-30

190

Alternate Methods of Measuring Public Radio Audiences: A Pilot Project.  

ERIC Educational Resources Information Center

A pilot project was undertaken to explore ways to profile public radio audiences inexpensively and simply. The major effort was through use of the station's monthly programming guide mailing list. Persons found in this list were interviewed and their listening habits compared with a general survey (baseline) group. The survey showed that public

Williams, Wenmouth, Jr.; LeRoy, David J.

191

Methods and systems for providing reconfigurable and recoverable computing resources  

NASA Technical Reports Server (NTRS)

A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.

Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

2010-01-01

192

Extrapolation methods for accelerating PageRank computations  

Microsoft Academic Search

We present a novel algorithm for the fast computation of PageRank, a hyperlink-based estimate of the "importance" of Web pages. The original PageRank algorithm uses the Power Method to compute successive iterates that converge to the principal eigenvector of the Markov matrix representing the Web link graph. The algorithm presented here, called Quadratic Extrapolation, accelerates the convergence of the Power
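The Power Method baseline that Quadratic Extrapolation accelerates can be sketched as follows; the extrapolation step itself is not reproduced here, and the tiny example graph is ours.

```python
import numpy as np

def pagerank_power(link_matrix, damping=0.85, tol=1e-10, max_iter=1000):
    """Baseline Power Method for PageRank: iterate x <- M x until convergence,
    where M is the damped, column-stochastic Web link matrix. Quadratic
    Extrapolation (per the abstract) accelerates this same iteration."""
    n = link_matrix.shape[0]
    M = damping * link_matrix + (1 - damping) / n * np.ones((n, n))
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = M @ x
        x_new /= x_new.sum()          # keep the iterate a probability vector
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Tiny 3-page graph: page 0 links to 1, page 1 links to 2, page 2 links to 0 and 1.
# Column j holds the out-link distribution of page j.
L = np.array([[0.0, 0.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
ranks = pagerank_power(L)
```

Page 1 ends up ranked highest, since it receives links from both other pages; convergence speed is governed by the damping factor, which is what makes acceleration schemes worthwhile at Web scale.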

Sepandar D. Kamvar; Taher H. Haveliwala; Christopher D. Manning; Gene H. Golub

2003-01-01

193

A Lanczos eigenvalue method on a parallel computer  

NASA Technical Reports Server (NTRS)

Eigenvalue analysis of complex structures is a computationally intensive task which can benefit significantly from new and impending parallel computers. This study reports on a parallel computer implementation of the Lanczos method for free vibration analysis. The approach used here subdivides the major Lanczos calculation tasks into subtasks and introduces parallelism down to the subtask levels, such as matrix decomposition and forward/backward substitution. The method was implemented on a commercial parallel computer and results were obtained for a long flexible space structure. While parallel computing efficiency for the Lanczos method was good for a moderate number of processors for the test problem, the greatest reduction in time was realized for the decomposition of the stiffness matrix, a calculation which took 70 percent of the time in the sequential program and 25 percent of the time on eight processors. For a sample calculation of the twenty lowest frequencies of a 486 degree-of-freedom problem, the total sequential computing time was reduced by almost a factor of ten using 16 processors.
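The serial computation whose subtasks the study parallelizes is the Lanczos three-term recurrence. A minimal sketch (no reorthogonalization, so suitable only for small subspaces; the study's structural matrices and parallel decomposition are not reproduced):

```python
import numpy as np

def lanczos(A, m):
    """Basic Lanczos tridiagonalization of a symmetric matrix A.
    Returns the tridiagonal matrix T whose eigenvalues (Ritz values)
    approximate the extreme eigenvalues of A."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)           # unit starting vector
    q_prev = np.zeros(n)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(m):
        w = A @ q - beta * q_prev    # three-term recurrence
        alpha = q @ w
        w = w - alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-14:             # invariant subspace found; stop early
            break
        betas.append(beta)
        q_prev, q = q, w / beta
    off = betas[:len(alphas) - 1]
    T = np.diag(alphas) + np.diag(off, 1) + np.diag(off, -1)
    return T

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                    # small symmetric test matrix
T = lanczos(A, 6)                    # a full-size run recovers all eigenvalues
```

In free-vibration analysis A would be derived from the stiffness and mass matrices, and, as the abstract notes, the dominant cost is the matrix decomposition feeding each recurrence step rather than the recurrence itself.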

Bostic, Susan W.; Fulton, Robert E.

1987-01-01

194

An Overview of a Decade of Journal Publications about Culture and Human-Computer Interaction (HCI)  

NASA Astrophysics Data System (ADS)

In this paper, we analyze the concept of human-computer interaction in cultural and national contexts. Building and extending upon the framework for understanding research in usability and culture by Honold [3], we give an overview of publications on culture and HCI between 1998 and 2008, with a narrow focus on high-level journal publications only. The purpose is to review current practice in how cultural HCI issues are studied, and to analyse problems with the measures and interpretation of these studies. We find that Hofstede's cultural dimensions model has been the dominant model of culture, that participants have often been selected because they could speak English, and that most studies have been large-scale quantitative studies. In order to balance this situation, we recommend that more researchers and practitioners undertake qualitative, empirical work.

Clemmensen, Torkil; Roese, Kerstin

195

Robust regression methods for computer vision: A review  

Microsoft Academic Search

Regression analysis (fitting a model to noisy data) is a basic technique in computer vision. Robust regression methods that remain reliable in the presence of various types of noise are therefore of considerable importance. We review several robust estimation techniques and describe in detail the least-median-of-squares (LMedS) method. The method yields the correct result even when half of the data

Peter Meer; Doron Mintz; Azriel Rosenfeld; Dong Yoon Kim

1991-01-01

196

The Direct Lighting Computation in Global Illumination Methods  

NASA Astrophysics Data System (ADS)

Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for predicting the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.

Wang, Changyaw Allen

1994-01-01

197

Software for computing eigenvalue bounds for iterative subspace matrix methods  

NASA Astrophysics Data System (ADS)

This paper describes software for computing eigenvalue bounds for the standard and generalized Hermitian eigenvalue problems as described in [Y. Zhou, R. Shepard, M. Minkoff, Computing eigenvalue bounds for iterative subspace matrix methods, Comput. Phys. Comm. 167 (2005) 90-102]. The software applies to any subspace method, including Lanczos, Davidson, SPAM, Generalized Davidson Inverse Iteration, Jacobi-Davidson, and Generalized Jacobi-Davidson, and to either outer or inner eigenvalues. It can be applied during the subspace iterations to truncate the iterative process and avoid unnecessary effort when converging specific eigenvalues to a required target accuracy, and it can be applied to the final set of Ritz values to assess the accuracy of the converged results.
Program summary
Title of program: SUBROUTINE BOUNDS_OPT
Catalogue identifier: ADVE
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVE
Computers: any computer that supports a Fortran 90 compiler
Operating systems: any operating system that supports a Fortran 90 compiler
Programming language: standard Fortran 90
High-speed storage required: 5m+5 working-precision and 2m+7 integer words for m Ritz values
No. of bits in a word: the floating-point working precision is parameterized with the symbolic constant WP
No. of lines in distributed program, including test data, etc.: 2452
No. of bytes in distributed program, including test data, etc.: 281 543
Distribution format: tar.gz
Nature of physical problem: The computational solution of eigenvalue problems using iterative subspace methods has widespread applications in the physical sciences and engineering, as well as in other areas of mathematical modeling (economics, social sciences, etc.). Assessing the accuracy of such solutions is essential to inform the modeler of the reliability of the computational results; applications include using these bounds to terminate the iterative procedure at specified accuracy limits.
Method of solution: The Ritz values and their residual norms are computed and used as input for the procedure. Knowledge of the exact eigenvalues is not required, but the Ritz values must be isolated from the exact eigenvalues outside the Ritz spectrum, and there must be no skipped eigenvalues within the Ritz spectrum. Using a multipass refinement approach, upper and lower bounds are computed for each Ritz value.
Typical running time: While typical applications would deal with m<20, for m=100000 the running time is 0.12 s on an Apple PowerBook.
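BOUNDS_OPT itself is Fortran 90 and uses multipass refinement; the textbook a posteriori result it builds on, that for a Hermitian matrix some exact eigenvalue lies within the residual norm of each Ritz value, can be illustrated in a few lines (Python here for brevity, not the distributed code):

```python
import numpy as np

def ritz_residual_bound(A, y):
    """For Hermitian A and a trial vector y, return the Ritz value
    theta = y*Ay / y*y and the residual norm ||Ay - theta*y||.
    Classical bound: at least one eigenvalue of A lies in
    [theta - ||r||, theta + ||r||]."""
    y = y / np.linalg.norm(y)
    theta = y @ A @ y
    r = A @ y - theta * y
    return theta, np.linalg.norm(r)

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = (B + B.T) / 2                                   # real symmetric test matrix
y = np.linalg.eigh(A)[1][:, 0] + 0.01 * rng.standard_normal(8)  # rough eigenvector
theta, rnorm = ritz_residual_bound(A, y)
```

Refinements like those in BOUNDS_OPT sharpen this crude interval using gap information, but even the crude version is enough to decide when further subspace iterations are wasted effort.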

Shepard, Ron; Minkoff, Michael; Zhou, Yunkai

2005-07-01
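The residual-norm bound underlying this kind of software can be illustrated in a few lines: for a Hermitian matrix, every Ritz value theta with residual norm r has an exact eigenvalue within r of it. The sketch below is plain NumPy, not the BOUNDS_OPT routine; the matrix and subspace sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 20
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # Hermitian test matrix

# Rayleigh-Ritz on a random m-dimensional subspace
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
H = Q.T @ A @ Q                        # projected subspace matrix
theta, S = np.linalg.eigh(H)           # Ritz values / subspace eigenvectors
V = Q @ S                              # Ritz vectors in the full space

# crude residual-based interval: theta[i] +/- r[i] brackets an eigenvalue
r = np.linalg.norm(A @ V - V * theta, axis=0)
eig = np.linalg.eigh(A)[0]
dist = np.abs(eig[None, :] - theta[:, None]).min(axis=1)
assert np.all(dist <= r + 1e-10)       # residual-norm guarantee (crude form)
```

The cited paper's contribution is much sharper two-sided bounds than this simple interval; the sketch only shows the guarantee that makes residual norms useful stopping criteria.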

198

Pretrial Publicity and the Jury: Research and Methods  

Microsoft Academic Search

Research conducted over the past 40 years demonstrates that pretrial publicity (PTP) can negatively influence jurors' perceptions of parties in criminal and civil cases receiving substantial news coverage. Changes in the news media over the same period of time have made news coverage more accessible to the public as traditional media including newspapers, television, and radio are complemented with new

Lisa M. Spano; Jennifer L. Groscup; Steven D. Penrod

199

Determinant Computation on the GPU using the Condensation Method  

NASA Astrophysics Data System (ADS)

We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating point numbers. We evaluate the performance of our code by measuring its effective bandwidth and argue that it is numerically stable in the floating point case. In addition, we compare our code with serial implementations of determinant computation in well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has a large potential for improving those packages in terms of running time and numerical stability.

Anisul Haque, Sardar; Moreno Maza, Marc

2012-02-01
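The Salem-Kouachi scheme itself is not reproduced here; as an assumed stand-in, the classical Dodgson condensation it descends from illustrates the idea: repeatedly shrink the matrix to its contiguous 2x2 minors, dividing elementwise by the interior of the matrix two steps back. This classical form assumes those interior entries stay nonzero (avoiding that restriction is precisely what variants like Salem-Kouachi address).

```python
import numpy as np

def minors_2x2(M):
    """All contiguous 2x2 minors of M (each dimension shrinks by 1)."""
    return M[:-1, :-1] * M[1:, 1:] - M[:-1, 1:] * M[1:, :-1]

def condensation_det(A):
    """Determinant by classical Dodgson condensation.

    Assumes the interior entries encountered during condensation are
    nonzero; this is the textbook scheme, not the Salem-Kouachi variant.
    """
    A = np.asarray(A, dtype=float)
    if A.shape[0] == 1:
        return A[0, 0]
    prev, cur = A, minors_2x2(A)
    while cur.shape[0] > 1:
        prev, cur = cur, minors_2x2(cur) / prev[1:-1, 1:-1]
    return cur[0, 0]

A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]])
print(condensation_det(A))   # -3.0, matching np.linalg.det(A)
```

Each condensation step is embarrassingly parallel over matrix entries, which is what makes the method attractive on a GPU.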

200

Publications  

NSDL National Science Digital Library

The Nitrogen and Phosphorus Knowledge Web page is offered by Iowa State University Extension and the College of Agriculture. The publications page contains links to various newsletters, articles, publications, PowerPoint presentations, links to governmental publications, and more. For example, visitors will find articles on phosphorus within the Integrated Crop Management Newsletter, PowerPoint presentations on Nitrogen Management and Carbon Sequestration, and links to other Iowa State University publications on subjects such as nutrient management. Other links on the site's home page lead to soil temperature data, research highlights, and other similarly relevant information for those in related fields.

1969-12-31

201

Cloud Computing Research and Development Trend  

Microsoft Academic Search

With the development of parallel computing, distributed computing, and grid computing, a new computing model has appeared. The concept of cloud computing comes from grid computing, public computing, and SaaS. It is a new method of sharing a basic framework. The basic principle of cloud computing is that the computation is assigned to a great number of distributed computers, rather than a local computer or

Shuai Zhang; Shufen Zhang; Xuebin Chen; Xiuzhen Huo

2010-01-01

202

Computer controlled fluorometer device and method of operating same  

DOEpatents

A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.

Kolber, Zbigniew (Shoreham, NY); Falkowski, Paul (Stony Brook, NY)

1990-01-01

203

Computational Methods for Dynamic Stability and Control Derivatives  

NASA Technical Reports Server (NTRS)

Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

2003-01-01

204

Computational Methods for Dynamic Stability and Control Derivatives  

NASA Technical Reports Server (NTRS)

Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

2004-01-01

205

Decluttering Methods for Computer-Generated Graphic Displays  

NASA Technical Reports Server (NTRS)

Symbol simplification and contrasting enhance a viewer's ability to detect a particular symbol. The report describes experiments designed to indicate how various decluttering methods affect viewers' abilities to distinguish essential from nonessential features on computer-generated graphic displays. Results indicate that partial removal of nonessential graphic features through symbol simplification is as effective in decluttering as total removal of nonessential graphic features.

Schultz, E. Eugene, Jr.

1986-01-01

206

Fast BEM computations with the adaptive multilevel fast multipole method  

Microsoft Academic Search

Since the storage requirements of the BEM are proportional to N2, only relatively small problems can be solved on a PC or a workstation. In this paper we present an adaptive multilevel fast multipole method for the solution of electrostatic problems with the BEM. We will show that, in practice, the storage requirements and the computational costs are approximately

André Buchau; Christian J. Huber; Wolfgang Rieger; Wolfgang M. Rucker

2000-01-01

207

Computational methods for creep fracture analysis by damage mechanics  

Microsoft Academic Search

Some mechanical problems of the computational method of creep fracture analysis based on continuum damage mechanics are discussed. After a brief review of the local approach to creep crack growth analysis by means of finite element analysis and continuum damage mechanics, intrinsic features of fracture analysis in the framework of continuum theory and the causes of mesh-dependence of the numerical

S. Murakami; Y. Liu; M. Mizuno

2000-01-01

208

Computational method for radar absorbing composite lattice grids  

Microsoft Academic Search

Composite lattice grids reinforced by glass fibers (GFRC) and carbon fibers (CFRC) and filled with spongy materials can be designed as lightweight radar absorbing structures (RAS). In the present paper, a computational approach based on the periodic moment method (PMM) has been developed to calculate reflection coefficients of radar absorbing composite lattice grids. Total reflection backing (TRB) is considered directly in our

Mingji Chen; Yongmao Pei; Daining Fang

2009-01-01

209

COMPUTER-BASED TRIZ - SYSTEMATIC INNOVATION METHODS FOR ARCHITECTURE  

Microsoft Academic Search

The Russian Theory of Inventive Problem Solving, TRIZ, is the most comprehensive systematic innovation and creativity methodology available. Essentially the method consists of restating a specific design task in a more general way and then selecting generic solutions from databases of patents and solutions from a wide range of technologies. The development of computer databases greatly facilitates this task. Since

Darrell L Mann; Conall Ó Catháin

210

EQUILIBRIUM AND NONEQUILIBRIUM FOUNDATIONS OF FREE ENERGY COMPUTATIONAL METHODS  

SciTech Connect

Statistical mechanics provides a rigorous framework for the numerical estimation of free energy differences in complex systems such as biomolecules. This paper presents a brief review of the statistical mechanical identities underlying a number of techniques for computing free energy differences. Both equilibrium and nonequilibrium methods are covered.

C. JARZYNSKI

2001-03-01
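The central nonequilibrium identity reviewed here, the Jarzynski equality dF = -kT ln<exp(-W/kT)>, can be checked on synthetic data: for Gaussian work values it reduces to dF = <W> - sigma^2/(2kT), so the exponential average and the second-order cumulant expansion should recover the same free energy difference. The sample size and parameters below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 1.0
dF_true, sigma = 2.0, 1.0

# synthetic work values: for Gaussian work the Jarzynski equality gives
# dF = <W> - sigma^2/(2 kT), so set <W> = dF_true + sigma^2/(2 kT)
W = rng.normal(dF_true + sigma**2 / (2 * kT), sigma, size=200_000)

dF_jarzynski = -kT * np.log(np.mean(np.exp(-W / kT)))   # nonequilibrium estimate
dF_cumulant = W.mean() - W.var() / (2 * kT)             # 2nd-order expansion

print(dF_jarzynski, dF_cumulant)   # both close to 2.0
```

Note that dF is well below the mean work <W> = 2.5: the exponential average is dominated by rare low-work trajectories, which is why the estimator's variance grows quickly with sigma in practice.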

211

Improved diffraction computation with a hybrid C-RCWA-method  

Microsoft Academic Search

The Rigorous Coupled Wave Approach (RCWA) is acknowledged as a well established diffraction simulation method in electro-magnetic computing. Its two most essential applications in the semiconductor industry are in optical scatterometry and optical lithography simulation. In scatterometry, it is the standard technique to simulate spectra or diffraction responses for gratings to be characterized. In optical lithography simulation, it is an

Joerg Bischoff

2009-01-01

212

Convergence acceleration of the Proteus computer code with multigrid methods  

NASA Technical Reports Server (NTRS)

This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.

Demuren, A. O.; Ibraheem, S. O.

1995-01-01
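The acceleration idea described above can be shown on a model problem. The following is a minimal sketch, not the Proteus implementation: a two-grid cycle for the 1D Poisson equation with weighted-Jacobi smoothing, full-weighting restriction, a direct coarse solve, and linear-interpolation prolongation (all of these component choices are assumptions for illustration).

```python
import numpy as np

def jacobi(u, f, h, iters, w=2/3):
    """Weighted Jacobi smoothing for -u'' = f with zero Dirichlet BCs."""
    for _ in range(iters):
        u[1:-1] += w * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1] - 2*u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h, 3)                         # pre-smooth
    r = residual(u, f, h)
    rc = r[::2].copy()                             # full-weighting restriction
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    nc = rc.size                                   # direct coarse-grid solve
    T = (np.diag(2*np.ones(nc-2)) - np.diag(np.ones(nc-3), 1)
         - np.diag(np.ones(nc-3), -1)) / (2*h)**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(T, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
    u += e
    return jacobi(u, f, h, 3)                      # post-smooth

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                   # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
err = np.abs(u - np.sin(np.pi * x)).max()
print(err)   # down to discretization-level error
```

The coarse correction removes exactly the smooth error components that Jacobi damps slowly, which is the mechanism behind the iteration-count reductions reported above.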

213

Computational methods for calculating geometric parameters of tectonic plates  

Microsoft Academic Search

Present day and ancient plate tectonic configurations can be modelled in terms of non-overlapping polygonal regions, separated by plate boundaries, on the unit sphere. The computational methods described in this article allow an evaluation of the area and the inertial tensor components of a polygonal region on the unit sphere, as well as an estimation of the associated errors. These

Antonio Schettino

1999-01-01
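One of the geometric quantities the article evaluates, the area of a polygonal region on the unit sphere, follows from Girard's theorem: the area equals the sum of the interior angles minus (n-2)*pi. Below is a sketch for convex spherical polygons; it is an illustrative implementation, not the author's error-propagating code.

```python
import numpy as np

def spherical_polygon_area(vertices):
    """Area of a convex polygon on the unit sphere via Girard's theorem:
    area = (sum of interior angles) - (n - 2) * pi.
    `vertices` are unit 3-vectors listed in order along the boundary."""
    V = np.asarray(vertices, dtype=float)
    n = len(V)
    angles = 0.0
    for i in range(n):
        a, b, c = V[i - 1], V[i], V[(i + 1) % n]
        # tangent directions at vertex b toward its two neighbors
        ta = a - np.dot(a, b) * b
        tc = c - np.dot(c, b) * b
        ta /= np.linalg.norm(ta)
        tc /= np.linalg.norm(tc)
        angles += np.arccos(np.clip(np.dot(ta, tc), -1.0, 1.0))
    return angles - (n - 2) * np.pi

# octant triangle: one eighth of the unit sphere, area 4*pi/8 = pi/2
tri = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(spherical_polygon_area(tri))   # ~1.5708
```

For plate polygons this area enters directly; the inertial tensor components require additional surface integrals beyond this sketch.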

214

Interactive method for computation of viscous flow with recirculation  

NASA Technical Reports Server (NTRS)

An interactive method is proposed for the solution of two-dimensional, laminar flow fields with identifiable regions of recirculation, such as the shear-layer-driven cavity flow. The method treats the flow field as composed of two regions, with an appropriate mathematical model adopted for each region. The shear layer is computed by the compressible boundary layer equations, and the slowly recirculating flow by the incompressible Navier-Stokes equations. The flow field is solved iteratively by matching the local solutions in the two regions. For this purpose a new matching method utilizing an overlap between the two computational regions is developed and shown to be most satisfactory. Matching of the two velocity components, as well as of the change in velocity with respect to depth, is amply accomplished using the present approach, and the stagnation points corresponding to separation and reattachment of the dividing streamline are computed as part of the interactive solution. The interactive method is applied to the test problem of a shear-layer-driven cavity. The computational results are used to show the validity and applicability of the present approach.

Brandeis, J.; Rom, J.

1981-01-01

215

A Higher Order Iterative Method for Computing the Drazin Inverse  

PubMed Central

A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method can be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, alongside its use in preconditioning linear systems of equations, is presented to clarify the contribution of the paper.

Soleymani, F.; Stanimirovic, Predrag S.

2013-01-01
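The family of schemes this paper builds on starts from the classical Newton-Schulz iteration X <- X(2I - AX), which converges quadratically to the inverse from the safe start X0 = A^T / (||A||_1 ||A||_inf). Below is a minimal sketch of that base iteration only; the paper's higher-order variant and its Drazin-inverse extension are not reproduced.

```python
import numpy as np

def newton_schulz_inverse(A, tol=1e-12, maxit=100):
    """Quadratically convergent iteration X <- X(2I - AX) for A^{-1}.

    This is the classical base scheme; higher-order members of the
    family (as in the paper above) add further terms per step.
    """
    n = A.shape[0]
    # standard safe start: X0 = A^T / (||A||_1 * ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(maxit):
        X = X @ (2 * I - A @ X)
        if np.linalg.norm(A @ X - I) < tol:
            break
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = newton_schulz_inverse(A)
print(np.round(X @ A, 8))   # ~ identity
```

Because each step uses only matrix products, such iterations suit sparse or structured matrices where products are cheap, which is why they appear in preconditioning.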

216

A Critical Review of Computer-Assisted Learning in Public Health via the Internet, 1999-2008  

ERIC Educational Resources Information Center

Computers and the internet have been utilized as viable avenues for public health education delivery. Yet the effectiveness, e.g., behavior change, from use of these tools has been limited. Previous reviews have focused on single health topics such as smoking cessation and weight loss. This review broadens the scope to consider computer-assisted…

Corda, Kirsten W.; Polacek, Georgia N. L. J.

2009-01-01

217

Analysis and optimization of cyclic methods in orbit computation  

NASA Technical Reports Server (NTRS)

The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.

Pierce, S.

1973-01-01

218

Network Analysis in Public Health: History, Methods, and Applications  

Microsoft Academic Search

Network analysis is an approach to research that is uniquely suited to describing, exploring, and understanding structural and relational aspects of health. It is both a methodological tool and a theoretical paradigm that allows us to pose and answer important ecological questions in public health. In this review we trace the history of network analysis, provide a methodological overview of

Douglas A. Luke; Jenine K. Harris

2007-01-01

219

GRACE: Public Health Recovery Methods Following an Environmental Disaster  

Microsoft Academic Search

Different approaches are necessary when community-based participatory research (CBPR) of environmental illness is initiated after an environmental disaster within a community. Often such events are viewed as golden scientific opportunities to do epidemiological studies. However, the authors believe that in such circumstances, community engagement and empowerment need to be integrated into the public health service efforts in order for both

Erik R. Svendsen; Nancy C. Whittle; Robert E. McKeown; Karen Sprayberry; Margaret Heim; Richard Caldwell; James J. Gibson; John E. Vena

2010-01-01

220

Public Participation GIS: A Method for Identifying Ecosystem Services  

Microsoft Academic Search

This study evaluated the use of an Internet-based public participation geographic information system (PPGIS) to identify ecosystem services in Grand County, Colorado. Specific research objectives were to examine the distribution of ecosystem services, identify the characteristics of participants in the study, explore potential relationships between ecosystem services and land use and land cover (LULC) classifications, and assess the methodological strengths

Greg Brown; Jessica M. Montag; Katie Lyon

2011-01-01

221

Public Participation GIS: A Method for Identifying Ecosystem Services  

Microsoft Academic Search

This study evaluated the use of an Internet-based public participation geographic information system (PPGIS) to identify ecosystem services in Grand County, Colorado. Specific research objectives were to examine the distribution of ecosystem services, identify the characteristics of participants in the study, explore potential relationships between ecosystem services and land use and land cover (LULC) classifications, and assess the methodological strengths

Greg Brown; Jessica M. Montag; Katie Lyon

2012-01-01

222

Computational Methods for Structural Mechanics and Dynamics, part 1  

NASA Technical Reports Server (NTRS)

The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

Stroud, W. Jefferson (editor); Housner, Jerrold M. (editor); Tanner, John A. (editor); Hayduk, Robert J. (editor)

1989-01-01

223

An effective method for computing the noise in biochemical networks  

NASA Astrophysics Data System (ADS)

We present a simple yet effective method, based on power series expansion, for computing exact binomial moments that can in turn be used to compute steady-state probability distributions as well as the noise in linear or nonlinear biochemical reaction networks. When the method is applied to representative reaction networks such as the ON-OFF models of gene expression, gene models of promoter progression, gene auto-regulatory models, and common signaling motifs, exact formulae for the intensities of noise in the species of interest or for the steady-state distributions are given analytically. Interestingly, we find that positive (negative) feedback does not enlarge (reduce) noise as claimed in previous works but has a counter-intuitive effect, and that the multi-OFF (or ON) mechanism always attenuates the noise, in contrast to the common ON-OFF mechanism, and can modulate the noise to the lowest level independently of the mRNA mean. Beyond its power in deriving analytical expressions for distributions and noise, our method is programmable and has clear advantages in reducing computational cost.

Zhang, Jiajun; Nie, Qing; He, Miao; Zhou, Tianshou

2013-02-01
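The paper's binomial-moment machinery is not reproduced here, but the kind of noise quantity it targets can be shown by directly solving a truncated chemical master equation for the simplest constitutive birth-death gene model, whose stationary law is Poisson, so the Fano factor (variance/mean) is exactly 1. The model, rates, and truncation level are illustrative assumptions.

```python
import numpy as np

# Truncated chemical master equation for constitutive expression:
# birth at rate k, degradation at rate gamma * n. The stationary law
# is Poisson(k/gamma), so the Fano factor (variance/mean) equals 1.
k, gamma, N = 10.0, 1.0, 80          # N: copy-number truncation

# generator matrix Q[i, j] = rate of jump j -> i (columns sum to zero)
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n + 1, n] += k             # birth n -> n+1
    if n > 0:
        Q[n - 1, n] += gamma * n     # death n -> n-1
    Q[n, n] = -Q[:, n].sum()

# steady state: solve Q p = 0 together with sum(p) = 1
M = np.vstack([Q, np.ones(N + 1)])
b = np.zeros(N + 2)
b[-1] = 1.0
p, *_ = np.linalg.lstsq(M, b, rcond=None)

n_vals = np.arange(N + 1)
mean = p @ n_vals
fano = (p @ n_vals**2 - mean**2) / mean
print(mean, fano)    # ~10.0 and ~1.0
```

Promoter switching (ON-OFF) and feedback reshape this baseline Fano factor of 1, which is exactly what the paper's exact formulae quantify.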

224

Research on the computational methods of gravity topographic effect  

NASA Astrophysics Data System (ADS)

Methods for computing the gravity topographic effect are presented. These methods include the topographic correction solution C, MOLODENSKY's series solution G1, PELLINEN's formula G', the analytic continuation solution, and BOERHAMMAR's solution. The interrelations between the formulas of topographic correction are shown, the relation between the analytic continuation solution and BOERHAMMAR's solution is derived, and some of their properties are deduced. Each of these methods involves a common operator L; certain connections exist among them, but they cannot be replaced by one another. The linear solution of the analytic continuation solution approximates, respectively, MOLODENSKY's series solution G1 and BOERHAMMAR's solution. This suggests that reduction by separated rings may decrease the difficulties brought about in iterative computation.

Meng, J.

225

Computation of Pressurized Gas Bearings Using CE/SE Method  

NASA Technical Reports Server (NTRS)

The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for such problems.

Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

2003-01-01

226

Discrete, spatiotemporal, wavelet multiresolution analysis method for computing optical flow  

NASA Astrophysics Data System (ADS)

A wavelet-based system for computing localized velocity fields associated with time-sequential imagery is described. The approach combines the mathematical rigor of the multiresolution wavelet analysis with well-known spatiotemporal frequency flow computation principles. The foundation of the approach consists of a unique, nonhomogeneous multiresolution wavelet filter bank designed to extract moving objects in a 3D image sequence based on their location, size, and speed. The filter bank is generated by an unconventional 3D subband coding scheme that generates 20 orientation-tuned filters at each spatial and temporal resolution. The frequency responses of the wavelet filter bank are combined using a least-squares method to assign a velocity vector to each spatial location in an image sequence. Several examples are provided to demonstrate the flow computation abilities of the wavelet vector motion sensor.

Burns, Thomas J.; Rogers, Steven K.; Oxley, Mark E.; Ruck, Dennis W.

1994-07-01

227

Computational methods for coupling microstructural and micromechanical materials response simulations  

SciTech Connect

Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

2000-04-01

228

GRACE: public health recovery methods following an environmental disaster.  

PubMed

Different approaches are necessary when community-based participatory research (CBPR) of environmental illness is initiated after an environmental disaster within a community. Often such events are viewed as golden scientific opportunities to do epidemiological studies. However, the authors believe that in such circumstances, community engagement and empowerment need to be integrated into the public health service efforts in order for both those and any science to be successful, with special care being taken to address the immediate health needs of the community first, rather than the pressing needs to answer important scientific questions. The authors will demonstrate how they have simultaneously provided valuable public health service, embedded generalizable scientific knowledge, and built a successful foundation for supplemental CBPR through their on-going recovery work after the chlorine gas disaster in Graniteville, South Carolina. PMID:20439226

Svendsen, Erik R; Whittle, Nancy C; Sanders, Louisiana; McKeown, Robert E; Sprayberry, Karen; Heim, Margaret; Caldwell, Richard; Gibson, James J; Vena, John E

2010-01-01

229

GRACE: Public Health Recovery Methods following an Environmental Disaster  

PubMed Central

Different approaches are necessary when Community Based Participatory Research (CBPR) of environmental illness is initiated after an environmental disaster within a community. Often such events are viewed as golden scientific opportunities to do epidemiological studies. However, we believe that in such circumstances, community engagement and empowerment need to be integrated into the public health service efforts in order for both those and any science to be successful, with special care being taken to address the immediate health needs of the community first rather than the pressing needs to answer important scientific questions. We will demonstrate how we have simultaneously provided valuable public health service, embedded generalizable scientific knowledge, and built a successful foundation for supplemental CBPR through our on-going recovery work after the chlorine gas disaster in Graniteville, South Carolina.

Svendsen, ER; Whittle, N; Wright, L; McKeown, RE; Sprayberry, K; Heim, M; Caldwell, R; Gibson, JJ; Vena, J.

2014-01-01

230

AN ALGEBRAIC METHOD FOR PUBLIC-KEY CRYPTOGRAPHY  

Microsoft Academic Search

Algebraic key establishment protocols based on the difficulty of solving equations over algebraic structures are described as a theoretical basis for constructing public-key cryptosystems. A protocol is a multi-party algorithm, defined by a sequence of steps, specifying the actions required of two or more parties in order to achieve a specified objective. Furthermore, a key establishment protocol is

Iris Anshel; Michael Anshel; Dorian Goldfeld

1999-01-01
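The Anshel-Anshel-Goldfeld construction works over noncommutative algebraic structures such as braid groups and is not sketched here. As an illustration of what a key establishment protocol is (a multi-party algorithm after which both parties hold the same secret), the classic Diffie-Hellman exchange over Z_p* is shown instead, with toy parameters chosen only for readability.

```python
import secrets

# toy parameters for illustration only; real deployments use
# >= 2048-bit primes or elliptic-curve groups
p = 0xFFFFFFFFFFFFFFC5          # 2**64 - 59, a 64-bit prime (toy size)
g = 5

a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

A = pow(g, a, p)                     # Alice sends A over the open channel
B = pow(g, b, p)                     # Bob sends B over the open channel

key_alice = pow(B, a, p)             # both arrive at g**(a*b) mod p
key_bob = pow(A, b, p)
assert key_alice == key_bob
```

Here the underlying hard equation is discrete-log over a commutative group; the abstract above is about replacing that with equation-solving over noncommutative structures.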

231

37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...  

Code of Federal Regulations, 2010 CFR

...libraries, sale, or destruction. Donation of public domain software may...documentation should be included in the donation. (iii) If the public...must be submitted. (3) Donations of public domain software...at 60 FR 34168, June 30, 1995; 64 FR 29522, June 1,...

2010-07-01

232

37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...  

Code of Federal Regulations, 2010 CFR

...libraries, sale, or destruction. Donation of public domain software may...documentation should be included in the donation. (iii) If the public...must be submitted. (3) Donations of public domain software...at 60 FR 34168, June 30, 1995; 64 FR 29522, June 1,...

2009-07-01

233

Practical methods to improve the development of computational software  

SciTech Connect

The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best-practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

Osborne, A. G.; Harding, D. W.; Deinert, M. R. [Department of Mechanical Engineering, University of Texas, Austin (United States)] [Department of Mechanical Engineering, University of Texas, Austin (United States)

2013-07-01
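One of the cheapest practices in the spirit of this paper is assertion-based regression testing of numerical routines. The trapezoid-rule routine and its tolerances below are hypothetical examples, not taken from the paper; they show how a few asserts pin both an exact case and a known error order.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n subintervals.

    Deliberately simple routine used to illustrate assertion-based
    regression testing (hypothetical example).
    """
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# regression tests: exact on linear integrands, O(h^2) accurate otherwise
assert abs(trapezoid(lambda x: 2 * x, 0.0, 1.0, 10) - 1.0) < 1e-12
assert abs(trapezoid(math.sin, 0.0, math.pi, 1000) - 2.0) < 1e-5
```

Checks like these run on every change, so a refactoring that silently alters the quadrature weights fails immediately instead of corrupting downstream results.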

234

An efficient method for computing mathematical morphology for medical imaging  

NASA Astrophysics Data System (ADS)

Many medical imaging techniques use mathematical morphology (MM), with discs and spheres being the structuring elements (SE) of choice. Given the non-linear nature of the underlying comparison operations (min, max, AND, OR), MM optimization can be challenging. Many efficient methods have been proposed for various types of SE based on the ability to decompose the SE by way of separability or homotopy. Usually, these methods can only approximate disc and sphere SE rather than accomplish MM for the exact SE obtained by discretization of such shapes. We present a method for efficiently computing MM for binary and gray-scale image volumes using digitally convex and X-Y-Z symmetric flat SE, which include discs and spheres. The computational cost is a function of the diameter of the SE rather than its volume. Additional memory overhead, if any, is modest. We are able to compute MM on real medical image volumes with greatly reduced running times, with increasing gains for larger SE. Our method is also robust to scale: it is applicable to ellipse and ellipsoid SE, which may result from discretizing a disc or sphere on an anisotropic grid. In addition, it is easy to implement and can make use of existing image comparison operations. We present performance results on large medical chest CT datasets.

Vaz, Michael S.; Kiraly, Atilla P.

2006-03-01

235

Computer-aided methods, systems, and apparatuses for shoulder arthroplasty  

US Patent & Trademark Office Database

A method for performing shoulder arthroplasty or hemiarthroplasty is provided. The method includes generating navigational reference information relating to position and orientation of a body part forming at least a portion of the shoulder joint. The reference information may be stored in a computer. Navigational references are attached to the body part and an object. Information is received regarding the position and orientation of the object with respect to the body part and the object is navigated according to this information. The body part may be modified using the object and the modification may be displayed on a monitor associated with the computer. The navigational references may be used to track a shoulder arthroplasty trial component. Information is received regarding the position and orientation of the trial component with respect to the body part. This information is used to navigate the trial component to the body part.

2012-02-07

236

Characterization of Meta-Materials Using Computational Electromagnetic Methods  

NASA Technical Reports Server (NTRS)

An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, the Finite Element Method (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane wave incidence. For efficient computation of the reflection and transmission over a wide frequency band through a meta-material, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicholson-Ross method and Genetic Algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.

Deshpande, Manohar; Shin, Joon

2005-01-01

237

Computational methods for efficient structural reliability and reliability sensitivity analysis  

NASA Astrophysics Data System (ADS)

This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
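The adaptive machinery of the AIS method is beyond a snippet, but the core idea — estimate a small failure probability by sampling from a density shifted toward the failure domain and reweighting by the likelihood ratio — can be sketched on a toy limit state (the limit state, shift point, and sample count below are assumptions for illustration, not from the paper):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Hypothetical limit state: failure when g(x) = 5 - (x1 + x2) < 0, x ~ N(0, I).
# Exact failure probability: P(x1 + x2 > 5) = 1 - Phi(5 / sqrt(2)).
exact = 0.5 * (1.0 - erf(5.0 / sqrt(2.0) / sqrt(2.0)))

# Importance sampling density: a standard normal shifted to the design point
# (2.5, 2.5), the closest point of the failure surface to the origin.
mu = np.array([2.5, 2.5])
n = 50_000
x = rng.standard_normal((n, 2)) + mu

# Likelihood ratio w = f(x) / h(x) for two unit-covariance normals.
w = np.exp(-0.5 * (x ** 2).sum(axis=1) + 0.5 * ((x - mu) ** 2).sum(axis=1))
fail = x.sum(axis=1) > 5.0
p_is = float((w * fail).mean())
```

Plain Monte Carlo would need millions of samples to see this ~2e-4 event; the shifted sampler lands roughly half its points in the failure domain, which is why importance-sampling variants like AIS are attractive for reliability work.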

Wu, Y.-T.

1993-04-01

238

Interval sampling methods and measurement error: a computer simulation.  

PubMed

A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. PMID:24127380
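The three interval sampling methods the study simulates are simple to state in code. Below is a minimal sketch of one simulated observation period (the 600 s duration, 10 s intervals, and per-second behavior probability are illustrative assumptions, not the paper's parameters); it reproduces the well-known biases: partial-interval recording overestimates and whole-interval recording underestimates event duration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a 600 s observation scored second-by-second, with the
# target behavior present in each second with probability 0.3.
behavior = rng.random(600) < 0.3
true_prop = behavior.mean()              # true proportion of time engaged

interval_len = 10
intervals = behavior.reshape(-1, interval_len)

# Partial-interval recording: interval scored if the behavior occurs at all.
pir = intervals.any(axis=1).mean()
# Whole-interval recording: scored only if the behavior fills the interval.
wir = intervals.all(axis=1).mean()
# Momentary time sampling: scored from the final instant of each interval.
mts = intervals[:, -1].mean()
```

Repeating such runs and tabulating the errors across combinations of interval, event, and observation durations is essentially the simulation design the abstract describes.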

Wirth, Oliver; Slaven, James; Taylor, Matthew A

2014-01-01

239

Improved diffraction computation with a hybrid C-RCWA-method  

NASA Astrophysics Data System (ADS)

The Rigorous Coupled Wave Approach (RCWA) is acknowledged as a well-established diffraction simulation method in electro-magnetic computing. Its two most essential applications in the semiconductor industry are in optical scatterometry and optical lithography simulation. In scatterometry, it is the standard technique to simulate spectra or diffraction responses for the gratings to be characterized. In optical lithography simulation, it is an effective alternative to supplement or even replace the FDTD for the calculation of light diffraction from thick masks as well as from wafer topographies. Unfortunately, the RCWA shows some serious disadvantages, particularly for the modelling of grating profiles with shallow slopes and of multilayer stacks with many layers, such as extreme UV masks with a large number of quarter-wave layers. Here, the slicing may become a nightmare, and the computation costs may increase dramatically. Moreover, accuracy suffers from the inadequate staircase approximation of the slicing in conjunction with the boundary conditions in TM polarization. On the other hand, the Chandezon Method (C-Method) solves all these problems in a very elegant way; however, it fails for binary patterns or gratings with very steep profiles, where the RCWA works excellently. Therefore, we suggest a combination of both methods as plug-ins in the same scattering-matrix coupling frame. The improved performance and the advantages of this hybrid C-RCWA-Method over the individual methods are shown with some relevant examples.

Bischoff, Joerg

2009-03-01

240

The projection method for computing multidimensional absolutely continuous invariant measures  

Microsoft Academic Search

We present an algorithm for numerically computing an absolutely continuous invariant measure associated with a piecewise C² expanding mapping S: Ω → Ω on a bounded region Ω ⊂ R^N. The method is based on the Galerkin projection principle for solving an operator equation in a Banach space. With the help of the modern notion of functions of bounded variation in multidimension, we prove the convergence of

Jiu Ding; Aihui Zhou

1994-01-01

241

Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach  

NASA Astrophysics Data System (ADS)

Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to the training of multi-layer perceptron neural networks for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm, probably the most efficient in terms of speed and success rate among gradient-based methods. The Annapolis River catchment was selected as the area of this study due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and presence of river ice - conditions which make flow forecasting more troublesome. 
The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural networks training turns out to be superior to other Evolutionary Computation-based algorithms. The Levenberg-Marquardt optimization must be considered as the most efficient one due to its speed. Its drawback due to possible sticking in poor local optimum can be overcome by applying a multi-start approach.
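The first algorithm in the study's list, classic DE (DE/rand/1/bin), is compact enough to sketch. Below it minimizes a sphere function standing in for a network training loss (the objective, bounds, and control parameters F and CR are illustrative defaults, not the study's settings):

```python
import numpy as np

def de_rand_1_bin(f, dim, bounds, pop_size=30, F=0.5, CR=0.9,
                  generations=200, seed=0):
    """Classic DE/rand/1/bin minimizer (a sketch; the study's variants differ)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals other than i for the mutation.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            # Binomial crossover with one guaranteed mutant component.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:              # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

# Toy objective standing in for a rainfall-runoff network's training error.
x_best, f_best = de_rand_1_bin(lambda x: float((x ** 2).sum()),
                               dim=5, bounds=(-5.0, 5.0))
```

Training a real multi-layer perceptron this way simply replaces the toy objective with the network's forecasting error as a function of its flattened weight vector, which is what makes derivative-free methods like DE applicable at all.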

Piotrowski, Adam P.; Napiorkowski, Jarosław J.

2011-09-01

242

38 CFR 3.25 - Parent's dependency and indemnity compensation (DIC)-Method of payment computation.  

Code of Federal Regulations, 2010 CFR

...Parent's dependency and indemnity compensation (DIC)-Method of payment computation. 3...Parent's dependency and indemnity compensation (DIC)-Method of payment computation. Monthly payments of parents' DIC shall be computed in accordance with...

2010-07-01

243

38 CFR 3.25 - Parent's dependency and indemnity compensation (DIC)-Method of payment computation.  

Code of Federal Regulations, 2010 CFR

...Parent's dependency and indemnity compensation (DIC)-Method of payment computation. 3...Parent's dependency and indemnity compensation (DIC)-Method of payment computation. Monthly payments of parents' DIC shall be computed in accordance with...

2009-07-01

244

38 CFR 3.25 - Parent's dependency and indemnity compensation (DIC)-Method of payment computation.  

Code of Federal Regulations, 2013 CFR

...Parent's dependency and indemnity compensation (DIC)-Method of payment computation. 3...Parent's dependency and indemnity compensation (DIC)-Method of payment computation. Monthly payments of parents' DIC shall be computed in accordance with...

2013-07-01

245

FDM-FEM (Finite Difference Method-Finite Element Method) Method for Viscous Flow Computations over Multiple-Bodies,  

National Technical Information Service (NTIS)

A hybrid method between a finite-difference method (FDM) and a finite-element method (FEM) is developed for computations of two and three dimensional viscous flowfields over multiple bodies. In this scheme, an implicit finite-difference method is applied ...

K. Nakahashi; S. Obayashi

1987-01-01

246

Computer method for identification of boiler transfer functions  

NASA Technical Reports Server (NTRS)

An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, some examples are given, along with results from a study of a set of data consisting of measurements of the inlet impedance of a single-tube forced-flow boiler with inserts.
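The fitting loop described here — adjust transfer-function parameters until the model's frequency-response locus matches the measured one — can be sketched with a generic nonlinear least-squares solver. This is a minimal stand-in, not the paper's penalized formulation; the first-order model G(s) = K/(τs + 1) and the synthetic "measurements" are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "measured" frequency response of a first-order system
# G(s) = K / (tau*s + 1), with a little complex noise added.
K_true, tau_true = 2.0, 0.5
w = np.logspace(-1, 2, 40)                       # frequencies, rad/s
G_meas = K_true / (1j * w * tau_true + 1.0)
rng = np.random.default_rng(2)
G_meas += 0.01 * (rng.standard_normal(40) + 1j * rng.standard_normal(40))

def residual(p):
    """Mismatch between candidate model and measured locus, as real vector."""
    K, tau = p
    G = K / (1j * w * tau + 1.0)
    err = G - G_meas
    return np.concatenate([err.real, err.imag])

fit = least_squares(residual, x0=[1.0, 1.0])
K_fit, tau_fit = fit.x
```

Trying a different candidate structure (say, second order with a delay) just means swapping the model inside `residual`, which mirrors the "try transfer functions until satisfactory" workflow of the abstract.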

Miles, J. H.

1971-01-01

247

A FAST NEW PUBLIC CODE FOR COMPUTING PHOTON ORBITS IN A KERR SPACETIME  

SciTech Connect

Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. We present a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semianalytically for the first time. We discuss the method, its implementation in a freely available FORTRAN code, and its application to toy problems from the literature.
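The reduction to Carlson's elliptic integrals is the heart of this code. The symmetric integral R_F itself is evaluated by a short duplication-theorem iteration, sketched below from the standard algorithm (a plain illustration of the integral evaluator, not the geodesic code; valid for non-negative arguments with at most one zero):

```python
def carlson_rf(x, y, z, tol=1e-12):
    """Carlson's symmetric elliptic integral R_F(x, y, z).

    Uses the duplication theorem: R_F is invariant under
    (x, y, z) -> ((x+l)/4, (y+l)/4, (z+l)/4) with
    l = sqrt(x*y) + sqrt(y*z) + sqrt(z*x), and equals m**-0.5 once the
    three arguments coincide at m. At most one argument may be zero.
    """
    from math import sqrt
    while True:
        m = (x + y + z) / 3.0
        if max(abs(x - m), abs(y - m), abs(z - m)) < tol * m:
            return 1.0 / sqrt(m)
        lam = sqrt(x * y) + sqrt(y * z) + sqrt(z * x)
        x, y, z = (x + lam) / 4.0, (y + lam) / 4.0, (z + lam) / 4.0
```

Known special values make handy checks: R_F(1, 1, 1) = 1 and R_F(0, y, y) = π/(2√y). Reducing the geodesic quadratures to calls like this is what lets the paper compute all coordinates semianalytically.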

Dexter, Jason [Department of Physics, University of Washington, Seattle, WA 98195-1560 (United States); Agol, Eric [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States)], E-mail: jdexter@u.washington.edu

2009-05-10

248

A bibliography on finite element and related methods analysis in reactor physics computations (1971--1997)  

SciTech Connect

This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, thesis/dissertations and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name, with a subsort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, thesis/dissertations, etc.) that were not able to be independently tracked using available resources and thus were not included in this listing.

Carpenter, D.C.

1998-01-01

249

Profiling Animal Toxicants by Automatically Mining Public Bioassay Data: A Big Data Approach for Computational Toxicology  

PubMed Central

In vitro bioassays have been developed and are currently being evaluated as potential alternatives to traditional animal toxicity models. Already, the progress of high throughput screening techniques has resulted in an enormous amount of publicly available bioassay data having been generated for a large collection of compounds. When a compound is tested using a collection of various bioassays, all the testing results can be considered as providing a unique bio-profile for this compound, which records the responses induced when the compound interacts with different cellular systems or biological targets. Profiling compounds of environmental or pharmaceutical interest using useful toxicity bioassay data is a promising method to study complex animal toxicity. In this study, we developed an automatic virtual profiling tool to evaluate potential animal toxicants. First, we automatically acquired all PubChem bioassay data for a set of 4,841 compounds with publicly available rat acute toxicity results. Next, we developed a scoring system to evaluate the relevance between these extracted bioassays and animal acute toxicity. Finally, the top ranked bioassays were selected to profile the compounds of interest. The resulting response profiles proved to be useful to prioritize untested compounds for their animal toxicity potentials and form a potential in vitro toxicity testing panel. The protocol developed in this study could be combined with structure-activity approaches and used to explore additional publicly available bioassay datasets for modeling a broader range of animal toxicities.

Zhang, Jun; Hsieh, Jui-Hua; Zhu, Hao

2014-01-01

250

Assessing Computational Methods of Cis-Regulatory Module Prediction  

PubMed Central

Computational methods attempting to identify instances of cis-regulatory modules (CRMs) in the genome face a challenging problem of searching for potentially interacting transcription factor binding sites while knowledge of the specific interactions involved remains limited. Without a comprehensive comparison of their performance, the reliability and accuracy of these tools remains unclear. Faced with a large number of different tools that address this problem, we summarized and categorized them based on search strategy and input data requirements. Twelve representative methods were chosen and applied to predict CRMs from the Drosophila CRM database REDfly, and across the human ENCODE regions. Our results show that the optimal choice of method varies depending on species and composition of the sequences in question. When discriminating CRMs from non-coding regions, those methods considering evolutionary conservation have a stronger predictive power than methods designed to be run on a single genome. Different CRM representations and search strategies rely on different CRM properties, and different methods can complement one another. For example, some favour homotypical clusters of binding sites, while others perform best on short CRMs. Furthermore, most methods appear to be sensitive to the composition and structure of the genome to which they are applied. We analyze the principal features that distinguish the methods that performed well, identify weaknesses leading to poor performance, and provide a guide for users. We also propose key considerations for the development and evaluation of future CRM-prediction methods.

Su, Jing; Teichmann, Sarah A.; Down, Thomas A.

2010-01-01

251

A comparison of computation methods for leg stiffness during hopping.  

PubMed

Despite the presence of several different calculations of leg stiffness (Kleg) during hopping, little is known about how the methodologies produce differences in the computed stiffness. The purpose of this study was to directly compare Kleg during hopping as calculated from three previously published computation methods. Ten male subjects hopped in place on two legs, at four frequencies (2.2, 2.6, 3.0, and 3.4 Hz). In this article, leg stiffness was calculated from the natural frequency of oscillation (method A), the ratio of maximal ground reaction force (GRF) to peak center of mass displacement at the middle of the stance phase (method B), and an approximation based on sine-wave GRF modeling (method C). We found that leg stiffness in all methods increased with an increase in hopping frequency, but Kleg values using methods A and B were significantly higher than those using method C at all hopping frequencies. Therefore, care should be taken when comparing leg stiffness obtained by method C with values calculated by other methods. PMID:24676522
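The three calculations can be written out directly. The sketch below uses commonly published forms of each method — a linear-oscillator model for method A, the force/displacement ratio for method B, and a sine-wave GRF model (in the form attributed to Dalleau et al. 2004) for method C. The input values and the method C formula are assumptions for illustration, not data or equations from this paper:

```python
import math

# Illustrative hopping data (assumed): body mass, hopping frequency,
# peak vertical GRF, peak COM displacement, and contact time.
m = 70.0          # kg
f_hop = 2.2       # Hz
F_max = 2200.0    # N, peak ground reaction force
dy_max = 0.045    # m, peak center-of-mass displacement
t_c = 0.30        # s, ground contact time
t_f = 1.0 / f_hop - t_c        # s, flight time from the hop period

# Method A: treat the hopper as a linear oscillator at the hopping frequency,
# so k = m * omega**2 with omega = 2*pi*f.
k_A = m * (2.0 * math.pi * f_hop) ** 2

# Method B: ratio of peak GRF to peak center-of-mass displacement.
k_B = F_max / dy_max

# Method C: sine-wave GRF model (Dalleau-style form; treat as assumed).
k_C = (m * math.pi * (t_f + t_c)) / (
    t_c ** 2 * ((t_f + t_c) / math.pi - t_c / 4.0))
```

Because the three formulas respond differently to the same movement data, systematic offsets between them (as the study reports for method C) are expected rather than surprising.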

Hobara, Hiroaki; Inoue, Koh; Kobayashi, Yoshiyuki; Ogata, Toru

2014-02-01

252

Computational Methods; Tool for Electronic Structure Analysis of Solids  

NASA Astrophysics Data System (ADS)

Solid materials and their properties are of great technological interest. In-depth understanding of such materials requires a quantum mechanical description of the related solids. Electronic structure calculations of solids can be performed in a variety of ways, ranging from classical to quantum mechanical approaches. For a system which shows strange phenomena, one often relies on first-principles calculations. In this paper, we give a brief description of the plane wave basis that underlies many condensed matter Density Functional Theory methods. We give a brief introduction to these methods, especially the Augmented Plane Wave method and its linearization, along with an alternate approach. Lastly, we discuss the advantages of the modern computer code WIEN2k, which implements the linearized augmented plane wave method.

Ahmed, Rashid; Ahmed, Maqsood; Saeed, M. A.; Fazal-E-Aleem

2005-03-01

253

Comparison of different methods for shielding design in computed tomography.  

PubMed

The purpose of this work is to compare different methods for shielding calculation in computed tomography (CT). The BIR-IPEM (British Institute of Radiology and Institute of Physics and Engineering in Medicine) and NCRP (National Council on Radiation Protection) methods were used for shielding thickness calculation. Scattered dose levels and calculated barrier thicknesses were also compared with those obtained by scatter dose measurements in the vicinity of a dedicated CT unit. The minimal requirement for protective barriers based on the BIR-IPEM method ranged between 1.1 and 1.4 mm of lead, demonstrating underestimation of up to 20 % and overestimation of up to 30 % when compared with thicknesses based on measured dose levels. For the NCRP method, calculated thicknesses were 33 % higher (27-42 %). BIR-IPEM methodology-based results were comparable with values based on scattered dose measurements, while results obtained using the NCRP methodology demonstrated an overestimation of the minimal required barrier thickness. PMID:21743070

Ciraj-Bjelac, O; Arandjic, D; Kosutic, D

2011-09-01

254

Public websites and human–computer interaction: an empirical study of measurement of website quality and user satisfaction  

Microsoft Academic Search

The focus of this paper is to investigate measurement of website quality and user satisfaction. More specifically, the paper reports on a study investigating whether users of high-quality public websites are more satisfied than those of low-quality websites. Adopting a human–computer interaction perspective, we have gathered data from the 2009 public website awards in Scandinavia. Our analysis of Norwegian and

Hanne Sørum; Kim Normann Andersen; Ravi Vatrapu

2012-01-01

255

Public websites and human–computer interaction: an empirical study of measurement of website quality and user satisfaction  

Microsoft Academic Search

The focus of this paper is to investigate measurement of website quality and user satisfaction. More specifically, the paper reports on a study investigating whether users of high-quality public websites are more satisfied than those of low-quality websites. Adopting a human–computer interaction perspective, we have gathered data from the 2009 public website awards in Scandinavia. Our analysis of Norwegian and

Hanne Sørum; Kim Normann Andersen; Ravi Vatrapu

2011-01-01

256

77 FR 22326 - Privacy Act of 1974, as Amended by Public Law 100-503; Notice of a Computer Matching Program  

Federal Register 2010, 2011, 2012, 2013

...by Public Law 100-503; Notice of a Computer Matching Program AGENCY: Office of Financial...Information System (PARIS) notice of a computer matching program between the Department...amended by Public Law 100-503, the Computer Matching and Privacy Protection Act...

2012-04-13

257

Molecular (hyper)polarizabilities computed by pseudospectral methods.  

PubMed

We have developed algorithms based on pseudospectral (PS) ab initio electronic structure methods for solving the first- and second-order Hartree-Fock/Kohn-Sham equations and evaluating molecular polarizabilities and first- and second-order hyperpolarizabilities in the spin-restricted and spin-unrestricted formalisms at the Hartree-Fock (HF) and density functional theory (DFT) levels. We carry out calculations on 50 small molecules to test the accuracy of the PS approach. Our results demonstrate that the molecular polarizability alpha computed by the PS method is essentially identical to the value obtained from conventional methods for both HF and DFT calculations, while the first-order hyperpolarizability beta and second-order hyperpolarizability gamma have mean unsigned percentage differences of 1.26% and 0.62% (HF) and 0.78% and 0.65% (DFT), respectively. We also present CPU timing comparisons between the PS and conventional methods at the 6-31G** level for 14 molecules having 185 to 1185 basis functions. The timing results show that the PS method is 25 (PS-HF) and 13 (PS-DFT) times faster than the conventional method for a system with 500 basis functions. The PS methods are found to scale as N^2.70 (PS-HF) and N^2.40 (PS-DFT), while the conventional methods scale as N^2.93 (PRISM-HF) and N^2.87 (PRISM-DFT), where N is the number of basis functions. PMID:15836304
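The pseudospectral SCF machinery itself is far beyond a snippet, but the quantity being computed has a simple operational definition: the polarizability is the derivative of the induced dipole with respect to an applied field. The toy below illustrates that finite-field definition on a 1-D model "molecule" — a charge on a harmonic spring, for which α = q²/k exactly (the model and numbers are assumptions for illustration, unrelated to the paper's analytic-derivative approach):

```python
# Toy finite-field evaluation of a polarizability.
q = 1.2      # charge (arbitrary units)
k = 3.0      # spring constant of the binding potential

def dipole(E):
    """Induced dipole at field E: displacement x = q*E/k, dipole mu = q*x."""
    return q * (q * E / k)

# Central-difference derivative of the dipole with respect to the field,
# the standard finite-field route to alpha = d(mu)/dE at E = 0.
h = 1e-3
alpha_num = (dipole(h) - dipole(-h)) / (2.0 * h)
alpha_exact = q * q / k
```

Hyperpolarizabilities β and γ are, in the same picture, the second and third field derivatives of the dipole; methods like the paper's compute these derivatives analytically rather than by differencing.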

Cao, Yixiang; Friesner, Richard A

2005-03-01

258

Computation of multi-material interactions using point method  

SciTech Connect

Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are applied to problems of solid deformation, state variables such as stress and damage need to be advected, causing significant numerical diffusion error. When Lagrangian methods are applied to problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method, Eulerian meshes stay fixed and Lagrangian particles move through the Eulerian meshes during the material deformation. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun., v. 87, p. 236) provides a mathematical foundation for an improved version of the PIC method, the material point method (MPM). The unique advantages of the MPM method have led to many attempts to apply the method to problems involving the interaction of different materials, such as fluid-structure interactions. These problems are multiphase flow or multimaterial deformation problems. In these problems pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the same scheme used in Eulerian methods for multiphase flows is used to calculate the pressure. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to higher order of accuracy in the sense of weak solutions for the continuity equations. 
Numerical examples are given to demonstrate the new scheme.
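The particle/grid coupling at the heart of PIC and MPM can be sketched in one dimension: particle mass and momentum are transferred to fixed Eulerian nodes through shape functions, and the transfer conserves both quantities exactly. This is a generic illustration with linear (tent) shape functions and made-up particle data, not the paper's higher-order continuity scheme:

```python
import numpy as np

# 1-D material-point-style transfer of particle mass and momentum to a
# fixed Eulerian grid. All values are illustrative.
dx = 0.1
nodes = np.arange(0.0, 1.0 + dx / 2, dx)          # fixed grid nodes
xp = np.array([0.23, 0.27, 0.55, 0.81])           # particle positions
mp = np.array([1.0, 1.0, 2.0, 0.5])               # particle masses
vp = np.array([0.1, -0.2, 0.3, 0.0])              # particle velocities

m_grid = np.zeros_like(nodes)
p_grid = np.zeros_like(nodes)                     # nodal momentum
for x, m, v in zip(xp, mp, vp):
    i = int(x / dx)                                # left node of the cell
    w_right = x / dx - i                           # linear shape functions
    for node, w in ((i, 1.0 - w_right), (i + 1, w_right)):
        m_grid[node] += w * m
        p_grid[node] += w * m * v

# Nodal velocities where mass is present.
v_grid = np.divide(p_grid, m_grid, out=np.zeros_like(p_grid),
                   where=m_grid > 0)
```

The grid equations of motion are then solved on the nodes and the updated field is interpolated back to the particles, which is what lets the particles carry state variables without mesh entanglement.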

Zhang, Duan Z [Los Alamos National Laboratory; Ma, Xia [Los Alamos National Laboratory; Giguere, Paul T [Los Alamos National Laboratory

2009-01-01

259

An analytical method for computing atomic contact areas in biomolecules.  

PubMed

We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc. PMID:22965816
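The first step of the pipeline — extract candidate neighbor pairs as edges of a Delaunay triangulation of the atom centers — is easy to sketch with standard tools. Note this uses an ordinary (unweighted) triangulation and omits the paper's filtering to the dual complex and the Laguerre-Voronoi area partition; the random "atom" coordinates are placeholders:

```python
import numpy as np
from scipy.spatial import Delaunay
from itertools import combinations

rng = np.random.default_rng(3)
centers = rng.uniform(0.0, 10.0, size=(30, 3))    # fake atom centers

tri = Delaunay(centers)
edges = set()
for simplex in tri.simplices:                     # each 3-D simplex: 4 ids
    for i, j in combinations(sorted(simplex), 2):
        edges.add((int(i), int(j)))               # candidate contact pair
```

In the paper's scheme, each surviving edge of the filtered complex defines a contact, and the cap that atom j cuts on the sphere of atom i contributes the corresponding contact area.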

Mach, Paul; Koehl, Patrice

2013-01-15

260

Domain decomposition methods for the parallel computation of reacting flows  

NASA Astrophysics Data System (ADS)

Domain decomposition is a natural route to parallel computing for partial differential equation solvers. In this procedure, the subdomains of which the original domain of definition is comprised are assigned to independent processors, at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, we make comparisons between relaxation-based linear solvers and preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate approximately 10-fold speedup for it on 16 processors. The three special features of reacting flow models in relation to these linear systems are: the possibly large number of degrees of freedom per gridpoint; the dominance of dense intra-point source-term coupling over inter-point convective-diffusive coupling throughout significant portions of the flow-field; and strong nonlinearities which restrict the time step to small values (independent of linear algebraic considerations) throughout significant portions of the iteration history. Though these features are exploited to advantage herein, many aspects of the paper are applicable to the modeling of general convective-diffusive systems.
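The solver the abstract singles out — GMRES with incomplete-LU preconditioning — can be demonstrated in a few lines with standard sparse-linear-algebra tools. This sketch uses a 1-D Poisson matrix as a stand-in for the paper's reacting-flow Jacobians:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Model sparse system: 1-D Poisson matrix (tridiagonal), right-hand side of ones.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), ilu.solve)        # applies the ILU solve

x, info = spla.gmres(A, b, M=M)                   # info == 0 on convergence
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

With a good preconditioner the iteration count (and hence the number of global inner products, the communication-heavy step the paper counts) drops sharply, which is exactly why block-ILU GMRES wins in their comparison.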

Keyes, David E.

1989-05-01

261

A modified Henyey method for computing radiative transfer hydrodynamics  

NASA Technical Reports Server (NTRS)

The implicit hydrodynamic code of Kutter and Sparks (1972), which is limited to optically thick regions and employs the diffusion approximation for radiative transfer, is modified to include radiative transfer effects in the optically thin regions of a model star. A modified Henyey method is used to include the solution of the radiative transfer equation in this implicit code, and the convergence properties of this method are proven. A comparison is made between two hydrodynamic models of a classical Cepheid with a 12-day period, one of which was computed with the diffusion approximation and the other with the modified Henyey method. It is found that the two models produce nearly identical light and velocity curves, but differ in the fact that the former never has temperature inversions in the atmosphere while the latter does when sufficiently strong shocks are present.

Karp, A. H.

1975-01-01

262

Benchmarking Gas Path Diagnostic Methods: A Public Approach  

NASA Technical Reports Server (NTRS)

Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

2008-01-01
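The snapshot-based fault detection task this benchmark poses can be illustrated with a toy baseline. Everything below is invented for illustration: the residual model, fault size, window length, and threshold have nothing to do with the actual TTCP benchmark data or its evaluation metrics.

```python
import numpy as np

rng = np.random.default_rng(42)
n_snapshots = 200

# Hypothetical engine snapshot residuals (measured minus nominal), in units
# of sensor noise sigma; a gas path fault is injected at snapshot 150 as a
# persistent 4-sigma bias.
residuals = rng.standard_normal(n_snapshots)
residuals[150:] += 4.0

# Naive baseline detector: alarm when a trailing-window mean exceeds four
# standard errors -- the kind of reference a benchmark solution must beat.
window = 10
threshold = 4.0 / np.sqrt(window)
flags = [abs(residuals[i - window:i].mean()) > threshold
         for i in range(window, n_snapshots)]
first_alarm = window + flags.index(True)
print(first_alarm)
```

A real benchmark entry would of course also isolate which sensor, actuator, or component faulted, not merely detect that something changed.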

263

A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data  

NASA Technical Reports Server (NTRS)

A novel software method is presented for analyzing cylindrical and partially cylindrical objects inspected using computed tomography (CT). The method unwraps and re-slices the data so that the CT data from a cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume renderings and normal plane views provided by traditional CT software. It is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive; interested readers may inquire with the presenting author. The software is distinguished from other possible re-slicing solutions by its complete automation and its advanced processing and analysis capabilities.

Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

2011-01-01
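The unwrap-and-re-slice idea can be sketched in a few lines: sample a cylindrical shell of the volume along circles of constant radius, so it flattens into a (z, theta) sheet. This is a generic illustration using bilinear interpolation, not NASA's software; `unwrap_cylinder` and its arguments are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_cylinder(volume, center, radius, n_theta=360):
    """Resample a cylindrical shell of a CT volume (z, y, x) into a flat
    (z, theta) sheet at a fixed radius, via bilinear interpolation.
    Hypothetical helper illustrating the unwrap-and-re-slice idea."""
    nz = volume.shape[0]
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    ys = center[0] + radius * np.sin(theta)
    xs = center[1] + radius * np.cos(theta)
    sheet = np.empty((nz, n_theta))
    for z in range(nz):
        sheet[z] = map_coordinates(volume[z], [ys, xs], order=1, mode="nearest")
    return sheet

# Synthetic check: a volume whose intensity equals the distance from the
# axis; the unwrapped sheet at r=10 should then be ~10 everywhere.
zz, yy, xx = np.mgrid[0:4, 0:64, 0:64]
vol = np.hypot(yy - 32.0, xx - 32.0)
sheet = unwrap_cylinder(vol, center=(32.0, 32.0), radius=10.0)
print(sheet.shape, round(float(sheet.mean()), 2))
```

Repeating the sampling over a range of radii would stack such sheets into the full re-sliced view described in the abstract.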

264

A METHOD FOR OBTAINING DIGITAL SIGNATURES AND PUBLIC-KEY CRYPTOSYSTEMS  

Microsoft Academic Search

Abstract An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: 1. Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can

R. L. Rivest; A. Shamir; L. M. Adelman

1977-01-01

265

A method for obtaining digital signatures and public-key cryptosystems  

Microsoft Academic Search

An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: 1. Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can

Ronald L. Rivest; Adi Shamir; Leonard M. Adleman

1978-01-01
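The scheme described in these two abstracts is the RSA cryptosystem, and its key property is easy to demonstrate with toy numbers (real keys use primes hundreds of digits long; this sketch is insecure by construction):

```python
# Toy RSA keypair with small primes (illustration only).
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler totient of n
e = 17                     # public encryption exponent, gcd(e, phi) == 1
d = pow(e, -1, phi)        # private decryption exponent (Python 3.8+)

m = 42                     # message, encoded as an integer < n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key (d, n)
print(n, d, c)
```

Revealing (e, n) does not reveal d, because computing d requires factoring n into p and q, which is the hard problem the system rests on.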

266

Compressive sampling in computed tomography: Method and application  

NASA Astrophysics Data System (ADS)

Since Donoho and Candès et al. published their groundbreaking work on compressive sampling, or compressive sensing (CS), the theory has attracted wide attention, especially in biomedical imaging. In particular, several CS-based methods have been developed to enable accurate reconstruction from sparse data in computed tomography (CT) imaging. In this paper, we review the progress of CS-based CT in terms of the three fundamental requirements of CS: sparse representation, incoherent sampling, and the reconstruction algorithm. In addition, some potential applications of compressive sampling in CT are introduced.

Hu, Zhanli; Liang, Dong; Xia, Dan; Zheng, Hairong

2014-06-01
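The three CS ingredients the review is organized around can be demonstrated independently of any CT geometry: a sparse signal, an incoherent random sampling matrix, and an iterative soft-thresholding (ISTA) reconstruction. Sizes and parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 3.0
A = rng.standard_normal((m, n)) / np.sqrt(m)   # incoherent sampling matrix
y = A @ x_true                                 # compressive measurements

# ISTA: iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant
x = np.zeros(n)
for _ in range(3000):
    g = x - step * A.T @ (A @ x - y)                          # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
print(round(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)), 3))
```

Despite having only half as many measurements as unknowns, the sparse signal is recovered accurately; CT reconstruction methods replace the random matrix with the scanner's projection operator and the identity basis with a sparsifying transform.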

267

Fan Flutter Computations Using the Harmonic Balance Method  

NASA Technical Reports Server (NTRS)

An experimental forward-swept fan encountered flutter at part-speed conditions during wind tunnel testing. A new propulsion aeroelasticity code, based on a computational fluid dynamics (CFD) approach, was used to model the aeroelastic behavior of this fan. This three-dimensional code models the unsteady flowfield due to blade vibrations using a harmonic balance method to solve the Navier-Stokes equations. This paper describes the flutter calculations and compares the results to experimental measurements and previous results from a time-accurate propulsion aeroelasticity code.

Bakhle, Milind A.; Thomas, Jeffrey P.; Reddy, T.S.R.

2009-01-01

268

Interspike interval method to compute speech signals from neural firing  

NASA Astrophysics Data System (ADS)

Auditory perception neurons, also called inner hair cells (IHC), transform the mechanical movements of the basilar membrane into electrical impulses. The impulse coding of the neurons is the main information carrier in the auditory process and is the basis for improvements of cochlear implants as well as for low-rate, high-quality speech processing and compression. This paper shows how to compute the speech signal from the neural firing based on analysis of the interspike interval histogram. This new approach solves problems which other standard analysis methods do not solve sufficiently well.

Meyer-Baese, Uwe

1998-03-01
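The interspike interval histogram at the core of this approach is straightforward to compute. A toy example with spikes phase-locked to a 200 Hz tone (illustrative numbers, not the paper's cochlear model):

```python
import numpy as np

# Spikes phase-locked to a 200 Hz tone fire once per 5 ms cycle, so the
# interspike interval (ISI) histogram peaks at the tone's period and the
# signal frequency can be read back from the firing pattern alone.
spike_times = 0.005 * np.arange(200)    # 1 s of spikes at 200 Hz
isis = np.diff(spike_times)             # interspike intervals, seconds

counts, edges = np.histogram(isis, bins=np.arange(0.0, 0.021, 0.001))
modal_bin = edges[np.argmax(counts)]    # histogram peak near 5 ms

implied_hz = 1.0 / isis.mean()          # recovered tone frequency
print(round(implied_hz, 1))
```

Real IHC firing skips cycles and jitters, so the histogram shows peaks at integer multiples of the period; the fundamental is still recoverable from the peak spacing.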

269

Assessment of nonequilibrium radiation computation methods for hypersonic flows  

NASA Technical Reports Server (NTRS)

The present understanding of shock-layer radiation in the low density regime, as appropriate to hypersonic vehicles, is surveyed. Based on the relative importance of electron excitation and radiation transport, the hypersonic flows are divided into three groups: weakly ionized, moderately ionized, and highly ionized flows. In the light of this division, the existing laboratory and flight data are scrutinized. Finally, an assessment of the nonequilibrium radiation computation methods for the three regimes in hypersonic flows is presented. The assessment is conducted by comparing experimental data against the values predicted by the physical model.

Sharma, Surendra

1993-01-01

270

Method for determination of stand attributes and a computer program for performing the method  

US Patent & Trademark Office Database

The method is for forest inventory and for determination of stand attributes. Stand information of trees, sample plots, stands and larger forest areas can be determined by measuring or deriving the most important attributes for individual trees. The method uses a laser scanner and overlapping images. A densification of the laser point clouds is performed and the achieved denser point clouds are used to identify individual trees and groups of trees. A computer program is used for performing the method.

2012-06-26

271

Fractional Steps methods for transient problems on commodity computer architectures  

NASA Astrophysics Data System (ADS)

Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specifically addressed. Sustained performance above 2 GFlops per CPU is achieved for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000^3 unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far-field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.

Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.

2008-12-01
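Each 1D sweep of an ADI/LOD scheme reduces to a tridiagonal solve. Below is a minimal sketch of a Locally One-Dimensional step for the 2D heat equation, using the O(n) Thomas algorithm; zero Dirichlet boundaries are assumed, and this is an illustration of the scheme's structure, not the paper's optimized implementation.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)
    by O(n) forward elimination and back substitution -- the 1D kernel an
    LOD scheme applies along each grid direction."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def lod_heat_step(u, alpha):
    """One LOD timestep for the 2D heat equation: an implicit sweep along
    each axis in turn, each via the Thomas solver.
    alpha = kappa * dt / dx^2; boundaries held at zero."""
    n = u.shape[0]
    a = np.full(n, -alpha); b = np.full(n, 1 + 2 * alpha); c = np.full(n, -alpha)
    for j in range(n):                  # implicit sweep along axis 0
        u[:, j] = thomas(a, b, c, u[:, j])
    for i in range(n):                  # implicit sweep along axis 1
        u[i, :] = thomas(a, b, c, u[i, :])
    return u

u = np.zeros((33, 33)); u[16, 16] = 1.0    # point heat source
for _ in range(10):
    u = lod_heat_step(u, alpha=0.5)
print(round(float(u.max()), 4), round(float(u.sum()), 4))
```

Because every row (or column) solve is independent, the sweeps parallelize naturally, which is what the paper exploits on shared-memory hardware.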

272

An experiment in hurricane track prediction using parallel computing methods  

NASA Technical Reports Server (NTRS)

The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.

Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

1994-01-01

273

Applications of Computational Methods for Dynamic Stability and Control Derivatives  

NASA Technical Reports Server (NTRS)

Initial steps in the application of a low-order panel method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions of some computational issues associated with the simulation of S&C tests and selected results from the numerous surface grid resolution studies performed during the course of the study.

Green, Lawrence L.; Spence, Angela M.

2004-01-01

274

Illumination invariant method to detect and track left luggage in public areas  

NASA Astrophysics Data System (ADS)

Surveillance and its security applications have recently become critical subjects, with various studies placing a high demand on robust computer vision solutions that can work effectively and efficiently in complex environments without human intervention. In this paper, an efficient illumination-invariant template generation and tracking method to identify and track abandoned objects (bags) in public areas is described. Intensity and chromaticity distortion parameters are initially used to generate a binary mask containing all the moving objects in the scene. The binary blobs in the mask are tracked, and those found static through the use of a 'centroid-range' method are segregated. A Laplacian of Gaussian (LoG) filter is then applied to the parts of the current frame and the average background frame, encompassed by the static blobs, to pick up the high-frequency components. The total energy is calculated for both frames, current and background, covered by the detected edge map to ensure that illumination change has not resulted in false segmentation. Finally, the resultant edge map is registered and tracked through the use of a correlation-based matching process. The algorithm has been successfully tested on the i-LIDS dataset; results are presented in this paper.

Hassan, Waqas; Mitra, Bhargav; Chatwin, Chris; Young, Rupert; Birch, Philip

2010-04-01

275

Computation of Sound Propagation by Boundary Element Method  

NASA Technical Reports Server (NTRS)

This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.

Guo, Yueping

2005-01-01

276

Reforming the Social Studies Methods Course. SSEC Publication No. 155.  

ERIC Educational Resources Information Center

Numerous criticisms of college social studies methods courses have generated various reform efforts. Three of these reforms are examined, including competency-based teacher education, the value analysis approach to teacher education, and the human relations approach to teacher education. Competency-based courses develop among future teachers…

Patrick, John J.

277

Computational method for reducing variance with Affymetrix microarrays  

PubMed Central

Background Affymetrix microarrays are used by many laboratories to generate gene expression profiles. Generally, only large differences (> 1.7-fold) between conditions have been reported. Computational methods to reduce inter-array variability might be of value when attempting to detect smaller differences. We examined whether inter-array variability could be reduced by using data based on the Affymetrix algorithm for pairwise comparisons between arrays (ratio method) rather than data based on the algorithm for analysis of individual arrays (signal method). Six HG-U95A arrays that probed mRNA from young (21–31 yr old) human muscle were compared with six arrays that probed mRNA from older (62–77 yr old) muscle. Results Differences in mean expression levels of young and old subjects were small, rarely > 1.5-fold. The mean within-group coefficient of variation for 4629 mRNAs expressed in muscle was 20% according to the ratio method and 25% according to the signal method. The ratio method yielded more differences according to t-tests (124 vs. 98 differences at P < 0.01), rank sum tests (107 vs. 85 differences at P < 0.01), and the Significance Analysis of Microarrays method (124 vs. 56 differences with false detection rate < 20%; 20 vs. 0 differences with false detection rate < 5%). The ratio method also improved consistency between results of the initial scan and results of the antibody-enhanced scan. Conclusion The ratio method reduces inter-array variance and thereby enhances statistical power.

2002-01-01
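The within-group coefficient of variation and per-gene t-tests reported above are simple to reproduce on synthetic data. The fold change, CV, and group sizes below mimic the paper's setup, but the data are simulated, not Affymetrix measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_genes, n_arrays = 1000, 6
base = rng.lognormal(5.0, 1.0, n_genes)

# Synthetic expression: "young" vs "old" groups with ~20% within-group CV,
# plus a 1.3-fold change injected into the first 50 genes.
young = base[:, None] * rng.normal(1.0, 0.20, (n_genes, n_arrays))
old = base[:, None] * rng.normal(1.0, 0.20, (n_genes, n_arrays))
old[:50] *= 1.3

# Within-group coefficient of variation, as in the paper's comparison of
# the ratio and signal methods.
cv = young.std(axis=1, ddof=1) / young.mean(axis=1)
print(round(float(np.median(cv)), 2))

# Per-gene two-sample t-tests between groups.
p = stats.ttest_ind(young, old, axis=1).pvalue
print(int((p < 0.01).sum()))
```

The simulation makes the paper's point concrete: with only six arrays per group and a 20-25% CV, a 1.3-fold change is near the detection limit, so any preprocessing that trims a few points of variance directly buys statistical power.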

278

One-to-One Computing in Public Schools: Lessons from "Laptops for All" Programs  

ERIC Educational Resources Information Center

The basic tenet of one-to-one computing is that the student and teacher have Internet-connected, wireless computing devices in the classroom and optimally at home as well. Also known as "ubiquitous computing," this strategy assumes that every teacher and student has her own computing device and obviates the need for moving classes to computer…

Abell Foundation, 2008

2008-01-01

279

SOURCE WATER PROTECTION OF PUBLIC DRINKING WATER WELLS: COMPUTER MODELING OF ZONES CONTRIBUTING RECHARGE TO PUMPING WELLS  

EPA Science Inventory

Computer technology to assist states, tribes, and clients in the design of wellhead and source water protection areas for public water supply wells is being developed through two distinct SubTasks: (Sub task 1) developing a web-based wellhead decision support system, WellHEDSS, t...

280

Computational analysis of methods for reduction of induced drag  

NASA Technical Reports Server (NTRS)

The purpose of this effort was to perform a computational flow analysis of a design concept centered around induced drag reduction and tip-vortex energy recovery. The flow model solves the unsteady three-dimensional Euler equations, discretized as a finite-volume method, utilizing a high-resolution approximate Riemann solver for cell interface flux definitions. The numerical scheme is an approximately-factored block LU implicit Newton iterative-refinement method. Multiblock domain decomposition is used to partition the field into an ordered arrangement of blocks. Three configurations are analyzed: a baseline fuselage-wing, a fuselage-wing-nacelle, and a fuselage-wing-nacelle-propfan. Aerodynamic force coefficients, propfan performance coefficients, and flowfield maps are used to qualitatively assess design efficacy. Where appropriate, comparisons are made with available experimental data.

Janus, J. M.; Chatterjee, Animesh; Cave, Chris

1993-01-01

281

A computational design method for transonic turbomachinery cascades  

NASA Technical Reports Server (NTRS)

This paper describes a systematic computational procedure for finding the configuration changes necessary to make the flow past turbomachinery cascades, channels, and nozzles shock-free at prescribed transonic operating conditions. The method is based on a finite-area transonic analysis technique and the fictitious gas approach. This design scheme has two major areas of application. First, it can be used for the design of supercritical cascades, with applications mainly in compressor blade design. Second, it provides subsonic inlet shapes, including sonic surfaces with suitable initial data, for the design of supersonic (accelerated) exits, like nozzles and turbine cascade shapes. This fast, accurate, and economical method, with a proven potential for application to three-dimensional flows, is illustrated by some design examples.

Sobieczky, H.; Dulikravich, D. S.

1982-01-01

282

Computational aeroacoustics applications based on a discontinuous Galerkin method  

NASA Astrophysics Data System (ADS)

CAA simulation requires calculating the propagation of acoustic waves with low numerical dissipation and dispersion error, while taking complex geometries into account. To answer both challenges at the same time, a Discontinuous Galerkin Method is developed for Computational AeroAcoustics. The linearized Euler equations are solved with the Discontinuous Galerkin Method using flux-splitting techniques. Boundary conditions are established for rigid walls, non-reflective boundaries, and imposed values. A first validation, for in-duct propagation, is realized. Applications then illustrate: the Chu and Kovasznay decomposition of perturbations inside a uniform flow into independent acoustic and rotational modes, the Kelvin-Helmholtz instability, and acoustic diffraction by an aircraft wing. To cite this article: Ph. Delorme et al., C. R. Mecanique 333 (2005).

Delorme, Philippe; Mazet, Pierre; Peyret, Christophe; Ventribout, Yoan

2005-09-01

283

Improved Methods for Operating Public Transportation Services. Final August 1, 2011-January 31, 2013.  

National Technical Information Service (NTIS)

In this joint project, West Virginia University and the University of Maryland collaborated in developing improved methods for analyzing and managing public transportation services. Transit travel time data were collected using GPS tracking services and t...

A. Sanchez; A. Unnikrishan; D. Martinelli; M. E. Kim; P. Schonfeld

2013-01-01

284

Parallel computation of multigroup reactivity coefficient using iterative method  

SciTech Connect

One of the research activities supporting the commercial radioisotope production program is safety research on target irradiation for FPM (Fission Product Molybdenum). FPM targets are stainless steel tubes containing layers of high-enriched uranium; the tubes are irradiated to obtain fission products, and the fission material is widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core can interfere with its performance, one such disturbance arising from changes in flux or reactivity. A method is therefore needed to calculate the safety of ongoing configuration changes over the life of the reactor, making fast code an absolute necessity. The neutron safety margin for the research reactor can be re-evaluated without repeating the full reactivity calculation, which is the advantage of using a perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model involves complex computation, and several parallel algorithms with iterative methods have been developed for large sparse matrix solutions. The Red-Black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficient. In this research, a code for reactivity calculation with parallel processing was developed as part of the safety analysis; the calculation can be done more quickly and efficiently by utilizing the parallelism of a multicore computer. This code was applied to the safety-limit calculation of irradiated FPM targets with increasing uranium content.

Susmikanti, Mike [Center for Development of Nuclear Informatics, National Nuclear Energy Agency of Indonesia PUSPIPTEK Area, Tangerang (Indonesia)] [Center for Development of Nuclear Informatics, National Nuclear Energy Agency of Indonesia PUSPIPTEK Area, Tangerang (Indonesia); Dewayatna, Winter [Center for Nuclear Fuel Technology, National Nuclear Energy Agency of Indonesia PUSPIPTEK Area, Tangerang (Indonesia)] [Center for Nuclear Fuel Technology, National Nuclear Energy Agency of Indonesia PUSPIPTEK Area, Tangerang (Indonesia)

2013-09-09
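The Red-Black Gauss-Seidel iteration mentioned in the abstract colors grid points like a checkerboard so that all points of one color update independently, which is what makes each half-sweep parallelizable. Below is a vectorized sketch on a model Poisson problem, not the multigroup diffusion code itself.

```python
import numpy as np

def red_black_gauss_seidel(f, h, n_iter=800):
    """Red-Black Gauss-Seidel for the 2D Poisson problem -lap(u) = f with
    zero Dirichlet boundaries. Each color's points depend only on the other
    color, so each half-sweep is trivially parallel (expressed here as
    vectorized whole-array updates)."""
    u = np.zeros_like(f)
    red = (np.add.outer(np.arange(f.shape[0]), np.arange(f.shape[1])) % 2 == 0)
    interior = np.zeros_like(red)
    interior[1:-1, 1:-1] = True
    for _ in range(n_iter):
        for color in (red, ~red):
            nb = np.zeros_like(u)
            nb[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:])
            upd = 0.25 * (nb + h * h * f)       # 5-point stencil update
            u[color & interior] = upd[color & interior]
    return u

n = 33
h = 1.0 / (n - 1)
u = red_black_gauss_seidel(np.ones((n, n)), h)
print(round(float(u.max()), 4))   # peak of the Poisson solution
```

In a multicore or distributed setting, each half-sweep splits across processors with only a halo exchange in between, which is the structure the abstract's parallel reactivity code exploits.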

285

Characterization of heterogeneous solids via wave methods in computational microelasticity  

NASA Astrophysics Data System (ADS)

Real solids are inherently heterogeneous bodies. While the resolution at which they are observed may differ from one material to the next, heterogeneities heavily affect the dynamic behavior of all microstructured solids. This work introduces a wave propagation simulation methodology, based on Mindlin's microelastic continuum theory, as a tool to dynamically characterize microstructured solids in a way that naturally accounts for their inherent heterogeneities. Wave motion represents a natural benchmark problem for appreciating the full benefits of the microelastic theory, since it is in high-frequency dynamic regimes that microstructural effects most clearly manifest themselves. Through a finite-element implementation of the microelastic continuum and the interpretation of the resulting computational multiscale wavefields, one can estimate the effect of microstructures upon the wave propagation modes and the phase and group velocities. By accounting for microstructures without explicitly modeling them, the method reduces the computational time with respect to classical methods based on direct numerical simulation of the heterogeneities. The numerical method put forth in this research implements the microelastic theory through a finite-element scheme with enriched super-elements featuring microstructural degrees of freedom, and implements constitutive laws obtained by homogenizing the microstructure characteristics over material meso-domains. One can envision the use of this modeling methodology in support of diverse applications, ranging from structural health monitoring in composite materials to the simulation of biological and geomaterials. From an intellectual point of view, this work offers a mathematical explanation of some of the discrepancies often observed between one-scale models and physical experiments by targeting wave propagation, one of the areas where these discrepancies are most pronounced.

Gonella, Stefano; Steven Greene, M.; Kam Liu, Wing

2011-05-01

286

Graphical Methods: A Review of Current Methods and Computer Hardware and Software. Technical Report No. 27.  

ERIC Educational Resources Information Center

Graphical methods for displaying data, as well as available computer software and hardware, are reviewed. The authors have emphasized the types of graphs which are most relevant to the needs of the National Center for Education Statistics (NCES) and its readers. The following types of graphs are described: tabulations, stem-and-leaf displays,…

Bessey, Barbara L.; And Others

287

A multiscale discontinuous Galerkin method with the computational structure of a continuous Galerkin method  

Microsoft Academic Search

Proliferation of degrees of freedom has plagued discontinuous Galerkin methodology from its inception over 30 years ago. This paper develops a new computational formulation that combines the advantages of discontinuous Galerkin methods with the data structure of their continuous Galerkin counterparts. The new method uses local, element-wise problems to project a continuous finite element space into a given discontinuous space, and then applies

Thomas J. R. Hughes; Guglielmo Scovazzi; Pavel B. Bochev; Annalisa Buffa

2006-01-01

288

A comprehensive method for optical-emission computed tomography  

NASA Astrophysics Data System (ADS)

Optical-computed tomography (CT) and optical-emission computed tomography (ECT) are recent techniques with potential for high-resolution multi-faceted 3D imaging of the structure and function in unsectioned tissue samples up to 1-4 cc. Quantitative imaging of 3D fluorophore distribution (e.g. GFP) using optical-ECT is challenging due to attenuation present within the sample. Uncorrected reconstructed images appear hotter near the edges than at the center. A similar effect is seen in SPECT/PET imaging, although an important difference is attenuation occurs for both emission and excitation photons. This work presents a way to implement not only the emission attenuation correction utilized in SPECT, but also excitation attenuation correction and source strength modeling which are unique to optical-ECT. The performance of the correction methods was investigated by the use of a cylindrical gelatin phantom whose central region was filled with a known distribution of attenuation and fluorophores. Uncorrected and corrected reconstructions were compared to a sectioned slice of the phantom imaged using a fluorescent dissecting microscope. Significant attenuation artifacts were observed in uncorrected images and appeared up to 80% less intense in the central regions due to attenuation and an assumed uniform light source. The corrected reconstruction showed agreement throughout the verification image with only slight variations (~5%). Final experiments demonstrate the correction in tissue as applied to a tumor with constitutive RFP.

Thomas, Andrew; Bowsher, James; Roper, Justin; Oliver, Tim; Dewhirst, Mark; Oldham, Mark

2010-07-01

289

Helping Students Soar to Success on Computers: An Investigation of the Soar Study Method for Computer-Based Learning  

ERIC Educational Resources Information Center

This study used self-report and observation techniques to investigate how students study computer-based materials. In addition, it examined if a study method called SOAR can facilitate computer-based learning. SOAR is an acronym that stands for the method's 4 theoretically driven and empirically supported components: select (S), organize (O),…

Jairam, Dharmananda; Kiewra, Kenneth A.

2010-01-01

290

Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety  

SciTech Connect

Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

1999-09-20

291

Modern wing flutter analysis by computational fluid dynamics methods  

NASA Technical Reports Server (NTRS)

The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data which is the first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.

Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.

1987-01-01

292

Modern wing flutter analysis by computational fluid dynamics methods  

NASA Technical Reports Server (NTRS)

The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data which is the first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.

Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.

1988-01-01

293

Advanced rotor computations with a corrected potential method  

NASA Technical Reports Server (NTRS)

An unsteady Full-Potential Rotor code (FPR) has been enhanced with modifications directed at improving its drag prediction capability. The potential code has been rewritten with modifications to increase the code accuracy. Also, the shock-generated entropy has been included to provide solutions comparable to the Euler equations. Two different weakly interacting boundary-layer models have also been coupled to FPR in order to estimate skin-friction drag. One is a two-dimensional integral method and the other is a three-dimensional finite-difference scheme. The new flow solver is able to find accurate inviscid drags without recourse to numerical error tares. This permits the resolution of drag distributions resulting from rotor geometric variations. Good comparisons have been obtained between computed and measured torque for a rectangular and a highly swept model rotor.

Bridgeman, John O.; Strawn, Roger C.; Caradonna, Francis X.; Chen, Ching S.

1989-01-01

294

A computer method to provide optimal laser weld schedules  

SciTech Connect

A method has been formulated and tested to provide laser weld schedules using a mathematical model and parameter optimization. This effort, on behalf of the Smartweld manufacturing initiative, seeks to provide: (1) laser power, (2) part travel speed, and (3) lens focal length to optimize weld process efficiency while constraining weld dimensions. Experimental data for three metals were fitted to provide the mathematical model. Embedded material constants in the computational model allowed the extension to seven metals. A genetic algorithm was used to perform the optimization; this type of algorithm was necessary because lens focal length is a discrete variable. All coding was done in MATLAB, and a graphical user interface was provided. Contour and surface plots, available through the interface, give the analyst insight into the optimum welds reachable within the problem-specified bounds on power, speed, and available focal lengths.
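The discrete focal-length variable is what makes a genetic algorithm attractive here. A minimal sketch of that idea follows; the objective function, parameter bounds, and focal-length catalogue are all invented for illustration and are not the Smartweld model:

```python
# Hypothetical GA sketch: two continuous genes (power, speed) and one
# discrete gene (focal length from a catalogue), which gradient-based
# optimizers handle poorly. All numbers below are made up.
import random

FOCALS = [100, 150, 200]  # hypothetical catalogue of lens focal lengths, mm

def weld_quality(power, speed, focal):
    """Invented stand-in for weld-process efficiency (higher is better)."""
    return (-(power - 1200.0) ** 2 / 1e4
            - (speed - 20.0) ** 2
            - (focal - 150.0) ** 2 / 100.0)

def evolve(pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    # Genome: (laser power W, travel speed mm/s, focal length mm).
    pop = [(rng.uniform(500, 2000), rng.uniform(5, 50), rng.choice(FOCALS))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: weld_quality(*g), reverse=True)
        parents = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append((
                0.5 * (a[0] + b[0]) + rng.gauss(0, 20),  # blend + mutate
                0.5 * (a[1] + b[1]) + rng.gauss(0, 1),
                rng.choice([a[2], b[2]]),      # discrete gene: inherit as-is
            ))
        pop = parents + children
    return max(pop, key=lambda g: weld_quality(*g))

best = evolve()
print(best)
```

The discrete gene is simply inherited from one parent rather than blended, which is the standard way mixed continuous/discrete design variables are handled in a GA.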

Eisler, G.R.; Fuerschbach, P.W.

1995-02-01

295

Computational method for discovery of estrogen responsive genes  

PubMed Central

Estrogen has a profound impact on human physiology and affects numerous genes. The classical estrogen response is mediated by its receptors (ERs), which bind to the estrogen response elements (EREs) in a target gene's promoter region. Because the necessary experiments are tedious and expensive, only a limited number of human genes are functionally well characterized, and it is still unclear how many, and which, human genes respond to estrogen treatment. We propose a simple, economical, yet effective computational method to predict a subclass of estrogen responsive genes. Our method relies on the similarity of ERE frames across different promoters in the human genome. Matching ERE frames of a test set of 60 known estrogen responsive genes against the collection of over 18,000 human promoters, we obtained 604 candidate genes. Evaluating our result against published microarray data and the literature, we found that more than half (53.6%, 324/604) of the predicted candidate genes are responsive to estrogen. We believe this method can significantly reduce the number of potential estrogen target genes to be tested and provide functional clues for annotating genes that lack functional information.
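The motif-matching idea can be illustrated with a toy promoter scan. The consensus ERE half-sites (GGTCAnnnTGACC) and the mismatch tolerance below are illustrative assumptions, not the authors' actual ERE-frame matching criteria:

```python
# Toy ERE scan: slide a window over a promoter sequence and report
# positions within a fixed mismatch budget of the consensus.
ERE_CONSENSUS = "GGTCANNNTGACC"  # 'N' matches any base (assumed consensus)

def mismatches(window, consensus):
    """Count mismatches against the consensus; 'N' is a wildcard."""
    return sum(1 for w, c in zip(window, consensus) if c != "N" and w != c)

def find_ere_sites(promoter, max_mismatch=2):
    """Start positions of windows within max_mismatch of the consensus."""
    k = len(ERE_CONSENSUS)
    return [i for i in range(len(promoter) - k + 1)
            if mismatches(promoter[i:i + k], ERE_CONSENSUS) <= max_mismatch]

# A perfect ERE embedded at position 4 of a toy promoter:
print(find_ere_sites("ATATGGTCAAAATGACCGGG"))  # → [4]
```

Scanning 18,000 real promoters would simply loop this over each sequence; the paper's actual method additionally compares the surrounding ERE frames across promoters.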

Tang, Suisheng; Tan, Sin Lam; Ramadoss, Suresh Kumar; Kumar, Arun Prashanth; Tang, Man-Hung Eric; Bajic, Vladimir B.

2004-01-01

296

Computing Transport coefficients from the Microscopic Response Method  

NASA Astrophysics Data System (ADS)

If an external perturbation to a system may be expressed as additional terms in the Hamiltonian, the microscopic response is determined by the wave function of the system. To obtain the macroscopic response, an ensemble average can be carried out at the final stage. With the help of a systematic diagrammatic expansion, one is able to consistently compute the corresponding transport coefficient. If the spatial fluctuation of the carrier distribution is small, the microscopic response method reduces to the usual Kubo-Greenwood formula (KGF). We illustrate with the conductivity and Hall mobility of amorphous semiconductors. Because the direction of the Lorentz force is determined by the line connecting the initial and final localized states, the sign of the Hall mobility in a-Si:H can be anomalous. The method is being implemented in an ab initio code, and it is applicable at any temperature; it thus significantly improves upon the usual approach of averaging the KGF over a trajectory of classical molecular dynamics. M.-L. Zhang and D. A. Drabold, Phys. Rev. Lett. 105, 186602 (2010); Eur. Phys. J. B 77, 7-23 (2010); arXiv:1008.1067.
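For reference, the Kubo-Greenwood formula that this method reduces to in the small-fluctuation limit is commonly written (prefactor and spin-degeneracy conventions vary between references) as:

```latex
\sigma(\omega) \;=\; \frac{2\pi e^{2}\hbar^{2}}{3\,m^{2}\,\Omega\,\omega}
\sum_{i,j}\left|\langle \psi_{j}|\hat{\mathbf{p}}|\psi_{i}\rangle\right|^{2}
\left[f(E_{i})-f(E_{j})\right]\,
\delta\!\left(E_{j}-E_{i}-\hbar\omega\right)
```

Here \(\Omega\) is the cell volume, \(f\) the Fermi occupation, and the sum runs over single-particle (e.g., Kohn-Sham) states.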

Zhang, Mingliang; Drabold, David A.

2011-03-01

297

Optimal pulse design in quantum control: a unified computational method.  

PubMed

Many key aspects of control of quantum systems involve manipulating a large quantum ensemble exhibiting variation in the value of parameters characterizing the system dynamics. Developing electromagnetic pulses to produce a desired evolution in the presence of such variation is a fundamental and challenging problem in this research area. We present such robust pulse designs as an optimal control problem of a continuum of bilinear systems with a common control function. We map this control problem of infinite dimension to a problem of polynomial approximation employing tools from geometric control theory. We then adopt this new notion and develop a unified computational method for optimal pulse design using ideas from pseudospectral approximations, by which a continuous-time optimal control problem of pulse design can be discretized to a constrained optimization problem with spectral accuracy. Furthermore, this is a highly flexible and efficient numerical method that requires low order of discretization and yields inherently smooth solutions. We demonstrate this method by designing effective broadband π/2 and π pulses with reduced rf energy and pulse duration, which show significant sensitivity enhancement at the edge of the spectrum over conventional pulses in 1D and 2D NMR spectroscopy experiments. PMID:21245345
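The pseudospectral ingredient can be sketched independently of the pulse-design application: collocate at Chebyshev-Gauss-Lobatto nodes and differentiate with the standard spectral differentiation matrix. This is a generic illustration of "spectral accuracy," not the authors' code:

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto nodes and differentiation matrix
    (the standard construction used in pseudospectral methods)."""
    if n == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(n + 1) / n)        # collocation nodes
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    dx = x[:, None] - x[None, :]
    d = np.outer(c, 1.0 / c) / (dx + np.eye(n + 1)) # off-diagonal entries
    d -= np.diag(d.sum(axis=1))                     # negative row sums on diagonal
    return d, x

# Spectral accuracy: the derivative of a low-degree polynomial is exact,
# so differential constraints can be imposed pointwise at the nodes.
d, x = cheb(5)
err = np.max(np.abs(d @ x ** 3 - 3.0 * x ** 2))
print(err)  # at machine-precision level
```

In the pulse-design setting, the state and control are expanded at such nodes and the Schrödinger (or Bloch) dynamics become algebraic constraints of the discretized optimization problem.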

Li, Jr-Shin; Ruths, Justin; Yu, Tsyr-Yan; Arthanari, Haribabu; Wagner, Gerhard

2011-02-01

298

Smart algorithms and adaptive methods in computational fluid dynamics  

NASA Astrophysics Data System (ADS)

A review is presented of the use of smart algorithms employing adaptive methods to process large amounts of data in computational fluid dynamics (CFD). Smart algorithms use a rationally based set of criteria for automatic decision making in an attempt to produce optimal simulations of complex fluid dynamics problems. The information needed to make these decisions is not known beforehand and evolves in structure and form during the numerical solution of flow problems. Once the code makes a decision based on the available data, the structure of the data may change, and criteria may be reapplied to direct the analysis toward an acceptable end. Intelligent decisions are made by processing vast amounts of data that evolve unpredictably during the calculation. The basic components of adaptive methods and their application to complex problems of fluid dynamics are reviewed: (1) data structures, i.e., what approaches are available for modifying the data structures of an approximation so as to reduce errors; (2) error estimation, i.e., what techniques exist for estimating error evolution in a CFD calculation; and (3) solvers, i.e., what algorithms are available that can function on changing meshes. Numerical examples demonstrating the viability of these approaches are presented.
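In one dimension the components listed above (an error estimate, a refinement rule, and a solver that tolerates the changing mesh) shrink to a few lines. The sketch below uses a midpoint-deviation indicator and a toy function with a steep layer; none of it comes from the review itself:

```python
import math

def estimate_error(f, a, b):
    """Crude indicator: midpoint deviation from the linear interpolant."""
    return abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))

def adapt(f, a, b, tol=1e-3, max_passes=20):
    """Split every cell whose indicator exceeds tol, until all pass."""
    cells = [(a, b)]
    for _ in range(max_passes):
        new_cells, done = [], True
        for lo, hi in cells:
            if estimate_error(f, lo, hi) > tol:
                mid = 0.5 * (lo + hi)
                new_cells += [(lo, mid), (mid, hi)]
                done = False
            else:
                new_cells.append((lo, hi))
        cells = new_cells
        if done:
            break
    return cells

# A steep layer near x = 0.3 attracts most of the refinement.
f = lambda x: math.tanh(20.0 * (x - 0.3))
cells = adapt(f, 0.0, 1.0)
widths = sorted(hi - lo for lo, hi in cells)
print(len(cells), widths[0], widths[-1])  # many small cells near the layer
```

The decision criterion is reapplied after every pass over the evolving mesh, which is exactly the feedback loop the review describes, just in miniature.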

Tinsley Oden, J.

1989-05-01

299

Optimal pulse design in quantum control: A unified computational method  

PubMed Central

Many key aspects of control of quantum systems involve manipulating a large quantum ensemble exhibiting variation in the value of parameters characterizing the system dynamics. Developing electromagnetic pulses to produce a desired evolution in the presence of such variation is a fundamental and challenging problem in this research area. We present such robust pulse designs as an optimal control problem of a continuum of bilinear systems with a common control function. We map this control problem of infinite dimension to a problem of polynomial approximation employing tools from geometric control theory. We then adopt this new notion and develop a unified computational method for optimal pulse design using ideas from pseudospectral approximations, by which a continuous-time optimal control problem of pulse design can be discretized to a constrained optimization problem with spectral accuracy. Furthermore, this is a highly flexible and efficient numerical method that requires low order of discretization and yields inherently smooth solutions. We demonstrate this method by designing effective broadband π/2 and π pulses with reduced rf energy and pulse duration, which show significant sensitivity enhancement at the edge of the spectrum over conventional pulses in 1D and 2D NMR spectroscopy experiments.

Li, Jr-Shin; Ruths, Justin; Yu, Tsyr-Yan; Arthanari, Haribabu; Wagner, Gerhard

2011-01-01

300

Matching wind turbine rotors and loads: Computational methods for designers  

NASA Astrophysics Data System (ADS)

A comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications is reported. A method was developed to convert the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) it is decided how turbine power is to be governed to ensure the safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics are used to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach. The method leads to energy predictions and to insight into the modeled processes. A computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies of induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
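Steps (1) and (4) of the matching procedure can be sketched numerically. The rotor size, power coefficient, rated power, and Rayleigh mean wind speed below are invented example numbers, not values from the report:

```python
# Toy matching sketch: power from a power coefficient (step 1), capped at
# a rated value (step 2's governing, crudely), then annual energy from a
# Rayleigh wind-speed distribution (step 4). All parameters are made up.
import math

RHO = 1.225  # air density, kg/m^3

def turbine_power(v, radius=5.0, cp=0.35, rated=30e3):
    """Aerodynamic power (W) at windspeed v, capped at rated power."""
    area = math.pi * radius ** 2
    return min(cp * 0.5 * RHO * area * v ** 3, rated)

def annual_energy(mean_speed=6.0, dv=0.1):
    """Energy per year (kWh) by integrating power over a Rayleigh pdf."""
    hours = 8760.0
    total, v = 0.0, dv
    while v < 30.0:
        pdf = (math.pi * v / (2.0 * mean_speed ** 2)) * \
              math.exp(-math.pi * v ** 2 / (4.0 * mean_speed ** 2))
        total += turbine_power(v) * pdf * dv
        v += dv
    return total * hours / 1000.0

print(round(annual_energy()))
```

The Rayleigh distribution parameterized by mean speed is the usual "wind statistics" shortcut when only an average windspeed is known for a site.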

Seale, J. B.

1983-04-01

301

Computational Acoustic Methods for the Design of Woodwind Instruments  

NASA Astrophysics Data System (ADS)

This thesis presents a number of methods for the computational analysis of woodwind instruments. The Transmission-Matrix Method (TMM) for the calculation of the input impedance of an instrument is described. An approach based on the Finite Element Method (FEM) is applied to the determination of the transmission-matrix parameters of woodwind instrument toneholes, from which new formulas are developed that extend the range of validity of current theories. The effect of a hanging keypad is investigated and discrepancies with current theories are found for short toneholes. This approach was applied as well to toneholes on a conical bore, and we conclude that the tonehole transmission matrix parameters developed on a cylindrical bore are equally valid for use on a conical bore. A boundary condition for the approximation of the boundary layer losses for use with the FEM was developed, and it enables the simulation of complete woodwind instruments. The comparison of the simulations of instruments with many open or closed toneholes with calculations using the TMM reveals discrepancies that are most likely attributable to internal or external tonehole interactions. This is not taken into account in the TMM and poses a limit to its accuracy. The maximal error is found to be smaller than 10 cents. The effect of the curvature of the main bore is investigated using the FEM. The radiation impedance of a wind instrument bell is calculated using the FEM and compared to TMM calculations; we conclude that the TMM is not appropriate for the simulation of flaring bells. Finally, a method is presented for the calculation of the tonehole positions and dimensions under various constraints using an optimization algorithm, which is based on the estimation of the playing frequencies using the Transmission-Matrix Method. A number of simple woodwind instruments are designed using this algorithm and prototypes are evaluated.
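The TMM itself is compact enough to sketch for the lossless cylindrical case; the tonehole, loss, and radiation models, which are the hard parts the thesis addresses, are omitted, and the pipe dimensions are arbitrary example values:

```python
# Minimal lossless TMM: cascade 2x2 transmission matrices of cylindrical
# bore segments and read off the input impedance.
import cmath
import math

C_AIR = 343.0  # speed of sound, m/s
RHO = 1.2      # air density, kg/m^3

def cylinder_matrix(length, radius, f):
    """Lossless transmission matrix of one cylindrical bore segment."""
    k = 2.0 * math.pi * f / C_AIR
    zc = RHO * C_AIR / (math.pi * radius ** 2)  # characteristic impedance
    kl = k * length
    return [[cmath.cos(kl), 1j * zc * cmath.sin(kl)],
            [1j * cmath.sin(kl) / zc, cmath.cos(kl)]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def input_impedance(segments, f, z_load=0.0):
    """Cascade (length, radius) segments; z_load = 0 is an ideal open end."""
    m = [[1.0, 0.0], [0.0, 1.0]]
    for length, radius in segments:
        m = matmul(m, cylinder_matrix(length, radius, f))
    (a, b), (c, d) = m
    return (a * z_load + b) / (c * z_load + d)

# First impedance peak of a 0.5 m closed-open pipe should sit near
# c / 4L ≈ 171.5 Hz.
freqs = [100.0 + 0.5 * i for i in range(300)]
peak = max(freqs, key=lambda f: abs(input_impedance([(0.5, 0.0075)], f)))
print(peak)
```

Because transmission matrices compose by multiplication, splitting the bore into two 0.25 m segments gives the same input impedance as one 0.5 m segment, which is a handy consistency check for any TMM implementation.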

Lefebvre, Antoine

302

Systems, Methods, and Computer Program Products for Transmitting Neural Signal Information.  

National Technical Information Service (NTIS)

Systems, Methods, and Computer Program Products for Transmitting Neural Signal Information. Systems, methods, and computer program products are provided for neural signal transmission. A system according to one embodiment can include a signal receiver oper...

I. Obeid P. D. Wolf

2004-01-01

303

Agreement between two methods of computing the fractal dimensions of complex figures.  

PubMed

Agreement between two methods of computing the fractal dimensions of complex figures was examined. 16 complex figures were scanned and the fractal dimensions were computed under two conditions. There was no significant difference between the two methods. PMID:7675564
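The abstract does not say which two computation methods were compared; box counting is the most common one, and a toy version fits in a few lines (the scales and the log-log regression below are illustrative choices):

```python
# Box-counting sketch: cover the point set with grids of increasing
# resolution and fit the slope of log N(s) versus log s.
import math

def box_count_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Least-squares slope of log N(s) vs log s, points in the unit square."""
    logs, logn = [], []
    for s in scales:
        boxes = {(int(x * s), int(y * s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    n = len(scales)
    mx, my = sum(logs) / n, sum(logn) / n
    num = sum((a - mx) * (b - my) for a, b in zip(logs, logn))
    den = sum((a - mx) ** 2 for a in logs)
    return num / den

# A densely sampled straight line should come out at dimension 1.
line = [(i / 10000.0, i / 10000.0) for i in range(10000)]
print(round(box_count_dimension(line), 2))  # → 1.0
```

A scanned figure would first be thresholded to a set of black-pixel coordinates, which then plays the role of `points` here.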

House, G; Zelhart, P F

1995-04-01

304

Optimal Joint Multiple Resource Allocation Method for Cloud Computing Environments  

Microsoft Academic Search

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources. To provide cloud computing services economically, it is important to optimize resource allocation under the assumption that the required resource can be taken from a shared resource pool. In addition, to be able to provide processing ability and storage capacity, it

Shin-ichi Kuribayashi

2011-01-01

305

Analytic and simulation methods in computer network design  

Microsoft Academic Search

The Seventies are here and so are computer networks! The time sharing industry dominated the Sixties and it appears that computer networks will play a similar role in the Seventies. The need has now arisen for many of these time-shared systems to share each others' resources by coupling them together over a communication network thereby creating a computer network. The

Leonard Kleinrock

1970-01-01

306

Development of computational methods for heavy lift launch vehicles  

NASA Technical Reports Server (NTRS)

The research effort has been focused on the development of an advanced flow solver for complex viscous turbulent flows with shock waves. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. A new computer program named CENS3D has been developed for viscous turbulent flows with discontinuities. Details of the code are described in Appendix A and Appendix B. With the developments of the numerical algorithm and dissipation model, the simulation of three-dimensional viscous compressible flows has become more efficient and accurate. The results of the research are expected to yield a direct impact on the design process of future liquid fueled launch systems.
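The symmetric Gauss-Seidel relaxation named above is, stripped of the flow-solver context, a forward sweep followed by a backward sweep. A generic sketch on a toy diagonally dominant system (not the CENS3D code or its factored flow equations):

```python
# Symmetric Gauss-Seidel: alternate forward and backward sweeps so the
# influence of updates propagates in both directions through the unknowns.
def sym_gauss_seidel(a, b, x, sweeps=50):
    n = len(b)
    for _ in range(sweeps):
        for order in (range(n), range(n - 1, -1, -1)):  # forward, backward
            for i in order:
                s = sum(a[i][j] * x[j] for j in range(n) if j != i)
                x[i] = (b[i] - s) / a[i][i]
    return x

# Toy diagonally dominant (tridiagonal) system.
a = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = sym_gauss_seidel(a, b, [0.0, 0.0, 0.0])
print(x)
```

In the flow solver the same idea is applied to the lower-upper factored implicit operator, with sweeps running over diagonal planes of the grid rather than over scalar unknowns.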

Yoon, Seokkwan; Ryan, James S.

1993-01-01

307

Assessment of regression-based methods to adjust for publication bias through a comprehensive simulation study  

PubMed Central

Background In meta-analysis, the presence of funnel plot asymmetry is attributed to publication or other small-study effects, which cause larger effects to be observed in the smaller studies. This potentially means that inappropriate conclusions are drawn from a meta-analysis. If meta-analysis is to be used to inform decision-making, a reliable way to adjust pooled estimates for potential funnel plot asymmetry is required. Methods A comprehensive simulation study is presented to assess the performance of different adjustment methods, including the novel application of several regression-based methods (which are commonly applied to detect publication bias rather than adjust for it) and the popular Trim & Fill algorithm. Meta-analyses with binary outcomes, analysed on the log odds ratio scale, were simulated by considering scenarios with and without (i) publication bias and (ii) heterogeneity. Publication bias was induced through two underlying mechanisms, assuming the probability of publication depends on (i) the study effect size or (ii) the p-value. Results The performance of all methods tended to worsen as unexplained heterogeneity increased and the number of studies in the meta-analysis decreased. Applying the methods conditional on an initial test for the presence of funnel plot asymmetry generally provided poorer performance than the unconditional use of the adjustment method. Several of the regression-based methods consistently outperformed the Trim & Fill estimators. Conclusion Regression-based adjustments for publication bias and other small study effects are easy to conduct and outperformed more established methods over a wide range of simulation scenarios.
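One member of the regression-based family (an Egger/PET-type adjustment) can be sketched in a few lines: regress effect sizes on their standard errors with inverse-variance weights, and take the intercept as the estimate for an ideal infinitely large study (se = 0). The data below are toy numbers, not the paper's simulations:

```python
# Egger-type regression adjustment sketch (one of several regression-based
# methods of the kind the paper evaluates).
def egger_adjusted_effect(effects, ses):
    """Intercept of the inverse-variance weighted regression of y on se."""
    w = [1.0 / s ** 2 for s in ses]
    sw = sum(w)
    swx = sum(wi * s for wi, s in zip(w, ses))
    swy = sum(wi * y for wi, y in zip(w, effects))
    swxx = sum(wi * s * s for wi, s in zip(w, ses))
    swxy = sum(wi * s * y for wi, s, y in zip(w, ses, effects))
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    return (swy - slope * swx) / sw  # adjusted pooled log odds ratio

# Toy data: small studies (large se) report inflated log odds ratios.
effects = [0.10, 0.15, 0.30, 0.55, 0.80]
ses = [0.05, 0.10, 0.25, 0.45, 0.70]
print(egger_adjusted_effect(effects, ses))
```

A positive slope is the small-study effect itself; the intercept discounts it, which is why the adjusted estimate sits below the naive inverse-variance pooled mean when the funnel plot is asymmetric.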

2009-01-01

308

Non-unitary probabilistic quantum computing circuit and method  

NASA Technical Reports Server (NTRS)

A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
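The ancilla construction can be illustrated with a standard unitary dilation, a generic textbook device for turning a non-unitary contraction into a unitary on a larger space; this is an illustration of the principle, not the patent's specific circuit:

```python
# Dilate a non-unitary contraction m (||m|| <= 1) into a unitary acting on
# system ⊗ ancilla; measuring the ancilla as |0> heralds success.
import numpy as np

def herm_sqrt(h):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, v = np.linalg.eigh(h)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def dilate(m):
    """Unitary on system + one ancilla qubit whose |0> block applies m."""
    n = m.shape[0]
    dm = herm_sqrt(np.eye(n) - m.conj().T @ m)
    dmh = herm_sqrt(np.eye(n) - m @ m.conj().T)
    return np.block([[m, dmh], [dm, -m.conj().T]])

# Example: a non-unitary operator that damps |1> relative to |0>.
m = np.diag([1.0, 0.5])
u = dilate(m)

# Act on psi ⊗ |0>_ancilla; the first block of the output is the
# success branch carrying m @ psi (unnormalized) on the system register.
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
out = u @ np.concatenate([psi, np.zeros(2)])
success_branch, failure_branch = out[:2], out[2:]
print(np.linalg.norm(success_branch) ** 2)  # success probability
```

On a failure outcome the patented method reuses the post-measurement state and repeats; the sketch above shows only a single round.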

Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

2009-01-01

309

Interactive computer methods for generating mineral-resource maps  

USGS Publications Warehouse

Inasmuch as maps are a basic tool of geologists, the U.S. Geological Survey's CRIB (Computerized Resources Information Bank) was constructed so that the data it contains can be used to generate mineral-resource maps. However, by the standard methods used (batch processing and off-line plotting), the production of a finished map commonly takes 2-3 weeks. To produce computer-generated maps more rapidly, cheaply, and easily, and also to provide an effective demonstration tool, we have devised two related methods for plotting maps as alternatives to conventional batch methods. These methods are: 1. Quick-Plot, an interactive program whose output appears on a CRT (cathode-ray-tube) device, and 2. The Interactive CAM (Cartographic Automatic Mapping system), which combines batch and interactive runs. The output of the Interactive CAM system is final compilation (not camera-ready) paper copy. Both methods are designed to use data from the CRIB file in conjunction with a map-plotting program. Quick-Plot retrieves a user-selected subset of data from the CRIB file, immediately produces an image of the desired area on a CRT device, and plots data points according to a limited set of user-selected symbols. This method is useful for immediate evaluation of the map and for demonstrating how trial maps can be made quickly. The Interactive CAM system links the output of an interactive CRIB retrieval to a modified version of the CAM program, which runs in the batch mode and stores plotting instructions on a disk, rather than on a tape. The disk can be accessed by a CRT, and, thus, the user can view and evaluate the map output on a CRT immediately after a batch run, without waiting 1-3 days for an off-line plot. The user can, therefore, do most of the layout and design work in a relatively short time by use of the CRT, before generating a plot tape and having the map plotted on an off-line plotter.

Calkins, James Alfred; Crosby, A. S.; Huffman, T. E.; Clark, A. L.; Mason, G. T.; Bascle, R. J.

1980-01-01

310

A stoichiometric calibration method for dual energy computed tomography.  

PubMed

The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic reconstruction algorithm (filtered back projection). With a more advanced method (sinogram affirmed iterative technique), the values become 1.0 mm, 0.5 mm and 0.4 mm for protons, helium and carbon ions, respectively. These results allow one to conclude that the present adaptation of the stoichiometric calibration yields a highly accurate method for characterizing tissue with DECT for ion beam therapy and potentially for photon beam therapy. PMID:24694786

Bourque, Alexandra E; Carrier, Jean-François; Bouchard, Hugo

2014-04-21

311

Using a Portable Wireless Computer Lab to Provide Outreach Training to Public Health Workers  

Microsoft Academic Search

Librarians at Louisiana State University Health Sciences Center in Shreveport developed an outreach program for public health workers in north Louisiana. This program provided hands-on training on how to find health information resources on the Web. Several challenges arose during this project. Public health units in the region lacked suitable teaching labs and faced limited travel budgets and tight staffing

Michael M. Watson; Donna F. Timm; Dawn M. Parker; Mararia Adams; Angela D. Anderson; Dennis A. Pernotto; Marianne Comegys

2006-01-01

312

Advocacy and evidence for sustainable public computer access : Experiences from the Global Libraries Initiative  

Microsoft Academic Search

Purpose – This paper aims to draw together the evidence-based advocacy experience of five national programs focused on developing public access information and communications technologies (ICT) via public libraries as grantees of the Bill & Melinda Gates Foundation's Global Libraries Initiative. Design/methodology/approach – The authors describe a common approach to strategic advocacy and to impact planning and assessment. They then

Janet Sawaya; Tshepo Maswabi; Resego Taolo; Pablo Andrade; Máximo Moreno Grez; Pilar Pacheco; Kristine Paberza; Sandra Vigante; Agniete Kurutyte; Ugne Rutkauskiene; Jolanta Jeżowska; Maciej Kochanowicz

2011-01-01

313

Perception of Wearable Computers for Everyday Life by the General Public: Impact of Culture and Gender on Technology  

Microsoft Academic Search

This paper examines the perception of wearable computers for everyday life by the general public, in order to foster the adoption of this technology. We present a social study that focuses on sensors, actuators, autonomy, uses, and privacy. Carried out in 2005, it considers gender and cultural disparities in two dissimilar groups: French (115 males, 59 females) and Japanese (61

Sébastien Duval; Hiromichi Hashizume

2005-01-01

314

Redundant CORDIC Methods with a Constant Scale Factor for Sine and Cosine Computation  

Microsoft Academic Search

Proposes two redundant CORDIC (coordinate rotation digital computer) methods with a constant scale factor for sine and cosine computation, called the double rotation method and the correcting rotation method. In both methods, the CORDIC is accelerated by the use of a redundant binary number representation, as in the previously proposed redundant CORDIC. In the proposed methods, since the number of
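For contrast with the redundant variants, the conventional CORDIC iteration (whose accumulated scale factor is divided out at the end) looks as follows; the paper's double rotation and correcting rotation methods modify the digit selection so that this scale factor stays constant while using a redundant binary representation:

```python
# Conventional CORDIC baseline: rotate by angles arctan(2^-i), choosing
# each direction from the sign of the residual angle, then undo the
# accumulated scale factor k = prod sqrt(1 + 2^-2i).
import math

def cordic_sincos(theta, iters=32):
    x, y, z = 1.0, 0.0, theta
    k = 1.0
    for i in range(iters):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
        k *= math.sqrt(1.0 + 4.0 ** -i)
    return x / k, y / k  # approximates (cos theta, sin theta)

c, s = cordic_sincos(0.7)
print(c, s)
```

In hardware the per-step multiplications by 2^-i are shifts, which is why making the scale factor constant (so the final correction is fixed) matters for the redundant, carry-free implementations the paper proposes.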

Naofumi Takagi; Tohru Asada; Shuzo Yajima

1991-01-01

315

Computational Methods for Fundamental Studies Of Plasma Processes  

NASA Astrophysics Data System (ADS)

We present a combination of a wide range of computational methods that permits us to perform in-depth numerical studies of processes taking place in silicon/hydrogen plasma reactors during the fabrication of solar cells by means of Plasma Enhanced Chemical Vapor Deposition (PECVD). Notably, our investigations are motivated by the question of which plasma conditions cause hydrogenated silicon SinHm (n <= 20) clusters to become amorphous or crystalline. A crystalline structure of those nanoparticles is crucial, for example, for the electrical properties and stability of polymorphous solar cells. First, we use fluid dynamics model calculations to characterize the experimentally employed hydrogen/silane plasmas. The resulting relative densities for all plasma radicals, their temperatures, and their collision interval times are then used as input data for detailed semi-empirical molecular dynamics (MD) simulations. As a result, the growth dynamics of nanometric hydrogenated silicon SinHm clusters is simulated starting out from the collision of individual SiHx (x = 1-3) radicals under the plasma conditions derived above. We demonstrate how the plasma conditions determine the amorphous or crystalline character of the forming nanoparticles. Finally, we show a preliminary absorption spectrum based on ab initio time-dependent density-functional theory (DFT) calculations for a crystalline cluster to demonstrate the possibility of monitoring cluster growth in situ.

Ning, N.; Dolgonos, G.; Morscheidt, W.; Michau, A.; Hassouni, K.; Vach, H.

2007-11-01

316

An efficient computation method for the texture browsing descriptor of MPEG7  

Microsoft Academic Search

In this paper, an efficient method for computing the texture browsing descriptor of MPEG-7 is provided. The texture browsing descriptor is used to characterize a texture's regularity, directionality, and coarseness. To compute the regularity of textures, a Fourier transform is first performed. To obtain more discriminative features for regularity computation, the Fourier spectrum is treated as an image and the Fourier

Kuen-long Lee; Ling-hwei Chen

2005-01-01

317

Constraint methods for neural networks and computer graphics  

Microsoft Academic Search

Computer graphics and neural networks are related in that both model natural phenomena. Physically-based models are used by computer graphics researchers to create realistic, natural animation, and neural models are used by neural network researchers to create new algorithms or new circuits. To exploit these graphical and neural models successfully, engineers want models that fulfill designer-specified goals. These goals

J. Platt

1989-01-01

318

Students' Attitudes towards Control Methods in Computer-Assisted Instruction.  

ERIC Educational Resources Information Center

Describes study designed to investigate dental students' attitudes toward computer-assisted teaching as applied in programs for oral radiology in Denmark. Programs using personal computers and slide projectors with varying degrees of learner and teacher control are described, and differences in attitudes between male and female students are…

Hintze, Hanne; And Others

1988-01-01

319

Computer-Aided Transcription in the Courts. Executive Summary. National Center Publication No. R-0058.  

ERIC Educational Resources Information Center

This report summarizes the findings of the Computer-Aided Transcription (CAT) Project, which conducted a 14-month study of the technology and use of computer systems for translating into English the shorthand notes taken by court reporters on stenotype machines. Included are the state of the art of CAT at the end of 1980 and anticipated future…

National Center for State Courts, Williamsburg, VA.

320

Creating a Public Domain Software Library To Increase Computer Access of Elementary Students with Learning Disabilities.  

ERIC Educational Resources Information Center

Information is provided on a practicum that addressed the lack of access to computer-aided instruction by elementary level students with learning disabilities, due to lack of diverse software, limited funding, and insufficient teacher training. The strategies to improve the amount of access time included: increasing the number of computer programs…

McInturff, Johanna R.

321

Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning  

ERIC Educational Resources Information Center

Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

Siu, Kin Wai Michael; Lam, Mei Seung

2012-01-01

322

Children's Learning Processes Using Unsupervised "Hole in the Wall" Computers in Shared Public Spaces  

ERIC Educational Resources Information Center

Earlier research by Mitra and colleagues on the use of computers by young children revealed that children are able to learn basic computing skills irrespective of their social, cultural, intellectual and religious backgrounds (Mitra & Rana, 2001). The present paper is an attempt to identify the varied aspects of a learning environment that impact…

Dangwal, Ritu; Kapur, Preeti

2008-01-01

323

Computer-Aided Transcription in the Courts. Executive Summary. National Center Publication No. R-0058.  

National Technical Information Service (NTIS)

This report summarizes the findings of the Computer-Aided Transcription (CAT) Project, which conducted a 14-month study of the technology and use of computer systems for translating into English the shorthand notes taken by court reporters on stenotype ma...

1981-01-01

324

Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture  

DOEpatents

Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes: accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words, each comprising a plurality of word senses; for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content; for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense; and for the individual word, associating the selected event class with the textual content to provide disambiguation of the meaning of the individual word in the textual content.
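The claimed flow (sense selection followed by event-class association) can be sketched in miniature. The toy lexicon, glosses, and event-class names below are invented for illustration and are not taken from the patent; sense selection here uses a simple Lesk-style gloss-overlap heuristic.

```python
# Illustrative sketch: pick a word sense from context, then map the chosen
# sense to an event class of a small hand-made "ontology". All entries below
# are invented for the example.
TOY_LEXICON = {
    "bank": {
        "bank.n.01": {"gloss": {"money", "deposit", "loan", "account"},
                      "event_class": "FinancialInstitution"},
        "bank.n.02": {"gloss": {"river", "slope", "water", "shore"},
                      "event_class": "GeographicFeature"},
    },
}

def disambiguate(word, context_words, lexicon=TOY_LEXICON):
    """Pick the word sense whose gloss overlaps the context most
    (a simple Lesk-style heuristic), then return that sense and the
    event class associated with it in the ontology."""
    context = set(context_words)
    best_sense, best_overlap = None, -1
    for sense, info in lexicon[word].items():
        overlap = len(info["gloss"] & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense, lexicon[word][best_sense]["event_class"]

sense, event_class = disambiguate("bank", ["deposit", "my", "money"])
print(sense, event_class)  # bank.n.01 FinancialInstitution
```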

Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA]

2011-10-11

325

Evaluation of some methods for the relative assessment of scientific publications  

Microsoft Academic Search

Some bibliometric methods for the assessment of the publication activity of research units are discussed on the basis of impact factors and citations of papers. An average subfield impact factor of periodicals representing subfields in chemistry is suggested. This indicator characterizes the average citedness of a paper in a given subfield. Comparing the total sum of impact factors of corresponding periodicals

P. Vinkler

1986-01-01

326

Are Private Schools Better Than Public Schools? Appraisal for Ireland by Methods for Observational Studies  

PubMed Central

In observational studies the assignment of units to treatments is not under control. Consequently, the estimation and comparison of treatment effects based on the empirical distribution of the responses can be biased, since the units exposed to the various treatments could differ in important unknown pretreatment characteristics that are related to the response. An important example studied in this article is the question of whether private schools offer better quality of education than public schools. In order to address this question we use data collected in the year 2000 by the OECD for the Programme for International Student Assessment (PISA). Focusing for illustration on the mathematics scores of 15-year-old pupils in Ireland, we find that the raw average score of pupils in private schools is higher than that of pupils in public schools. However, application of a newly proposed method for observational studies suggests that the less able pupils tend to enroll in public schools, so that their lower scores are not necessarily an indication of poor quality of the public schools. Indeed, when comparing the average score in the two types of schools after adjusting for the enrollment effects, we find quite surprisingly that public schools perform better on average. This outcome is supported by the methods of instrumental variables and latent variables, commonly used by econometricians for analyzing and evaluating social programs.
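A toy simulation (not the authors' newly proposed method) illustrates the selection effect the abstract describes: when less able pupils sort into public schools, the raw private-public gap is biased, and a plain regression adjustment for the pretreatment ability score can reverse its sign. All numbers below are simulated.

```python
import numpy as np

# Simulated selection into private schools by ability; the "true" model
# gives public schools a +5 point effect, so the raw gap is misleading.
rng = np.random.default_rng(0)
n = 20000
ability = rng.normal(0.0, 1.0, n)
# abler pupils are more likely to enroll in private schools
private = (ability + rng.normal(0.0, 1.0, n) > 0.8).astype(float)
# true model: public schools add 5 points; ability adds 10 per s.d.
score = 500.0 + 10.0 * ability - 5.0 * private + rng.normal(0.0, 20.0, n)

raw_gap = score[private == 1].mean() - score[private == 0].mean()

# regression adjustment for the observed pretreatment characteristic
X = np.column_stack([np.ones(n), private, ability])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
adj_gap = beta[1]   # private-school effect after adjustment

print(f"raw gap {raw_gap:+.1f}, adjusted gap {adj_gap:+.1f}")
```

Here the raw gap favors private schools while the adjusted effect is negative, mirroring the reversal reported in the article (which uses more sophisticated adjustments).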

Pfeffermann, Danny; Landsman, Victoria

2011-01-01

327

The Repeated Replacement Method: A Pure Lagrangian Meshfree Method for Computational Fluid Dynamics  

PubMed Central

In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling compressible fluids’ tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly “chops out” fluid from active areas and replaces it with new “flattened” fluid cells with the same mass, momentum, and energy. We call the new cells “flattened” because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations.
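The conservation bookkeeping of the "flatten" step can be sketched for a 1-D gamma-law gas: the replacement region gets constant density, velocity, and pressure chosen so that total mass, momentum, and energy match the chopped-out cells. The cell layout and equation of state below are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

# Minimal sketch of an RRM-style "flatten": replace a group of 1-D cells
# by one uniform state carrying the same mass, momentum, and energy.
GAMMA = 1.4  # ideal-gas ratio of specific heats (illustrative)

def flatten(rho, u, p, dx):
    """rho, u, p: arrays over the chopped-out cells; dx: cell widths.
    Returns the constant (rho, u, p) of the replacement region."""
    vol = dx.sum()
    mass = (rho * dx).sum()
    mom = (rho * u * dx).sum()
    # total energy per cell: internal + kinetic
    ener = ((p / (GAMMA - 1.0) + 0.5 * rho * u**2) * dx).sum()
    u_new = mom / mass
    rho_new = mass / vol
    e_int = ener - 0.5 * mass * u_new**2   # internal energy left over
    p_new = (GAMMA - 1.0) * e_int / vol
    return rho_new, u_new, p_new

rho = np.array([1.0, 0.5]); u = np.array([0.0, 1.0])
p = np.array([1.0, 0.4]);   dx = np.array([0.5, 0.5])
print(flatten(rho, u, p, dx))
```

Note that the flattened state discards gradients, so some kinetic energy of internal motion is converted to internal energy, while the three conserved totals are preserved exactly.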

Walker, Wade A.

2012-01-01

328

Astronomical refraction: Computational methods for all zenith angles  

NASA Technical Reports Server (NTRS)

It is shown that the problem of computing astronomical refraction for any value of the zenith angle may be reduced to a simple, nonsingular, numerical quadrature when the proper choice is made for the independent variable of integration.

Auer, L. H.; Standish, E. M.

2000-01-01

329

Small Scale Distance Education; "The Personal (Computer) Touch"; Tutorial Methods for TMA's Using a Computer.  

ERIC Educational Resources Information Center

The authors present reports of current research on distance education at the FernUniversitat in West Germany. Fritsch discusses adapting distance education techniques for small classes. Kuffner describes procedures for providing feedback to students using personalized computer-generated letters. Klute discusses using a computer with tutorial…

Fritsch, Helmut; And Others

1989-01-01

330

26 CFR 1.9001-1 - Change from retirement to straight-line method of computing depreciation.  

Code of Federal Regulations, 2010 CFR

... Change from retirement to straight-line method of computing depreciation ... from the retirement to the straight-line method of computing the ...

2009-04-01

331

26 CFR 1.9001-1 - Change from retirement to straight-line method of computing depreciation.  

Code of Federal Regulations, 2010 CFR

... Change from retirement to straight-line method of computing depreciation ... from the retirement to the straight-line method of computing the ...

2010-04-01

332

Leading Computational Methods on Scalar and Vector HEC Platforms  

Microsoft Academic Search

The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than

Leonid Oliker; Jonathan Carter; Michael Wehner; Andrew Canning; Stéphane Ethier; Arthur Mirin; David Parks; Patrick H. Worley; Shigemune Kitawaki; Yoshinori Tsuda

2005-01-01

333

Computational Fluid Dynamics. [numerical methods and algorithm development  

NASA Technical Reports Server (NTRS)

This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

1992-01-01

334

Computational Fluid Dynamics. [Numerical methods and algorithm development  

SciTech Connect

This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

Not Available

1992-02-01

335

Method and system for providing personalized online services and advertisements in public spaces  

US Patent & Trademark Office Database

A method and system for providing personalized and integrated online services for communications and commercial transactions in both private and public venues. The invention provides personalized information that is conveniently accessible through a network of public access stations (or terminals) enabled by a personal system access card (e.g., a smart card). The invention also provides advertisers the opportunity to directly engage active and potential user-consumers with selected advertising or marketing content based on each user's profile and usage history.

2005-01-25

336

The GLEaMviz computational tool, a publicly available software to explore realistic epidemic spreading scenarios at the global scale  

PubMed Central

Background Computational models play an increasingly important role in the assessment and control of public health crises, as demonstrated during the 2009 H1N1 influenza pandemic. Much research has been done in recent years in the development of sophisticated data-driven models for realistic computer-based simulations of infectious disease spreading. However, only a few computational tools are presently available for assessing scenarios, predicting epidemic evolutions, and managing health emergencies that can benefit a broad audience of users including policy makers and health institutions. Results We present "GLEaMviz", a publicly available software system that simulates the spread of emerging human-to-human infectious diseases across the world. The GLEaMviz tool comprises three components: the client application, the proxy middleware, and the simulation engine. The latter two components constitute the GLEaMviz server. The simulation engine leverages the Global Epidemic and Mobility (GLEaM) framework, a stochastic computational scheme that integrates worldwide high-resolution demographic and mobility data to simulate disease spread on the global scale. The GLEaMviz design aims at maximizing flexibility in defining the disease compartmental model and configuring the simulation scenario; it allows the user to set a variety of parameters including: compartment-specific features, transition values, and environmental effects. The output is a dynamic map and a corresponding set of charts that quantitatively describe the geo-temporal evolution of the disease. The software is designed as a client-server system. The multi-platform client, which can be installed on the user's local machine, is used to set up simulations that will be executed on the server, thus avoiding specific requirements for large computational capabilities on the user side. 
Conclusions The user-friendly graphical interface of the GLEaMviz tool, along with its high level of detail and the realism of its embedded modeling approach, opens up the platform to simulate realistic epidemic scenarios. These features make the GLEaMviz computational tool a convenient teaching/training tool as well as a first step toward the development of a computational tool aimed at facilitating the use and exploitation of computational models for the policy making and scenario analysis of infectious disease outbreaks.
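The compartmental core of such simulators can be illustrated with a minimal deterministic SIR model, the kind of building block that GLEaM-style engines extend with worldwide demographic and mobility data. Compartments are population fractions and all parameter values below are illustrative.

```python
# Minimal deterministic SIR compartmental model (forward Euler).
# beta: transmission rate, gamma: recovery rate; fractions sum to 1.
def sir(beta, gamma, s0, i0, days, dt=0.1):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I and return the final (S, I, R) fractions."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

s, i, r = sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, days=300)
print(f"after 300 days: S={s:.3f}, I={i:.4f}, R={r:.3f}")
```

GLEaM replaces this single well-mixed population with a metapopulation of cities coupled by mobility data, but the within-population transitions follow the same compartmental logic.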

2011-01-01

337

An Overview of Public Access Computer Software Management Tools for Libraries  

ERIC Educational Resources Information Center

An IT decision maker gives an overview of public access PC software that's useful in controlling session length and scheduling, Internet access, print output, security, and the latest headaches: spyware and adware. In this article, the author describes a representative sample of software tools in several important categories such as setup…

Wayne, Richard

2004-01-01

338

A comparison of shielding calculation methods for multi-slice computed tomography (CT) systems.  

PubMed

Currently in the UK, shielding calculations for computed tomography (CT) systems are based on the BIR-IPEM (British Institute of Radiology and Institute of Physics in Engineering in Medicine) working group publication from 2000. Concerns have been raised internationally regarding the accuracy of the dose plots on which this method depends and the effect that new scanner technologies may have. Additionally, more recent shielding methods have been proposed by the NCRP (National Council on Radiation Protection) from the USA. Thermoluminescent detectors (TLDs) were placed in three CT scanner rooms at different positions for several weeks before being processed. Patient workload and dose data (DLP: the dose length product and mAs: the tube current-time product) were collected for this period. Individual dose data were available for more than 95% of patients scanned and the remainder were estimated. The patient workload data were used to calculate expected scatter radiation for each TLD location by both the NCRP and BIR-IPEM methods. The results were then compared to the measured scattered radiation. Calculated scattered air kerma and the minimum required lead shielding were found to be frequently overestimated compared to the measured air kerma, on average almost five times the measured scattered air kerma. PMID:19029585
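A hedged sketch of the DLP-based scatter estimate that such shielding calculations rest on: scattered air kerma at a distance d scales with the total weekly DLP delivered. The scatter coefficient kappa and the design limit used below are illustrative placeholders; real calculations must take their values from the NCRP and BIR-IPEM publications for the scanner and scan type.

```python
# Illustrative DLP-based CT scatter estimate (coefficients are placeholders).
def weekly_scatter_air_kerma(total_dlp_mGy_cm, kappa_per_cm, d_m):
    """Scattered air kerma (mGy/week) at d_m metres from the isocentre,
    scaling the total weekly DLP by a scatter coefficient and 1/d^2."""
    return kappa_per_cm * total_dlp_mGy_cm / d_m**2

def required_transmission(scatter_mGy_week, limit_mGy_week):
    """Barrier transmission factor B needed to meet the design limit."""
    return min(1.0, limit_mGy_week / scatter_mGy_week)

# e.g. 50 scans/week at DLP 1000 mGy.cm, 3 m from the scanner
scatter = weekly_scatter_air_kerma(50 * 1000.0, 9e-5, 3.0)
B = required_transmission(scatter, 0.02)
print(f"unshielded scatter: {scatter:.3f} mGy/week, transmission needed: {B:.3f}")
```

The article's finding, that such calculated values can be several times the measured scatter, is a reminder that these formulas are deliberately conservative.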

Cole, J A; Platten, D J

2008-12-01

339

A Method for Computing Stabilization Pressures for Excavations in Incompetent Rock, with Computer User Information.  

National Technical Information Service (NTIS)

The report describes a technique for determining the confining pressures that must be provided for stabilizing underground openings in incompetent rock. The technique is adapted for use with a nonlinear finite element computer code, NONSAP, to obtain stab...

J. D. Dixon; M. A. Mahtab

1976-01-01

340

Progress Towards Computational Method for Circulation Control Airfoils  

NASA Technical Reports Server (NTRS)

The compressible Reynolds-averaged Navier-Stokes equations are solved for circulation control airfoil flows. Numerical solutions are computed with both structured and unstructured grid solvers. Several turbulence models are considered, including the Spalart-Allmaras model with and without curvature corrections, the shear stress transport model of Menter, and the k-enstrophy model. Circulation control flows with jet momentum coefficients of 0.03, 0.10, and 0.226 are considered. Comparisons are made between computed and experimental pressure distributions, velocity profiles, Reynolds stress profiles, and streamline patterns. Including curvature effects yields the closest agreement with the measured data.

Swanson, R. C.; Rumsey, C. L.; Anders, S. G.

2005-01-01

341

Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.  

ERIC Educational Resources Information Center

Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
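The article's idea can be shown in miniature: chemical equilibrium located by minimizing the total Gibbs free energy with the material balance built in. The reaction A <-> 2B and the standard chemical potentials below are invented for this example; the result is cross-checked against the equilibrium constant.

```python
import numpy as np
from scipy.optimize import minimize_scalar

R, T = 8.314, 298.15
MU0 = {"A": 5000.0, "B": 0.0}   # standard chemical potentials, J/mol (invented)

def gibbs(xi):
    """Total G for extent of reaction xi, starting from 1 mol of pure A;
    material balance gives n_A = 1 - xi and n_B = 2*xi (ideal gas, 1 bar)."""
    n = {"A": 1.0 - xi, "B": 2.0 * xi}
    n_tot = sum(n.values())
    return sum(n[s] * (MU0[s] + R * T * np.log(n[s] / n_tot)) for s in n)

res = minimize_scalar(gibbs, bounds=(1e-9, 1.0 - 1e-9), method="bounded")
xi_eq = res.x

# cross-check against the equilibrium constant K = exp(-dG0/RT),
# for which A <-> 2B gives xi = sqrt(K / (4 + K))
K = np.exp(-(2.0 * MU0["B"] - MU0["A"]) / (R * T))
xi_K = np.sqrt(K / (4.0 + K))
print(f"xi from G minimization: {xi_eq:.4f}, from K: {xi_K:.4f}")
```

The pedagogical point of the article survives in the sketch: the equilibrium composition is simply the minimum of G along the material-balance line, with no equilibrium-constant algebra required.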

Heald, Emerson F.

1978-01-01

342

ELEMENT-FREE GALERKIN METHOD IN ELECTROMAGNETIC FIELD COMPUTATION  

Microsoft Academic Search

The Element-free Galerkin Method (EFGM), a widely used method in meshless studies, has developed extensively in the past twenty years. As an attempt to solve electromagnetic field problems by the meshless method, we study the algorithm and programming involved in solving electromagnetic problems using EFGM. Current research on this method is still in its infancy in electromagnetic

Yun Zhai; Maohui Xia; Lechun Liu

2008-01-01

343

A Combined Method to Compute the Proximities of Asteroids  

Microsoft Academic Search

We describe a simple and efficient numerical-analytical method to find all of the proximities and critical points of the distance function in the case of two elliptical orbits with a common focus. Our method is based on the solutions of Simovljevic's (1974) graphical method and on the transcendent equations developed by Lazovic (1993). The method is tested on 2 997
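A plain numerical stand-in for the paper's semi-analytical method: proximities are local minima of the distance function between points on two elliptical orbits sharing a focus. A coarse grid scan over both true anomalies supplies a starting point and a local optimizer refines it. The orbits are kept coplanar and their elements are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def orbit_point(theta, p, e, omega):
    """Point on the conic r = p / (1 + e*cos(theta - omega)), focus at origin."""
    r = p / (1.0 + e * np.cos(theta - omega))
    return np.array([r * np.cos(theta), r * np.sin(theta)])

ORB1 = dict(p=1.0, e=0.2, omega=0.0)   # invented orbital elements
ORB2 = dict(p=1.5, e=0.4, omega=1.2)

def dist(angles):
    """Distance between a point on each orbit, parametrized by true anomalies."""
    u, v = angles
    return np.linalg.norm(orbit_point(u, **ORB1) - orbit_point(v, **ORB2))

# coarse scan over both true anomalies, then local refinement
grid = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
coarse = min(((u, v) for u in grid for v in grid), key=dist)
res = minimize(dist, coarse, method="Nelder-Mead")
print(f"proximity distance: {res.fun:.5f}")
```

The paper's combined numerical-analytical approach instead characterizes all critical points of the distance function; the brute-force scan above only illustrates what is being computed.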

S. Segan; S. Milisavljevic; D. Marceta

2011-01-01

344

Volume rendering methods for computational fluid dynamics visualization  

Microsoft Academic Search

This paper describes three alternative volume rendering approaches to visualizing computational fluid dynamics (CFD) data. One new approach uses realistic volumetric gas rendering techniques to produce photo-realistic images and animations from scalar CFD data. The second uses ray casting that is based on a simpler illumination model and is mainly centered around a versatile new tool for the design of

David S. Ebert; Roni Yagel; James N. Scott; Yair Kurzion

1994-01-01

345

Computer-Graphics and the Literary Construct: A Learning Method.  

ERIC Educational Resources Information Center

Describes an undergraduate student module that was developed at the University of Exeter (United Kingdom) in which students made their own computer graphics to discover and to describe literary structures in texts of their choice. Discusses learning outcomes and refers to the Web site that shows students' course work. (Author/LRW)

Henry, Avril

2002-01-01

346

New Computational Method in the Theory of Spinodal Decomposition.  

National Technical Information Service (NTIS)

We present a new series of calculations in the theory of spinodal decomposition. The computational scheme is based on a simple ansatz for the two-point distribution function which leads to closure of the hierarchy of equations of motion for the high-order...

J. S. Langer; M. Bar-on; H. D. Miller

1974-01-01

347

Computational methods in Nearfield Acoustic Holography (NAH), appendix  

Microsoft Academic Search

The continuous integrals and integral equations which form the theory of Nearfield Acoustic Holography for planar and odd-shaped source boundary surfaces are reviewed, and the approximations and assumptions necessary to reduce these equations to a set of finite and discrete operations suitable for computation are developed. These equations represent the solution of the Helmholtz equation with specified boundary conditions by
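For the planar case, the discrete operation the review leads to can be sketched with FFTs: the 2-D spatial Fourier transform of the pressure on one plane is multiplied by exp(i*kz*dz) to reach a parallel plane, with kz imaginary (decaying evanescent waves) where kx^2 + ky^2 > k^2. Grid parameters and the Gaussian source field below are illustrative.

```python
import numpy as np

def propagate_plane(p0, dx, k, dz):
    """Propagate the complex pressure field p0 (N x N, spacing dx) by dz
    using the angular-spectrum relation P(kx,ky,z+dz) = P(kx,ky,z)*e^{i kz dz}."""
    n = p0.shape[0]
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    kz = np.emath.sqrt(k**2 - KX**2 - KY**2)   # complex for evanescent part
    return np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * dz))

n, dx, k = 64, 0.01, 40.0                 # 64 x 64 grid, 1 cm spacing
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
p_source = np.exp(-(X**2 + Y**2) / 0.01)  # Gaussian source distribution
p_far = propagate_plane(p_source, dx, k, dz=0.05)
print(f"peak |p|: {np.abs(p_source).max():.3f} at source, "
      f"{np.abs(p_far).max():.3f} at 5 cm")
```

Holographic back-propagation runs the same multiplication with negative dz, which is where the regularization issues discussed in the NAH literature arise (the evanescent exponentials then grow).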

W. A. Veronesi

1986-01-01

348

Computational methods for viscoplastic dynamic fracture mechanics analysis  

SciTech Connect

The role of nonlinear rate-dependent effects in the interpretation of crack run-arrest events in ductile materials is being investigated by the Heavy-Section Steel Technology (HSST) program through development and application of viscoplastic-dynamic finite element analysis techniques. This paper describes a portion of these studies wherein various viscoplastic constitutive models and several proposed nonlinear fracture criteria are being installed in general purpose (ADINA) and special purpose (VISCRK) finite element computer programs. The constitutive models implemented in these computer programs include the Bodner-Partom and the Perzyna viscoplastic formulations; the proposed fracture criteria include three parameters that are based on energy principles. The predictive capabilities of the nonlinear techniques are evaluated through applications to a series of HSST wide-plate crack-arrest tests. To assess the impact of including viscoplastic effects in the computational models, values of fracture parameters calculated in elastodynamic and viscoplastic-dynamic analyses are compared for a large wide-plate test. Finally, plans are reviewed for additional computational and experimental studies to assess the utility of viscoplastic analysis techniques in constructing a dynamic inelastic fracture mechanics model for ductile steels. 34 refs., 14 figs.

Bass, B.R.; Pugh, C.E.; Kenney-Walker, J.; Dexter, R.J.; O'Donoghue, P. E.; Schwartz, C. W.

1988-01-01

349

Multithreading, integral equation methods, and the desktop computer  

Microsoft Academic Search

The scaling properties of a new multi-threaded integral solver are presented. This integral equation solver includes both a new full matrix solver and an SVD-based fast solver. Both solvers show about a 20% to 30% performance overhead when moving from a single processor to a dual processor. For higher-end desktop PCs, having multiple processors is now standard. And desktop computers

Andrew W. Mathis; Dalian Zheng; Richard Hall

2007-01-01

350

Quantitative Methods in Computer-Directed Teaching Systems.  

National Technical Information Service (NTIS)

The report formulates in quantitative terms the decision problem associated with the design of a computer-directed teaching system. This formulation is then used to direct a theoretical inquiry into some of the aspects of this problem that are relevant to...

R. D. Smallwood; I. J. Weinstein; J. E. Eckles

1967-01-01

351

Simple computer method provides contours for radiological images  

NASA Technical Reports Server (NTRS)

The computer is provided with information concerning boundaries in the total image. The gradient of each point in the digitized image is calculated with the aid of a threshold technique; then a set of algorithms is invoked, designed to reduce the number of gradient elements and to retain only the major ones for definition of the contour.
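The pipeline described (gradient, threshold, keep only the major gradient elements) can be sketched as follows. The synthetic disk image stands in for a radiological image, and the fraction-of-maximum threshold rule is an assumption of this sketch.

```python
import numpy as np

def contour_points(img, frac=0.5):
    """Boolean mask of pixels whose gradient magnitude is at least
    frac times the maximum gradient magnitude in the image."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag >= frac * mag.max()

# synthetic image: a bright disk on a dark background
n = 128
yy, xx = np.mgrid[:n, :n]
img = ((xx - 64) ** 2 + (yy - 64) ** 2 <= 30 ** 2).astype(float)
edges = contour_points(img)
print(f"{edges.sum()} contour pixels out of {n * n}")
```

The surviving pixels cluster on the disk boundary, which is the "contour" in the sense of the abstract; real radiological images would need the additional gradient-element reduction the authors describe.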

Newell, J. D.; Keller, R. A.; Baily, N. A.

1975-01-01

352

The Direct Lighting Computation in Global Illumination Methods  

Microsoft Academic Search

Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into

Changyaw Allen Wang

1994-01-01

353

New design methods for computer aided architectural design methodology teaching  

Microsoft Academic Search

Architects and architectural students are exploring new ways of designing using Computer Aided Architectural Design software. This exploration is seldom backed up from a design methodological viewpoint. In this paper, we introduce a design methodological framework for reflection on innovative design processes by architects that has been used in an educational setting. The framework leads to highly specific, weak design

Henri H. Achten

2003-01-01

354

Enabling Water Quality Management Decision Support and Public Outreach Using Cloud-Computing Services  

NASA Astrophysics Data System (ADS)

Watershed management is a participatory process that requires collaboration among multiple groups of people. Environmental decision support systems (EDSS) have long been used to support such co-management and co-learning processes in watershed management. However, implementing and maintaining EDSS in-house can be a significant burden to many water agencies because of budget, technical, and policy constraints. Drawing on experiences from several web-GIS environmental management projects in Texas, we showcase how cloud-computing services can help shift the design and hosting of EDSS from traditional client-server-based platforms to simple clients of cloud-computing services.

Sun, A. Y.; Scanlon, B. R.; Uhlman, K.

2013-12-01

355

Retinal vessel measurement: comparison between observer and computer driven methods  

Microsoft Academic Search

A method of semi-automated image analysis for the measurement of retinal vessel diameters is described. This was compared with an observer-driven method for reproducibility and accuracy. The coefficient of variation for the data from the semi-automated method was 1.5–7.5% (depending on the vessel diameter) compared to 6–34% with the observer-driven method. The mean vessel diameters using the observer-driven method tended

Richard S. B. Newsom; Paul M. Sullivan; Sal M. B. Rassam; Roger Jagoe; Eva M. Kohner

1992-01-01

356

A method for computing the leading-edge suction in a higher-order panel method  

NASA Technical Reports Server (NTRS)

Experimental data show that the phenomenon of a separation induced leading edge vortex is influenced by the wing thickness and the shape of the leading edge. Both thickness and leading edge shape (rounded rather than pointed) delay the formation of a vortex. Existing computer programs used to predict the effect of a leading edge vortex do not include a procedure for determining whether or not a vortex actually exists. Studies under NASA Contract NAS1-15678 have shown that the vortex development can be predicted by using the relationship between the leading edge suction coefficient and the parabolic nose drag. The linear theory FLEXSTAB was used to calculate the leading edge suction coefficient. This report describes the development of a method for calculating leading edge suction using the capabilities of the higher order panel methods (exact boundary conditions). For a two dimensional case, numerical methods were developed using the doublet strength and downwash distribution along the chord. A Gaussian quadrature formula that directly incorporates the logarithmic singularity in the downwash distribution, at all panel edges, was found to be the best method.

Ehlers, F. E.; Manro, M. E.

1984-01-01

357

Geographical Information Systems (GIS): Their Use as Decision Support Tools in Public Libraries and the Integration of GIS with Other Computer Technology  

Microsoft Academic Search

Describes the use of Geographical Information Systems (GIS) as decision support tools in public libraries in England. A GIS is a computer software system that represents data in a geographic dimension. GIS as a decision support tool in public libraries is in its infancy; only seven out of 40 libraries contacted in the survey have GIS projects, three of

Andrew M. Hawkins

1994-01-01

358

Public Release of a One Dimensional Version of the Photon Clean Method (PCM1D)  

NASA Astrophysics Data System (ADS)

We announce the public release of a one dimensional version of the Photon Clean Method (PCM1D). This code is in the general class of "inverse Monte Carlo" methods and is specifically designed to interoperate with the public analysis tools available from the Chandra Science Center and the HEASARC. The tool produces models of event-based data on a photon-by-photon basis. The instrument models are based on the standard ARF and RMF FITS files. The resulting models have a high number of degrees of freedom, of order the number of photons detected, providing an alternative analysis to the usual method of fitting models with only a few parameters. The original work on this method is described in ADASS 1996 (Jernigan and Vezie). We thank H. Tananbaum and J. McDowell of the Chandra Science Center, S. Kahn, the RGS/XMM-Newton US team leader, and W. Craig and S. Labov of the I Division of LLNL for their support for the development of the PCM concept. We thank P. Beiersdorfer and the EBIT team for the support to develop the first public version of PCM1D.

Carpenter, Matthew H.; Jernigan, J. G.

2006-09-01

359

An efficient method to compute spurious end point contributions in PO solutions. [Physical Optics  

NASA Technical Reports Server (NTRS)

A method is given to compute the spurious endpoint contributions in the physical optics solution for electromagnetic scattering from conducting bodies. The method is applicable to general three-dimensional structures. The only information required to use the method is the radius of curvature of the body at the shadow boundary. Thus, the method is very efficient for numerical computations. As an illustration, the method is applied to several bodies of revolution to compute the endpoint contributions for backscattering in the case of axial incidence. It is shown that in high-frequency situations, the endpoint contributions obtained using the method are equal to the true endpoint contributions.

Gupta, Inder J.; Burnside, Walter D.; Pistorius, Carl W. I.

1987-01-01

360

Reaching Hard-to-Reach Populations: Interactive Computer Programs as Public Information Campaigns for Adolescents.  

ERIC Educational Resources Information Center

Describes the interactive computer Project BARN (developed by University of Wisconsin Center for Health Systems Research and Analysis) that provides health information to adolescents practicing high-risk behaviors in sensitive areas of human sexuality, drugs, and cigarette smoking. Poses the question that such interaction could be a compromise…

Hawkins, Robert P.; And Others

1987-01-01

361

Computer User's Guide to the Protection of Information Resources. NIST Special Publication 500-171.  

ERIC Educational Resources Information Center

Computers have changed the way information resources are handled. Large amounts of information are stored in one central place and can be accessed from remote locations. Users have a personal responsibility for the security of the system and the data stored in it. This document outlines the user's responsibilities and provides security and control…

Helsing, Cheryl; And Others

362

Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows  

NASA Technical Reports Server (NTRS)

This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

Herrick, Gregory P.; Chen, Jen-Ping

2012-01-01

363

Multi-Level iterative methods in computational plasma physics  

SciTech Connect

Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods and large time steps can challenge the robustness of iterative methods. To meet these challenges they are developing a hybrid approach where multigrid methods are used as preconditioners to Krylov subspace based iterative methods such as conjugate gradients or GMRES. For nonlinear problems they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for application of these multilevel iterative methods to the field solves in implicit moment method PIC, multidimensional nonlinear Fokker-Planck problems, and their initial efforts in particle MHD.
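The preconditioned-Krylov recipe can be shown in miniature. In this sketch an incomplete-LU factorization stands in for the multigrid preconditioner used in the paper, and a 1-D Poisson matrix stands in for the field solve; the point is only that a good preconditioner slashes the number of Krylov iterations.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

class CountingOperator(spla.LinearOperator):
    """Wraps a matrix and counts matrix-vector products."""
    def __init__(self, A):
        super().__init__(dtype=A.dtype, shape=A.shape)
        self.A, self.nmatvec = A, 0
    def _matvec(self, x):
        self.nmatvec += 1
        return self.A @ x

n = 100
# 1-D Poisson matrix (tridiagonal) as a stand-in "field solve"
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                              # near-exact for a tridiagonal
M = spla.LinearOperator((n, n), matvec=ilu.solve)

A_plain, A_pre = CountingOperator(A), CountingOperator(A)
x1, _ = spla.gmres(A_plain, b)
x2, info = spla.gmres(A_pre, b, M=M)

print(f"matvecs without preconditioner: {A_plain.nmatvec}")
print(f"matvecs with ILU preconditioner: {A_pre.nmatvec}")
```

A multigrid V-cycle would replace `ilu.solve` as the `matvec` of the preconditioner operator; the Krylov outer loop is unchanged, which is exactly the modularity the hybrid approach exploits.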

Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.

1999-03-01

364

The “Pegasus” method for computing the root of an equation  

Microsoft Academic Search

A modified Regula Falsi method is described which is appropriate for use when an interval bracketing of the root is known. The algorithm appears to exhibit superior asymptotic convergence properties to other modified linear methods.
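The Pegasus modification of regula falsi can be stated compactly: when the new iterate falls on the same side of the root as the retained endpoint, the retained function value is scaled by fb/(fb + fc), which avoids the endpoint stagnation of classical regula falsi. The following is a minimal sketch of that scheme (variable names are our own, not from the paper).

```python
def pegasus(f, a, b, tol=1e-12, max_iter=100):
    """Find a root of f in the bracket [a, b] with the Pegasus method,
    a modified regula falsi with superlinear asymptotic convergence."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    c = b
    for _ in range(max_iter):
        # Secant step within the current bracket
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol or abs(b - a) < tol:
            return c
        if fb * fc < 0:
            # Root lies between b and c: b becomes the retained endpoint
            a, fa = b, fb
        else:
            # Same side as b: scale the retained value (the Pegasus step)
            fa = fa * fb / (fb + fc)
        b, fb = c, fc
    return c

root = pegasus(lambda x: x * x - 2.0, 1.0, 2.0)   # converges to sqrt(2)
```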

M. Dowell; P. Jarratt

1972-01-01

365

A Parallel and Distributed Method for Computing High Dimensional MOLAP  

Microsoft Academic Search

Data cube has been playing an essential role in fast OLAP (on-line analytical processing) in many multidimensional data warehouses. We often execute range queries on aggregate cubes computed by the pre-aggregate technique in MOLAP. For a cube with d dimensions, it can generate 2^d cuboids. But in a high-dimensional data warehouse (such as the applications of bioinformatics and statistical analysis, etc.), we
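The exponential cuboid count that motivates this work is easy to make concrete: a cube with d dimensions has one cuboid per subset of dimensions, 2^d in all. A minimal illustration (the dimension names are invented for the example):

```python
from itertools import combinations

def all_cuboids(dims):
    """Enumerate every cuboid (group-by combination) of a data cube.
    A cube with d dimensions yields 2**d cuboids, from the apex ()
    up to the full base cuboid -- the blow-up that makes complete
    pre-aggregation impractical in high-dimensional warehouses."""
    return [subset for r in range(len(dims) + 1)
            for subset in combinations(dims, r)]

cuboids = all_cuboids(["product", "region", "time"])   # 2**3 = 8 cuboids
```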

Kongfa Hu; Ling Chen; Qi Gu; Bin Li; Yisheng Dong

2005-01-01

366

Computer vision methods for visual MIMO optical system  

Microsoft Academic Search

Cameras have become commonplace in phones, laptops, music-players and handheld games. Similarly, light emitting displays are prevalent in the form of electronic billboards, televisions, computer monitors, and hand-held devices. The prevalence of cameras and displays in our society creates a novel opportunity to build camera-based optical wireless communication systems based on a concept called visual MIMO. We extend the common

Wenjia Yuan; Kristin Dana; Michael Varga; Ashwin Ashok; Marco Gruteser; Narayan Mandayam

2011-01-01

367

Perspectives toward the stereotype production method for public symbol design: a case study of novice designers.  

PubMed

This study investigated the practices and attitudes of novice designers toward user involvement in public symbol design at the conceptual design stage, i.e. the stereotype production method. Differences between male and female novice designers were examined. Forty-eight novice designers (24 male, 24 female) were asked to design public symbol referents based on suggestions made by a group of users in a previous study and provide feedback with regard to the design process. The novice designers were receptive to the adoption of user suggestions in the conception of the design, but tended to modify the pictorial representations generated by the users to varying extents. It is also significant that the male and female novice designers appeared to emphasize different aspects of user suggestions, and the female novice designers were more positive toward these suggestions than their male counterparts. The findings should aid the optimization of the stereotype production method for user-involved symbol design. PMID:22632980

Ng, Annie W Y; Siu, Kin Wai Michael; Chan, Chetwyn C H

2013-01-01

368

Smoothing and accelerated computations in the element free Galerkin method  

Microsoft Academic Search

Two topics in the formulation and implementation of meshless methods are considered: the smoothing of the approximating functions at concave boundaries and the speedup of the calculation of the approximating functions and their derivatives. These techniques are described in the context of the element free Galerkin method, but they are applicable to other meshless methods. Results are presented for some

Ted Belytschko; Yury Krongauz; Mark Fleming; Daniel Organ; Wing Kam Snm Liu

1996-01-01

369

The maximum entropy method applied to stationary density computation  

Microsoft Academic Search

The maximum entropy method (maxent) is widely used in the context of the moment problem which appears naturally in many branches of physics and engineering; it is used to numerically recover the density with least bias from finitely many known moments. We introduce the basic idea behind this method and apply this method to approximating fixed densities of Markov operators
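The basic maxent computation can be sketched for a density on a grid: the least-biased density matching given power moments has exponential-family form, and its parameters are found by minimizing the convex dual of the entropy problem. This is a generic illustration of the moment-problem setup (grid, moment choice, and solver are our assumptions, not the paper's).

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(x, moments):
    """Least-biased density on grid x matching the given power moments
    (moments[k] = E[x**(k+1)]).  The maxent solution has the form
    p(x) proportional to exp(sum_k lam[k] * x**(k+1)); lam is found by
    minimizing the convex dual  log Z(lam) - lam . mu."""
    powers = np.array([x**(k + 1) for k in range(len(moments))])
    dx = x[1] - x[0]

    def dual(lam):
        logZ = np.log(np.sum(np.exp(lam @ powers)) * dx)
        return logZ - lam @ np.asarray(moments)

    lam = minimize(dual, np.zeros(len(moments)), method="BFGS").x
    p = np.exp(lam @ powers)
    return p / (np.sum(p) * dx)          # normalize on the grid

x = np.linspace(0.0, 1.0, 401)
p = maxent_density(x, [0.3])             # match the first moment only
```

With only the mean constrained, the recovered density is the exponential tilt of the uniform density on [0, 1], as maxent predicts.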

Jiu Ding; Lawrence R. Mead

2007-01-01

370

Application of Moment Iteration Method (MIM) to Electromagnetic Field Computation,  

National Technical Information Service (NTIS)

The well-known method of moments is transformed into a very general iterational form called the moment iteration method (MIM), which can be applied to problems involving large or complicated bodies. It is shown that other iterational methods applied in the liter...

I. V. Lindell; K. I. Nikoskinen

1988-01-01

371

A rational interpolation method to compute frequency response  

NASA Technical Reports Server (NTRS)

A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.

Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

1993-01-01

372

Computational Systems Biology in Cancer: Modeling Methods and Applications  

PubMed Central

In recent years it has become clear that carcinogenesis is a complex process, both at the molecular and cellular levels. Understanding the origins, growth and spread of cancer therefore requires an integrated or system-wide approach. Computational systems biology is an emerging sub-discipline in systems biology that utilizes the wealth of data from genomic, proteomic and metabolomic studies to build computer simulations of intra- and intercellular processes. Several useful descriptive and predictive models of the origin, growth and spread of cancers have been developed in an effort to better understand the disease and potential therapeutic approaches. In this review we describe and assess the practical and theoretical underpinnings of commonly-used modeling approaches, including ordinary and partial differential equations, Petri nets, cellular automata, agent-based models and hybrid systems. A number of computer-based formalisms have been implemented to improve the accessibility of the various approaches to researchers whose primary interest lies outside of model development. We discuss several of these and describe how they have led to novel insights into tumor genesis, growth, apoptosis, vascularization and therapy.
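Of the modeling formalisms surveyed, ordinary differential equations are the simplest to illustrate. A minimal, purely illustrative sketch (not a model from the review) is forward-Euler integration of logistic tumor growth, where a cell population saturates at a carrying capacity:

```python
def logistic_growth(n0, r, K, dt, steps):
    """Forward-Euler integration of the logistic equation
    dN/dt = r * N * (1 - N/K), one of the simplest ODE growth models:
    N grows roughly exponentially while small, then saturates at K."""
    n = n0
    traj = [n]
    for _ in range(steps):
        n += dt * r * n * (1 - n / K)
        traj.append(n)
    return traj

traj = logistic_growth(n0=1.0, r=0.5, K=1000.0, dt=0.1, steps=400)
```

The PDE, cellular-automaton and agent-based approaches discussed in the review add spatial structure and cell-level heterogeneity on top of this kind of population-level dynamics.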

Materi, Wayne; Wishart, David S.

2007-01-01

373

Utilization of computer technology by science teachers in public high schools and the impact of standardized testing  

NASA Astrophysics Data System (ADS)

A significant percentage of high school science teachers are not using computers to teach their students or prepare them for standardized testing. A survey of high school science teachers was conducted to determine how they are having students use computers in the classroom, why science teachers are not using computers in the classroom, which variables were relevant to their not using computers, and what the effects of standardized testing are on the use of technology in the high school science classroom. A self-administered questionnaire was developed to measure these aspects of computer integration and demographic information. A follow-up telephone interview survey of a portion of the original sample was conducted in order to clarify questions, correct misunderstandings, and draw out more holistic descriptions from the subjects. The primary method used to analyze the quantitative data was frequency distributions. Multiple regression analysis was used to investigate the relationships between the barriers and facilitators and the dimensions of instructional use, frequency, and importance of the use of computers. All high school science teachers in a large urban/suburban school district were sent surveys. A response rate of 58% resulted from two mailings of the survey. It was found that contributing factors to science teachers' not using computers included too few up-to-date computers in their classrooms and other educational commitments and duties that leave too little time to prepare lessons that include technology. While a high percentage of science teachers thought their school and district administrations were supportive of technology, they also believed more inservice technology training and follow-up activities to support that training are needed and more software needs to be created.
The majority of the science teachers do not use the computer to help students prepare for standardized tests because they believe they can prepare students more efficiently without a computer. Nearly half of the teachers, however, gave lack of time to prepare instructional materials and lack of a means to project a computer image to the whole class as reasons they do not use computers. A significant percentage thought science standardized testing was having a negative effect on computer use.

Priest, Richard Harding

374

The AEDC three-dimensional, potential flow computer program. Volume 1: Method and computer program  

Microsoft Academic Search

A complete description of a computer analysis of the potential subsonic flow about complex three-dimensional bodies is presented. The linear, partial differential equation for the compressible velocity gradient is solved for cases where the local Mach number everywhere in the flow field is less than one. The compressible flow equation is transformed, using Goethert similarity parameter, into the equivalent incompressible

D. C. Todd

1976-01-01

375

Thermal radiation view factor: Methods, accuracy and computer-aided procedures  

NASA Technical Reports Server (NTRS)

The computer-aided thermal analysis programs which predict whether orbiting equipment will remain within a predetermined acceptable temperature range, prior to stationing in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and the standard methods which form the basis for various digital computer methods and various numerical methods are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.
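The kind of numerical view-factor scheme the report surveys can be illustrated with direct double-area quadrature of the defining integral. This is a generic midpoint-rule sketch (geometry and grid size are our own choices, not from the report) for two directly opposed square plates; for this classical configuration with side w = 1 and spacing d = 1 the tabulated value is approximately 0.1998.

```python
import numpy as np

def view_factor_parallel_squares(w, d, n=30):
    """Midpoint quadrature of the double-area view-factor integral
    F12 = (1/A1) * integral over A1 and A2 of cos(t1)*cos(t2)/(pi*S^2),
    for two directly opposed square plates of side w a distance d apart.
    For parallel plates cos(t1) = cos(t2) = d/S, so the integrand is
    d^2 / (pi * S^4)."""
    xs = (np.arange(n) + 0.5) * w / n              # midpoint abscissae
    x1, y1, x2, y2 = np.meshgrid(xs, xs, xs, xs, indexing="ij")
    S2 = (x1 - x2)**2 + (y1 - y2)**2 + d**2        # squared distance S^2
    dA = (w / n)**2                                # cell area on each plate
    return float(np.sum(d**2 / (np.pi * S2**2)) * dA * dA / w**2)

F = view_factor_parallel_squares(1.0, 1.0)         # tabulated value: about 0.1998
```

This brute-force quadrature is exactly the accuracy-versus-computation-time trade-off the report evaluates: finer grids converge to the analytical value but the cost grows with the fourth power of the grid resolution.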

Kadaba, P. V.

1982-01-01

376

A review of data quality assessment methods for public health information systems.  

PubMed

High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. Relevant studies were identified in major databases and well-known institutional websites. We found that the dimension of data was the most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interview and documentation review. The limitations of the reviewed studies included inattentiveness to data use and data collection process, inconsistency in the definition of attributes of data quality, failure to address data users' concerns and a lack of systematic procedures in data quality assessment. This review study is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research efforts should be given to assess the quality of data use and the quality of data collection process. PMID:24830450

Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping

2014-05-01

377

A Review of Data Quality Assessment Methods for Public Health Information Systems  

PubMed Central

High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. Relevant studies were identified in major databases and well-known institutional websites. We found that the dimension of data was the most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interview and documentation review. The limitations of the reviewed studies included inattentiveness to data use and data collection process, inconsistency in the definition of attributes of data quality, failure to address data users’ concerns and a lack of systematic procedures in data quality assessment. This review study is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research efforts should be given to assess the quality of data use and the quality of data collection process.

Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping

2014-01-01

378

Permeability computation on a REV with an immersed finite element method  

SciTech Connect

An efficient method to compute permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale and the flow motion is computed with a stabilized mixed finite element method. Therefore the Stokes equation is solved on the whole domain (including solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the bibliography.

Laure, P. [Laboratoire J.-A. Dieudonne, CNRS UMR 6621, Universite de Nice-Sophia Antipolis, Parc Valrose, 06108 Nice, Cedex 02 (France); Puaux, G.; Silva, L.; Vincent, M. [MINES ParisTech, CEMEF-Centre de Mise en Forme des Materiaux, CNRS UMR 7635, BP 207 1 rue Claude, Daunesse 06904 Sophia Antipolis cedex (France)

2011-05-04

379

Permeability computation on a REV with an immersed finite element method  

NASA Astrophysics Data System (ADS)

An efficient method to compute permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale and the flow motion is computed with a stabilized mixed finite element method. Therefore the Stokes equation is solved on the whole domain (including solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the bibliography.

Laure, P.; Puaux, G.; Silva, L.; Vincent, M.

2011-05-01

380

The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances  

NASA Technical Reports Server (NTRS)

In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

Beltran, Adriana; Salvador, James

1997-01-01

381

A Comparative Study of Two Computational Methods for Calculating Unsteady Transonic Flows About Oscillating Airfoils.  

National Technical Information Service (NTIS)

This report provides a direct comparison of the calculations for the flow field about an oscillating airfoil in a transonic freestream predicted by two computational methods; namely, harmonic analysis and time integration. Details of the two methods are s...

D. P. Rizzetta

1977-01-01

382

COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS  

EPA Science Inventory

This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
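The polynomial chaos expansion at the heart of the SRSM can be sketched in its simplest, non-intrusive form: for an output Y = g(X) with X a standard normal input, project g onto probabilists' Hermite polynomials by Gauss-Hermite quadrature, then read the mean and variance directly off the coefficients. This is a one-dimensional textbook illustration of the expansion, not the SRSM implementation itself.

```python
import numpy as np
from math import factorial, sqrt, pi

def hermite_pce_coeffs(g, order, nquad=20):
    """Non-intrusive polynomial chaos for Y = g(X), X ~ N(0, 1):
    project g onto probabilists' Hermite polynomials He_k using
    Gauss-Hermite quadrature.  c_k = E[g(X) * He_k(X)] / k!."""
    x, w = np.polynomial.hermite_e.hermegauss(nquad)
    w = w / sqrt(2 * pi)                         # weights of the standard normal pdf
    coeffs = []
    for k in range(order + 1):
        Hk = np.polynomial.hermite_e.hermeval(x, [0.0] * k + [1.0])  # He_k at the nodes
        coeffs.append(float(np.sum(w * g(x) * Hk)) / factorial(k))
    return coeffs

c = hermite_pce_coeffs(lambda x: x**2, order=4)
mean = c[0]                                      # E[Y]; equals 1 for g(x) = x**2
var = sum(factorial(k) * c[k]**2 for k in range(1, len(c)))   # Var[Y]; equals 2
```

The appeal for uncertainty propagation is that the output statistics come from a handful of model evaluations at quadrature nodes rather than from a large Monte Carlo sample.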

383

Multiple leaf tracking using computer vision methods with shape constraints  

NASA Astrophysics Data System (ADS)

Accurate monitoring of leaves and plants is a necessity for research on plant physiology. To aid this biological research, we propose a new active contour method to track individual leaves in chlorophyll fluorescence time-lapse sequences. The proposed active contour algorithm is developed such that it can handle sequences with low temporal resolution. This paper proposes a novel optimization method which incorporates prior knowledge about the plant shape. Tests show that the proposed framework outperforms state-of-the-art tracking methods.

De Vylder, Jonas; Van Der Straeten, Dominique; Philips, Wilfried

2013-05-01

384

Perturbation Method for Computational Fluid-Dynamical Equations  

NASA Technical Reports Server (NTRS)

Perturbation technique yields accurate flow solutions using as few as one-fourth number of grid points required by finite-difference methods. Technique originally developed to solve Euler equations of two-dimensional, steady, inviscid transonic flow about airfoils, applicable to arbitrary equation sets and higher dimensions. New perturbations scheme used in design cycle where potential solutions generated routinely; Euler perturbation method used in second-cut analysis. Method also used to couple other equation sets.

Chow, L. J.; Pulliam, T. H.; Steger, J. L.

1986-01-01

385

Reference computations of public dose and cancer risk from airborne releases of plutonium. Nuclear safety technical report  

SciTech Connect

This report presents results of computations of doses and the associated health risks of postulated accidental atmospheric releases from the Rocky Flats Plant (RFP) of one gram of weapons-grade plutonium in a form that is respirable. These computations are intended to be reference computations that can be used to evaluate a variety of accident scenarios by scaling the dose and health risk results presented here according to the amount of plutonium postulated to be released, instead of repeating the computations for each scenario. The MACCS2 code has been used as the basis of these computations. The basis and capabilities of MACCS2 are summarized, the parameters used in the evaluations are discussed, and results are presented for the doses and health risks to the public, both the Maximum Offsite Individual (a maximally exposed individual at or beyond the plant boundaries) and the population within 50 miles of RFP. A number of different weather scenarios are evaluated, including constant weather conditions and observed weather for 1990, 1991, and 1992. The isotopic mix of weapons-grade plutonium will change as it ages, the {sup 241}Pu decaying into {sup 241}Am. The {sup 241}Am reaches a peak concentration after about 72 years. The doses to the bone surface, liver, and whole body will increase slightly but the dose to the lungs will decrease slightly. The overall cancer risk will show almost no change over this period. This change in cancer risk is much smaller than the year-to-year variations in cancer risk due to weather. Finally, x/Q values are also presented for other applications, such as for hazardous chemical releases. These include the x/Q values for the MOI, for a collocated worker at 100 meters downwind of an accident site, and the x/Q value integrated over the population out to 50 miles.

Peterson, V.L.

1993-12-23

386

Congestion control method with fair resource allocation for cloud computing environments  

Microsoft Academic Search

In a cloud computing environment, it is necessary to simultaneously allocate both processing ability and the network bandwidth needed to access it. The authors proposed a congestion control method for a cloud computing environment which reduces the size of the required resource for the congested resource type, instead of restricting all service requests as in existing networks. Although this method can achieve

Takuro Tomita; Shin-ichi Kuribayashi

2011-01-01

387

Extension of the RBD-FAST method to the computation of global sensitivity indices  

Microsoft Academic Search

This paper deals with the sensitivity analysis method named Fourier amplitude sensitivity test (FAST). This method is known to be very robust for the computation of global sensitivity indices but their computational cost remains prohibitive for complex and large dimensional models. Recent developments in the implementation of FAST by use of the random balance designs (RBD) technique have allowed significant

Thierry Alex Mara

2009-01-01

388

A linear perturbation computation method applied to hydrodynamic instability growth predictions in ICF targets  

Microsoft Academic Search

A linear perturbation computation method is used to compute hydrodynamic instability growth in model implosions of inertial confinement fusion direct-drive and indirect-drive designed targets. Accurate descriptions of linear perturbation evolutions for Legendre mode numbers up to several hundreds have thus been obtained in a systematic way, motivating further improvements of the physical modeling currently handled by the method.

J.-M. Clarisse; C. Boudesocque-Dubois; J.-P. Leidinger; J.-L. Willien

2006-01-01

389

A linear perturbation computation method applied to hydrodynamic instability growth predictions in ICF targets  

NASA Astrophysics Data System (ADS)

A linear perturbation computation method is used to compute hydrodynamic instability growth in model implosions of inertial confinement fusion direct-drive and indirect-drive designed targets. Accurate descriptions of linear perturbation evolutions for Legendre mode numbers up to several hundreds have thus been obtained in a systematic way, motivating further improvements of the physical modeling currently handled by the method.

Clarisse, J.-M.; Boudesocque-Dubois, C.; Leidinger, J.-P.; Willien, J.-L.

2006-06-01

390

Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions  

SciTech Connect

The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam-cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high-frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides.

Dragt, A.J.; Gluckstern, R.L.

1990-11-01

391

Integrated method of stereo matching for computer vision  

NASA Astrophysics Data System (ADS)

It is an important problem for computer vision to match the stereo image pair. Only when the problems of stereo matching are solved can the accurate location or measurement of an object be realized. In this paper, an integrated stereo matching approach is presented. Unlike most stereo matching approaches, it integrates area-based and feature-based primitives. This allows it to take advantage of the unique attributes of each of these techniques. The feature-based process is used to match the image features. It can provide a more precise sparse disparity map and accurate location of discontinuities. The area-based process is used to match the continuous surfaces. It can provide a dense disparity map. Techniques of stereo matching with an adaptive window are adopted in the area-based process, allowing its results to achieve high precision. An integration process is also used in this approach. It combines the results of the feature-based process and the area-based process, so that the approach can provide not only a dense disparity map but also an accurate location of discontinuities. The approach has been tested on some synthetic and natural images. From the results of a matched wedding cake and a matched aircraft model, we can see that the surfaces and configuration are well reconstructed. The integrated stereo matching approach can be used in 3D part recognition in intelligent assembly systems and computer vision.
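The area-based half of such an approach reduces, at each pixel, to window correlation along a scanline. A minimal sketch of that step (a fixed rather than adaptive window, with a synthetic image pair; not the paper's algorithm):

```python
import numpy as np

def block_match_disparity(left, right, y, x, max_disp=8, win=3):
    """Area-based stereo step: slide a (2*win+1)^2 window from the left
    image along the same scanline of the right image and return the
    shift (disparity) with the smallest sum of squared differences."""
    patch = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    best_d, best_ssd = 0, np.inf
    for d in range(0, min(max_disp, x - win) + 1):
        cand = right[y - win:y + win + 1, x - d - win:x - d + win + 1].astype(float)
        ssd = np.sum((patch - cand)**2)
        if ssd < best_ssd:
            best_d, best_ssd = d, ssd
    return best_d

# Synthetic rectified pair: scene texture shifted 4 pixels between the views.
rng = np.random.default_rng(0)
left = rng.random((32, 32))
right = np.roll(left, -4, axis=1)   # right[y, x] = left[y, x + 4] (wraps at edges)
d = block_match_disparity(left, right, 16, 16)
```

The feature-based half of the integrated approach would instead match sparse edges and corners, and the integration step reconciles the two disparity estimates.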

Xiong, Yingen; Wang, Dezong; Zhang, Guangzhao

1996-11-01

392

Computational methods for constructing protein structure models from 3D electron microscopy maps.  

PubMed

Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. PMID:23796504

Esquivel-Rodríguez, Juan; Kihara, Daisuke

2013-10-01

393

Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry  

NASA Technical Reports Server (NTRS)

Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

2012-01-01

394

Decluttering methods for high density computer-generated graphic displays  

NASA Technical Reports Server (NTRS)

Several decluttering methods were compared with respect to the speed and accuracy of user performance which resulted. The presence of a map background was also manipulated. Partial removal of nonessential graphic features through symbol simplification was as effective a decluttering technique as was total removal of nonessential graphic features. The presence of a map background interacted with decluttering conditions when response time was the dependent measure. Results indicate that the effectiveness of decluttering methods depends upon the degree to which each method makes essential graphic information distinctive from nonessential information. Practical implications are discussed.

Schultz, E. E., Jr.; Nichols, D. A.; Curran, P. S.

1985-01-01

395

Vectorization on the star computer of several numerical methods for a fluid flow problem  

NASA Technical Reports Server (NTRS)

A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.
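The kind of restructuring a vector-streaming machine rewards can be shown with a toy relaxation sweep: the point-by-point loop and the whole-array form compute identical results, but the second expresses the sweep as long vector operations. This is a generic illustration (a Jacobi sweep for -u'' = f), not one of the four methods studied in the report.

```python
import numpy as np

def jacobi_serial(u, f, h, sweeps):
    """Point-by-point Jacobi relaxation for -u'' = f (scalar loop style)."""
    u = u.copy()
    for _ in range(sweeps):
        new = u.copy()
        for i in range(1, len(u) - 1):
            new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def jacobi_vectorized(u, f, h, sweeps):
    """The same sweep expressed as whole-array slice operations --
    the restructuring that lets a vector machine stream the grid."""
    u = u.copy()
    for _ in range(sweeps):
        # NumPy evaluates the right-hand side before assigning, so this
        # uses only old values, exactly like the loop version above.
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

u0, f, h = np.zeros(33), np.ones(33), 1.0 / 32
agree = np.allclose(jacobi_serial(u0, f, h, 40), jacobi_vectorized(u0, f, h, 40))
```

Which of several mathematically equivalent methods vectorizes best is precisely the question the study poses for the cavity-flow problem.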

Lambiotte, J. J., Jr.; Howser, L. M.

1974-01-01

396

Leading Computational Methods on Scalar and Vector HEC Platforms  

SciTech Connect

The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems have the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: atmospheric modeling (CAM), magnetic fusion (GTC), plasma physics (LBMHD3D), and material science (PARATEC). We compare performance of three leading commodity-based superscalar platforms utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors with modern parallel vector systems: the Cray X1, Earth Simulator (ES), and the newly-released NEC SX-8. Additionally, we examine performance of CAM on the recently-released Cray X1E. 
Our research team was the first international group to conduct a performance evaluation study at the Earth Simulator Center; remote ES access is not available. Our work builds on our previous efforts [16, 17] and makes several significant contributions: the first reported vector performance results for CAM simulations utilizing a finite-volume dynamical core on a high-resolution atmospheric grid; a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the Teraflop barrier; the introduction of a new three-dimensional Lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26 Tflop/s on 4800 ES processors; and the largest PARATEC cell size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.

Oliker, Leonid [Lawrence Berkeley National Laboratory (LBNL)]; Carter, Jonathan [Lawrence Berkeley National Laboratory (LBNL)]; Wehner, Michael [Lawrence Berkeley National Laboratory (LBNL)]; Canning, Andrew [Lawrence Berkeley National Laboratory (LBNL)]; Ethier, Stephane [Princeton Plasma Physics Laboratory (PPPL)]; Mirin, Arthur [Lawrence Livermore National Laboratory (LLNL)]; Bala, Govindasamy [Lawrence Livermore National Laboratory (LLNL)]; Parks, David [NEC Solutions America]; Worley, Patrick H [ORNL]; Kitawaki, Shigemune [Japan Agency for Marine-Earth Science and Technology (JAMSTEC)]; Tsuda, Yoshinori [Japan Agency for Marine-Earth Science and Technology (JAMSTEC)]

2005-01-01

397

Computer-based plagiarism detection methods and tools: an overview  

Microsoft Academic Search

This paper is dedicated to the plagiarism problem. Two ways to reduce plagiarism, plagiarism prevention and plagiarism detection, are discussed. Widely used plagiarism detection methods are described, and the best-known plagiarism detection tools are analysed.

Romans Lukashenko; Vita Graudina; Janis Grundspenkis

2007-01-01

398

Initial Investigation into Methods of Computing Transonic Aerodynamic Sensitivity Coefficients.  

National Technical Information Service (NTIS)

Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared ...

L. A. Carlson

1991-01-01

399

Matching Wind Turbine Rotors and Loads: Computational Methods for Designers.  

National Technical Information Service (NTIS)

This report provides a comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications. The user must supply: (1) turbine aerodynamic efficiency as a function of ...

J. B. Seale

1983-01-01

400

Construction of Computational Methods for Shallow Water Flow Problems.  

National Technical Information Service (NTIS)

The usefulness of mathematical models based on shallow water equations (SWE) is generally recognized for hydraulic problems in civil engineering. This work contains a step-by-step description of the construction of a finite difference method (FDM) for the...

G. S. Stelling

1984-01-01

401

Circular Integrated Optical Microresonators: Analytical Methods and Computational Aspects  

NASA Astrophysics Data System (ADS)

This chapter discusses an ab initio frequency domain model of circular microresonators, built on the physical notions that commonly enter the description of the resonator functioning in terms of interaction of fields in the circular cavity with the modes supported by the straight bus waveguides. Quantitative evaluation of this abstract model requires propagation constants associated with the cavity/bend segments, and scattering matrices that represent the wave interaction in the coupler regions. These quantities are obtained by an analytical (2-D) or numerical (3-D) treatment of bent waveguides, along with spatial coupled mode theory (CMT) for the couplers. The required CMT formulation is described in detail. Also, quasi-analytical approximations for fast and accurate computation of the resonator spectra are discussed. The formalism discussed in this chapter provides valuable insight into the functioning of the resonators, and it is suitable for practical device design.
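As an illustration of the kind of spectra such scattering-matrix models produce, the following is a minimal sketch of the standard all-pass ring resonator relation, not the chapter's actual 2-D/3-D formulation; the coupling coefficient `t` and round-trip amplitude `a` are hypothetical values:

```python
import cmath

def all_pass_transmission(phi, t=0.95, a=0.97):
    """Power transmission of an all-pass ring resonator.

    phi : round-trip phase accumulated in the cavity
    t   : through-coupling coefficient of the coupler (hypothetical value)
    a   : single-pass amplitude factor of the cavity, i.e. loss (hypothetical)
    """
    e = a * cmath.exp(1j * phi)
    return abs((t - e) / (1 - t * e)) ** 2

# Sweep the round-trip phase to trace out one free spectral range.
spectrum = [all_pass_transmission(2 * cmath.pi * k / 200) for k in range(201)]
```

At resonance (phi = 0 here) the transmission dips toward zero when a is close to t (critical coupling); off resonance it returns to nearly unity.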

Hiremath, Kirankumar; Hammer, Manfred

402

Computer capillaroscopy as a new cardiological diagnostics method  

NASA Astrophysics Data System (ADS)

The blood flow in capillary vessels plays an important role in sustaining the vital activity of the human organism. The computerized capillaroscope is used for investigations of nailfold (eponychium) capillary blood flow. An important advantage of the instrument is the possibility of performing non-invasive investigations, i.e., without damage to skin or vessels and causing no pain or unpleasant sensations. The high-class equipment and software allow direct observation of capillary blood flow dynamics on a computer screen at 700-1300 times magnification. For the first time in clinical practice, it has become possible to precisely measure the speed of capillary blood flow, as well as the frequency of aggregate formation (blood particles glued together into clots). In addition, provision is made for automatic measurement of capillary size and wall thickness and automatic recording of blood aggregate images for further visual study, documentation, and electronic database management.

Gurfinkel, Yuri I.; Korol, Oleg A.; Kufal, George E.

1998-04-01

403

Solution NMR and Computational Methods for Understanding Protein Allostery  

PubMed Central

Allosterism is an essential biological regulatory mechanism. In enzymes, allosteric regulation results in an activation or inhibition of catalytic turnover. The mechanisms by which this is accomplished are unclear and vary significantly depending on the enzyme. It is commonly the case that a metabolite binds to the enzyme at a site distant from the catalytic site, yet its binding is coupled to and sensed by the active site. This coupling can manifest in changes in structure, dynamics, or both at the active site. These interactions between the allosteric and active sites, which are often quite distant from one another, involve numerous atoms as well as complex conformational rearrangements of the protein secondary and tertiary structure. Interrogation of this complex biological phenomenon necessitates multiple experimental approaches. In this article, we outline a combined solution NMR spectroscopic and computational approach using molecular dynamics and network models to uncover mechanistic aspects of allostery in the enzyme imidazole glycerol phosphate synthase.

Manley, Gregory; Rivalta, Ivan

2014-01-01

404

The Voronoi Implicit Interface Method for computing multiphase physics  

PubMed Central

We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces.

Saye, Robert I.; Sethian, James A.

2011-01-01

405

A locally refined rectangular grid finite element method - Application to computational fluid dynamics and computational physics  

NASA Technical Reports Server (NTRS)

The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.
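The local refinement step described above, dividing a box element into eight subelements, can be sketched as an octree-style split; this is a toy illustration only, not the actual FEM code, and the box corners used below are hypothetical:

```python
from itertools import product

def subdivide_box(lo, hi):
    """Split an axis-aligned box (corner tuples lo, hi) into its
    eight octant subelements, as in octree-style local refinement."""
    mid = tuple((l + h) / 2.0 for l, h in zip(lo, hi))
    children = []
    for octant in product((0, 1), repeat=3):
        c_lo = tuple(lo[d] if octant[d] == 0 else mid[d] for d in range(3))
        c_hi = tuple(mid[d] if octant[d] == 0 else hi[d] for d in range(3))
        children.append((c_lo, c_hi))
    return children

# Hypothetical box element spanning [0, 2]^3
kids = subdivide_box((0.0, 0.0, 0.0), (2.0, 2.0, 2.0))
```

The eight children tile the parent exactly, so their volumes sum to the parent volume; elements cut by a boundary surface would additionally need the special stiffness matrices the abstract mentions.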

Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

1991-01-01

406

A Comparison of Phylogenetic Network Methods Using Computer Simulation  

PubMed Central

Background We present a series of simulation studies that explore the relative performance of several phylogenetic network approaches (statistical parsimony, split decomposition, union of maximum parsimony trees, neighbor-net, simulated history recombination upper bound, median-joining, reduced median joining and minimum spanning network) compared to standard tree approaches (neighbor-joining and maximum parsimony) in the presence and absence of recombination. Principal Findings In the absence of recombination, all methods recovered the correct topology and branch lengths nearly all of the time when the substitution rate was low, except for minimum spanning networks, which did considerably worse. At a higher substitution rate, maximum parsimony and union of maximum parsimony trees were the most accurate. With recombination, the ability to infer the correct topology was halved for all methods and no method could accurately estimate branch lengths. Conclusions Our results highlight the need for more accurate phylogenetic network methods and the importance of detecting and accounting for recombination in phylogenetic studies. Furthermore, we provide useful information for choosing a network algorithm and a framework in which to evaluate improvements to existing methods and novel algorithms developed in the future.

Woolley, Steven M.; Posada, David; Crandall, Keith A.

2008-01-01

407

Computation of molecular electrostatics with boundary element methods.  

PubMed Central

In continuum approaches to molecular electrostatics, the boundary element method (BEM) can provide accurate solutions to the Poisson-Boltzmann equation. However, the numerical aspects of this method pose significant problems. We describe our approach, applying an alpha shape-based method to generate a high-quality mesh, which represents the shape and topology of the molecule precisely. We also describe an analytical method for mapping points from the planar mesh to their exact locations on the surface of the molecule. We demonstrate that the derivative boundary integral formulation has numerical advantages over the nonderivative formulation: the well-conditioned influence matrix can be maintained without deterioration of the condition number as the number of mesh elements scales up. Singular integrand kernels are characteristic of the BEM, and their accurate integration is an important issue. We describe variable transformations that allow accurate numerical integration; the latter is the only plausible integral evaluation method when using curve-shaped boundary elements.

Liang, J; Subramaniam, S

1997-01-01

408

One-eighth look-up table method for effectively generating computer-generated hologram patterns  

NASA Astrophysics Data System (ADS)

To generate ideal digital holograms, the computer-generated hologram (CGH) has been regarded as a solution. However, it has an unavoidable problem: the computational burden of generating a CGH is very large. Recently, many studies have investigated ways to reduce the computational complexity of CGH using methods such as look-up tables (LUTs) and parallel processing. Each method is effective at reducing the computation time for generating CGH. However, it appears to be difficult to apply both methods simultaneously because of the heavy memory consumption of the LUT technique. Therefore, we proposed a one-eighth LUT method in which the memory usage of the LUT is reduced, making it possible to apply both fast computing methods simultaneously for the computation of CGH. With the one-eighth LUT method, only one-eighth of the zone plates are stored in the LUT, and all of the zone plates are accessed by an indexing method. Through this method, we significantly reduced the memory usage of the LUT. Also, we confirmed the feasibility of reducing the computational time of the CGH by using general-purpose graphics processing units while reducing the memory usage.
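The symmetry argument behind storing one-eighth of a zone plate can be illustrated as follows. This is a toy sketch, not the paper's implementation: a radially symmetric fringe depends only on x² + y², so it is invariant under sign flips and an x/y swap, and only the octant 0 ≤ y ≤ x needs storing. The fringe formula and the constants `N` and `K` are made-up stand-ins:

```python
import math

N = 64     # half-width of the hypothetical zone-plate table
K = 0.01   # lumped phase constant (stand-in for pi / (wavelength * distance))

def fringe(x, y):
    """Direct evaluation of a toy zone-plate fringe; depends only on x^2 + y^2."""
    return math.cos(K * (x * x + y * y))

# Store only the octant 0 <= y <= x < N: one eighth of the symmetry orbit.
octant = {(x, y): fringe(x, y) for x in range(N) for y in range(x + 1)}

def lookup(x, y):
    """Recover any (x, y) sample from the stored octant via sign flips
    and an x/y swap -- the eight symmetries of the radial zone plate."""
    a, b = abs(x), abs(y)
    if b > a:
        a, b = b, a
    return octant[(a, b)]
```

The index mapping is cheap (two absolute values and a compare-swap), which is why it combines well with parallel evaluation on a GPU.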

Cho, Sungjin; Ju, Byeong-Kwon; Kim, Nam-Young; Park, Min-Chul

2014-05-01

409

A rapid method for the computation of equilibrium chemical composition of air to 15000 K  

NASA Technical Reports Server (NTRS)

A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results are included.
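The algebraic-combination idea can be seen in a one-reaction toy analogue (the report's actual system combines seven nonlinear equilibrium equations with four balance equations for eleven species, which is far larger): for O2 ⇌ 2 O at total pressure p with a hypothetical equilibrium constant Kp, combining the equilibrium relation with the oxygen mass balance gives a closed form for the degree of dissociation, with no free-energy minimization needed:

```python
import math

def dissociation_fraction(Kp, p=1.0):
    """Degree of dissociation alpha for the single reaction O2 <-> 2 O.

    Combining the equilibrium relation Kp = 4 alpha^2 p / (1 - alpha^2)
    with the oxygen mass balance yields this closed form -- a one-reaction
    analogue of the algebraic-combination strategy.
    """
    return math.sqrt(Kp / (Kp + 4.0 * p))

def composition(Kp, p=1.0):
    """Mole fractions (x_O2, x_O) at total pressure p."""
    a = dissociation_fraction(Kp, p)
    return (1.0 - a) / (1.0 + a), 2.0 * a / (1.0 + a)
```

The returned mole fractions satisfy both the mass balance (they sum to one) and the original equilibrium relation, which is what makes the direct algebraic route faster than iterative minimization.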

Prabhu, Ramadas K.; Erickson, Wayne D.

1988-01-01

410

Computational methods of robust controller design for aerodynamic flutter suppression  

NASA Technical Reports Server (NTRS)

The development of Riccati iteration, a tool for the design and analysis of linear control systems is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth order random examples. A literature review of robust controller design methods follows which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.

Anderson, L. R.

1981-01-01

411

Numerical computation of sapphire crystal growth using heat exchanger method  

Microsoft Academic Search

The finite element software FIDAP is employed to study the temperature and velocity distribution and the interface shape during a large sapphire crystal growth process using a heat exchanger method (HEM). In the present study, the energy input to the crucible by the radiation and convection inside the furnace and the energy output through the heat exchanger is modeled by

Chung-Wei Lu; Jyh-Chen Chen

2001-01-01

412

Enabling mass customization: computer-driven alteration methods  

Microsoft Academic Search

Manufacturers have been struggling to meet the wants and needs of their customers without sacrificing the efficiencies and profits gained through mass production. Fortunately, developments in information technology have increased the probability of mass customization being adopted as an acceptable business paradigm. Almost every CAD system used in apparel patternmaking has some method that would enable mass customization through automatic

Cynthia L. Istook

2002-01-01

413

Computer program for steamtube curvature analysis: Analytical method  

NASA Technical Reports Server (NTRS)

Program provides design information for low-drag, high drag-divergence Mach number isolated nacelles suitable for use with advanced high-bypass-ratio turbofan engines. One element is the development of a method to predict the inviscid pressure distribution and flow field about an arbitrary axisymmetric ducted body at transonic speeds.

Ferguson, D. R.; Heck, P. H.; Keith, J. S.; Lahti, D. J.; Merkle, C. L.

1974-01-01

414

A comprehensive overview of computational protein disorder prediction methods  

PubMed Central

Over the past decade there has been a growing acknowledgement that a large proportion of proteins within most proteomes contain disordered regions. Disordered regions are segments of the protein chain which do not adopt a stable structure. Recognition of disordered regions in a protein is of great importance for protein structure prediction, protein structure determination and function annotation as these regions have a close relationship with protein expression and functionality. As a result, a great many protein disorder prediction methods have been developed so far. Here, we present an overview of current protein disorder prediction methods including an analysis of their advantages and shortcomings. In order to help users to select alternative tools under different circumstances, we also evaluate 23 disorder predictors on the benchmark data of the most recent round of the Critical Assessment of protein Structure Prediction (CASP) and assess their accuracy using several complementary measures.
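Per-residue disorder prediction is a binary classification task, so assessments of the kind described above typically combine complementary confusion-matrix measures. The sketch below shows two commonly used ones, balanced accuracy and the Matthews correlation coefficient; these are illustrative choices, not necessarily the exact measures used in the paper's CASP evaluation:

```python
import math

def confusion(y_true, y_pred):
    """Confusion counts for per-residue disorder labels (1 = disordered)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def balanced_accuracy(tp, tn, fp, fn):
    """Average of sensitivity and specificity; robust to the strong
    class imbalance typical of disorder data."""
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Balanced accuracy matters here because ordered residues usually dominate, so plain accuracy rewards predictors that simply call everything ordered.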

Deng, Xin; Eickholt, Jesse

2013-01-01

415

Intelligent classification methods of grain kernels using computer vision analysis  

NASA Astrophysics Data System (ADS)

In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
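The 70/20/10 partition of the 2800-image data set described above can be sketched as follows. Only the split is shown; the feature extraction, linear discriminant analysis, and back-propagation network are not reproduced, and the function name and seed are hypothetical:

```python
import random

def split_dataset(items, fractions=(0.70, 0.20, 0.10), seed=0):
    """Shuffle and partition a data set into train/validation/test subsets
    according to the given fractions (the remainder goes to the test set)."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = round(fractions[0] * n)
    n_val = round(fractions[1] * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# 2800 kernel images, as in the abstract (indices stand in for images)
train, val, test = split_dataset(range(2800))
```

For 2800 samples this yields 1960/560/280 items, matching the stated proportions.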

Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

2011-06-01

416

Large scale structural optimization: Computational methods and optimization algorithms  

Microsoft Academic Search

Summary The objective of this paper is to investigate the efficiency of various optimization methods based on mathematical programming and evolutionary algorithms for solving structural optimization problems under static and seismic loading conditions. Particular emphasis is given to modified versions of the basic evolutionary algorithms aiming at improving the performance of the optimization procedure. Modified versions of both genetic algorithms and

M. Papadrakakis; N. D. Lagaros; Y. Tsompanakis; V. Plevris

2001-01-01

417

High-order RKDG Methods for Computational Electromagnetics  

Microsoft Academic Search

In this talk, we devise a new Runge-Kutta Discontinuous Galerkin (RKDG) method that achieves full high-order convergence in time and space while keeping the time- step proportional to the spatial mesh-size. To this end, we derive an extension to non-autonomous linear systems of the mth-order, m-stage strong stability preserving Runge-Kutta (SSP-RK) scheme with low storage described in Gottlieb et al.

Min-hung Chen; Bernardo Cockburn; Fernando Reitich

2005-01-01

418

A Survey of Synchronization Methods for Parallel Computers  

Microsoft Academic Search

An examination is given of how traditional synchronization methods influence the design of MIMD (multiple-instruction multiple-data-stream) multiprocessors. The author provides an overview of MIMD multiprocessing and goes on to discuss semaphore-based implementations (Ultracomputers, Cedar, and the Sequent Balance/21000), monitor-based implementations (the HM²p), and implementations based on message passing (HEP, the BBN Butterfly, and the Transputer).

Anne Dinning

1989-01-01

419

Computer method for design of acoustic liners for turbofan engines  

NASA Technical Reports Server (NTRS)

A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated which was the difference between the estimated generation spectrum and a flat annoyance-weighted goal attenuated spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (which is a result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation was carried through in order to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.

Minner, G. L.; Rice, E. J.

1976-01-01

420

Computer Simulations of Valveless Pumping using the Immersed Boundary Method  

NASA Astrophysics Data System (ADS)

Pumping blood in one direction is the main function of the heart, and the heart is equipped with valves that ensure unidirectional flow. Is it possible, though, to pump blood without valves? This report is intended to show by numerical simulation the possibility of a net flow which is generated by a valveless mechanism in a circulatory system. Simulations of valveless pumping are motivated by biomedical applications: cardiopulmonary resuscitation (CPR); and the human foetus before the development of the heart valves. The numerical method used in this work is the immersed boundary method, which is applicable to problems involving an elastic structure interacting with a viscous incompressible fluid. This method has already been applied to blood flow in the heart, platelet aggregation during blood clotting, aquatic animal locomotion, and flow in collapsible tubes. The direction of flow inside a loop of tubing which consists of (almost) rigid and flexible parts is investigated when the boundary of one end of the flexible segment is forced periodically in time. Despite the absence of valves, net flow around the loop may appear in these simulations. Furthermore, we present the new, unexpected result that the direction of this flow is determined not only by the position of the periodic compression, but also by the frequency and amplitude of the driving force.
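At the heart of the immersed boundary method is a smoothed delta function used to transfer force and velocity between the Lagrangian boundary points and the Eulerian fluid grid. One widely used choice is Peskin's four-point kernel, sketched below with the grid spacing taken as 1 for simplicity; this is a generic building block, not the simulation code of the report:

```python
import math

def phi4(r):
    """Peskin's four-point smoothed delta kernel (grid spacing h = 1)."""
    r = abs(r)
    if r <= 1.0:
        return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r <= 2.0:
        return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def spread_weights(x):
    """Weights with which a Lagrangian boundary point at position x
    deposits force onto the four nearest Eulerian grid nodes."""
    base = math.floor(x) - 1
    return {base + i: phi4(x - (base + i)) for i in range(4)}
```

The weights sum to one wherever the point sits between grid nodes, so momentum transferred from the boundary to the fluid is conserved by construction.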

Jung, Eunok; Peskin, Charles

2000-03-01

421

Computer-supported G2G collaboration for public policy and decision-making  

Microsoft Academic Search

Purpose – This paper investigates whether and how G2G collaboration for policy and decision-making can be effectively supported by an appropriately developed information system. Design/methodology/approach – The research method adopted in this paper follows the "Design Science Paradigm", which has been extensively used in information systems research. Findings – As resulted from the case study described in this paper, the

Nikos Karacapilidis; Euripides Loukis; Stavros Dimopoulos

2005-01-01

422

Methods of legitimation: How ethics committees decide which reasons count in public policy decision-making.  

PubMed

In recent years, liberal democratic societies have struggled with the question of how best to balance expertise and democratic participation in the regulation of emerging technologies. This study aims to explain how national deliberative ethics committees handle the practical tension between scientific expertise, ethical expertise, expert patient input, and lay public input by explaining two institutions' processes for determining the legitimacy or illegitimacy of reasons in public policy decision-making: that of the United Kingdom's Human Fertilisation and Embryology Authority (HFEA) and the United States' American Society for Reproductive Medicine (ASRM). The articulation of these 'methods of legitimation' draws on 13 in-depth interviews with HFEA and ASRM members and staff conducted in January and February 2012 in London and over Skype, as well as observation of an HFEA deliberation. This study finds that these two institutions employ different methods in rendering certain arguments legitimate and others illegitimate: while the HFEA attempts to 'balance' competing reasons but ultimately legitimizes arguments based on health and welfare concerns, the ASRM seeks to 'filter' out arguments that challenge reproductive autonomy. The notably different structures and missions of each institution may explain these divergent approaches, as may what Sheila Jasanoff (2005) terms the distinctive 'civic epistemologies' of the US and the UK. Significantly for policy makers designing such deliberative committees, each method differs substantially from that explicitly or implicitly endorsed by the institution. PMID:24833251

Edwards, Kyle T

2014-07-01

423

3D modeling method for computer animate based on modified weak structured light method  

Microsoft Academic Search

A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such an optical measurement device is

Hanwei Xiong; Ming Pan; Xiangwei Zhang

2010-01-01

424

Novel systems biology and computational methods for lipidomics  

NASA Astrophysics Data System (ADS)

The analysis and interpretation of large lipidomic data sets requires the development of new dynamical systems, data mining and visualization approaches. Traditional techniques are insufficient to study the co-regulations and stochastic fluctuations observed in lipidomic networks and the resulting experimental data. The emphasis of this paper lies in the presentation of novel approaches for dynamical analysis and projection representation. Different paradigms describing kinetic models and providing context-based information are described, and at the same time their interrelations are revealed. These qualitative and quantitative methods are applied to the lipidomic analysis of U87 MG glioblastoma cells. The achieved results provide a more detailed insight into the data structure of the lipidomic system.

Meyer-Bäse, Anke; Lespinats, Sylvain

2010-04-01

425

A simple method for computer quantification of stage REM eye movement potentials.  

PubMed

We describe a simple method for computer quantification of eye movement (EM) potentials during REM sleep. This method can be applied by investigators using either period-amplitude (PA) or Fast Fourier Transform (FFT) spectral EEG analysis without special hardware or computer programming. It provides good correlations with visual ratings of EM in baseline sleep and after administration of GABAergic hypnotics. We present baseline data for both PA and FFT measures for 16 normal subjects, studied for 5 consecutive nights. Both visually rated and computer-measured EM density (EMD) showed high night-to-night correlations across baseline and drug nights and the computer measures detected the EMD suppression that is produced by GABAergic drugs. Measurement of EM in addition to stage REM provides biologically significant information and application of this simple computer method, which does not require pattern recognition algorithms or special hardware, could provide reliable data that can be compared across laboratories. PMID:11352140
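A heavily simplified sketch of period-amplitude-style EM quantification is given below: count threshold excursions in an EOG trace and report a density per epoch. This is not the authors' algorithm; the threshold, units, and epoch length are invented for illustration:

```python
def em_density(eog, threshold=50.0, epoch_len=300):
    """Count eye-movement events (excursions beyond +/- threshold, e.g. in
    microvolts) and return events per epoch of epoch_len samples."""
    events = 0
    above = False
    for v in eog:
        if abs(v) >= threshold:
            if not above:        # rising edge: a new excursion begins
                events += 1
                above = True
        else:
            above = False
    n_epochs = max(1, len(eog) // epoch_len)
    return events / n_epochs

# Synthetic one-epoch trace with two excursions
trace = [0.0] * 100 + [80.0] * 10 + [0.0] * 100 + [-90.0] * 5 + [0.0] * 85
```

Because the logic uses only amplitude thresholds, it illustrates how EM density can be derived without pattern recognition algorithms or special hardware, as the abstract emphasizes.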

Tan, X; Campbell, I G; Feinberg, I

2001-05-01

426

Reference computations of public dose and cancer risk from airborne releases of uranium and Class W plutonium  

SciTech Connect

This report presents "reference" computations that can be used by safety analysts in evaluations of the consequences of postulated atmospheric releases of radionuclides from the Rocky Flats Environmental Technology Site. These computations deal specifically with doses and health risks to the public. The radionuclides considered are Class W Plutonium, all classes of Enriched Uranium, and all classes of Depleted Uranium. (The other class of plutonium, Y, was treated in an earlier report.) In each case, one gram of the respirable material is assumed to be released at ground level, both with and without fire. The resulting doses and health risks can be scaled to whatever amount of release is appropriate for the postulated accident being investigated. The report begins with a summary of the organ-specific stochastic risk factors appropriate for alpha radiation, which poses the main health risk of plutonium and uranium. This is followed by a summary of the atmospheric dispersion factors for unfavorable and typical weather conditions for the calculation of consequences to both the Maximum Offsite Individual and the general population within 80 km (50 miles) of the site.
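Because the computations are normalized to a one-gram respirable release, consequences scale linearly with the source term through the dispersion factor. The sketch below shows that consequence chain in its generic Gaussian-plume form; all numerical values are illustrative only, and actual χ/Q and dose conversion factors must come from the report itself:

```python
def inhalation_dose(source_term_g, chi_over_q, breathing_rate, dcf_per_g):
    """Consequence chain for a ground-level airborne release.

    source_term_g   grams of respirable material released
    chi_over_q      atmospheric dispersion factor at the receptor, s/m^3
    breathing_rate  receptor breathing rate, m^3/s
    dcf_per_g       dose conversion factor, rem per gram inhaled
    """
    grams_inhaled = source_term_g * chi_over_q * breathing_rate
    return grams_inhaled * dcf_per_g

# Illustrative numbers only -- not values from the report
dose_1g = inhalation_dose(1.0, 1e-4, 3.3e-4, 1e6)
```

Linearity in the source term is what lets a single one-gram "reference" table be rescaled to any postulated release.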

Peterson, V.L.

1995-06-06

427

Application of the OPTEX method for computing reflector parameters  

SciTech Connect

We are investigating the OPTEX reflector model for obtaining few-group reflector parameters consistent with a reference power distribution in the core. In our study, the reference power distribution is obtained using a 142,872-region calculation defined over a 2D eighth-of-core pressurized water reactor and performed with the method of characteristics. The OPTEX method is based on generalized perturbation theory and uses an optimization algorithm known as parametric linear complementarity pivoting. The proposed model leads to few-group diffusion coefficients or P1-weighted macroscopic total cross sections that can be used to represent the reflector in full-core calculations. These few-group parameters can be spatially heterogeneous in order to correctly represent steel baffles present in modern pressurized water reactors. The optimal reflector parameters are compared to those obtained with a flux-volume weighting of the reflector cross sections recovered from the reference calculation. Important improvements in full-core power distribution are observed when the optimal parameters are used. (authors)
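The flux-volume weighting used as the comparison baseline condenses region-wise cross sections into one homogenized value; a minimal sketch follows, with hypothetical region data standing in for the reference-calculation output:

```python
def flux_volume_weight(sigmas, fluxes, volumes):
    """Homogenized cross section:
    sum_i(sigma_i * phi_i * V_i) / sum_i(phi_i * V_i)."""
    num = sum(s * f * v for s, f, v in zip(sigmas, fluxes, volumes))
    den = sum(f * v for f, v in zip(fluxes, volumes))
    return num / den

# Two hypothetical reflector regions (e.g. baffle steel and water);
# cross sections, fluxes and volumes are illustrative numbers only.
sigma_hom = flux_volume_weight([0.5, 1.2], [1.0, 0.4], [2.0, 3.0])
```

The homogenized value always lies between the regional extremes; the OPTEX optimization instead adjusts the few-group parameters to reproduce the reference power distribution, which flux-volume weighting alone does not guarantee.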

Hebert, A. [Ecole Polytechnique de Montreal, C.P. 6079 suce. Centre-Ville, Montreal QC. H3C 3A7 (Canada)]; Leroyer, H. [EDF - R and D, SINETICS, 1 Avenue du General de Gaulle, 92141 Clamart (France)]

2013-07-01

428

Private and Public Sector Enterprise Resource Planning System Post-Implementation Practices: A Comparative Mixed Method Investigation  

ERIC Educational Resources Information Center

While private sector organizations have implemented enterprise resource planning (ERP) systems since the mid-1990s, ERP implementations within the public sector lagged by several years. This research conducted a mixed method, comparative assessment of post "go-live" ERP implementations between public and private sector organizations. Based on a…

Bachman, Charles A.

2010-01-01

429

Analysis of the development concepts and methods of visual data representation in computational physics  

NASA Astrophysics Data System (ADS)

The main steps in the development of scientific visualization as a branch of science are discussed, together with the evolution and prospects of the concepts, methods, and approaches for visual representation of numerical results obtained in computational physics (mainly, computational fluid dynamics).

Bondarev, A. E.; Galaktionov, V. A.; Chechetkin, V. M.

2011-04-01

430

Feature selective validation (FSV) for validation of computational electromagnetics (CEM). part I-the FSV method  

Microsoft Academic Search

A goal for the validation of computational electromagnetics (CEM) is to provide the community with a simple computational method that can be used to predict the assessment of electromagnetic compatibility (EMC) data as it would be undertaken by individuals or teams of engineers. The benefits of being able to do this include quantifying the comparison of data that has hitherto

Alistair P. Duffy; Anthony J. M. Martin; Antonio Orlandi; G. Antonini; T. M. Benson; M. S. Woolfson

2006-01-01

431

A computational method for optimal control of hybrid systems using differential transformation  

Microsoft Academic Search

A computational method based on differential transformation is proposed for optimal control of hybrid systems with a predefined switching sequence. Using differential transformation, the optimality conditions of hybrid systems are transformed into a system of nonlinear algebraic equations. The numerical optimal solution is computed in the form of a finite-term series of a chosen basis system. The performance of the

Dzung Du; Jinhua Li; Inseok Hwang

2007-01-01

432

Multicore computing of the lattice Boltzmann method: A backward-facing step flow example  

Microsoft Academic Search

Recently, several methods were proposed to accelerate lattice Boltzmann computing. In this paper, we present a multicore scheme to accelerate a lattice Boltzmann model, and the two-dimensional backward-facing step flow system is taken as an example to show the powerful capability of multicore computing. The parallel algorithm has been implemented in Visual Studio C++ 2005 augmented with calls to

Weibin Guo; Zhaoli Guo; Baochang Shi

2010-01-01

433

Computing research methods multi-perspective digital library: a call for participation  

Microsoft Academic Search

For the past three years, SIGCSE has sponsored a design research project on teaching Computing Research Methods (CRM) [4]. The initial phase of the work included an ITiCSE working group that gathered a great deal of literature on and about: computing research; CRM; and teaching CRM [3]. During the literature review, we discovered a number of similar current and prior

Anne G. Applin; Hilary J. Holz

2008-01-01

434

A multi-perspective digital library to facilitate integrating teaching research methods across the computing curriculum  

Microsoft Academic Search

The computing research methods (CRM) literature is scattered across discourse communities and published in specialty journals and conference proceedings. This dispersion has led to the use of inconsistent terminology when referring to CRM. With no established CRM vocabulary and isolated discourse communities, computing as a field needs to engage in a sense-making process to establish the common ground

Anne Gates Applin; Hilary J. Holz; William Joel; Ifeyinwa Okoye; Katherine Deibel; Becky Grasser; Briony J. Oates; Gwendolyne Wood

2007-01-01

435

Improvement in computational efficiency of Euler equations via a modified Sparse Point Representation method  

Microsoft Academic Search

A modified Sparse Point Representation (SPR) method is proposed to enhance the computational efficiency of solving the Euler equations. An SPR dataset adapted to a solution is constructed through interpolating wavelet decomposition and thresholding. The fluxes are evaluated only at the points within an SPR dataset, which reduces the total computing time. In order to improve the overall efficiency and accuracy of
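The thresholding idea behind an SPR dataset (keep only points whose values cannot be recovered from their neighbours by interpolation) can be sketched in one dimension. This toy Python version uses plain linear interpolation and a made-up function name; it is illustrative only, not the authors' wavelet-based scheme:

```python
def spr_points(values, eps):
    """Toy 1D sparse-point selection: keep a grid point only when linear
    interpolation from its two neighbours misses the stored value by
    more than eps. Endpoints are always kept."""
    keep = [0, len(values) - 1]
    for i in range(1, len(values) - 1):
        interp = 0.5 * (values[i - 1] + values[i + 1])
        if abs(values[i] - interp) > eps:
            keep.append(i)
    return sorted(keep)

# Smooth (locally linear) regions are thinned out; the spike near the
# middle of this sample signal is retained in the sparse dataset.
dataset = spr_points([0, 1, 2, 3, 10, 3, 2, 1, 0], 0.5)
```

Fluxes would then be evaluated only at the retained indices, which is where the computing-time saving comes from.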

Hyung Min Kang; Kyu Hong Kim; Do Hyung Lee; Dong Ho Lee

2008-01-01

436

A finite difference continuation method for computing energy levels of Bose Einstein condensates  

Microsoft Academic Search

We study a finite difference continuation (FDC) method for computing energy levels and wave functions of Bose–Einstein condensates (BEC), which are governed by the Gross–Pitaevskii equation (GPE). We choose the chemical potential λ as the continuation parameter so that the proposed algorithm can compute all energy levels of the discrete GPE. The GPE is discretized using the second-order

S.-L. Chang; C.-S. Chien; Z.-C. Li

2008-01-01

437

A Survey of Methods for Computing (un)Stable Manifolds of Vector Fields  

Microsoft Academic Search

The computation of global invariant manifolds has seen renewed interest in recent years. We survey different approaches for computing a global stable or unstable manifold of a vector field, where we concentrate on the case of a two-dimensional manifold. All methods are illustrated with the same example: the two-dimensional stable manifold of the origin in the

Bernd Krauskopf; Hinke M. Osinga; Eusebius J. Doedel; M. E. Henderson; John Guckenheimer; A. Vladimirsky; M. Dellnitz; O. Junge

2005-01-01

438

Nonlinear dynamic simulation of single- and multispool core engines, part 1: Computational method  

Microsoft Academic Search

A new computational method for accurate simulation of the nonlinear, dynamic behavior of single- and multispool core engines, turbofan engines, and power-generation gas turbine engines is presented in part 1. In order to perform the simulation, a modularly structured computer code has been developed that includes individual mathematical modules representing various engine components. The generic structure of the code enables

M. T. Schobeiri; M. Attia; C. Lippke

1994-01-01

439

Multi-centred mixed-methods PEPFAR HIV care & support public health evaluation: study protocol  

PubMed Central

Background A public health response is essential to meet the multidimensional needs of patients and families affected by HIV disease in sub-Saharan Africa. In order to appraise current provision of HIV care and support in East Africa, and to provide evidence-based direction to future care programming, a Public Health Evaluation was commissioned by the PEPFAR programme of the US Government. Methods/Design This paper describes the 2-Phase international mixed methods study protocol utilising longitudinal outcome measurement, surveys, patient and family qualitative interviews and focus groups, staff qualitative interviews, health economics and document analysis. Aim 1) To describe the nature and scope of HIV care and support in two African countries, including the types of facilities available, clients seen, and availability of specific components of care [Study Phase 1]. Aim 2) To determine patient health outcomes over time and principal cost drivers [Study Phase 2]. The study objectives are as follows. 1) To undertake a cross-sectional survey of service configuration and activity by sampling 10% of the facilities being funded by PEPFAR to provide HIV care and support in Kenya and Uganda (Phase 1) in order to describe care currently provided, including pharmacy drug reviews to determine availability and supply of essential drugs in HIV management. 2) To conduct patient focus group discussions at each of these facilities (Phase 1) to determine care received. 3) To undertake a longitudinal prospective study of 1200 patients who are newly diagnosed with HIV or patients with HIV who present with a new problem attending PEPFAR care and support services. Data collection includes self-reported quality of life, core palliative outcomes and components of care received (Phase 2). 4) To conduct qualitative interviews with staff, patients and carers in order to explore and understand service issues and care provision in more depth (Phase 2). 
5) To undertake document analysis to appraise the clinical care procedures at each facility (Phase 2). 6) To determine principal cost drivers including staff, overhead and laboratory costs (Phase 2). Discussion This novel mixed methods protocol will permit transparent presentation of the subsequent dataset in the results publication, and offers a substantive model of protocol design to measure and integrate the key activities and outcomes that underpin a public health approach to disease management in a low-income setting.

2010-01-01

440

Parallel Störmer-Cowell methods for high-precision orbit computations.  

NASA Astrophysics Data System (ADS)

Many orbit problems in celestial mechanics are described by (nonstiff) initial-value problems (IVPs) for second-order ordinary differential equations of the form y″ = f(y). The authors consider high-order parallel methods which fit into the class of general linear methods. In each step, these methods compute blocks of k approximate solution values (or stage values) at k different points using the whole previous block of solution values. The k stage values can be computed in parallel, so that on a k-processor computer system such methods effectively perform as a one-value method. The block methods considered in this paper are such that each equation defining a stage value resembles a linear multistep equation of the familiar Störmer-Cowell type. For k = 4 and k = 5 they constructed explicit PSC methods with stage order q = k and step point order p = k+1 and implicit PSC methods with q = k+1 and p = k+2. For k ≥ 6 one can construct explicit PSC methods with q = k and p = k+2 and implicit PSC methods with q = k+1 and p = k+3. It turns out that for k ≥ 5 the abscissae of the stage values can be chosen such that only k-1 stage values in each block have to be computed, so that the number of computational stages, and hence the number of processors and the number of starting values needed, reduces to k* = k-1.
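The multistep equations these stage values resemble can be illustrated, in a much-simplified sequential form, by the classical two-step Störmer scheme for y″ = f(y). The sketch below (plain Python, harmonic-oscillator test case) shows only this scalar special case, not the parallel block construction of the paper:

```python
import math

def stoermer(f, y0, v0, h, n):
    """Integrate y'' = f(y) with the classical two-step Stoermer scheme:
        y_{k+1} = 2*y_k - y_{k-1} + h**2 * f(y_k),
    started with a one-step Taylor approximation for y_1."""
    ys = [y0, y0 + h * v0 + 0.5 * h ** 2 * f(y0)]
    for _ in range(n - 1):
        ys.append(2 * ys[-1] - ys[-2] + h ** 2 * f(ys[-1]))
    return ys

# Harmonic oscillator y'' = -y with y(0) = 1, y'(0) = 0; exact solution cos(t).
h, n = 0.01, 100
ys = stoermer(lambda y: -y, 1.0, 0.0, h, n)
err = abs(ys[-1] - math.cos(n * h))  # global error at t = 1
```

In the PSC methods of the paper, a whole block of such stage equations is solved per step, one stage per processor.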

van der Houwen, P. J.; Messina, E.; de Swart, J. J. B.

1999-11-01

441

A method for computing leading-edge loads  

NASA Technical Reports Server (NTRS)

In this report a formula is developed that enables the determination of the proper design load for the portion of the wing forward of the front spar. The formula is inherently rational in concept, as it takes into account the most important variables that affect the leading-edge load, although theoretical rigor has been sacrificed for simplicity and ease of application. Some empirical corrections, based on pressure distribution measurements on the PW-9 and M-3 airplanes have been introduced to provide properly for biplanes. Results from the formula check experimental values in a variety of cases with good accuracy in the critical loading conditions. The use of the method for design purposes is therefore felt to be justified and is recommended.

Rhode, Richard V; Pearson, Henry A

1933-01-01

442

Skin Burns Degree Determined by Computer Image Processing Method  

NASA Astrophysics Data System (ADS)

In this paper a new method for quantitatively determining the degree of skin burns is put forward. First, using Photoshop 9.0, we analyzed the statistical character of the histograms of skin burn images, then converted the images of burned skin from RGB color space to HSV space and analyzed the transformed color histograms. Finally, we obtained the percentage of burned skin area with Photoshop 9.0. We took the mean of the image histogram, the standard deviation of the color maps, and the percentage of burned area as indicators for evaluating burns, assigned each indicator a weight, and obtained the burn score by summing the products of each indicator and its weight. The degree of burns can then be evaluated from the classification of burn scores.
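The indicator-weighting step described in the abstract can be sketched as follows; the pixel values, burned-pixel mask, and weights here are hypothetical and purely for illustration (the paper derives its indicators from Photoshop histograms, not from this code):

```python
from statistics import mean, pstdev

def burn_indicators(pixels, burned_mask):
    """Three indicators from a flat grayscale image: histogram mean,
    histogram standard deviation, and percentage of pixels flagged burned."""
    pct_burned = 100.0 * sum(burned_mask) / len(burned_mask)
    return (mean(pixels), pstdev(pixels), pct_burned)

def burn_score(indicators, weights):
    """Weighted sum of the indicators; the weights are illustrative."""
    return sum(i * w for i, w in zip(indicators, weights))

pixels = [10, 10, 200, 200]   # tiny 2x2 "image", flattened
mask = [0, 0, 1, 1]           # pixels flagged as burned
score = burn_score(burn_indicators(pixels, mask), (0.2, 0.2, 0.6))
```

The burn degree would then be read off from a classification of such scores.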

Li, Hong-yan

443

Dynamic Integration of Mobile JXTA with Cloud Computing for Emergency Rural Public Health Care  

PubMed Central

Objectives The existing processes of health care systems where data collection requires a great deal of labor with high-end tasks to retrieve and analyze information, are usually slow, tedious, and error prone, which restrains their clinical diagnostic and monitoring capabilities. Research is now focused on integrating cloud services with P2P JXTA to identify systematic dynamic process for emergency health care systems. The proposal is based on the concepts of a community cloud for preventative medicine, to help promote a healthy rural community. We investigate the approaches of patient health monitoring, emergency care, and an ambulance alert alarm (AAA) under mobile cloud-based telecare or community cloud controller systems. Methods Considering permanent mobile users, an efficient health promotion method is proposed. Experiments were conducted to verify the effectiveness of the method. The performance was evaluated from September 2011 to July 2012. A total of 1,856,454 cases were transported and referred to hospital, identified with health problems, and were monitored. We selected all the peer groups and the control server N0 which controls N1, N2, and N3 proxied peer groups. The hospital cloud controller maintains the database of the patients through a JXTA network. Results Among 1,856,454 transported cases with beneficiaries of 1,712,877 cases there were 1,662,834 lives saved and 8,500 cases transported per day with 104,530 transported cases found to be registered in a JXTA network. Conclusion The registered case histories were referred from the Hospital community cloud (HCC). SMS messages were sent from node N0 to the relay peers which connected to the N1, N2, and N3 nodes, controlled by the cloud controller through a JXTA network.

Rajkumar, Rajasekaran; Sriman Narayana Iyengar, Nallani Chackravatula

2013-01-01

444

A new method to compute standard-weight equations that reduces length-related bias  

USGS Publications Warehouse

We propose a new method for developing standard-weight (Ws) equations for use in the computation of relative weight (Wr) because the regression line-percentile (RLP) method often leads to length-related biases in Ws equations. We studied the structural properties of Ws equations developed by the RLP method through simulations, identified reasons for biases, and compared Ws equations computed by the RLP method and the new method. The new method is similar to the RLP method but is based on means of measured weights rather than on means of weights predicted from regression models. The new method also models curvilinear Ws relationships not accounted for by the RLP method. For some length-classes in some species, the relative weights computed from Ws equations developed by the new method were more than 20 Wr units different from those using Ws equations developed by the RLP method. We recommend assessment of published Ws equations developed by the RLP method for length-related bias and use of the new method for computing new Ws equations when bias is identified. © Copyright by the American Fisheries Society 2005.
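For context, relative weight is conventionally computed as Wr = 100·W/Ws, with the standard weight Ws taken from a log10-linear length-weight equation. The sketch below uses made-up coefficients a and b purely for illustration; they are not from the paper or from any published Ws equation:

```python
import math

def relative_weight(weight_g, length_mm, a, b):
    """Relative weight Wr = 100 * W / Ws, where the standard weight Ws
    follows the usual log-linear form log10(Ws) = a + b * log10(length)."""
    ws = 10 ** (a + b * math.log10(length_mm))
    return 100.0 * weight_g / ws

# With the illustrative coefficients a = -5, b = 3, a 400 mm fish has
# Ws = 1e-5 * 400**3 = 640 g, so a 640 g fish scores exactly Wr = 100.
wr = relative_weight(640.0, 400.0, -5.0, 3.0)
```

The length-related bias discussed in the abstract concerns how the coefficients a and b are estimated, not this final arithmetic.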

Gerow, K. G.; Anderson-Sprecher, R. C.; Hubert, W. A.

2005-01-01

445

Application of extrapolation method to incompressible N-S equations on massively parallel computer  

NASA Astrophysics Data System (ADS)

We chose the extrapolation method ROLE as the acceleration technique implemented on massively parallel computers. Theoretical discussion was given on the applicability of the extrapolation method to a nonlinear system of equations by considering the behavior of a nonlinear mapping. Then the extrapolation method was introduced to the coupled method solving the incompressible N-S equations on AP1000. The maximum speed-up by extrapolation reached about 1.5–1.9 and hardly depended on the number of processors. It was concluded that the extrapolation method, which retained its accelerative property even for fine granularity, was an appropriate choice for massively parallel computation.
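ROLE itself is a specialized scheme, but the general idea of accelerating a convergent iteration by extrapolation can be illustrated with the classical Aitken Δ² process. This is a generic sketch, not the authors' method:

```python
import math

def aitken(seq):
    """Aitken delta-squared acceleration of a scalar sequence:
    each accelerated term is s2 - (s2 - s1)**2 / (s2 - 2*s1 + s0)."""
    return [s2 - (s2 - s1) ** 2 / ((s2 - s1) - (s1 - s0))
            for s0, s1, s2 in zip(seq, seq[1:], seq[2:])]

# Linearly convergent fixed-point iteration x <- cos(x).
seq = [1.0]
for _ in range(9):
    seq.append(math.cos(seq[-1]))
acc = aitken(seq)
root = 0.7390851332151607  # fixed point of cos(x)
```

The accelerated sequence approaches the fixed point noticeably faster than the raw iterates, which is the property exploited when coupling an extrapolation method to an iterative flow solver.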

Shimano, Kenjiro; Arakawa, Chuichi

446

Analysis of the computed torque drive method and comparison with conventional position servo for a computer-controlled manipulator  

NASA Technical Reports Server (NTRS)

A manipulator and its control system (modeled after a Stanford design) is being developed as part of an artificial intelligence project. This development includes an analytical study of the control system software. A comparison is presented of the computed torque method and the conventional position servo. No conclusion is made as to the preference of one system over the other, as it is dependent upon the application and the results of a sampled data analysis.

Markiewicz, B. R.

1973-01-01

447

The computation of MR image distortions caused by tissue susceptibility using the boundary element method.  

PubMed

Static field inhomogeneity in magnetic resonance (MR) imaging produces geometrical distortions which restrict the clinical applicability of MR images, e.g., for planning of precision radiotherapy. The authors describe a method to compute distortions which are caused by the difference in magnetic susceptibility between the scanned object and the surrounding air. Such a method is useful for understanding how the distortions depend on the object geometry, and for correcting for geometrical distortions, and thereby improving MR/CT registration algorithms. The geometric distortions in MR can be directly computed from the magnetic field inhomogeneity and the applied gradients. The boundary value problem of computing the magnetic field inhomogeneity caused by susceptibility differences is analyzed. It is shown that the boundary element method (BEM) has several advantages over previously applied methods to compute the magnetic field. Starting from the BEM and the assumption that the susceptibilities are very small (typically O(10⁻⁵) or less), a formula is derived to compute the magnetic field directly, without the need to solve a large system of equations. The method is computationally very efficient when the magnetic field is needed at a limited number of points, e.g., to compute geometrical distortions of a set of markers or a single surface. In addition to its computational advantage the method proves to be efficient to correct for the lack of data outside the scan which normally causes large artifacts in the computed magnetic field. These artifacts can be reduced by assuming that at the scan boundary the object extends to infinity in the form of a generalized cylinder. With the adaptation of the BEM this assumption is equivalent to simply omitting the scan boundary from the computations. To the authors' knowledge, no such simple correction method exists for other computation methods. 
The accuracy of the algorithm was tested by comparing the BEM solution with the analytical solution for a sphere. When the applied homogeneous field is 1.5 T the agreement between both methods was within 0.11×10⁻⁶ T. As an example, the method was applied to compute the displacement vector field of the surface of a human head, derived from an MR imaging data set. This example demonstrates that the distortions can be as large as 3 mm for points just outside the head when a gradient strength of 3 mT/m is used. It was also observed that distortion within the head can be described accurately as a linear scaling in the axial direction. PMID:18215943

de Munck, J C; Bhagwandien, R; Muller, S H; Verster, F C; Van Herk, M B

1996-01-01

448

Combining associative computing and distributed arithmetic methods for efficient implementation of multiple inner products  

NASA Astrophysics Data System (ADS)

Many multimedia processing algorithms as well as communication algorithms implemented in mobile devices are based on intensive use of linear algebra methods, in particular implying implementation of a large number of inner products in real time. Among the most efficient approaches to performing inner products are the Associative Computing (ASC) approach and the Distributed Arithmetic (DA) approach. In ASC, computations are performed on Associative Processors (ASP), where Content-Addressable Memories (CAMs) are used instead of traditional processing elements to perform basic arithmetic operations. In the DA approach, computations are reduced to look-up table reads with respect to binary planes of the inputs. In this work, we propose a modification of Associative Processors that supports efficient implementation of the DA method. Thus, the two powerful methods are combined to further improve the efficiency of multiple inner product computation. Computational complexity analysis of the proposed method illustrates significant speed-up when computing multiple inner products as compared both to the pure ASC method and to the pure DA method, as well as to other state-of-the-art traditional methods for inner product calculation.
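The look-up-table idea behind DA can be sketched in pure Python. This toy version for unsigned integer inputs is illustrative only, not the proposed hardware scheme; note the table has 2^n entries in the number of inputs, which is why DA targets short, fixed coefficient sets:

```python
def da_inner_product(coeffs, xs, nbits):
    """Distributed-arithmetic inner product for unsigned integer inputs.

    Precompute a table T over all input bit patterns, with T[p] equal to
    the sum of coeffs[i] for which bit i of p is set. The inner product
    is then a sum of shifted table reads, one read per bit plane."""
    n = len(coeffs)
    table = [sum(c for i, c in enumerate(coeffs) if (p >> i) & 1)
             for p in range(1 << n)]
    total = 0
    for b in range(nbits):
        # Pattern formed by bit b (the b-th binary plane) of every input.
        pattern = sum(((x >> b) & 1) << i for i, x in enumerate(xs))
        total += table[pattern] << b
    return total

result = da_inner_product([3, 5, 7], [6, 2, 9], 4)
```

Summing table[pattern]·2^b over the bit planes reconstructs Σ coeffs[i]·xs[i] exactly, with no multiplications, which is the property the ASP modification exploits.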

Guevorkian, David; Yli-Pietilä, Timo; Liuha, Petri; Egiazarian, Karen

2012-02-01

449

Scenario-based design: A method for connecting information system design with public health operations and emergency management  

PubMed Central

Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Methods: Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Results: Interview analysis identified twenty-five information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create twenty-five scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. Conclusion: The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design.

Reeder, Blaine; Turner, Anne M

2011-01-01

450

A Computationally Efficient Modification of QRM-MLD Signal Detection Method  

NASA Astrophysics Data System (ADS)

In this letter, we propose a novel signal detection method, reduced complexity QRM-MLD, which achieves almost identical error performance to that of the conventional QRM-MLD while significantly reducing the computational complexity.

Im, Tae-Ho; Kim, Jaekwon; Cho, Yong-Soo

451

Modeling Explosive/Rock Interaction During Presplitting Using ALE Computational Methods.  

National Technical Information Service (NTIS)

Arbitrary Lagrangian Eulerian (ALE) computational techniques allow treatment of gases, liquids, and solids in the same simulation. ALE methods include the ability to treat shockwaves in gases, liquids, and solids and the interaction of shockwaves with e...

R. P. Jensen; D. S. Preece

1999-01-01

452

Application of Computer-Assisted Learning Methods in the Teaching of Chemical Spectroscopy.  

ERIC Educational Resources Information Center

Discusses the application of computer-assisted learning methods to the interpretation of infrared, nuclear magnetic resonance, and mass spectra; and outlines extensions into the area of integrated spectroscopy. (Author/CMV)

Ayscough, P. B.; And Others

1979-01-01

453

Improved Computer-Assisted Digoxin Therapy -- A Method Using Feedback of Measured Serum Digoxin Concentrations.  

National Technical Information Service (NTIS)

Automated feedback control methods were applied to a medical problem, in a computer program that used measured serum digoxin concentrations (as feedback) to predict future concentrations and to achieve desired concentrations. The system was validated by c...

L. B. Sheiner; H. Halkin; C. Peck; B. Rosenberg; K. L. Melmon

1974-01-01

454

Computational Methods for Stability and Control (COMSAC): The Time Has Come  

NASA Technical Reports Server (NTRS)

Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable. This paper summarizes the efforts of four organizations to utilize high-end CFD tools to address the challenges of the stability and control arena. General motivation and the backdrop for these efforts are summarized, as well as examples of current applications.

Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.

2005-01-01

455

An overview of computational simulation methods for composite structures failure and life analysis  

NASA Technical Reports Server (NTRS)

Three parallel computational simulation methods are being developed at the LeRC Structural Mechanics Branch (SMB) for composite structures failure and life analysis: progressive fracture CODSTRAN; hierarchical methods for high-temperature composites; and probabilistic evaluation. Results to date demonstrate that these methods are effective in simulating composite structures failure/life/reliability.

Chamis, Christos C.

1993-01-01

456

Computational complexity and parallelization of the meshless local Petrov–Galerkin method  

Microsoft Academic Search

The computational complexity of the meshless local Petrov–Galerkin method (MLPG) has been analyzed and compared with the finite difference (FDM) and finite element methods (FEM) from the user point of view. Theoretically, MLPG is the most complex of the three methods. Experimental results show that MLPG, with appropriately selected integration order and dimensions of support and quadrature domains, achieves similar

Roman Trobec; Marjan Šterk; Borut Robič

2009-01-01

457

A Numerov-type Method for Computing Eigenvalues and Resonances of the Radial Schrödinger Equation  

Microsoft Academic Search

A two-step method is developed for computing eigenvalues and resonances of the radial Schrödinger equation. Numerical results obtained for the integration of the eigenvalue and the resonance problems for several potentials show that this new method is better than other similar methods.

Tom E. Simos; G. Tougelidis

1996-01-01

458

A coarse-constrained multiscale method for accelerating incompressible flow computations  

Microsoft Academic Search

We present a coarse-constrained multiscale (CCM) method for accelerating incompressible flow computations. Reducing the number of degrees of freedom of the Poisson solver by powers of two in the primitive variable fractional-step method, or the vorticity-stream function formulation of the problem accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine-resolution

Omer San; Anne E. Staples

459

Stock analysis method, computer program product, and computer-readable recording medium  

US Patent & Trademark Office Database

In a stock analysis method for performing an analysis on stocks to select target ones to be bought/sold from the stocks, each stock is grouped into a corresponding group based on its stock return data, market return data, and the industry return data of each corresponding classified industry. Clustering data for each stock corresponding to each time interval and associated with the groups is obtained based on a clustering mode. Analysis data for each stock corresponding to a coming time interval is estimated based on the corresponding clustering data. Stocks whose analysis data match the predetermined selection criteria are determined as the target stocks.

2014-04-29

460

Reducing computation time in DFP (Davidon, Fletcher & Powell) update method for solving unconstrained optimization problems  

NASA Astrophysics Data System (ADS)

Solving unconstrained optimization problems is not easy, and the DFP update method is one of the methods we can use to solve them. In unconstrained optimization, the computing time needed by a method's algorithm to solve the problem is vital, and because of that we proposed a hybrid search direction for the DFP update method in order to reduce the computation time needed to solve unconstrained optimization problems. Convergence analysis and numerical results for the hybrid search direction showed that it strictly reduces the computation time needed by the DFP update method while increasing the efficiency of the method, which otherwise sometimes fails on complicated unconstrained optimization problems.
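As context for the abstract above, here is a minimal sketch of the standard (non-hybrid) DFP inverse-Hessian update applied to a 2-D quadratic with an exact line search. The function names and the test problem are illustrative, not from the paper:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dfp_quadratic(A, b, x, iters=2):
    """Minimize f(x) = 0.5*x^T A x - b^T x using DFP inverse-Hessian
    updates with a line search that is exact for quadratics."""
    n = len(x)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # H ~ inverse Hessian
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]           # gradient A x - b
    for _ in range(iters):
        d = [-di for di in matvec(H, g)]                       # search direction
        alpha = -dot(g, d) / dot(d, matvec(A, d))              # exact step length
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, x_new), b)]
        s = [a - c for a, c in zip(x_new, x)]
        y = [a - c for a, c in zip(g_new, g)]
        Hy = matvec(H, y)
        sy, yHy = dot(s, y), dot(y, Hy)
        # DFP update: H += s s^T / (s^T y) - (H y)(H y)^T / (y^T H y)
        for i in range(n):
            for j in range(n):
                H[i][j] += s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
        x, g = x_new, g_new
    return x

# Quadratic with minimizer (1, 1): A = diag(2, 4), b = (2, 4).
x_min = dfp_quadratic([[2.0, 0.0], [0.0, 4.0]], [2.0, 4.0], [0.0, 0.0])
```

With exact line searches, DFP reaches the minimizer of an n-dimensional quadratic in n steps (here two); the hybrid search direction of the paper targets the cost of the general, non-quadratic case.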

Sofi, A. Z. M.; Mamat, M.; Ibrahim, M. A. H.

2013-04-01

461

Moving finite elements: A continuously adaptive method for computational fluid dynamics  

SciTech Connect

Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields. Examples include meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor fabrication, biophysics, automobile and aircraft design, industrial fluid processing, chemical engineering, and combustion research. The improvements made possible by the new method could thus have substantial economic impact. This paper describes the mathematical formulation and illustrates its use.

Glasser, A.H. (Los Alamos National Lab., NM (USA)); Miller, K.; Carlson, N. (California Univ., Berkeley, CA (USA))

1991-01-01

462

A combined direct/inverse three-dimensional transonic wing design method for vector computers  

NASA Technical Reports Server (NTRS)

A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

Weed, R. A.; Carlson, L. A.; Anderson, W. K.

1984-01-01

463

Numerical methods and comparison for computing dark and bright solitons in the nonlinear Schrödinger equation  

NASA Astrophysics Data System (ADS)

In this paper, we propose new efficient and accurate numerical methods for computing dark solitons and review some existing numerical methods for bright and/or dark solitons in the nonlinear Schrödinger equation (NLSE), and compare them numerically in terms of accuracy and efficiency. We begin with a review of dark and bright solitons of NLSE with defocusing and focusing cubic nonlinearities, respectively. For computing dark solitons, to overcome the nonzero and/or non-rest (or highly oscillatory) phase background at far field, we design efficient and accurate numerical methods based on accurate and simple artificial boundary conditions or a proper transformation to rest the highly oscillatory phase background. Stability and conservation laws of these numerical methods are analyzed. For computing interactions between dark and bright solitons, we compare the efficiency and accuracy of the above numerical methods and different existing numerical methods for computing bright solitons of NLSE, and identify the most efficient and accurate numerical methods for computing dark and bright solitons as well as their interactions in NLSE. These numerical methods are applied to study numerically the stability and interactions of dark and bright solitons in NLSE. Finally, they are extended to solve NLSE with general nonlinearity and/or external potential and coupled NLSEs with vector solitons.

Bao, Weizhu; Tang, Qinglin; Xu, Zhiguo

2013-02-01

464

A new computational method for computing flow over complex aerodynamic configurations and its application to rotor/body computation using Cartesian grids  

NASA Astrophysics Data System (ADS)

A new numerical method for efficiently computing vortex-dominated flows over complex aerodynamic configurations is developed. The method uses only a fixed, uniform Cartesian grid; no body-conforming grid is required. The complex geometry surface is described by a smooth scalar function "F" defined at each grid node. By using Vorticity Confinement, the method effectively confines the vorticity to a narrow region, even on coarse computational grids and with low-order discretization schemes. The flow both inside and outside the configuration is considered, although in aerodynamic applications the internal flow is fictitious. The no-slip boundary condition is imposed on solid body surfaces by eliminating the flow inside the configuration. Unlike other general Cartesian methods, no special logic is needed to locate the body surface. Vorticity can also be shed from smooth surfaces as well as from surfaces with sharp corners. Vorticity Confinement involves adding a simple term to the Navier-Stokes equations. When discretized and solved, the modified equations admit convecting, concentrated vortices that maintain a fixed size and do not spread, even in the presence of numerical diffusion. Numerical results are presented for flows around simple and complex configurations investigated with the present method. As an application, preliminary numerical flow solutions for a helicopter rotor blade combined with a realistic helicopter body are presented.
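The "simple term" added to the momentum equations in Steinhoff-style Vorticity Confinement is a body force of the form f = -epsilon (n_hat x omega), where n_hat points up the gradient of |omega|, so the force pushes vorticity back toward its local maxima and counteracts numerical diffusion. The 2D sketch below computes that term on a grid; it is an assumption-laden illustration (scalar vorticity, finite-difference gradients, arbitrary epsilon), not the paper's full solver, which couples such a term to the Navier-Stokes equations together with the surface function F.

```python
import numpy as np

def confinement_term(omega, eps, dx):
    """Vorticity-confinement body force f = -eps * (n_hat x omega) on a
    2D grid, where n_hat = grad|omega| / |grad|omega||. In 2D the
    vorticity is the scalar z-component, so the force lies in the plane:
    (nx, ny, 0) x (0, 0, w) = (ny*w, -nx*w, 0)."""
    mag = np.abs(omega)
    gy, gx = np.gradient(mag, dx)            # grad |omega| (rows = y)
    norm = np.sqrt(gx**2 + gy**2) + 1e-30    # avoid divide-by-zero
    nx, ny = gx / norm, gy / norm            # unit vector toward |omega| peaks
    fx = -eps * ny * omega
    fy = eps * nx * omega
    return fx, fy

# Demo on a Gaussian vortex patch (illustrative parameters).
n = 81
x = np.linspace(-2.0, 2.0, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
omega = np.exp(-(X**2 + Y**2))
fx, fy = confinement_term(omega, eps=0.5, dx=dx)
print(f"peak |f|: {np.hypot(fx, fy).max():.3f}")
```

Because n_hat is a unit vector, the force magnitude is simply eps * |omega| wherever the gradient of |omega| is nonzero, and it always points so as to compress the vortex rather than let it spread.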

Wenren, Yonghu

465

Computation of the diffracted field of a toothed occulter by the semi-infinite rectangle method.  

PubMed

To observe the solar corona, stray light in a coronagraph, arising primarily from the external occulter and diaphragm illuminated directly by the Sun, must be strongly suppressed. A toothed occulter and diaphragm can be used for this purpose because they diffract much less light into the central area than a circular disk does. This study develops a method for computing the light diffracted by a toothed occulter and diaphragm and uses it to obtain the optimum tooth shape. To demonstrate the method's feasibility, the diffracted fields of circular and rectangular disks are computed and compared with those calculated by a conventional method. PMID: 24322869
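The classical building block behind edge-based occulter computations is Fresnel diffraction at a straight edge, expressible in closed form through the Fresnel integrals. The sketch below is that textbook knife-edge pattern, not the paper's semi-infinite rectangle method; the unit-amplitude plane-wave normalization and the coordinate range are assumptions.

```python
import numpy as np
from scipy.special import fresnel

# Fresnel knife-edge diffraction: intensity behind a semi-infinite opaque
# screen, as a function of the dimensionless Fresnel coordinate v
# (v < 0 in the geometric shadow, v > 0 in the lit region):
#   I/I0 = 0.5 * [ (C(v) + 1/2)^2 + (S(v) + 1/2)^2 ]

v = np.linspace(-3, 5, 801)                 # dimensionless edge coordinate
S, C = fresnel(v)                           # Fresnel sine/cosine integrals
I = 0.5 * ((C + 0.5)**2 + (S + 0.5)**2)     # intensity / incident intensity

i0 = np.argmin(np.abs(v))                   # geometric shadow boundary, v = 0
print(f"I/I0 at the shadow edge: {I[i0]:.3f}")
```

The classic result that the intensity at the geometric shadow boundary is exactly one quarter of the incident intensity drops out at v = 0, and the intensity decays monotonically into the shadow while oscillating about the incident level in the lit region.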

Sun, Mingzhe; Zhang, Hongxin; Bu, Heyang; Wang, Xiaoxun; Ma, Junlin; Lu, Zhenwu

2013-10-01

466

Computing bivariate splines in scattered data fitting and the finite-element method  

NASA Astrophysics Data System (ADS)

A number of useful bivariate spline methods are global in nature, i.e., all of the coefficients of an approximating spline must be computed at one time. Typically this involves solving a system of linear equations. Examples include several well-known methods for fitting scattered data, such as the minimal energy, least-squares, and penalized least-squares methods. Finite-element
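The global character described above, all coefficients determined at once by one linear solve, can be sketched with a plain least-squares fit of scattered data. For brevity the example uses a bivariate polynomial basis rather than a true spline basis (the spline case differs only in the choice of basis functions); the sites, test surface, and basis degree are assumptions.

```python
import numpy as np

# Global least-squares fit of scattered data: build one design matrix over
# all sites and solve a single linear system for every coefficient at once.

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 200))            # scattered data sites
z = x**2 - x * y + 0.5 * y + 1                 # samples of a test surface

# Design matrix: all monomials x^i * y^j with i + j <= 2
A = np.column_stack([x**i * y**j
                     for i in range(3) for j in range(3 - i)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)   # one global solve

resid = np.max(np.abs(A @ coef - z))
print(f"max residual: {resid:.2e}")
```

Since the test surface lies in the span of the basis, the fit recovers it to rounding error; with noisy data the same global solve yields the least-squares surface, and the minimal energy and penalized least-squares methods mentioned above modify only the system being solved, not its global nature.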